The complexity and variety of language in policy and academic documents make the automatic classification of research papers against the United Nations Sustainable Development Goals (SDGs) challenging. This study presents a complete deep learning pipeline combining Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) architectures, using both pre-trained and contextual word embeddings to increase semantic understanding. Its primary aim is to improve the accuracy and interpretability of SDG text classification, thereby enabling more effective policy monitoring and research evaluation. Our approach comprises exhaustive preprocessing operations, including stemming, stopword removal, and class-imbalance handling, followed by document representation via Global Vectors (GloVe), Bidirectional Encoder Representations from Transformers (BERT), and FastText embeddings. The hybrid BiLSTM-CNN model is trained and evaluated on several benchmark datasets, including SDG-labeled corpora and relevant external datasets such as GoEmotions and Ohsumed, providing a complete assessment of the model's generalizability. Moreover, this study employs zero-shot prompt-based categorization using GPT-3.5/4 and Flan-T5 as a comprehensive benchmark against current approaches, and conducts comparative tests with leading models such as the Robustly Optimized BERT Pretraining Approach (RoBERTa) and Decoding-enhanced BERT with Disentangled Attention (DeBERTa). Experimental results show that the proposed hybrid model achieves competitive performance, with contextual embeddings greatly improving classification accuracy. The study explains model decision processes and improves transparency using interpretability techniques, including SHapley Additive exPlanations (SHAP) analysis and attention visualization.
These results emphasize the value of incorporating prompt engineering techniques alongside deep learning architectures for effective and interpretable SDG text categorization. With potential applications in broader policy analysis and scientific literature mining, this work offers a scalable and transparent solution for automating the evaluation of SDG research.
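The class-imbalance handling mentioned above can take the form of inverse-frequency class weights fed into the training loss. A minimal sketch, assuming a simple inverse-frequency scheme (the SDG label names below are hypothetical, and the exact weighting used in the study is not specified here):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Compute per-class weights inversely proportional to class frequency,
    so that rare classes contribute more to the training loss."""
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    # weight_c = total / (n_classes * count_c); a perfectly balanced
    # dataset yields a weight of 1.0 for every class
    return {c: total / (n_classes * counts[c]) for c in counts}

# Hypothetical SDG tags for illustration only
labels = ["SDG3", "SDG3", "SDG3", "SDG7", "SDG13", "SDG13"]
weights = inverse_frequency_weights(labels)
```

Such a weight dictionary can then be passed to a framework's weighted loss (e.g. as per-class loss weights) so the minority SDG classes are not drowned out during training.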
Diabetic retinopathy is an eye disease in diabetic patients caused by damage to the small blood vessels in the retina resulting from high and low blood sugar levels. Accurate detection and classification of diabetic retinopathy is an important task in computer-aided diagnosis, especially when planning diabetic retinopathy surgery. Therefore, this study aims to design an automated deep learning model that helps ophthalmologists detect and classify diabetic retinopathy severity from fundus images. In this work, a deep convolutional neural network (CNN) with transfer learning and fine-tuning has been proposed, using the pre-trained network known as Residual Network-50 (ResNet-50). The overall framework of the proposed
In this study, a traumatic spinal cord injury (TSCI) classification system is proposed using a convolutional neural network (CNN) technique with automatically learned features from electromyography (EMG) signals for a non-human primate (NHP) model. A comparison between the proposed classification system and a classical classification method (k-nearest neighbors, kNN) is also presented. Developing such an NHP model with a suitable assessment tool (i.e., classifier) is a crucial step in detecting the effect of TSCI using EMG, which is expected to be essential in the evaluation of the efficacy of new TSCI treatments. Intramuscular EMG data were collected from an agonist/antagonist tail muscle pair for the pre- and post-spinal cord lesi
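The classical kNN baseline referred to above can be sketched in a few lines. The toy 2-D feature vectors and the pre-/post-lesion labels below are illustrative stand-ins, not the actual EMG-derived features used in the study:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    examples under Euclidean distance. `train` is a list of
    (feature_vector, label) pairs."""
    by_dist = sorted(train, key=lambda fl: math.dist(fl[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D vectors standing in for EMG-derived features (hypothetical)
train = [((0.1, 0.2), "pre-lesion"), ((0.2, 0.1), "pre-lesion"),
         ((0.9, 0.8), "post-lesion"), ((0.8, 0.9), "post-lesion")]
label = knn_predict(train, (0.15, 0.15), k=3)
```

In contrast to the CNN, which learns its features automatically from the raw EMG signal, kNN operates on whatever feature vectors it is handed, which is precisely why the comparison is informative.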
During COVID-19, wearing a mask was globally mandated in various workplaces, departments, and offices. New deep learning convolutional neural network (CNN) based classifications were proposed to increase the validation accuracy of face mask detection. This work introduces a face mask model that is able to recognize whether a person is wearing a mask or not. The proposed model has two stages to detect and recognize the face mask: in the first stage, the Haar cascade detector is used to detect the face, while in the second stage, the proposed CNN model, built from scratch, is used as the classifier. The experiment was applied to the masked faces (MAFA) dataset with RGB images of size 160x160 pixels. The model achieve
Background: The pandemic crisis prompted the world to adopt unexpected approaches to continue life as normally as possible. The education sector, including professors, students, and the overall teaching system, has been particularly affected. Objective: This study seeks to evaluate the benefits, challenges, and strategies related to COVID-19 from the perspectives of college students, particularly those in higher education in Iraq. Method: The online survey questionnaire was distributed via Google Forms and specifically aimed at undergraduate dental students. Results: A total of 348 students participated in the survey. There was a significant correlation (P > 0.01) between student satisfaction with hybrid learning and their experi
The need for robotic systems has become an urgent necessity in various fields, especially in video surveillance and live broadcasting systems. The main goal of this work is to design and implement a rover robotic monitoring system based on a Raspberry Pi 4 Model B to control the overall system and display live video using a webcam (USB camera), as well as the You Only Look Once version 5 (YOLOv5) algorithm to detect, recognize, and display objects in real time. This deep learning algorithm is highly accurate and fast and is implemented with Python, OpenCV, and PyTorch code and the Common Objects in Context (COCO) 2020 dataset. This robot can move in all directions and in different places especially in
The topic of urban transformations has attracted the attention of researchers as one of the basic issues through which cities can be moved towards sustainability. It concerns a specific level of transformation according to a philosophical concept known as crossing. This article relies on a methodology that aims to find a new approach to urban transformation based on the crossing concept, which derives from philosophical entrances grounded in the concepts of being, process, becoming, and integration. Four levels have been identified for the crossing: normal, ascending, leap, and descending. Each of these levels has specific characteristics that distinguish it. The results showed that there is no descending
Deep Learning Techniques For Skull Stripping of Brain MR Images
Lung cancer is one of the leading causes of death worldwide and is considered among the most lethal diseases. Early detection and diagnosis are essential for lung cancer and will provide effective therapy and better outcomes for patients. In recent years, Deep Learning algorithms have demonstrated crucial promise for medical imaging analysis, especially lung cancer identification. This paper compares a number of Deep Learning technique-based models using Computed Tomography image datasets with traditional Convolutional Neural Networks and SqueezeNet models using X-ray data for the automated diagnosis of lung cancer. Although the simple details p
Convolutional neural networks (CNNs) are among the most widely used neural networks in various applications, including deep learning. In recent years, the continuing extension of CNNs into increasingly complicated domains has made their training process more difficult. Thus, researchers have adopted optimized hybrid algorithms to address this problem. In this work, a novel chaotic black hole algorithm-based approach was created for training a CNN to optimize its performance by avoiding entrapment in local minima. The logistic chaotic map was used to initialize the population instead of the uniform distribution. The proposed training algorithm was developed on a specific benchmark problem for optical character recog
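The logistic chaotic map initialization described above can be sketched as follows; the population size, dimensionality, seed value, and the [0, 1] search range are illustrative choices, not the study's actual settings:

```python
def logistic_map_population(pop_size, dim, r=4.0, seed=0.7):
    """Initialize a population in [0, 1] by iterating the logistic map
    x_{n+1} = r * x_n * (1 - x_n) instead of uniform random sampling.
    r = 4.0 gives fully chaotic behavior; the seed must avoid fixed
    points and short cycles (e.g. 0, 0.25, 0.5, 0.75, 1)."""
    x = seed
    population = []
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = r * x * (1.0 - x)  # one chaotic iteration per gene
            individual.append(x)
        population.append(individual)
    return population

pop = logistic_map_population(pop_size=5, dim=3)
```

Because the map is deterministic, the same seed always reproduces the same population, while its chaotic trajectory spreads candidate solutions over the search space more evenly than some pseudo-random draws, which is the rationale for using it in place of the uniform distribution.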