The complexity and variety of language in policy and academic documents make the automatic classification of research papers against the United Nations Sustainable Development Goals (SDGs) challenging. This study presents a complete deep learning pipeline combining Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) architectures, using both pre-trained and contextual word embeddings to strengthen semantic understanding, with the primary aim of improving the accuracy and comprehensibility of SDG text classification and thereby enabling more effective policy monitoring and research evaluation. Our approach comprises exhaustive preprocessing, including stemming, stopword removal, and measures to address class imbalance, followed by document representation via Global Vector (GloVe), Bidirectional Encoder Representations from Transformers (BERT), and FastText embeddings. The hybrid BiLSTM-CNN model is trained and evaluated on several benchmark datasets, including SDG-labeled corpora and relevant external datasets such as GoEmotion and Ohsumed, to provide a complete assessment of the model's generalizability. Moreover, this study utilizes zero-shot prompt-based categorization with GPT-3.5/4 and Flan-T5, providing a comprehensive benchmark against current approaches, and conducts comparative experiments with leading models such as the Robustly Optimized BERT Pretraining Approach (RoBERTa) and Decoding-enhanced BERT with Disentangled Attention (DeBERTa). Experimental results show that the proposed hybrid model achieves competitive performance, with contextual embeddings greatly improving classification accuracy. The study explains model decision processes and improves transparency using interpretability techniques, including SHapley Additive exPlanations (SHAP) analysis and attention visualization. These results emphasize the value of incorporating prompt engineering techniques alongside deep learning architectures for effective and interpretable SDG text categorization. With potential impact on broader applications in policy analysis and scientific literature mining, this work offers a scalable and transparent solution for automating the evaluation of SDG research.
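A minimal sketch of how such a hybrid BiLSTM-CNN text classifier could be assembled in Keras is given below; the vocabulary size, sequence length, layer widths, and 17-class SDG output are illustrative assumptions, not the configuration reported in the abstract.

```python
# Minimal sketch of a hybrid BiLSTM-CNN text classifier (Keras).
# All hyperparameters below are illustrative assumptions.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
EMBED_DIM = 300      # matches GloVe-300d, a common choice
MAX_LEN = 256        # assumed maximum document length
NUM_CLASSES = 17     # one class per SDG

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
# Embedding layer; in practice its weights would be initialized from
# pre-trained GloVe/FastText vectors, or replaced with BERT outputs.
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)
# BiLSTM captures long-range, bidirectional context across the document.
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
# 1-D convolution extracts local n-gram features from the LSTM states.
x = layers.Conv1D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Stacking the CNN on top of the BiLSTM, rather than in parallel, is one common design choice: the convolution then pools over contextualized states instead of raw embeddings.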
Saturday, May 13, 2023, was a glorious day engraved in the history of Al-Kindy College Medical Journal: the day the journal was indexed in the Scopus database. The journal travelled a strenuous journey to reach that achievement.
Image retrieval is used to search for images in an image database. In this paper, content-based image retrieval (CBIR) using four feature extraction techniques has been achieved. The four techniques are the color histogram features technique, the properties features technique, the gray-level co-occurrence matrix (GLCM) statistical features technique, and a hybrid technique. The features are extracted from the database images and the query (test) images in order to compute the similarity measure. Similarity-based matching is very important in CBIR, so three types of similarity measure are used: normalized Mahalanobis distance, Euclidean distance, and Manhattan distance. A comparison between them has been implemented. From the results, it is concluded …
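For illustration, a small NumPy sketch of the three similarity measures follows; the toy feature database and the per-dimension normalization of the Mahalanobis distance are assumptions, since the paper's exact formulation is not shown in this excerpt.

```python
# Sketch of the three similarity measures compared for feature matching
# in CBIR; the feature vectors here are random placeholders.
import numpy as np

def euclidean(q, d):
    return np.sqrt(np.sum((q - d) ** 2))

def manhattan(q, d):
    return np.sum(np.abs(q - d))

def normalized_mahalanobis(q, d, cov):
    # Mahalanobis distance using the covariance of the database features;
    # dividing by the feature dimension is one common normalization
    # (an assumption -- the paper's exact normalization is not shown).
    diff = q - d
    return np.sqrt(diff @ np.linalg.inv(cov) @ diff) / len(q)

# Toy example: one query vector against a small feature database.
db = np.random.rand(100, 8)          # 100 database images, 8 features each
query = np.random.rand(8)
cov = np.cov(db, rowvar=False)

scores = [normalized_mahalanobis(query, row, cov) for row in db]
best_match = int(np.argmin(scores))  # most similar database image
print(best_match)
```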
Genome sequencing has significantly improved the understanding of HIV and AIDS by providing accurate data on viral transmission, evolution, and drug-resistance processes. Deep learning algorithms, such as the Fine-Tuned Gradient Descent Fused Multi-Kernel Convolutional Neural Network (FGD-MCNN), can predict strain behaviour and evaluate complex patterns. Using genotypic-phenotypic data obtained from the Stanford University HIV Drug Resistance Database, the FGD-MCNN was applied to three files covering different classes of antiretroviral medications for HIV prediction and drug resistance: PIs, NRTIs, and NNRTIs. FGD-MCNNs classify genetic sequences as vulnerable or resistant to antiretroviral drugs by analyzing chromosomal information and id…
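The FGD-MCNN architecture itself is not specified in this excerpt; the hedged sketch below only illustrates the general multi-kernel CNN idea on one-hot-encoded sequences, with the 99-residue protease length (relevant to the PI file) and all layer sizes chosen purely for illustration.

```python
# Illustrative multi-kernel 1-D CNN labelling a genotype sequence as
# drug-resistant or susceptible. A generic sketch of the parallel-kernel
# idea, not the FGD-MCNN architecture itself.
from tensorflow.keras import layers, models

SEQ_LEN = 99      # HIV-1 protease has 99 residues (the PI file)
NUM_AMINO = 21    # 20 amino acids + unknown, one-hot encoded

inputs = layers.Input(shape=(SEQ_LEN, NUM_AMINO))
# Several kernel widths in parallel ("multi-kernel") pick up sequence
# motifs of different lengths around resistance-associated positions.
branches = []
for k in (3, 5, 7):
    b = layers.Conv1D(64, kernel_size=k, padding="same",
                      activation="relu")(inputs)
    b = layers.GlobalMaxPooling1D()(b)
    branches.append(b)
x = layers.Concatenate()(branches)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # resistant vs. susceptible

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```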
In computer-based applications, there is a need for simple, low-cost devices for user authentication. Biometric authentication methods, namely keystroke dynamics, are increasingly being used to strengthen the common knowledge-based method (for example, a password) effectively and cheaply for many types of applications. Due to the semi-independent nature of typing behavior, it is difficult to masquerade, making it useful as a biometric. In this paper, the C4.5 approach is used to classify a user as an authenticated user or an impostor by combining unigraph features (namely dwell time (DT) and flight time (FT)) and digraph features (namely up-up time (UUT) and down-down time (DDT)). The results show that DT enhances the performance of the digraph features by i…
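The sketch below shows one plausible way these four timing features could be computed from raw key-down/key-up events; the KeyEvent structure and the timestamps are invented for illustration.

```python
# Deriving unigraph (DT, FT) and digraph (UUT, DDT) timing features from
# raw key events; field names and data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class KeyEvent:
    key: str
    down: float   # key-press timestamp (seconds)
    up: float     # key-release timestamp (seconds)

def timing_features(events):
    """Return DT/FT/UUT/DDT for each consecutive key pair."""
    feats = []
    for prev, curr in zip(events, events[1:]):
        feats.append({
            "dwell_time": prev.up - prev.down,        # DT: hold duration
            "flight_time": curr.down - prev.up,       # FT: release-to-press
            "up_up_time": curr.up - prev.up,          # UUT
            "down_down_time": curr.down - prev.down,  # DDT
        })
    return feats

# Toy sample: the user types "abc".
sample = [KeyEvent("a", 0.00, 0.09),
          KeyEvent("b", 0.21, 0.30),
          KeyEvent("c", 0.45, 0.52)]
for row in timing_features(sample):
    print(row)
```

Feature vectors like these would then be fed to the C4.5 classifier (or an entropy-based decision tree as an approximation) to separate genuine users from impostors.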
XML is being incorporated into the foundation of E-business data applications. This paper addresses the problem of the freeform information stored in any organization and shows how XML, using this new approach, makes the search operation efficient and less time-consuming. This paper introduces a new solution and methodology developed to capture and manage such unstructured freeform information (multi information), depending on the use of XML schema technologies, the neural network idea, and an object-oriented relational database, in order to provide a practical solution for efficiently managing a multi freeform information system.
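As a rough illustration of the idea, the hypothetical sketch below stores freeform records as XML and runs a field-aware keyword search over them; the element names and schema are invented for illustration and are not the paper's actual schema.

```python
# Hypothetical example: freeform "multi information" captured as XML,
# then searched by field rather than by scanning raw documents.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<records>
  <record id="1" type="memo">
    <text>Quarterly supplier review notes.</text>
    <tags>procurement;review</tags>
  </record>
  <record id="2" type="email">
    <text>Customer complaint about delayed shipment.</text>
    <tags>logistics;complaint</tags>
  </record>
</records>
""")

def search(root, keyword):
    # Return ids of records whose text or tags mention the keyword;
    # structuring freeform text as XML makes targeted, field-aware
    # search cheap compared with scanning unstructured documents.
    hits = []
    for rec in root.iter("record"):
        body = (rec.findtext("text") or "") + " " + (rec.findtext("tags") or "")
        if keyword.lower() in body.lower():
            hits.append(rec.get("id"))
    return hits

print(search(doc, "complaint"))  # -> ['2']
```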
Malaysia's growing population and industrialisation have increased solid waste accumulation in landfills, leading to a rise in leachate production. Leachate, a highly contaminated liquid from landfills, poses environmental risks and affects water quality. Conventional leachate treatments are costly and time-consuming due to the need for additional chemicals; the electrocoagulation process could therefore be used as an alternative method. Electrocoagulation is an electrochemical water treatment method that eliminates impurities by applying an electric current. In the present study, the optimisation of contaminant removal was investigated using Response Surface Methodology. Three parameters were considered for optimisation: the curr…
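A minimal sketch of the Response Surface Methodology step follows: fit a second-order polynomial model to experimental runs and locate the predicted optimum. The factor names and all run data below are invented placeholders, since the abstract is truncated before the three parameters are fully listed.

```python
# Sketch of the RSM idea: fit a quadratic response surface to
# experimental runs, then search it for the best operating point.
# Factors and data are invented placeholders, not the study's runs.
import numpy as np

# Each run: (current density, treatment time) -> contaminant removal (%)
X = np.array([[10, 30], [10, 60], [20, 30], [20, 60], [15, 45],
              [15, 45], [15, 45], [25, 45], [5, 45], [15, 75], [15, 15]],
             dtype=float)
y = np.array([41, 55, 58, 70, 66, 64, 67, 60, 35, 62, 38], dtype=float)

def quad_design(X):
    x1, x2 = X[:, 0], X[:, 1]
    # Full quadratic model: intercept, linear, interaction, squared terms.
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)

# Evaluate the fitted surface on a grid and pick the best settings.
g1, g2 = np.meshgrid(np.linspace(5, 25, 50), np.linspace(15, 75, 50))
grid = np.column_stack([g1.ravel(), g2.ravel()])
pred = quad_design(grid) @ coef
best = grid[np.argmax(pred)]
print("predicted optimum (current density, time):", best)
```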
A field study was conducted on a sample of the public in Baghdad to study the audience's exposure to the television promotion of pharmaceutical products and their attitudes toward it, in order to determine the rate of exposure of the public to the television promotion of pharmaceutical products according to the theory of uses and gratifications, and to determine the public's attitudes towards television promotion of pharmaceutical products. A questionnaire of (25) questions was distributed to a sample of 150 people. The statistical program SPSS was used to process the data and to calculate frequencies, percentages, and correlation coefficients. The research reached several results; most importantly, television promotion is well receiv…
This paper proposes a new approach, Clustering Ultrasound images using the Hybrid Filter (CUHF), to determine the gender of the fetus in the early stages. The possible advantage of CUHF is that a better result can be achieved where fuzzy c-means (FCM) alone returns incorrect clusters. The proposed approach is conducted in two steps. Firstly, a preprocessing step decreases the noise present in the ultrasound images by applying the filters: Local Binary Pattern (LBP), median, median and discrete wavelet transform (DWT), (median, DWT & LBP), and median & Laplacian (ML). Secondly, Fuzzy C-Means (FCM) is implemented to cluster the images resulting from the first step. Amongst those filters, median & Laplacian recorded the best accuracy. Our experimental evaluation on re…
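A compact sketch of the best-reported pipeline (median filtering, Laplacian edge enhancement, then fuzzy c-means) is shown below; the synthetic image, the two-cluster setting, and the plain-NumPy FCM implementation are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch of median + Laplacian preprocessing followed by fuzzy c-means;
# the random "image" and all parameters are illustrative assumptions.
import numpy as np
from scipy import ndimage

def fcm(pixels, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain NumPy fuzzy c-means on a 1-D array of pixel intensities."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, pixels.size))
    u /= u.sum(axis=0)                        # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um @ pixels) / um.sum(axis=1)
        dist = np.abs(pixels[None, :] - centers[:, None]) + 1e-12
        new_u = 1.0 / (dist ** (2 / (m - 1)))
        new_u /= new_u.sum(axis=0)            # standard FCM update
        if np.abs(new_u - u).max() < tol:
            break
        u = new_u
    return centers, u

image = np.random.default_rng(1).random((64, 64))   # stand-in ultrasound
# Step 1: denoise with a median filter, then sharpen edges by
# subtracting the Laplacian (one reading of "median & Laplacian").
smoothed = ndimage.median_filter(image, size=3)
enhanced = smoothed - ndimage.laplace(smoothed)
# Step 2: FCM gives each pixel a soft membership in each cluster;
# a hard segmentation is taken from the maximum membership.
centers, u = fcm(enhanced.ravel(), c=2)
labels = u.argmax(axis=0).reshape(image.shape)
```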