Background/Objectives: The purpose of this study was to classify Alzheimer's disease (AD) patients against Normal Control (NC) subjects using Magnetic Resonance Imaging (MRI). Methods/Statistical analysis: The performance evaluation is carried out on 346 MR images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. A Deep Belief Network (DBN) is used as the classifier. The network is trained on a sample training set, and the resulting weights are then used to test the system's recognition capability. Findings: As a result, this paper presents a novel automated classification system for AD determination. The experiments carried out show that the suggested method, using Gray Level Co-occurrence Matrix (GLCM) features with the DBN classifier, achieves 98.26% accuracy when the two classes are tested. Improvements/Applications: AD is a neurological condition affecting the brain and causing dementia that may affect the mind and memory. The disease indirectly impacts more than 15 million relatives, companions and guardians. The results of the present research are expected to help specialists in the decision-making process.
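As an illustration of the GLCM feature-extraction step described above, the following is a minimal sketch assuming scikit-image (version 0.19 or later, where the functions are named graycomatrix/graycoprops) and a hypothetical 2-D slice; the quantization level and texture properties are illustrative choices, not the paper's reported settings.

```python
# Hedged sketch: GLCM texture features from a 2-D MRI slice (not the paper's exact pipeline).
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(mri_slice: np.ndarray, levels: int = 64) -> np.ndarray:
    """Return a small GLCM feature vector (contrast, correlation, energy, homogeneity)."""
    # Quantize intensities to `levels` gray levels so the co-occurrence matrix stays small.
    img = np.asarray(mri_slice, dtype=np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)
    quantized = np.clip((img * (levels - 1)).astype(np.uint8), 0, levels - 1)

    # Co-occurrence matrices for 4 directions at distance 1, symmetric and normalized.
    glcm = graycomatrix(quantized, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)

    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Example with a random stand-in slice; real use would pass a preprocessed ADNI slice.
features = glcm_features(np.random.randint(0, 255, (128, 128)))
print(features.shape)  # (16,) -> 4 properties x 4 angles
```

The resulting feature vectors would then be fed to the classifier (here, a DBN) for training and testing.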
To determine the relationship between celiac disease and reproductive disorders, twenty-two women with recurrent spontaneous abortion (aged 18-35 years) were investigated over the period 2017/11/1 – 2018/2/1 and compared with twenty-two apparently healthy women. For all studied groups, anti-tissue transglutaminase (anti-tTG) IgA and IgG antibodies were measured by the Enzyme-Linked Immunosorbent Assay (ELISA) technique. There was a highly significant difference (P < 0.01) in the concentration of anti-tTG IgA and IgG antibodies compared to the control group, while there was no significant difference (P > 0.05) in the concentration of anti-tTG IgA according to age group, and there was a significant difference (P < 0.05) in the concentration of anti-tTG I
Objective(s): To determine the quality of life for adults with chronic obstructive pulmonary disease.
Methodology: A descriptive study was carried out on (80) patients with chronic obstructive pulmonary disease from December 2008 through October 2009 with special inclusion criteria (adult patients aged 18 years and above, excluding patients who suffer from complications related to the disease, from psychological problems, and from other chronic illnesses). The data were analyzed through the application of descriptive and inferential data analysis approaches. Result: The study indicated that the determination of QoL for COPD patients depended on the level of effect. The grades according to R.S are: "high" effect of disease in
The complexity and variety of language found in policy and academic documents make the automatic classification of research papers according to the United Nations Sustainable Development Goals (SDGs) difficult. Using both pre-trained and contextual word embeddings to increase semantic understanding, this study presents a complete deep learning pipeline combining Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) architectures, which aims primarily to improve the comprehensibility and accuracy of SDG text classification, thereby enabling more effective policy monitoring and research evaluation. Documents are represented via Global Vectors (GloVe) and Bidirectional Encoder Representations from Transformers (BERT)
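As a sketch of the kind of BiLSTM+CNN pipeline described, the following Keras snippet assumes a hypothetical pre-built GloVe embedding matrix (embedding_matrix), a fixed sequence length, and 17 SDG classes; the layer sizes are illustrative assumptions, not the study's reported configuration.

```python
# Hedged sketch: BiLSTM + CNN text classifier with pre-trained embeddings (illustrative sizes).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, max_len, embed_dim, num_sdgs = 20000, 300, 100, 17
# Stand-in matrix; in practice it is built from the pre-trained GloVe vectors.
embedding_matrix = np.random.rand(vocab_size, embed_dim).astype("float32")

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, embed_dim,
                     embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
                     trainable=False),                               # frozen pre-trained embeddings
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),   # sequence context in both directions
    layers.Conv1D(128, kernel_size=5, activation="relu"),            # local n-gram patterns
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(num_sdgs, activation="softmax"),                    # one probability per SDG
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

Contextual (BERT-style) representations would instead be produced by a transformer encoder and fed to the same downstream classifier.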
A novel median filter based on a crow optimization algorithm (OMF) is suggested to reduce random salt-and-pepper noise and improve the quality of RGB-colored and gray images. The fundamental idea of the approach is that the crow optimization algorithm first detects noise pixels and then replaces them with an optimum median value according to a criterion that maximizes a fitness function. Finally, the standard measures peak signal-to-noise ratio (PSNR), structural similarity, absolute square error and mean square error have been used to test the performance of the suggested filters (the original and the improved median filter) used to remove noise from images. The simulation is carried out in MATLAB R2019b, and the results
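The crow-optimization search itself is not reproduced here; as a simplified sketch of the underlying idea (detect salt-and-pepper pixels, replace them with a local median) together with the MSE/PSNR quality measures, the following assumes NumPy and an 8-bit grayscale image.

```python
# Hedged sketch: detect salt-and-pepper pixels, replace them with a local median, and score
# restoration quality with MSE/PSNR. A simplification, not the paper's crow-optimization (OMF) method.
import numpy as np

def denoise_salt_pepper(noisy: np.ndarray, window: int = 3) -> np.ndarray:
    """Replace extreme-valued (0 or 255) pixels with the median of their neighborhood."""
    out = noisy.copy()
    pad = window // 2
    padded = np.pad(noisy, pad, mode="edge")
    noise_rows, noise_cols = np.where((noisy == 0) | (noisy == 255))  # candidate noise pixels
    for r, c in zip(noise_rows, noise_cols):
        patch = padded[r:r + window, c:c + window]  # window centered on (r, c)
        out[r, c] = np.median(patch)
    return out

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    e = mse(reference, restored)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

# Toy usage with a synthetic image; real experiments would use the paper's test images.
clean = np.full((64, 64), 128, dtype=np.uint8)
noisy = clean.copy()
mask = np.random.rand(64, 64) < 0.1
noisy[mask] = np.random.choice([0, 255], size=mask.sum())
print("PSNR after filtering:", round(psnr(clean, denoise_salt_pepper(noisy)), 2), "dB")
```

In the paper's method, the choice of which pixels to treat and which median to substitute is driven by the crow search fitness function rather than the fixed rule used above.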
Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. As the dependence upon computers and computer networks grows, the need for user authentication has increased. A user's claimed identity can be verified by one of several methods. One of the most popular of these methods is based on something the user knows, such as a password or Personal Identification Number (PIN). Biometrics is the science and technology of authentication by identifying a living individual's physiological or behavioral attributes. Keystroke authentication is a new behavioral access control method that identifies legitimate users via their typing behavior. The objective of this paper is to provide user
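To illustrate the typing-behavior signal that keystroke authentication relies on, the sketch below (not this paper's system) turns hypothetical key press/release timestamps into the dwell-time and flight-time features commonly used in keystroke dynamics.

```python
# Hedged sketch: dwell/flight time features from key events (illustrative, not the paper's system).
from typing import List, Tuple

# Hypothetical event format: (key, press_time_ms, release_time_ms), ordered by press time.
Event = Tuple[str, float, float]

def keystroke_features(events: List[Event]) -> dict:
    """Compute per-key dwell times and between-key flight times."""
    dwell = [release - press for _, press, release in events]                    # key held down
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]   # release -> next press
    return {
        "mean_dwell_ms": sum(dwell) / len(dwell),
        "mean_flight_ms": sum(flight) / len(flight) if flight else 0.0,
        "dwell_times": dwell,
        "flight_times": flight,
    }

# Toy usage: timings recorded while a user types "pin".
sample = [("p", 0.0, 90.0), ("i", 150.0, 230.0), ("n", 300.0, 370.0)]
print(keystroke_features(sample))
```

A verifier would compare such feature vectors against a stored typing profile to accept or reject the claimed identity.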
Iris research is focused on developing techniques for identifying and locating relevant biometric features, accurate segmentation and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces the effectiveness of the system when used as a real-time system. This paper introduces a novel parameterized technique for iris segmentation. The method is based on a number of steps, starting with converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and then parameterizing the iris location, resulting in an accurate segmentation of the iris from the origin
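As an illustration of the bit-plane representation step mentioned above (not the paper's full parameterized segmentation), the sketch below slices a grayscale eye image into its eight bit planes with NumPy and keeps the most significant ones.

```python
# Hedged sketch: bit-plane slicing of a grayscale image (the representation step only).
import numpy as np

def bit_planes(gray: np.ndarray) -> np.ndarray:
    """Return an array of shape (8, H, W) with bit plane 0 (LSB) .. 7 (MSB) as 0/1 maps."""
    gray = gray.astype(np.uint8)
    return np.stack([(gray >> b) & 1 for b in range(8)])

# Keep only the most significant planes, which carry most of the structural information.
eye = np.random.randint(0, 256, (120, 160), dtype=np.uint8)  # stand-in for a real eye image
planes = bit_planes(eye)
significant = planes[4:]                  # planes 4..7 (the 4 most significant bits)
print(planes.shape, significant.shape)    # (8, 120, 160) (4, 120, 160)
```

The subsequent iris-location parameterization described in the abstract would operate on these significant planes.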
Today, with the increasing use of social media, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured and messy text, so it is challenging to find topics in them. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from large documents such as articles and books, and they are often less effective when applied to short text content like tweets. Luckily, Twitter has many features that represent the interaction between users, and tweets carry rich user-generated hashtags as keywords. In this paper, we exploit the hashtag feature to improve the topics learned
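One common way to exploit hashtags with LDA is hashtag pooling, where tweets sharing a hashtag are merged into a single pseudo-document before training; the sketch below shows this idea with gensim on toy tweets and is only an illustration, not necessarily this paper's exact scheme.

```python
# Hedged sketch: hashtag pooling + LDA with gensim (toy data, simplistic whitespace tokenization).
from collections import defaultdict
from gensim import corpora, models

tweets = [
    "new vaccine trial results look promising #health",
    "hospital staffing shortages continue #health",
    "stock markets rally after rate decision #finance",
    "central bank signals another hike #finance",
]

# Pool tweets by hashtag into pseudo-documents so LDA sees longer texts.
pools = defaultdict(list)
for tweet in tweets:
    tokens = tweet.lower().split()
    tags = [t for t in tokens if t.startswith("#")] or ["#none"]
    words = [t for t in tokens if not t.startswith("#")]
    for tag in tags:
        pools[tag].extend(words)

docs = list(pools.values())
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=42)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```

Pooling by hashtag gives each pseudo-document a coherent theme, which typically yields more interpretable topics than running LDA on individual tweets.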