Background/Objectives: The purpose of this study was to classify Alzheimer’s disease (AD) patients against Normal Control (NC) subjects using Magnetic Resonance Imaging (MRI). Methods/Statistical analysis: The performance evaluation is carried out on 346 MR images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. A Deep Belief Network (DBN) classifier is used for the classification task. The network is trained using a sample training set, and the weights produced are then used to check the system’s recognition capability. Findings: As a result, this paper presents a novel automated classification system for AD determination. The experiments carried out show that the use of Gray Level Co-occurrence Matrix (GLCM) features with the DBN classifier provides 98.26% accuracy when the two specific classes were tested. Improvements/Applications: AD is a neurological condition affecting the brain and causing dementia that may impair the mind and memory. The disease indirectly impacts more than 15 million relatives, companions, and caregivers. The results of the present research are expected to help specialists in the decision-making process.
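The GLCM texture features mentioned above can be illustrated with a minimal sketch. The pixel offset, number of gray levels, and the choice of Haralick descriptors (contrast, energy) below are assumptions for illustration, not the exact configuration reported in the study:

```python
from itertools import product

def glcm(image, dx=1, dy=0, levels=4):
    """Count co-occurrences of gray levels at offset (dx, dy), then normalize."""
    rows, cols = len(image), len(image[0])
    counts = [[0.0] * levels for _ in range(levels)]
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in counts]

def glcm_features(p):
    """Two classic Haralick descriptors computed from a normalized GLCM."""
    levels = len(p)
    contrast = sum((i - j) ** 2 * p[i][j] for i, j in product(range(levels), repeat=2))
    energy = sum(p[i][j] ** 2 for i, j in product(range(levels), repeat=2))
    return contrast, energy

# Hypothetical 4x4 image patch quantized to 4 gray levels
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
P = glcm(patch)
contrast, energy = glcm_features(P)
```

In practice such descriptors would be computed at several offsets and angles and concatenated into the feature vector fed to the DBN.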
The dynamic development of computer and software technology in recent years has been accompanied by the expansion and widespread implementation of artificial intelligence (AI)-based methods in many aspects of human life. A prominent field where rapid progress has been observed is high-throughput methods in biology, which generate large amounts of data that need to be processed and analyzed. Therefore, AI methods are increasingly applied in the biomedical field, among others for RNA-protein binding site prediction, DNA sequence function prediction, protein-protein interaction prediction, and biomedical image classification. Stem cells are widely used in biomedical research, e.g., in leukemia or other disease studies. Our proposed approach of
Olfactory impairment and abnormal frontal EEG oscillations are recognized as early markers of Alzheimer’s disease (AD). Using a publicly available olfactory EEG dataset of 35 subjects spanning normal cognition, amnestic mild cognitive impairment (aMCI), and AD, each with MMSE scores and demographics, stimulus-locked epochs from four electrodes (Fp1, Fz, Cz, Pz) were processed with wavelet-based time–frequency analysis. Band-limited power ratios (delta, theta, alpha, beta) were computed as log-transformed post-odor/baseline values and aggregated to subject-level features. Statistical analyses revealed graded attenuation of odor-evoked frontal (Fp1) band-power ratios across groups, with significant differences in several band–od
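The log-transformed post-odor/baseline band-power ratio described above can be sketched as follows. The study uses wavelet-based time–frequency analysis; this illustration substitutes a plain DFT for the spectral estimate, and the sampling rate, band edges, and signals are hypothetical:

```python
import math, cmath

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi) Hz via a naive DFT (illustrative; the paper uses wavelets)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            coef = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                       for t, x in enumerate(signal))
            power += abs(coef) ** 2 / n
    return power

def log_band_ratio(post, baseline, fs, band):
    """Log-transformed post-odor / baseline band-power ratio."""
    return math.log10(band_power(post, fs, *band) / band_power(baseline, fs, *band))

fs = 128.0
t = [i / fs for i in range(256)]
baseline = [math.sin(2 * math.pi * 6.0 * x) for x in t]    # theta-band tone
post = [0.5 * math.sin(2 * math.pi * 6.0 * x) for x in t]  # attenuated response
theta = (4.0, 8.0)
ratio = log_band_ratio(post, baseline, fs, theta)          # negative: attenuation
```

A negative ratio indicates odor-evoked attenuation relative to baseline, the graded effect the abstract reports at Fp1.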
DeepFake is a concern for celebrities and the public alike because it is simple to create. DeepFake images, especially high-quality ones, are difficult to detect by humans, local descriptors, or current approaches. On the other hand, detecting manipulation in video is more accessible than in a single image, and many state-of-the-art systems address it. Moreover, video manipulation detection depends entirely on detection through its constituent images. Many studies have addressed DeepFake detection in images, but they involved complex mathematical calculations in the preprocessing steps and many limitations, including that the face must be frontal, the eyes have to be open, and the mouth should be open with teeth visible, etc. Also, the accuracy of their counterfeit detectio
The study considers the optical classification of cervical lymph node cells and is based on research into the development of a Computer-Aided Diagnosis (CAD) system to detect malignancy in disease cases. We consider two sets of features: the first is statistical features, including Mode, Median, Mean, Standard Deviation, and Maximum Probability Density; the second consists of Euclidean geometrical features such as Object Perimeter, Area, and Infill Coefficient. The segmentation method is based on following the cell and its background regions as ranges between the minimum and maximum pixel values. The decision-making approach is based on applying a Minimum Dista
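The statistical features and minimum-distance decision rule described above can be sketched as follows. The per-class mean vectors and pixel values are hypothetical, and only a subset of the named features is shown:

```python
import math
from statistics import mean, median, mode, stdev

def feature_vector(pixels):
    """Subset of the statistical features named above (illustrative)."""
    return [mode(pixels), median(pixels), mean(pixels), stdev(pixels)]

def minimum_distance(sample, class_means):
    """Assign the class whose mean feature vector is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(class_means, key=lambda label: dist(sample, class_means[label]))

class_means = {
    "benign":    [40, 42, 41.0, 5.0],   # hypothetical per-class mean features
    "malignant": [90, 88, 89.0, 12.0],
}
sample = feature_vector([85, 90, 90, 95, 88, 92, 87, 91])
label = minimum_distance(sample, class_means)
```

In a full CAD pipeline, the geometrical features (perimeter, area, infill coefficient) would be appended to the same vector before the distance comparison.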
Audio classification is the process of classifying different audio types according to their contents. It is implemented in a large variety of real-world problems; all classification applications treat the target subjects as a specific type of audio, and hence there is a variety of audio types, and every type has to be treated carefully according to its significant properties. Feature extraction is an important process for audio classification. This work introduces several sets of features according to the type; two types of audio (datasets) were studied. Two different feature sets are proposed: (i) a first-order gradient feature vector, and (ii) a local roughness feature vector. The experiments showed that the results are competitive to
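A first-order gradient feature vector of the kind named above can be sketched as successive differences of the waveform summarized into a few statistics. The exact summary statistics used in the paper are not specified; those below are assumptions for illustration:

```python
from statistics import mean, stdev

def first_order_gradient_features(samples):
    """Summarize the first-order gradient (successive differences) of a waveform."""
    grad = [b - a for a, b in zip(samples, samples[1:])]
    abs_grad = [abs(g) for g in grad]
    return {
        "mean_abs_gradient": mean(abs_grad),
        "std_gradient": stdev(grad),
        # sign changes in the gradient, a crude roughness indicator
        "zero_crossings": sum(1 for a, b in zip(grad, grad[1:]) if a * b < 0),
    }

smooth = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # hypothetical slowly varying audio
rough  = [0.0, 0.5, -0.2, 0.6, -0.4, 0.7, -0.1]  # hypothetical rapidly varying audio
f_smooth = first_order_gradient_features(smooth)
f_rough  = first_order_gradient_features(rough)
```

The sketch shows how gradient statistics separate smooth from rough signals, which is the intuition behind both proposed feature sets.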
... Show MoreIn this paper, integrated quantum neural network (QNN), which is a class of feedforward
neural networks (FFNN’s), is performed through emerging quantum computing (QC) with artificial neural network(ANN) classifier. It is used in data classification technique, and here iris flower data is used as a classification signals. For this purpose independent component analysis (ICA) is used as a feature extraction technique after normalization of these signals, the architecture of (QNN’s) has inherently built in fuzzy, hidden units of these networks (QNN’s) to develop quantized representations of sample information provided by the training data set in various graded levels of certainty. Experimental results presented here show that
The continuous advancement in the use of the IoT has greatly transformed industries, though at the same time it has made IoT networks vulnerable to highly advanced cybercrimes. Traditional security measures for IoT have several limitations; the protection of distributed and adaptive IoT systems requires new approaches. This research presents novel threat intelligence for IoT networks based on deep learning, which maintains compliance with IEEE standards. The goal of the study is to interweave artificial intelligence with standardization frameworks and thus improve the identification, protection against, and reduction of cyber threats impacting IoT environments. The study is systematic and begins by examining IoT-specific thre
Fetal heart rate (FHR) signal processing based on Artificial Neural Networks (ANN), Fuzzy Logic (FL), and the frequency-domain Discrete Wavelet Transform (DWT) was analyzed in order to perform automatic analysis using personal computers. Cardiotocography (CTG) is a primary biophysical method of fetal monitoring. The assessment of the printed CTG traces was based on visual analysis of patterns describing the variability of the fetal heart rate signal. Fetal heart rate data of pregnant women between 38 and 40 weeks of gestation were studied. The first stage in the system was to convert the CTG tracing into a digital series so that it can be analyzed, while in the second stage, the FHR time series was t
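The DWT step applied to the FHR series can be sketched with a single level of the Haar wavelet. The wavelet family, decomposition depth, and sample values below are assumptions; the abstract does not specify them:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    approximation (low-pass) and detail (high-pass) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

fhr = [140, 142, 141, 143, 150, 138, 139, 141]  # hypothetical FHR samples (bpm)
approx, detail = haar_dwt(fhr)
```

The approximation coefficients track the baseline heart rate while the detail coefficients capture beat-to-beat variability, which is the quantity the visual CTG assessment tries to judge.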
The fast evolution of cyberattacks in the Internet of Things (IoT) area presents new security challenges concerning Zero-Day (ZD) attacks, due to the growth in both the number and the diversity of new cyberattacks. Furthermore, Intrusion Detection Systems (IDSs) relying on historical or signature-based datasets often perform poorly in ZD detection. A new technique for detecting zero-day (ZD) attacks in IoT networks based on Conventional Spiking Neural Networks (CSNN), termed ZD-CSNN, is proposed. The model comprises three key levels: (1) Data Pre-processing, at this level a thorough cleaning process is applied to the CIC IoT Dataset 2023, which contains both malicious and t
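The data pre-processing level described above can be sketched generically. The actual cleaning steps applied to the CIC IoT Dataset 2023 are not detailed in the abstract; this sketch assumes a typical pipeline of duplicate removal, missing-value filtering, and min-max normalization on hypothetical numeric rows:

```python
def preprocess(rows):
    """Minimal cleaning sketch: drop duplicate rows and rows with missing values,
    then min-max normalize each numeric column. Column layout is hypothetical."""
    seen, clean = set(), []
    for row in rows:
        key = tuple(row)
        if key in seen or any(v is None for v in row):
            continue
        seen.add(key)
        clean.append(list(row))
    if not clean:
        return []
    cols = list(zip(*clean))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [[(v - lo) / (hi - lo) if hi > lo else 0.0
             for v, lo, hi in zip(row, mins, maxs)] for row in clean]

raw = [(5.0, 10.0), (5.0, 10.0), (1.0, None), (3.0, 20.0)]  # hypothetical flow records
scaled = preprocess(raw)
```

Normalized inputs of this kind would then be encoded into spike trains for the CSNN stage.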
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image blocks according to their nature, along with using polynomial approximation to decompose the image signal, followed by applying run-length coding on the residual part of the image, which represents the error caused by the polynomial approximation. Then, Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can lead to promising performance.
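The run-length and Huffman stages described above can be sketched as follows. The residual values are hypothetical, and the paper's exact encoding of polynomial coefficients is not specified; the sketch run-length encodes a residual sequence and then Huffman-codes the run values:

```python
import heapq
from collections import Counter

def run_length_encode(residuals):
    """Collapse runs of equal residual values into (value, count) pairs."""
    encoded = []
    for v in residuals:
        if encoded and encoded[-1][0] == v:
            encoded[-1][1] += 1
        else:
            encoded.append([v, 1])
    return [tuple(p) for p in encoded]

def huffman_code(symbols):
    """Build a prefix code from symbol frequencies via greedy tree merging."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, next_id, merged])
        next_id += 1
    return heap[0][2]

residuals = [0, 0, 0, 1, 1, -1, 0, 0, 2, 0]  # hypothetical approximation errors
runs = run_length_encode(residuals)
codes = huffman_code([v for v, _ in runs])
bitstream = "".join(codes[v] for v, _ in runs)
```

Because near-zero residuals dominate after a good polynomial fit, long runs compress well and the frequent run values receive the shortest Huffman codes, which is what makes the combined scheme lossless yet compact.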