Biomarkers to detect Alzheimer’s disease (AD) would enable patients to gain access to appropriate services and may facilitate the development of new therapies. Given the large number of people affected by AD, there is a need for a low-cost, easy-to-use method to detect AD patients. The electroencephalogram (EEG) could potentially play a valuable role here, but at present no single EEG biomarker is robust enough for use in practice. This study aims to provide a methodological framework for developing robust EEG biomarkers that detect AD with clinically acceptable performance by exploiting the combined strengths of key biomarkers. A large number of existing and novel EEG biomarkers associated with slowing of the EEG, reduction in EEG complexity, and decrease in EEG connectivity were investigated. Support vector machine and linear discriminant analysis methods were used to find the combination of EEG biomarkers that detects AD with the best performance. A total of 325,567 EEG biomarkers were investigated, and a panel of six biomarkers was identified and used to create a diagnostic model with high performance (sensitivity ≥85% and specificity of 100%).
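The abstract's core idea, projecting a small panel of biomarkers onto a single discriminant axis and thresholding it, can be illustrated with Fisher's linear discriminant. The sketch below is a minimal toy, not the study's method: the two features (relative theta power, sample entropy) and all numbers are invented for illustration.

```python
# Toy sketch of combining two hypothetical EEG biomarkers with Fisher's
# linear discriminant: w = Sw^{-1} (m_AD - m_HC), threshold at the
# midpoint projection.  Features and values are illustrative only.

def mean(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fit_lda(ad, hc):
    """Fisher LDA for two 2-D classes."""
    m1, m0 = mean(ad), mean(hc)
    # pooled within-class scatter matrix (2x2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((ad, m1), (hc, m0)):
        for x, y in rows:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    d = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [( s[1][1] * d[0] - s[0][1] * d[1]) / det,   # Sw^{-1} d, row 1
         (-s[1][0] * d[0] + s[0][0] * d[1]) / det]   # Sw^{-1} d, row 2
    mid = [(m1[0] + m0[0]) / 2, (m1[1] + m0[1]) / 2]
    thr = w[0] * mid[0] + w[1] * mid[1]
    return w, thr

def predict(w, thr, x):
    return 1 if w[0] * x[0] + w[1] * x[1] > thr else 0

# hypothetical biomarker pairs: (relative theta power, sample entropy)
ad = [(7.0, 0.40), (6.5, 0.35), (7.2, 0.45), (6.8, 0.38)]  # AD patients
hc = [(4.0, 0.80), (4.5, 0.75), (3.8, 0.85), (4.2, 0.78)]  # healthy controls

w, thr = fit_lda(ad, hc)
sensitivity = sum(predict(w, thr, x) for x in ad) / len(ad)
specificity = sum(1 - predict(w, thr, x) for x in hc) / len(hc)
print(sensitivity, specificity)  # on this separable toy data both are 1.0
```

A real pipeline would of course evaluate on held-out subjects rather than the training data, and would search over many more candidate biomarkers, as the study's 325,567-feature screen suggests.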
Portland cement concrete has been the most commonly used construction material in the world for decades. However, research in concrete technology continues to grow to meet particular requirements related to strength, durability, and sustainability. Thus, several types of concrete have been developed to enhance concrete performance. Most modern concrete types contain supplementary cementitious materials (SCMs) as a partial replacement for cement. These materials are either industrial by-products, such as fly ash, slag, rice husk ash, and silica fume, or come from geological resources, such as natural pozzolans and metakaolin (MK). Ideally, the utilization of SCMs will enhance concrete performance, minimize
A recently reported Nile red (NR) dye conjugated with benzothiadiazole species paves the way for the development of novel organic-based sensitizers for solar cells, whose structures are amenable to modification. Thus, six novel NR structures were derived from two structures previously developed in the laboratory. In this study, density functional theory (DFT) and time-dependent DFT (TD-DFT) calculations were used to determine the optoelectronic properties of the NR-derived moieties, such as their absorption spectra. Various linkers were investigated in an attempt to understand the impact of π-linkers on the optoelectronic properties. According to the findings, the presence of furan species led to planarity of the molecule and a reduction
Aerial manipulation of objects has a number of advantages, as it is not limited by the morphology of the terrain. One of the main problems of the aerial payload process is the lack of real-time prediction of the interaction between the gripper of the aerial robot and the payload. This paper introduces a digital twin (DT) approach based on impedance control of the aerial payload transmission process. The impedance control technique is implemented to develop the target impedance based on combining the mass of the payload with the model of the gripper fingers. Tracking the position of the interaction point between the gripper fingers and the payload, inside the impedance control, is achieved using a model predictive control (MPC) approach.
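The target-impedance idea can be sketched as a mass-spring-damper model driven by the contact force: M·ẍ + B·ẋ + K·(x − x_ref) = F_ext. The parameter values, time step, and constant contact force below are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of a target impedance for a gripper finger in contact
# with a payload.  Semi-implicit Euler integration of
#   M*x'' + B*x' + K*(x - x_ref) = F_ext
# All numbers are assumptions chosen for a stable, well-damped response.

M, B, K = 0.5, 8.0, 100.0   # target inertia (kg), damping, stiffness
x_ref = 0.0                  # commanded finger position (m)
F_ext = 2.0                  # constant contact force from the payload (N)
dt, T = 0.001, 2.0           # step size (s), simulated horizon (s)

x, v = 0.0, 0.0
for _ in range(int(T / dt)):
    a = (F_ext - B * v - K * (x - x_ref)) / M  # impedance dynamics
    v += a * dt
    x += v * dt

# at steady state the virtual spring balances the force: x -> F_ext/K = 0.02 m
print(x)
```

In the paper's setting, an MPC layer would track the finger-payload interaction point and shape F_ext online; this sketch only shows the compliance behavior the target impedance imposes.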
In this study, optical fibers were designed and implemented as a chemical sensor based on surface plasmon resonance (SPR) to estimate the age of the oil used in electrical transformers. The study depends on the refractive indices of the oil. The sensor was created by embedding the center portion of the optical fiber in a resin block, followed by polishing and tapering; the tapering time was 50 min. The multi-mode optical fiber was coated with a 60 nm-thick gold layer over a deposition length of 4 cm. The sensor's resonance wavelength was 415 nm. The primary sensor parameters were calculated, including sensitivity (6.25), signal-to-noise ratio (2.38), figure of merit (4.88), and accuracy (3.2)
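The figures of merit named in the abstract are conventionally derived from the resonance shift per refractive-index change and the spectral width of the resonance dip. The sketch below shows those standard definitions only; the refractive indices, wavelengths, and width are invented illustration values (the abstract reports its own values without units), not the paper's measurements.

```python
# Back-of-the-envelope sketch of standard SPR sensor figures of merit.
# S   = d(lambda_res)/dn      (spectral sensitivity, nm/RIU)
# SNR = resonance shift / FWHM
# FOM = S / FWHM
# All input numbers below are hypothetical.

n1, n2 = 1.460, 1.468          # refractive indices of two oil samples (RIU)
lam1, lam2 = 415.0, 465.0      # corresponding resonance wavelengths (nm)
fwhm = 21.0                    # spectral width of the resonance dip (nm)

shift = lam2 - lam1                      # resonance shift (nm)
sensitivity = shift / (n2 - n1)          # nm/RIU
snr = shift / fwhm                       # dimensionless
fom = sensitivity / fwhm                 # 1/RIU

print(sensitivity, round(snr, 2), round(fom, 1))
```

Tracking how the resonance wavelength drifts with the oil's refractive index over time is what lets such a sensor estimate transformer-oil aging.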
With the increasing integration of computers and smartphones into our daily lives, and the numerous benefits this offers over traditional paper-based methods of conducting affairs, it has become necessary to extend this integration to one of the most essential institutions: colleges. The traditional approach to conducting affairs in colleges is mostly paper-based, which increases time and workload and is relatively decentralized. This project provides educational and management services for the university environment, targeting the staff, the student body, and the lecturers, on two of the most used platforms: smartphones and reliable web applications by clo
The internet of medical things (IoMT) is expected to become one of the most widely distributed technologies worldwide. Using 5th generation (5G) transmission, market opportunities and hazards related to the IoMT are identified and assessed. This framework describes a strategy for proactively addressing concerns and offering a forum to promote development, change attitudes, and maintain people's confidence in the broader healthcare system without compromising security. It is combined with a data offloading system to speed up the transmission of medical data and improve the quality of service (QoS). As a result of this development, we propose the enriched energy-efficient fuzzy (EEEF) data offloading technique to enhance the delivery of dat
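The abstract does not specify the EEEF rule base, so the following is a generic Mamdani-style sketch of how a fuzzy offloading decision can work: two inputs (node battery level and link quality, both normalized to [0, 1]) are fuzzified with triangular memberships, combined through hand-written rules, and defuzzified into an offloading score. The membership shapes and rules are assumptions for illustration only.

```python
# Generic fuzzy offloading-decision sketch (NOT the paper's EEEF rules).

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_score(battery, link):
    """battery, link in [0, 1]; returns offloading desirability in [0, 1]."""
    low_batt  = tri(battery, -0.01, 0.0, 0.5)   # left shoulder at 0
    high_batt = tri(battery, 0.5, 1.0, 1.01)    # right shoulder at 1
    poor_link = tri(link, -0.01, 0.0, 0.5)
    good_link = tri(link, 0.5, 1.0, 1.01)
    # rule strength (AND = min) paired with its consequent:
    # 1.0 = offload to the edge, 0.0 = process locally
    rules = [(min(low_batt, good_link), 1.0),   # drained node, good link
             (min(high_batt, poor_link), 0.0),  # healthy node, poor link
             (min(low_batt, poor_link), 0.5),
             (min(high_batt, good_link), 0.5)]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5            # weighted-average defuzzification

print(offload_score(0.2, 0.9))  # drained node, strong link -> offload
print(offload_score(0.9, 0.2))  # healthy node, weak link -> stay local
```

An energy-efficient variant like EEEF would additionally fold transmission energy cost and QoS targets into the rule base before defuzzifying.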
The meniscus has a crucial function in human anatomy, and magnetic resonance imaging (MRI) plays an essential role in meniscus assessment. It is difficult to identify cartilage lesions using typical image processing approaches because MRI data is so diverse. An MRI sequence comprises numerous images, and the attribute area we are searching for may differ from one image in the series to the next. Feature extraction therefore becomes more complicated, and traditional image processing in particular becomes very complex. In traditional image processing, a human tells the computer what should be there, whereas a deep learning (DL) algorithm automatically extracts the features of what is already there. The surface changes become valuable when