Significant advances in automated glaucoma detection have been made through the use of Machine Learning (ML) and Deep Learning (DL) methods, an overview of which is provided in this paper. What sets the current literature review apart is its exclusive focus on these techniques for glaucoma detection, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for filtering the selected papers. To achieve this, an advanced search was conducted in the Scopus database, specifically looking for research papers published in 2023 with the keywords "glaucoma detection", "machine learning", and "deep learning". Among the papers found, those focusing on ML and DL techniques were selected. The best ML performance metrics recorded in the reviewed papers were for the SVM, which achieved accuracies of 98.31%, 98.61%, 96.43%, 96.67%, 95.24%, and 98.60% on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, DRISHTI-GS, and sjchoi86-HRF databases, respectively, when employing the REFUGE-trained model, while the ACRIMA-trained model attained accuracies of 98.92%, 99.06%, 98.27%, 97.10%, 96.97%, and 96.36% on the same databases, respectively. The best DL performance metrics recorded in the reviewed papers were for a lightweight CNN, with an accuracy of 99.67% on the Diabetic Retinopathy (DR) database and 96.5% on the Glaucoma (GL) database. In the context of non-healthy screening, a CNN achieved an accuracy of 99.03% when distinguishing between GL and DR cases. Finally, the best overall performance metrics were obtained using ensemble learning methods, which achieved 100% accuracy, 100% specificity, and 100% sensitivity. The current review offers valuable insights for clinicians and summarizes the recent ML and DL techniques used for glaucoma detection, including algorithms, databases, and evaluation criteria.
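As a rough illustration of the cross-database evaluation protocol summarized above, the following is a minimal Python/scikit-learn sketch, not the reviewed authors' code: an SVM is trained on features from one database and its accuracy is reported on the others. The feature dimensionality and the placeholder data are assumptions.

```python
# Hypothetical cross-database SVM evaluation sketch (not the reviewed authors' code).
# Assumes pre-extracted feature vectors and binary glaucoma labels for each database.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def cross_database_accuracy(train_db, test_dbs):
    """Train an SVM on one database and report accuracy on every other database."""
    X_train, y_train = train_db
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    model.fit(X_train, y_train)
    return {name: accuracy_score(y, model.predict(X)) for name, (X, y) in test_dbs.items()}

# Example usage with random placeholder data standing in for ACRIMA, REFUGE, etc.
rng = np.random.default_rng(0)
fake = lambda n: (rng.normal(size=(n, 64)), rng.integers(0, 2, n))
scores = cross_database_accuracy(fake(400), {"ACRIMA": fake(100), "RIM-ONE": fake(100)})
print(scores)
```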
This research deals with the use of a number of statistical methods, such as the kernel method, watershed, histogram, and cubic spline, to improve the contrast of digital images. The results obtained according to the RMSE and NCC metrics show that the spline method gives the most accurate results compared with the other statistical methods.
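For reference, the two evaluation metrics mentioned above can be computed as in the NumPy sketch below; this is an illustration, not the paper's implementation, and the placeholder images are assumptions.

```python
# Illustrative sketch (not the paper's code): RMSE and normalized cross-correlation (NCC)
# between a reference image and an enhanced image, both given as NumPy arrays.
import numpy as np

def rmse(reference, enhanced):
    """Root-mean-square error between two equally sized grayscale images."""
    diff = reference.astype(float) - enhanced.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def ncc(reference, enhanced):
    """Normalized cross-correlation; 1.0 means identical up to a positive scale factor."""
    a = reference.astype(float) - reference.mean()
    b = enhanced.astype(float) - enhanced.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

img = np.random.default_rng(1).integers(0, 256, (64, 64))
enhanced = np.clip(img * 1.2 + 10, 0, 255)   # placeholder "contrast-enhanced" image
print(rmse(img, enhanced), ncc(img, enhanced))
```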
Building Information Modeling (BIM) is extensively used in the construction industry due to its benefits throughout the Project Life Cycle (PLC). BIM can simulate buildings throughout the PLC, detect and resolve problems, and improve building visualization, which contributes to the representation of actual project details in the construction stage. BIM contributes to promoting project management by detecting problems that lead to conflicts, cost overruns, and time delays. This work aims to implement an effective BIM for the Iraqi construction projects' life cycle. The methodology used is a literature review to collect the most important factors contributing to the success of BIM implementation, interviews with the team of the Cent…
Uropathogenic specific protein is a genotoxic protein targeting DNA, leading to mutations and modifications in the DNA of normal cells and, subsequently, cancer development. This study aims to determine the prevalence of the usp gene in Uropathogenic Escherichia coli isolated from females with urinary tract infections and to study its correlation with biofilm formation. One hundred and five urine specimens were collected from female patients (20 to 55 years old) with urinary tract infections attending hospitals. Traditional laboratory methods using selective and differential culture media were used for initial bacterial isolation and identification, and molecular techniques targeted a segment of the 16S rRNA gene with a specific primer pa…
It is well known that understanding human facial expressions, a key component in understanding emotions with broad applications in the field of human-computer interaction (HCI), has been a long-standing issue. In this paper, we shed light on the utilisation of a deep convolutional neural network (DCNN) for facial emotion recognition from videos using the TensorFlow machine-learning library from Google. This work was applied to ten emotions from the Amsterdam Dynamic Facial Expression Set - Bath Intensity Variations (ADFES-BIV) dataset and tested using two datasets.
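A minimal sketch of the kind of DCNN classifier described above, built with TensorFlow/Keras; the input size, layer widths, and ten-class softmax output are assumptions for illustration and do not reproduce the paper's exact architecture.

```python
# Minimal DCNN emotion-classifier sketch in TensorFlow/Keras (illustrative architecture only).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=10):
    """Small convolutional network mapping face crops to one of ten emotion classes."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_emotion_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```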
In this paper, membrane-based computing image segmentation, both region-based and edge-based, is proposed for medical images, involving two types of neighborhood relations between pixels. These neighborhood relations, namely 4-adjacency and 8-adjacency, are used in a membrane computing approach to construct a family of tissue-like P systems for segmenting actual 2D medical images in a constant number of steps; the two types of adjacency were compared using different hardware platforms. The process involves the generation of membrane-based segmentation rules for 2D medical images. The rules are written in the P-Lingua format and appended to the input image for visualization. The findings show that the neighborhood relations between pixels o…
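To illustrate how the choice of neighborhood relation affects region-based segmentation, the sketch below grows a region from a seed pixel under 4-adjacency and 8-adjacency. This is plain region growing in Python, not a tissue-like P system, and the tolerance and placeholder image are assumptions.

```python
# Illustrative sketch (not a P-system implementation): region growing on a 2D image
# under 4-adjacency vs 8-adjacency, to show how the neighborhood relation changes the result.
import numpy as np
from collections import deque

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def region_grow(image, seed, tol, neighbors):
    """Label all pixels reachable from `seed` whose intensity is within `tol` of the seed."""
    mask = np.zeros(image.shape, dtype=bool)
    queue, seed_val = deque([seed]), image[seed]
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in neighbors:
            rr, cc = r + dr, c + dc
            if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                    and not mask[rr, cc] and abs(int(image[rr, cc]) - int(seed_val)) <= tol):
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask

img = np.random.default_rng(2).integers(0, 256, (32, 32))
print(region_grow(img, (16, 16), 40, N4).sum(), region_grow(img, (16, 16), 40, N8).sum())
```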
The use of surveillance cameras in houses and markets has become common, which has helped minimize theft and made it a difficult task, because the cameras allow recording and viewing of what is going on around them. The wide application of these cameras has pushed thieves to seek new ways to defeat the surveillance system and the digital recording of events, such as cutting the signal wire between the camera and the digital video recorder, changing the direction of the camera away from the focus spot, damaging the camera, or stealing the device, which means the loss of the recorded media. This paper focuses on such attacks and addresses them by suggesting a way to notify the administrator immediately and automatically by email about any violation of the system using MATLAB, whic…
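The abstract's approach is implemented in MATLAB; the following is only a rough Python analogue of the idea, which flags a near-black frame or an abrupt scene change as possible tampering and emails the administrator. The SMTP host, credentials, camera index, and thresholds are placeholders.

```python
# Rough Python analogue (the paper itself uses MATLAB): detect a cut/covered camera or a
# moved camera via simple frame statistics, then email the administrator automatically.
import smtplib
from email.message import EmailMessage
import cv2

def is_tampered(prev_gray, gray, dark_thresh=10, change_thresh=60):
    """Very dark frame (cut/covered camera) or a large mean frame difference (moved camera)."""
    if gray.mean() < dark_thresh:
        return True
    return cv2.absdiff(prev_gray, gray).mean() > change_thresh

def alert_admin(subject, body):
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "camera@example.com", "admin@example.com"
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com", 587) as server:   # placeholder SMTP host
        server.starttls()
        server.login("camera@example.com", "password")       # placeholder credentials
        server.send_message(msg)

cap = cv2.VideoCapture(0)                                     # placeholder camera index
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if ok else None
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if is_tampered(prev, gray):
        alert_admin("Camera alert", "Possible tampering detected on camera 1.")
        break
    prev = gray
cap.release()
```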
In this paper we present an operational computer vision system for real-time motion detection and recording that can be used in a surveillance system. The system captures a video of a scene, identifies the frames that contain motion, and records them so that only the frames that are important to us are kept; a report in the form of a movie is produced and can be displayed. Everything captured by the camera is also recorded so that the two movies can be compared. This serves as both a proof-of-concept and a verification of other existing algorithms for motion detection. Motion frames are detected using frame differencing. The results of the experiments with the system indicate the ability to minimize some of the problems of false detec…
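A minimal frame-differencing sketch of the idea (illustrative, not the authors' system): frames whose pixel-wise difference from the previous frame exceeds a threshold are treated as motion frames and written to a separate output video. File names, frame rate, and thresholds are assumptions.

```python
# Illustrative frame-differencing motion detector: keep only frames where more than 1% of
# pixels changed noticeably compared with the previous frame.
import cv2

cap = cv2.VideoCapture("scene.mp4")                      # placeholder input video
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("motion_only.mp4", fourcc, 25.0, (frame.shape[1], frame.shape[0]))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 0.01 * mask.size:        # >1% of pixels changed -> motion frame
        out.write(frame)
    prev = gray

cap.release()
out.release()
```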
The rehabilitation of deteriorated pavements using Asphalt Concrete (AC) overlays consistently confronts the challenge of reflection cracking, where inherent cracks and joints from an existing pavement layer are mirrored in the new overlay. To address this issue, the current study evaluates the effectiveness of Engineered Cementitious Composite (ECC) and geotextile fabric as mitigation strategies. ECC, characterized by its tensile ductility, fracture resistance, and high deformation capacity, was examined in interlayer thicknesses of 7, 12, and 17 mm. Additionally, the impact of geotextile fabric positioning at the base and at 1/3 depth of the AC specimen was explored. Utilizing the Overlay Testing Machine (OTM) for evaluations, the research d…
In this study, the relationship between bare soil temperature and soil salinity is presented. Only the bare soil feature is considered, by eliminating all other land features through classification of the site location using the support vector machine algorithm; at the same time, the salinity index calculated from the spectral response of the satellite bands is calibrated using empirical salinity values obtained from field soil samples. A 2D probability density function is used to analyze the relationship between the calibrated salinity index and the temperature rise from the minimum temperature (at sunrise), driven by the duration of solar radiation up to the time the satellite captured the scene image. T…
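The 2D probability-density step can be sketched with NumPy as follows; the variable names, placeholder data, and bin count are assumptions, not the study's actual values.

```python
# Sketch of the 2D probability-density analysis (illustrative only): joint density of
# temperature rise vs. calibrated salinity index over a regular 2D grid.
import numpy as np

rng = np.random.default_rng(3)
temperature_rise = rng.normal(12.0, 3.0, 5000)                        # placeholder values
salinity_index = 0.4 * temperature_rise + rng.normal(0, 2.0, 5000)    # placeholder values

# density=True normalizes the 2D histogram so that it integrates to 1, i.e. an empirical PDF.
pdf, temp_edges, sal_edges = np.histogram2d(temperature_rise, salinity_index,
                                            bins=50, density=True)
print(pdf.shape, pdf.max())
```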
This paper proposes a new method for network self-fault management (NSFM) based on two technologies: intelligent agents to automate fault management tasks, and Windows Management Instrumentation (WMI) to identify faults faster when resources are independent (different types of devices). The proposed network self-fault management reduces the network traffic load by reducing the requests and responses between the server and the clients, which achieves less downtime for each node when a fault occurs on the client. The performance of the proposed system is measured by three measures: efficiency, availability, and reliability. A high average efficiency is obtained, depending on the faults that occurred in the system, which reaches …
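An agent-side sketch of the WMI polling idea, offered as an assumption-laden illustration rather than the paper's implementation; it uses the third-party `wmi` Python package on a Windows client to detect services that should be running but are stopped, so faults can be reported locally instead of sending every status request to the server.

```python
# Illustrative client-side agent sketch (not the paper's code), assuming the third-party
# `wmi` package on a Windows node: poll WMI for auto-start services that are not running.
import time
import wmi  # pip install wmi (Windows only)

def find_stopped_services():
    """Return names of services configured to start automatically but currently stopped."""
    conn = wmi.WMI()
    return [svc.Name for svc in conn.Win32_Service()
            if svc.StartMode == "Auto" and svc.State != "Running"]

while True:
    faults = find_stopped_services()
    if faults:
        print("Detected faults on this node:", faults)   # a real agent would act or notify here
    time.sleep(60)                                       # poll interval; placeholder value
```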