Clinical keratoconus (KCN) detection is a challenging and time-consuming task. In the diagnosis process, ophthalmologists must review demographic data and clinical ophthalmic examinations. The latter include slit-lamp examination, corneal topographic maps, and Pentacam indices (PI). We propose an Ensemble of Deep Transfer Learning (EDTL) method based on corneal topographic maps. We consider four pretrained networks, SqueezeNet (SqN), AlexNet (AN), ShuffleNet (SfN), and MobileNet-v2 (MN), and fine-tune them on a dataset of KCN and normal cases, each including four topographic maps. We also consider a PI classifier. Our EDTL method then combines the output probabilities of the five classifiers to obtain a decision based on the fusion of probabilities. Individually, the classifier based on PI achieved 93.1% accuracy, whereas the deep classifiers reached classification accuracies over 90% only in isolated cases. Overall, the average accuracy of the deep networks over the four corneal maps ranged from 86% (SfN) to 89.9% (AN). The classifier ensemble increased the accuracy of the map-based deep classifiers to values ranging from 92.2% to 93.1% for SqN and from 93.1% to 94.8% for AN. Including specific combinations of the corneal-map classifiers and the PI classifier in the ensemble increased the accuracy to 98.3%. Moreover, visualization of the first-layer learned filters of the networks and of Grad-CAMs confirmed that the networks had learned relevant clinical features. This study shows the potential of creating ensembles of deep classifiers fine-tuned with a transfer learning strategy, as it resulted in improved accuracy while producing learned filters and Grad-CAMs that agree with clinical knowledge. This is a step towards the potential clinical deployment of an improved computer-assisted diagnosis system for KCN detection, helping ophthalmologists confirm the clinical decision and provide fast and accurate KCN treatment.
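For illustration, the probability-fusion step of such an ensemble can be sketched as a (weighted) average of the per-classifier class probabilities; the function and variable names below are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def fuse_probabilities(prob_list, weights=None):
    """Average per-classifier class-probability matrices (each of shape (n_samples, n_classes))."""
    probs = np.stack(prob_list)                    # shape: (n_classifiers, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    fused = np.tensordot(weights, probs, axes=1)   # weighted average over classifiers
    return fused, fused.argmax(axis=1)             # fused probabilities and class decisions

# Illustrative call: outputs of the four map-based CNNs plus the PI classifier
# fused_p, labels = fuse_probabilities([p_sqn, p_an, p_sfn, p_mn, p_pi])
```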
In this paper, we used frequentist and Bayesian approaches to the linear regression model to predict future observations of unemployment rates in Iraq. Parameters are estimated using the ordinary least squares method for the frequentist approach and the Markov chain Monte Carlo (MCMC) method for the Bayesian approach. Calculations are carried out using the R program. Two criteria, the root mean square error (RMSE) and the median absolute deviation (MAD), were used to compare the performance of the estimates. The analysis showed that the linear regression model using the Bayesian approach is better and can be used as an alternative to the frequentist approach. The results obtained showed that unemployment rates will continue to increase over the next two decades.
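As a minimal sketch (in Python rather than R, with illustrative variable names), the two comparison criteria and the frequentist OLS fit could look as follows; the Bayesian predictions are assumed to come from an external MCMC sampler.

```python
import numpy as np

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mad(y, y_hat):
    return np.median(np.abs(y - y_hat))

def ols_fit(X, y):
    """Frequentist fit: ordinary least squares on a design matrix X (with an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Illustrative comparison on hold-out data (X_train, y_train, X_test, y_test are assumed to exist):
# y_ols   = X_test @ ols_fit(X_train, y_train)
# y_bayes = X_test @ posterior_mean_beta      # e.g. posterior mean from an MCMC sampler such as PyMC
# print(rmse(y_test, y_ols),   mad(y_test, y_ols))
# print(rmse(y_test, y_bayes), mad(y_test, y_bayes))
```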
In this paper, a robust invisible watermarking system for digital video encoded with MPEG-4 is presented. The proposed scheme hides the watermark by embedding a secret message (watermark) in the sprite area allocated in the reference frame (I-frame). The proposed system consists of two main units: (i) an embedding unit and (ii) an extraction unit. In the embedding unit, the system allocates the sprite blocks using motion compensation information; the allocated sprite area in each I-frame is used as the hosting area for embedding the watermark data. In the extraction unit, the system extracts the watermark data in order to check the authentication and ownership of the video. The watermark data embedding method is block-average modulation applied in the RGB domain.
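A rough, non-blind illustration of block-average modulation (not the paper's exact embedding and extraction procedure) is sketched below; the block size and modulation strength are invented.

```python
import numpy as np

def embed_bits(region, bits, block=8, delta=4.0):
    """Block-average modulation: shift each block's mean up for bit 1, down for bit 0."""
    out = region.astype(np.float64)
    h, w = out.shape[:2]
    idx = 0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if idx >= len(bits):
                return np.clip(out, 0, 255).astype(np.uint8)
            out[r:r + block, c:c + block] += delta if bits[idx] else -delta
            idx += 1
    return np.clip(out, 0, 255).astype(np.uint8)

def extract_bits(marked, original, n_bits, block=8):
    """Non-blind recovery: compare block means of the marked and original sprite areas."""
    bits, idx = [], 0
    h, w = marked.shape[:2]
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if idx >= n_bits:
                break
            diff = marked[r:r + block, c:c + block].mean() - original[r:r + block, c:c + block].mean()
            bits.append(1 if diff > 0 else 0)
            idx += 1
    return bits
```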
Decision making is a vital activity in operations research, engineering, administrative science, and economics for any industrial or service company or organization, because it is the core of the management process and improves its performance. This research addresses the decision-making process when the objective function is a fractional function: fractional programming models are solved using several fractional programming methods together with the goal programming method, with the aid of the WinQSB software, and the results explain the effect of using the goal programming method on the decision-making process when the objective function is fractional.
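The goal programming formulation itself is not reproduced here; as a related illustration, the sketch below shows one standard way to handle a linear-fractional objective, the Charnes-Cooper transformation, using SciPy, with invented example data.

```python
import numpy as np
from scipy.optimize import linprog

def solve_linear_fractional(c, alpha, d, beta, A, b):
    """Maximize (c'x + alpha) / (d'x + beta) s.t. A x <= b, x >= 0,
    via the Charnes-Cooper transformation y = t*x, t = 1/(d'x + beta)."""
    n = len(c)
    obj = -np.append(c, alpha)                   # minimize -(c'y + alpha*t)
    A_ub = np.hstack([A, -b.reshape(-1, 1)])     # A y - b t <= 0
    b_ub = np.zeros(A.shape[0])
    A_eq = np.append(d, beta).reshape(1, -1)     # d'y + beta*t = 1
    b_eq = np.array([1.0])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    y, t = res.x[:n], res.x[n]
    return y / t                                 # recover the original variables x

# Invented example: maximize (2x1 + 3x2 + 1) / (x1 + x2 + 2) with x1 + x2 <= 4, x >= 0
# x_opt = solve_linear_fractional(np.array([2., 3.]), 1., np.array([1., 1.]), 2.,
#                                 np.array([[1., 1.]]), np.array([4.]))
```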
In this work, an optical technique (the laser speckle technique) for measuring surface roughness is presented, using statistical properties of the speckle pattern from the point of view of computer image texture analysis. Four calibration relationships are used to cover a wide range of measurement with the same laser speckle technique: the first is based on the intensity contrast of the speckle, the second on the analysis of the speckle binary image, the third on the size of the speckle pattern spot, and the last on the energy feature of the gray-level co-occurrence matrices (GLCM) of the speckle pattern. By these calibration relationships, the surface roughness of an object surface can be evaluated within the ...
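Two of the four descriptors, the speckle intensity contrast and the GLCM energy feature, can be sketched as follows (assuming scikit-image is available); the calibration curves that map these features to roughness come from the experiment and are not reproduced.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def speckle_contrast(img):
    """Speckle intensity contrast C = std(I) / mean(I)."""
    img = img.astype(np.float64)
    return img.std() / img.mean()

def glcm_energy(img_uint8, distance=1, angle=0.0):
    """Energy feature of the gray-level co-occurrence matrix of the speckle image (8-bit input)."""
    glcm = graycomatrix(img_uint8, distances=[distance], angles=[angle],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, 'energy')[0, 0]

# Either feature is then mapped to roughness through an experimentally fitted
# calibration curve, e.g. Ra = f(C) obtained from reference roughness standards.
```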
In this paper, an enhanced artificial potential field (EAPF) planner is introduced. This planner is proposed to rapidly find online solutions to mobile robot path planning problems when the underlying environment contains obstacles with unknown locations and sizes. The classical artificial potential field represents both the repulsive force due to a detected obstacle and the attractive force due to the target; these forces can be considered the primary directional indicator for the mobile robot. However, the classical artificial potential field has many drawbacks, so we suggest two secondary forces, called the midpoint ...
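For reference, a minimal sketch of the classical attractive/repulsive force computation that the EAPF builds on is given below; the gains, influence radius, and step size are illustrative, and the paper's two secondary forces are not reproduced.

```python
import numpy as np

def apf_force(q, q_goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0):
    """Classical artificial potential field: attractive pull toward the goal plus
    a repulsive push from each obstacle closer than the influence radius rho0."""
    f = k_att * (q_goal - q)                       # attractive force
    for q_obs in obstacles:
        diff = q - q_obs
        rho = np.linalg.norm(diff)
        if 0 < rho <= rho0:
            f += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return f

# One gradient-descent step of the robot position (step size is illustrative):
# q = q + 0.05 * apf_force(q, q_goal, [np.array([2.0, 2.0])])
```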
This paper proposes a novel method for generating True Random Numbers (TRNs) using electromechanical switches. The proposed generator is implemented using an FPGA board. The system utilizes the phenomenon of electromechanical switch bounce to produce a randomly fluctuating signal that triggers a counter to generate a binary random number. Compared to other true random number generation methods, the proposed approach features a high degree of randomness using a simple circuit that can be easily built from off-the-shelf components. The proposed system is implemented using a commercial relay circuit connected to an FPGA board that processes and records the generated random sequences. Applying statistical testing on the exp ...
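The abstract does not list which statistical tests were applied; as one common example, the NIST SP 800-22 monobit frequency test can be sketched as follows.

```python
import math

def monobit_frequency_test(bits):
    """NIST SP 800-22 frequency (monobit) test: a p-value >= 0.01 suggests the
    proportion of ones and zeros is consistent with a random sequence."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)           # map 0 -> -1, 1 -> +1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))     # p-value

# Illustrative use on a recorded bounce-derived bit sequence:
# print(monobit_frequency_test([1, 0, 1, 1, 0, 0, 1, 0] * 128))
```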
In this paper, the collocation method is used to solve ordinary differential equations with retarded arguments. Some examples are presented in order to illustrate this approach.
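As an illustration of the general idea (not one of the paper's examples), the sketch below applies polynomial collocation to the retarded equation y'(t) = -y(t - 1) with unit history; the polynomial degree and collocation grid are arbitrary choices.

```python
import numpy as np
from scipy.optimize import fsolve

tau, T, N = 1.0, 2.0, 8                      # delay, interval end, polynomial degree
t_col = np.linspace(T / N, T, N)             # collocation points in (0, T]

def history(t):                              # prescribed history: y(t) = 1 for t <= 0
    return 1.0

def y(c, t):                                 # polynomial approximation on [0, T]
    return np.polyval(c[::-1], t)

def dy(c, t):                                # its derivative
    return np.polyval(np.polyder(c[::-1]), t)

def residuals(c):
    eqs = [y(c, 0.0) - history(0.0)]         # initial condition y(0) = 1
    for t in t_col:                          # enforce y'(t) = -y(t - tau) at each point
        lag = history(t - tau) if t - tau <= 0 else y(c, t - tau)
        eqs.append(dy(c, t) + lag)
    return eqs

coeffs = fsolve(residuals, np.zeros(N + 1))
print(y(coeffs, np.array([0.5, 1.0, 2.0])))  # approximate solution values
```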
In this research, the parameters of the Type-1 Gumbel distribution for maximum values are estimated using two estimation methods: the method of moments (MoM) and the modified moments (MM) method. Simulation is used to compare the estimation methods and identify the best one for estimating the parameters: random data following the Gumbel distribution are generated for three models of the true parameter values and for different sample sizes, with R = 500 replicates per sample. The results of the assessment are tabulated for comparison, which is based on the mean squared error (MSE).
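The modified-moments variant is not specified in the abstract; the sketch below shows the ordinary method-of-moments estimator for the Type-1 (maximum) Gumbel distribution and a small MSE simulation with invented true parameter values.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def gumbel_mom(x):
    """Method-of-moments estimates for the Type-1 (maximum) Gumbel distribution:
    mean = mu + gamma*beta, variance = pi^2 * beta^2 / 6."""
    beta = np.std(x, ddof=1) * np.sqrt(6) / np.pi
    mu = np.mean(x) - EULER_GAMMA * beta
    return mu, beta

# Monte Carlo check of the estimator's MSE (true parameters and sample size are illustrative)
rng = np.random.default_rng(0)
mu_true, beta_true, n, R = 10.0, 2.0, 50, 500
estimates = np.array([gumbel_mom(rng.gumbel(mu_true, beta_true, n)) for _ in range(R)])
mse = np.mean((estimates - np.array([mu_true, beta_true])) ** 2, axis=0)
print("MSE(mu), MSE(beta):", mse)
```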