The aim of this research is to assess the validity of Detailed Micro-Modeling (DMM) as a numerical approach for masonry analysis. To achieve this aim, load-displacement curves obtained from numerical simulation were compared with experimental results for clay masonry prisms subjected to a vertical load. The finite element method was implemented within the DMM framework to analyze the experimental clay masonry prism, using the finite element software ABAQUS with an implicit solver. The load-displacement relationship of the numerical model was found to be in good agreement with that drawn from the experimental results: the curve obtained from the finite element model has almost the same shape and pattern as the experimental one, and the two curves increasingly resemble each other as the load increases, up to failure.
Neural cryptography addresses the problem of key exchange between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits), and the key shared between the two communicating parties is eventually represented in the final learned weights, at which point the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is able to synchronize with either of the two parties during the training process.
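The mutual-learning scheme described above is commonly realized with tree parity machines (TPMs). The following is a minimal sketch of that idea, assuming the standard Hebbian update rule; the parameters K, N, L and the tie-breaking choice are illustrative and not taken from the abstract.

```python
import numpy as np

# Tree parity machine (TPM) mutual learning -- a minimal sketch.
# K hidden units, N inputs per unit, integer weights bounded in [-L, L].
K, N, L = 3, 4, 3
MAX_STEPS = 50_000
rng = np.random.default_rng(0)

def tpm_output(w, x):
    """Hidden-unit signs and the network output tau (their product)."""
    sigma = np.sign((w * x).sum(axis=1))
    sigma[sigma == 0] = -1          # break ties deterministically
    return sigma, int(sigma.prod())

def hebbian_update(w, x, sigma, tau):
    """Move only the units that agree with tau; clip weights to the bound."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, size=(K, N))   # party A's secret weights
wB = rng.integers(-L, L + 1, size=(K, N))   # party B's secret weights

steps = 0
while steps < MAX_STEPS and not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))    # public random input
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                        # exchange outputs; learn only on agreement
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)
    steps += 1

print("synchronized:", np.array_equal(wA, wB), "after", steps, "inputs")
```

Once the loop exits, the identical weight matrices serve as the shared key material; only the bit-valued outputs ever crossed the channel.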
Diabetes is one of the fastest-growing chronic diseases, affecting millions of people worldwide. Its diagnosis, prediction, proper treatment, and management are essential. Machine learning-based prediction techniques for diabetes data analysis can help in the early detection of the disease and the prediction of its consequences, such as hypo/hyperglycemia. In this paper, we explore a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron, k-nearest neighbors (KNN), and Random Forest. We conducted two experiments: the first used all 12 features of the dataset, where Random Forest outperformed the others with 98.8% accuracy; the second used only five attributes.
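The classification pipeline described in the abstract can be sketched as follows. Since the Iraqi patient records are not available here, a synthetic stand-in dataset with the same feature count (12) is generated; the accuracy printed below is for the synthetic data only, not the paper's reported 98.8%.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the diabetes records: 1000 patients, 12 features,
# three outcome classes (e.g. non-diabetic / pre-diabetic / diabetic).
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"Random Forest accuracy on held-out data: {acc:.3f}")
```

The same split and metric can be reused with `MLPClassifier` and `KNeighborsClassifier` to reproduce the paper's three-way comparison on any dataset with this shape.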
Shadow detection and removal is an important task when dealing with color outdoor images. Shadows are generated by a local and relative absence of light: first, they are a local decrease in the amount of light that reaches a surface; second, they are a local change in the amount of light reflected by a surface toward the observer. Most shadow detection and segmentation methods are based on image analysis; however, some factors affect the detection result due to the complexity of the circumstances. In this paper, a segmentation-test method is presented to detect shadows in an image, and a function concept is used to remove the shadow from the image.
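As a simplified stand-in for the segmentation test above, the sketch below detects a shadow as a region whose intensity falls well below the global mean of a synthetic grayscale image, then "removes" it by relighting. The threshold factor and the relighting rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Synthetic grayscale "outdoor" image: bright ground with a darker shadow patch.
img = np.full((64, 64), 200, dtype=np.uint8)
img[20:40, 10:30] = 60                      # shadow region

# Segmentation test (simplified): pixels well below the global mean are shadow.
threshold = img.mean() * 0.6
shadow_mask = img < threshold

# "Removal" by relighting: scale shadow pixels up to the non-shadow mean level.
lit_mean = img[~shadow_mask].mean()
restored = img.astype(float)
restored[shadow_mask] *= lit_mean / img[shadow_mask].mean()
restored = restored.clip(0, 255).astype(np.uint8)

print("shadow pixels detected:", int(shadow_mask.sum()))
```

On real color images the decision would use chromaticity as well as intensity, since shadows shift brightness much more than hue.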
Eye detection is used in many applications such as pattern recognition, biometrics, and surveillance systems. In this paper, a new method is presented to detect and extract the overall shape of one eye from an image, based on two principles: Helmholtz and Gestalt. According to the Helmholtz principle of perception, an observed geometric shape is perceptually "meaningful" if its number of occurrences in an image with random distribution is very small. Complementing this, the Gestalt principle states that humans perceive things either by grouping similar elements or by recognizing patterns.
The penalized least squares method is a popular approach for high-dimensional data, where the number of explanatory variables is larger than the sample size. It offers high prediction accuracy and performs estimation and variable selection simultaneously. The penalized least squares method yields a sparse model, that is, a model with few variables that can be interpreted easily. However, penalized least squares is not robust: it is very sensitive to outlying observations. To deal with this problem, a robust loss function can be used to obtain a robust penalized least squares method and robust penalized estimators.
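The sparsity mechanism mentioned above can be sketched with coordinate descent for the L1-penalized (lasso) least squares problem; the soft-thresholding step is what drives small coefficients exactly to zero. This is the non-robust baseline; a robust variant would replace the squared loss with, e.g., the Huber loss. All data and tuning values here are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Shrink z toward zero by t; this produces exact zeros (sparsity)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate descent for min_b 0.5*||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return beta

# Sparse ground truth: only coefficients 0 and 3 are active.
rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.standard_normal((n, p))
true_beta = np.zeros(p)
true_beta[0], true_beta[3] = 2.0, -3.0
y = X @ true_beta + 0.5 * rng.standard_normal(n)

beta = lasso_cd(X, y, lam=10.0)
print("estimated coefficients:", np.round(beta, 2))
```

Note the slight shrinkage of the active coefficients toward zero; this bias is the price paid for the automatic variable selection.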
Compressing speech reduces data storage requirements and thus the time needed to transmit digitized speech over long-haul links such as the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation algorithms introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (in one and two dimensions) on speech compression.
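The compress-by-thresholding idea behind wavelet speech compression can be sketched with a one-level Haar DWT, the simplest orthogonal wavelet; the paper's GHM-derived MCT uses different filters, so this is only a stand-in for the general mechanism.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation (a) and detail (d) coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# A toy "speech" frame: a low-frequency tone plus a little noise.
rng = np.random.default_rng(2)
t = np.arange(256)
signal = np.sin(2 * np.pi * t / 64) + 0.05 * rng.standard_normal(256)

a, d = haar_dwt(signal)
d_compressed = np.where(np.abs(d) > 0.1, d, 0.0)   # drop small detail coeffs
reconstructed = haar_idwt(a, d_compressed)

kept = np.count_nonzero(d_compressed) + len(a)
print(f"kept {kept}/{len(signal)} coefficients")
```

Because the transform is orthogonal, zeroing a detail coefficient of magnitude at most 0.1 perturbs each reconstructed sample by at most 0.1/√2, so the compression error stays small and controllable.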
This study investigates the effects of the Al-Doura oil refinery effluent, in Baghdad city, on the water quality of the Tigris River using the Canadian Water Quality Index (CCME WQI) and the Rivers Maintaining System (1967). Water samples were collected monthly from the Tigris River at three stations, namely Al-Muthanna Bridge (upstream), Al-Doura Refinery (point source), and Al-Zafaraniya city (downstream), from October 2020 to April 2021. Fourteen water quality parameters were studied, namely pH (6.50-8.10), water temperature (WT) (5.00-27.00 °C), electrical conductivity (EC) (877.00-1192.00 μS/cm), dissolved oxygen (DO) (5.03-7.57 mg/L), biological oxygen demand (BOD) (0.53-2.23 mg/L), and total dissolved solids (TDS).
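The CCME WQI combines three factors: F1 (scope, the fraction of variables that fail their guideline), F2 (frequency, the fraction of individual tests that fail), and F3 (amplitude, derived from the normalized sum of excursions). The sketch below computes the index from hypothetical guideline maxima and sample values chosen for illustration; they are not the study's measurements or limits.

```python
import math

# Hypothetical upper guideline limits and monthly measurements (illustrative).
guidelines = {"BOD": 3.0, "EC": 1000.0, "TDS": 500.0}
samples = {
    "BOD": [0.53, 1.1, 2.23, 1.8],
    "EC":  [877.0, 950.0, 1100.0, 1192.0],
    "TDS": [420.0, 480.0, 610.0, 550.0],
}

failed_vars = sum(any(v > lim for v in samples[p]) for p, lim in guidelines.items())
tests = [(v, guidelines[p]) for p in guidelines for v in samples[p]]
failed_tests = [(v, lim) for v, lim in tests if v > lim]

F1 = 100.0 * failed_vars / len(guidelines)          # scope
F2 = 100.0 * len(failed_tests) / len(tests)          # frequency
nse = sum(v / lim - 1.0 for v, lim in failed_tests) / len(tests)
F3 = nse / (0.01 * nse + 0.01)                       # amplitude
wqi = 100.0 - math.sqrt(F1**2 + F2**2 + F3**2) / 1.732

print(f"CCME WQI = {wqi:.1f}")
```

The 1.732 divisor (√3) rescales the three-factor vector so the index stays within 0-100; higher values indicate better water quality.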
In the present work, theoretical relations are derived for evaluating the efficiency of third- and fourth-harmonic generation using a crystal cascading configuration. These relations can be applied to a wide class of nonlinear optical materials. Calculations are made for a beta barium borate (BBO) crystal with a ruby laser at λ = 694.3 nm. The case study involves producing the third harmonic of the fundamental beam at λ = 231.4 nm. The efficiency formula involves many parameters that can be adjusted to enhance the efficiency. The results show that the efficiency does not vary linearly with the crystal length, and that the efficiency increases as the input power increases. The walk-off length is also calculated.
In this study, the adsorption of Zn(NO3)2 onto surfaces of Malva parviflora is investigated. The extent of adsorption is evaluated using atomic absorption spectrophotometry by determining the amount of adsorbed Zn(NO3)2. The effects of various parameters, such as pH, adsorbent weight, and contact time, on the reaction progress are studied. Furthermore, Lagergren's equation is used to determine the adsorption kinetics. It is observed that the highest removal of Zn(NO3)2 is obtained at pH = 2; removal reaches equilibrium at a contact time of 60 min, and 0.25 g is the best adsorbent weight. Kinetically, the adsorption onto Malva parviflora follows the pseudo-first-order Lagergren equation.
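The pseudo-first-order fit mentioned above is usually done on the linearized Lagergren form, ln(qe − qt) = ln(qe) − k1·t, so that a straight-line fit of ln(qe − qt) against t yields k1 from the slope. The sketch below demonstrates this on synthetic uptake data; the rate constant and capacity values are illustrative, not the study's results.

```python
import numpy as np

# Pseudo-first-order (Lagergren) model: q_t = q_e * (1 - exp(-k1 * t)).
# Illustrative parameters (not the study's measurements).
qe_true, k1_true = 12.0, 0.05          # mg/g, 1/min
t = np.array([5.0, 10, 20, 30, 45, 60])
qt = qe_true * (1.0 - np.exp(-k1_true * t))

# Linearized form: ln(qe - qt) = ln(qe) - k1 * t  -> straight line in t.
# In practice qe is the measured equilibrium uptake.
slope, intercept = np.polyfit(t, np.log(qe_true - qt), 1)
k1_est, qe_est = -slope, np.exp(intercept)
print(f"k1 ≈ {k1_est:.3f} 1/min, qe ≈ {qe_est:.2f} mg/g")
```

A good linear fit (high R² on the transformed data) is the standard evidence that the kinetics are pseudo-first-order rather than pseudo-second-order.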
In the present work, pattern recognition is carried out using the contrast and relative variance of clouds. The k-means clustering process is then applied to classify the cloud type; texture analysis is also adopted to extract textural features for use in the cloud classification process. The test image used in the classification is the Meteosat-7 image of the D3 region. The k-means method is adopted as an unsupervised classification technique; it depends on the initially chosen cluster seeds. Since the initial seeds are chosen randomly, the user supplies a set of means, or cluster centers, in the n-dimensional space. The k-means clustering has been applied to two bands (the IR2 band and the water vapour band).
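The seed-dependent k-means procedure described above can be sketched as follows, using two synthetic two-dimensional feature blobs as a stand-in for per-pixel (IR2, water vapour) feature vectors; the blob positions and spreads are invented for illustration.

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Plain k-means; cluster centers start from randomly chosen data points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each point to its nearest center, then recompute means
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic "cloud feature" blobs.
rng = np.random.default_rng(3)
blob1 = rng.normal([0.2, 0.2], 0.05, size=(50, 2))
blob2 = rng.normal([0.8, 0.8], 0.05, size=(50, 2))
X = np.vstack([blob1, blob2])

labels, centers = kmeans(X, k=2)
print("cluster sizes:", np.bincount(labels))
```

Because the initial centers are random, different seeds can converge to different partitions on less separated data, which is exactly the sensitivity to initial seeds that the abstract notes.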