Background: Radiopacity is one of the prerequisites for dental materials, especially for composite restorations. It is essential for easy detection of secondary dental caries as well as for observation of the radiographic interface between the materials and the tooth structure. The aim of this study was to assess differences in the radiopacity of different resin composites using a digital x-ray system. Materials and methods: Ten specimens (6 mm diameter and 1 mm thickness) of three types of composite resin (Evetric, Estelite Sigma Quick, and G-aenial) were fabricated using a Teflon mold. Radiopacity was assessed using dental radiography equipment in combination with a phosphor-plate digital system and a grey-scale aluminum step wedge with thickness varying from 1 mm to 10 mm in steps of 1 mm. The tested materials were radiographed, and ImageJ software was used on a computer screen to evaluate the degree of radiopacity of each individual material and compare it with the aluminum step wedge. Radiopacity was expressed in millimetres of equivalent aluminum. Analysis of variance (ANOVA) and the Least Significant Difference (LSD) test were used to investigate the significance of differences among the tested groups. Results: Statistical analysis showed a highly significant difference among the tested groups (p≤0.01). Among them, G-aenial composite was the most radiopaque, with a value above or equivalent to that of enamel, while Estelite Sigma Quick composite had the lowest radiopacity value, equivalent to that of dentin. Conclusion: In line with previous studies, and within the limitations of our study, considerable variations in radiopacity values were found among materials, depending on the radiopaque elements incorporated into the matrix. All composite materials tested complied with the ISO 4049 standard.
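The grey-value-to-aluminum-equivalence step described above can be sketched as a simple interpolation of a specimen's mean grey value against the step-wedge readings. The wedge grey values and specimen reading below are hypothetical illustration data, not measurements from the study:

```python
import numpy as np

# Hypothetical mean grey values (e.g., measured in ImageJ) for the
# 1-10 mm aluminum step wedge on the same radiograph as the specimens.
wedge_thickness_mm = np.arange(1, 11)   # 1..10 mm steps
wedge_grey = np.array([52, 74, 93, 110, 125, 138, 150, 160, 169, 177])

def aluminum_equivalent(grey_value):
    """Linearly interpolate a specimen's mean grey value to mm Al."""
    return float(np.interp(grey_value, wedge_grey, wedge_thickness_mm))

specimen_grey = 132                      # hypothetical composite reading
print(round(aluminum_equivalent(specimen_grey), 2))  # → 5.54
```

A specimen whose equivalent thickness exceeds that of an equal thickness of aluminum meets the ISO 4049 radiopacity requirement mentioned in the conclusion.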
Assimilation is defined by many phoneticians, such as Schane, Roach, and others, as a phonological process in which one sound changes into another because of neighboring sounds. This study investigates phoneme assimilation as a phonological process in English and Arabic, and it is concerned specifically with the differences and similarities between the two languages. The study also surveys the different terms used in Arabic to refer to this phenomenon, and in this way it shows whether the term 'assimilation' can have the same meaning as 'idgham' in Arabic or not. Besides, in Arabic, this phenomenon is discussed from
BN Rashid
Diabetes is one of the increasingly common chronic diseases, affecting millions of people around the world. Diabetes diagnosis, prediction, proper treatment, and management are essential. Machine-learning-based prediction techniques for diabetes data analysis can help in the early detection and prediction of the disease and its consequences, such as hypo/hyperglycemia. In this paper, we explored a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron, KNN, and Random Forest. We conducted two experiments: the first experiment used all 12 features of the dataset, where Random Forest outperformed the others with 98.8% accuracy. The second experiment used only five att
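A minimal sketch of the comparison workflow described above, using scikit-learn. The Iraqi patient dataset is not reproduced here, so a synthetic 1000-sample, 12-feature stand-in from `make_classification` is used; all accuracies it prints are illustrative, not the paper's results:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the 1000-patient, 12-feature dataset.
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "KNN": KNeighborsClassifier(),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: {acc:.3f}")
```

The same loop would apply unchanged to the second experiment by selecting the five-attribute subset of `X` before splitting.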
Shadow detection and removal is an important task when dealing with color outdoor images. Shadows are generated by a local and relative absence of light. Shadows are, first of all, a local decrease in the amount of light that reaches a surface. Secondly, they are a local change in the amount of light reflected by a surface toward the observer. Most shadow detection and segmentation methods are based on image analysis. However, some factors will affect the detection result due to the complexity of the circumstances. In this paper, a segmentation-test method is presented to detect shadows in an image, and a function concept is used to remove the shadow from the image.
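As a much simpler illustration than the paper's segmentation test, the "local decrease in received light" idea can be approximated by flagging pixels whose luminance falls well below the image mean. The threshold factor `k` and the toy image are assumptions for the sketch:

```python
import numpy as np

def shadow_mask(rgb, k=0.6):
    """Flag candidate shadow pixels: luminance well below the image mean.

    A crude stand-in for a real shadow-segmentation method; shadows are
    a local decrease in received light, so very dark pixels relative to
    the global mean are treated as shadow candidates.
    """
    lum = rgb @ np.array([0.299, 0.587, 0.114])   # perceptual luminance
    return lum < k * lum.mean()

# Toy 2x2 "image": three bright pixels and one dark one.
img = np.array([[[200, 200, 200], [210, 205, 200]],
                [[ 30,  30,  30], [198, 202, 199]]], dtype=float)
print(shadow_mask(img))   # only the dark pixel is flagged
```

Removal would then brighten the masked region toward the statistics of the unmasked pixels; real methods must also handle texture and soft shadow borders.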
The penalized least squares method is a popular method for dealing with high-dimensional data, where the number of explanatory variables is larger than the sample size. Penalized least squares offers high prediction accuracy and performs estimation and variable selection at once. It yields a sparse model, that is, a model with few variables, which can be interpreted easily. However, penalized least squares is not robust: it is very sensitive to the presence of outlying observations. To deal with this problem, a robust loss function can be used to obtain a robust penalized least squares method and a robust penalized estimator and
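The contrast described above can be sketched with scikit-learn: `Lasso` for the sparse penalized fit, and `HuberRegressor` as one way to pair a robust loss with a penalty. The data dimensions, penalty levels, and outlier pattern below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import Lasso, HuberRegressor

rng = np.random.default_rng(0)
n, p = 50, 100                       # more variables than observations
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [4.0, -3.0, 2.0]          # only 3 predictors truly matter
y = X @ beta + 0.1 * rng.standard_normal(n)
y[::10] += 15                        # a few outlying observations

lasso = Lasso(alpha=0.2).fit(X, y)   # L1 penalty -> sparse model
print("nonzero coefficients:", int(np.sum(lasso.coef_ != 0)))

# A robust (Huber) loss downweights the outliers; combined with a
# penalty term this gives one form of robust penalized estimator.
huber = HuberRegressor(alpha=1e-3, max_iter=500).fit(X, y)
```

The Lasso fit is sparse (far fewer than `p` nonzero coefficients), while the robust fit is less distorted by the shifted observations.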
Compressing speech reduces data storage requirements, which in turn reduces the time needed to transmit digitized speech over long-haul links such as the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation methods introduced here add desirable features to the transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (one- and two-dimensional) on speech compression. DWT and MCT performances in terms of comp
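The paper's GHM/MCT transforms are not reproduced here, but the general transform-coding idea they build on (transform, discard small detail coefficients, invert) can be sketched with a one-level Haar DWT, the simplest orthogonal wavelet, on a synthetic signal standing in for a speech frame:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients."""
    pairs = x.reshape(-1, 2)
    a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # low-pass half
    d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # high-pass half
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt (the Haar filter bank is orthogonal)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Toy "speech" frame; real use would process windowed PCM samples.
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 8 * t) + 0.05 * np.sin(2 * np.pi * 90 * t)

a, d = haar_dwt(signal)
d[np.abs(d) < 0.1] = 0.0             # discard small detail coefficients
kept = np.count_nonzero(a) + np.count_nonzero(d)
ratio = kept / len(signal)           # fraction of coefficients retained
recon = haar_idwt(a, d)
print("retained:", ratio, "max error:", float(np.abs(recon - signal).max()))
```

Each zeroed coefficient had magnitude below 0.1, so the per-sample reconstruction error stays below 0.1/√2; compression quality is then a trade-off between `ratio` and that error.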
As we live in the era of the fourth technological revolution, it has become necessary to use artificial intelligence to generate electric power from sustainable solar energy, especially in Iraq, given the crises it has gone through and the severe shortage of electric power it suffers because of those wars and calamities. The impact of that period is still evident in all aspects of the daily life of Iraqis because of the remnants of wars, siege, terrorism, the wrong policies of earlier and later ruling governments, and regional interventions and their consequences, such as the destruction of electric power stations, as well as the population increase, which must be matched by an increase in electric power stations,
Neural cryptography deals with the problem of "key exchange" between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits), and the key between the two communicating parties is eventually represented in the final learned weights, at which point the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process.
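The mutual-learning scheme described above is commonly realized with tree parity machines. A minimal sketch, assuming small hypothetical parameters (K hidden units, N inputs each, weight bound L) and a Hebbian update applied only when the exchanged output bits agree:

```python
import numpy as np

K, N, L = 3, 4, 3                    # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(1)

def tpm_output(w, x):
    """Hidden-unit signs and the exchanged output bit tau."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1           # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    # Only hidden units that agree with the output bit are updated,
    # and weights are clipped to the range [-L, L].
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + x[k] * tau, -L, L)

wA = rng.integers(-L, L + 1, (K, N))   # party A's secret weights
wB = rng.integers(-L, L + 1, (K, N))   # party B's secret weights

for step in range(20000):
    x = rng.choice([-1, 1], size=(K, N))   # shared public input
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                  # outputs exchanged; update on agreement
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)
    if np.array_equal(wA, wB):
        break

print("synchronized:", np.array_equal(wA, wB))
```

Once synchronized, the identical weight matrices serve as the shared key; an attacker eavesdropping on `x` and the output bits faces the harder task of synchronizing unilaterally, which is the risk the abstract points to.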
Eye detection is used in many applications, such as pattern recognition, biometrics, surveillance systems, and many other systems. In this paper, a new method is presented to detect and extract the overall shape of one eye from an image, based on two principles: Helmholtz and Gestalt. According to the Helmholtz principle of perception, an observed geometric shape is perceptually "meaningful" if the expected number of its occurrences in an image with random distribution is very small. Complementing this, the Gestalt principle states that humans see things either by grouping similar elements or by recognizing patterns. In general, according to the Gestalt principle, humans see things through genera
Shooting is one of the sports in which the level of achievement is measured in numbers; the scores recorded for the air pistol event have risen until, for some world champions, they have reached the maximum achievable score. Coaches have resorted to various training methods, since weak steadiness and concentration are what lead to a drop in the level of achievement in shooting. To achieve the concentration and steadiness required while firing, one must train the endurance of holding the weapon, which is considered among the most difficult