The consumer price index (CPI) is one of the most important price indices because of its direct effect on individual welfare and living standards.
We address the problem of strongly seasonal commodities in calculating the CPI and identify some of the possible solutions.
We used actual data for a set of commodities (including strongly seasonal commodities) to calculate the price index using the annual basket with carry-forward prices method. Although this method can be successfully used in the context of seasonal
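As a rough illustration of the carry-forward idea mentioned above, the sketch below computes a fixed annual-basket (Laspeyres-type) index in which an out-of-season commodity keeps its most recently observed price. This is a minimal sketch only: the commodity names, quantities, and prices are hypothetical and are not the study's data or code.

```python
def carry_forward(prices_by_month):
    """Replace missing (out-of-season) prices with the last observed price."""
    filled, last = [], None
    for p in prices_by_month:
        if p is not None:
            last = p
        filled.append(last)  # note: stays None until a first price is observed
    return filled

def annual_basket_index(base_prices, current_prices, base_quantities):
    """Laspeyres-type index: cost of the fixed annual basket at current vs. base prices."""
    current_cost = sum(q * p for q, p in zip(base_quantities, current_prices))
    base_cost = sum(q * p for q, p in zip(base_quantities, base_prices))
    return 100.0 * current_cost / base_cost

# Hypothetical example: a strongly seasonal fruit is unavailable in some months.
fruit_raw = [2.0, 2.1, None, None, None, 2.6]   # None = out of season
fruit_prices = carry_forward(fruit_raw)          # -> [2.0, 2.1, 2.1, 2.1, 2.1, 2.6]

base_prices = [2.0, 5.0]        # fruit, bread (base period)
base_quantities = [10.0, 30.0]  # fixed annual basket quantities
for m in range(6):
    current_prices = [fruit_prices[m], 5.0 + 0.05 * m]  # bread price drifts upward
    idx = annual_basket_index(base_prices, current_prices, base_quantities)
    print(f"month {m + 1}: index = {idx:.1f}")
```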
ABSTRACT: In primary teeth, root canal treatment is a time-consuming and challenging procedure, particularly during the most important step in endodontic treatment, which is the preparation of the canal. Pulpectomy is the treatment of choice for all necrotic primary teeth. To improve the treatment protocol, advancing technology has introduced rotary systems that reduce the manual dexterity required and improve the quality of pulpectomy treatment. This study aimed to compare and assess the cleaning efficacy and the instrumentation time during preparation of the root canals of primary molars using the rotary and the manual (conventional) systems. Thirty root canals of primary teeth were selected. These teeth were submitted to a
... Show MoreThis research basically gives an introduction about the multiple intelligence
theory and its implication into the classroom. It presents a unit plan based upon the
MI theory followed by a report which explains the application of the plan by the
researcher on the first class student of computer department in college of sciences/
University of Al-Mustansiryia and the teacher's and the students' reaction to it.
The research starts with a short introduction about the MI theory is a great
theory that could help students to learn better in a relaxed learning situation. It is
presented by Howard Gardener first when he published his book "Frames of
Minds" in 1983 in which he describes how the brain has multiple intelligen
The key objective of the study is to understand the best processes currently used in managing talent in Australian higher education (AHE) and to design a quantitative measurement of talent management processes (TMPs) for the higher education (HE) sector.
Three qualitative multi-method approaches that are commonly used in empirical studies, namely brainstorming, focus group discussions, and semi-structured individual interviews, were considered. Twenty
The vast advantages of the 3D modelling industry have urged competitors to improve capturing techniques and processing pipelines to minimize labour requirements, save time, and reduce project risk. When it comes to digital 3D documentation and conservation projects, laser scanning and photogrammetry are compared to choose between the two. Since both techniques have pros and cons, this paper approaches the potential issues of the individual techniques in terms of time, budget, accuracy, density, methodology, and ease of use. Terrestrial laser scanning and close-range photogrammetry are tested to document a unique, invaluable artefact (the Lady of Hatra) located in Iraq for future data fusion
Today, there are large amounts of geospatial data available on the web, such as Google Map (GM), OpenStreetMap (OSM), the Flickr service, Wikimapia, and others. All of these services are called open-source geospatial data. Geospatial data from different sources often has variable accuracy due to different data collection methods; therefore, data accuracy may not meet user requirements in different organizations. This paper aims to develop a tool to assess the quality of GM data by comparing it with formal data, such as spatial data from the Mayoralty of Baghdad (MB). The tool was developed in the Visual Basic language and validated on two different study areas in Baghdad, Iraq (Al-Karada and Al-Kadhumiyah). The positional accuracy was assessed
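The abstract does not spell out how the positional accuracy was computed, but a minimal sketch of one common measure, the root-mean-square error (RMSE) of planar offsets between matched point pairs, is shown below. The point coordinates and the positional_rmse helper are illustrative assumptions; this is not the authors' Visual Basic tool.

```python
import math

def positional_rmse(matched_pairs):
    """RMSE of planar offsets between matched point pairs.

    matched_pairs: iterable of ((x_gm, y_gm), (x_ref, y_ref)) tuples, where the
    first point comes from the GM data and the second from the reference dataset
    (e.g. Mayoralty of Baghdad), both in the same projected CRS (metres).
    """
    sq_errors = [
        (xg - xr) ** 2 + (yg - yr) ** 2
        for (xg, yg), (xr, yr) in matched_pairs
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Hypothetical matched points (metres in a projected CRS), for illustration only.
pairs = [
    ((445120.3, 3685240.7), (445118.9, 3685242.1)),
    ((445390.0, 3685510.5), (445392.4, 3685508.8)),
    ((445655.8, 3685775.2), (445654.1, 3685777.0)),
]
print(f"Positional RMSE: {positional_rmse(pairs):.2f} m")
```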
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have small or inadequate datasets for training DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Usually, every DL framework is fed a significant amount of labeled data so that it can automatically learn representations. Ultimately, a larger amount of data generally yields a better DL model, although performance is also application-dependent. This issue is the main barrier for