Crime is unlawful activity of any kind and is punished by law. Crime affects a society's quality of life and economic development. With the large rise in crime globally, there is a need to analyze crime data to bring down the crime rate; such analysis helps the police and the public take the required measures and restrict crime more effectively. The purpose of this research is to develop predictive models that can aid crime pattern analysis and thus support the Boston Police Department's crime prevention efforts. Geographic location is adopted as a factor in our model because it is influential in many situations, whether travelling to a specific area or living in it, and it helps people distinguish between safe and unsafe environments. Geo-location, combined with new approaches and techniques, can be extremely useful in crime investigation. The aim is a comparative study of three supervised learning algorithms, in which the dataset is split for training and testing to obtain the desired results. Three machine learning classifiers, Decision Tree, Naïve Bayes and Logistic Regression, are applied to the Boston city crime dataset to predict the type of crime that occurs in an area. The outputs of these methods are compared with each other to find the model that best fits this type of data with the best performance. From the results obtained, the Decision Tree demonstrated the highest performance compared to Naïve Bayes and Logistic Regression.
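The comparison described above can be sketched with scikit-learn. The paper's actual Boston crime dataset, features, and preprocessing are not shown here, so a synthetic dataset stands in for crime records; only the three classifier choices come from the abstract.

```python
# Hedged sketch: synthetic stand-in for crime records (rows = incidents,
# columns = features such as latitude, longitude, hour; labels = crime type).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           n_classes=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# The three classifiers named in the abstract, with default settings
# (the paper's actual hyperparameters are not specified here).
models = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
scores = {name: accuracy_score(y_test, m.fit(X_train, y_train).predict(X_test))
          for name, m in models.items()}
best = max(scores, key=scores.get)
print(scores, best)
```

On the real dataset, the same accuracy comparison (optionally extended with precision, recall, and F1) would identify the best-fitting model.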
A three-stage learning algorithm for a deep multilayer perceptron (DMLP) with effective weight initialisation based on a sparse auto-encoder is proposed in this paper, aiming to overcome the difficulty of training deep neural networks with limited training data in a high-dimensional feature space. At the first stage, unsupervised learning with a sparse auto-encoder is used to obtain the initial weights of the DMLP's feature extraction layers. At the second stage, error back-propagation trains the DMLP while the weights obtained at the first stage are kept fixed in the feature extraction layers. At the third stage, all the weights of the DMLP obtained at the second stage are refined by error back-propagation. Network structures an
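The three stages can be illustrated with a minimal NumPy sketch on toy data. The network size, learning rate, and the crude L1 sparsity penalty below are assumptions for illustration, not the paper's settings (a KL-divergence sparsity penalty is also common in the literature).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy regime the paper targets: few samples, higher-dimensional features.
n, d, h = 40, 20, 8
X = rng.standard_normal((n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# ---- Stage 1: sparse auto-encoder pre-training of the hidden layer ----
W1 = 0.1 * rng.standard_normal((d, h)); b1 = np.zeros(h)
Wd = 0.1 * rng.standard_normal((h, d)); bd = np.zeros(d)
lr, lam = 0.1, 1e-3  # lam: crude L1 sparsity penalty on activations (assumed)
for _ in range(200):
    H = sigmoid(X @ W1 + b1)
    err = (H @ Wd + bd) - X                      # reconstruction error
    dH = (err @ Wd.T + lam * np.sign(H)) * H * (1 - H)
    Wd -= lr * H.T @ err / n; bd -= lr * err.mean(0)
    W1 -= lr * X.T @ dH / n;  b1 -= lr * dH.mean(0)

# ---- Stage 2: back-propagation with pre-trained feature weights frozen ----
W2 = 0.1 * rng.standard_normal((h, 1)); b2 = np.zeros(1)
for _ in range(300):
    H = sigmoid(X @ W1 + b1)                     # W1, b1 fixed at this stage
    g = sigmoid(H @ W2 + b2) - y                 # cross-entropy gradient
    W2 -= lr * H.T @ g / n; b2 -= lr * g.mean(0)

# ---- Stage 3: refine all weights jointly by back-propagation ----
for _ in range(300):
    H = sigmoid(X @ W1 + b1)
    g = sigmoid(H @ W2 + b2) - y
    dH = (g @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ g / n; b2 -= lr * g.mean(0)
    W1 -= lr * X.T @ dH / n; b1 -= lr * dH.mean(0)

acc = ((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5) == y).mean()
print(acc)
```

The key idea is that stages 1 and 2 give the deep network a well-conditioned starting point, so the joint refinement in stage 3 has less risk of overfitting the small training set.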
The research aims to demonstrate the impact of TDABC (as a strategic technique compatible with the rapid developments and changes in the contemporary business environment) on pricing decisions. TDABC provides a new philosophy for allocating indirect costs by driving the time of resources and activities to the cost object and identifying unused capacity and its associated costs, which provides the management of economic units with financial and non-financial information that helps them in the complex and risky process of making pricing decisions. To achieve better pricing decisions while seeking to retain customers in a highly competitive environment with a variety of alternatives, the resear
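The TDABC mechanics mentioned above (a capacity cost rate, time-driven allocation, and unused-capacity costing) can be shown with invented figures; none of the numbers below come from the study.

```python
# Hedged TDABC illustration: the department cost, practical capacity in
# minutes, unit times, and volumes are all invented for this sketch.
total_cost = 560_000.0            # quarterly cost of resources supplied
practical_capacity_min = 700_000  # practical capacity, in minutes
rate = total_cost / practical_capacity_min  # cost per minute of capacity

# Time equations: minutes consumed per unit of each activity.
unit_minutes = {"process order": 8, "handle inquiry": 44, "credit check": 50}
volumes      = {"process order": 49_000, "handle inquiry": 1_400, "credit check": 2_500}

used_min = sum(unit_minutes[a] * volumes[a] for a in unit_minutes)
allocated = {a: unit_minutes[a] * volumes[a] * rate for a in unit_minutes}
unused_capacity_cost = (practical_capacity_min - used_min) * rate

print(rate, allocated, unused_capacity_cost)
```

The unused-capacity cost is exactly the figure the abstract says management needs: it is visible under TDABC but hidden under traditional allocation, which spreads the full cost over actual output.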
The current study aims to compare estimates of the Rasch model's parameters between missing and completed data under various ways of processing the missing data. To achieve this aim, the researcher followed these steps: preparing the Philip Carter test of spatial ability, which consists of (20) items, for a group of (250) sixth scientific stage students in the Baghdad Education directorates of Al-Rusafa (1st, 2nd and 3rd) for the academic year (2018-2019). The researcher then relied on the one-parameter model to analyze the data and used the Bilog-MG3 program to check the hypotheses and the fit of the data to the model. In addition
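For context, the one-parameter (Rasch) model referred to above gives the probability of a correct response from a single person parameter and a single item parameter; the ability and difficulty values below are illustrative, not the study's estimates.

```python
import math

# Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)),
# where theta is person ability and b is item difficulty.
def rasch_p(theta, b):
    return math.exp(theta - b) / (1.0 + math.exp(theta - b))

print(rasch_p(0.0, 0.0))   # ability equal to difficulty gives P = 0.5
```

Missing-data treatments matter precisely because they change the response matrix from which theta and b are estimated, which is what the study compares.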
Sustainable competitive advantage is one of the requirements for value creation in organizations; it centers on the possession of scarce resources that achieve maximum returns when invested in intellectual capital, provided that such capital is attended to, measured and employed properly. Hence the need arose for new techniques that enable organizations to measure their intellectual and physical assets and assess their performance accordingly. This research therefore sheds light on measuring the value added by existing knowledge using the value added intellectual coefficient (VAIC) and on developing a set of hypotheses about the extent of the difference between the sample
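The VAIC coefficient mentioned above (Pulic's formulation) sums three efficiency ratios; the monetary figures below are invented for illustration only.

```python
# Hedged sketch of the VAIC computation: VA = output - input (value added),
# HC = human capital (salary costs), CE = capital employed (book value).
# HCE = VA/HC, SCE = (VA - HC)/VA, CEE = VA/CE, and VAIC = HCE + SCE + CEE.
def vaic(value_added, human_capital, capital_employed):
    hce = value_added / human_capital                  # human capital efficiency
    sce = (value_added - human_capital) / value_added  # structural capital efficiency
    cee = value_added / capital_employed               # capital employed efficiency
    return hce + sce + cee

print(vaic(500.0, 200.0, 1000.0))  # 2.5 + 0.6 + 0.5 = 3.6
```

A higher VAIC indicates that each monetary unit tied up in human and physical capital generates more value added, which is how the metric supports performance comparisons across firms.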
The study seeks to identify e-governance requirements in human resources management, namely administrative, technical, human, financial, and security requirements. E-governance is a new style that differs from the traditional management pattern: work depends on multiple forms and methods of tools (e.g. computers and various electronic software) and relies on powerful information systems that help management make decisions more quickly, with less effort and at lower cost, keeping up with the tremendous advancement driven by the information and communication technology revolution.
The sample included 132 persons who occupy positions (Director General, Assistant Director General, section managers and d
The constructivist learning model is one of the models of constructivist theory in learning; it generally emphasizes the active role of the learner during learning, in addition to intellectual and practical participation in various activities that help students acquire the skills of analyzing artistic works. The current research aims to determine the effectiveness of the constructivist learning model in the acquisition by Institute of Fine Arts students of the skills of (artwork analysis). To achieve this goal, the researcher formulated the following hypothesis: there are no statistically significant differences between the average scores of the experimental group students in the skill test for analyzing artworks befor
This research aims to test the long- and short-run causal relationship between the global price of gold, the global crude oil price and the exchange rate of the dollar, and how the nature of this relationship can be exploited, particularly in the Arab oil states that achieve huge surpluses, including Iraq, so as to preserve the purchasing power of these surpluses or reduce their levels of risk.
The problem is that the Arab oil countries are adversely affected as a result of that relationship, because their role is confined to the sale of crude oil only. Having no control over the dollar, they are unable to take advantage of its impact on the price of gold, given that gold is an effective pr
Remote sensing data are increasingly being used in digital archaeology for the potential non-invasive detection of archaeological remains. The purpose of this research is to evaluate the capability of standalone (LiDAR and aerial photogrammetry) and integrated/fused remote sensing approaches in improving the prospecting and interpretation of archaeological remains in Cahokia’s Grand Plaza. Cahokia Mounds is an ancient site; it was the largest settlement of the Mississippian culture, located in southwestern Illinois, USA. There are a limited number of studies combining LiDAR and aerial photogrammetry to extract archaeological features. This article, therefore, combines LiDAR with photogrammetric data to create new datasets and inv
The current research aims to study the extent to which the Independent High Electoral Commission applies information security risk management in accordance with the international standard (ISO/IEC 27005), in terms of the policies, administrative and technical procedures, and techniques used in managing information security risks, based on the opinions of experts in the sector who occupy positions (the directorate's General Manager, department heads and their deputies, project managers, heads of divisions, and those authorized to access systems and software). The importance of the research lies in giving a clear picture of information security risk management in the organization in question, because of its significant role in identifying risks and s
DeepFake is a concern for celebrities and everyone else because it is simple to create. DeepFake images, especially high-quality ones, are difficult to detect by humans, by local descriptors, and by current approaches. On the other hand, detecting manipulation in video is more tractable than in a single image, and many state-of-the-art systems address it; moreover, video manipulation detection depends entirely on detection in the constituent frames. Many have worked on DeepFake detection in images, but their methods involved complex mathematical calculations in the preprocessing steps and many limitations, including that the face must be frontal, the eyes must be open, and the mouth should be open with the teeth visible, etc. Also, the accuracy of their counterfeit detectio