Crime is unlawful activity of any kind and is punishable by law. Crimes affect a society's quality of life and economic development. With the large rise in crime globally, there is a need to analyze crime data to bring the crime rate down. Such analysis encourages the police and the public to take the required measures and restrict crime more effectively. The purpose of this research is to develop predictive models that can aid crime pattern analysis and thus support the Boston Police Department's crime prevention efforts. Geographical location has been adopted as a factor in our model because it is influential in many situations, whether traveling to a specific area or living in it, and it helps people distinguish between secure and insecure environments. Geo-location, combined with new approaches and techniques, can be extremely useful in crime investigation. The aim is a comparative study of three supervised learning algorithms, each trained and tested on the same dataset. Decision Tree, Naïve Bayes, and Logistic Regression classifiers were applied to the Boston city crime dataset to predict the type of crime that occurs in an area. The outputs of these methods were compared to find the model that best fits this type of data with the best performance. From the results obtained, the Decision Tree achieved the best results compared to Naïve Bayes and Logistic Regression.
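A minimal scikit-learn sketch of this three-classifier comparison, assuming a CSV export of the Boston crime dataset; the file name and the feature/label columns (Lat, Long, HOUR, MONTH, OFFENSE_CODE_GROUP) are assumptions, not the paper's exact setup.

```python
# Sketch: compare Decision Tree, Naive Bayes, and Logistic Regression on
# geo-location + time features; file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("boston_crime.csv")                  # hypothetical export
X = df[["Lat", "Long", "HOUR", "MONTH"]].dropna()     # assumed feature columns
y = LabelEncoder().fit_transform(df.loc[X.index, "OFFENSE_CODE_GROUP"])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "Decision Tree": DecisionTreeClassifier(max_depth=20),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```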
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) then classifies the reduced moments into two classes. The proposed method's performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that the method outperforms the others in accuracy; its best accuracies on the two datasets are 95.6% and 99.5%, respectively. Finally, from the results, it …
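A minimal sketch of the pipeline shape (orthogonal-polynomial moments, feature reduction, then an SVM with an 80/20 split and 5-fold cross-validation); the Legendre basis and the ANOVA-based feature selection are stand-ins, not the paper's exact OP transform or Sparse Filter.

```python
# Sketch: OP moments -> reduced features -> SVM; synthetic placeholder signals.
import numpy as np
from numpy.polynomial import legendre
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
signals = rng.standard_normal((200, 256))   # placeholder EEG epochs
labels = rng.integers(0, 2, 200)            # two classes

# Project each epoch onto an orthogonal (Legendre) polynomial basis -> moments.
t = np.linspace(-1, 1, signals.shape[1])
basis = legendre.legvander(t, deg=31)       # 32 basis polynomials
moments = signals @ basis                   # (n_epochs, 32) moment features

X_train, X_test, y_train, y_test = train_test_split(
    moments, labels, test_size=0.2, random_state=0)   # 80/20 split as in the paper

selector = SelectKBest(f_classif, k=16).fit(X_train, y_train)  # reduce moments
clf = SVC(kernel="rbf").fit(selector.transform(X_train), y_train)
print("5-fold CV:", cross_val_score(clf, selector.transform(X_train), y_train, cv=5).mean())
print("test acc:", clf.score(selector.transform(X_test), y_test))
```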
One of the most important features of the Amazon Web Services (AWS) cloud is that a program can be run and accessed from any location. The program's results can be accessed and monitored from anywhere, many images can be stored, and computation is faster. This work proposes a face detection classification model based on the AWS cloud that classifies faces into two classes, a non-permission class and a permission class, by training on a real dataset collected from our cameras. The proposed Convolutional Neural Network (CNN) cloud-based system shares computational resources for Artificial Neural Networks (ANN) to reduce redundant computation. The test system uses Internet of Things (IoT) services through our cameras …
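A minimal Keras sketch of a two-class (permission / non-permission) face classifier of the kind described; the directory layout, image size, and layer sizes are assumptions, not the paper's architecture.

```python
# Sketch: small binary CNN over face images; "faces/train" is a hypothetical
# folder with one subdirectory per class (permission / non_permission).
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "faces/train", image_size=(96, 96), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # permission vs. non-permission
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```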
The research utilizes data produced by the Local Urban Management Directorate in Najaf and imagery from the Landsat 9 satellite, processed with GIS tools. The research follows a descriptive and analytical approach; we integrated Markov chain analysis with a cellular automata approach to predict transformations in the city's structure resulting from changes in land use. The research also aims to identify post-classification change-detection approaches in order to determine changes in land use. To predict future land use in the city of Kufa and to evaluate data accuracy, we used the Kappa indicator to determine the potential applicability of the probability matrix that resulted from …
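A minimal sketch of the Markov-chain step of such an analysis: project land-use shares forward with a transition probability matrix and score map agreement with Cohen's kappa. The class names, matrix values, and maps below are illustrative, not the study's data.

```python
# Sketch: Markov projection of land-use shares + kappa accuracy assessment.
import numpy as np
from sklearn.metrics import cohen_kappa_score

classes = ["urban", "agricultural", "barren"]   # hypothetical classes
P = np.array([[0.90, 0.05, 0.05],               # row-stochastic transition
              [0.15, 0.80, 0.05],               # probabilities between dates
              [0.20, 0.10, 0.70]])
state = np.array([0.35, 0.45, 0.20])            # current land-use shares

for step in range(3):                           # project three intervals ahead
    state = state @ P
print(dict(zip(classes, state.round(3))))

# Accuracy assessment: kappa between a classified map and reference samples.
reference = np.random.default_rng(1).integers(0, 3, 100)
classified = reference.copy()
classified[:10] = (classified[:10] + 1) % 3     # inject some disagreement
print("kappa:", cohen_kappa_score(reference, classified))
```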
The research aims to identify the tax policy strategy adopted in Iraq after the change of the tax system in 2003 and beyond, and then to compare the two strategies using corporate data, first when companies were charged progressive tax rates and then after the system changed to fixed rates. It then indicates the changes in tax revenue and determines the dimensions of the adopted tax policy: is it a tax reform strategy or a strategy to attract investment?
The research started from the problem of the Iraqi tax system's exposure to several changes in tax strategy from 2003 until now, which was reflected in the technical organization of taxes, particularly in tax exemptions. These many amendments …
For many years controlled shot peening was considered a surface treatment. It is now clear that the fatigue performance of controlled shot peening depends on the balance between its beneficial effects (compressive residual stress and work hardening) and its detrimental effects (surface roughening).
The overall aim of this paper is to study the effects of aggressive shot peening on the fatigue life of 7075-T6 aluminum alloy. The fatigue life reduction factor (LRF) due to aggressive shot peening was established, and empirical relations were proposed to describe the behavior of LRF, roughness, and fatigue life. The benefits of shot peening in terms of fatigue life depend on the shot peening time (SPT).
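As a rough guide to what such an empirical relation can look like, one plausible form (an assumption for illustration; the abstract does not reproduce the paper's actual equations) defines the LRF as the ratio of unpeened to aggressively peened fatigue life and fits it as a power law in peening time:

```latex
% Illustrative only: a plausible LRF definition and power-law fit in SPT;
% the constants a and b would be determined from the fatigue test data.
\[
\mathrm{LRF} \;=\; \frac{N_{f}^{\,\text{unpeened}}}{N_{f}^{\,\text{peened}}},
\qquad
\mathrm{LRF} \;=\; a\,(\mathrm{SPT})^{b}
\]
```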
The current research aims to analyze the questions of the literary criticism textbook for the preparatory stage according to Bloom's taxonomy. The research population consists of (34) exercises and (45) questions. The researcher used the question-analysis method and prepared a preliminary list of criteria intended to measure the exercises, selected on the basis of Bloom's taxonomy and the extant literature on the topic. The scales were presented to a jury of experts and specialists in curricula and methods of teaching the Arabic language and obtained complete agreement; they were thus adapted into a reliable instrument in this …
This paper presents a method to classify colored textural images of skin tissues. Since medical images are highly heterogeneous, developing a reliable skin-cancer detection process is difficult, and a mono-fractal dimension is not sufficient to classify images of this nature. Multifractal-based feature vectors are suggested here as an alternative and more effective tool, and multiple color channels are used to obtain more descriptive features. Two multifractal-based sets of features are suggested: the first set measures the local roughness property, while the second set measures the local contrast property. A combination of all the features extracted from the three color models gives the highest classification accuracy of 99.4%.
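A minimal sketch of extracting fractal-dimension features per color channel via box counting, as a stand-in illustration of the idea; it is not the paper's exact multifractal roughness or contrast measures, and the thresholds and image are placeholders.

```python
# Sketch: box-counting fractal dimension per color channel and threshold.
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # boxes containing texture
    # Slope of log(count) vs. log(1/size) approximates the dimension.
    return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))                 # placeholder RGB skin patch
features = [box_count_dimension(image[:, :, c] > t)
            for c in range(3)                     # one feature per color channel
            for t in (0.3, 0.5, 0.7)]             # several thresholds -> richer descriptor
print(np.round(features, 3))
```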
The researchers of the present study have conducted a genre analysis of two political debates between American presidential nominees in the 2016 and 2020 elections. The study analyzes the cognitive construction of political debates to evaluate the typical moves and strategies politicians use to express their communicative intentions and to reveal the language manifestations of those moves and strategies. To achieve the study's aims, the researchers adopt Bhatia's (1993) framework of cognitive construction supported by van Eemeren's (2010) pragma-dialectic framework. The study demonstrates that both presidents adhere to this genre structuring to further their political agendas. For a positive and promising image …
Early detection of brain tumors is critical for enhancing treatment options and extending patient survival. Magnetic resonance imaging (MRI) provides more detailed information, with greater contrast and clarity, than any other scanning method. Manually segmenting brain tumors from the many MRI images collected in clinical practice for cancer diagnosis is a difficult and time-consuming task. Brain tumors can be detected in MRI scans using algorithms and machine learning technologies, making the process easier for doctors, since an MRI image can appear healthy even when the patient has a tumor or malignancy. Recently, deep learning techniques based on deep convolutional neural networks have been used to analyze medical images …
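A minimal transfer-learning sketch of the kind of deep CNN classifier the abstract describes; the dataset path, image size, and MobileNetV2 backbone are assumptions, not the paper's architecture.

```python
# Sketch: pretrained-backbone binary MRI classifier; "brain_mri" is a
# hypothetical folder with tumor / no_tumor subdirectories.
import tensorflow as tf

ds = tf.keras.utils.image_dataset_from_directory(
    "brain_mri", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                                 # freeze pretrained features

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1), # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # tumor vs. no tumor
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(ds, epochs=5)
```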
Survival analysis is a type of data analysis that describes the time until an event of interest occurs, such as death or another event important for determining what will happen to the phenomenon under study. There may be more than one endpoint for the event, in which case the setting is called competing risks. The purpose of this research is to apply a dynamic approach to the analysis of discrete survival time in order to estimate the effect of covariates over time, as well as to model the nonlinear relationship between the covariates and the discrete hazard function through the use of the multinomial logistic model and the multivariate Cox model. For the purpose of conducting the estimation process for both the discrete …
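A minimal sketch of discrete-time competing-risks estimation via multinomial logistic regression: each subject contributes one row per period at risk, with outcome 0 for surviving the period and 1 or 2 for failing from cause 1 or cause 2. The synthetic data and hazard shapes are stand-ins for the paper's dynamic multinomial logit and Cox comparison.

```python
# Sketch: person-period data -> multinomial logit over {censor, cause 1, cause 2}.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
rows, outcomes = [], []
for _ in range(500):
    x = rng.normal()                                # a single covariate
    for t in range(1, 11):                          # discrete periods 1..10
        p1 = 0.03 * np.exp(0.5 * x)                 # cause-specific hazards
        p2 = 0.02 * np.exp(-0.3 * x)
        event = rng.choice([0, 1, 2], p=[1 - p1 - p2, p1, p2])
        rows.append([t, x])
        outcomes.append(event)
        if event:                                   # leave the risk set after failure
            break

X, y = np.array(rows), np.array(outcomes)
model = LogisticRegression(max_iter=1000).fit(X, y)  # multinomial over 3 outcomes
print(model.predict_proba([[5, 0.0]]))               # discrete hazards at t=5, x=0
```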