The sebkha is an evaporative geomorphological feature in whose formation climate plays an active role. It forms part of the surface features of the Mesopotamian plain of Iraq, one of the most fertile lands, where complementary natural and human factors have turned much of the arable land into sebkha territory. Raw satellite imagery was used: Landsat MSS (30 m) for 1976, Landsat 7 ETM, and Landsat 8 (LDCM, the Landsat Data Continuity Mission) for the summer of 2013. Geometric correction, enhancement, and image subsetting were performed, followed by visual analysis of the scenes based on the spectral signatures of land cover. The study shows that remote sensing techniques and geographic information systems (GIS) are efficient in discriminating sebkha and in determining the spatial distribution of the sebkha salt crust and the moisture content of arable soil, using visual interpretation and advanced digital (statistical) classification. Areas were then computed and a temporal change analysis and matching process was carried out between the study years in GIS software after applying the Normalized Difference Water Index (NDWI), a statistical formula used to isolate water-only pixels, in order to determine the change in water area during the study period and to demonstrate its impact on the spread of salt sebkha. Contributing factors include salinity, the scarcity and slow flow of surface water, and climatic conditions favourable to sebkha formation: the study area lies within a dry to semi-dry climate characterized by high temperatures and low rainfall, which increase evaporation from water bodies and low-lying lands. Human factors, namely agricultural, industrial, and urban activities, also play a role. In addition, an analysis of the positive and negative ions of the major and heavy elements in surface water and groundwater, as well as in the soil, was performed.
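As a minimal sketch of the water-isolation step, the following assumes the McFeeters formulation NDWI = (Green - NIR) / (Green + NIR) and hypothetical reflectance arrays for the green and near-infrared bands; the threshold value and band choice are illustrative, not taken from the study.

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); pixels above the
    threshold are treated as open water (threshold value is illustrative)."""
    green = green.astype("float64")
    nir = nir.astype("float64")
    denom = np.where((green + nir) == 0, 1.0, green + nir)  # avoid division by zero
    ndwi = (green - nir) / denom
    return ndwi, ndwi > threshold

# Hypothetical reflectance arrays, e.g. Landsat 8 band 3 (green) and band 5 (NIR).
green = np.random.rand(100, 100)
nir = np.random.rand(100, 100)
ndwi, water = ndwi_water_mask(green, nir)
print("Water pixels:", int(water.sum()))
```

Comparing the water masks of the two study dates pixel by pixel gives the change in water area referred to in the abstract.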
In this research, a 4×4 factorial experiment was studied, applied in a randomized complete block design with a specified number of observations. Designed experiments are used to study the effect of treatments on experimental units and thereby obtain data representing the experiment's observations. Applying these treatments under different environmental and experimental conditions introduces noise that affects the observation values and thus increases the experiment's mean square error. To reduce this noise, multilevel wavelet shrinkage was used as a filter for the observations, by proposing an improved threshold that takes the different transformation levels into account based on the logarithm of the b
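A minimal sketch of level-dependent wavelet shrinkage is given below, using PyWavelets; the level-dependent threshold shown is an illustrative stand-in and does not reproduce the improved threshold proposed in the abstract.

```python
import numpy as np
import pywt

def denoise_observations(y, wavelet="db4", levels=3):
    # Multilevel wavelet decomposition of the observation vector.
    coeffs = pywt.wavedec(y, wavelet, level=levels)
    # Noise scale estimated from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    out = [coeffs[0]]
    for j, d in enumerate(coeffs[1:], start=1):
        # Level-dependent soft threshold (illustrative choice only).
        thr = sigma * np.sqrt(2 * np.log(len(y))) / np.log2(levels - j + 2)
        out.append(pywt.threshold(d, thr, mode="soft"))
    return pywt.waverec(out, wavelet)[: len(y)]

y = np.sin(np.linspace(0, 4 * np.pi, 128)) + 0.3 * np.random.randn(128)
print(denoise_observations(y).shape)
```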
Two unsupervised classifiers for optimum multi-thresholding are presented: fast Otsu and k-means. These non-parametric methods provide an efficient procedure for separating regions (classes) by selecting optimum levels, either on the gray levels of the image histogram (the Otsu classifier) or on the gray levels of the image intensities (the k-means classifier); these levels represent the threshold values of the classes. To compare the experimental results of the two classifiers, the computation time is recorded along with the number of iterations the k-means classifier needs to converge to the optimum class centers. The variation in the recorded computation time of the k-means classifier is discussed.
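A minimal sketch of the two approaches, assuming off-the-shelf scikit-image multi-Otsu and scikit-learn k-means rather than the paper's own implementations, and a sample image in place of the experimental data:

```python
import numpy as np
from skimage import data, filters
from sklearn.cluster import KMeans

img = data.camera()  # sample grayscale image

# Fast multi-Otsu: optimum threshold levels computed on the gray-level histogram.
otsu_levels = filters.threshold_multiotsu(img, classes=4)
otsu_labels = np.digitize(img, bins=otsu_levels)

# k-means: cluster the pixel intensities themselves; cluster boundaries act as thresholds.
km = KMeans(n_clusters=4, n_init=10, random_state=0)
km_labels = km.fit_predict(img.reshape(-1, 1)).reshape(img.shape)

print("Otsu thresholds:", otsu_levels)
print("k-means iterations to converge:", km.n_iter_)
```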
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy, and a Support Vector Machine (SVM) classifies the reduced moments into two classes. The proposed method's performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method surpasses the accuracy of the other methods, with best accuracies of 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it
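A minimal sketch of the evaluation pipeline follows, assuming random arrays in place of the orthogonal-polynomial moments and using scikit-learn's SelectKBest as a stand-in for the sparse filter; the split ratio and 5-fold cross-validation mirror the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

# X: hypothetical OP moments per EEG segment, y: binary class labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 64)), rng.integers(0, 2, 200)

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=16),  # stand-in for the sparse filter
                    SVC(kernel="rbf"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
print("Held-out accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```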
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages: the first stage preprocesses the data, the second stage extracts features based on the Discrete Wavelet Transform (DWT), and the third stage performs classification using a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved high classification accuracy: 99.1% on the MADBase database and 99.9% on the MNIST database.
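A minimal sketch of the DWT feature-extraction stage, assuming a single-level Haar transform on a 28×28 digit image (the SNN classifier stage is not reproduced here):

```python
import numpy as np
import pywt

def dwt_features(image, wavelet="haar"):
    # Single-level 2-D DWT: approximation + horizontal/vertical/diagonal details.
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    # Use the low-frequency approximation sub-band as a compact feature vector.
    return cA.ravel()

digit = np.random.rand(28, 28)  # placeholder for a MADBase/MNIST digit image
print(dwt_features(digit).shape)  # (196,) = 14 * 14 coefficients
```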
The support vector machine (SVM) is a type of supervised learning model that can be used for classification or regression, depending on the dataset. SVM classifies data points by determining the best hyperplane between two or more groups. Working with enormous datasets, however, can give rise to a variety of issues, including poor accuracy and long computation times. In this research, SVM was extended by applying several kernel transformations: linear, polynomial, radial basis, and multi-layer kernels. The non-linear SVM classification model was illustrated and summarized in an algorithm using the kernel trick. The proposed method was examined using three simulation datasets with different sample
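A minimal sketch of comparing the four kernels on a simulated dataset, assuming scikit-learn's SVC, where the "sigmoid" kernel stands in for the multi-layer kernel named in the abstract:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Simulated dataset standing in for the paper's simulation data.
X, y = make_classification(n_samples=500, n_features=10, random_state=1)

# "sigmoid" plays the role of the multi-layer (MLP-style) kernel here.
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    acc = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(f"{kernel:8s} mean CV accuracy: {acc:.3f}")
```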
Deep learning convolutional neural networks have been widely used to recognize and classify voice. Various techniques have been used together with a convolutional neural network to prepare voice data before the training process when developing a classification model. However, not every model produces good classification accuracy, as there are many types of voice and speech. Classification of Arabic alphabet pronunciation is one such voice task, and accurate pronunciation is required when learning to read the Qur'an. Thus, processing the pronunciation and training on the processed data require a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed to
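A minimal sketch of the padding-plus-CNN idea, assuming hypothetical variable-length MFCC sequences, 28 alphabet classes, and a small Keras 1-D CNN; the actual preprocessing and architecture of the proposed method are not specified in the abstract.

```python
import numpy as np
import tensorflow as tf

# Hypothetical variable-length MFCC sequences, one per recorded letter.
sequences = [np.random.rand(np.random.randint(40, 80), 13) for _ in range(32)]
labels = np.random.randint(0, 28, 32)  # 28 Arabic alphabet classes

# Zero-pad every sequence to a common length before feeding the CNN.
maxlen = 80
padded = np.zeros((len(sequences), maxlen, 13), dtype="float32")
for i, s in enumerate(sequences):
    padded[i, : s.shape[0]] = s[:maxlen]

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu", input_shape=(maxlen, 13)),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(28, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(padded, labels, epochs=2, verbose=0)
```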
Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to the English language, only a few studies have addressed the categorization and classification of the Arabic language. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research over the last five years based on the dataset, year, algorithms, and the accuracy th
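The three phases can be illustrated with a minimal scikit-learn sketch, assuming toy Arabic snippets and a TF-IDF plus naive Bayes pipeline; this is one common baseline, not one of the surveyed systems.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy Arabic snippets and category labels (illustrative only).
docs = ["مباراة كرة القدم اليوم", "أسعار النفط ترتفع",
        "فوز الفريق في البطولة", "البورصة تسجل ارتفاعا"]
labels = ["sports", "economy", "sports", "economy"]

# Preprocessing/feature extraction (TF-IDF) and classification in one pipeline.
pipeline = make_pipeline(TfidfVectorizer(), MultinomialNB())
pipeline.fit(docs, labels)
print(pipeline.predict(["نتائج مباريات اليوم"]))
```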
The research deals with the formal characteristics of folklore in contemporary Iraqi ceramics through a comparative study of the two potters Siham Al-Saudi and Sajida Al-Mashayikhi.
The research consists of four chapters. The first chapter addresses the research problem, the importance, the objectives, and the limits of the research, and the aim of the research is to answer the following question: what are the methods and mechanisms of employing traditional elements and vocabulary in the products of the potters "Seham Al-Saudi and Sajida Al-Mashaikhi"?
The second chapter presents the theoretical framework and includes two topics. The first topic: ; the second topic: the movement in shaping contemporary Iraqi ceramics, whi