Earth’s climate is changing rapidly due to increasing human demands and rapid economic growth. These changes will affect the entire biosphere, mostly in negative ways. Predicting future changes puts us in a better position to minimize their catastrophic effects and to understand beforehand how humans can cope with them. In this research, global climate observations from 1961-1990 were used to predict the climate change scenario for 2010-2039. The data were processed with Idrisi Andes software, and the final Köppen-Geiger map was created with ArcGIS software. Based on the Köppen climate classification, it was found that the areas of the Equatorial, Arid Steppe, and Snow classes will decrease by 3.9%, 2.96%, and 0.09%, respectively, while the areas of the Warm Temperate and Desert classes will increase by 4.5% and 0.75%, respectively. The results of this study provide useful information on future Köppen-Geiger climate maps and on the areas most likely to be affected by climate change in the coming decades.
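A minimal sketch of the kind of per-class area-change calculation reported above, assuming two hypothetical gridded Köppen class maps (baseline and projected) stored as integer label arrays on an equal-area grid; the class codes, grid size, and random data are illustrative assumptions, not the study's inputs.

```python
import numpy as np

# Hypothetical Köppen main classes encoded as integers on an equal-area grid.
CLASSES = {0: "Equatorial", 1: "Arid Steppe", 2: "Desert",
           3: "Warm Temperate", 4: "Snow"}

def class_shares(koppen_map: np.ndarray) -> dict:
    """Return the percentage of grid cells occupied by each class."""
    total = koppen_map.size
    return {name: 100.0 * np.count_nonzero(koppen_map == code) / total
            for code, name in CLASSES.items()}

# Toy baseline (1961-1990) and projected (2010-2039) label maps.
rng = np.random.default_rng(0)
baseline = rng.integers(0, 5, size=(180, 360))
projected = rng.integers(0, 5, size=(180, 360))

base_pct, proj_pct = class_shares(baseline), class_shares(projected)
for name in base_pct:
    change = proj_pct[name] - base_pct[name]
    print(f"{name:15s} {base_pct[name]:6.2f}% -> {proj_pct[name]:6.2f}% "
          f"(change {change:+.2f} pp)")
```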
Deep learning convolutional neural networks (CNNs) have been widely used to recognize and classify voice. Various techniques have been used together with a CNN to prepare voice data before training the classification model. However, not all models produce good classification accuracy, as there are many types of voice and speech. Classification of Arabic alphabet pronunciation is one such task, and accurate pronunciation is required in learning Qur’an recitation. Thus, processing the pronunciation recordings and training on the processed data require a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed to …
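A minimal sketch of the padding idea mentioned above, assuming variable-length MFCC feature matrices are zero-padded to a fixed number of frames before being fed to a small CNN; the feature dimensions, layer sizes, and class count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

N_MFCC, MAX_FRAMES, N_CLASSES = 13, 200, 28  # 28 Arabic letters (assumed)

def pad_features(mfcc: np.ndarray, max_frames: int = MAX_FRAMES) -> np.ndarray:
    """Zero-pad (or truncate) an (n_frames, n_mfcc) matrix to a fixed length."""
    out = np.zeros((max_frames, mfcc.shape[1]), dtype=np.float32)
    n = min(max_frames, mfcc.shape[0])
    out[:n] = mfcc[:n]
    return out

# Small 2D CNN over the padded time-frequency representation.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_FRAMES, N_MFCC, 1)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Example: one variable-length utterance padded and shaped for the network.
utterance = np.random.rand(137, N_MFCC).astype(np.float32)
x = pad_features(utterance)[np.newaxis, ..., np.newaxis]  # (1, 200, 13, 1)
print(model.predict(x).shape)  # (1, 28)
```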
This work implements an electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples into moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is then used to classify the reduced moments into two classes. The proposed method’s performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method outperforms the other methods in accuracy. The proposed method’s best accuracy on the two datasets is 95.6% and 99.5%, respectively. Finally, from the results, it …
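A minimal sketch of the pipeline described above, assuming Legendre polynomials as the orthogonal basis and scikit-learn's SelectKBest as a simple stand-in for the paper's sparse filter; the data are synthetic, and only the 80/20 split and 5-fold settings mirror the abstract.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def op_moments(signal: np.ndarray, degree: int = 20) -> np.ndarray:
    """Project one EEG epoch onto Legendre polynomials and return its moments."""
    t = np.linspace(-1.0, 1.0, signal.size)
    basis = legendre.legvander(t, degree)   # (n_samples, degree + 1)
    return basis.T @ signal / signal.size   # one moment per basis function

# Synthetic two-class EEG epochs (placeholders for the real datasets).
rng = np.random.default_rng(1)
epochs = rng.standard_normal((200, 512))
labels = rng.integers(0, 2, size=200)

X = np.array([op_moments(e) for e in epochs])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0)

clf = make_pipeline(SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
print("5-fold CV accuracy:", cross_val_score(clf, X_train, y_train, cv=5).mean())
print("Held-out accuracy :", clf.fit(X_train, y_train).score(X_test, y_test))
```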
One of the most important features of the Amazon Web Services (AWS) cloud is that a program can be run and accessed from any location. The results of the program can be accessed and monitored from anywhere, many images can be stored, and computation is faster. This work proposes a face detection classification model based on the AWS cloud that aims to classify faces into two classes, a non-permission class and a permission class, by training on a real dataset collected from our cameras. The proposed cloud-based Convolutional Neural Network (CNN) system was used to share computational resources for Artificial Neural Networks (ANN) to reduce redundant computation. The test system uses Internet of Things (IoT) services through our ca…
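A minimal inference sketch under the two-class setup above, assuming a CNN with a sigmoid output has already been trained and stored in the cloud; the S3 bucket, key, model file name, image size, and decision threshold are hypothetical, and only standard boto3 and Keras calls are used.

```python
import boto3
import numpy as np
import tensorflow as tf

BUCKET, KEY, LOCAL_MODEL = "my-face-models", "face_permission_cnn.h5", "model.h5"

# Fetch the trained binary classifier from S3 (bucket and key are hypothetical).
boto3.client("s3").download_file(BUCKET, KEY, LOCAL_MODEL)
model = tf.keras.models.load_model(LOCAL_MODEL)

def classify_face(image_path: str, size=(128, 128)) -> str:
    """Return 'permission' or 'non-permission' for a single camera frame."""
    img = tf.keras.utils.load_img(image_path, target_size=size)
    x = tf.keras.utils.img_to_array(img)[np.newaxis] / 255.0
    prob = float(model.predict(x, verbose=0)[0, 0])  # sigmoid output assumed
    return "permission" if prob >= 0.5 else "non-permission"

print(classify_face("frame_0001.jpg"))
```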
Two unsupervised classifiers for optimal multithreshold selection are presented: fast Otsu and k-means. These nonparametric methods provide an efficient procedure for separating the regions (classes) by selecting optimal levels, either on the gray levels of the image histogram (the Otsu classifier) or on the gray levels of the image intensities (the k-means classifier), which represent the threshold values of the classes. To compare the experimental results of these classifiers, the computation time is recorded, along with the number of iterations the k-means classifier needs to converge to the optimal class centers. The variation in the recorded computation time for the k-means classifier is discussed.
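A minimal sketch of the comparison described above, using scikit-image's multi-Otsu thresholding and scikit-learn's k-means on pixel intensities; the sample image, number of classes, and timing method are illustrative assumptions rather than the paper's exact setup.

```python
import time
import numpy as np
from skimage import data, filters
from sklearn.cluster import KMeans

image = data.camera()   # sample grayscale image
n_classes = 3

# Multi-Otsu: thresholds chosen on the gray-level histogram.
t0 = time.perf_counter()
thresholds = filters.threshold_multiotsu(image, classes=n_classes)
otsu_labels = np.digitize(image, bins=thresholds)
t_otsu = time.perf_counter() - t0

# k-means: class centers chosen on the pixel intensities themselves.
t0 = time.perf_counter()
km = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
kmeans_labels = km.fit_predict(image.reshape(-1, 1)).reshape(image.shape)
t_kmeans = time.perf_counter() - t0

print(f"Otsu thresholds : {thresholds}, time {t_otsu:.3f}s")
print(f"k-means centers : {np.sort(km.cluster_centers_.ravel())}, "
      f"time {t_kmeans:.3f}s, iterations {km.n_iter_}")
```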
The support vector machine (SVM) is a supervised learning model that can be used for classification or regression, depending on the dataset. SVM classifies data points by determining the best hyperplane between two or more groups. Working with enormous datasets, however, can cause a variety of issues, including low accuracy and long computation times. In this research, SVM was extended by applying several kernel transformations: linear, polynomial, radial basis, and multi-layer kernels. The non-linear SVM classification model was illustrated and summarized in an algorithm using kernel tricks. The proposed method was examined using three simulation datasets with different sample …
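A minimal sketch of the kernel comparison described above, using scikit-learn's SVC on a simulated dataset; the sigmoid kernel is used here as the usual stand-in for a multi-layer (MLP-style) kernel, and the dataset sizes and parameters are illustrative assumptions.

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Simulated dataset standing in for the paper's simulation studies.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Compare the kernel tricks on held-out accuracy and fit time.
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    t0 = time.perf_counter()
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(f"{kernel:8s} accuracy={clf.score(X_test, y_test):.3f} "
          f"fit_time={time.perf_counter() - t0:.3f}s")
```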
This article is an endeavour to highlight the relationship between social media and language evolution. It reviews the current theoretical efforts on communication and language change. A descriptive design, theoretically grounded in technological determinism, is used. The assumption behind this review is that social media plays a significant role in language evolution. Moreover, the different social media platforms are characterized by being the easiest and fastest means of communication. The article concludes that current theoretical efforts have paid much attention to the relationship between social media and language evolution. Such efforts have highlighted the fact that social media platforms are awash with acronyms, cybe…
Video copyright protection is the most widely accepted method of preventing data piracy. This paper proposes a blind video copyright protection technique based on the Fast Walsh-Hadamard Transform (FWHT), the Discrete Wavelet Transform (DWT), and the Arnold map. The proposed method chooses only the frames with maximum and minimum energy features to host the watermark. It also exploits the advantages of both the FWHT and the DWT for watermark embedding. The Arnold map encrypts watermarks before the embedding process and decrypts them after extraction. The results show that the proposed method achieves a fast embedding time, good transparency, and robustness against various …
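A minimal sketch of two building blocks named above, Arnold map scrambling of a square watermark and energy-based selection of host frames; the watermark size, iteration count, and energy measure are illustrative assumptions, and the FWHT/DWT embedding step itself is omitted.

```python
import numpy as np

def arnold_scramble(img: np.ndarray, iterations: int = 5) -> np.ndarray:
    """Scramble a square (N x N) watermark with the Arnold cat map."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]  # (x, y) -> (x+y, x+2y) mod N
        out = scrambled
    return out

def select_host_frames(frames: np.ndarray):
    """Pick the frames with minimum and maximum energy (sum of squared pixels)."""
    energy = (frames.astype(np.float64) ** 2).sum(axis=(1, 2))
    return int(energy.argmin()), int(energy.argmax())

# Toy binary watermark and a short toy video (placeholders for real data).
watermark = (np.random.rand(64, 64) > 0.5).astype(np.uint8)
video = np.random.randint(0, 256, size=(30, 128, 128), dtype=np.uint8)

scrambled = arnold_scramble(watermark, iterations=5)
lo, hi = select_host_frames(video)
print("host frames (min, max energy):", lo, hi)
```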