Image classification is the process of finding common features in images from various classes and applying them to categorize and label those images. The key obstacles in image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data. The cornerstone of the approach is evaluating the convolutional features retrieved from deep learning models and training machine learning classifiers on them. This study proposes a new approach of "hybrid learning" that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven classifiers. A hybrid supervised learning system is suggested that takes advantage of the rich intermediate features extracted by deep learning, compared to traditional feature extraction, to boost classification accuracy and other performance parameters. All classifiers are given the same set of features in order to discover and verify which one yields the best classification under the proposed "hybrid learning" approach. To achieve this, the performance of the classifiers was assessed on a real dataset captured by our camera system. The simulation results show that the support vector machine (SVM) achieves a mean square error of 0.011, a total accuracy of 98.80%, and an F1 score of 0.99. The LR classifier, with a mean square error of 0.035, a total accuracy of 96.42%, and an F1 score of 0.96, comes in second place, and the ANN classifier, with a mean square error of 0.047, a total accuracy of 95.23%, and an F1 score of 0.94, comes in third. The RF, WKNN, DT, and NB classifiers follow, with accuracy ratios of 91.66%, 90.47%, 79.76%, and 75%, respectively. The main contribution is therefore the enhancement of the classification performance parameters on images of varying brightness and clarity using the proposed hybrid learning approach.
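A minimal sketch of the two-stage hybrid pipeline described above, assuming a Keras VGG-16 backbone and scikit-learn classifiers; the function names, array shapes, and SVM settings are illustrative assumptions, not taken from the paper:

```python
from sklearn.metrics import accuracy_score, f1_score
from sklearn.svm import SVC
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# VGG-16 without its classification head; global average pooling turns the
# last convolutional maps into one 512-dimensional feature vector per image.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: array of shape (n, 224, 224, 3) holding raw pixel values."""
    return extractor.predict(preprocess_input(images.astype("float32")))

def hybrid_classify(train_imgs, y_train, test_imgs, y_test):
    # Stage 1: deep convolutional feature extraction
    f_train, f_test = extract_features(train_imgs), extract_features(test_imgs)
    # Stage 2: a classical classifier on the deep features (SVM shown here;
    # LR, ANN, RF, WKNN, DT, and NB would be swapped in the same way)
    clf = SVC(kernel="rbf").fit(f_train, y_train)
    pred = clf.predict(f_test)
    return accuracy_score(y_test, pred), f1_score(y_test, pred, average="macro")
```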
The support vector machine (SVM) is a type of supervised learning model that can be used for classification or regression, depending on the dataset. SVM classifies data points by determining the best hyperplane separating two or more groups. Working with enormous datasets, however, can cause a variety of issues, including reduced accuracy and long computation times. In this research, SVM was extended by applying several non-linear kernel transformations: linear, polynomial, radial basis, and multi-layer kernels. The non-linear SVM classification model using kernel tricks was illustrated and summarized in an algorithm. The proposed method was examined using three simulation datasets with different sample
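A brief illustration of the kernel trick in scikit-learn, using a simulated dataset as a stand-in for the paper's data; here the sigmoid kernel plays the role of the multi-layer kernel, and all hyperparameter values are illustrative:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Simulated, non-linearly separable data
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Kernel trick: the same SVM solver, different implicit feature mappings
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel, degree=3, gamma="scale")
    clf.fit(X_tr, y_tr)
    print(kernel, clf.score(X_te, y_te))
```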
This work investigates how the thermal and electrical gains and efficiencies of the designed photovoltaic-thermal hybrid collector (PVT) behave under different weather conditions. The designed system was manufactured by attaching a fabricated cooling system, made of serpentine tubes, to a single PV panel and connecting it to an automatic control system for measuring, monitoring, and simultaneously collecting the required data. A removable glass cover was used to study the effects of glazed and unglazed PVT panel configurations. The experiments were conducted in February (winter) and July (summer), with additional measurements in March to assess the effect of daily solar radiation on the efficiencies. The results indicated that electrical and thermal gains increased by the incre
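For context, the electrical and thermal efficiencies of a PVT collector are conventionally defined as follows; these are standard textbook definitions assumed here, since the paper's exact formulation is not given in the excerpt:

```latex
% G: incident irradiance [W/m^2], A: collector area [m^2],
% \dot{m}: coolant mass flow rate [kg/s], c_p: specific heat [J/(kg K)]
\eta_{el} = \frac{V_{mp}\, I_{mp}}{G\, A}, \qquad
\eta_{th} = \frac{\dot{m}\, c_p\, (T_{out} - T_{in})}{G\, A}, \qquad
\eta_{tot} = \eta_{el} + \eta_{th}
```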
The proliferation of editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to do actions or say words that he never did or said, so developing an algorithm for deepfake detection is very important in order to discriminate real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of the input data frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using the red-green-blue
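A sketch of the kind of Gabor pre-processing described above, assuming OpenCV; the filter parameters (kernel size, sigma, wavelength) are illustrative assumptions, not the paper's values:

```python
import cv2
import numpy as np

def gabor_bank(frame_gray, n_orientations=16, ksize=31):
    """Apply a 16-orientation Gabor filter bank to one greyscale frame."""
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        # Arguments: (ksize, sigma, theta, lambda, gamma, psi)
        kern = cv2.getGaborKernel((ksize, ksize), 4.0, theta, 10.0, 0.5, 0)
        responses.append(cv2.filter2D(frame_gray, cv2.CV_32F, kern))
    # Stack responses as channels: the CNN input becomes an (H, W, 16)
    # texture-map tensor instead of the raw colour channels
    return np.stack(responses, axis=-1)
```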
OpenStreetMap (OSM), recognised for its current and readily accessible spatial database, frequently serves regions lacking precise data at the necessary granularity. Global collaboration among OSM contributors presents challenges to data quality and uniformity, exacerbated by the sheer volume of input and indistinct data annotation protocols. This study presents a methodological improvement in the spatial accuracy of OSM datasets centred over Baghdad, Iraq, utilising data derived from OSM services and satellite imagery. An analytical focus was placed on two geometric correction methods: a two-dimensional polynomial affine transformation and a two-dimensional polynomial conformal transformation. The former involves twelve coefficients for ad
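A compact numerical sketch of the two correction models named above, assuming NumPy and hypothetical control-point arrays. A first-order (six-parameter) affine fit is shown for brevity; the twelve-coefficient version mentioned in the abstract would extend the design matrix with second-order terms (x², xy, y²):

```python
import numpy as np

def fit_affine(xy_osm, XY_ref):
    """Least-squares first-order affine fit: X = a0 + a1*x + a2*y, likewise Y."""
    x, y = xy_osm[:, 0], xy_osm[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    coef_X, *_ = np.linalg.lstsq(A, XY_ref[:, 0], rcond=None)
    coef_Y, *_ = np.linalg.lstsq(A, XY_ref[:, 1], rcond=None)
    return coef_X, coef_Y

def fit_conformal(xy_osm, XY_ref):
    """Four-parameter conformal fit: X = a + c*x - d*y, Y = b + d*x + c*y."""
    x, y = xy_osm[:, 0], xy_osm[:, 1]
    n = len(x)
    A = np.zeros((2 * n, 4))
    A[:n] = np.column_stack([np.ones(n), np.zeros(n), x, -y])  # X equations
    A[n:] = np.column_stack([np.zeros(n), np.ones(n), y, x])   # Y equations
    b = np.concatenate([XY_ref[:, 0], XY_ref[:, 1]])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params  # (a, b, c, d)
```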
Neurolinguistics is a new science that studies the close relationship between language and neuroscience. This interdisciplinary field confirms the functional integration between language and the nervous system, that is, the movement of linguistic information in the brain during reception, acquisition, and production to achieve linguistic communication, since language is in fact a mental process that takes place only through the nervous system. This research shows the benefit each of these two fields brings to the other. The science covers important topics including language acquisition, the linguistic abilities of the two hemispheres of the brain, the linguistic responsibility of the brain centers, and the time limit for langua
The purpose of the current investigation is to distinguish working memory performance among five patients with vascular dementia (VaD), fifteen post-stroke patients with mild cognitive impairment (MCI), and fifteen healthy control individuals (HC) based on background electroencephalography (EEG) activity. The elimination of EEG artifacts using wavelet (WT) pre-processing denoising is demonstrated in this study. Spectral entropy, permutation entropy, and approximate entropy were all explored. To improve the classification using the k-nearest neighbors (kNN) classifier scheme, a comparative study of using fuzzy neighbourhood preserving analysis with QR-decomposition as a dimensionality reduction technique an
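A minimal sketch of one of the three entropy features named above, permutation entropy; the order and delay defaults are illustrative, not the paper's settings:

```python
import math
from itertools import permutations

import numpy as np

def permutation_entropy(signal, order=3, delay=1):
    """Normalised permutation entropy of a 1-D signal (e.g. one EEG channel)."""
    # Count how often each ordinal pattern of length `order` occurs
    counts = {p: 0 for p in permutations(range(order))}
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = signal[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1
    freq = np.array([c for c in counts.values() if c > 0], dtype=float)
    p = freq / freq.sum()
    # Shannon entropy of the pattern distribution, normalised to [0, 1]
    return -np.sum(p * np.log2(p)) / np.log2(math.factorial(order))
```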
This work reports the development of an analytical method for the simultaneous analysis of three fluoroquinolones, ciprofloxacin (CIP), norfloxacin (NOR), and ofloxacin (OFL), in a soil matrix. The proposed method uses microwave-assisted extraction (MAE) and solid-phase extraction (SPE) for sample purification, and the pre-concentrated samples are finally analyzed by HPLC. In this study, various organic solvents were tested for extracting the test compounds, and the extraction performance was evaluated across several parameters, including extraction solvent, solvent volume, extraction time, temperature, and number of extraction cycles. The current method showed good linearity over the concentration ranging from
... Show MoreLiquid-Liquid Extraction of Cu(II) ion in aqueous solution by dicyclohexyl-18-crown-6 as extractant in dichloroethane was studied .The extraction efficiency was investigated by a spectrophometric method. The reagent form a coloured complex which has been a quantitatively extracted at pH 6.3. The method obeys Beer`s law over range from (2.5-22.5) ppm with the correlation coefficient of 0.9989. The molar absorptivity the stoichiometry of extracted complex is found to be 1:2. the proposed method is very sensitive and selective.