Image classification is the process of identifying common features in images from various classes and using them to categorize and label those images. The key obstacles in image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data. The cornerstone of the approach studied here is extracting convolutional features from deep learning models and training machine learning classifiers on them. This study proposes a new "hybrid learning" approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven classifiers. A hybrid supervised learning system is suggested that exploits the rich intermediate features extracted by deep learning, compared with traditional feature extraction, to boost the classification accuracy and performance parameters. All classifiers are given the same set of features in order to discover and verify which one yields the best classification under the proposed hybrid learning approach. To this end, the performance of the classifiers was assessed on a genuine dataset captured by our camera system. The simulation results show that the support vector machine (SVM) achieves a mean square error of 0.011, a total accuracy ratio of 98.80%, and an F1 score of 0.99. The logistic regression (LR) classifier comes second, with a mean square error of 0.035, a total accuracy ratio of 96.42%, and an F1 score of 0.96. The artificial neural network (ANN) classifier comes third, with a mean square error of 0.047, a total accuracy ratio of 95.23%, and an F1 score of 0.94. The RF, WKNN, DT, and NB classifiers follow, with accuracy ratios of 91.66%, 90.47%, 79.76%, and 75%, respectively.
As a result, the main contribution is the enhancement of the classification performance parameters with images of varying brightness and clarity using the proposed hybrid learning approach.
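The three figures reported for each classifier (mean square error, total accuracy ratio, F1 score) can be computed from a vector of true labels and a vector of predictions. A minimal sketch for the binary case, with function and variable names of our own choosing (the abstract does not give code):

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return (mse, accuracy %, f1) for binary 0/1 labels.
    Illustrative only; the paper's exact evaluation pipeline is not shown."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    mse = float(np.mean((y_true - y_pred) ** 2.0))        # mean square error
    accuracy = float(np.mean(y_true == y_pred)) * 100.0   # total accuracy ratio, %
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return mse, accuracy, f1
```

In the hybrid-learning setting, `y_pred` would come from any of the seven classifiers trained on the VGG-16 convolutional features.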
The performance of a synergistic combination of electrocoagulation (EC) and electro-oxidation (EO) for oilfield wastewater treatment has been studied. The effect of operating variables such as current density, pH, and electrolyte concentration on the reduction of chemical oxygen demand (COD) was studied and optimized based on response surface methodology (RSM). The results showed that current density had the highest impact on COD removal, with a contribution of 64.07%, while pH, NaCl addition, and the other interaction effects account for only 34.67%. The optimized operating parameters were a current density of 26.77 mA/cm² and a pH of 7.6 with no addition of NaCl, which results in a COD removal efficiency of 93.43% and a specific energy c…
Despite the great economic and commercial importance attached to real estate by virtue of its view of the landscape or public roads, US courts have differed in their position on compensation for damages resulting from the blocking of that view or vision by public projects. Some courts compensated for such damages, while other courts rejected such compensation. Hence, this research sheds light on the extent to which compensation is possible for the blocking of a view or vision as a result of public projects, and the research is supported by many judicial decisions.
The effect of using three different interpolation methods (nearest-neighbour, linear, and non-linear) on a 3D sinogram to restore the data missing when an angular difference greater than 1° is used (a 1° difference being considered the optimum 3D sinogram) is presented. Two reconstruction methods are adopted in this study: the back-projection method and the Fourier slice theorem method. The results show the second reconstruction method, combined with linear interpolation, to be promising as long as the angular difference is less than 20°.
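The linear-interpolation option described above amounts to resampling each detector bin of the sinogram across the angle axis. A rough numpy sketch, where the array layout (angles × detector bins) and the grid names are our own assumptions, not the paper's notation:

```python
import numpy as np

def fill_missing_angles(sinogram, coarse_angles, fine_angles):
    """Linearly interpolate sinogram rows (one per projection angle)
    onto a finer angular grid, one detector bin (column) at a time.
    Sketch of the linear-interpolation restoration step only."""
    sinogram = np.asarray(sinogram, dtype=float)
    n_det = sinogram.shape[1]
    filled = np.empty((len(fine_angles), n_det))
    for d in range(n_det):
        filled[:, d] = np.interp(fine_angles, coarse_angles, sinogram[:, d])
    return filled
```

The restored sinogram can then be fed to either reconstruction method (back-projection or Fourier slice theorem).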
The searching process using a binary codebook that combines the Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e. a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method that rotates each binary code word in this codebook from 90° to 270° in steps of 90°. Then, we organized each code word according to its angle into four types of binary codebooks (Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s…
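The rotation step above can be sketched with `np.rot90`, which rotates a block counter-clockwise in 90° steps. The block size and the grouping of rotated words into the four codebook types are assumptions for illustration; the abstract does not specify them:

```python
import numpy as np

def rotated_codebooks(codebook):
    """Generate the 90°, 180°, and 270° rotations of each binary code
    word (a small 0/1 block). Sketch of the rotation step only; the
    classification into Pour/Flat/Vertical/Zigzag codebooks is separate."""
    out = {90: [], 180: [], 270: []}
    for word in codebook:
        w = np.asarray(word)
        for k, angle in enumerate((90, 180, 270), start=1):
            out[angle].append(np.rot90(w, k))  # k quarter-turns CCW
    return out
```

Searching only the codebook whose orientation matches the input block is what shortens the full search.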
Semantic segmentation is an exciting research topic in medical image analysis because it aims to detect objects in medical images. In recent years, approaches based on deep learning have shown more reliable performance than traditional approaches in medical image segmentation. The U-Net network is one of the most successful end-to-end convolutional neural networks (CNNs) presented for medical image segmentation. This paper proposes a multiscale residual dilated convolution neural network (MSRD-UNet) based on U-Net. MSRD-UNet replaces the traditional convolution block with a novel, deeper block that fuses multi-layer features using dilated and residual convolution. In addition, the squeeze-and-excitation attention mechanism (SE) and the s…
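The key ingredient of the block described above is dilated convolution: the kernel taps are spaced `dilation` samples apart, enlarging the receptive field without adding weights. A toy 1-D numpy sketch of the idea only; the actual MSRD-UNet block uses 2-D convolutions with learned weights inside a residual connection:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Same'-padded 1-D convolution whose taps are `dilation` apart.
    Toy illustration of the dilated-convolution concept, not the model."""
    k = len(kernel)
    span = (k - 1) * dilation             # receptive-field width minus one
    pad = span // 2
    xp = np.pad(np.asarray(x, float), (pad, span - pad))
    return np.array([sum(kernel[j] * xp[i + j * dilation] for j in range(k))
                     for i in range(len(x))])
```

A residual variant, in the spirit of the fused block, would simply return `x + dilated_conv1d(x, kernel, dilation)` so that gradients can bypass the convolution.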
The recent emergence of sophisticated large language models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital med…
The fundamental precoding scheme of a downlink massive multiple-input multiple-output (MIMO) energy-efficiency strategy, known as minimum mean squared error (MMSE), degrades the performance of a downlink massive MIMO energy-efficiency scheme, so some improvements are added to this precoding scheme to improve its operation; the resulting proposal is called the proposed improved MMSE precoder (PIMP). The energy efficiency (EE) study has also taken into account drastically lowering radiated power while maintaining high throughput and minimizing interference issues. We further examine the tradeoff between spectral efficiency (SE) and EE: although they coincide at the beginning, their interests later become con…
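The MMSE baseline the abstract starts from is, in textbook form, the regularized channel inverse W = Hᴴ(HHᴴ + Kσ²I)⁻¹ with a transmit-power normalization. A generic numpy sketch of that baseline only, under our own notation (K users, M base-station antennas); it is not the proposed PIMP improvement:

```python
import numpy as np

def mmse_precoder(H, noise_var):
    """Textbook downlink MMSE (regularized zero-forcing) precoder for a
    K x M channel matrix H, normalized to unit total transmit power.
    Sketch of the baseline scheme only, not the PIMP modification."""
    K = H.shape[0]                                   # number of users
    G = H @ H.conj().T + K * noise_var * np.eye(K)   # regularized Gram matrix
    W = H.conj().T @ np.linalg.inv(G)                # M x K precoding matrix
    return W / np.linalg.norm(W)                     # power normalization
```

As `noise_var` goes to zero this reduces to zero-forcing (H @ W proportional to the identity), which is why the regularization term governs the interference/noise tradeoff.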