Image compression plays an important role in reducing data size and storage requirements while significantly increasing transmission speed over the Internet. It has been an active research topic for several decades, and with the recent successes of deep learning in many areas of image processing, its use of deep learning is growing steadily. Deep neural networks have also achieved strong results in processing and compressing images of various sizes. In this paper, we present an image compression structure based on a Convolutional AutoEncoder (CAE) for deep learning, inspired by the way the human eye perceives the different colors and features of an image. We propose a multi-layer hybrid deep learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is trained and evaluated on the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method is superior to traditional autoencoder-based compression, with better performance in terms of speed and the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), achieving high compression bit rates with a low Mean Squared Error (MSE). The recorded compression ratios ranged from 0.7117 to 0.8707 for the Kodak dataset and from 0.7191 to 0.9930 for the CLIC dataset, while the error coefficient ranged from 0.0126 down to 0.0003, making the system more accurate than comparable autoencoder-based deep learning methods.
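To illustrate the color-clustering stage mentioned above, the following is a minimal sketch of K-means color quantization using scikit-learn. The cluster count, file names, and helper function are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of K-means color clustering for image compression
# (illustrative only; cluster count and file names are assumptions).
import numpy as np
from sklearn.cluster import KMeans
from PIL import Image

def quantize_colors(image_path: str, n_colors: int = 16) -> Image.Image:
    """Reduce an image to n_colors representative colors with K-means."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float64) / 255.0
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3)                            # one row per pixel (R, G, B)

    kmeans = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    quantized = kmeans.cluster_centers_[kmeans.labels_]    # replace each pixel by its cluster centroid

    out = (quantized.reshape(h, w, 3) * 255).astype(np.uint8)
    return Image.fromarray(out)

# Example usage (hypothetical file name):
# quantize_colors("kodim01.png", n_colors=16).save("kodim01_16colors.png")
```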
One of the most important features of the Amazon Web Services (AWS) cloud is that a program can be run and accessed from any location: results can be monitored remotely, large numbers of images can be stored, and computation is faster. This work proposes a face detection and classification model based on the AWS cloud that classifies faces into two classes, a non-permission class and a permission class, by training on a real dataset collected from our cameras. The proposed Convolutional Neural Network (CNN) cloud-based system shares computational resources for Artificial Neural Networks (ANN) to reduce redundant computation. The test system uses Internet of Things (IoT) services through our cameras …
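A minimal sketch of a two-class CNN of the kind described above, using Keras; the layer sizes, input shape, and class encoding are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of a binary (permission / non-permission) CNN classifier.
# Layer sizes and the 96x96 input are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_face_classifier(input_shape=(96, 96, 3)) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # 1 = permission, 0 = non-permission
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_face_classifier()
# model.fit(train_images, train_labels, epochs=10)   # hypothetical arrays from the camera dataset
```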
Blockchain is an innovative technology that has gained interest across all sectors in the era of digital transformation, as it manages transactions and records them in a database. With the growth of financial transactions in a rapidly developing society with expanding businesses, many people pursuing the dream of a financially independent life are moving away from large corporations and organizations to form startups and small businesses. Recently, the increasing demand on employees and institutions to prepare, manage, and verify contracts and papers, together with human error, has led to the emergence of the smart contract. Smart contracts have been developed to save time and provide more confidence while dealing, as well as …
The research aim was to observe the distribution pattern of …
A Photonic Crystal Fiber (PCF) sensor based on the Surface Plasmon Resonance (SPR) effect is proposed to detect polluted water samples. The sensing characteristics are analyzed using the finite element method. The air hole on the right side of the PCF core is coated with chemically stable gold to achieve a practical sensing approach. The performance of the proposed sensor is investigated in terms of wavelength sensitivity, amplitude sensitivity, sensor resolution, and the linearity of the resonant wavelength with variation of the analyte refractive index. In the sensing range of 1.33 to 1.3624, maximum sensitivities of 1360.2 nm/RIU and 184 RIU⁻¹ are achieved with high sensor resolutions of 7…
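For context, the performance metrics named above are usually defined as follows in the SPR-PCF literature; these are standard textbook definitions, not equations reproduced from the paper.

```latex
% Wavelength sensitivity (nm/RIU): resonance-peak shift per unit change in the
% analyte refractive index n_a.
S_\lambda = \frac{\Delta\lambda_{\mathrm{peak}}}{\Delta n_a}

% Amplitude sensitivity (RIU^{-1}), with \alpha(\lambda, n_a) the confinement loss:
S_A(\lambda) = -\frac{1}{\alpha(\lambda, n_a)}\,\frac{\partial \alpha(\lambda, n_a)}{\partial n_a}

% Sensor resolution (RIU), assuming a minimum detectable wavelength shift \Delta\lambda_{\min}:
R = \Delta n_a\,\frac{\Delta\lambda_{\min}}{\Delta\lambda_{\mathrm{peak}}}
```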
Electrospun nanofiber membranes are employed in a variety of applications due to their unique features. The characteristics of the nanofibers are affected by the polymer solution, and the solvent used to dissolve the polymer powder is critical in preparing the precursor solution. In this paper, Polyacrylonitrile (PAN)-based nanofibers were prepared at a concentration of 10 wt.% using various solvents (NMP, DMF, and DMSO). The surface morphology, porosity, and mechanical strength of the three prepared 10 wt.% PAN-based nanofiber membranes (PAN/NMP, PAN/DMF, and PAN/DMSO) were characterized using Scanning Electron Microscopy (SEM), the dry-wet weights method, and a Dynamic Mechanical Analyzer (DMA). Using DMF as a solvent resulted in a lon…
This article proposes a new strategy based on a hybrid method that combines the Gravitational Search Algorithm (GSA) with the Bat Algorithm (BAT) to solve single-objective optimization problems. GSA is run first, followed by BAT as a second stage. The proposed approach relies on a parameter between 0 and 1 to address the problem of becoming trapped in local optima: because GSA lacks a local search mechanism, search intensity increases while diversity remains high, and the search easily falls into a local optimum. The improvement preserves the speed of the original BAT while accelerating convergence to the best solution, and all solutions in the population are updated before the proposed algorithm terminates. The diversification f…
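The two-stage structure described above can be sketched as follows: a simplified GSA evolves the population first, and its final population seeds a simplified BAT. The objective function, population size, and all parameter values are illustrative assumptions and do not reproduce the article's exact algorithm.

```python
# Simplified sketch of a two-stage GSA -> BAT hybrid for single-objective minimization.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # toy objective: global minimum 0 at the origin
    return np.sum(x ** 2, axis=-1)

def gsa_phase(f, X, iters=100, G0=100.0, alpha=20.0, eps=1e-12):
    """Gravitational Search Algorithm (simplified): returns the evolved population."""
    V = np.zeros_like(X)
    for t in range(iters):
        fit = f(X)
        worst, best = fit.max(), fit.min()
        m = (fit - worst) / (best - worst + eps)        # heavier mass = better fitness
        M = m / (m.sum() + eps)
        G = G0 * np.exp(-alpha * t / iters)             # gravitational constant decays over time
        diff = X[None, :, :] - X[:, None, :]            # x_j - x_i for every pair of agents
        dist = np.linalg.norm(diff, axis=-1) + eps
        acc = (rng.random((len(X), len(X), 1)) * G
               * (M[None, :, None] / dist[..., None]) * diff).sum(axis=1)
        V = rng.random(X.shape) * V + acc
        X = X + V
    return X

def bat_phase(f, X, iters=100, fmin=0.0, fmax=2.0, A0=0.9, r0=0.5, gamma=0.9, decay=0.97):
    """Bat Algorithm (simplified), seeded with the GSA population."""
    V = np.zeros_like(X)
    A = np.full(len(X), A0)                             # loudness per bat
    fit = f(X)
    best = X[fit.argmin()].copy()
    for t in range(iters):
        r = r0 * (1.0 - np.exp(-gamma * t))             # pulse rate grows over time
        freq = fmin + (fmax - fmin) * rng.random((len(X), 1))
        V = V + (X - best) * freq
        Xnew = X + V
        walk = rng.random(len(X)) > r                   # local random walk around the best solution
        Xnew[walk] = best + 0.01 * A.mean() * rng.standard_normal((walk.sum(), X.shape[1]))
        newfit = f(Xnew)
        accept = (newfit <= fit) & (rng.random(len(X)) < A)
        X[accept], fit[accept] = Xnew[accept], newfit[accept]
        A[accept] *= decay                              # accepted bats become quieter
        best = X[fit.argmin()].copy()
    return best, fit.min()

X0 = rng.uniform(-5.0, 5.0, size=(30, 10))              # 30 agents, 10 dimensions
best, value = bat_phase(sphere, gsa_phase(sphere, X0))
print(f"best value found: {value:.6f}")
```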
Various Speech Enhancement Algorithms (SEA) have been developed over the last few decades. Each algorithm has its advantages and disadvantages because the speech signal is affected by its environment; distortion of speech results in the loss of important features that make the signal difficult to understand. SEA aim to improve the intelligibility and quality of speech that has been degraded by different types of noise. In most applications, quality improvement is highly desirable because it can reduce listener fatigue, especially when the listener is exposed to high noise levels for extended periods (e.g., in manufacturing). SEA reduce or suppress the background noise to some degree and are sometimes called noise suppression algorithms…
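As one concrete example of the kind of noise suppression referred to above, here is a minimal sketch of classical magnitude spectral subtraction. The frame length, overlap, spectral floor, and the assumption that the first 0.25 s contain only noise are illustrative choices, not a specific method from the abstract.

```python
# Minimal sketch of magnitude spectral subtraction, a classical speech
# enhancement / noise suppression technique.
import numpy as np

def spectral_subtraction(x, sr, frame_len=512, hop=256, noise_sec=0.25, floor=0.02):
    """Return an enhanced copy of the noisy signal x sampled at sr Hz."""
    window = np.hanning(frame_len)

    # Estimate the noise magnitude spectrum from the leading (assumed speech-free) frames.
    n_noise = max(1, int(noise_sec * sr) // hop)
    noise_mag = np.zeros(frame_len // 2 + 1)
    for i in range(n_noise):
        frame = x[i * hop : i * hop + frame_len]
        if len(frame) < frame_len:
            break
        noise_mag += np.abs(np.fft.rfft(frame * window))
    noise_mag /= n_noise

    # Subtract the noise estimate from each frame's magnitude and resynthesize by overlap-add.
    out = np.zeros(len(x))
    for start in range(0, len(x) - frame_len, hop):
        frame = x[start : start + frame_len] * window
        spec = np.fft.rfft(frame)
        mag = np.abs(spec) - noise_mag
        mag = np.maximum(mag, floor * noise_mag)        # spectral floor to limit musical noise
        clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame_len)
        out[start : start + frame_len] += clean         # Hann analysis window + 50% hop overlap-adds to ~1
    return out

# Example usage (hypothetical arrays):
# enhanced = spectral_subtraction(noisy_signal, sr=16000)
```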