Image compression plays an important role in reducing data size and storage requirements while significantly increasing transmission speed over the Internet. It has been an important research topic for several decades, and with the great successes recently achieved by deep learning in many areas of image processing, its use in image compression is growing steadily. Deep neural networks have likewise achieved great success in processing and compressing images of various sizes. In this paper, we present a structure for image compression based on a Convolutional AutoEncoder (CAE), inspired by the way human eyes perceive the different colors and features of images. We propose a multi-layer hybrid deep learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is implemented on the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method is superior to traditional autoencoder-based compression methods, with better performance in speed and in the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). The results achieved high efficiency at high compression bit rates with a low Mean Squared Error (MSE): compression ratios ranged between 0.7117 and 0.8707 for the Kodak dataset and between 0.7191 and 0.9930 for the CLIC dataset, while the error coefficient dropped from 0.0126 to 0.0003. The proposed system is therefore the most accurate and highest-quality among the autoencoder-based deep learning methods compared.
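As a hedged illustration of the hybrid pipeline described above, the following Python sketch pairs a small convolutional autoencoder with K-means color clustering of the reconstruction. The layer widths, cluster count, and training settings are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch only: a small convolutional autoencoder (CAE) plus
# K-means color clustering, in the spirit of the hybrid system described
# above. Layer widths, cluster count, and epochs are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.cluster import KMeans

def build_cae(h=128, w=128):
    inp = layers.Input(shape=(h, w, 3))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(x)   # latent
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse")   # MSE as reconstruction loss
    return model

def kmeans_color_quantize(img, n_colors=16):
    """Cluster pixel colors and replace each pixel by its cluster centroid."""
    pixels = img.reshape(-1, 3)
    km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    return km.cluster_centers_[km.labels_].reshape(img.shape)

# Usage sketch: train on images scaled to [0, 1], then reconstruct and
# color-quantize. PSNR/SSIM can be computed with tf.image.psnr / tf.image.ssim.
if __name__ == "__main__":
    imgs = np.random.rand(8, 128, 128, 3).astype("float32")  # stand-in for Kodak/CLIC data
    cae = build_cae()
    cae.fit(imgs, imgs, epochs=1, batch_size=4, verbose=0)
    recon = cae.predict(imgs, verbose=0)
    quantized = kmeans_color_quantize(recon[0])
    print("MSE:", float(np.mean((imgs[0] - quantized) ** 2)))
```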
Classification of imbalanced data is an important issue. Many algorithms have been developed for classification, such as Back Propagation (BP) neural networks, decision trees, and Bayesian networks, and they have been applied repeatedly in many fields. These algorithms suffer from the problem of imbalanced data, in which instances belong to some classes far more often than to others. Imbalanced data result in poor performance and a bias toward one class at the expense of the others. In this paper, we propose three techniques based on Over-Sampling (O.S.) for processing an imbalanced dataset, redistributing it, and converting it into a balanced one. These techniques are Improved Synthetic Minority Over-Sampling Technique (Improved SMOTE), Border
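For orientation, here is a minimal Python sketch of the classic SMOTE interpolation idea on which the techniques above build; the paper's Improved SMOTE and Borderline variants add further rules not shown here.

```python
# Minimal sketch of classic SMOTE-style over-sampling, for orientation only;
# the improved variants described in the paper refine neighbor selection.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_new, k=5, rng=np.random.default_rng(0)):
    """Generate n_new synthetic minority samples by interpolating
    between each sample and one of its k nearest minority neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)           # idx[:, 0] is the point itself
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = idx[i, rng.integers(1, k + 1)]  # a random true neighbor
        gap = rng.random()                  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

# Usage: grow a toy minority class of 20 samples by 80 synthetic points.
X_minority = np.random.rand(20, 4)
X_balanced = np.vstack([X_minority, smote(X_minority, n_new=80)])
```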
Future wireless systems aim to provide higher transmission data rates, improved spectral efficiency, and greater capacity. In this paper, a spectrally efficient two-dimensional (2-D) parallel code division multiple access (CDMA) system is proposed for generating and transmitting 2-D CDMA symbols through a 2-D Inter-Symbol Interference (ISI) channel to increase the transmission speed. The 3-D Hadamard matrix is used to generate the 2-D spreading codes required to spread each user's two-dimensional data row-wise and column-wise. Quadrature amplitude modulation (QAM) is used as the data mapping technique due to its increased spectral efficiency. The new structure was simulated using MATLAB, and a comparison of performance for ser
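The following Python sketch shows one plausible reading of row-wise/column-wise 2-D spreading with Hadamard codes; the paper's exact symbol construction may differ, so treat the block layout as an assumption.

```python
# Hedged sketch of row-wise/column-wise 2-D spreading with Hadamard codes.
# It illustrates spreading a user's 2-D data block with a row code and a
# column code; the paper's precise construction is not reproduced here.
import numpy as np
from scipy.linalg import hadamard

L = 8                                   # spreading factor (assumed)
H = hadamard(L)                         # rows are orthogonal Walsh-Hadamard codes
c_row, c_col = H[1], H[2]               # one user's row and column codes

D = np.random.choice([-1, 1], size=(4, 4))      # toy 2-D user data (BPSK-like)
chip_block = np.outer(c_row, c_col)             # L x L 2-D spreading pattern
X = np.kron(D, chip_block)                      # transmitted 2-D CDMA symbol

# Despreading: correlate each L x L block with the same pattern and normalize.
rec = np.array([[np.sum(X[i*L:(i+1)*L, j*L:(j+1)*L] * chip_block) / L**2
                 for j in range(4)] for i in range(4)])
assert np.allclose(rec, D)              # perfect recovery on a clean channel
```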
CO2 gas is considered one of the unfavorable gases, and it causes great air pollution. It is possible to decrease this pollution by injecting the gas into oil reservoirs, which provides good miscibility and increases the oil recovery factor. The minimum miscibility pressure (MMP) was estimated using the Peng-Robinson equation of state (PR-EOS). The South Rumila-63 (SULIAY) reservoir is studied, for which miscible displacement by CO2 is achievable based on the standard criteria for successful EOR processes. A PVT report was available for the reservoir under study; it contains differential liberation (DL) and constant composition expansion (CCE) tests. PVTi, one of the software packages of Eclipse V.2010, was used to achieve the goal. Many trials have been done to ma
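To make the EOS step concrete, here is a small Python sketch evaluating the standard Peng-Robinson equation for pure CO2; an MMP study iterates such calculations over reservoir fluid compositions and pressures, which is what PVTi automates. The example conditions are arbitrary, not taken from the paper.

```python
# Illustrative Peng-Robinson EOS evaluation for pure CO2 using the
# textbook constants; example temperature and molar volume are assumptions.
R = 8.314                                # J/(mol K)
Tc, Pc, omega = 304.13, 7.377e6, 0.228   # CO2 critical properties (literature values)

def pr_pressure(T, Vm):
    """P = RT/(Vm - b) - a*alpha / (Vm^2 + 2*b*Vm - b^2)  (Peng-Robinson)."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - (T / Tc) ** 0.5)) ** 2
    return R * T / (Vm - b) - a * alpha / (Vm**2 + 2 * b * Vm - b**2)

# Example: CO2 at reservoir-like conditions (350 K, molar volume 2e-4 m^3/mol).
print(f"P = {pr_pressure(350.0, 2.0e-4) / 1e6:.2f} MPa")
```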
In the field of data security, the critical challenge of preserving sensitive information during its transmission through public channels takes centre stage. Steganography, a method employed to conceal data within various carrier objects such as text, can be used to address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganographic ventures. Arabic text steganography harn
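The truncated abstract does not show the paper's specific scheme, so as a purely generic illustration of Arabic text steganography, the Python sketch below encodes bits with the well-known kashida (tatweel) insertion idea; the letter set and scheme are hypothetical.

```python
# Generic, hypothetical illustration of one well-known Arabic text
# steganography idea: encoding a 1 by inserting a kashida/tatweel (U+0640)
# after a connectable letter, and a 0 by omitting it. Not the paper's scheme.
KASHIDA = "\u0640"
# Simplified, assumed set of letters that connect to the following letter.
CONNECTABLE = set("بتثجحخسشصضطظعغفقكلمنهي")

def embed(cover: str, bits: str) -> str:
    out, i = [], 0
    for ch in cover:
        out.append(ch)
        if ch in CONNECTABLE and i < len(bits):
            if bits[i] == "1":
                out.append(KASHIDA)     # kashida present encodes a 1
            i += 1                      # kashida absent encodes a 0
    return "".join(out)

def extract(stego: str) -> str:
    bits, prev = [], ""
    for ch in stego:
        if prev in CONNECTABLE:
            bits.append("1" if ch == KASHIDA else "0")
        prev = ch
    return "".join(bits)

msg = "1011"
stego = embed("بسم الله الرحمن الرحيم", msg)
assert extract(stego).startswith(msg)
```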
This paper proposes and tests a computerized approach for constructing a 3D model of blood vessels from angiogram images. The approach is divided into two steps: image feature extraction and solid model formation. In the first step, image morphological operations and post-processing techniques are used to extract geometrical entities from the angiogram image. These entities are the middle curve and outer edges of the blood vessel, which are then passed to a computer-aided graphical system for the second phase of processing. The system has embedded programming capabilities and pre-programmed libraries for automating a sequence of events, which are exploited to create a solid model of the blood vessel. The gradient of the middle c
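A hedged Python sketch of the first step only follows: extracting a vessel's middle curve (skeleton) and outer edges from a binarized image with standard morphological operations. The thresholding and filtering choices are assumptions, since the paper's exact operator chain is not given above.

```python
# Sketch of morphological extraction of a vessel centerline and edges,
# under assumed thresholding; the paper's exact operator chain may differ.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize, binary_erosion, remove_small_objects

def vessel_entities(gray: np.ndarray):
    """Return (centerline, edges) boolean masks for a grayscale angiogram."""
    mask = gray > threshold_otsu(gray)              # assumed: vessel brighter than background
    mask = remove_small_objects(mask, min_size=64)  # suppress speckle
    centerline = skeletonize(mask)                  # one-pixel-wide middle curve
    edges = mask & ~binary_erosion(mask)            # morphological boundary
    return centerline, edges

# Usage on a synthetic 'vessel' stripe:
img = np.zeros((64, 64)); img[28:36, :] = 1.0
mid, edge = vessel_entities(img)
print(mid.sum(), edge.sum())
```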
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is then used to classify the reduced moments between two classes. The proposed method's performance is tested and compared with two methods using two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods, reaching best accuracies of 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it
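As a hedged sketch of the OP → feature reduction → SVM pipeline, the Python below uses Chebyshev polynomials as a stand-in for the paper's orthogonal basis and SelectKBest as a stand-in for the Sparse Filter; both substitutions are assumptions.

```python
# Hedged sketch of the moments -> reduction -> SVM pipeline described above.
# Chebyshev fitting stands in for the OP step; SelectKBest stands in for
# the Sparse Filter. Toy data replaces real EEG recordings.
import numpy as np
from numpy.polynomial import chebyshev as C
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def op_moments(signal: np.ndarray, order: int = 16) -> np.ndarray:
    """Project one EEG epoch onto the first `order` Chebyshev polynomials."""
    t = np.linspace(-1, 1, len(signal))
    return C.chebfit(t, signal, deg=order - 1)   # moment vector of length `order`

# Toy two-class data: 200 epochs of 256 samples each (stand-in for real EEG).
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 100)
X_raw = rng.standard_normal((200, 256)) + y[:, None] * np.linspace(0, 1, 256)
X = np.array([op_moments(s) for s in X_raw])

clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=8), SVC(kernel="rbf"))
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```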
Recently, wireless communication environments with high speeds and low complexity have become increasingly essential. Free-space optics (FSO) has emerged as a promising solution for providing direct connections between devices in such high-spectrum wireless setups. However, FSO communications are susceptible to weather-induced signal fluctuations, leading to fading and signal weakness at the receiver. To mitigate these challenges, several mathematical models have been proposed to describe the transition from weak to strong atmospheric turbulence, including the Rayleigh, lognormal, Málaga, Nakagami-m, K-distribution, Weibull, Negative-Exponential, Inverse-Gaussian, Gamma-Gamma (G-G), and Fisher-Snedecor F distributions. This paper extensive
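As a small illustration of two of the turbulence models named above, the Python sketch below samples lognormal irradiance (weak turbulence) and Gamma-Gamma irradiance built as the product of two Gamma variates (moderate-to-strong turbulence); the parameter values are arbitrary examples, not the paper's.

```python
# Hedged illustration: sampling lognormal and Gamma-Gamma (G-G) fading,
# two of the atmospheric-turbulence models listed above. Parameters are
# arbitrary examples; both channels are mean-normalized so E[I] = 1.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Weak turbulence: lognormal irradiance with assumed log-variance 0.1.
sigma2 = 0.1
I_ln = rng.lognormal(mean=-sigma2 / 2, sigma=np.sqrt(sigma2), size=n)

# Moderate/strong turbulence: G-G irradiance I = X * Y with
# X ~ Gamma(alpha, 1/alpha) and Y ~ Gamma(beta, 1/beta).
alpha, beta = 4.0, 2.0
I_gg = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)

# The scintillation index E[I^2]/E[I]^2 - 1 summarizes fading severity.
for name, I in [("lognormal", I_ln), ("Gamma-Gamma", I_gg)]:
    print(name, "scintillation index:", I.var() / I.mean() ** 2)
```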