Audio Compression Using Transform Coding with LZW and Double Shift Coding
Zainab J. Ahmed & Loay E. George
Conference paper in New Trends in Information and Communications Technology Applications. First online: 11 January 2022. Part of the Communications in Computer and Information Science book series (CCIS, volume 1511).
Abstract: The need for audio compression remains a vital issue because of its significance in reducing the data size of one of the most commonly exchanged digital media. In this paper, the efficiencies of two audio compression modules were investigated: the first is based on the discrete cosine transform (DCT) and the second on the discrete wavelet transform (DWT). The proposed audio compression system consists of the following steps: (1) load the digital audio data; (2) transformation (i.e., using a bi-orthogonal wavelet or the discrete cosine transform) to decompose the audio signal; (3) quantization (depending on the transform used); (4) decomposition of the quantized data into two sequence vectors, runs and non-zero values, applying run-length encoding to reduce long zero runs. Each resulting vector is passed to an entropy encoder to complete the compression process. Two entropy encoders are used: the first is the lossless LZW method and the second is an advanced version of the traditional shift coding method called double shift coding (DSC). The proposed system's performance is analyzed using distinct audio samples of different sizes and characteristics with various audio signal parameters, and is evaluated using Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR). The outcomes on the audio samples show that the system is simple and fast and achieves good compression gain, and that the DSC encoding time is less than the LZW encoding time.
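The runs/non-zeros decomposition in step (4) above can be sketched as follows; the function name and exact vector layout are illustrative assumptions, not the paper's specification:

```python
# Hedged sketch of step (4): split a quantized coefficient stream into a
# "runs" vector (lengths of the zero runs) and a "non-zeros" vector, so that
# long zero runs collapse to a single count before entropy coding.
def rle_decompose(coeffs):
    runs, values = [], []
    zero_run = 0
    for c in coeffs:
        if c == 0:
            zero_run += 1
        else:
            runs.append(zero_run)   # zeros preceding this non-zero value
            values.append(c)
            zero_run = 0
    runs.append(zero_run)           # trailing zeros, if any
    return runs, values

runs, values = rle_decompose([5, 0, 0, 0, -2, 0, 7, 0, 0])
# runs = [0, 3, 1, 2], values = [5, -2, 7]
```

Each of the two vectors would then be fed independently to LZW or DSC, as the abstract describes.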
The objective of this paper is to improve the overall quality of infrared (IR) images by proposing an enhancement algorithm for them. The algorithm is based on two methods: adaptive histogram equalization (AHE) and contrast-limited adaptive histogram equalization (CLAHE). The contribution of this paper is to assess how well these contrast-enhancement procedures work on infrared images, and to propose the strategy most appropriate for incorporation into commercial infrared imaging applications.
The database for this paper consists of night-vision infrared images taken by a Zenmuse camera (FLIR Systems, Inc.) attached to a MATRIC100 drone in Karbala city. The experimental tests showed sign
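For context on the two methods above, plain global histogram equalization can be sketched as a baseline: AHE applies the same remapping per local tile, and CLAHE additionally clips each tile's histogram before building the mapping. The function below is an illustrative NumPy baseline, not the paper's implementation:

```python
import numpy as np

# Global histogram equalization for an 8-bit single-channel image: build the
# cumulative histogram and remap intensities so they spread over 0..255.
def hist_equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0].min()                       # first occupied bin
    lut = (cdf - cdf_min) / (cdf.max() - cdf_min) * 255.0
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[img]

dark = np.array([[10, 12], [14, 16]], dtype=np.uint8)  # low-contrast patch
out = hist_equalize(dark)                              # spread over 0..255
```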
The proposal of nonlinear models is one of the most important methods in time series analysis, with wide potential for predicting various phenomena, including physical, engineering, and economic ones, by studying the characteristics of random disturbances in order to arrive at accurate predictions.
In this paper, the autoregressive model with an exogenous variable was built using a threshold as the first method, with two proposed approaches used to determine the best cutting point for forward predictability (forecasting) and within-series predictability (prediction) through the threshold point indicator. B-J (Box-Jenkins) seasonal models are used as a second method based on the principle of the two proposed approaches in dete
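A two-regime threshold autoregressive model with one exogenous variable can be sketched as below, fit by per-regime least squares; the regime split on the lagged value and the regression form are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

# Hedged TARX sketch: observations are split into two regimes by comparing
# the lagged series value with a threshold r, and a separate linear model
# (intercept, AR term, exogenous term) is fit in each regime.
def fit_tarx(y, x, r):
    y_lag, x_lag, y_next = y[:-1], x[:-1], y[1:]
    coefs = {}
    for name, mask in (("low", y_lag <= r), ("high", y_lag > r)):
        A = np.column_stack([np.ones(mask.sum()), y_lag[mask], x_lag[mask]])
        coefs[name], *_ = np.linalg.lstsq(A, y_next[mask], rcond=None)
    return coefs

# Noise-free simulated series: intercept 0.3, AR coefficient 0.5, exogenous
# coefficient 1.0 below the threshold r = 0; mirrored signs above it.
rng = np.random.default_rng(1)
x = rng.normal(size=201)
y = np.zeros(201)
for t in range(200):
    a, b = (0.3, 0.5) if y[t] <= 0 else (-0.3, -0.5)
    y[t + 1] = a + b * y[t] + 1.0 * x[t]
coefs = fit_tarx(y, x, r=0.0)
```

On noise-free data the per-regime least squares recovers the generating coefficients exactly, which makes the sketch easy to sanity-check.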
The esterification of oleic acid with 2-ethylhexanol in the presence of sulfuric acid as a homogeneous catalyst was investigated in this work to produce 2-ethylhexyl oleate (biodiesel) using semi-batch reactive distillation. The effects of reaction temperature (100 to 130 °C), 2-ethylhexanol:oleic acid molar ratio (1:1 to 1:3), and catalyst concentration (0.2 to 1 wt%) were studied. A high conversion of 97% was achieved at a reaction temperature of 130 °C, a free fatty acid to alcohol molar ratio of 1:2, and a catalyst concentration of 1 wt%. A simulation of the reactive distillation process was developed from basic principles in MATLAB, and good agreement with the experiments was achieved.
The present paper addresses the cultivation of Chlorella vulgaris microalgae in an airlift photobioreactor sparged with 5% CO2/air. The experimental data were compared with those obtained from a bioreactor aerated with air and from an unsparged bioreactor. The results showed that the biomass concentration reached 0.36 g l-1 in the bioreactor sparged with CO2/air, while it reached only 0.069 g l-1 in the unsparged bioreactor. They also showed that the bioreactor aerated with CO2/air gives more biomass production than the bioreactor aerated with air alone. This study proved that applying a sparging system for the cultivation of Chlorella vulgaris microalgae, using either a CO2/air mixture or air, has a significant
This research aims to study dimension-reduction methods that overcome the curse of dimensionality when traditional methods fail to provide good parameter estimates, so the problem must be dealt with directly. Two approaches were used to handle high-dimensional data: the first is the non-classical sliced inverse regression (SIR) method together with the proposed weighted standard SIR (WSIR) method, and the second is principal component analysis (PCA), the general method used for reducing dimensions. Both SIR and PCA are based on forming linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear
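The PCA baseline mentioned above can be sketched in a few lines of NumPy; this is a generic illustration of the method, not the study's code:

```python
import numpy as np

# Minimal PCA sketch: center the explanatory variables, take the right
# singular vectors of the centered matrix, and project onto the top-k of
# them. These projections are the principal-component scores.
def pca(X, k):
    Xc = X - X.mean(axis=0)                   # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                      # scores on top-k components

X = np.array([[1.0, 2.0], [3.0, 6.0], [5.0, 10.0]])  # columns perfectly correlated
Z = pca(X, 1)   # one component captures all the variance of this rank-1 data
```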
Abstract
The nuclear structure of the 28-40Si isotopes toward the neutron dripline has been investigated in the framework of the shell model with the Skyrme-Hartree-Fock method using certain Skyrme parameterizations. Moreover, static properties such as the proton, neutron, mass, and charge densities with their corresponding rms radii, neutron skin thicknesses, binding energies, separation energies, shell gaps, and pairing gaps have been calculated using the most recent Skyrme parameterizations. The calculated results have been compared with the available experimental data to identify which of these parameterizations gives results equivalent to the ex
Each phenomenon involves several variables. By studying these variables we find a mathematical formula for the joint distribution, and the copula is a useful tool for finding the amount of correlation; here the survival function was used to measure the relationship of age with the creatinine level in a person's blood. The SPSS program was used to extract the influential variables from a group of variables using factor analysis, and then the Clayton copula function, which builds shared binary distributions from multivariate distributions, was applied: the bivariate distribution was calculated, and then the survival function value was calculated for a sample of size (50) drawn from Yarmouk Ho
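The bivariate Clayton copula named above has a closed form, sketched here together with the standard survival relation it supports; this is the textbook definition, not the study's code:

```python
# Bivariate Clayton copula, for theta > 0:
#   C(u, v) = (u^(-theta) + v^(-theta) - 1)^(-1/theta)
# The corresponding joint survival probability follows from the standard
# copula relation  P(U > u, V > v) = 1 - u - v + C(u, v).
def clayton(u, v, theta):
    return (u**-theta + v**-theta - 1.0)**(-1.0 / theta)

def joint_survival(u, v, theta):
    return 1.0 - u - v + clayton(u, v, theta)
```

As theta approaches 0 the Clayton copula approaches the independence copula u*v, and larger theta gives stronger lower-tail dependence.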
This research aims to estimate stock returns according to the Rough Set Theory approach, to test its effectiveness and accuracy in predicting stock returns and its potential in the field of financial markets, and to rationalize investor decisions. The research sample totals (10) companies traded on the Iraq Stock Exchange. The results showed a remarkable Rough Set Theory application in data reduction, contributing to the rationalization of investment decisions. The most prominent conclusions are the capability of rough set theory in dealing with financial data and its applicability to forecasting stock returns. The research serves those interested in investing in stocks in financial
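The core rough-set operation behind the data reduction described above is the lower/upper approximation of a target set by indiscernibility classes; the sketch below is a generic illustration with made-up objects and attributes, not the study's data:

```python
# Rough-set approximations: objects with identical attribute values form
# indiscernibility classes; a target set X is bounded from below by the
# classes wholly inside X and from above by the classes that touch X.
def approximations(objects, attrs, X):
    classes = {}
    for obj in objects:
        classes.setdefault(tuple(attrs[obj]), set()).add(obj)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= X:
            lower |= cls        # certainly in X
        if cls & X:
            upper |= cls        # possibly in X
    return lower, upper

attrs = {"a": (1, 0), "b": (1, 0), "c": (0, 1), "d": (0, 0)}
X = {"a", "c"}                  # target set (e.g. "high-return" stocks)
lo, up = approximations(attrs.keys(), attrs, X)
# classes: {a,b}, {c}, {d}  ->  lower = {c}, upper = {a, b, c}
```

The gap between upper and lower approximations (here {a, b}) is the boundary region, the part of the data rough-set reduction tries to shrink.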
The Weibull distribution is considered one of the most widely applied distributions in real life. It is similar to the normal distribution in its range of applications, and it can be applied in many fields, such as industrial engineering (to represent replacement and manufacturing times), weather forecasting, and other scientific uses in reliability studies and survival functions in the medical and communication engineering fields.
In this paper, the scale parameter of the Weibull distribution has been estimated using a Bayesian method based on Jeffreys prior information as the first method, then enhanced by improving the Jeffreys prior information and used as a se
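A Bayes estimate of the Weibull scale under a Jeffreys prior can be sketched as follows, assuming the shape parameter k is known (an assumption not stated in the abstract). With theta = lambda^k, the Jeffreys prior p(theta) proportional to 1/theta yields an inverse-gamma posterior IG(n, sum(x_i^k)), whose mean is sum(x_i^k)/(n - 1); this is a standard derivation, not the paper's estimator:

```python
import numpy as np

# Posterior-mean estimate of the Weibull scale lambda under the Jeffreys
# prior 1/theta on theta = lambda^k, with the shape k assumed known.
def weibull_scale_bayes(x, k):
    x = np.asarray(x, dtype=float)
    t = np.sum(x**k)                 # sufficient statistic
    theta_hat = t / (len(x) - 1)     # posterior mean of lambda^k
    return theta_hat**(1.0 / k)      # back-transform to the scale lambda

rng = np.random.default_rng(0)
sample = rng.weibull(2.0, size=500) * 3.0   # true scale 3, shape 2
est = weibull_scale_bayes(sample, 2.0)      # close to 3 for this sample size
```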