A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. The compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while discarding the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample, and are very small compared to the original signals. The compression ratio is calculated as the size of the compressed signal relative to the size of the uncompressed signal. The proposed algorithms were implemented using the Matlab package.
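The two core steps of the abstract (wavelet approximation, then Levinson-Durbin) can be sketched in Python. This is a minimal illustration, not the authors' Matlab implementation: it keeps the Haar filter only (db4 is omitted), treats the rectangular window as the identity, and implements the Levinson-Durbin recursion directly.

```python
import numpy as np

def haar_approx(signal, level):
    """Keep only the wavelet approximation: each Haar level replaces
    the signal by scaled pairwise averages and drops the differences."""
    a = np.asarray(signal, dtype=float)
    for _ in range(level):
        if len(a) % 2:
            a = a[:-1]
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a

def levinson_durbin(x, order):
    """Levinson-Durbin recursion: returns LP coefficients a (a[0] = 1),
    the reflection coefficients, and the final prediction error."""
    r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    refl = np.zeros(order)
    for m in range(1, order + 1):
        acc = r[m] + a[1:m] @ r[m - 1:0:-1]
        k = -acc / e
        refl[m - 1] = k
        prev = a.copy()
        for i in range(1, m):
            a[i] = prev[i] + k * prev[m - i]
        a[m] = k
        e *= (1.0 - k * k)
    return a, refl, e
```

The compressed representation would then consist of `a` (and the retained samples needed to restart prediction), which is far smaller than the original signal.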
In this research we study the wavelet characteristics of the important time series known as the sunspot series, with the aim of verifying the periodogram that other researchers have obtained by spectral methods, and of observing the variation in the period length on one side and its shifting on the other.
A continuous wavelet analysis is performed on this series and its periodogram is identified initially. For more accuracy, the series is decomposed into its approximation and detail components to five levels; these components are filtered using a fixed threshold in one case and an independent threshold in the other, and the noise series is found, which represents the difference between
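The decompose-threshold-reconstruct step can be sketched as follows. This is a minimal Python illustration, not the authors' procedure: it assumes Haar filters, soft thresholding with the universal threshold, and reads "fixed" as one global noise estimate versus a per-level estimate; the abstract does not specify these details.

```python
import numpy as np

def haar_step(x):
    # one analysis level: scaled pairwise sums (approx) and differences (detail)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(x, levels=5, level_dependent=False):
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(d)
    n = len(x)
    # noise scale from the median absolute deviation of the finest details
    sigma_fixed = np.median(np.abs(details[0])) / 0.6745
    for i, d in enumerate(details):
        sigma = np.median(np.abs(d)) / 0.6745 if level_dependent else sigma_fixed
        details[i] = soft_threshold(d, sigma * np.sqrt(2.0 * np.log(n)))
    for d in reversed(details):
        a = haar_inverse(a, d)
    return a
```

The noise series of the abstract is then simply `x - wavelet_denoise(x)`.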
This work implements an electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments between two classes. The proposed method's performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods, achieving best accuracies of 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it
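The moment-extraction front end can be sketched as follows. This is a hedged illustration only: it uses a discrete Legendre basis (the abstract does not name the polynomial family), replaces the sparse filter with a naive variance-based selection, and leaves the SVM stage to any standard library.

```python
import numpy as np

def op_moments(signal, order):
    """Project a 1-D signal onto an orthonormal (Legendre-based)
    polynomial basis; the coefficients are the signal's moments."""
    n = len(signal)
    t = np.linspace(-1.0, 1.0, n)
    V = np.polynomial.legendre.legvander(t, order)   # shape (n, order + 1)
    Q, _ = np.linalg.qr(V)   # orthonormal columns on the discrete grid
    return Q.T @ signal

def select_moments(moment_matrix, k):
    """Naive stand-in for the sparse filter: keep the k moments with
    the largest variance across the sample set."""
    var = moment_matrix.var(axis=0)
    idx = np.argsort(var)[::-1][:k]
    return moment_matrix[:, idx], idx
```

The reduced moment vectors would then be fed, per the abstract, to a two-class SVM.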
In the 1960s, care and attention shifted from structure to the sign: if structure had been the pampered lady of research and studies, the sign has now become a pampered lady as well. Yet the relationship between structure and sign was not one of rupture, but of integration; its themes are those of structural analysis, intellectual themes that cannot be bypassed in contemporary research, especially since semiotics emerged from the linguistic turn.
We have tried to distinguish between text and discourse, which is a daunting task: whenever the difference between them seems clear, we return to wonder whether the text is the same as discourse, and is
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error introduced by the polynomial approximation. Huffman coding is then applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can lead to promising performance.
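The polynomial-plus-residue idea can be sketched for a 1-D block; this is a simplified Python illustration (the Huffman stage is omitted, and block splitting and the 2-D case are not shown). Because the integer residue is stored exactly, the round trip is lossless.

```python
import numpy as np

def rle_encode(arr):
    """Run-length encode an integer array as (value, run) pairs."""
    vals, runs = [], []
    for v in arr:
        if vals and vals[-1] == v:
            runs[-1] += 1
        else:
            vals.append(int(v))
            runs.append(1)
    return list(zip(vals, runs))

def rle_decode(pairs):
    out = []
    for v, r in pairs:
        out.extend([v] * r)
    return np.array(out)

def compress_block(block, degree=2):
    """Fit a low-degree polynomial, keep its coefficients plus the
    run-length-coded integer residue (the approximation error)."""
    t = np.arange(len(block))
    coeffs = np.polyfit(t, block, degree)
    pred = np.round(np.polyval(coeffs, t)).astype(int)
    return coeffs, rle_encode(block - pred)

def decompress_block(coeffs, rle, n):
    t = np.arange(n)
    pred = np.round(np.polyval(coeffs, t)).astype(int)
    return pred + rle_decode(rle)
```

In a full codec, `coeffs` and the RLE pairs would then be entropy-coded with Huffman, as the abstract describes.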
Many production companies suffer big losses because of high production costs and low profits, for several reasons: high raw-material prices, the absence of taxes on imported goods, and the deactivation of the consumer-protection, national-product, and customs laws. As a result, most consumers buy imported goods, which are characterized by modern specifications and low prices.
Production companies also suffer from uncertainty in cost, production volume, sales, availability of raw materials, and the number of workers, because these vary with the seasons of the year.
In this research we adopted a fuzzy linear programming model with fuzzy numbers.
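One common way to handle such a model is to represent each uncertain figure as a triangular fuzzy number and defuzzify it before solving an ordinary linear program. The sketch below shows only that defuzzification step, with hypothetical figures; it is not the research's actual model or data.

```python
import numpy as np

def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number (a, b, c):
    the crisp value is the mean of the three vertices."""
    a, b, c = tfn
    return (a + b + c) / 3.0

# Hypothetical fuzzy unit profits for two products (low, most likely, high)
fuzzy_profit = [(3.0, 4.0, 5.0), (6.0, 8.0, 10.0)]
crisp_profit = np.array([centroid(p) for p in fuzzy_profit])
```

The crisp coefficients obtained this way would then serve as the objective and constraint data of a standard LP solver.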
This paper deals with non-polynomial spline functions ("generalized splines") for finding the approximate solution of linear Volterra integro-differential equations of the second kind, and extends this work to solve systems of linear Volterra integro-differential equations. The performance of the generalized spline functions is illustrated with test examples.
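To make the equation class concrete, here is a standard trapezoidal marching scheme for a second-kind Volterra integral equation y(x) = f(x) + ∫₀ˣ K(x,t) y(t) dt. This is not the paper's generalized spline method, and it drops the differential term for brevity; it is shown only as a baseline illustration.

```python
import numpy as np

def volterra2_trapezoid(f, K, x):
    """March y(x) = f(x) + int_0^x K(x,t) y(t) dt on a uniform grid x
    with the trapezoidal rule, solving for y[i] at each step."""
    n = len(x)
    h = x[1] - x[0]
    y = np.empty(n)
    y[0] = f(x[0])
    for i in range(1, n):
        s = 0.5 * K(x[i], x[0]) * y[0]
        s += sum(K(x[i], x[j]) * y[j] for j in range(1, i))
        # the implicit y[i] term is moved to the left-hand side
        y[i] = (f(x[i]) + h * s) / (1.0 - 0.5 * h * K(x[i], x[i]))
    return y
```

For K ≡ 1 and f ≡ 1 the exact solution is y(x) = eˣ, which makes a convenient correctness check.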
The aerodynamic characteristics of general three-dimensional rectangular wings are considered using a non-linear interaction between a two-dimensional viscous-inviscid panel method and the vortex ring method. The potential flow about a two-dimensional airfoil, computed by the pioneering Hess & Smith method, was coupled with viscous laminar, transitional and turbulent boundary layers to solve the flow about complex airfoil configurations including stall effects. The Viterna method was used to extend the aerodynamic characteristics of the specified airfoil to high angles of attack. A modified vortex ring method was used to find the circulation values along the spanwise direction of the wing, which were then interacted with the sectional circulation obtained by the Kutta-Joukowsky theorem of
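The Kutta-Joukowsky step mentioned above relates sectional lift to circulation via L'(y) = ρ V Γ(y). The sketch below integrates that relation over an assumed elliptic spanwise circulation distribution; all numeric values are hypothetical, and the panel and vortex ring machinery of the paper is not reproduced.

```python
import numpy as np

rho, V, span, Gamma0 = 1.225, 30.0, 2.0, 5.0   # hypothetical values (SI units)

y = np.linspace(-span / 2.0, span / 2.0, 201)
# assumed elliptic circulation distribution along the span
Gamma = Gamma0 * np.sqrt(1.0 - (2.0 * y / span) ** 2)
lift_per_span = rho * V * Gamma                 # Kutta-Joukowsky: L' = rho*V*Gamma

# trapezoidal integration of sectional lift over the span
L = np.sum((lift_per_span[1:] + lift_per_span[:-1]) / 2.0) * (y[1] - y[0])

# closed form for the elliptic distribution, for comparison
L_exact = rho * V * Gamma0 * np.pi * span / 4.0
```

In the paper's method the circulation values would instead come from the modified vortex ring solution rather than an assumed ellipse.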
The support vector machine (SVM) is a type of supervised learning model that can be used for classification or regression depending on the dataset. SVM classifies data points by determining the best hyperplane between two or more groups. Working with enormous datasets, however, can cause a variety of issues, including poor accuracy and long computation times. In this research, SVM was extended by applying several non-linear kernel transformations: linear, polynomial, radial basis, and multi-layer kernels. The non-linear SVM classification model was illustrated and summarized in an algorithm using kernel tricks. The proposed method was examined using three simulation datasets with different sample
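The four kernels named in the abstract can be written as Gram-matrix functions. This is a minimal sketch with assumed default hyperparameters; "multi-layer" is read here as the common sigmoid (perceptron) kernel, which the abstract does not define precisely.

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

def poly_kernel(X, Y, degree=3, c=1.0):
    return (X @ Y.T + c) ** degree

def rbf_kernel(X, Y, gamma=0.5):
    # squared Euclidean distances, clipped to avoid tiny negatives
    d2 = (np.sum(X ** 2, axis=1)[:, None]
          + np.sum(Y ** 2, axis=1)[None, :] - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def mlp_kernel(X, Y, scale=0.01, offset=0.0):
    # sigmoid / "multi-layer" kernel
    return np.tanh(scale * (X @ Y.T) + offset)
```

Any kernel SVM solver can then consume these Gram matrices in place of explicit feature maps, which is exactly the kernel trick the abstract refers to.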
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages: the first preprocesses the data; the second performs feature extraction based on the Discrete Wavelet Transform (DWT); and the third performs classification based on a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved high classification accuracy rates of 99.1% on the MADBase database and 99.9% on the MNIST database.
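The DWT feature-extraction stage can be sketched as a single 2-D Haar level over a digit image; this is an illustrative assumption (the abstract does not state the wavelet or the level), and the SNN classifier stage is not reproduced.

```python
import numpy as np

def haar2d(img):
    """One 2-D Haar level: returns the LL, LH, HL and HH subbands.
    The coarse LL subband is a common compact feature map for digits."""
    a = np.asarray(img, dtype=float)
    # filter along rows: averages (lo) and differences (hi)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # then along columns, producing the four subbands
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH
```

For a 28x28 MNIST-style image the LL subband is 14x14; flattened, it would be the feature vector fed to the classifier.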
In this paper, we present a multiple-bit error correction coding scheme based on an extended Hamming product code combined with type-II HARQ, using shared resources for on-chip interconnects. The shared resources reduce the hardware complexity of the encoder and decoder compared to the existing three-stage iterative decoding method for on-chip interconnects. The proposed decoding method achieves 20% and 28% reductions in area and power consumption, respectively, with only a small increase in decoder delay compared to the existing three-stage iterative decoding scheme for multiple-bit error correction. The proposed code also achieves an excellent improvement in residual flit error rate and up to 58% of total power consumption compared to the other err
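The constituent code of such a scheme, the extended Hamming (8,4) code, can be sketched on its own: Hamming(7,4) plus an overall parity bit gives single-error correction with double-error detection (SEC-DED). This is only the building block; the product-code arrangement, HARQ retransmission, and resource sharing of the paper are not reproduced here.

```python
import numpy as np

# Hamming(7,4): parity bits at codeword positions 1, 2, 4; data at 3, 5, 6, 7.
G = np.array([[1, 1, 0, 1],   # p1 = d1 ^ d2 ^ d4
              [1, 0, 1, 1],   # p2 = d1 ^ d3 ^ d4
              [1, 0, 0, 0],   # d1
              [0, 1, 1, 1],   # p3 = d2 ^ d3 ^ d4
              [0, 1, 0, 0],   # d2
              [0, 0, 1, 0],   # d3
              [0, 0, 0, 1]])  # d4
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(data4):
    cw7 = (G @ np.asarray(data4)) % 2
    return np.append(cw7, cw7.sum() % 2)   # extend with overall parity

def decode(cw8):
    cw7 = cw8[:7].copy()
    syndrome = (H @ cw7) % 2
    pos = int(syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2])
    overall = cw8.sum() % 2
    if pos and overall == 1:        # single error: syndrome gives its position
        cw7[pos - 1] ^= 1
    elif pos and overall == 0:      # nonzero syndrome, even parity: two errors
        raise ValueError("double error detected")
    return cw7[[2, 4, 5, 6]]        # data bits at positions 3, 5, 6, 7
```

In a product code, rows and columns of a data block would each be encoded this way, which is what enables multiple-bit error correction across a flit.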