The electrocardiogram (ECG) is an important physiological signal for cardiac disease diagnosis. Modern ECG monitoring devices generate vast amounts of data that require huge storage capacity. To decrease storage costs and to make ECG signals suitable for transmission over common communication channels, the ECG data volume must be reduced, so an effective data compression method is required. This paper presents an efficient technique for the compression of ECG signals, in which different transforms are used to compress the data. First, the 1-D ECG data is segmented and aligned into a 2-D data array; then a 2-D mixed transform is applied to compress the data in this form. The compression algorithms were implemented and tested using the multiwavelet, wavelet, and slantlet transforms to form the proposed method based on mixed transforms. Vector quantization was then employed to quantize the mixed-transform coefficients. Selected records from the MIT/BIH arrhythmia database were tested, and the performance of the proposed methods was analyzed and evaluated using the MATLAB package. Simulation results showed that the proposed methods give a high compression ratio (CR) for ECG signals compared with other available methods; for example, compressing one record (record 100) yielded a CR of 24.4 with a percent root-mean-square difference (PRD) of 2.56%.
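The segment-and-align step above can be sketched as follows. This is a minimal illustration only: it uses a synthetic sine wave in place of record 100, a fixed segment length, and a one-level 2-D Haar transform with hard thresholding as a stand-in for the paper's mixed multiwavelet/wavelet/slantlet transforms and vector quantization.

```python
import numpy as np

def to_2d(signal, seg_len):
    """Segment a 1-D ECG signal into rows of a 2-D array (truncating the remainder)."""
    n = len(signal) // seg_len
    return signal[:n * seg_len].reshape(n, seg_len)

def haar_2d(block):
    """One level of a separable 2-D Haar transform (stand-in for the mixed transforms)."""
    def haar_1d(x):
        a = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)  # approximation (average) part
        d = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)  # detail (difference) part
        return np.concatenate([a, d], axis=-1)
    return haar_1d(haar_1d(block).T).T                 # transform rows, then columns

ecg = np.sin(np.linspace(0, 40 * np.pi, 1024))  # synthetic stand-in for an ECG record
arr = to_2d(ecg, 64)                            # 16 x 64 2-D array
coeffs = haar_2d(arr)
kept = np.abs(coeffs) > 0.05                    # hard threshold: zero small coefficients
print("retained coefficients:", int(kept.sum()), "of", coeffs.size)
```

Compression comes from entropy-coding or vector-quantizing the sparse thresholded coefficient array rather than the raw samples.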
In this research, a method for estimating the velocity of a moving airplane from its recorded digital sound is introduced. The sound file is sliced into several frames using overlapping partitions, and each frame is transformed from the time domain to the frequency domain using the Fourier transform (FT). To determine the characteristic frequency of the sound, a moving-window mechanism is used; the size of the window is made linearly proportional to the value of the tracked frequency, because a linear relationship exists between a frequency and its Doppler shift. An algorithm was introduced to select the characteristic frequencies; it allocates the frequencies which satisfy the Doppler relation, and besides that the tra…
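The frame-slicing and FT steps above can be sketched as follows. The sample rate, frame size, overlap, and emitted frequency here are assumptions for illustration, and a steady synthetic tone stands in for the aircraft recording; the paper's moving-window frequency tracker is reduced to a simple per-frame spectral peak.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Slice a 1-D signal into overlapping frames (hop < frame_len gives overlap)."""
    starts = range(0, len(x) - frame_len + 1, hop)
    return np.stack([x[s:s + frame_len] for s in starts])

fs = 8000                                   # assumed sample rate, Hz
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)          # synthetic stand-in for the recording
frames = frame_signal(tone, 512, 256)       # 50% overlap
spectra = np.abs(np.fft.rfft(frames, axis=1))
peak_hz = np.fft.rfftfreq(512, 1 / fs)[spectra.argmax(axis=1)]

# Doppler relation for an approaching source: f_obs = f0 * c / (c - v),
# so v = c * (1 - f0 / f_obs).
c = 343.0                                   # speed of sound, m/s
f0 = 440.0                                  # assumed emitted frequency, Hz
v = c * (1 - f0 / peak_hz.mean())
print("estimated source speed (m/s):", round(v, 2))
```

Since the synthetic tone carries no Doppler shift, the estimate stays near zero up to FFT bin-quantization error; with a real flyover recording, the per-frame peak would sweep downward as the aircraft passes.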
One study whose importance has grown significantly in recent years is lip reading, particularly with the widespread use of deep learning techniques. Lip reading is essential for speech recognition in noisy environments or for those with hearing impairments. It refers to recognizing spoken sentences using visual information acquired from lip movements. The lip area, especially for males, also suffers from several problems, such as a mustache and beard in the mouth region that may cover the lips. This paper proposes an automatic lip-reading system to recognize and classify short English sentences spoken by speakers using deep learning networks. Frames are extracted from the input video, and each frame is passed to the Viola-Jones…
This research includes the design and implementation of a database of Iraqi cities using a spatial data structure for storing data in two or more dimensions, the k-d tree. The proposed system allows records to be inserted, deleted, and searched by name or coordinate. All programming of the proposed system was written in Delphi ver. 7 and run on a personal computer (Intel Core i3).
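The insert and coordinate-search operations of such a 2-d tree can be sketched as below. The city names and coordinates are illustrative values, not the paper's actual database, and deletion is omitted for brevity.

```python
class Node:
    """One k-d tree node holding a 2-D coordinate and the record's name."""
    def __init__(self, point, name):
        self.point, self.name = point, name
        self.left = self.right = None

def insert(root, point, name, depth=0):
    """Insert a record, alternating the splitting axis (x, y, x, ...) by depth."""
    if root is None:
        return Node(point, name)
    axis = depth % 2
    if point[axis] < root.point[axis]:
        root.left = insert(root.left, point, name, depth + 1)
    else:
        root.right = insert(root.right, point, name, depth + 1)
    return root

def search(root, point, depth=0):
    """Exact-match search by coordinate; returns the stored name or None."""
    if root is None:
        return None
    if root.point == point:
        return root.name
    axis = depth % 2
    child = root.left if point[axis] < root.point[axis] else root.right
    return search(child, point, depth + 1)

# Hypothetical sample records (longitude, latitude), not the paper's data.
cities = [((44.4, 33.3), "Baghdad"), ((47.8, 30.5), "Basra"), ((43.1, 36.2), "Mosul")]
root = None
for pt, nm in cities:
    root = insert(root, pt, nm)
print(search(root, (47.8, 30.5)))  # → Basra
```

Searching by name instead of coordinate requires a full traversal (or a secondary index), since the k-d tree orders records only by their coordinates.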
The need for steganography methods that hide a secret message inside an image has risen. This study therefore develops a practical steganography procedure to hide text in an image. The user provides the system with both a text and a cover image, and obtains a resulting image that contains the hidden text. The suggested technique hides the text inside the header formats of a digital image. The Least Significant Bit (LSB) method is used to hide the message or text, in order to preserve the features and characteristics of the original image. A new method is applied that uses the whole image (header formats) to hide the text. The experimental results show that the suggested technique gives a higher embe…
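The LSB step mentioned above can be sketched as follows. This is a minimal sketch of plain LSB embedding in pixel values only; the paper's use of the image's header formats, and its full procedure, are not reproduced here. The cover array and message are illustrative.

```python
import numpy as np

def embed_lsb(pixels, text):
    """Hide text in the least significant bit of successive pixel values."""
    bits = [int(b) for ch in text.encode() for b in format(ch, "08b")]
    out = pixels.flatten().copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits  # clear LSB, then set it
    return out.reshape(pixels.shape)

def extract_lsb(pixels, n_chars):
    """Recover n_chars bytes from the LSBs of the stego image."""
    bits = pixels.flatten()[:n_chars * 8] & 1
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode()

cover = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
stego = embed_lsb(cover, "hi")
print(extract_lsb(stego, 2))  # → hi
```

Because only the lowest bit of each value changes, each pixel differs from the cover by at most 1, which is why the image's visible features are preserved.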
Computer vision seeks to mimic the human visual system and plays an essential role in artificial intelligence. It is based on different signal-preprocessing techniques; therefore, developing efficient techniques becomes essential to achieving fast and reliable processing. Various signal-preprocessing operations have been used for computer vision, including smoothing, signal analysis, resizing, sharpening, and enhancement, to reduce unwanted distortions and to support segmentation and image-feature improvement. For example, to reduce the noise in a disturbed signal, smoothing kernels can be used effectively; this is achieved by convolving the disturbed signal with a smoothing kernel. In addition, orthogonal moments (OMs) are a cruc…
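The smoothing-by-convolution example above can be sketched as follows, using a synthetic noisy sine wave and a simple 9-tap box (moving-average) kernel; the kernel width and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

def smooth(signal, kernel):
    """Denoise a 1-D signal by convolving it with a normalized smoothing kernel."""
    kernel = np.asarray(kernel, dtype=float)
    return np.convolve(signal, kernel / kernel.sum(), mode="same")

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 200))
noisy = clean + rng.normal(0, 0.3, 200)       # disturbed signal
denoised = smooth(noisy, np.ones(9))          # 9-tap box kernel

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
print(mse_denoised < mse_noisy)               # smoothing lowers the error
```

Averaging over 9 samples cuts the noise variance roughly ninefold while only slightly biasing the slowly varying signal, which is why the smoothed estimate lands closer to the clean curve.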
This paper deals with constructing a mixed probability distribution from an exponential distribution with scale parameter (β) and a Gamma distribution with parameters (2, β), with mixing proportions ( ). First, the probability density function (p.d.f.), the cumulative distribution function (c.d.f.), and the reliability function are obtained. The parameters of the mixed distribution, ( , β), are estimated by three different methods: maximum likelihood, the method of moments, and a proposed method, the Differential Least Square Method (DLSM). The comparison is done using a simulation procedure, and all results are reported in tables.
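The mixture p.d.f. described above can be written down numerically as below. The mixing weight p and scale β are illustrative values (the abstract's actual proportions are not given here), and the Gamma component uses shape 2 and scale β as stated.

```python
import numpy as np

def mixed_pdf(x, p, beta):
    """p.d.f. of a p : (1 - p) mixture of Exponential(beta) and Gamma(2, beta)."""
    exp_part = (1 / beta) * np.exp(-x / beta)           # Exponential(scale=beta)
    gamma_part = (x / beta**2) * np.exp(-x / beta)      # Gamma(shape=2, scale=beta)
    return p * exp_part + (1 - p) * gamma_part

p, beta = 0.4, 2.0                       # illustrative values, not the paper's
x = np.linspace(0.01, 50, 5000)
dx = x[1] - x[0]
total = float(np.sum(mixed_pdf(x, p, beta) * dx))
print(round(total, 2))                   # ≈ 1: a valid density integrates to one
```

The same function is the starting point for the estimation step: the maximum-likelihood estimates maximize the sum of log `mixed_pdf` over a sample, typically by numerical optimization.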