During the last two decades, audio compression has become the topic of much research owing to the importance of this field, which bears directly on storage capacity and transmission requirements. The rapid development of the computer industry has increased the demand for high-quality audio data; accordingly, the development of audio compression technologies is of great importance. Compression falls into two categories: lossy and lossless. This paper reviews lossy audio compression methods and summarizes the importance and uses of each.
In this paper, a compression system with a hybrid architecture is introduced; it is based on the wavelet transform, polynomial representation, and quadtree coding. The biorthogonal (tap 9/7) wavelet transform is used to decompose the image signal, and a 2D polynomial representation is utilized to prune the high-scale variation present in the image signal. Quantization and quadtree coding, followed by shift coding, are applied to compress the detail bands and the residue part of the approximation subband. The test results indicate that the introduced system is simple and fast, and that it leads to better compression gain than the case of using a first-order polynomial approximation.
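The polynomial pruning step above can be illustrated with a minimal sketch, which is not the paper's implementation but shows the general idea: fit a first-order 2D polynomial (a plane) to a block of samples by least squares, then keep only the residue for subsequent coding. Centered coordinates are assumed so the normal equations decouple.

```python
# Illustrative sketch only: first-order 2D polynomial approximation of a block,
# z ~= a + b*x + c*y, fitted by least squares over centered coordinates.

def fit_plane(block):
    """Least-squares plane fit over a rectangular block of samples.
    With coordinates centered on the block, the normal equations decouple."""
    rows, cols = len(block), len(block[0])
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    n = rows * cols
    a = sum(sum(r) for r in block) / n                      # mean value
    sxx = sum((x - cx) ** 2 for x in range(cols)) * rows
    syy = sum((y - cy) ** 2 for y in range(rows)) * cols
    b = sum(block[y][x] * (x - cx) for y in range(rows) for x in range(cols)) / sxx
    c = sum(block[y][x] * (y - cy) for y in range(rows) for x in range(cols)) / syy
    return a, b, c

def residue(block):
    """Subtract the fitted plane; the residue is what remains to be coded."""
    rows, cols = len(block), len(block[0])
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    a, b, c = fit_plane(block)
    return [[block[y][x] - (a + b * (x - cx) + c * (y - cy))
             for x in range(cols)] for y in range(rows)]

# A perfectly planar block leaves a zero residue:
ramp = [[10 + 2 * x + 3 * y for x in range(4)] for y in range(4)]
res = residue(ramp)
```

On smooth approximation-subband data this fit captures most of the energy, leaving a small residue that quantization and entropy coding can compress cheaply.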
The undetected error probability is an important measure for assessing the communication reliability provided by any error coding scheme. Two error coding schemes, namely Joint crosstalk avoidance and Triple Error Correction (JTEC) and JTEC with Simultaneous Quadruple Error Detection (JTEC-SQED), provide both crosstalk reduction and multi-bit error correction/detection. The available undetected error probability model yields an upper-bound value that does not give an accurate estimate of the reliability provided. This paper presents an improved mathematical model to estimate the undetected error probability of these two joint coding schemes. According to the decoding algorithm, the errors are classified into patterns and their decoding …
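The kind of upper bound the abstract refers to can be illustrated generically (this is a textbook-style sketch, not the paper's improved model): for a code that corrects up to t random errors per n-bit word, a crude bound on the probability of decoding failure is the probability that more than t bit errors occur, assuming an independent bit-error rate p.

```python
# Generic illustration, not the JTEC/JTEC-SQED model from the paper:
# P(> t errors in n bits) = 1 - sum_{i=0}^{t} C(n, i) p^i (1 - p)^(n - i)

from math import comb

def prob_more_than_t_errors(n, t, p):
    """Probability that more than t of n independent bits are flipped."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i)
                     for i in range(t + 1))

# Example: a 32-bit word with triple-error correction (t = 3), p = 1e-3
bound = prob_more_than_t_errors(32, 3, 1e-3)
```

Such a bound is loose precisely because many multi-bit patterns beyond t errors are still detected rather than silently miscorrected, which is the gap the paper's pattern-based classification aims to close.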
The presence of different noise sources and the continuous increase in crosstalk in deep-submicrometer technology have raised concerns about on-chip communication reliability, leading to the incorporation of crosstalk avoidance techniques in error control coding schemes. This brief proposes a joint crosstalk avoidance and adaptive error control scheme that reduces power consumption by providing appropriate communication resiliency based on the runtime noise level. By switching between shielding and duplication as the crosstalk avoidance technique, and between hybrid automatic repeat request and forward error correction as the error control policies, three modes of error resiliency are provided. The results show that, in reduced mode, the scheme achie…
Action films employ many artistic and literary elements that contribute greatly to building the general meaning of the film and pushing the film forward. Mystery and suspense are used as two basic elements in action films. The cinematic language in action films depends on global coding, which is not built on models as it might seem; it is based rather on logic, on units that aspire to morphology, homogeneous not in the physical sense but in the logical harmony of interpretive and enlightening authority. The action film is a field of communication, a field in whose origin the signifier contrasts with perceptions of the meaning, in which a certain number of units condition each other and thro…
A mathematical method and a new algorithm, implemented with the aid of the Matlab language, are proposed to compute the linear equivalence (or recursion length) of pseudo-random key-stream periodic sequences using the Fourier transform. The proposed method enables computing the linear equivalence to determine the degree of complexity of any binary or real periodic sequence produced by a linear or nonlinear key-stream generator. The procedure can be used with comparatively great computational ease and efficiency. The results of this algorithm are compared with the Berlekamp-Massey (BM) method, and good results are obtained; the results of the Fourier transform are more accurate than those of the (BM) method for computing the linear equivalence.
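For context, the Berlekamp-Massey baseline the abstract compares against can be sketched as follows. This is a standard GF(2) implementation of the algorithm, not the paper's Fourier-transform method: it returns the linear complexity (shortest LFSR length) of a binary sequence.

```python
# Standard Berlekamp-Massey over GF(2) - the baseline method the abstract
# compares against, not the proposed Fourier-transform algorithm.

def berlekamp_massey(bits):
    """Return the linear complexity (shortest LFSR) of a binary sequence."""
    n = len(bits)
    c = [0] * n          # current connection polynomial C(x)
    b = [0] * n          # previous connection polynomial B(x)
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: does the current LFSR predict bits[i]?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]                     # save C before updating
            shift = i - m
            for j in range(n - shift):   # C(x) <- C(x) + x^shift * B(x)
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# The period-3 LFSR output 1,1,0,1,1,0 has linear complexity 2:
L = berlekamp_massey([1, 1, 0, 1, 1, 0])
```

A sequence generated by a k-stage LFSR yields complexity k, while a sequence ending in a late first 1 (e.g. 0,0,1) forces the maximum complexity for its length.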
Many researchers have tackled the shear behavior of Reinforced Concrete (RC) beams using different kinds of strengthening in the shear regions and steel fibers. In the current paper, the effect of multiple parameters, such as using one percentage of Steel Fibers (SF) with and without stirrups, and using neither stirrups nor steel fibers, on the shear behavior of RC beams has been studied and compared using Finite Element (FE) analysis. Three-dimensional (3D) models of RC beams are developed and analyzed using the ABAQUS commercial software. The models were validated by comparing their results with experimental tests; four beams in total were modeled for validation purposes. Extensive pa…
In this research, the Artificial Neural Networks (ANNs) technique was applied in an attempt to predict the water levels and some of the water quality parameters of the Tigris River at five different sites in Wasit Governorate. These predictions are useful in the planning, management, and evaluation of the water resources in the area. Spatial data along a river system, measured at different locations in a catchment area, usually have missing measurements; hence an accurate prediction model to fill these missing values is essential.
The selected sites for water quality data prediction were the Sewera, Numania, Kut u/s, Kut d/s, and Garaf observation sites. For these five sites, models were built to predict the water level and the water quality parameters.
Groupwise non-rigid image alignment is a difficult non-linear optimization problem involving many parameters and often large datasets. Previous methods have explored various metrics and optimization strategies. Good results have previously been achieved with simple metrics requiring complex optimization, often with many unintuitive parameters that need careful tuning for each dataset. In this chapter, the problem is restructured to use a simpler, iterative optimization algorithm with very few free parameters. The warps are refined using an iterative Levenberg-Marquardt minimization to the mean, based on updating the locations of a small number of points and incorporating a stiffness constraint. This optimization approach is eff…
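The Levenberg-Marquardt iteration at the heart of the approach can be sketched on a toy problem (fitting a two-parameter exponential curve, not the chapter's image warps): a damped Gauss-Newton step is computed, accepted only if it reduces the cost, and the damping factor is adjusted accordingly.

```python
# Toy Levenberg-Marquardt sketch: fit y = a * exp(b * x) to data.
# Illustrates the damped-step / accept-reject mechanism, not the chapter's
# groupwise alignment objective. The 2x2 normal equations are solved directly.

from math import exp

def lm_fit(xs, ys, a, b, iters=100, lam=1e-3):
    """Levenberg-Marquardt for the model y = a * exp(b * x)."""
    def cost(a, b):
        return sum((y - a * exp(b * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        # Accumulate gradient g = J^T r and Gauss-Newton Hessian H = J^T J
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            e = exp(b * x)
            r = y - a * e
            j0, j1 = e, a * x * e            # d(model)/da, d(model)/db
            g0 += j0 * r; g1 += j1 * r
            h00 += j0 * j0; h01 += j0 * j1; h11 += j1 * j1
        # Damped system (H + lam * diag(H)) * delta = g, solved in closed form
        d00, d11 = h00 * (1 + lam), h11 * (1 + lam)
        det = d00 * d11 - h01 * h01
        da = (g0 * d11 - g1 * h01) / det
        db = (g1 * d00 - g0 * h01) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b = a + da, b + db
            lam *= 0.5                        # step accepted: trust the model more
        else:
            lam *= 2.0                        # step rejected: increase damping
    return a, b

xs = [i / 10 for i in range(10)]
ys = [2.0 * exp(0.7 * x) for x in xs]        # noiseless data, a = 2, b = 0.7
a_fit, b_fit = lm_fit(xs, ys, 1.0, 0.0)
```

The same accept-reject damping keeps the point-location updates stable in the alignment setting, where each step must also respect the stiffness constraint.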