Over the last two decades, audio compression has become the topic of much research because of its impact on storage capacity and transmission requirements. The rapid development of the computer industry has increased the demand for high-quality audio data, so the development of audio compression technologies is of great importance. Compression methods fall into two categories: lossy and lossless. This paper reviews lossy audio compression techniques and summarizes the importance and uses of each method.
In this paper, a simple color image compression system is proposed using image signal decomposition. The RGB color bands are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: the most significant and the least significant. Because the most significant value (MSV) is strongly influenced by even simple modifications, an adaptive lossless compression scheme is proposed for it using bit plane (BP) slicing, delta pulse code modulation (Delta PCM), and adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. On the other hand, a lossy compression scheme is introduced to handle the least significant value (LSV); it is based on …
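A minimal sketch of the decomposition step described above, assuming a standard BT.601 RGB-to-YUV conversion and a 4/4 bit split between MSV and LSV; the exact split point used in the paper is not stated here, so these parameters are illustrative only.

```python
# Sketch of the color conversion and MSV/LSV decomposition (assumed parameters).
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an HxWx3 uint8 RGB image to YUV (BT.601 weights, assumed)."""
    rgb = rgb.astype(np.float32)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    u = -0.147 * rgb[..., 0] - 0.289 * rgb[..., 1] + 0.436 * rgb[..., 2] + 128
    v = 0.615 * rgb[..., 0] - 0.515 * rgb[..., 1] - 0.100 * rgb[..., 2] + 128
    return np.clip(np.stack([y, u, v], axis=-1), 0, 255).astype(np.uint8)

def split_msv_lsv(band, msv_bits=4):
    """Split each 8-bit sample into most and least significant parts."""
    shift = 8 - msv_bits
    msv = band >> shift              # routed to the lossless path (BP slicing, Delta PCM, QT)
    lsv = band & ((1 << shift) - 1)  # routed to the lossy path
    return msv, lsv

yuv = rgb_to_yuv(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
msv, lsv = split_msv_lsv(yuv[..., 0])
```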
Color image compression is an effective way to encode digital images by decreasing the number of bits needed to represent the image. The main objectives are to reduce storage space, reduce transmission costs, and maintain good quality. In the current work, a simple and effective methodology is proposed for compressing color art digital images and obtaining a low bit rate by compressing the matrix resulting from the scalar quantization process (reducing the number of bits from 24 to 8 bits) using displacement coding and then compressing the remainder with the Lempel–Ziv–Welch (LZW) algorithm. The proposed methodology maintains the quality of the reconstructed image. Macroscopic and …
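As a hedged illustration of the 24-to-8-bit scalar quantization step mentioned above, the sketch below assumes a uniform 3-3-2 bit allocation across the R, G, and B channels; the paper's actual quantizer, displacement coder, and LZW stage are not reproduced here.

```python
# Illustrative 24-bit -> 8-bit scalar quantization (assumed 3-3-2 bit allocation).
import numpy as np

def quantize_rgb_to_8bit(rgb):
    """Pack an HxWx3 uint8 RGB image into one 8-bit index per pixel."""
    r = rgb[..., 0] >> 5   # keep 3 most significant bits of red
    g = rgb[..., 1] >> 5   # keep 3 most significant bits of green
    b = rgb[..., 2] >> 6   # keep 2 most significant bits of blue
    return (r << 5) | (g << 2) | b

def dequantize_8bit_to_rgb(idx):
    """Approximate inverse: expand the packed index back to 24-bit RGB."""
    r = ((idx >> 5) & 0x07) << 5
    g = ((idx >> 2) & 0x07) << 5
    b = (idx & 0x03) << 6
    return np.stack([r, g, b], axis=-1).astype(np.uint8)

img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
packed = quantize_rgb_to_8bit(img)         # 8 bits/pixel instead of 24
restored = dequantize_8bit_to_rgb(packed)  # this matrix would then be entropy coded
```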
The study includes the collection of data about cholera disease from six health centers in nine locations covering 2,500 km² and a population of 750,000 individuals. The average infection rate for the six centers during 2000-2003 was recorded. There were 3,007 cases of diarrhea diagnosed as cholera caused by Vibrio cholerae. The percentage of infection was 14.7% for males and 13.2% for females. The percentage of infection for children less than one year old was 6.1%, while it was 6.9% for ages 1-5 years and 14.5% for ages above 5 years. The total percentage of patients who stayed in hospital was 7.7% (4.2% for males and 3.4% for females). The bacterium was isolated and identified from 7 cases in the Central Laboratory for Health in Baghdad. …
Interest in belowground plant growth is increasing, especially in relation to arguments that shallow-rooted cultivars are efficient at exploiting soil phosphorus while deep-rooted ones will access water at depth. However, methods for assessing roots in large numbers of plants are diverse, and direct comparisons of methods are rare. Three methods for measuring root growth traits were evaluated for utility in discriminating rice cultivars: soil-filled rhizotrons, hydroponics, and soil-filled pots whose bottom was sealed with a non-woven fabric (a potential method for assessing root penetration ability). A set of 38 rice genotypes including the Oryza …
This article presents a comprehensive study of edge detection methods and algorithms in digital images, a basic process in the field of image processing and analysis. The purpose of edge detection is to discover the borders that separate distinct areas of an image, which contributes to a better understanding of the image contents and to extracting structural information. The article starts by clarifying the idea of an edge and its importance in image analysis, then surveys the most notable edge detection methods used in this field (e.g., the Sobel, Prewitt, and Canny filters), besides other schemes based on detecting abrupt changes in light intensity and color gradation. The article also discusses …
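As an illustration of one of the filters named above, the following sketch computes a Sobel gradient magnitude and thresholds it into a binary edge map; the threshold value is an arbitrary assumption, not one taken from the article.

```python
# Minimal Sobel edge detection sketch (threshold chosen for illustration only).
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(gray, threshold=100.0):
    """Return a binary edge map from a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T                                    # vertical-gradient kernel
    gx = convolve(gray.astype(np.float32), kx)   # horizontal gradient
    gy = convolve(gray.astype(np.float32), ky)   # vertical gradient
    magnitude = np.hypot(gx, gy)                 # gradient strength per pixel
    return magnitude > threshold                 # edges where intensity changes sharply

edges = sobel_edges(np.random.randint(0, 256, (64, 64)).astype(np.uint8))
```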
This paper aims to explain the effect of workplace respect on employee performance at the Abu Ghraib Dairy Factory (AGDF). To achieve the research aim, an analytical and descriptive approach was chosen, using a questionnaire to collect data. It covers 22 items: ten for the workplace respect variable and twelve for the employee performance variable. The research population comprised human resources staff who work at AGDF in Baghdad within two administrative levels (top and middle). A purposive stratified sampling approach was used: 70 questionnaire forms were distributed and 65 were returned; however, six of them had missing data and were not included in the final analysis. The main results are …
Vehicular Ad Hoc Networks (VANETs) are integral to Intelligent Transportation Systems (ITS), enabling real-time communication between vehicles and infrastructure to enhance traffic flow, road safety, and passenger experience. However, the open and dynamic nature of VANETs presents significant privacy and security challenges, including data eavesdropping, message manipulation, and unauthorized access. This study addresses these concerns by leveraging advancements in Fog Computing (FC), which offers low-latency, distributed data processing close to end devices to enhance the resilience and security of VANET communications. The paper comprehensively analyzes security frameworks for fog-enabled VANETs, introducing a novel taxonomy that …
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have small or inadequate datasets for training DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is usually fed a significant amount of labeled data to learn representations automatically. Ultimately, a larger amount of data generally yields a better DL model, although performance is also application dependent. This issue is the main barrier for …