This paper presents the application of a fast and efficient compressive sampling framework based on random sampling of sparse audio signals. It provides four important features. (i) It is universal across a variety of sparse signals. (ii) The number of measurements required for exact reconstruction is nearly optimal, much less than the conventional sampling rate and below the Nyquist rate. (iii) It has very low complexity and fast computation. (iv) It is built on a provable mathematical model from which we can quantify trade-offs among streaming capability, computation/memory requirements, and reconstruction quality of the audio signal. Compressed sensing (CS) is an attractive compression scheme due to its universality and low complexity on the sensor side. In this paper, a study of applying compressed sensing to audio signals is presented. The performance of different sparsifying bases and their corresponding reconstructions is investigated. Simulation results are presented to show efficient reconstruction of sparse audio signals. The results show that compressed sensing can dramatically reduce the number of samples below the Nyquist rate while maintaining a good PSNR.
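A minimal illustrative sketch of the compressive-sampling idea outlined above, assuming a DCT-sparse test signal, a random Gaussian measurement matrix, and orthogonal matching pursuit for recovery (the signal length, sparsity level, and number of measurements are placeholders, not values from the paper):

```python
# Illustrative compressive-sampling sketch (not the paper's exact scheme):
# an audio-like signal that is sparse in the DCT domain is sampled with a
# random Gaussian measurement matrix and recovered with orthogonal matching pursuit.
import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, k, m = 1024, 12, 128          # signal length, sparsity, number of measurements (m << n)

# Build a k-sparse DCT coefficient vector and the corresponding time-domain signal.
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
signal = idct(coeffs, norm='ortho')

# Random sampling: y = Phi @ signal, with m measurements far below n.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ signal

# Reconstruction: solve y ~ (Phi @ Psi) c for a sparse c, then map back to the time domain.
Psi = idct(np.eye(n), axis=0, norm='ortho')      # DCT synthesis basis
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi @ Psi, y)
recovered = Psi @ omp.coef_

print("relative reconstruction error:", np.linalg.norm(recovered - signal) / np.linalg.norm(signal))
```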
Background: The study of human leukocyte antigen (HLA) allele and haplotype frequencies within populations provides an important source of information for anthropological investigation, organ and hematopoietic stem cell transplantation, and disease association. Certain diseases show association with specific alleles, especially those of known or suspected hereditary origin or immunological basis. Whether simple renal cysts are congenital or acquired is still unclear and needs to be investigated. Objectives: To study the genetic aspect of simple renal cysts by determining the gene frequencies and HLA class I haplotypes of patients with simple renal cysts, and to assess the presence of these cysts in other family members. Method: Thirty patients …
Three hundred and fifty-five patients with hepatitis were investigated in this study; all cases gave negative results for HBsAg, IgM anti-HCV, IgM anti-HEV, IgM anti-HDV, and anti-HIV tests. IgM anti-HAV was found in 113 patients (32%) across all ages. When these patients were divided into five age groups, the highest percentage of IgM anti-HAV (45%) occurred in the <10-year group, and the percentage declined with increasing age to 9% in the >41-year group.
The main intention of this study was to investigate the development of a new optimization technique, based on the differential evolution (DE) algorithm, for linear frequency modulation radar signal de-noising. As the standard DE algorithm is a fixed-length optimizer, it is not suitable for solving signal de-noising problems that call for variable-length solutions. A modified crossover scheme, called rand-length crossover, was designed to fit the proposed variable-length DE, and the new algorithm is referred to as the random variable-length crossover differential evolution (rvlx-DE) algorithm. The measurement results demonstrate a highly efficient capability for target detection in terms of frequency response and peak forming that was isolated …
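For context, a minimal sketch of the standard fixed-length DE/rand/1/bin loop that rvlx-DE builds on; the population size, control parameters, and toy objective are assumptions, and the paper's rand-length crossover is not reproduced here:

```python
# Minimal standard DE/rand/1/bin sketch (the fixed-length baseline that rvlx-DE
# modifies); population size, F, CR, and the toy sphere objective are illustrative.
import numpy as np

def de_rand_1_bin(objective, bounds, pop_size=30, F=0.8, CR=0.9, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one gene taken from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial vector only if it improves the fitness.
            f_trial = objective(trial)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

best_x, best_f = de_rand_1_bin(lambda x: np.sum(x**2), bounds=[(-5, 5)] * 10)
print(best_x, best_f)
```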
In this work, the relationship between the ionospheric parameters (Maximum Usable Frequency (MUF), Lowest Usable Frequency (LUF), and Optimum Working Frequency (OWF)) has been studied for the ionospheric layer over the Iraqi zone. The capital, Baghdad (44.42°E, 33.32°N), was selected as the transmitter station, and many other cities spread over the Iraqi region were taken as receiver stations. The REC533 communication model, one of the modern ITU radio-broadcasting models, was used to calculate the LUF parameter, while the MUF and OWF ionospheric parameters were generated using the ASAPS international communication model, which represents one of the most advanced and …
Spatial data observed on a group of areal units are common in scientific applications. The usual hierarchical approach for modeling this kind of dataset is to introduce a spatial random effect with an autoregressive prior. However, the usual Markov chain Monte Carlo scheme for this hierarchical framework requires the spatial effects to be sampled from their full conditional posteriors one by one, resulting in poor mixing. More importantly, it makes the model computationally inefficient for datasets with a large number of units. In this article, we propose a Bayesian approach that uses the spectral structure of the adjacency matrix to construct a low-rank expansion for modeling spatial dependence. We propose a pair of computationally efficient estimation …
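A minimal sketch, on assumed toy data, of the general idea of building a low-rank spatial basis from the spectral decomposition of an areal adjacency matrix; it is not the authors' estimation procedure:

```python
# Low-rank spatial basis from the spectral structure of an adjacency matrix.
# The lattice, rank q, and noise model are illustrative assumptions.
import numpy as np

def lattice_adjacency(rows, cols):
    """Binary adjacency matrix of a rows x cols lattice (rook neighbourhood)."""
    n = rows * cols
    A = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:
                A[i, i + 1] = A[i + 1, i] = 1
            if r + 1 < rows:
                A[i, i + cols] = A[i + cols, i] = 1
    return A

rng = np.random.default_rng(0)
A = lattice_adjacency(10, 10)              # 100 areal units
eigvals, eigvecs = np.linalg.eigh(A)       # spectral decomposition of the adjacency
q = 15
E_q = eigvecs[:, -q:]                      # keep the q leading eigenvectors as a basis

# Low-rank representation of the spatial random effect: phi = E_q @ delta,
# so only q coefficients (not 100 site-level effects) need to be estimated.
delta = rng.standard_normal(q)
phi = E_q @ delta
y = 2.0 + phi + 0.1 * rng.standard_normal(A.shape[0])     # toy areal observations

# Simple least-squares plug-in fit (a stand-in for the full Bayesian updates).
X = np.column_stack([np.ones(A.shape[0]), E_q])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept and first spatial coefficients:", beta_hat[:4])
```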
This study employs evolutionary optimization and artificial intelligence algorithms to determine an individual's age from a single facial image. We used the WIKI dataset, widely considered the most comprehensive collection of facial images to date, including age and gender attributes. Estimating age from facial images remains an active topic of study, even though much research has been undertaken on establishing chronological age from facial photographs. Retrained artificial neural networks are used for classification after preprocessing and optimization techniques are applied. It is possible that the difficulty of determining age could be reduced …
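An illustrative sketch of the classification step only, retraining a small neural-network classifier on placeholder face descriptors; the feature extractor, age bins, and random data standing in for the WIKI images are assumptions:

```python
# Retrained neural-network classifier for coarse age groups (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples, n_features = 2000, 128               # e.g. 128-D face embeddings (placeholder)
X = rng.standard_normal((n_samples, n_features))
ages = rng.integers(1, 90, n_samples)
y = np.digitize(ages, bins=[18, 30, 45, 60])    # five coarse age groups

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0),
)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```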
Sansevieria trifasciata was studied as a potential biosorbent for chromium, copper, and nickel removal from electroplating and tannery effluents in a batch process. Different parameters influencing the biosorption process, such as pH, contact time, and amount of biosorbent, were optimized using the 80 mm sized particles of the biosorbent. As much as 91.3% of Ni and 92.7% of Cu were removed at pH 6 and 4.5, respectively, while optimum Cr removal of 91.34% from electroplating and 94.6% from tannery effluents was found at pH 6.0 and 4.0, respectively. The pseudo-second-order model was found to best fit the kinetic data for all the metals, as evidenced by their greater R² values. FTIR characterization of the biosorbent revealed the presence of carboxyl a…
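A brief sketch of fitting the pseudo-second-order kinetic model in its linearised form, t/q_t = 1/(k2·qe²) + t/qe; the contact times and uptake values below are illustrative, not the study's measurements:

```python
# Fit the linearised pseudo-second-order kinetic model and report R^2.
# The contact times and uptake values are placeholder data, not the study's results.
import numpy as np

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)      # contact time, min
q_t = np.array([4.1, 6.5, 8.9, 10.6, 11.4, 11.9, 12.1])      # metal uptake, mg/g

# Linear regression of t/q_t against t: slope = 1/qe, intercept = 1/(k2*qe^2).
slope, intercept = np.polyfit(t, t / q_t, 1)
qe = 1.0 / slope
k2 = 1.0 / (intercept * qe**2)

# Coefficient of determination for the linearised fit.
pred = slope * t + intercept
ss_res = np.sum((t / q_t - pred) ** 2)
ss_tot = np.sum((t / q_t - np.mean(t / q_t)) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min), R^2 = {r2:.4f}")
```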
With the proliferation of both Internet access and data traffic, recent breaches have brought into sharp focus the need for Network Intrusion Detection Systems (NIDS) to protect networks from increasingly complex cyberattacks. To differentiate between normal network processes and possible attacks, Intrusion Detection Systems (IDS) often employ pattern recognition and data mining techniques. Network and host system intrusions, attacks, and policy violations can be automatically detected and classified by an IDS. Using Python Scikit-Learn, the results of this study show that machine learning (ML) techniques such as Decision Tree (DT), Naïve Bayes (NB), and K-Nearest Neighbor (KNN) can enhance the effectiveness of an Intrusion Detection System …
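A hedged sketch of the kind of Scikit-Learn comparison described above; synthetic flow features stand in for a real labelled NIDS dataset (e.g. NSL-KDD), which is not reproduced here:

```python
# Compare DT, NB, and KNN classifiers on placeholder "network traffic" data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic flow features: 20 numeric statistics, binary label (0 = normal, 1 = attack).
# A real study would load a labelled intrusion dataset here instead.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```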
Polyacrylonitrile nanofibers (PANFs), made from a well-known polymer, have been extensively employed in the manufacturing of carbon nanofibers (CNFs), which have recently gained substantial attention due to their excellent features, such as spinnability, environmental friendliness, and commercial feasibility. Because of their high carbon yield, their versatility in tailoring the final CNF structure, and the simple formation of ladder structures through nitrile polymerization to yield stable products, CNFs and PAN have been the focus of extensive research as potential production precursors. For instance, the development of biomedical and high-performance composites has now become achievable. PAN homopolymer or PAN-based precursor copolymers can …
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional algorithms in data mining and machine learning do not scale well with data size. Mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple data sources. Data are aggregated at multiple resolutions, and the resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining and …
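A minimal sketch of the multi-resolution aggregation idea: incremental per-bin summaries kept at several resolutions, so that coarser levels trade accuracy for speed; the structure and the statistics kept here are illustrative assumptions, not the authors' exact design:

```python
# Incremental multi-resolution aggregation of a data stream (illustrative sketch).
from collections import defaultdict

class MultiResolutionAggregate:
    def __init__(self, resolutions=(1.0, 0.1, 0.01)):
        # One dictionary of running (count, sum) summaries per resolution (bin width).
        self.resolutions = resolutions
        self.levels = {r: defaultdict(lambda: [0, 0.0]) for r in resolutions}

    def insert(self, x, value):
        """Incrementally add one observation keyed by x with payload value."""
        for r, bins in self.levels.items():
            cell = bins[round(x / r)]        # bin index at this resolution
            cell[0] += 1
            cell[1] += value

    def mean(self, x, resolution):
        """Approximate mean of the payload near x at the chosen resolution."""
        count, total = self.levels[resolution][round(x / resolution)]
        return total / count if count else None

agg = MultiResolutionAggregate()
for i in range(10000):                       # e.g. a stream arriving from multiple sources
    agg.insert(x=i * 0.003, value=(i % 7))
print(agg.mean(5.0, 1.0), agg.mean(5.0, 0.01))   # coarse vs. fine answer
```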