Iris research focuses on developing techniques for identifying and locating relevant biometric features, accurate segmentation, and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces their effectiveness in real-time systems. This paper introduces a novel parameterized technique for iris segmentation. The method comprises a number of steps, starting with converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and then parameterizing the iris location, resulting in an accurate segmentation of the iris from the original image. A lossless Hexadata encoding method is then applied to the data, which is based on reducing each set of six data items to a single encoded value. The test results achieved acceptable byte-saving performance for the 21 square iris images of size 256x256 pixels, about 22.4 KB on average with an average decompression time of 0.79 s, and high byte-saving performance for two non-square iris images of sizes 640x480 and 2048x1536, which reached 76 KB / 2.2 s and 1630 KB / 4.71 s respectively. Finally, the proposed technique outperforms the standard lossless JPEG2000 compression technique, with an improvement of about 1.2 KB or more in saved bytes, implicitly demonstrating the power and efficiency of the suggested lossless biometric technique.
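As a rough illustration of the bit-plane step described above, the following Python sketch decomposes an 8-bit grayscale eye image into its bit planes and keeps only the most significant ones. The number of retained planes, the stand-in image, and the function name are illustrative assumptions; the paper's parameterized iris localisation and Hexadata encoding steps are not reproduced here.

```python
import numpy as np

def bit_planes(gray):
    """Decompose an 8-bit grayscale image into its 8 bit planes.

    Returns an array of shape (8, H, W), where plane 7 is the most
    significant bit and plane 0 the least significant.
    """
    gray = np.asarray(gray, dtype=np.uint8)
    planes = [(gray >> b) & 1 for b in range(8)]
    return np.stack(planes)  # planes[7] = MSB

if __name__ == "__main__":
    eye = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in eye image
    planes = bit_planes(eye)
    significant = planes[5:]  # keep the top 3 bit planes (illustrative choice)
    # Recombine only the significant planes into a coarse image for localisation.
    coarse = (significant * (2 ** np.arange(5, 8))[:, None, None]).sum(axis=0)
    print(coarse.shape, coarse.dtype)
```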
This research analyses several indicators, and their classifications, related to the teaching process and the scientific level of graduate studies at the university, using analysis of variance for ranked data with repeated measurements instead of the ordinary analysis of variance. We reach several conclusions about the important classifications of each indicator that affect the teaching process.
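A rank-based analysis of variance for repeated measurements is commonly carried out with the Friedman test; the sketch below assumes that form, and the departments, years, and rating values are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical ratings: rows = departments (subjects), columns = repeated
# measurements of one teaching-quality indicator across three academic years.
ratings = np.array([
    [72, 75, 78],
    [65, 70, 68],
    [80, 82, 85],
    [58, 61, 66],
    [74, 73, 79],
])

# Friedman's test is the rank-based analogue of repeated-measures ANOVA:
# each row is ranked separately, then the rank sums per column are compared.
stat, p = friedmanchisquare(ratings[:, 0], ratings[:, 1], ratings[:, 2])
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")
```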
Multiple elimination (de-multiple) is one of the seismic processing steps used to remove the effect of multiples and delineate the correct primary reflections. Normal move-out is used to flatten the primaries, after which the data are transformed to the frequency-wavenumber (f-k) domain. The flattened primaries align with the zero-wavenumber axis of the f-k domain, while other events (multiples and random noise) are distributed elsewhere. A dip filter is applied to pass the aligned data and reject the rest, separating primaries from multiples once the data are transformed back from the f-k domain to the time-distance domain. For that reason, the suggested name for this technique is the normal move-out frequency-wavenumber (NMO-FK) domain de-multiple.
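The sketch below illustrates the f-k dip-filtering idea on an NMO-corrected gather: flat (near zero-dip) events are passed and dipping energy is rejected. The pass-band width, the synthetic gather, and the function name are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def fk_dip_filter(gather, keep_fraction=0.05):
    """Pass near-zero-wavenumber energy of an NMO-corrected gather.

    gather: 2-D array (time samples x traces). After NMO the flattened
    primaries concentrate near wavenumber k = 0 in the f-k domain, while
    residual-moveout multiples and dipping noise map to larger |k|.
    keep_fraction: fraction of the wavenumber band retained around k = 0
    (an illustrative value).
    """
    fk = np.fft.fft2(gather)                   # to frequency-wavenumber domain
    k = np.fft.fftfreq(gather.shape[1])        # normalised wavenumber per trace
    mask = np.abs(k) <= keep_fraction          # pass band around k = 0
    fk_filtered = fk * mask[None, :]           # reject dipping (multiple) energy
    return np.real(np.fft.ifft2(fk_filtered))  # back to time-distance domain

# Synthetic example: a flattened primary survives, a dipping event is attenuated.
gather = np.zeros((500, 60))
gather[200, :] = 1.0                           # flattened primary
for ix in range(60):
    gather[300 + 2 * ix, ix] += 1.0            # dipping event (residual moveout)
filtered = fk_dip_filter(gather)
```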
This study was accomplished by testing three different models to determine rock type, pore-throat radius, and flow units for the Mishrif Formation in the West Qurna oilfield in southern Iraq, based on Mishrif full-diameter cores from 20 wells. The three models used were the Lucia rock-type classification; the Winland plot, which determines the pore-throat radius from the mercury injection test (r35); and the flow zone indicator (FZI) concept, which identifies flow units. Together, these enabled us to recognize the differences between the Mishrif units in these three categories. The study of pore characteristics is very significant in reservoir evaluation, as it controls the storage mechanism and reservoir fluid properties.
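For reference, the widely used forms of the Winland r35 equation and the flow zone indicator can be evaluated as in the sketch below; the core-plug values are illustrative, not taken from the Mishrif data set.

```python
import numpy as np

def winland_r35(k_md, phi_percent):
    """Winland r35 pore-throat radius (microns).

    log10(r35) = 0.732 + 0.588*log10(k) - 0.864*log10(phi),
    with permeability k in mD and porosity phi in percent.
    """
    return 10 ** (0.732 + 0.588 * np.log10(k_md) - 0.864 * np.log10(phi_percent))

def fzi(k_md, phi_frac):
    """Flow zone indicator (microns) from core permeability and porosity.

    RQI = 0.0314*sqrt(k/phi);  phi_z = phi/(1-phi);  FZI = RQI/phi_z.
    """
    rqi = 0.0314 * np.sqrt(k_md / phi_frac)
    phi_z = phi_frac / (1.0 - phi_frac)
    return rqi / phi_z

# Illustrative core-plug values (not from the Mishrif data set)
k, phi = 12.0, 0.18
print(f"r35 = {winland_r35(k, phi * 100):.2f} um, FZI = {fzi(k, phi):.2f} um")
```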
In the presence of deep-submicron noise, providing reliable and energy-efficient network-on-chip operation is becoming a challenging objective. In this study, the authors propose a hybrid automatic repeat request (HARQ)-based coding scheme that simultaneously reduces the crosstalk-induced bus delay and provides multi-bit error protection while achieving high energy savings. This is achieved by calculating two-dimensional parities and duplicating all the bits, which provides single-error correction and detection of up to six errors. The error correction reduces the performance degradation caused by retransmissions, and when this is combined with voltage-swing reduction, enabled by the scheme's high error-detection capability, high energy savings are achieved.
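The encoding idea can be sketched as follows: arrange the data bits in a matrix, append one parity bit per row and per column, and transmit the whole codeword twice. The matrix dimensions and flit width here are illustrative assumptions, not the exact link format of the proposed scheme.

```python
import numpy as np

def encode_2d_parity_dup(bits, rows=4, cols=8):
    """Encode a data word with two-dimensional parity and full duplication.

    The data bits are arranged in a rows x cols matrix; one parity bit is
    appended per row and per column, and the whole codeword is then sent
    twice (illustrative dimensions, not the paper's exact link format).
    """
    data = np.array(bits, dtype=np.uint8).reshape(rows, cols)
    row_parity = data.sum(axis=1) % 2            # one parity bit per row
    col_parity = data.sum(axis=0) % 2            # one parity bit per column
    codeword = np.concatenate([data.ravel(), row_parity, col_parity])
    return np.concatenate([codeword, codeword])  # duplicate all bits

word = np.random.randint(0, 2, 32)
tx = encode_2d_parity_dup(word)
print(len(tx), "bits on the link for a 32-bit flit")
```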
Error control schemes have become a necessity in network-on-chip (NoC) design to improve reliability, as on-chip interconnect errors increase with the continuous shrinking of geometry. Accordingly, many researchers are trying to present multi-bit error-correcting coding schemes that achieve high error-correction capability with the simplest possible design, to minimize area and power consumption. A recent work, Multi-bit Error Correcting Coding with Reduced Link Bandwidth (MECCRLB), showed a huge reduction in area and power consumption compared to a well-known scheme, namely Hamming product code (HPC) with Type-II HARQ. Moreover, the authors showed that the proposed scheme can correct 11 random errors, which is considered a high error-correction capability.
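For context only, the single-error-correcting (7,4) Hamming code that underlies Hamming-product-code baselines such as HPC can be encoded and decoded as in the sketch below; this is not the MECCRLB scheme itself, and the matrices are the standard systematic form.

```python
import numpy as np

# Standard systematic (7,4) Hamming code: G = [I4 | P], H = [P^T | I3].
# This is the basic SEC building block of HPC-style baselines, not MECCRLB.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(data4):
    return (np.array(data4) @ G) % 2

def correct(received7):
    syndrome = (H @ np.array(received7)) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position; flip that bit.
        err = int(np.where((H.T == syndrome).all(axis=1))[0][0])
        received7 = received7.copy()
        received7[err] ^= 1
    return received7

code = encode([1, 0, 1, 1])
noisy = code.copy(); noisy[2] ^= 1        # inject a single-bit error
print((correct(noisy) == code).all())     # True: the error is corrected
```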
Image compression is very important in reducing the costs of data storage and transmission over relatively slow channels. The wavelet transform has received significant attention because its multiresolution decomposition allows efficient image analysis. This paper attempts to give an understanding of the wavelet transform using two of the more popular wavelets, Haar and Daubechies, and compares their effects on image compression.
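A minimal sketch of such a comparison, assuming the PyWavelets library: decompose the image with a multilevel 2-D DWT, keep only the largest coefficients, and reconstruct. The retention fraction, decomposition level, and stand-in image are illustrative choices.

```python
import numpy as np
import pywt

def wavelet_compress(image, wavelet="haar", level=2, keep=0.05):
    """Toy lossy compression: multilevel 2-D DWT, keep the largest
    `keep` fraction of coefficients, zero the rest, then reconstruct.
    Swap wavelet="haar" for "db2" to compare Haar against Daubechies.
    """
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < threshold] = 0.0       # discard small coefficients
    coeffs_t = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs_t, wavelet=wavelet)

img = np.random.rand(128, 128)               # stand-in grayscale image
for w in ("haar", "db2"):
    rec = wavelet_compress(img, wavelet=w)
    print(w, "RMSE:", np.sqrt(np.mean((rec[:128, :128] - img) ** 2)))
```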
This paper presents the application of a fast and efficient compressive sampling framework based on the concept of random sampling of a sparse audio signal. It provides four important features. (i) It is universal across a variety of sparse signals. (ii) The number of measurements required for exact reconstruction is nearly optimal and much less than the sampling frequency, below the Nyquist rate. (iii) It has very low complexity and fast computation. (iv) It is developed on a provable mathematical model from which we are able to quantify the trade-offs among streaming capability, computation/memory requirements, and the reconstruction quality of the audio signal. Compressed sensing (CS) is an attractive compression scheme due to its universality.
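A minimal compressed-sensing sketch, assuming a synthetic DCT-sparse signal in place of real audio and orthogonal matching pursuit (scikit-learn) as the recovery algorithm; the paper's specific measurement and reconstruction method may differ.

```python
import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 512, 128, 10                        # signal length, measurements, sparsity

# Audio-like signal that is k-sparse in the DCT domain (synthetic stand-in).
coef = np.zeros(n)
coef[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
signal = idct(coef, norm="ortho")

# Random (sub-Nyquist) measurements: m << n inner products with Gaussian rows.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ signal

# Reconstruction: solve for the sparse DCT coefficients with OMP, then synthesise.
Psi = idct(np.eye(n), axis=0, norm="ortho")   # DCT synthesis basis
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ Psi, y)
recovered = Psi @ omp.coef_
print("relative error:", np.linalg.norm(recovered - signal) / np.linalg.norm(signal))
```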