Iris Data Compression Based on Hexa-Data Coding

Iris research focuses on developing techniques for identifying and locating relevant biometric features, accurate segmentation, and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces the effectiveness of the system in real-time use. This paper introduces a novel parameterized technique for iris segmentation. The method consists of a number of steps: converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and parameterizing the iris location, resulting in an accurate segmentation of the iris from the original image. A lossless Hexadata encoding method, based on reducing each set of six data items to a single encoded value, is then applied to the data. The tested results achieved acceptable byte-saving performance for the 21 square iris images of 256x256 pixels, about 22.4 KB on average with an average decompression time of 0.79 s, and high byte-saving performance for two non-square iris images of 640x480 and 2048x1536 pixels, reaching 76 KB / 2.2 s and 1630 KB / 4.71 s respectively. Finally, the proposed technique outperformed the standard lossless JPEG2000 compression technique by about 1.2 or more in KB saved, implicitly demonstrating the power and efficiency of the suggested lossless biometric technique.
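The bit-plane step described above can be sketched as follows; this is a minimal illustration of bit-plane decomposition only, assuming 8-bit grayscale input (the Hexadata encoder and the iris parameterization are not reproduced here):

```python
import numpy as np

def bit_planes(img):
    """Return the 8 bit planes of an 8-bit grayscale image (MSB first)."""
    return [(img >> b) & 1 for b in range(7, -1, -1)]

def reconstruct(planes, keep=3):
    """Rebuild an approximation from the `keep` most significant planes."""
    out = np.zeros_like(planes[0], dtype=np.uint8)
    for i in range(keep):
        out |= planes[i].astype(np.uint8) << (7 - i)
    return out

# Tiny illustrative "image", not real eye data
img = np.array([[200, 130], [64, 255]], dtype=np.uint8)
planes = bit_planes(img)
approx = reconstruct(planes, keep=3)
```

Keeping only the top few planes preserves the coarse structure that segmentation relies on while discarding low-order noise.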

Publication Date
Thu Sep 01 2011
Journal Name
Journal Of Economics And Administrative Sciences
Analysis of the indicators of the educational process and scientific level using the analysis of variance of ordered data in repeated measurements

In this research we analyse some indicators, and their classifications, related to the teaching process and the scientific level of graduate studies in the university, using the analysis of variance for ranked data with repeated measurements instead of the ordinary analysis of variance. We reach several conclusions about the important classifications of each indicator that affect the teaching process.
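The rank-based repeated-measures analysis of variance mentioned above corresponds to the Friedman test; a minimal sketch with invented scores (not the study's data):

```python
from scipy.stats import friedmanchisquare

# Hypothetical illustration: scores of the same six subjects (rows)
# under three repeated measurements. The Friedman test is the
# rank-based analysis of variance for repeated measures.
m1 = [72, 65, 80, 58, 90, 61]
m2 = [78, 70, 82, 63, 88, 66]
m3 = [85, 74, 87, 69, 93, 71]

stat, p = friedmanchisquare(m1, m2, m3)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```

A small p-value indicates that the repeated measurements differ systematically across the indicator's classifications.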

Publication Date
Thu Nov 30 2023
Journal Name
Iraqi Geological Journal
Multiple and Coherent Noise Removal from X-Profile 2D Seismic Data of Southern Iraq Using Normal Move Out-Frequency Wavenumber Technique

Multiple elimination (de-multiple) is one of the seismic processing steps used to remove the effects of multiples and delineate the correct primary reflectors. Applying normal move-out to flatten the primaries is the way to eliminate multiples after transforming the data to the frequency-wavenumber domain. The flattened primaries align with the zero axis of the frequency-wavenumber domain, while any other reflection types (multiples and random noise) are distributed elsewhere. A dip filter applied to pass the aligned data and reject the others separates primaries from multiples once the data are transformed back from the frequency-wavenumber domain to the time-distance domain. For this reason, a suggested name for this technique is the normal move-out frequency-wavenumber technique.
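A minimal sketch of the f-k dip-filtering idea, assuming NMO has already flattened the primaries (synthetic section, not the paper's data):

```python
import numpy as np

def fk_dip_filter(section, k_keep=2):
    """section: 2D array (time samples x traces). Keep |k| <= k_keep.

    After NMO flattening, primaries map onto k ~ 0 in the f-k domain,
    so a narrow pass band around zero wavenumber keeps primaries and
    rejects dipping events such as residual multiples.
    """
    fk = np.fft.fft2(section)                        # to f-k domain
    k = np.fft.fftfreq(section.shape[1]) * section.shape[1]
    fk[:, np.abs(k) > k_keep] = 0                    # reject dipping energy
    return np.fft.ifft2(fk).real                     # back to t-x domain

# Synthetic section: one flat event (primary) plus a dipping event
nt, nx = 64, 32
section = np.zeros((nt, nx))
section[30, :] += 1.0                                # flat primary
for x in range(nx):
    section[(10 + 2 * x) % nt, x] += 1.0             # dipping event
filtered = fk_dip_filter(section, k_keep=2)
```

The flat event survives almost untouched while most of the dipping energy is attenuated.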

Publication Date
Fri Sep 17 2021
Journal Name
Journal Of Petroleum Exploration And Production Technology
Characterization of flow units, rock and pore types for Mishrif Reservoir in West Qurna oilfield, Southern Iraq by using lithofacies data
This study was accomplished by testing three different models to determine rock types, pore-throat radius, and flow units for the Mishrif Formation in the West Qurna oilfield in Southern Iraq, based on Mishrif full-diameter cores from 20 wells. The three models used in this study were the Lucia rock-type classification; the Winland plot, utilized to determine the pore-throat radius from the mercury injection test (r35); and the flow zone indicator (FZI) concept to identify flow units, which enabled us to recognize the differences between the Mishrif units in these three categories. The study of pore characteristics is very significant in reservoir evaluation, as it controls the storage mechanism and reservoir fluid properties.
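The Winland r35 and FZI relations named above have standard published forms; a sketch with illustrative values (not the study's core data):

```python
import math

def winland_r35(k_md, phi_pct):
    """Winland r35 pore-throat radius (microns); k in mD, porosity in %."""
    return 10 ** (0.732 + 0.588 * math.log10(k_md)
                  - 0.864 * math.log10(phi_pct))

def fzi(k_md, phi_frac):
    """Flow Zone Indicator (microns); k in mD, porosity as a fraction."""
    rqi = 0.0314 * math.sqrt(k_md / phi_frac)    # Reservoir Quality Index
    phi_z = phi_frac / (1.0 - phi_frac)          # normalized porosity
    return rqi / phi_z

# Illustrative core-plug values, not from the paper
r35 = winland_r35(k_md=50.0, phi_pct=20.0)
unit_fzi = fzi(k_md=50.0, phi_frac=0.20)
```

Samples with similar r35 or FZI values are grouped into the same rock type or flow unit.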
Publication Date
Fri Dec 20 2019
Journal Name
IET Circuits, Devices & Systems
Multi‐bit error control coding with limited correction for high‐performance and energy‐efficient network on chip

In the presence of deep-submicron noise, providing reliable and energy-efficient network-on-chip operation is becoming a challenging objective. In this study, the authors propose a hybrid automatic repeat request (HARQ)-based coding scheme that simultaneously reduces the crosstalk-induced bus delay and provides multi-bit error protection while achieving high energy savings. This is achieved by calculating two-dimensional parities and duplicating all the bits, which provides single-error correction and six-error detection. The error correction reduces the performance degradation caused by retransmissions; combined with voltage-swing reduction, enabled by the scheme's high error detection, this achieves high energy savings.
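The two-dimensional parity idea can be sketched as follows; a generic illustration, not the authors' exact codec:

```python
import numpy as np

# Arrange the flit bits in a grid and compute a parity bit per row and
# per column. A single error flips exactly one row parity and one
# column parity, which pinpoints (and so corrects) the flipped bit.

def encode_2d_parity(bits_grid):
    row_par = bits_grid.sum(axis=1) % 2
    col_par = bits_grid.sum(axis=0) % 2
    return row_par, col_par

def correct_single_error(received, row_par, col_par):
    r = (received.sum(axis=1) % 2) != row_par    # rows with changed parity
    c = (received.sum(axis=0) % 2) != col_par    # columns with changed parity
    if r.any() and c.any():                      # single-bit error located
        received[np.argmax(r), np.argmax(c)] ^= 1
    return received

data = np.array([[1, 0, 1, 1],
                 [0, 1, 1, 0],
                 [1, 1, 0, 0]])
rp, cp = encode_2d_parity(data)
rx = data.copy()
rx[1, 2] ^= 1                                    # inject one bit error
fixed = correct_single_error(rx, rp, cp)
```

Multi-bit detection in the actual scheme additionally relies on the duplicated bits, which are omitted here.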

Publication Date
Mon Jun 01 2020
Journal Name
Journal Of Engineering
Reliability Analysis of Multibit Error Correcting Coding and Comparison to Hamming Product Code for On-Chip Interconnect

Error control schemes became a necessity in network-on-chip (NoC) design to improve reliability, as on-chip interconnect errors increase with the continuous shrinking of geometry. Accordingly, many researchers are trying to present multi-bit error correction coding schemes that deliver high error-correction capability with the simplest possible design, to minimize area and power consumption. A recent work, Multi-bit Error Correcting Coding with Reduced Link Bandwidth (MECCRLB), showed a huge reduction in area and power consumption compared to a well-known scheme, namely Hamming product code (HPC) with Type-II HARQ. Moreover, the authors showed that the proposed scheme can correct 11 random errors, which is considered a high correction capability.
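For reference, the row/column building block of a Hamming product code is the Hamming(7,4) single-error-correcting code; a generic sketch, not the MECCRLB scheme itself:

```python
# Hamming(7,4): 4 data bits, 3 parity bits at positions 1, 2 and 4.

def hamming74_encode(d):                 # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):                 # c: 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-based error position, 0 if clean
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1             # correct the single error
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                             # flip one bit on the link
```

In an HPC, every row and every column of a data block is encoded this way, which is what lets the product code correct multi-bit error patterns.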

Publication Date
Fri Apr 01 2016
Journal Name
Iosr Journal Of Computer Engineering
Lossless and Lossy Polynomial Image Compression

Publication Date
Sat Jul 01 2017
Journal Name
Diyala Journal For Pure Science
Correlated Hierarchical Autoregressive Models Image Compression

Publication Date
Sat Dec 01 2012
Journal Name
Iraqi Journal Of Physics
Wavelet compression for remotely sensed images

Image compression is very important in reducing the costs of data storage and transmission over relatively slow channels. The wavelet transform has received significant attention because its multiresolution decomposition allows efficient image analysis. This paper attempts to give an understanding of the wavelet transform using two of the more popular wavelet transforms, the Haar and Daubechies techniques, and makes a comparison between their effects on image compression.
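A one-level 2D Haar transform illustrates the multiresolution decomposition discussed above; a minimal sketch (the Daubechies case uses longer filters):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: approximation + 3 detail sub-bands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2        # row averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2        # row details
    ll = (a[0::2] + a[1::2]) / 2                 # column pass on averages
    lh = (a[0::2] - a[1::2]) / 2
    hl = (d[0::2] + d[1::2]) / 2
    hh = (d[0::2] - d[1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.zeros((2 * h, w)); d = np.zeros((2 * h, w))
    a[0::2], a[1::2] = ll + lh, ll - lh
    d[0::2], d[1::2] = hl + hh, hl - hh
    img = np.zeros((2 * h, 2 * w))
    img[:, 0::2], img[:, 1::2] = a + d, a - d
    return img

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar2d(img)
exact = ihaar2d(ll, lh, hl, hh)                  # perfect reconstruction
lossy = ihaar2d(ll, 0 * lh, 0 * hl, 0 * hh)      # keep approximation only
```

Zeroing (or coarsely quantizing) the detail sub-bands and entropy-coding the rest is the basic lossy wavelet compression scheme.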

Publication Date
Fri Aug 31 2012
Journal Name
Al-khwarizmi Engineering Journal
Sub–Nyquist Frequency Efficient Audio Compression

This paper presents the application of a framework for fast and efficient compressive sampling based on the concept of random sampling of a sparse audio signal. It provides four important features. (i) It is universal across a variety of sparse signals. (ii) The number of measurements required for exact reconstruction is nearly optimal and much less than the sampling frequency, below the Nyquist rate. (iii) It has very low complexity and fast computation. (iv) It is developed on a provable mathematical model from which we are able to quantify trade-offs among streaming capability, computation/memory requirements and quality of reconstruction of the audio signal. Compressed sensing (CS) is an attractive compression scheme due to its universality.
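The random-sampling-plus-reconstruction idea can be sketched with a toy compressed-sensing example using orthogonal matching pursuit; a generic illustration with invented dimensions, not the paper's framework:

```python
import numpy as np

# A length-n signal with k nonzero entries is recovered from m << n
# random measurements by greedy orthogonal matching pursuit (OMP).
rng = np.random.default_rng(0)
n, m, k = 128, 60, 4
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = [1.5, -2.0, 1.0, -1.2]
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                      # m sub-Nyquist measurements

support, r = [], y.copy()
for _ in range(k):                             # greedy support recovery
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
```

Here 60 random measurements suffice to recover a 128-sample signal exactly because only 4 of its entries are nonzero, illustrating the "measurements far below the Nyquist rate" claim.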
