Methods of speech recognition have been the subject of several studies over the past decade; speech recognition remains one of the most active areas of signal processing. The mixed transform is a useful tool for speech signal processing, developed for its ability to improve feature extraction. Speech recognition comprises three important stages: preprocessing, feature extraction, and classification. Recognition accuracy is strongly affected by the feature extraction stage; therefore, different mixed-transform models for feature extraction were proposed. Each recorded isolated word is a 1-D signal, which is first converted into a 2-D form. In the second stage of the word recognizer, the 2-D FFT, the Radon transform, the 1-D IFFT, and the 1-D discrete wavelet transform were used in the first proposed model, while the discrete multicircularlet transform was used in the second proposed model. The final stage of both models uses the dynamic time warping (DTW) algorithm for the recognition task. The performance of the proposed systems was evaluated on forty different isolated Arabic words, each recorded fifteen times in a studio under speaker-dependent conditions. The results show recognition accuracies of 91% and 89% using the discrete wavelet transform of type Daubechies Db1 and Db4, respectively, while accuracies between 87% and 93% were achieved using the discrete multicircularlet transform with 9 sub-bands.
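The dynamic time warping step used for recognition in both models can be illustrated as a standard dynamic-programming alignment between two 1-D feature sequences; this is a generic sketch, not the authors' implementation, and the nearest-template classifier below is an assumed usage:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(features, templates):
    """Nearest-template recognition: the word whose template has minimal DTW cost."""
    return min(templates, key=lambda word: dtw_distance(features, templates[word]))
```

Because DTW warps the time axis, a word spoken slowly still matches its template: `dtw_distance([0, 1, 2], [0, 0, 1, 1, 2])` is 0 even though the sequences have different lengths.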
Image compression plays an important role in reducing the size and storage requirements of data while significantly increasing the speed of its transmission over the Internet. Image compression has been an important research topic for several decades; recently, deep learning has achieved great success in many areas of image processing, and its use in image compression is growing steadily. Deep neural networks have also achieved great success in processing and compressing images of various sizes. In this paper, we present a structure for image compression based on a deep-learning Convolutional AutoEncoder (CAE), inspired by the diversity of the human eye
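A convolutional autoencoder of the kind described can be sketched in PyTorch; the layer sizes below are illustrative assumptions, not the paper's architecture, and the code dimensions (4 channels at 7×7 for a 28×28 input) simply demonstrate that the encoder output is smaller than the input:

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Minimal convolutional autoencoder: encoder halves spatial size twice,
    decoder mirrors it with transposed convolutions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(8, 4, 3, stride=2, padding=1),   # 14x14 -> 7x7 (compressed code)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 8, 3, stride=2, padding=1, output_padding=1),  # 7 -> 14
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),  # 14 -> 28
            nn.Sigmoid(),  # reconstructed pixel intensities in [0, 1]
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code
```

Training such a model typically minimizes a reconstruction loss (e.g. MSE) between input and output; the 4×7×7 code (196 values) is what would be stored or transmitted in place of the 784-pixel input.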
Project suspensions are among the most pressing problems confronting the construction field, owing to the sector's complexity and the interdependence of its essential delay-risk sources. Machine learning provides a suitable set of techniques for attacking such complex systems. The study aimed to develop a well-organized predictive data tool to examine and learn from delay sources based on historical data of construction projects, using decision trees and naïve Bayesian classification algorithms. An intensive review of the available data was conducted to explore the real reasons for and causes of construction project delays. The results show that the postpo
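The two classifiers named above can be sketched with scikit-learn; the feature names and the synthetic delay rule below are hypothetical stand-ins for the study's project records, shown only to illustrate how both models would be trained on the same data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)
# hypothetical features per project: [funding gap, design changes, weather-loss days]
X = rng.random((200, 3))
# hypothetical labeling rule: large funding gaps (plus design churn) cause delay
y = (X[:, 0] + 0.3 * X[:, 1] > 0.8).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
nb = GaussianNB().fit(X, y)

for name, model in [("decision tree", tree), ("naive Bayes", nb)]:
    print(name, "training accuracy:", model.score(X, y))
```

In practice one would hold out a test split and compare the two algorithms on unseen projects rather than on training accuracy.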
Variable selection is an essential and necessary task in the statistical modeling field. Several studies have tried to develop and standardize the process of variable selection, but it is difficult to do so. The first question researchers need to ask themselves is: what are the most significant variables that should be used to describe a given dataset's response? In this paper, a new method for variable selection using Gibbs sampler techniques has been developed. First, the model is defined, and the posterior distributions of all the parameters are derived. The new variable selection method is tested using four simulation datasets. The new approach is compared with some existing techniques: Ordinary Least Squares (OLS), Least Absolute Shrinkage
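The Gibbs sampler underlying the method alternates draws from each parameter's full conditional distribution. A minimal generic example (sampling a correlated bivariate normal, not the paper's variable-selection model) shows the mechanism: each coordinate is redrawn conditional on the current value of the other, and the chain's draws recover the joint distribution:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20000, seed=0):
    """Gibbs sampling for (X, Y) ~ standard bivariate normal with correlation rho.
    Each full conditional is N(rho * other, 1 - rho**2)."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    sd = np.sqrt(1.0 - rho**2)
    samples = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)   # draw X | Y = y
        y = rng.normal(rho * x, sd)   # draw Y | X = x
        samples[t] = (x, y)
    return samples

draws = gibbs_bivariate_normal(rho=0.8)
print(np.corrcoef(draws[:, 0], draws[:, 1])[0, 1])  # close to 0.8
```

In the variable-selection setting, the same alternation would cycle through regression coefficients and inclusion indicators instead of the two coordinates here.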
In this work, the emission spectra and atomic structure of an aluminum target were studied theoretically using the Cowan code. The Cowan code was used to calculate electron transitions between atomic configurations, including configuration interaction, by means of the Hartree-Fock method. The aluminum target can give a good emission spectrum in the XUV region at 10 nm, with an oscillator strength of 1.82.
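As a quick sanity check on the quoted wavelength, the photon energy at 10 nm follows from E = hc/λ; this conversion is standard physics, not an output of the Cowan code:

```python
# E (eV) = hc / lambda, with hc ≈ 1239.84 eV·nm
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given wavelength in nm."""
    return HC_EV_NM / wavelength_nm

print(photon_energy_ev(10.0))  # ≈ 124 eV, i.e. well inside the XUV band
```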
The hydrodynamic properties of laser-produced plasma (LPP) were investigated for the purpose of creating a light source operating in the EUV region. Such a light source is very important for lithography (semiconductor manufacturing). The improved MEDUSA code (Med103) can calculate the plasma hydrodynamic properties (velocity, electron density,
This paper describes a research effort aimed at developing solar housing models suitable for the Arabian region, since the Arabian Peninsula enjoys very high levels of solar radiation.
The current paper focuses on achieving energy efficiency through utilizing solar energy and conserving energy. This task can be accomplished by implementing the major elements of energy-efficient housing design, such as adopting an optimal photovoltaic system orientation to maximize captured solar energy and produce solar electricity. All precautions were taken to minimize the consumption of solar energy while providing suitable air-conditioning to the inhabitants of the solar house, in addition to the use of energy effici
The term "tight reservoir" is commonly used to refer to reservoirs with low permeability. Tight oil reservoirs have attracted concern owing to their considerable influence on oil output throughout the petroleum sector. Because of their low permeability, producing from tight reservoirs presents a variety of difficulties. The aim of this research is to perform a hydraulic fracturing treatment in a single vertical well in order to study the feasibility of fracking in the Saady reservoir. The Saady B reservoir in Iraq's Halfaya oil field is its most important tight reservoir. The diagnostic fracture injection test is determined for HF55 using the GOHFER soft
In this work, the study of
A database is characterized as an organized and distributed collection of data that allows the client to access stored data in a simple and convenient way. However, in the era of big data, traditional methods of data analytics may not be able to manage and process such large amounts of data. In order to develop an efficient way of handling big data, this work studies the use of the Map-Reduce technique to handle big data distributed on the cloud. This approach was evaluated using a Hadoop server and applied to EEG big data as a case study. The proposed approach showed clear enhancement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r
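The Map-Reduce pattern applied to distributed EEG data can be sketched as map tasks that emit partial (sum, count) pairs per channel and a reduce task that merges them; the per-channel mean computed here is an illustrative statistic, not the paper's full analysis, and the channel names are hypothetical:

```python
import numpy as np
from collections import defaultdict

def map_segment(segment):
    """Map: one EEG segment (channel -> samples) to (channel, (sum, count)) pairs."""
    return [(ch, (float(np.sum(x)), len(x))) for ch, x in segment.items()]

def reduce_pairs(all_pairs):
    """Reduce: merge partial sums and counts into per-channel means."""
    acc = defaultdict(lambda: [0.0, 0])
    for ch, (s, c) in all_pairs:
        acc[ch][0] += s
        acc[ch][1] += c
    return {ch: s / c for ch, (s, c) in acc.items()}

segments = [  # two segments, as if distributed to two workers
    {"Fp1": np.array([1.0, 3.0]), "Cz": np.array([2.0])},
    {"Fp1": np.array([5.0]), "Cz": np.array([4.0, 6.0])},
]
pairs = [p for seg in segments for p in map_segment(seg)]
print(reduce_pairs(pairs))  # {'Fp1': 3.0, 'Cz': 4.0}
```

The map phase parallelizes cleanly across workers because each segment is processed independently; only the small (sum, count) pairs travel to the reducer, which is what yields the response-time gains on a cluster such as Hadoop.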