Artificial Neural Networks (ANNs) are powerful and effective tools in time-series applications. The first aim of this paper is to identify the better and more efficient ANN models (Back Propagation, Radial Basis Function (RBF), and Recurrent Neural Networks) for modeling linear and nonlinear time-series behavior. The second aim is to find accurate estimators, since convergence sometimes gets stuck in local minima, one of the problems that can bias tests of the robustness of ANNs in time-series forecasting. To determine the best or optimal ANN models, the forecast skill score (SS) was employed to measure the efficiency of the ANN models' performance. The mean square error and the mean absolute error were also used to measure the estimation accuracy of the methods used. The important result obtained in this paper is that the optimal neural networks were Backpropagation (BP) and Recurrent Neural Networks (RNN) for modeling time series, whether linear, semi-linear, or nonlinear. The results also showed that RBF is inefficient and inaccurate (fails) for nonlinear time series, although it performs well for linear or semi-linear time series and overcomes the local-minimum problem. The results showed improvements in modern methods for time-series forecasting.
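For reference, the evaluation measures mentioned above have standard definitions; the exact reference forecast used for the skill score is not specified here, so the baseline term below is an assumption of the usual formulation:

```latex
\[
\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}\bigl(y_t - \hat{y}_t\bigr)^2, \qquad
\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\bigl|y_t - \hat{y}_t\bigr|, \qquad
\mathrm{SS} = 1 - \frac{\mathrm{MSE}_{\mathrm{model}}}{\mathrm{MSE}_{\mathrm{reference}}}
\]
```

Here SS approaches 1 for a perfect forecast and falls to 0 or below when the model does no better than the reference forecast.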
Various speech enhancement algorithms (SEAs) have been developed over the last few decades. Each algorithm has its advantages and disadvantages because the speech signal is affected by environmental conditions. Distortion of speech results in the loss of important features and makes the signal difficult to understand. SEAs aim to improve the intelligibility and quality of speech that has been degraded by different types of noise. In most applications, quality improvement is highly desirable, as it can reduce listener fatigue, especially when the listener is exposed to high noise levels for extended periods (e.g., in manufacturing). SEAs reduce or suppress the background noise to some degree and are sometimes called noise suppression algorithms.
The support vector machine (SVM) is a type of supervised learning model that can be used for classification or regression depending on the dataset. SVM classifies data points by determining the best hyperplane separating two or more groups. Working with enormous datasets, however, can lead to a variety of issues, including poor accuracy and long computation times. In this research, SVM was updated by applying several nonlinear kernel transformations: linear, polynomial, radial basis, and multi-layer kernels. The nonlinear SVM classification model was illustrated and summarized in an algorithm using kernel tricks. The proposed method was examined using three simulated datasets with different sample sizes.
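As an illustration of the kernel comparison described above, here is a minimal sketch using scikit-learn on a simulated dataset; the dataset, parameter values, and the use of the sigmoid kernel as a stand-in for the multi-layer kernel are assumptions, not the paper's exact experimental setup.

```python
# Illustrative sketch (not the paper's experiment): compare SVM kernels on a
# simulated dataset. The sigmoid kernel stands in for the "multi-layer" kernel.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    start = time.perf_counter()
    clf = SVC(kernel=kernel, C=1.0, gamma="scale").fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{kernel:8s} accuracy={acc:.3f} fit_time={elapsed:.2f}s")
```

The loop reports both accuracy and fit time, mirroring the trade-off between classification accuracy and computation time that the abstract highlights for large datasets.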
In this paper, a fusion of K models of full-rank weighted nonnegative tensor factor two-dimensional deconvolution (K-wNTF2D) is proposed to separate acoustic sources that have been mixed in an underdetermined reverberant environment. The model is adapted in an unsupervised manner under a hybrid framework of generalized expectation-maximization and multiplicative update algorithms. The derivation of the algorithm and the development of the proposed full-rank K-wNTF2D are presented. The algorithm also encodes a set of variable sparsity parameters derived from a Gibbs distribution into the K-wNTF2D model. This optimizes each sub-model in K-wNTF2D with the sparsity required to model the time-varying variances of the sources in the s
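The full K-wNTF2D derivation is not reproduced in this abstract; as background only, the multiplicative-update style it builds on can be illustrated with the classic two-factor nonnegative matrix factorization updates under a Euclidean cost, which is a heavily simplified stand-in for the tensor deconvolution model, not the proposed algorithm itself.

```python
# Simplified illustration of multiplicative updates (plain NMF, Euclidean cost).
# This is NOT the K-wNTF2D model, only the update style that such models extend.
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps   # spectral basis
    H = rng.random((rank, T)) + eps   # temporal activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

# Example: factor a random nonnegative "spectrogram"
V = np.abs(np.random.default_rng(1).standard_normal((64, 100)))
W, H = nmf_multiplicative(V, rank=4)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```

Because every update is a ratio of nonnegative terms, the factors stay nonnegative without explicit projection, which is the property that makes multiplicative updates attractive inside the hybrid EM framework described above.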
In high-dimensional semiparametric regression, balancing accuracy and interpretability often requires combining dimension reduction with variable selection. This study introduces two novel methods for dimension reduction in additive partial linear models: (i) minimum average variance estimation (MAVE) combined with the adaptive least absolute shrinkage and selection operator (MAVE-ALASSO) and (ii) MAVE with smoothly clipped absolute deviation (MAVE-SCAD). These methods leverage the flexibility of MAVE for sufficient dimension reduction while incorporating adaptive penalties to ensure sparse and interpretable models. The performance of both methods is evaluated through simulations using the mean squared error and variable selection criteria.
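For context, the two penalty functions referenced above are standard in the variable-selection literature; in common notation (which may differ from the notation used in the paper), the adaptive LASSO penalty and the SCAD penalty, the latter given through its derivative, are:

```latex
% Adaptive LASSO penalty with data-driven weights
\[
P^{\mathrm{ALASSO}}_{\lambda}(\beta) = \lambda \sum_{j=1}^{p} \hat{w}_j\,\lvert\beta_j\rvert,
\qquad \hat{w}_j = 1/\lvert\hat{\beta}_j\rvert^{\gamma}, \quad \gamma > 0
\]
% SCAD penalty, defined through its derivative, with a > 2
\[
p_{\lambda}'(\theta) = \lambda\left\{ I(\theta \le \lambda)
  + \frac{(a\lambda - \theta)_{+}}{(a-1)\lambda}\, I(\theta > \lambda) \right\},
\qquad \theta > 0
\]
```

Both penalties shrink small coefficients toward zero while penalizing large coefficients less severely than the ordinary LASSO, which is what yields the sparse yet interpretable fits described above.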
Recently, wireless communication environments with high speeds and low complexity have become increasingly essential. Free-space optics (FSO) has emerged as a promising solution for providing direct connections between devices in such high-spectrum wireless setups. However, FSO communications are susceptible to weather-induced signal fluctuations, leading to fading and signal weakness at the receiver. To mitigate these challenges, several mathematical models have been proposed to describe the transition from weak to strong atmospheric turbulence, including the Rayleigh, lognormal, Málaga, Nakagami-m, K-distribution, Weibull, negative-exponential, inverse-Gaussian, G-G, and Fisher-Snedecor F distributions. This paper extensive
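Two of the listed models can be written in their standard forms (the parameterizations below are the commonly used ones and may differ from the paper's; G-G is taken here to mean the Gamma-Gamma model): the lognormal distribution for weak turbulence and the Gamma-Gamma distribution spanning moderate to strong turbulence.

```latex
% Lognormal irradiance pdf (weak turbulence)
\[
f_{\mathrm{LN}}(I) = \frac{1}{I\,\sigma\sqrt{2\pi}}
  \exp\!\left(-\frac{(\ln I - \mu)^{2}}{2\sigma^{2}}\right), \qquad I > 0
\]
% Gamma-Gamma irradiance pdf (moderate-to-strong turbulence)
\[
f_{\mathrm{GG}}(I) = \frac{2\,(\alpha\beta)^{(\alpha+\beta)/2}}{\Gamma(\alpha)\,\Gamma(\beta)}\,
  I^{\frac{\alpha+\beta}{2}-1}\,
  K_{\alpha-\beta}\!\left(2\sqrt{\alpha\beta I}\right), \qquad I > 0
\]
```

where $K_{\nu}(\cdot)$ is the modified Bessel function of the second kind and $\alpha$, $\beta$ correspond to the effective numbers of large- and small-scale turbulence cells.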
A high-temperature superconductor with the nominal composition Bi1.6Pb0.4Sr1.8Ba0.2Ca2Cu3O10+δ was prepared by the solid-state reaction method. Two sets of samples were prepared: the first was quenched in air, and the second was quenched in liquid nitrogen. X-ray diffraction analysis showed an orthorhombic structure with two phases, a high-Tc phase (2223) and a low-Tc phase (2212), in addition to an impurity phase. The samples quenched in air displayed a sharp superconducting transition and a higher-Tc phase than the samples quenched in liquid nitrogen.
Computer systems and networks are being used in almost every aspect of our daily life; as a result, security threats to computers and networks have also increased significantly. Traditionally, password-based user authentication is widely used to authenticate legitimate users in the current system, but this method has many loopholes such as password sharing, shoulder surfing, brute-force attacks, dictionary attacks, guessing, phishing, and many more. The aim of this paper is to enhance the password authentication method by presenting keystroke dynamics with a back propagation neural network as a transparent layer of user authentication. Keystroke dynamics is a well-known and inexpensive behavioral biometric technology, which identi
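As a rough illustration of the approach (the feature layout, network size, and synthetic data below are assumptions for the sketch, not the paper's implementation), dwell and flight times extracted from key events can be fed to a small backpropagation-trained network that decides whether a typing sample matches the enrolled user:

```python
# Illustrative sketch only: keystroke-dynamics verification with a small
# backpropagation-trained network. Features and data are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

def timing_features(press_times, release_times):
    """Dwell time of each key plus flight time between consecutive keys."""
    press, release = np.asarray(press_times), np.asarray(release_times)
    dwell = release - press                # how long each key is held
    flight = press[1:] - release[:-1]      # gap between releasing one key and pressing the next
    return np.concatenate([dwell, flight])

# Synthetic feature vectors standing in for timing_features() output on a
# fixed passphrase: label 1 = legitimate user, label 0 = impostor attempts.
rng = np.random.default_rng(0)
genuine = rng.normal(loc=0.12, scale=0.02, size=(40, 19))
impostor = rng.normal(loc=0.20, scale=0.05, size=(40, 19))
X = np.vstack([genuine, impostor])
y = np.array([1] * 40 + [0] * 40)

# MLPClassifier trains a feed-forward network with backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Because the timing data are collected while the user types a normal password, the check can run as a transparent second layer on top of the existing password prompt.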
The major objective of this study is to establish a network of ground control points (GCPs) that can be used as a reference for any engineering project. A total station (Nikon Nivo 5.C), an optical level, and a Garmin navigator GPS were used to perform the traversing. The traverse measurements used nine points that covered the selected area irregularly. Near the Civil Engineering Department at Baghdad University, Al-Jadiriya, an attempt was made to assess the accuracy of the GPS by comparing its data against those obtained from the total station. The average error of this method is 3.326 m, with the highest coefficient of determination (R2) of 0.077 observed in the northing. While in
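For reference, the comparison metrics quoted above can be computed as in the brief sketch below; the coordinate values shown are hypothetical placeholders, not the surveyed data.

```python
# Illustrative only: comparing handheld-GPS northings against total-station
# northings for a set of traverse points (the numbers below are hypothetical).
import numpy as np

def r_squared(reference, measured):
    """Coefficient of determination of measured values against a reference."""
    ss_res = np.sum((reference - measured) ** 2)
    ss_tot = np.sum((reference - np.mean(reference)) ** 2)
    return 1.0 - ss_res / ss_tot

northing_ts = np.array([3683120.4, 3683145.9, 3683172.3, 3683198.8])   # total station (m)
northing_gps = np.array([3683123.1, 3683149.6, 3683169.0, 3683202.2])  # handheld GPS (m)

avg_error = np.mean(np.abs(northing_gps - northing_ts))
print(f"average error = {avg_error:.3f} m, R^2 = {r_squared(northing_ts, northing_gps):.3f}")
```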