The purpose of this article is to reduce noise in a signal by studying wavelet transforms and showing how to apply the most effective of them for signal processing and analysis. Since the Discrete Wavelet Transform method was used, we outline several transformation techniques together with the methodology for applying them to remove noise from the signal. Denoising proceeds from a threshold value and threshold functions for the Lifting Transform, the Wavelet Transform, and the Discrete Wavelet Packet Transform. Using the AMSE criterion, a comparison was made between them and the best was selected. When the aforementioned techniques were applied to real data represented by prices, it became evident that the lift…
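As a concrete illustration of the threshold-based wavelet denoising described above, here is a minimal Python sketch: a single-level Haar transform with the soft-thresholding function and the universal (VisuShrink-style) threshold. The Haar/single-level choices and all function names are our own illustrative assumptions, not taken from the article.

```python
import math

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform (even-length input)."""
    approx = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    detail = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level."""
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / math.sqrt(2))
        x.append((a - d) / math.sqrt(2))
    return x

def soft_threshold(coeffs, t):
    """Soft-thresholding function: shrink each coefficient toward zero by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal):
    """Single-level Haar denoising with the universal threshold
    sigma * sqrt(2 log n), sigma estimated by the MAD of the detail level."""
    approx, detail = haar_dwt(signal)
    mad = sorted(abs(d) for d in detail)[len(detail)//2]
    sigma = mad / 0.6745
    t = sigma * math.sqrt(2 * math.log(len(signal)))
    return haar_idwt(approx, soft_threshold(detail, t))
```

Because the transform is orthonormal, thresholding the (mostly noise) detail coefficients removes much of the noise energy while leaving a smooth signal almost untouched.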
This research aims to review the importance of estimating the nonparametric regression function using the so-called canonical kernel, which depends on rescaling the smoothing parameter; this parameter plays a large and important role in kernel estimation, as it gives the right amount of smoothing.
The importance of this method is shown through the application of these concepts to real data on international exchange rates of the U.S. dollar against the Japanese yen for the period from January 2007 to March 2010. The results demonstrated a preference for the nonparametric estimator with the Gaussian kernel over the other nonparametric and parametric regression estima…
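The rescaling idea behind the canonical kernel can be sketched as follows, using the standard canonical-bandwidth formula δ_K = (R(K) / μ₂(K)²)^(1/5), where R(K) is the integral of K² and μ₂(K) its second moment. The kernel table and function names below are our own illustration, not the abstract's notation.

```python
import math

# (R(K), mu2(K)) for two common kernels
KERNELS = {
    "gaussian":     (1.0 / (2.0 * math.sqrt(math.pi)), 1.0),
    "epanechnikov": (3.0 / 5.0, 1.0 / 5.0),
}

def canonical_bandwidth(kernel):
    """Canonical bandwidth delta_K = (R(K) / mu2(K)^2)^(1/5)."""
    r, mu2 = KERNELS[kernel]
    return (r / mu2 ** 2) ** 0.2

def rescale_bandwidth(h, from_kernel, to_kernel):
    """Rescale the smoothing parameter so both kernels apply a comparable
    amount of smoothing (the same asymptotic bias-variance trade-off)."""
    return h * canonical_bandwidth(to_kernel) / canonical_bandwidth(from_kernel)
```

For example, a Gaussian bandwidth of h converts to roughly 2.21·h for the Epanechnikov kernel, so the two estimators smooth the data comparably.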
In this research, we present a nonparametric approach for estimating a copula density using different kernel density methods. Four copula functions were used: Gaussian, Gumbel, Clayton, and Frank. Through various simulation experiments we generated the standard bivariate normal distribution at sample sizes (50, 100, 250, and 500), under both high and low dependency. Different kernel methods were used to estimate the probability density function of the copula with the marginals of this bivariate distribution: the mirror-reflection (MR), beta kernel (BK), and transformation kernel (KD) methods. A comparison was then carried out between the three methods across all the experiments using the integrated mean squared error. Furthermore, some…
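A minimal sketch of one of the methods named above, the mirror-reflection (MR) kernel estimator of a copula density: each observation in the unit square is reflected across the four edges and corners, giving nine copies, which removes the boundary bias of the plain kernel estimator. The Gaussian product kernel and the function names are our assumptions, not the paper's exact specification.

```python
import math

def gauss(u):
    """Standard Gaussian kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def mr_copula_density(u, v, data, h):
    """Mirror-reflection kernel estimate of a copula density at (u, v).
    Each pseudo-observation (ui, vi) in [0,1]^2 contributes nine reflected
    copies: (±ui, 2-ui) crossed with (±vi, 2-vi)."""
    n = len(data)
    total = 0.0
    for ui, vi in data:
        for ru in (ui, -ui, 2.0 - ui):
            for rv in (vi, -vi, 2.0 - vi):
                total += gauss((u - ru) / h) * gauss((v - rv) / h)
    return total / (n * h * h)
```

Near the edges of the unit square the reflected copies "fold back" the kernel mass that would otherwise leak outside [0, 1]², which is exactly the boundary correction the MR method provides.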
Semi-parametric regression models have been studied in a variety of applications and scientific fields due to their high flexibility in dealing with problematic data: they combine the ease of interpretation of the parametric part with the flexibility of the nonparametric part. The response variable or the explanatory variables can contain outliers, and the OLS approach is sensitive to outliers. To address this issue, robust (resistant) methods, which are less sensitive to outlying values in the data, were used. This study aims to estimate the partial regression model using the robust estimation method with the wavel…
This study employs wavelet transforms to address the issue of boundary effects. It also utilizes probit transform techniques, based on probit functions, to estimate the copula density function; this estimation depends on the empirical distribution function of the variables, and the density is estimated in the transformed domain. Recent research indicates that early implementations of this strategy may have been more efficient. Nevertheless, in this work we implemented two novel methodologies utilizing the probit transform and the wavelet transform, then evaluated and contrasted them using three criteria: root mean square error (RMSE), the Akaike information criterion (AIC), and log…
Abstract
The wavelet shrinkage estimator is an attractive technique for estimating nonparametric regression functions, but it is very sensitive to correlation in the errors. In this research, a low-degree polynomial model was used to address the boundary problem in wavelet shrinkage, in addition to using flexible threshold values in the case of correlated errors, which treat the coefficients at each level separately, unlike global threshold values that treat all levels simultaneously, such as the VisuShrink, False Discovery Rate, Improved Thresholding, and SureShrink methods. The study was conducted on real monthly data representing the rates of theft crimes f…
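The contrast between global and level-dependent thresholding mentioned above can be sketched as follows: the two classical thresholding functions, plus a per-level threshold estimated from each level's own coefficients, which is appropriate when correlated errors make the noise level differ across resolution levels. This is our illustrative sketch, not the article's exact procedure.

```python
import math

def soft(c, t):
    """Soft-thresholding function: shrink toward zero by t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def hard(c, t):
    """Hard-thresholding function: keep-or-kill."""
    return c if abs(c) > t else 0.0

def level_dependent_thresholds(detail_levels):
    """One universal threshold per resolution level, with sigma estimated
    from that level's own coefficients (MAD / 0.6745). Under correlated
    errors a single global threshold would over- or under-smooth some
    levels, because the per-level noise variance is no longer constant."""
    thresholds = []
    for coeffs in detail_levels:
        mad = sorted(abs(c) for c in coeffs)[len(coeffs)//2]
        sigma = mad / 0.6745
        thresholds.append(sigma * math.sqrt(2.0 * math.log(len(coeffs))))
    return thresholds
```

A global rule such as VisuShrink would apply the same t to every level; the function above instead returns a separate t per level, which is the flexibility the abstract refers to.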
Abstract
The problem of missing data represents a major obstacle to researchers in the process of data analysis, since this problem recurs in all fields of study, including social, medical, astronomical, and clinical experiments.
The presence of such a problem within the data under study may influence the analysis negatively and may lead to misleading conclusions, since these conclusions result from the great bias that the problem causes. Despite the efficiency of wavelet methods, they too are affected by missing data, which also harms the accuracy of estimation…
The use of parametric models and their associated estimation methods requires that many initial conditions be met for those models to represent the population under study, prompting researchers to look for more flexible alternatives, which are represented by nonparametric models.
In this study, the most important and most widespread estimators of the nonparametric regression function were investigated: the Nadaraya-Watson estimator and local polynomial regression, which are among the types of non-linear…
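The two estimators named above can be sketched in a few lines: the Nadaraya-Watson estimator is a kernel-weighted average of the responses, and local linear regression (the degree-1 special case of local polynomial regression) solves a weighted least-squares problem at each point. The Gaussian kernel and function names are our choices, not the study's.

```python
import math

def nadaraya_watson(x0, xs, ys, h):
    """Nadaraya-Watson estimate at x0: a Gaussian-weighted mean of ys."""
    w = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(wi * y for wi, y in zip(w, ys)) / sum(w)

def local_linear(x0, xs, ys, h):
    """Local linear (degree-1 local polynomial) estimate at x0, obtained in
    closed form from the weighted normal equations for
    min_{a,b} sum_i w_i (y_i - a - b (x_i - x0))^2; the fit at x0 is a."""
    w = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    s0 = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * (x - x0) * y for wi, x, y in zip(w, xs, ys))
    return (s2 * t0 - s1 * t1) / (s0 * s2 - s1 * s1)
```

Unlike Nadaraya-Watson, the local linear fit reproduces straight-line trends exactly, which is why local polynomial estimators have smaller boundary bias.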
In this research, fuzzy nonparametric methods based on some smoothing techniques were applied to real data from the Iraqi stock market, specifically data on the Baghdad company for soft drinks for the year 2016 (the period 1/1/2016 to 31/12/2016). A sample of 148 observations was obtained in order to construct a model of the relationship between the stock prices (low, high, modal) and the traded value. By comparing the results of the goodness-of-fit (G.O.F.) criterion for three techniques, we note that the lowest value of this criterion was obtained by the K-nearest neighbor method with the Gaussian function.
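A K-nearest-neighbour smoother of the kind compared in that study can be sketched as follows: the bandwidth at each point is set adaptively to the distance of the k-th nearest design point, and the k neighbours are combined with Gaussian weights. These details (weight function, bandwidth rule, names) are our assumptions, not the study's exact specification.

```python
import math

def knn_smoother(x0, xs, ys, k):
    """K-nearest-neighbour smoother with a Gaussian weight function.
    The local bandwidth is the distance from x0 to its k-th nearest
    design point, so the window widens where the data are sparse."""
    dists = sorted((abs(x0 - x), y) for x, y in zip(xs, ys))
    h = dists[k - 1][0] or 1e-12   # guard against a zero bandwidth
    num = den = 0.0
    for d, y in dists[:k]:
        w = math.exp(-0.5 * (d / h) ** 2)
        num += w * y
        den += w
    return num / den
```

Because the neighbourhood adapts to the local density of observations, this smoother behaves like a kernel estimator with a variable bandwidth, which is often what makes it competitive on irregular financial data.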