This study investigates asset returns on the Iraq Stock Exchange by employing both the Fama-MacBeth regression model and the Fama-French three-factor model. The research estimates cross-sectional regressions in which the model parameters vary over time and the independent variables serve as proxies. The dataset covers the first quarter of 2010 to the first quarter of 2024 and comprises 22 publicly listed companies across six industrial sectors. The study explores methodological advancements through the application of the Single Index Model (SIM) and Kernel Weighted Regression (KWR) in both time-series and cross-sectional analyses. The SIM outperformed the KWR approach in estimating time-varying beta coefficients, yielding a mean Root Mean Squared Error (RMSE) of 0.14316. Furthermore, the integrated KWR-SIM methodology achieved the lowest Adjusted Root Mean Squared Error (ARMSE) value, 0.08152, when modelling the association between risk factors and asset returns in the cross-sectional framework. Significance tests revealed heterogeneous responses of asset returns in the Iraqi financial market to the economic variables posited by the Fama-French model. The estimated beta coefficients fluctuated significantly for all assets, reflecting changing economic conditions. The results add to our knowledge of the risk-reward relationship in emerging markets and provide methodological insights into financial asset pricing. The evidence indicates that the KWR-SIM method offers better model-fitting capabilities.
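As a rough illustration of the two-pass Fama-MacBeth procedure referenced above (not the study's code), the following Python sketch estimates factor betas from time-series regressions and then runs period-by-period cross-sectional regressions; the DataFrame names `returns` and `factors` are hypothetical placeholders.

```python
# Illustrative sketch of the Fama-MacBeth two-pass procedure (not the paper's code).
# Assumes `returns` is a (T x N) DataFrame of asset excess returns and
# `factors` is a (T x 3) DataFrame of the Fama-French factors MKT, SMB, HML.
import numpy as np
import pandas as pd

def fama_macbeth(returns: pd.DataFrame, factors: pd.DataFrame):
    X = np.column_stack([np.ones(len(factors)), factors.values])

    # Pass 1: time-series regression per asset to estimate factor betas.
    betas = {}
    for asset in returns.columns:
        coef, *_ = np.linalg.lstsq(X, returns[asset].values, rcond=None)
        betas[asset] = coef[1:]                       # drop the intercept
    B = pd.DataFrame(betas, index=factors.columns).T  # N x 3 matrix of betas

    # Pass 2: cross-sectional regression at each date; premia vary over time.
    Z = np.column_stack([np.ones(len(B)), B.values])
    lambdas = []
    for t in returns.index:
        lam, *_ = np.linalg.lstsq(Z, returns.loc[t].values, rcond=None)
        lambdas.append(lam)
    lambdas = pd.DataFrame(lambdas, index=returns.index,
                           columns=["const", *factors.columns])

    # Fama-MacBeth estimates: time averages of the period-by-period premia,
    # with standard errors from their time-series variation.
    return lambdas.mean(), lambdas.std(ddof=1) / np.sqrt(len(lambdas))
```

A kernel-weighted variant such as the KWR-SIM combination described above would replace these unweighted least-squares fits with locally weighted fits around each estimation date.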
Bi-level programming minimizes or maximizes an upper-level objective function subject to constraints that themselves contain another (lower-level) objective function. This problem has received a great deal of attention in the programming community due to the proliferation of its applications and the use of evolutionary algorithms to address it. Two non-linear bi-level programming methods are used in this paper, with the goal of reaching the optimal solution through Monte Carlo simulation using different small and large sample sizes. The research concluded that the Branch and Bound algorithm was preferable for solving the non-linear bi-level programming problem because it produced better results.
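A minimal sketch of the Monte Carlo simulation idea mentioned above, assuming a toy non-linear bi-level problem; the objective functions, bounds, and sample sizes below are hypothetical, not those of the paper.

```python
# Illustrative Monte Carlo search for a small non-linear bi-level problem:
# sample upper-level (leader) decisions, solve the lower-level (follower)
# problem for each sample, and keep the best pair found.
import numpy as np
from scipy.optimize import minimize_scalar

def lower_level(x: float) -> float:
    """Follower's best response: minimize f(x, y) = (y - x)**2 + y over y in [0, 10]."""
    res = minimize_scalar(lambda y: (y - x) ** 2 + y, bounds=(0.0, 10.0), method="bounded")
    return res.x

def upper_objective(x: float, y: float) -> float:
    """Leader's objective F(x, y) = x**2 + (y - 5)**2 (to be minimized)."""
    return x ** 2 + (y - 5.0) ** 2

def monte_carlo_bilevel(n_samples: int, seed: int = 0):
    rng = np.random.default_rng(seed)
    best = (np.inf, None, None)
    for x in rng.uniform(0.0, 10.0, size=n_samples):   # sample leader decisions
        y = lower_level(x)                              # follower reacts optimally
        val = upper_objective(x, y)
        if val < best[0]:
            best = (val, x, y)
    return best

for n in (50, 5000):                                    # small vs. large sample size
    print(n, monte_carlo_bilevel(n))
```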
Estimation of the parameters of a linear regression model is usually based on the ordinary Least Squares method, which rests on several basic assumptions, so the accuracy of the parameter estimates depends on the validity of these assumptions. Among these assumptions are homogeneity of the error variance and normality of the errors; when they fail, use of the model becomes unrealistic, and they are often not achievable when the problem under study involves complex data arising from more than one model. The most successful technique for this purpose was the robust estimation method known as the MM-estimator (minimizing maximum likelihood estimator), which proved its efficiency.
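For illustration only, the following sketch contrasts OLS with a robust M-estimation fit on simulated data containing outliers; statsmodels' RLM performs M-estimation with Tukey's biweight, whereas a full MM-estimator (a high-breakdown S-estimate of scale followed by an M-step) is implemented in, for example, R's robustbase::lmrob.

```python
# Sketch of robust regression as a stand-in for the MM-estimator discussed above.
# Data are simulated purely for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)
y[:10] += 15.0                      # inject outliers that would distort OLS

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
rob = sm.RLM(y, X, M=sm.robust.norms.TukeyBiweight()).fit()

print("OLS   :", ols.params)        # pulled toward the outliers
print("Robust:", rob.params)        # close to the true coefficients (2.0, 0.5)
```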
Let R be a Γ-ring, and let σ, τ be two automorphisms of R. An additive mapping d from a Γ-ring R into itself is called a (σ,τ)-derivation on R if d(aαb) = d(a)ασ(b) + τ(a)αd(b) holds for all a, b ∈ R and α ∈ Γ. d is called strong commutativity preserving (SCP) on R if [d(a), d(b)]α = [a, b]α(σ,τ) holds for all a, b ∈ R and α ∈ Γ. In this paper, we investigate the commutativity of R via a strong commutativity preserving (σ,τ)-derivation d satisfying certain properties, when R is a prime and semiprime Γ-ring.
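For readability, the two defining identities can be restated in display form; the placement of the (σ,τ) label on the commutator follows common usage and may differ from the paper's typesetting.

```latex
% (sigma,tau)-derivation and SCP condition, restated from the abstract above.
\[
  d(a \alpha b) = d(a)\,\alpha\,\sigma(b) + \tau(a)\,\alpha\,d(b),
  \qquad \text{for all } a, b \in R,\ \alpha \in \Gamma,
\]
\[
  [d(a), d(b)]_{\alpha} = [a, b]_{\alpha}^{(\sigma,\tau)},
  \qquad \text{for all } a, b \in R,\ \alpha \in \Gamma.
\]
```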
In this paper, the class of semi
The nature of the dark sector of the Universe remains one of the outstanding problems in modern cosmology, with the search for new observational probes guiding the development of the next generation of observational facilities. Clues come from the tension between the predictions of Λ cold dark matter (ΛCDM) and observations of gravitationally lensed galaxies. Previous studies showed that galaxy clusters in ΛCDM are not strong enough lenses to reproduce the observed number of lensed arcs. This work aims to constrain warm dark matter (WDM) cosmologies by means of the lensing efficiency of galaxy clusters drawn from these alternative models. The lensing characteristics of two samples of simulated clusters in the Λ warm dark matter and ΛCDM
Bayesian models are commonly used in recent research across many scientific fields. This research presents a new Bayesian model for estimating parameters and forecasting using the Gibbs sampler algorithm. Posterior distributions are generated using the inverse gamma distribution and the multivariate normal distribution as prior distributions. The new method was used to investigate and summarize the posterior distribution of the Bayesian statistics. The theory and derivation of the posterior distribution are explained in detail in this paper. The proposed approach is applied to three simulated datasets with sample sizes of 100, 300, and 500. The procedure was also extended to a real dataset, the rock intensity dataset. The actual dataset was collected
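A minimal Gibbs sampler for Bayesian linear regression with a multivariate normal prior on the coefficients and an inverse-gamma prior on the error variance, in the spirit of the approach described above (an illustrative sketch, not the paper's implementation):

```python
# Gibbs sampler for Bayesian linear regression: normal prior on beta,
# inverse-gamma prior on the error variance sigma^2.
import numpy as np

def gibbs_linreg(X, y, n_iter=2000, a0=2.0, b0=1.0, tau0=100.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    V0_inv = np.eye(p) / tau0                 # prior precision of beta (mean zero)
    beta, sigma2 = np.zeros(p), 1.0
    draws_beta, draws_sigma2 = [], []
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # beta | sigma2, y  ~  multivariate normal
        Vn = np.linalg.inv(V0_inv + XtX / sigma2)
        mn = Vn @ (Xty / sigma2)
        beta = rng.multivariate_normal(mn, Vn)
        # sigma2 | beta, y  ~  inverse gamma (drawn via 1 / gamma)
        resid = y - X @ beta
        a_n = a0 + n / 2.0
        b_n = b0 + 0.5 * resid @ resid
        sigma2 = 1.0 / rng.gamma(a_n, 1.0 / b_n)
        draws_beta.append(beta)
        draws_sigma2.append(sigma2)
    return np.array(draws_beta), np.array(draws_sigma2)

# Example on simulated data with sample size 100 (mirroring the smallest design above).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.5, size=100)
b_draws, s_draws = gibbs_linreg(X, y)
print(b_draws[500:].mean(axis=0), s_draws[500:].mean())   # posterior means after burn-in
```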
Often, especially in practical applications, it is difficult to obtain data untainted by problems such as inconsistency of the error variance or other issues that impede the use of the usual methods, represented by ordinary least squares (OLS), for estimating the parameters of multiple linear regression models. This is why many statisticians resort to robust estimation methods, especially in the presence of outliers and the problem of error variance instability. Two robust methods were adopted, the robust weighted least squares (RWLS) and the two-step robust weighted least squares (TSRWLS), and their performance was verified
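The general idea behind robust weighted least squares can be sketched with iteratively reweighted least squares using Huber weights; this is a generic illustration, not the paper's RWLS or TSRWLS procedures.

```python
# Iteratively reweighted least squares with Huber weights: large residuals are
# downweighted so that outliers and unstable error variance have less influence.
import numpy as np

def huber_weights(resid, scale, c=1.345):
    u = np.abs(resid) / (scale + 1e-12)
    return np.where(u <= c, 1.0, c / u)          # downweight large residuals

def robust_wls(X, y, n_iter=50, tol=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from the OLS fit
    for _ in range(n_iter):
        resid = y - X @ beta
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # MAD scale
        w = huber_weights(resid, scale)
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta
```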
The objective of the research is to shed light on the most important treatments for the problem of missing values in time series data and their influence on simple linear regression. The research deals with the effect of missing values in the independent variable only. This was carried out by removing values from an originally complete time series and testing the influence of the missing values on the simple regression analysis of data from an experiment on the effect of the quantity of consumed ration on broiler weight over 15 weeks. The results showed that the missing values had no significant effect, as the model estimated after introducing the missing values remained consistent and statistically significant. The results also
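The experimental design described above can be mimicked in a few lines: delete some values of the independent variable from a complete series, impute them, and compare the refit coefficients with those from the complete data. The data and the interpolation-based imputation below are hypothetical, not the study's.

```python
# Check how missing values in the independent variable affect a simple regression fit.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
weeks = np.arange(1, 16)                               # 15 weekly observations
ration = 100 + 10 * weeks + rng.normal(0, 5, 15)       # consumed ration (x)
weight = 0.02 * ration + rng.normal(0, 0.5, 15)        # broiler weight (y)

def fit(x, y):
    return np.polyfit(x, y, deg=1)                     # returns (slope, intercept)

full = fit(ration, weight)

x_missing = pd.Series(ration.copy())
x_missing.iloc[[3, 8, 12]] = np.nan                    # introduce missing x values
x_imputed = x_missing.interpolate().to_numpy()         # fill by linear interpolation
after = fit(x_imputed, weight)

print("complete series :", full)
print("after imputation:", after)
```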
Chemical pollution is a very important issue from which people suffer; it often affects the health of society and the health of future generations. Consequently, it must be studied in order to build suitable models and find descriptions that predict its behaviour in the coming years. Chemical pollution data in Iraq have a wide scope and manifold sources and kinds, which classifies them as Big Data that need to be studied using novel statistical methods. The research focuses on using the proposed nonparametric procedure (NP method) to develop an OCMT test procedure for estimating the parameters of a linear regression model with a large volume of data (Big Data) comprising many indicators associated with chemical
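Assuming OCMT here refers to a one-covariate-at-a-time multiple-testing selection step for high-dimensional regression, a rough sketch looks as follows; the threshold, data, and selection rule are illustrative assumptions, not the proposed NP procedure itself.

```python
# One-covariate-at-a-time screening: regress y on each candidate indicator
# separately and keep those whose t-statistic clears a multiplicity-adjusted cutoff.
import numpy as np
from scipy import stats

def ocmt_select(X, y, p_value=0.01):
    n, k = X.shape
    crit = stats.norm.ppf(1 - p_value / (2 * k))        # Bonferroni-style threshold
    selected = []
    for j in range(k):
        Zj = np.column_stack([np.ones(n), X[:, j]])     # regress y on one covariate
        beta, *_ = np.linalg.lstsq(Zj, y, rcond=None)
        resid = y - Zj @ beta
        s2 = resid @ resid / (n - 2)
        se = np.sqrt(s2 * np.linalg.inv(Zj.T @ Zj)[1, 1])
        if abs(beta[1] / se) > crit:
            selected.append(j)
    return selected

# Final estimation: OLS on the selected indicators only (simulated data).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))                         # many candidate indicators
y = 1.5 * X[:, 2] - 0.8 * X[:, 7] + rng.normal(size=5000)
keep = ocmt_select(X, y)
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(5000), X[:, keep]]), y, rcond=None)
print(keep, coef)
```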