In this research, a comparison is made between robust M-estimators for the cubic smoothing splines technique, used to avoid the problem of non-normality in the data or contamination of the errors, and the traditional estimation method of the cubic smoothing splines technique, using two comparison criteria (MADE and WASE) for different sample sizes and levels of dispersion. The goal is to estimate the time-varying coefficient functions for balanced longitudinal data, which are characterized by observations obtained from (n) independent subjects, each measured repeatedly at a set of (m) specific time points, so that the repeated measurements within a subject are correlated while measurements from different subjects are independent.
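For context, a minimal sketch of the robust criterion involved (a standard form, not quoted from the paper): the classical cubic smoothing spline minimizes a penalized sum of squares, and the M-type variant replaces the squared residuals with a robust loss ρ (e.g., Huber's), which is what protects the fit against non-normal or contaminated errors. The paper's longitudinal varying-coefficient setting generalizes this single-curve version.

```latex
% rho(u) = u^2 recovers the classical cubic smoothing spline
\hat{f}_{M} \;=\; \arg\min_{f}\;
\sum_{i=1}^{n} \rho\!\left(\frac{y_i - f(t_i)}{\hat{\sigma}}\right)
\;+\; \lambda \int \bigl(f''(t)\bigr)^{2}\,dt
```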
The non-homogeneous Poisson process is one of the statistical subjects of importance to other sciences, with wide application in different areas such as queueing systems, repairable systems, computer and communication systems, and reliability theory, among many others. It is also used to model phenomena that occur in a non-constant way over time (events whose rate changes with time).
This research deals with some of the basic concepts related to the non-homogeneous Poisson process and carries out two models of the process, the power-law model and the Musa–Okumoto model, to estimate the …
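For reference, the standard intensity and mean-value functions of these two models (textbook forms; the paper's parameterization may differ):

```latex
% Power-law (Crow--AMSAA) model:
\lambda_{\mathrm{PL}}(t) = \alpha\beta\, t^{\beta-1}, \qquad
m_{\mathrm{PL}}(t) = \mathbb{E}\,N(t) = \alpha\, t^{\beta}
% Musa--Okumoto (logarithmic Poisson) model:
\lambda_{\mathrm{MO}}(t) = \frac{\lambda_{0}}{1+\lambda_{0}\theta t}, \qquad
m_{\mathrm{MO}}(t) = \frac{1}{\theta}\,\ln\!\bigl(1+\lambda_{0}\theta t\bigr)
```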
This paper is concerned with finding solutions to free-boundary inverse coefficient problems. Mathematically, we handle a one-dimensional non-homogeneous heat equation subject to initial and boundary conditions as well as non-localized integral observations of zeroth- and first-order heat momentum. The direct problem is solved for the temperature distribution and the non-localized integral measurements using the Crank–Nicolson finite difference method. The inverse problem is solved by simultaneously finding the temperature distribution, the time-dependent free-boundary function indicating the location of the moving interface, and the time-wise thermal diffusivity or advection velocities. We reformulate the inverse problem as a non-…
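As an illustration of the direct-solver building block, here is a minimal Crank–Nicolson sketch for the constant-coefficient 1-D heat equation with fixed Dirichlet boundaries; the paper's actual problem adds a moving boundary, variable coefficients, and integral observations, none of which are modeled here.

```python
# Hypothetical sketch: Crank-Nicolson for u_t = a * u_xx on [0, L]
# with constant Dirichlet boundary values. Illustration only.
import numpy as np

def crank_nicolson_step(u, a, dx, dt):
    """Advance the interior nodes of u one time step (boundaries fixed)."""
    n = len(u) - 2                      # number of interior nodes
    r = a * dt / (2.0 * dx ** 2)
    # Tridiagonal system (I - r*D2) u_new = (I + r*D2) u_old
    A = np.diag((1 + 2 * r) * np.ones(n)) \
        + np.diag(-r * np.ones(n - 1), 1) \
        + np.diag(-r * np.ones(n - 1), -1)
    rhs = r * u[:-2] + (1 - 2 * r) * u[1:-1] + r * u[2:]
    rhs[0] += r * u[0]                  # boundary term at the new time level
    rhs[-1] += r * u[-1]                # (Dirichlet values are constant)
    u_new = u.copy()
    u_new[1:-1] = np.linalg.solve(A, rhs)
    return u_new

# Example: initial profile sin(pi x) decays like exp(-a pi^2 t)
L, nx, a, dt = 1.0, 51, 1.0, 1e-4
x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)                   # u(0) = u(L) = 0 stay fixed
for _ in range(100):
    u = crank_nicolson_step(u, a, dx=x[1] - x[0], dt=dt)
print(u.max(), np.exp(-a * np.pi ** 2 * 100 * dt))  # should be close
```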
The two-frequency shell model approach is used to calculate the ground-state matter density distribution and the corresponding root-mean-square radii of the two-proton halo nucleus 17Ne, with the assumption that the model space of the 15O core nucleus differs from the model space of the extra two loosely bound valence protons. Two different size parameters, b_core and b_halo, of the single-particle wave functions of the harmonic oscillator potential are used. The calculations are carried out for different configurations of the outer halo protons in the 17Ne nucleus, and the structure of this halo nucleus shows that the dominant configuration is the one with the two halo protons in the 1d5/2 orbi…
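For context, in the two-frequency picture (standard form, not quoted from the paper) the total matter density is the sum of a core part and a halo part evaluated with the two different oscillator size parameters, and the rms matter radius follows from its second moment:

```latex
\rho_{m}(r) = \rho_{\text{core}}\bigl(r;\,b_{\text{core}}\bigr)
            + \rho_{\text{halo}}\bigl(r;\,b_{\text{halo}}\bigr),
\qquad
4\pi\!\int_{0}^{\infty}\!\rho_{m}(r)\,r^{2}\,dr = A,
\qquad
\langle r^{2}\rangle_{m}^{1/2}
  = \left[\frac{4\pi}{A}\int_{0}^{\infty}\!\rho_{m}(r)\,r^{4}\,dr\right]^{1/2}
```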
Lowpass spatial filters are adopted to match the noise statistics of the degradation, seeking good-quality smoothed images. This study employs smoothing windows of different sizes and shapes. It shows that using a square-frame window shape gives good-quality smoothing while at the same time preserving a certain level of the high-frequency components, in comparison with standard smoothing filters.
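A minimal sketch of one plausible reading of the square-frame window: uniform averaging weights on the border of a k × k neighborhood and zeros inside (the exact kernel definition is my assumption, not the paper's):

```python
# Hypothetical square-frame smoothing kernel: ones on the window border,
# zeros in the interior, normalized so overall brightness is preserved.
import numpy as np
from scipy.ndimage import convolve

def square_frame_kernel(k):
    """k x k kernel with uniform weights on the frame, zero inside."""
    kern = np.ones((k, k))
    kern[1:-1, 1:-1] = 0.0              # hollow out the interior
    return kern / kern.sum()            # normalize the weights

rng = np.random.default_rng(0)
image = rng.normal(loc=128.0, scale=20.0, size=(64, 64))  # noisy test image
smoothed = convolve(image, square_frame_kernel(5), mode="reflect")
print(image.std(), smoothed.std())      # smoothing reduces the noise spread
```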
In this research, a beam expander for the Nd:YAG laser at a wavelength of 1.06 µm has been designed and studied at 5X zoom with narrow divergence at room temperature, using ZEMAX to model the system. Its performance is evaluated via the ZEMAX outputs: the spot diagram (via the RMS spot size), the ray fan plot, the geometric encircled energy, and the value of the focal shift. The effect of the field of view on these outputs at room temperature is then studied.
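For context, the standard first-order relations for a two-lens beam expander (textbook forms, not design values from this paper): with lens focal lengths f1 and f2, the expansion ratio M scales the beam diameter up and the far-field divergence down, which is why a 5X expander narrows the divergence about fivefold.

```latex
M = \frac{f_{2}}{f_{1}}, \qquad
D_{\text{out}} = M\,D_{\text{in}}, \qquad
\theta_{\text{out}} = \frac{\theta_{\text{in}}}{M}
```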
This research aims to analyze and simulate real biochemical test data to uncover the relationships among the tests and how each of them impacts the others. The data were acquired from an Iraqi private biochemical laboratory. However, these data have many dimensions, a high rate of null values, and a large number of patients. Several experiments were applied to these data, beginning with unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was then performed to make the dataset analyzable by supervised techniques such as Linear Discriminant Analysis (LDA), Classification and Regression Tree (CART), Logistic Regression (LR), K-Nearest Neighbor (K-NN), Naïve Bayes (NB)…
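A minimal sketch of such a supervised comparison on synthetic stand-in data (the laboratory data are not public, and the exact preprocessing is not specified here); mean imputation handles the null values before each listed classifier is cross-validated:

```python
# Hypothetical pipeline: impute nulls, then score the classifiers named
# in the abstract on synthetic stand-in data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
X[rng.random(X.shape) < 0.1] = np.nan          # ~10% null values
y = (np.nan_to_num(X[:, 0]) + np.nan_to_num(X[:, 1]) > 0).astype(int)

models = {
    "LDA":  LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(random_state=0),
    "LR":   LogisticRegression(max_iter=1000),
    "K-NN": KNeighborsClassifier(),
    "NB":   GaussianNB(),
}
for name, clf in models.items():
    pipe = make_pipeline(SimpleImputer(strategy="mean"), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name:5s} accuracy: {scores.mean():.3f}")
```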
The Weibull distribution belongs to the Generalized Extreme Value (GEV) family of distributions, and it plays a crucial role in modeling extreme events in various fields, such as hydrology, finance, and the environmental sciences. Bayesian methods play a strong, decisive role in estimating the parameters of the GEV distribution due to their ability to incorporate prior knowledge and handle small sample sizes effectively. In this research, we compare several shrinkage Bayesian estimation methods based on the squared-error and the linear exponential (LINEX) loss functions. They were adopted and compared by the Monte Carlo simulation method. The performance of these methods is assessed based on their accuracy and computational efficiency in estimati…
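For reference, the standard Bayes estimators under the two loss functions named above: squared-error loss yields the posterior mean, while the LINEX loss with shape parameter a yields a log-transformed posterior expectation (textbook forms; the paper's shrinkage versions modify these with prior guesses):

```latex
\hat{\theta}_{\mathrm{SE}} = \mathbb{E}\,[\theta \mid \mathbf{x}],
\qquad
\hat{\theta}_{\mathrm{LINEX}} = -\frac{1}{a}\,
  \ln \mathbb{E}\!\left[e^{-a\theta} \mid \mathbf{x}\right],
\quad a \neq 0
```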
This paper deals with constructing a fuzzy linear programming model with an application to the fuel products of the Dura refinery, which consist of seven products that have a direct effect on daily consumption. After building the model, which consists of an objective function representing the selling prices of the products, fuzzy production constraints, fuzzy demand constraints, and production-requirement constraints, we used the WIN QSB program to find the optimal solution.
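For context, one common crisp reformulation of such a fuzzy linear program (Zimmermann's max-λ approach, which may or may not match the paper's construction): with tolerances p_0 and p_i on the fuzzy aspiration level z_0 and the fuzzy constraints, one solves

```latex
\max\; \lambda
\quad \text{s.t.} \quad
c^{T}x \;\ge\; z_{0} - (1-\lambda)\,p_{0}, \qquad
(Ax)_{i} \;\le\; b_{i} + (1-\lambda)\,p_{i}\ \ \forall i, \qquad
0 \le \lambda \le 1,\quad x \ge 0
```

Here the optimal λ measures the overall degree to which the fuzzy objective and constraints are simultaneously satisfied.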