Many dynamic processes across the sciences are described by differential equation models, which explain how the behavior of the studied process changes over time by linking that behavior to its derivatives. These models often contain constant and time-varying parameters whose values depend on the nature of the process under study. In this work we estimate the constant and time-varying parameters with a sequential, multi-stage method. In the first stage, the state variables and their derivatives are estimated using penalized splines (P-splines). In the second stage, pseudo least squares is used to estimate the constant parameters. In the third stage, the remaining constant parameters and the time-varying parameters are estimated with a semi-parametric regression model. This method is then compared with two-stage methods based on numerical discretization: in their first stage the state variables and their derivatives are again estimated with P-splines, and in the second stage the parameters are estimated using numerical discretization schemes (the Euler method and the trapezoidal method). The comparison, carried out by simulation, showed the superiority of the trapezoidal discretization method, which gave the best estimates in balancing estimation accuracy against computational cost.
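To illustrate the kind of two-stage estimation described above, the following is a minimal sketch, not the paper's code: it estimates a single constant parameter theta in the toy model dx/dt = -theta*x (an assumed model, not taken from the study), using a smoothing spline from SciPy as a stand-in for the P-spline stage, and then the Euler and trapezoidal discretization least-squares alternatives on the raw data.

```python
# Minimal sketch (assumed toy model, not the paper's setup):
# estimate theta in dx/dt = -theta * x from noisy observations.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
theta_true = 0.8
t = np.linspace(0.0, 5.0, 101)
x = np.exp(-theta_true * t) + rng.normal(scale=0.01, size=t.size)  # noisy observations

# Stage 1 (spline stand-in for P-splines): smooth the state, then differentiate the fit.
spl = UnivariateSpline(t, x, k=3, s=len(t) * 0.01**2)
x_hat, dx_hat = spl(t), spl.derivative()(t)
theta_spline = -np.sum(dx_hat * x_hat) / np.sum(x_hat**2)   # LS fit of dx/dt = -theta*x

# Stage 2 alternatives (discretization on the raw data): x_{i+1} - x_i ~ -theta * a_i
dt = t[1] - t[0]
d = np.diff(x)
a_euler = dt * x[:-1]                     # Euler:       a_i = dt * x_i
a_trap  = 0.5 * dt * (x[:-1] + x[1:])     # trapezoidal: a_i = dt * (x_i + x_{i+1}) / 2
theta_euler = -np.sum(a_euler * d) / np.sum(a_euler**2)
theta_trap  = -np.sum(a_trap * d) / np.sum(a_trap**2)

print(theta_spline, theta_euler, theta_trap)  # all should be close to 0.8
```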
In this study, the stress-strength model R = P(Y < X < Z) is discussed as an important part of reliability systems, assuming that the random variables follow the inverse Rayleigh distribution. Several classical estimation methods are used to estimate the parameters, namely maximum likelihood, the method of moments, the uniformly minimum variance unbiased estimator, and a shrinkage estimator with three types of shrinkage weight factors. Monte Carlo simulation is then used to compare the estimation methods according to the mean squared error criterion.
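As a minimal sketch (not the study's code), the reliability R = P(Y < X < Z) can be estimated by plain Monte Carlo when X, Y, Z follow inverse Rayleigh distributions; the parameterization with CDF F(x; theta) = exp(-theta/x^2) and the theta values below are illustrative assumptions.

```python
# Minimal sketch: Monte Carlo estimate of R = P(Y < X < Z) for inverse Rayleigh variables.
import numpy as np

def rinv_rayleigh(theta, size, rng):
    """Inverse-transform sampling for F(x) = exp(-theta/x^2):  x = sqrt(theta / -ln(U))."""
    u = rng.uniform(size=size)
    return np.sqrt(theta / -np.log(u))

rng = np.random.default_rng(1)
n = 1_000_000
x = rinv_rayleigh(2.0, n, rng)   # strength X      (theta values are illustrative)
y = rinv_rayleigh(1.0, n, rng)   # lower stress Y
z = rinv_rayleigh(4.0, n, rng)   # upper stress Z
r_hat = np.mean((y < x) & (x < z))
print(f"Monte Carlo estimate of R = P(Y < X < Z): {r_hat:.4f}")
```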
This research concerns the linear mixed model, one of the most important and widely used models in many applications, particularly for analyzing longitudinal data characterized by repeated measures. The linear mixed model is estimated by two approaches (parametric and nonparametric), which are used to estimate the conditional mean and the marginal mean of the model. A comparison among a number of models is made to find the best model for representing mean wind speed in Iraq. The application covers 8 randomly selected meteorological stations in Iraq, with monthly wind speed data collected over ten years and averaged over each month of the corresponding year, so we g
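The following is a minimal sketch of the parametric side of such an analysis, not the study's code: it fits a linear mixed model with a random intercept per station to synthetic monthly wind-speed data using statsmodels, and distinguishes the marginal mean (fixed effects only) from the conditional mean (fixed effects plus the station's random effect). The column names and the simulated data are assumptions.

```python
# Minimal sketch (synthetic data): linear mixed model with a random station intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
stations = np.repeat(np.arange(8), 120)                  # 8 stations x 120 monthly records
month = np.tile(np.arange(120) % 12 + 1, 8)
station_effect = rng.normal(scale=0.5, size=8)[stations]
speed = (3.0 + 0.05 * np.cos(2 * np.pi * month / 12)
         + station_effect + rng.normal(scale=0.3, size=stations.size))
df = pd.DataFrame({"station": stations, "month": month, "speed": speed})

# Random intercept for each station; month kept as a numeric fixed effect for simplicity.
result = smf.mixedlm("speed ~ month", df, groups=df["station"]).fit()
print(result.summary())

# Marginal mean: fixed-effects prediction; conditional mean adds the station's random effect.
df["marginal"] = result.predict(df)
df["conditional"] = df["marginal"] + df["station"].map(
    {g: eff.iloc[0] for g, eff in result.random_effects.items()}
)
```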
Research Summary
In the Name of Allah, Most Gracious, Most Merciful
The word injustice and its derivatives appear in the Holy Qur’an in many places, approximately (154) times. This reflects the severity of its danger: the most dangerous affliction our Islamic nation suffers from in our time is injustice in all its forms and types. We should therefore all undertake an honest review and a sincere change in the right direction, uncover cases of injustice and explain their causes, work to remedy them, rid the wrongdoers of their injustice, and help them correct their condition; and reveal the grievances of the wronged, explain their causes, work to remedy them, and support them and mi
An efficient combination of the Adomian decomposition iterative technique coupled with the Laplace transform for solving non-linear random integro-differential equations (NRIDE) is introduced in a novel way to obtain an accurate analytical solution. The technique is an elegant combination of the Laplace transform and the Adomian polynomials. The suggested method converts the differential equations into iterative algebraic equations, thus reducing processing and analytical work, and it addresses the problem of calculating the Adomian polynomials. The method's efficiency was investigated on several numerical examples, and the findings demonstrate that it is easier to use than many other numerical procedures. It has also been established that (LT
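As a minimal sketch of the Adomian polynomial calculation mentioned above (not the paper's code), the standard definition A_n = (1/n!) d^n/dlambda^n [N(sum_k lambda^k u_k)] at lambda = 0 can be evaluated symbolically; the nonlinearity N(u) = u^2 is only an illustrative choice.

```python
# Minimal sketch: first few Adomian polynomials for a nonlinearity N(u), via SymPy.
import sympy as sp

def adomian_polynomials(N, order):
    lam = sp.Symbol("lambda")
    u = sp.symbols(f"u0:{order + 1}")                             # u0, u1, ..., u_order
    series = sum(lam**k * u[k] for k in range(order + 1))
    return [sp.simplify(sp.diff(N(series), lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(order + 1)]

for n, A in enumerate(adomian_polynomials(lambda u: u**2, 3)):
    print(f"A_{n} =", A)
# Expected: A_0 = u0**2, A_1 = 2*u0*u1, A_2 = u1**2 + 2*u0*u2, A_3 = 2*u0*u3 + 2*u1*u2
```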
This research is concerned with observing and investigating the concepts of consistency and harmony in contemporary Iraqi painting (selected models), in order to reveal the mechanisms and rules of these two concepts in the artistic field and how they operate. How are the tools of consistency and harmony reflected in contemporary Iraqi painting? What is consistency, and what are its mechanisms and principles? Is consistency a product of unity? Are there similarities between consistency and harmony? What is harmony, and what are its principles and rules? The second chapter included two topics: the first dealt with consistency and harmony between concept and significance, while the second dealt with the histor
In this paper, the methods of weighted residuals, namely the Collocation Method (CM), the Least Squares Method (LSM), and the Galerkin Method (GM), are used to solve the thin film flow (TFF) equation. The weighted residual methods were implemented to obtain an approximate solution to the TFF equation, and the accuracy of the results is checked by calculating the maximum error remainder functions (MER). Moreover, the outcomes were compared with the 4th-order Runge-Kutta method (RK4), and good agreement has been achieved. All the evaluations have been carried out using the computer algebra system Mathematica® 10.
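To show the collocation form of the weighted-residual approach in miniature, the following sketch (not the paper's code) applies it to a textbook boundary value problem, y'' + y + x = 0 with y(0) = y(1) = 0, rather than to the thin film flow equation itself; the exact solution y = sin(x)/sin(1) - x is used to check the error.

```python
# Minimal sketch: polynomial collocation for y'' + y + x = 0, y(0) = y(1) = 0.
import sympy as sp

x = sp.Symbol("x")
N = 3
c = sp.symbols(f"c1:{N + 1}")                                  # unknown coefficients c1..cN
trial = sum(c[i] * x**(i + 1) * (1 - x) for i in range(N))     # basis satisfies the BCs
residual = sp.diff(trial, x, 2) + trial + x

# Collocation: force the residual to vanish at N interior points.
points = [sp.Rational(j, N + 1) for j in range(1, N + 1)]
sol = sp.solve([residual.subs(x, p) for p in points], c)
y_c = sp.simplify(trial.subs(sol))

y_exact = sp.sin(x) / sp.sin(1) - x
print("max |error| on a grid:",
      max(abs(float((y_c - y_exact).subs(x, t / 20))) for t in range(21)))
```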
In this study, a linear synchronous machine is compared with a linear transverse flux machine. Both machines have been designed and built with the intention of being used as the power take off in a free piston engine. As both topologies are cylindrical, it is not possible to construct either using just flat laminations, and so alternative methods are described and demonstrated. Despite the difference in topology and specification, the machines are compared on a common base in terms of rated force and suitability for use as a generator. Experience gained during the manufacture of two prototypes is described.
Data encryption translates data into another form or symbol so that only people with access to the secret key or a password can read it. Encrypted data are generally referred to as ciphertext, while unencrypted data are known as plaintext. Entropy can be used as a measure of the number of bits needed to code the data of an image; as the pixel values of an image are spread over more gray levels, the entropy increases. The aim of this research is to compare CAST-128 with a proposed adaptive key against RSA encryption for video frames, to determine which method yields the higher entropy. The first method is achieved by applying the "CAST-128" and
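The entropy measure described above is the Shannon entropy of the gray-level histogram. The following is a minimal sketch (not the study's code, and using synthetic frames rather than real video data) of how it can be computed for an 8-bit grayscale frame; a well-encrypted frame is expected to approach the maximum of 8 bits per pixel.

```python
# Minimal sketch: Shannon entropy of an 8-bit grayscale frame from its gray-level histogram.
import numpy as np

def frame_entropy(frame: np.ndarray) -> float:
    """Entropy in bits per pixel: H = -sum_i p_i * log2(p_i) over the 256 gray levels."""
    hist = np.bincount(frame.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 is taken as 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
plain = rng.normal(128, 20, size=(64, 64)).clip(0, 255)   # low-spread "plain" frame
cipher = rng.integers(0, 256, size=(64, 64))              # near-uniform "cipher" frame
print(frame_entropy(plain), frame_entropy(cipher))        # cipher entropy approaches 8 bits
```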