Many dynamic processes across the sciences are described by differential-equation models, which explain the change in the behavior of the studied process over time by linking the process under study with its derivatives. These models often contain constant and time-varying parameters that vary according to the nature of the process under study. We estimate the constant and time-varying parameters sequentially, in several stages. In the first stage, the state variables and their derivatives are estimated by penalized splines (P-splines). In the second stage, pseudo least squares is used to estimate the constant parameters. In the third stage, the remaining constant parameters and the time-varying parameters are estimated using a semi-parametric regression model. This method is then compared with methods based on numerical discretization, which consist of two stages: in the first stage the state variables and their derivatives are estimated by P-splines, and in the second stage numerical discretization methods (the Euler discretization method and the trapezoidal discretization method) are applied. The comparison was carried out by simulation, and the results showed the superiority of the trapezoidal discretization method, which gave the best estimates in balancing estimation accuracy against computational cost.
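As a hedged illustration of the discretization step (not the paper's actual estimator), the sketch below estimates a single constant rate parameter in the toy model dx/dt = -theta*x from noisy observations, using both the Euler and the trapezoidal discretizations; the model, data, and closed-form least-squares solutions are assumptions for this example only.

```python
import numpy as np

# Toy model (an assumption for illustration): dx/dt = -theta * x.
# We observe x on a grid with step h and estimate theta from the
# discretized residuals, as in the two-stage schemes described above.
rng = np.random.default_rng(0)
theta_true, h, n = 0.8, 0.05, 200
t = np.arange(n) * h
x = np.exp(-theta_true * t) + rng.normal(0.0, 0.002, n)  # noisy state values

dx = np.diff(x)  # x_{k+1} - x_k

# Euler discretization: x_{k+1} - x_k ~ -h * theta * x_k
theta_euler = -np.sum(dx * x[:-1]) / (h * np.sum(x[:-1] ** 2))

# Trapezoidal discretization: x_{k+1} - x_k ~ -h * theta * (x_k + x_{k+1}) / 2
m = 0.5 * (x[:-1] + x[1:])
theta_trap = -np.sum(dx * m) / (h * np.sum(m ** 2))

print(theta_euler, theta_trap)
```

The trapezoidal scheme averages the right-hand side over both endpoints of each step, which typically reduces the discretization bias of the Euler scheme at little extra cost.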
The main focus of this research is the Travelling Salesman Problem (TSP) and the methods used to solve it. The TSP is one of the combinatorial optimization problems that has received wide publicity and attention from researchers, owing to its simple formulation, important applications, and connections to other combinatorial problems. It consists of finding the optimal path through a known number of cities, where the salesman visits each city exactly once before returning to the city of departure. In this research, the FMOLP algorithm is employed as one of the best methods to solve the TSP, and the application of the algorithm in conjun
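The FMOLP approach itself is not reproduced here, but the combinatorial objective the abstract describes can be sketched with a small exhaustive search (the distance matrix below is made up for illustration):

```python
import itertools

# Hypothetical 4-city symmetric distance matrix (illustration only).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tsp_bruteforce(dist):
    """Enumerate every tour starting/ending at city 0 and keep the shortest."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm + (0,)  # visit each city once, return to start
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

best_len, best_tour = tsp_bruteforce(dist)
print(best_len, best_tour)
```

Brute force is only feasible for tiny instances ((n-1)! tours), which is precisely why heuristic and programming-based methods such as FMOLP are studied.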
The basic concept of diversity is that two or more inputs at the receiver are used to obtain uncorrelated signals. The aim of this paper is to compare some possible combinations of diversity reception and MLSE detection techniques. Several diversity combining techniques can be distinguished: Equal Gain Combining (EGC), Maximal Ratio Combining (MRC), Selection Combining, and Selection Switching Combining (SS). The simulation results show that MRC gives better performance than the other types of combining (about 1 dB compared with EGC and 2.5~3 dB compared with selection and selection switching combining).
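A hedged sketch of the combining comparison (not the paper's simulator): over Rayleigh-faded branches, MRC sums the branch SNRs, EGC adds the branch amplitudes with equal gains, and selection combining keeps only the best branch, so their average output SNRs should rank MRC > EGC > SC.

```python
import numpy as np

# Two-branch diversity over Rayleigh fading: branch SNRs are exponential.
# All parameter values here are assumptions for illustration.
rng = np.random.default_rng(1)
L, trials, mean_snr = 2, 100_000, 1.0

g = rng.exponential(mean_snr, size=(trials, L))  # per-branch instantaneous SNR

mrc = g.sum(axis=1)                          # MRC: sum of branch SNRs
egc = (np.sqrt(g).sum(axis=1)) ** 2 / L      # EGC: coherent equal-gain sum
sc = g.max(axis=1)                           # Selection: best branch only

print(mrc.mean(), egc.mean(), sc.mean())
```

The simulated averages reproduce the qualitative ordering the abstract reports: MRC best, EGC close behind, selection-based schemes last.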
This paper presents a survey of routine laboratory core analysis on ten sandstone core samples taken from the Zubair Reservoir/West Quarna Oil Field. The petrophysical properties of the rock, such as porosity, permeability, grain size, roundness and sorting, mineral type, and the volume of shale inside the samples, were tested with several apparatuses in the Petroleum Technology Department/University of Technology, including the OFITE BLP-530 Gas Porosimeter, the PERG-200TM Gas Permeameter and Liquid Permeameter, the GeoSpec2 apparatus (NMR method), Scanning Electron Microscopy (SEM), and the OFITE Spectral Gamma Ray Logger. By comparing all the porosity and permeability results measured by these instruments, it is clear that a significant vari
This paper presents a study of gravitational-lens time delays for a general family of lensing potentials: the popular singular isothermal elliptical potential (SIEP) and the singular isothermal elliptical density distribution (SIED), but allowing general angular structure. The first section introduces the selected observations of gravitationally lensed systems. Section two then shows that the time delays for the singular isothermal elliptical potential (SIEP) and singular isothermal elliptical density distributions (SIED) have a remarkably simple and elegant form, and that the result for Hubble-constant estimation actually holds for a general family of potentials, by combining the analytic results with data for the time dela
To obtain good estimates with more accurate results, we must choose the appropriate method of estimation. Most of the equations in classical methods are linear, and finding analytical solutions to such equations is very difficult; some estimators are inefficient because of problems in solving these equations. In this paper, we estimate the survival function of censored data using one of the most important artificial-intelligence algorithms, the genetic algorithm, to obtain optimal estimates of the parameters of the two-parameter Weibull distribution. This leads to optimal estimates of the survival function. The genetic algorithm is employed in the method of moments, the least squares method, and the weighted
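As an illustrative sketch (not the paper's exact algorithm), a small genetic algorithm can search for the Weibull shape k and scale lam that minimize the least-squares distance between the empirical survival function and the model survival S(t) = exp(-(t/lam)**k); the population sizes, mutation rate, and median-rank plotting positions below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated (uncensored) lifetimes from a Weibull(k=1.5, lam=2.0) distribution.
k_true, lam_true, n = 1.5, 2.0, 300
data = lam_true * rng.weibull(k_true, n)

t = np.sort(data)
s_emp = 1.0 - (np.arange(1, n + 1) - 0.375) / (n + 0.25)  # median-rank survival

def sse(params):
    """Least-squares distance between empirical and Weibull survival."""
    k, lam = params
    return np.sum((s_emp - np.exp(-(t / lam) ** k)) ** 2)

# Genetic algorithm: truncation selection, blend crossover, Gaussian mutation.
pop = rng.uniform(0.1, 5.0, size=(50, 2))
for gen in range(200):
    fit = np.array([sse(p) for p in pop])
    parents = pop[np.argsort(fit)[:25]]                   # keep the fittest half
    i = rng.integers(0, 25, size=(25, 2))
    a = rng.random((25, 1))
    children = a * parents[i[:, 0]] + (1 - a) * parents[i[:, 1]]  # crossover
    children += rng.normal(0, 0.05, children.shape)               # mutation
    pop = np.clip(np.vstack([parents, children]), 0.05, 10.0)

k_hat, lam_hat = pop[np.argmin([sse(p) for p in pop])]
print(k_hat, lam_hat)
```

The fitted parameters then give the survival estimate S_hat(t) = exp(-(t/lam_hat)**k_hat); the same search loop could minimize a moment- or weighted-least-squares criterion instead.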
The main aim of this paper is to study how the different estimators of the two unknown parameters (shape and scale) of a generalized exponential distribution behave for different sample sizes and different parameter values. In particular, Maximum Likelihood, Percentile, and Ordinary Least Squares estimators were implemented for different sample sizes (small, medium, and large) and several contrasting initial values for the two parameters. Two indicators of performance, Mean Square Error and Mean Percentile Error, were used, and comparisons were carried out between the different methods of estimation using Monte Carlo simulation. It was obse
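A hedged Monte Carlo sketch in the spirit of this comparison: for the generalized exponential distribution F(x) = (1 - exp(-lam*x))**alpha, with the scale lam treated as known (an assumption made here so the MLE has a closed form), the ML and a median-based percentile estimator of alpha can be compared by Mean Square Error.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, lam, n, reps = 2.0, 1.0, 100, 2000

def sample(size):
    """Inverse-CDF sampling: x = -ln(1 - u**(1/alpha)) / lam."""
    u = rng.random(size)
    return -np.log(1.0 - u ** (1.0 / alpha)) / lam

mle, pct = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = sample(n)
    w = np.log(1.0 - np.exp(-lam * x))
    mle[r] = -n / w.sum()                # closed-form MLE for alpha (lam known)
    m = np.median(x)                     # percentile estimator: solve
    pct[r] = np.log(0.5) / np.log(1.0 - np.exp(-lam * m))  # F(median) = 0.5

mse_mle = np.mean((mle - alpha) ** 2)
mse_pct = np.mean((pct - alpha) ** 2)
print(mse_mle, mse_pct)
```

In this simplified setting the MLE attains a smaller MSE than the single-quantile percentile estimator, illustrating the kind of ranking the Monte Carlo study above investigates across sample sizes.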
In this research, we study the inverse Gompertz distribution (IG) and estimate its survival function. The survival function was estimated using three methods (the maximum likelihood, least squares, and percentile estimators), and the best estimation method was chosen. It was found that the best method for estimating the survival function is the least squares method, because it has the lowest IMSE for all sample sizes.
There is an implicit but fundamental assumption behind the theory underlying regression on time series: namely, that the time series used in estimation is stationary or, in the language of Engle and Granger, integrated of order zero, denoted I(0). It is well known, for example, that the tables of the t-statistic are designed primarily to deal with the results of regressions that use stationary series. This assumption was treated as an axiom until the mid-seventies, when researchers were conducting applied studies without taking into account the properties of the time series used prior to estimation, accepting the significance of these tests and the estimates based on them as evidence of the applicability of the theo
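The distinction between I(0) and I(1) series can be sketched with a simplified Dickey-Fuller regression, dy_t = rho * y_{t-1} + e_t (a hand-rolled illustration, not the tabulated test): a stationary AR(1) series produces a strongly negative t-statistic on rho, while a random walk does not.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

def df_tstat(y):
    """t-statistic of rho in the no-constant regression dy_t = rho*y_{t-1} + e."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)            # OLS slope
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

e = rng.normal(size=n)
stationary = np.empty(n)                         # I(0): AR(1) with |phi| < 1
stationary[0] = e[0]
for t in range(1, n):
    stationary[t] = 0.5 * stationary[t - 1] + e[t]

random_walk = np.cumsum(rng.normal(size=n))      # I(1): integrated of order one

t_stationary = df_tstat(stationary)
t_walk = df_tstat(random_walk)
print(t_stationary, t_walk)
```

Running a regression on the I(1) series and reading the usual t-tables would be exactly the mistake described above: the null distribution of this statistic for integrated series is not Student's t.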
Some experiments need an assessment of their usefulness in order to decide whether to continue providing them. This is done through the fuzzy regression discontinuity model, where the Epanechnikov kernel and the triangular kernel were used to estimate the model, generating data from a Monte Carlo experiment and comparing the results obtained. It was found that the Epanechnikov kernel has the smallest mean squared error.
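A hedged sketch of the two kernels and a kernel-weighted effect at a regression-discontinuity cutoff (simulated data and a simple local-mean estimator, not the paper's fuzzy design):

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75*(1 - u^2) on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def triangular(u):
    """Triangular kernel: 1 - |u| on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1, 1 - np.abs(u), 0.0)

rng = np.random.default_rng(5)
n, cutoff, h, effect = 4000, 0.0, 0.5, 2.0
x = rng.uniform(-1, 1, n)                          # running variable
y = 1.0 + 0.5 * x + effect * (x >= cutoff) + rng.normal(0, 0.3, n)

def rd_effect(kernel):
    """Kernel-weighted difference in mean outcome across the cutoff."""
    w = kernel((x - cutoff) / h)                   # weights peak at the cutoff
    right = (x >= cutoff) & (w > 0)
    left = (x < cutoff) & (w > 0)
    return (np.average(y[right], weights=w[right])
            - np.average(y[left], weights=w[left]))

print(rd_effect(epanechnikov), rd_effect(triangular))
```

Both kernels down-weight observations far from the cutoff; comparing the resulting estimates over many Monte Carlo replications, as the abstract describes, is how one kernel is judged to have the smaller mean squared error.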