DEMs are, thus, simply regular grids of elevation measurements over the land surface. The aim of the present work is to produce a high-resolution DEM for a certain investigated region (i.e., the Baghdad University campus, College of Science). The easting and northing of 90 locations, including the ground base and buildings of the studied area, have been obtained by field survey using the Global Positioning System (GPS). The image of the investigated area has been extracted from the QuickBird satellite sensor (with a spatial resolution of 0.6 m). It has been geo-referenced and rectified using a first-order polynomial transformation. Several interpolation methods have been used to estimate the elevation, such as ordinary kriging, inverse distance weighting (IDW), and natural neighbor. A mosaic algorithm has then been applied between the base and building layers of the studied area in order to produce the final DEM. The accuracy of the interpolation methods has been assessed using the root-mean-square error (RMSE) criterion. Finally, the estimated DEMs have been used to construct 3-D views of the original image.
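The IDW interpolation and RMSE assessment described above can be sketched as follows; this is a minimal illustration (function names, the power parameter, and the toy points are assumptions, not the paper's actual implementation):

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2):
    """Inverse distance weighted (IDW) interpolation of elevations."""
    z_est = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                 # query coincides with a survey point
            z_est[i] = z_known[np.argmin(d)]
        else:
            w = 1.0 / d ** power           # closer points get larger weights
            z_est[i] = np.sum(w * z_known) / np.sum(w)
    return z_est

def rmse(z_true, z_pred):
    """Root-mean-square error between observed and interpolated elevations."""
    return np.sqrt(np.mean((np.asarray(z_true) - np.asarray(z_pred)) ** 2))
```

In practice the RMSE would be computed by cross-validation, withholding surveyed GPS points and comparing them against the interpolated surface.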
In this research we discuss parameter estimation and variable selection in the Tobit quantile regression model in the presence of multicollinearity. We used the elastic net as an important technique for dealing with both multicollinearity and variable selection. Based on the data, we proposed a Bayesian Tobit hierarchical model with four-level prior distributions. We assumed that both tuning parameters are random variables and estimated them together with the other unknown parameters in the model. A simulation study was used to illustrate the efficiency of the proposed method, and we then compared our approach with (Alhamzwi 2014 & standard QR). The results illustrated that our approach
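The Bayesian four-level hierarchical sampler itself is beyond a short sketch, but the underlying objective can be illustrated: quantile (check) loss on a model left-censored at zero, plus an elastic-net penalty. Everything here (function names, the censoring point, the optimizer) is an assumed simplification, not the paper's method:

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile (check) loss rho_tau(u) summed over residuals."""
    return np.sum(u * (tau - (u < 0)))

def tobit_qr_enet(X, y, tau=0.5, lam=0.1, alpha=0.5):
    """Tobit quantile regression with an elastic-net penalty.
    Assumes left censoring at zero: y = max(0, X @ beta + eps)."""
    def objective(beta):
        fitted = np.maximum(0.0, X @ beta)          # censored fitted values
        penalty = lam * (alpha * np.sum(np.abs(beta))       # lasso part
                         + (1 - alpha) * np.sum(beta ** 2))  # ridge part
        return check_loss(y - fitted, tau) / len(y) + penalty
    beta0 = np.zeros(X.shape[1])
    res = minimize(objective, beta0, method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
    return res.x
```

The paper treats both tuning parameters (here `lam` and `alpha`) as random variables inside the hierarchy rather than fixing them as this sketch does.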
In this paper we used frequentist and Bayesian approaches to the linear regression model to predict future observations of unemployment rates in Iraq. Parameters are estimated using the ordinary least squares method and, for the Bayesian approach, using the Markov chain Monte Carlo (MCMC) method. Calculations are done using the R program. The analysis showed that the linear regression model using the Bayesian approach is better and can be used as an alternative to the frequentist approach. Two criteria, the root mean square error (RMSE) and the median absolute deviation (MAD), were used to compare the performance of the estimates. The results obtained showed that the unemployment rates will continue to increase over the next two decades.
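The frequentist side of the workflow above, with RMSE and MAD as scoring criteria, can be sketched in a few lines. The yearly series below is purely illustrative, not the paper's Iraqi unemployment data:

```python
import numpy as np

# Hypothetical yearly unemployment-rate series (illustrative values only).
years = np.arange(2010, 2020)
rate = np.array([8.0, 8.3, 8.1, 8.6, 8.9, 9.4, 9.2, 9.8, 10.1, 10.4])

# Frequentist fit: ordinary least squares trend via np.polyfit.
slope, intercept = np.polyfit(years, rate, deg=1)
pred = slope * years + intercept

# The two comparison criteria used in the paper.
rmse = np.sqrt(np.mean((rate - pred) ** 2))   # root mean square error
mad = np.median(np.abs(rate - pred))          # median absolute deviation

# Forecast the next two years from the fitted trend.
future = slope * np.arange(2020, 2022) + intercept
```

The Bayesian counterpart would replace `np.polyfit` with posterior sampling of the coefficients via MCMC and score the posterior predictive mean with the same two criteria.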
This study was carried out at the Poultry Research Station belonging to the State Board of Agricultural Research, Ministry of Agriculture, Abu Ghraib, from 28 November 2013 to 15 May 2014, to evaluate the effect of supplementing different levels of organic selenium (Se-yeast) and inorganic selenium (sodium selenite) on some productive performances of broiler breeders and on selenium levels in plasma and egg (yolk and albumen). The study included 336 female broiler breeders and 28 males (Ross 308 and Arbor Acres), 42 weeks old, randomly distributed to 7 dietary treatments with 2 replicates per treatment (24 females and 2 males per replicate). The duration of the experiment was 24 weeks, and treatments were as the
This study is concerned with comparing different geostatistical methods for porosity distribution of the upper shale member of the Zubair Formation in the Luhais oil field.
Kriging, Gaussian random function simulation, and sequential Gaussian simulation were the geostatistical methods adopted in this study. After preparing all the needed data (contour map, well heads of 12 wells, well tops, and porosity from the CPI log), Petrel 2009 software was used for porosity distribution of the mentioned formation with the methods listed above. Comparisons were made among these three methods in order to choose the best one; the comparison cri
Background: Appreciation of the crucial role of risk factors in the development of coronary artery disease (CAD) is one of the most significant advances in the understanding of this important disease. Extensive epidemiological research has established cigarette smoking, diabetes, hyperlipidemia, and hypertension as independent risk factors for CAD. Objective: To determine the prevalence of the four conventional risk factors (cigarette smoking, diabetes, hyperlipidemia, and hypertension) among patients with CAD, and to determine the correlation of the Thrombolysis in Myocardial Infarction (TIMI) risk score with the extent of CAD in patients with unstable angina/non-ST-elevation myocardial infarction (UA/NSTEMI). Methods: We
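The TIMI risk score for UA/NSTEMI is a simple additive score: one point per positive item out of seven clinical criteria. A minimal sketch (parameter names are assumptions; consult the original TIMI definition before clinical use):

```python
def timi_uanstemi_score(age_ge_65, risk_factors_ge_3, known_cad_ge_50pct,
                        aspirin_last_7d, severe_angina_24h,
                        st_deviation, elevated_markers):
    """TIMI risk score for UA/NSTEMI: one point per positive criterion (0-7).
    Items: age >= 65; >= 3 CAD risk factors; known coronary stenosis >= 50%;
    aspirin use in the prior 7 days; >= 2 anginal episodes in 24 h;
    ST deviation; elevated cardiac biomarkers."""
    return sum(bool(x) for x in (age_ge_65, risk_factors_ge_3,
                                 known_cad_ge_50pct, aspirin_last_7d,
                                 severe_angina_24h, st_deviation,
                                 elevated_markers))
```

Higher totals correspond to a higher risk of adverse cardiac events, which is why the abstract correlates the score with angiographic extent of CAD.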
A dynamic analysis method has been developed to investigate and characterize the effect of embedded delamination on the dynamic response of composite laminated structures. A nonlinear finite element model for geometrically large-amplitude free vibration of intact and delaminated plates is presented using higher-order shear deformation theory, where the nonlinearity is introduced in the Green-Lagrange sense. The governing equations of the vibrating plate were derived using the variational approach. The effects of different orthotropy ratios, boundary conditions, and delamination sizes on the non-dimensional fundamental frequency and frequency ratios of the plate for different stacking sequences are studied. Finally th
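As a point of reference for the fundamental frequencies studied above, the classical closed-form result for a simply supported isotropic Kirchhoff plate can be coded directly. This is only a linear, intact-plate baseline, not the paper's higher-order shear deformation model for delaminated composites:

```python
import math

def plate_natural_frequency(E, nu, rho, h, a, b, m=1, n=1):
    """Natural frequency (rad/s) of a simply supported isotropic Kirchhoff
    plate: omega_mn = pi^2 * ((m/a)^2 + (n/b)^2) * sqrt(D / (rho*h)),
    with flexural rigidity D = E*h^3 / (12*(1 - nu^2)).
    The fundamental mode is (m, n) = (1, 1)."""
    D = E * h ** 3 / (12 * (1 - nu ** 2))
    return math.pi ** 2 * ((m / a) ** 2 + (n / b) ** 2) * math.sqrt(D / (rho * h))
```

Delamination reduces local stiffness, so the frequency ratios reported in the paper compare delaminated-plate frequencies against an intact baseline of this kind.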
This research is concerned with the re-analysis of optical data (the imaginary part of the dielectric function as a function of photon energy E) of a-Si:H films prepared by Jackson et al. and Ferlauto et al. Using nonlinear regression fitting, we estimated the optical energy gap and the deviation from the Tauc model by treating the exponent p of the photon-energy dependence of the momentum matrix element as a free parameter, assuming the density-of-states distribution to be a square-root function. It is observed for the films prepared by Jackson et al. that the value of the parameter p over the photon energy range is close to the value assumed by the Cody model, and the optical gap energy is also close to the value
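A fit of this kind can be sketched with `scipy.optimize.curve_fit`. The parametrization below, eps2 = C·(E − Eg)²/Eᵖ above the gap, is an assumed generalized Tauc-like form chosen for illustration (the paper's exact functional form may differ), and the synthetic data are not the Jackson et al. measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def eps2_model(E, C, Eg, p):
    """Generalized Tauc-like model: eps2 = C*(E - Eg)^2 / E^p above the gap.
    In this parametrization p = 2 gives the Tauc form and p = 0 the Cody form
    (constant dipole matrix element) -- an assumed convention."""
    return np.where(E > Eg, C * (E - Eg) ** 2 / E ** p, 0.0)

# Synthetic eps2(E) data with Eg = 1.7 eV and p = 2, lightly perturbed.
E = np.linspace(1.8, 3.0, 50)
eps2 = eps2_model(E, 30.0, 1.7, 2.0) + 0.01 * np.sin(5 * E)

# Nonlinear least-squares fit of C, Eg, and the free exponent p.
popt, _ = curve_fit(eps2_model, E, eps2, p0=[25.0, 1.6, 1.5])
C_fit, Eg_fit, p_fit = popt
```

The fitted `p_fit` plays the role of the paper's free momentum-matrix-element exponent, and `Eg_fit` the estimated optical gap.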
This research aims to study dimension-reduction methods that overcome the curse of dimensionality when traditional methods fail to provide good parameter estimates, so the problem must be dealt with directly. Two approaches were used to handle high-dimensional data: the first is the non-classical sliced inverse regression (SIR) method together with the proposed weighted standard SIR (WSIR) method, and the second is principal component analysis (PCA), the general method used for reducing dimensions. Both SIR and PCA are based on forming linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear
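The PCA step referred to above, projecting onto the leading linear combinations of the explanatory variables, can be sketched via the SVD; this is a generic illustration, not the paper's WSIR procedure:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the first k principal components."""
    Xc = X - X.mean(axis=0)                    # center each variable
    # SVD of the centered data; rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)      # variance ratio per component
    return Xc @ Vt[:k].T, explained[:k]
```

SIR differs in that it uses the response to form its linear combinations (via slice means of the inverse regression), whereas PCA ignores the response entirely.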
A simple and rapid high-performance liquid chromatography method with fluorescence detection was developed for the determination of aflatoxins B1, B2, G1, and G2 in peanuts, rice, and chilli. The sample was extracted using acetonitrile:water (90:10, v/v) and then purified using ISOLUTE multimode solid-phase extraction. After pre-column derivatisation, the analytes were separated within 3.7 min using a Chromolith Performance RP-18e (100 × 4.6 mm) monolithic column. To assess the possible effects of endogenous components in the food items, matrix-matched calibration was used for quantification and validation. The recoveries of aflatoxins spiked into food samples were 86.38–104.5% and the RSDs were <4.4%. The method was
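The matrix-matched calibration and the recovery/RSD figures quoted above amount to a few standard calculations. A minimal sketch (the concentrations and peak areas below are hypothetical, not the paper's data):

```python
import numpy as np

# Matrix-matched calibration: standards prepared in blank matrix extract,
# so the calibration slope absorbs any signal suppression/enhancement.
conc_std = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # ng/g (hypothetical)
area_std = np.array([12.1, 24.5, 48.3, 121.0, 243.5])  # peak areas (hypothetical)
slope, intercept = np.polyfit(conc_std, area_std, 1)

def quantify(peak_area):
    """Concentration from peak area via the matrix-matched line."""
    return (peak_area - intercept) / slope

def recovery_percent(spiked_conc, measured_conc):
    """Spike recovery (%) = measured / spiked * 100."""
    return 100.0 * np.asarray(measured_conc) / np.asarray(spiked_conc)

def rsd_percent(replicates):
    """Relative standard deviation (%) of replicate measurements."""
    r = np.asarray(replicates, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()
```

Recoveries in the 86–105% range with RSD < 4.4%, as reported, indicate the method is both accurate and precise across the spiked matrices.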