Penalized regression models have received considerable attention for variable selection and play an essential role in dealing with high-dimensional data. The arctangent (Atan) penalty has recently been used as an efficient method for both estimation and variable selection. However, the Atan penalty is very sensitive to outliers in the response variable or to heavy-tailed error distributions, whereas the least absolute deviation (LAD) loss is a good way to obtain robustness in regression estimation. The specific objective of this research is to propose a robust Atan estimator by combining these two ideas. Simulation experiments and real-data applications show that the proposed LAD-Atan estimator outperforms competing estimators.
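For orientation, a minimal sketch of the combined objective, assuming the commonly used arctangent penalty form together with a least absolute deviation loss (the abstract does not state the exact formulation):

\[
\hat{\beta} \;=\; \arg\min_{\beta}\; \sum_{i=1}^{n}\bigl|y_i - x_i^{\top}\beta\bigr| \;+\; \sum_{j=1}^{p}\lambda\Bigl(\gamma + \tfrac{2}{\pi}\Bigr)\arctan\!\Bigl(\tfrac{|\beta_j|}{\gamma}\Bigr),
\]

where \(\lambda > 0\) is the tuning parameter and \(\gamma > 0\) controls the concavity of the penalty.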
This article explores the process of volunteered geographic information (VGI) collection by assessing the relative usability and accuracy of a range of data collection methods (smartphone GPS, tablet, and analogue maps) among different demographic and educational groups and in different geographical contexts. Positional accuracy, completeness, and data collectors' experiences are assessed with reference to the official cadastral data and the administration system in a case-study region of Iraq. Ownership data were validated by crowd agreement. The results show that successful VGI projects have access to a variety of data collection methods.
Owing to the easy access to satellite imagery, Google Earth (GE) images have become more popular than other online virtual globes. However, the popularity of GE is not an indication of its accuracy. A considerable amount of literature has been published on evaluating the positional accuracy of GE data; however, few studies have investigated improving GE accuracy. In this paper, a practical method for enhancing the horizontal positional accuracy of GE is suggested by establishing ten reference points at the University of Baghdad main campus using different Global Navigation Satellite System (GNSS) observation techniques: Rapid Static, Post-Processing Kinematic, and Network. Then, the GE image for the study
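The excerpt ends before the adjustment model is described; as a purely illustrative sketch, one common way to use such reference points is to estimate a 2D similarity (Helmert) transformation from GE-derived coordinates to the GNSS coordinates by least squares and then apply it to other GE points. The function names and data layout below are hypothetical, not the paper's stated procedure.

import numpy as np

# Hypothetical illustration: fit X = a*x - b*y + tx, Y = b*x + a*y + ty
# from matched GE (x, y) and GNSS (X, Y) reference coordinates by least squares.
def fit_helmert_2d(ge_xy, gnss_xy):
    x, y = ge_xy[:, 0], ge_xy[:, 1]
    rows_x = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])
    rows_y = np.column_stack([y,  x, np.zeros_like(x), np.ones_like(x)])
    design = np.vstack([rows_x, rows_y])
    observed = np.concatenate([gnss_xy[:, 0], gnss_xy[:, 1]])
    params, *_ = np.linalg.lstsq(design, observed, rcond=None)
    return params                      # a, b, tx, ty

def apply_helmert_2d(params, xy):
    a, b, tx, ty = params
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([a * x - b * y + tx, b * x + a * y + ty])

# e.g. corrected = apply_helmert_2d(fit_helmert_2d(ge_pts, gnss_pts), other_ge_pts)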
In this work, porous silicon (PS) is fabricated by an electrochemical etching (ECE) process on p-type crystalline silicon (c-Si) wafers of (100) orientation. The structural, morphological, and electrical properties of PS synthesized at etching current densities of 10, 20, and 30 mA/cm2 and a constant etching time of 10 min are studied. X-ray diffraction (XRD) measurements show that the FWHM generally decreases with increasing current density for p-type porous silicon (p-PS). Atomic force microscopy (AFM) shows that the average pore diameter of p-PS decreases at 20 mA/cm2. Since the porous silicon formed on silicon constitutes a junction, the I-V characteristics have been studied in the dark to calculate the ideality factor (n) and saturation current (Is
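The excerpt is truncated here; for illustration only, n and Is are typically extracted from the dark forward I-V curve assuming the single-diode law I = Is(exp(qV/nkT) - 1) with exp(qV/nkT) >> 1. The voltage and current values below are placeholders, not measurements from this work.

import numpy as np

q, k, T = 1.602e-19, 1.381e-23, 300.0            # electron charge (C), Boltzmann constant (J/K), temperature (K)
V = np.array([0.10, 0.15, 0.20, 0.25, 0.30])     # forward bias (V), placeholder values
I = np.array([2e-7, 8e-7, 3e-6, 1.2e-5, 5e-5])   # dark current (A), placeholder values

slope, intercept = np.polyfit(V, np.log(I), 1)   # ln(I) ~ ln(Is) + (q/nkT)*V
n = q / (slope * k * T)                          # ideality factor
Is = np.exp(intercept)                           # saturation current (A)
print(f"n = {n:.2f}, Is = {Is:.2e} A")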
Ultra-High Temperature Materials (UHTMs) are at the base of the entire aerospace industry; these highly stable materials, which withstand temperatures exceeding 1600 °C, are used for heat shielding to protect vehicles and probes from aerodynamic heating during hypersonic flight along the reentry trajectory and to reduce plasma-surface interaction. These materials are also recognized as Thermal Protection System Materials (TPSMs). The structural materials usable in high-temperature oxidizing environments are mainly limited to SiC, oxide ceramics, and composites. In addition, silicon-based ceramics have a maximum use temperature of approximately 1700 °C; as it is an active oxidation process o
This article deals with an approximate algorithm for the two-dimensional multi-space fractional bioheat equations (M-SFBHE). The collocation method is extended to present a numerical technique for solving the M-SFBHE based on shifted Jacobi-Gauss-Lobatto polynomials (SJ-GL-Ps) in matrix form. The Caputo formula has been utilized to approximate the fractional derivative; to demonstrate its usefulness and accuracy, the proposed methodology was applied to two examples. The numerical results reveal that the approach is very effective and gives high accuracy and good convergence.
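For reference, the standard Caputo definition of the fractional derivative of order \(\alpha\) (with \(n-1 < \alpha \le n\), \(n \in \mathbb{N}\)) that is being approximated reads

\[
{}^{C}D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau .
\]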
The primary objective of the current paper is to suggest and implement effective computational methods (DECMs) to calculate analytic and approximate solutions of the nonlocal one-dimensional parabolic equation, which is utilized to model specific real-world applications. Powerful and elegant methods that use orthogonal basis functions to express the solution as a double power series have been developed, namely the Bernstein, Legendre, Chebyshev, Hermite, and Bernoulli polynomials. Hence, the specified partial differential equation is reduced to a system of linear algebraic equations that can be solved using Mathematica®12. The techniques of the effective computational methods (DECMs) have been applied to solve some s
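Schematically, with any of the named orthogonal families \(\{\varphi_i\}\), the double-series representation takes the generic form

\[
u(x,t) \;\approx\; \sum_{i=0}^{N}\sum_{j=0}^{M} c_{ij}\,\varphi_i(x)\,\varphi_j(t),
\]

and substituting this ansatz into the governing equation and the nonlocal condition (e.g., at collocation points) reduces the problem to a linear algebraic system \(A\mathbf{c} = \mathbf{b}\) for the coefficients \(c_{ij}\); the truncation orders \(N\) and \(M\) are illustrative symbols, not values from the paper.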
This paper introduces a non-conventional approach with multi-dimensional random sampling to solve a cocaine abuse model with statistical probability. The mean Latin hypercube finite difference (MLHFD) method is proposed for the first time via hybrid integration of the classical numerical finite difference (FD) formula with the Latin hypercube sampling (LHS) technique to create a random distribution for the time-dependent model parameters. The LHS technique gives the MLHFD method the advantage of producing fast variation of the parameter values via a number of multidimensional simulations (100, 1000, and 5000). The generated Latin hypercube sample, which is random or non-deterministic in nature, is further integ
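A minimal sketch of the sampling-and-averaging idea, assuming a generic explicit finite-difference update; the actual cocaine-abuse model equations and parameter ranges are not given in this excerpt, so the one-state solver and the parameter bounds below are placeholders.

import numpy as np
from scipy.stats import qmc

# Placeholder solver: explicit (Euler) finite-difference march of a single state.
def fd_solve(params, u0=1.0, dt=0.01, steps=500):
    a, b = params                                # placeholder growth/decay rates
    u = np.empty(steps + 1)
    u[0] = u0
    for k in range(steps):
        u[k + 1] = u[k] + dt * (a * u[k] - b * u[k] ** 2)
    return u

n_samples = 1000                                 # e.g. 100, 1000, or 5000 runs as in the paper
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=n_samples)               # LHS points in the unit square
params = qmc.scale(unit, l_bounds=[0.1, 0.01], u_bounds=[0.5, 0.05])

# Average the finite-difference trajectories over the Latin hypercube sample.
mean_solution = np.mean([fd_solve(p) for p in params], axis=0)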
This paper considers the approximate solution of the hyperbolic one-dimensional wave equation with nonlocal mixed boundary conditions by improved methods based on the assumption that the solution is a double power series in orthogonal polynomials, such as the Bernstein, Legendre, and Chebyshev polynomials. The solution is ultimately compared with that of the original method, which is based on standard polynomials, by calculating the absolute error to verify the validity and accuracy of the performance.