In this article, we develop a new loss function as a modification of the linear exponential (LINEX) loss function, obtained by weighting the LINEX function. We derive Bayes estimators of the scale parameter and of the reliability and hazard functions based on upper record values from the Lomax distribution (LD). To study the small-sample performance of the proposed loss function, we use a Monte Carlo simulation to compare the maximum likelihood estimator, the Bayesian estimator under the LINEX loss function, and the Bayesian estimator under the squared-error (SE) loss function. The results show that the modified method is the best for estimating the scale parameter and the reliability and hazard functions.
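As a hedged illustration of the estimators being compared, the sketch below computes Bayes estimates under the SE and LINEX losses from posterior draws. The Gamma posterior and the asymmetry parameter a are stand-in assumptions, not values from the paper. Under LINEX loss L(d) = b(e^{ad} - ad - 1) with d = estimate - theta, the Bayes rule is estimate = -(1/a) ln E[e^{-a theta} | data].

```python
# A minimal sketch, not the paper's method: Bayes estimates of a scale
# parameter under squared-error (SE) and LINEX losses, computed from
# posterior draws. The Gamma posterior and LINEX parameter `a` below
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
theta_post = rng.gamma(5.0, 0.4, size=100_000)  # stand-in posterior draws
a = 1.5                                         # assumed LINEX asymmetry

est_se = theta_post.mean()                               # SE Bayes rule: posterior mean
est_linex = -np.log(np.exp(-a * theta_post).mean()) / a  # LINEX Bayes rule
print(f"SE estimate: {est_se:.4f}, LINEX estimate: {est_linex:.4f}")
```

For a > 0 the LINEX rule penalizes overestimation more heavily, so its estimate falls below the posterior mean.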
Background: The aim of the study was to evaluate the changes in horizontal and vertical maxillary arch dimension measurements following the premature loss of primary molars. Materials and methods: The sample consisted of 50 children at the mixed-dentition stage with a unilaterally prematurely extracted first or second primary molar. Results and Conclusions: There was an increase in the vertical incisor-to-canine distance (A) with premature loss of both first and second primary molars, due to distal movement of the primary canines; at the same time, there was a significant loss of space in the extraction site with premature loss of the second primary molar, due to mesial movement of the maxillary first permanent molars.
In this paper, reliable computational methods (RCMs) based on monomial standard polynomials are executed to solve the Jeffery-Hamel flow (JHF) problem. In addition, convenient basis functions, namely the Bernoulli, Euler, and Laguerre polynomials, are used to enhance the reliability of the computational methods. Using such functions turns the problem into a set of solvable nonlinear algebraic equations that Mathematica®12 can solve. The JHF problem is solved with the help of improved reliable computational methods (I-RCMs), and a review of the methods is given. Comparisons are also made with published results. As further evidence of the accuracy and dependability of the proposed methods, the maximum error remainders are computed.
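To make the collocation idea concrete, here is a minimal sketch using the monomial basis only, with assumed flow parameters (Re = 50, half-angle 5°, degree 8): it converts the reduced JHF ODE F''' + 2αRe·FF' + 4α²F' = 0 with F(0) = 1, F'(0) = 0, F(1) = 0 into a nonlinear algebraic system and solves it numerically. The paper's I-RCMs and the Bernoulli/Euler/Laguerre bases are not reproduced here.

```python
# A minimal collocation sketch (monomial basis, assumed parameters;
# not the paper's I-RCMs): solve the reduced JHF ODE
#   F''' + 2*alpha*Re*F*F' + 4*alpha**2*F' = 0,
#   F(0) = 1, F'(0) = 0, F(1) = 0.
import numpy as np
from scipy.optimize import fsolve

Re = 50.0                       # assumed Reynolds number
alpha = np.deg2rad(5.0)         # assumed channel half-angle
N = 8                           # assumed polynomial degree

def residuals(c):
    F = np.polynomial.Polynomial(c)
    F1, F3 = F.deriv(1), F.deriv(3)
    eqs = [F(0.0) - 1.0, F1(0.0), F(1.0)]        # boundary conditions
    for x in np.linspace(0.05, 0.95, N - 2):     # interior collocation points
        eqs.append(F3(x) + 2*alpha*Re*F(x)*F1(x) + 4*alpha**2*F1(x))
    return eqs

c0 = np.zeros(N + 1); c0[0] = 1.0                # start from F ≈ 1
coeffs = fsolve(residuals, c0)
F = np.polynomial.Polynomial(coeffs)
print("F(0.5) ≈", round(F(0.5), 6))
```

Swapping the monomial basis for Bernoulli, Euler, or Laguerre polynomials only changes how F is expanded; the algebraic system is built the same way.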
The designer must find the optimum match between the object's technical and economic needs and the performance and production requirements of the various material options when choosing a material for an engineering application. This study proposes an integrated (hybrid) strategy for selecting the optimal material for an engineering design based on the design requirements. The primary objective is to determine the best candidate material for the drone wings using Ashby's performance indices and then rank the results with a grey relational technique combined with the entropy weight method. Aluminum alloys, titanium alloys, composites, and wood have been suggested as suitable materials for manufacturing drone wings.
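As a hedged sketch of the ranking stage only (the Ashby screening step is omitted), the snippet below applies entropy weighting and grey relational grading to an illustrative, made-up decision matrix; the property values and criteria are assumptions, not the paper's data.

```python
# A hedged sketch of the ranking stage: entropy weights plus grey
# relational grading over an illustrative (made-up) decision matrix.
import numpy as np

materials = ["aluminium alloy", "titanium alloy", "composite", "wood"]
# rows: candidates; columns: benefit-type indices (assumed numbers,
# not the paper's data)
X = np.array([[70.0, 2.8, 6.0],
              [110.0, 4.5, 3.5],
              [140.0, 1.6, 8.0],
              [12.0, 0.6, 9.0]])
m = X.shape[0]

# entropy weight method: criteria that discriminate more get more weight
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(m)
w = (1.0 - E) / (1.0 - E).sum()

# grey relational coefficients against the ideal (best value per column)
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
delta = 1.0 - Z                 # distance to the reference series
rho = 0.5                       # distinguishing coefficient
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

grade = xi @ w                  # grey relational grade per candidate
for i in np.argsort(grade)[::-1]:
    print(f"{materials[i]:16s} grade = {grade[i]:.3f}")
```

The entropy weights reward criteria that discriminate strongly among candidates, and the grey relational grade aggregates each candidate's closeness to the ideal alternative.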
In this paper, the memorization capability of a multilayer interpolative neural network is exploited to estimate a mobile position from three angles of arrival. The neural network is trained with ideal angle-position patterns distributed uniformly throughout the region. This approach is compared with two analytical methods: the average-position method, which finds the average position of the vertices of the triangular uncertainty region, and the optimal-position method, which finds the ideal angle-position pattern nearest to the measured angles. Simulation results based on estimates of the mobile position of particles moving along a nonlinear path show that the interpolative neural network approach outperforms the two analytical methods.
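A minimal sketch of the interpolative idea follows, with assumed base-station geometry, grid density, and network size (none of these values come from the paper): train a small MLP on ideal angle-position patterns over a uniform grid, then query it with noisy measured angles.

```python
# A minimal sketch (assumed geometry and network size): an MLP trained
# on ideal angle-of-arrival -> position patterns, queried with noisy angles.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
stations = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 100.0]])  # assumed base stations

def angles(p):
    d = p - stations                      # vectors from each station to p
    return np.arctan2(d[:, 1], d[:, 0])   # three angles of arrival

# ideal angle-position training patterns on a uniform grid
xs, ys = np.meshgrid(np.linspace(5, 95, 20), np.linspace(5, 95, 20))
positions = np.c_[xs.ravel(), ys.ravel()]
aoa = np.array([angles(p) for p in positions])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(aoa, positions)                   # memorize the angle -> position mapping

true_pos = np.array([42.0, 57.0])
measured = angles(true_pos) + rng.normal(0.0, 0.01, 3)  # noisy measurement
print("estimated:", net.predict([measured])[0], "true:", true_pos)
```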
Utilizing the Turbo C programming language, an atmospheric model of the Earth is created from sea level to 86 km and used in this study to determine atmospheric parameters. Analytical derivations of these parameters are made using the balancing-forces theory and the hydrostatic equation. The effects of altitude on density, pressure, temperature, gravitational acceleration, sound speed, scale height, and molecular weight are examined. About 50% of the atmosphere's mass lies between sea level and 5.5 km. At 50 km altitude, g is about 9.65 m/s², roughly 1.5% lower than the sea-level value of 9.8 m/s²; at 86 km altitude, g is close to 9.51 m/s², about 3% lower.
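A quick hedged check of the gravity figures, using the standard inverse-square relation g(h) = g₀(R/(R+h))² rather than the paper's Turbo C code:

```python
# A quick hedged check (standard inverse-square law, not the paper's
# Turbo C code) of gravitational acceleration versus altitude.
G0, R = 9.80665, 6371.0e3   # sea-level gravity (m/s^2), mean Earth radius (m)

def g(h):
    """Gravitational acceleration at altitude h metres."""
    return G0 * (R / (R + h)) ** 2

for h_km in (0, 50, 86):
    print(f"g({h_km:2d} km) = {g(h_km * 1e3):.2f} m/s^2")
# prints roughly 9.81, 9.65 and 9.55 m/s^2, close to the values above
```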
Recently, wireless communication environments with high speeds and low complexity have become increasingly essential. Free-space optics (FSO) has emerged as a promising solution for providing direct connections between devices in such high-spectrum wireless setups. However, FSO communications are susceptible to weather-induced signal fluctuations, leading to fading and signal weakness at the receiver. To mitigate these challenges, several mathematical models have been proposed to describe the transition from weak to strong atmospheric turbulence, including the Rayleigh, lognormal, Málaga, Nakagami-m, K-distribution, Weibull, Negative-Exponential, Inverse-Gaussian, Gamma-Gamma (G-G), and Fisher-Snedecor F distributions. This paper extensively reviews these models.
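As one concrete example of these turbulence models, the sketch below draws unit-mean Gamma-Gamma (G-G) irradiance samples, with assumed scintillation parameters α and β, and checks the resulting scintillation index against theory.

```python
# A minimal sketch (assumed parameters): unit-mean Gamma-Gamma (G-G)
# irradiance samples, modelled as the product of two independent
# Gamma variables for large- and small-scale turbulence.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 4.0, 2.0          # assumed scintillation parameters
n = 100_000

I = rng.gamma(alpha, 1.0 / alpha, n) * rng.gamma(beta, 1.0 / beta, n)

si = I.var() / I.mean() ** 2    # empirical scintillation index
print(f"mean irradiance ≈ {I.mean():.3f}, scintillation index ≈ {si:.3f}")
# theory: SI = 1/alpha + 1/beta + 1/(alpha*beta) = 0.875 here
```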
In 2020, one of the authors of this paper, in his first research, derived the Modified Weighted Pareto Distribution of Type I by applying the Azzalini method for weighted distributions; the distribution has three parameters, two for scale and one for shape. This research compares that distribution with two other distributions from the same family, the Standard Pareto Distribution of Type I and the Generalized Pareto Distribution, using the maximum likelihood estimator that the researchers derived for the Modified Weighted Pareto Distribution of Type I. The Monte Carlo method, a simulation technique for generating random samples, was then used to produce data of different sizes (n = 10, 30, 50) and with different initial values of the parameters.
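A hedged Monte Carlo sketch of the same experimental design follows, shown on the Standard Pareto Distribution of Type I rather than the modified weighted form whose MLE the paper derives: repeat sampling at each n and track the bias and MSE of the shape MLE α̂ = n / Σ ln(xᵢ/x_m), with the scale x_m assumed known. The true parameter values and replication count are assumptions.

```python
# A minimal sketch (assumed true values): Monte Carlo study of the
# shape MLE for the Standard Pareto Distribution of Type I.
import numpy as np

rng = np.random.default_rng(1)
alpha_true, xm, reps = 3.0, 1.0, 5000   # assumed shape, scale, replications

for n in (10, 30, 50):                  # sample sizes from the abstract
    a_hat = np.empty(reps)
    for r in range(reps):
        x = (rng.pareto(alpha_true, n) + 1.0) * xm   # Pareto I sample
        a_hat[r] = n / np.log(x / xm).sum()          # closed-form shape MLE
    bias = a_hat.mean() - alpha_true
    mse = ((a_hat - alpha_true) ** 2).mean()
    print(f"n={n:2d}: bias={bias:+.3f}, MSE={mse:.3f}")
```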
The research compared two methods for estimating the four parameters of the compound exponential Weibull-Poisson distribution: the maximum likelihood method and the Downhill Simplex algorithm. Two data cases were considered: the first assumed clean (uncontaminated) data, while the second assumed contaminated data. Simulation experiments were conducted for different sample sizes, different initial parameter values, and different levels of contamination. The Downhill Simplex algorithm was found to be the best method for estimating the parameters, the probability function, and the reliability function of the compound distribution for both clean and contaminated data.
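A hedged sketch of the Downhill Simplex (Nelder-Mead) estimation step follows, demonstrated on a plain two-parameter Weibull likelihood for brevity rather than the four-parameter compound exponential Weibull-Poisson model; the sample size and true values are assumptions.

```python
# A hedged sketch: parameter estimation by the Downhill Simplex
# (Nelder-Mead) method, demonstrated on a two-parameter Weibull
# likelihood for brevity; sample size and true values are assumed.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(2)
data = weibull_min.rvs(c=1.8, scale=2.0, size=200, random_state=rng)

def nll(theta):
    """Negative log-likelihood; infinite outside the valid domain."""
    c, scale = theta
    if c <= 0 or scale <= 0:
        return np.inf
    return -weibull_min.logpdf(data, c=c, scale=scale).sum()

res = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
print("Nelder-Mead estimates (shape, scale):", res.x)
```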
Geographic Information Systems (GIS) are gaining a significant role in strategic applications in which data are organized as records of multiple layers in a database. GIS also provides multiple functions, such as data collection, analysis, and presentation, and has proven its competence in diverse fields of study by handling a variety of problems for numerous applications. However, handling large volumes of data in GIS remains an important issue. The biggest obstacle is designing a GIS-based spatial decision-making framework that manages a broad range of specific data while achieving the required performance. It is very useful to support decision-makers by providing GIS-based decision support systems.