The Gumbel distribution has been treated with great care by researchers and statisticians. The traditional methods for estimating its two parameters are maximum likelihood, the method of moments, and, more recently, the resampling method known as the jackknife. However, these methods suffer from mathematical difficulties when solved analytically. Accordingly, there are non-traditional methods, such as the nearest-neighbors principle used in computer science and, in particular, in artificial-intelligence algorithms, including the genetic algorithm and the artificial neural network, which may be classified as meta-heuristic methods. Moreover, the nearest-neighbors principle has useful statistical features. The objective of this paper is therefore to propose a new algorithm that estimates the parameters of the Gumbel probability distribution directly, overcoming the mathematical difficulties without requiring the derivative of the likelihood function. A simulation approach was adopted as the empirical experiment, in which a hybrid method performs optimization over the three traditional methods. Comparisons were made between the proposed method and each of the traditional methods above using the root mean squared error (RMSE) as the efficiency criterion. In total, 36 experiments were run, combining three initial values for each of the two parameters (λ: shift parameter; θ: scale parameter) with four different sample sizes per experiment. The proposed algorithm was superior in all simulation combinations and all sample sizes for both parameters (λ and θ). Among the traditional methods, the method of moments was best at estimating the shift parameter (λ), and maximum likelihood was best at estimating the scale parameter (θ).
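As a point of reference for the traditional estimators mentioned above, the method-of-moments estimates for the Gumbel distribution follow in closed form from its mean (λ + γθ) and variance (π²θ²/6). The sketch below is illustrative only; the simulated shift and scale values are arbitrary choices, not the paper's experimental settings:

```python
import math
import random

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_moments(sample):
    """Method-of-moments estimates (lambda_hat, theta_hat) for Gumbel.

    Uses mean = lambda + gamma*theta and var = (pi*theta)**2 / 6.
    """
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    theta_hat = math.sqrt(6.0 * var) / math.pi
    lambda_hat = mean - EULER_GAMMA * theta_hat
    return lambda_hat, theta_hat

# Simulate Gumbel(shift=10, scale=2) by inverse-CDF sampling:
# x = lambda - theta * ln(-ln U), with U ~ Uniform(0, 1).
rng = random.Random(42)
data = [10.0 - 2.0 * math.log(-math.log(rng.random())) for _ in range(20000)]
lam_hat, theta_hat = gumbel_moments(data)
```

Maximum likelihood, by contrast, requires solving a nonlinear score equation numerically, which is the analytical difficulty the proposed algorithm is meant to sidestep.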
The cost of pile foundations is part of the superstructure cost, and it has become necessary to reduce this cost by studying pile types and then making a decision on the optimal pile type in terms of cost, production time, and quality. The main objective of this study is therefore to solve the time-cost-quality trade-off (TCQT) problem by finding an optimal pile type with the target of "minimizing" cost and time while "maximizing" quality. There are many pile types in the world, but in this paper the researcher proposes five pile types, one of which is non-traditional, develops a model for the problem, and then employs the particle swarm optimization (PSO) algorithm, one of the evolutionary algorithms, with …
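A trade-off of this kind is typically posed as minimizing a single weighted objective in which cost and time are penalized and quality is rewarded. The following is a minimal continuous-PSO sketch, not the paper's model: the inertia and acceleration coefficients, the toy objective, and its "best design" location are all illustrative assumptions.

```python
import random

def tcq_objective(x):
    # Hypothetical smooth surrogate for a time-cost-quality trade-off:
    # cost and time grow away from a notional best design at (0.3, 0.6),
    # and quality (to be maximized) is folded in as a penalty term.
    cost = (x[0] - 0.3) ** 2
    time = (x[1] - 0.6) ** 2
    quality = 1.0 - cost - time
    return cost + time + (1.0 - quality)

def pso(objective, dim, lo, hi, n_particles=20, iters=100, seed=0):
    # Minimal particle swarm: inertia w plus cognitive (c1) and social (c2)
    # pulls toward each particle's best and the swarm's best positions.
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # common default coefficients (assumed)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each coordinate to the search box.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, best_f = pso(tcq_objective, dim=2, lo=0.0, hi=1.0)
```

In a pile-selection setting the decision variable would be discrete (the pile type), so a practical model would either encode types as indices or run PSO over type-specific design parameters; the sketch only shows the mechanics of the velocity and position updates.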
A biconical antenna has been developed for ultra-wideband sensing. A wide impedance bandwidth of around 115% over 3.73-14 GHz is achieved, which shows that the proposed antenna is a fairly sensitive sensor for microwave medical imaging applications. The sensor and instrumentation are used together with an improved version of the delay-and-sum image reconstruction algorithm on both fatty and glandular breast phantoms. The relatively new imaging set-up provides robust reconstruction of complex permittivity profiles, especially in glandular phantoms, producing results that are well matched to the geometries and composition of the tissues. The signal-to-clutter and signal-to-mean ratios of the improved method are consistently …
The gas-lift technique plays an important role in sustaining oil production, especially in a mature field when the reservoirs' natural energy becomes insufficient. However, optimal allocation of the gas injection rate across a large field's gas-lift network system to maximize the oil production rate is a challenging task. Conventional gas-lift optimization may become inefficient and incapable of modelling gas-lift optimization in a large network system with multiple objectives, multiple constraints, and a limited gas injection rate. The key objective of this study is to assess the feasibility of utilizing the Genetic Algorithm (GA) technique to optimize …
Low-pressure sprinklers have been widely used to replace high-pressure impact sprinklers in lateral-move sprinkler irrigation systems due to their low operating cost and high efficiency. However, runoff losses under a low-pressure sprinkler irrigation machine can be significant. This study aims to evaluate the performance of the variable pulsed irrigation algorithm (VPIA) in reducing runoff losses under a low-pressure lateral-move sprinkler irrigation machine for three different soil types. The VPIA uses an ON-OFF pulsing technique to reduce runoff losses by controlling the number and width of the pulses, considering the soil and irrigation machine properties. …
In this study, we investigate the run-length properties of the cumulative sum (Cusum) and exponentially weighted moving average (EWMA) control charts for detecting positive shifts in the process mean for the Poisson distribution with unknown mean. We used a Markov chain approach to compute the average and standard deviation of the run length for the Cusum and EWMA control charts when the variable under control follows a Poisson distribution. We also used the Cusum and EWMA control charts to monitor a process mean when the observations (products selected from the Al_Mamun Factory) are independently and identically distributed (iid) from a Poisson distribution …
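The upper Cusum chart described above accumulates deviations of the counts above a reference value k and signals when the statistic crosses a decision limit h. The sketch below simulates run lengths for Poisson counts; the in-control mean, shifted mean, k, and h are illustrative choices, not the factory's data:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's algorithm: multiply uniforms until the product drops
    # below exp(-lam); the count of multiplications minus one is Poisson.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def cusum_run_length(rng, mean, k, h, max_n=100000):
    # One-sided upper Cusum: S_i = max(0, S_{i-1} + x_i - k);
    # returns the index of the first sample with S_i >= h.
    s = 0.0
    for i in range(1, max_n + 1):
        s = max(0.0, s + poisson_sample(mean, rng) - k)
        if s >= h:
            return i
    return max_n

rng = random.Random(7)
# Reference value k = 5 sits between the in-control mean 4
# and the shifted mean 6 (all three values are assumptions).
arl0 = sum(cusum_run_length(rng, 4.0, 5.0, 4.0) for _ in range(300)) / 300
arl1 = sum(cusum_run_length(rng, 6.0, 5.0, 4.0) for _ in range(300)) / 300
```

The study computes the average run length exactly via a Markov-chain approximation (discretizing S into states); the simulation above only illustrates the qualitative behavior that the run length is long in control and short after an upward shift.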
This research studies paired data models with mixed random parameters, which contain two types of parameters: one random and the other fixed. The random parameter arises from differences in the marginal tendencies of the cross-sections, while the fixed parameter arises from differences in the fixed limits and the random errors of each section, which bear the characteristic of heteroscedasticity in addition to first-order serial correlation. The main objective of this research is to use efficient methods appropriate to paired data in the case of small samples; to achieve this goal, the feasible generalized least squares …
Massive multiple-input multiple-output (massive MIMO) is a promising technology for next-generation wireless communication systems due to its capability to increase the data rate and meet the enormous ongoing data-traffic explosion. However, in non-reciprocal channels, such as those encountered in frequency-division duplex (FDD) systems, channel state information (CSI) estimation using a downlink (DL) training sequence is to date a very challenging issue, especially when the channel exhibits a short coherence time. In particular, the availability of sufficiently accurate CSI at the base transceiver station (BTS) allows an efficient precoding design in the DL transmission to be achieved, and thus reliable communication systems can be obtained. …
Abstract
The problem of missing data represents a major obstacle for researchers in the process of data analysis, since it recurs in all fields of study, including social, medical, astronomical, and clinical experiments.
The presence of such a problem in the data under study may negatively influence the analysis and lead to misleading conclusions, which result from the great bias it causes. Although wavelet methods are efficient, they too are affected by missing data, in addition to the resulting loss of estimation accuracy …