In this research, the parameters of the Type 1 (maximum) Gumbel distribution were estimated using two estimation methods: the Method of Moments (MoM) and the Modified Moments (MM) method. Simulation was used to compare the estimation methods and identify the best one for estimating the parameters: random data following the Gumbel distribution were generated according to three models of true parameter values, for different sample sizes, with R = 500 replicates per sample. The assessment results were placed in tables prepared for comparison, which was based on the mean squared error (MSE).
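As a minimal illustration of the moment-based estimation and MSE comparison described above (a sketch only; the parameter values and sample size below are arbitrary examples, not the paper's three models), the following Python code generates Gumbel-distributed samples, estimates the location and scale parameters by the Method of Moments, and averages the squared errors over R = 500 replicates:

import numpy as np

def gumbel_mom(x):
    # Method-of-Moments estimates for the Type 1 (maximum) Gumbel distribution:
    # E[X] = mu + gamma*beta and Var[X] = (pi^2/6)*beta^2, gamma = Euler-Mascheroni constant.
    gamma = 0.5772156649
    beta_hat = np.sqrt(6.0) * x.std(ddof=1) / np.pi
    mu_hat = x.mean() - gamma * beta_hat
    return mu_hat, beta_hat

def simulate_mse(mu, beta, n, R=500, seed=0):
    # MSE of the MoM estimators over R replicated samples of size n.
    rng = np.random.default_rng(seed)
    est = np.array([gumbel_mom(rng.gumbel(mu, beta, n)) for _ in range(R)])
    return ((est - np.array([mu, beta])) ** 2).mean(axis=0)  # (MSE of mu, MSE of beta)

print(simulate_mse(mu=2.0, beta=1.5, n=50))  # hypothetical parameter model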
Purpose: The research aims to estimate models representing phenomena that follow the logic of circular (angular) data, accounting for the 24-hour periodicity in measurement. Theoretical framework: The regression model is developed to account for the periodic nature of the circular scale, considering the periodicity in the dependent variable y, the explanatory variables x, or both. Design/methodology/approach: Two estimation methods were applied: a parametric model, represented by the Simple Circular Regression (SCR) model, and a nonparametric model, represented by the Nadaraya-Watson Circular Regression (NW) model. The analysis used real data from 50 patients at Al-Kindi Teaching Hospital in Baghdad. Findings: The Mean Circular Erro
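One common nonparametric formulation consistent with the Nadaraya-Watson circular model mentioned above is sketched below; it assumes a circular response and a circular predictor recorded on the 24-hour clock and uses a von Mises-style kernel, which may differ in detail from the estimator applied in the study:

import numpy as np

def hours_to_angle(t):
    # Map clock time in hours [0, 24) to an angle in radians.
    return 2.0 * np.pi * np.asarray(t, dtype=float) / 24.0

def nw_circular(x_train, y_train, x_new, kappa=2.0):
    # Nadaraya-Watson-type fit of a circular response at x_new:
    # von Mises-style weights on the angular distance in x, and the fitted
    # value is the weighted circular mean of the observed responses.
    w = np.exp(kappa * np.cos(x_new - x_train))
    s = np.sum(w * np.sin(y_train))
    c = np.sum(w * np.cos(y_train))
    return np.arctan2(s, c) % (2.0 * np.pi)

# Toy usage with hypothetical measurement times (hours of the day):
x = hours_to_angle([1, 5, 9, 13, 17, 21])
y = (x + 0.3) % (2 * np.pi)          # synthetic circular response
print(nw_circular(x, y, hours_to_angle(11)))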
Sediment accumulated in sewers is a major source of concern, as it induces numerous operational and environmental problems. For instance, during wet-weather flow, the re-suspension of this sediment accompanied by combined sewer overflow may impose a huge pollutant load on the receiving water body. The characteristics of the sewer sediment are important, as they shape its behaviour and determine the extent of the pollution load. In this paper, an investigation and characterization of sewer sediment is carried out for a case study in Baghdad city. Sediment depth covers more than 50% of the sewer cross-sectional area; several operational causes combine to produce these large depths of sediment deposition. The testing and analysis of the s
Wireless channels are typically much noisier than wired links and are subject to fading due to multipath propagation, which results in ISI and hence a high error rate. Adaptive modulation is a powerful technique for improving the trade-off between spectral efficiency and Bit Error Rate (BER). In order to adjust the transmission rate, channel state information (CSI) is required at the transmitter side.
In this paper, the performance enhancement obtained by using linear prediction together with channel estimation to track channel variations for adaptive modulation is examined. The simulation results show that channel estimation alone is sufficient for low Doppler frequency shifts (<30 Hz), while channel prediction is much more suited at
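The sketch below illustrates the combination of linear channel prediction and SNR-threshold adaptive modulation in a generic form; the predictor order, SNR thresholds and modulation set are illustrative assumptions rather than the configuration simulated in the paper:

import numpy as np

def linear_predict(h_hist, order=4):
    # One-step linear prediction of the complex channel gain: fit an
    # AR(order) predictor to the history of channel estimates (most recent last).
    h = np.asarray(h_hist)
    X = np.array([h[i:i + order] for i in range(len(h) - order)])
    y = h[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)   # predictor coefficients
    return np.dot(a, h[-order:])                # predicted next gain

def select_modulation(snr_db, thresholds=(8.0, 15.0, 22.0)):
    # Pick a modulation scheme from the predicted SNR (example thresholds in dB).
    schemes = ["BPSK", "QPSK", "16-QAM", "64-QAM"]
    return schemes[int(np.searchsorted(thresholds, snr_db))]

# Toy usage: a slowly fading channel history and a 10 dB transmit SNR budget.
rng = np.random.default_rng(1)
h_hist = np.exp(1j * 0.05 * np.arange(40)) + 0.05 * rng.standard_normal(40)
snr_db = 10.0 + 20.0 * np.log10(abs(linear_predict(h_hist)))
print(select_modulation(snr_db))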
Channel estimation and synchronization are considered the most challenging issues in Orthogonal Frequency Division Multiplexing (OFDM) systems. OFDM is highly affected by synchronization errors, which reduce subcarrier orthogonality and lead to significant performance degradation. Synchronization errors cause two problems: Symbol Time Offset (STO), which produces inter-symbol interference (ISI), and Carrier Frequency Offset (CFO), which results in inter-carrier interference (ICI). The aim of the research is to simulate comb-type pilot-based channel estimation for an OFDM system, showing the effect of the number of pilots on channel estimation performance, and to propose a modified estimation method for STO with less numb
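A minimal sketch of comb-type pilot-aided channel estimation in its generic least-squares-plus-interpolation form is given below; the number of subcarriers, pilot spacing and pilot values are illustrative assumptions, not the settings used in the research:

import numpy as np

def comb_pilot_estimate(rx_symbols, pilot_idx, pilot_values, n_subcarriers):
    # LS channel estimates H = Y/X at the pilot subcarriers, then linear
    # interpolation (real and imaginary parts separately) over the data subcarriers.
    h_ls = rx_symbols[pilot_idx] / pilot_values
    k = np.arange(n_subcarriers)
    return np.interp(k, pilot_idx, h_ls.real) + 1j * np.interp(k, pilot_idx, h_ls.imag)

# Toy usage: 64 subcarriers with a pilot on every 8th subcarrier.
n = 64
pilot_idx = np.arange(0, n, 8)
pilot_values = np.ones(len(pilot_idx), dtype=complex)
true_h = np.exp(-1j * 2 * np.pi * np.arange(n) * 3 / n)   # simple single-delay channel
rx = true_h * np.ones(n, dtype=complex)                   # all-ones transmitted symbols
h_est = comb_pilot_estimate(rx, pilot_idx, pilot_values, n)
print(np.max(np.abs(h_est - true_h)))                     # interpolation error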
This research deals with a shrinkage method for principal components similar to the one used in multiple regression, the “Least Absolute Shrinkage and Selection Operator (LASSO)”. The goal here is to form uncorrelated linear combinations from only a subset of explanatory variables that may have a multicollinearity problem, instead of taking the whole number, say (K), of them. This shrinkage forces some coefficients to equal zero, after imposing a restriction on them through a "tuning parameter", say (t), which balances the amounts of bias and variance on the one hand and does not exceed the acceptable percentage of variance explained by these components on the other. This had been shown by the MSE criterion in the regression case and the percent explained v
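Because principal-component scores are uncorrelated, an L1-type restriction on their regression coefficients reduces to soft-thresholding. The sketch below is an illustrative analogue of the approach rather than the authors' exact procedure, with lam playing the role of the tuning parameter (t):

import numpy as np

def pc_lasso(X, y, lam):
    # LASSO-style shrinkage of principal-component regression coefficients:
    # since the component scores are orthogonal, the L1-penalised coefficients
    # are soft-thresholded versions of the per-component least-squares coefficients.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = U * s                                    # principal-component scores
    b_ols = Z.T @ yc / (s ** 2)                  # per-component OLS coefficients
    b_shrunk = np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam / (s ** 2), 0.0)
    explained = (s ** 2) / np.sum(s ** 2)        # share of variance per component
    return b_shrunk, explained

# Toy usage with two nearly collinear predictors (hypothetical data):
rng = np.random.default_rng(0)
x1 = rng.standard_normal(100)
X = np.column_stack([x1, x1 + 0.01 * rng.standard_normal(100), rng.standard_normal(100)])
y = X @ np.array([1.0, 1.0, 0.5]) + rng.standard_normal(100)
coef, explained = pc_lasso(X, y, lam=5.0)
print(coef, explained.round(3))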
In this work, the thermodynamic efficiency of an individual cell and of a stack of two cells was computed by studying the variation of the voltage produced over a 30-minute operating period as a function of the affecting parameters: stoichiometric feed ratio, flow-field design for the single cell, and feed distribution for the stack of cells. The experiments were carried out using two cells, one with a serpentine flow field and the other with a spiral flow field. These cells were fed with hydrogen and oxygen at low volumetric flow rates of 1 to 2 ml/s and fuel (H2) to oxidant (O2) stoichiometric ratios of 1:2, 1:1 and 2:1, respectively. The results showed that
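As a worked illustration of how a cell's thermodynamic (voltage) efficiency can be obtained from the measured voltage, the textbook definition divides the cell voltage by the thermoneutral voltage of the hydrogen-oxygen reaction (about 1.48 V on a higher-heating-value basis); this definition and the 0.65 V reading below are assumptions for illustration, not values quoted from the paper:

E_TN = 1.482          # V, thermoneutral voltage of H2 + 1/2 O2 -> H2O (HHV basis)
v_cell = 0.65         # V, hypothetical measured single-cell voltage
efficiency = v_cell / E_TN
print(f"Cell efficiency ~ {efficiency:.1%}")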
In this paper, the packing problem for complete (k, 4)-arcs in PG(2, 13) is partially solved. The minimum and the maximum sizes of complete (k, 4)-arcs in PG(2, 13) are obtained. The idea used for this classification is based on the algorithm introduced in Section 3 of this paper. The paper also establishes the connection between projective geometry, in terms of a complete (k, 4)-arc in PG(2, 13), and the algebraic characteristics of a plane quartic curve over the field GF(13), represented by the number of its rational points and inflexion points. In addition, some sizes of complete (k, 6)-arcs in the projective plane of order thirteen are established, namely for k = 53, 54, 55, 56.
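To illustrate the algebraic side of this connection, the short sketch below counts the rational points of a plane quartic curve over GF(13) by running through normalized representatives of the points of PG(2, 13); the particular quartic form is an arbitrary example, not one taken from the paper:

q = 13

def projective_points(q):
    # Normalized representatives (x:y:z) of the q^2 + q + 1 points of PG(2, q).
    pts = [(1, y, z) for y in range(q) for z in range(q)]
    pts += [(0, 1, z) for z in range(q)]
    pts += [(0, 0, 1)]
    return pts

def quartic(x, y, z):
    # An example homogeneous quartic form f(x, y, z) over GF(13).
    return (x**4 + y**4 + z**4 + x * y * z * (x + y + z)) % q

rational_points = [p for p in projective_points(q) if quartic(*p) == 0]
print(len(projective_points(q)), len(rational_points))   # 183 points in PG(2, 13)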
In this study, we briefly review the ARIMA(p, d, q), EWMA and DLM (dynamic linear modelling) procedures in order to accommodate the autocorrelation structure of the data. We consider recursive estimation and prediction algorithms based on Bayes and Kalman filtering (KF) techniques for correlated observations. We investigate the effect on the MSE of these procedures and compare them using generated data.
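A minimal sketch of the Kalman-filter recursion for a local-level DLM (a random-walk mean observed with noise) is given below; the variances and the simulated series are arbitrary examples used only to show the recursive estimation and one-step prediction, not the study's models:

import numpy as np

def local_level_kf(y, sigma_obs=1.0, sigma_state=0.5, m0=0.0, c0=10.0):
    # Kalman filter for the local-level DLM: y_t = mu_t + v_t, mu_t = mu_{t-1} + w_t.
    m, c = m0, c0
    filtered, one_step_pred = [], []
    for yt in y:
        r = c + sigma_state**2            # prior variance of mu_t (predict step)
        one_step_pred.append(m)           # one-step-ahead forecast of y_t
        k = r / (r + sigma_obs**2)        # Kalman gain
        m = m + k * (yt - m)              # posterior mean of mu_t (update step)
        c = (1.0 - k) * r                 # posterior variance of mu_t
        filtered.append(m)
    return np.array(filtered), np.array(one_step_pred)

# Toy usage on autocorrelated data generated from the same model:
rng = np.random.default_rng(2)
mu = np.cumsum(0.5 * rng.standard_normal(200))
y = mu + rng.standard_normal(200)
filt, pred = local_level_kf(y)
print(np.mean((pred - y) ** 2))           # one-step-ahead forecast MSE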
Estimating an individual's age from a photograph of their face is critical in many applications, including intelligence and defense, border security, human-machine interaction, and soft biometric recognition. Recent progress in this area has focused on deep learning, with solutions that require designing and training deep neural networks specifically for this task. In addition, pre-trained deep neural networks are used for facial recognition and fine-tuned for accurate results. The purpose of this study was to offer a method for estimating human age from the frontal view of the face in a manner that is as accurate as possible and takes
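A rough sketch of the transfer-learning idea described above is shown below; the ResNet-18 backbone, L1 loss and frozen-feature setup are assumptions for illustration, not the architecture or training recipe used in the study:

import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pre-trained backbone and replace its classifier head
# with a single-output regression layer that predicts age in years.
model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                       # freeze the pre-trained features
model.fc = nn.Linear(model.fc.in_features, 1)     # new head, trainable by default

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.L1Loss()                           # mean absolute error in years

def train_step(images, ages):
    # One fine-tuning step on a batch of face crops (N, 3, 224, 224) and ages (N,).
    optimizer.zero_grad()
    loss = criterion(model(images).squeeze(1), ages.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for preprocessed face images:
print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([25.0, 40.0, 31.0, 60.0])))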