Methods of estimating statistical distributions have long attracted researchers concerned with fitting a specific distribution to data. However, when the data come from more than one component, a single standard distribution cannot be fitted to them. To tackle this issue, mixture models are fitted by choosing the correct number of components to represent the data. This situation arises naturally in lifetime processes, which appear in a wide range of engineering applications as well as biological systems. In this paper, we present an application of estimating a finite mixture of Inverse Rayleigh distributions within a Bayesian framework using Markov chain Monte Carlo (MCMC) methods, employing the Gibbs sampler and Metropolis-Hastings algorithms. The proposed techniques are applied to simulated data under several scenarios. The accuracy of estimation is examined by the average mean square error (AMSE) and the average classification success rate (ACSR). The results show that the method performed well in all simulation scenarios and for different sample sizes.
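Because the Inverse Rayleigh likelihood is conjugate to a Gamma prior under the parameterization f(x; theta) = 2*theta*x^(-3)*exp(-theta/x^2), a finite mixture of such components can be sampled with a plain Gibbs scheme. The following is a minimal sketch assuming that parameterization, Gamma(a, b) priors on the component parameters, and a symmetric Dirichlet prior on the weights; these prior choices are illustrative assumptions, not necessarily the exact setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def inv_rayleigh_pdf(x, theta):
    # Assumed parameterization: f(x; theta) = 2*theta*x**-3 * exp(-theta / x**2), x > 0
    return 2.0 * theta * x**-3 * np.exp(-theta / x**2)

def gibbs_mixture_ir(x, K=2, n_iter=5000, a=1.0, b=1.0, alpha=1.0):
    """Gibbs sampler sketch for a K-component Inverse Rayleigh mixture.
    Priors (assumed): theta_k ~ Gamma(a, b), mixture weights ~ Dirichlet(alpha)."""
    n = len(x)
    theta = rng.gamma(a, 1.0 / b, size=K)     # initial draws from the prior
    w = np.full(K, 1.0 / K)
    draws = []
    for _ in range(n_iter):
        # 1. Sample latent component labels z_i from their full conditional
        resp = w * inv_rayleigh_pdf(x[:, None], theta)        # n x K responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=p) for p in resp])
        # 2. Update mixture weights from the Dirichlet full conditional
        counts = np.bincount(z, minlength=K)
        w = rng.dirichlet(alpha + counts)
        # 3. Conjugate Gamma update for each component parameter
        for k in range(K):
            s = np.sum(1.0 / x[z == k] ** 2)
            theta[k] = rng.gamma(a + counts[k], 1.0 / (b + s))
        draws.append((w.copy(), theta.copy()))
    return draws
```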
The research aimed to identify the causes of bullying behavior among first intermediate grade students from the point of view of male and female teachers, and the methods of modifying it. The two researchers used the descriptive method and selected a random sample of teachers from Ard Al-Rafidain Intermediate School and Al-Rahman Intermediate School for Boys, numbering (46) teachers: (32) male and (14) female. The researchers adopted a questionnaire as the instrument for identifying the causes of bullying behavior and the methods of modifying it. The results of the research indicated a variety of causes of bullying
The performance quality and searching speed of the Block Matching (BM) algorithm are affected by the shapes and sizes of the search patterns used in the algorithm. In this paper, Kite Cross Hexagonal Search (KCHS) is proposed. This algorithm uses different search patterns (kite, cross, and hexagonal) to search for the best Motion Vector (MV). In the first step, KCHS uses a cross search pattern. In the second step, it uses one of the kite search patterns (up, down, left, or right, depending on the first step). In subsequent steps, it uses large/small Hexagonal Search (HS) patterns. This new algorithm is compared with several known fast block matching algorithms. Comparisons are based on search points and Peak Signal to Noise Ratio (PSNR). According to the results
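To make the pattern-search idea concrete, here is a minimal sketch of a cross-then-hexagon search driven by a Sum of Absolute Differences (SAD) cost. The kite step of KCHS is omitted and the pattern offsets are illustrative assumptions, so this is a generic fast block-matching skeleton rather than the KCHS algorithm itself.

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of Absolute Differences between a block and the reference-frame
    region whose top-left corner is (y, x)."""
    h, w = block.shape
    cand = ref[y:y + h, x:x + w]
    return np.abs(block.astype(int) - cand.astype(int)).sum()

# Illustrative search patterns (row/column offsets around the current centre).
CROSS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
LARGE_HEX = [(0, 0), (-2, 0), (2, 0), (-1, -2), (-1, 2), (1, -2), (1, 2)]
SMALL_HEX = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def pattern_search(block, ref, y0, x0, max_iter=16):
    """Simplified pattern search: a cross step, then large-hexagon refinement
    until the centre stops moving, then a final small-hexagon step.
    (The kite step used by KCHS is omitted in this sketch.)"""
    h, w = block.shape
    def best(center, pattern):
        cy, cx = center
        cands = [(cy + dy, cx + dx) for dy, dx in pattern
                 if 0 <= cy + dy <= ref.shape[0] - h
                 and 0 <= cx + dx <= ref.shape[1] - w]
        return min(cands, key=lambda p: sad(block, ref, *p))
    center = best((y0, x0), CROSS)            # step 1: cross pattern
    for _ in range(max_iter):                 # later steps: large hexagon
        nxt = best(center, LARGE_HEX)
        if nxt == center:
            break
        center = nxt
    center = best(center, SMALL_HEX)          # final small-hexagon refinement
    return center[0] - y0, center[1] - x0     # motion vector (dy, dx)
```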
Logistic regression is one of the statistical methods used to describe and estimate the relationship between a random response variable (Y) and explanatory variables (X), where the dependent variable is a binary response taking two values (one when a specific event occurs and zero when it does not), such as injured/uninjured or married/unmarried. A large number of explanatory variables leads to the problem of multicollinearity, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used to estimate a binary-response logistic regression model, adopting the Jackknife
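As an illustration of how a ridge penalty counters multicollinearity in a binary-response logistic model, the sketch below fits ridge-penalized logistic regression by iteratively reweighted least squares. The penalty parameter lam and the Newton-style updates are assumptions for illustration; the jackknife resampling step, which would refit the model leaving one observation out at a time, is not shown.

```python
import numpy as np

def ridge_logistic_irls(X, y, lam=1.0, n_iter=50, tol=1e-8):
    """Ridge-penalized logistic regression fitted by iteratively
    reweighted least squares (a sketch; lam is the ridge penalty)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))           # fitted probabilities
        W = mu * (1.0 - mu)                       # IRLS weights
        grad = X.T @ (y - mu) - lam * beta        # penalized score
        hess = X.T @ (X * W[:, None]) + lam * np.eye(p)
        step = np.linalg.solve(hess, grad)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```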
In this paper, previous studies about fuzzy regression are presented. Fuzzy regression is a generalization of the traditional regression model that formulates the relationship between independent and dependent variables in a fuzzy environment. It can be introduced through a non-parametric model as well as a semi-parametric model. Moreover, the results obtained from the previous studies and their conclusions are put forward in this context. We then suggest a novel method of estimation via new weights instead of the old weights and introduce another suggestion based on artificial neural networks.
Paper Type: Review article.
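The review does not specify the new weights, so the following is only a rough sketch of one common weighted formulation: a fuzzy least-squares fit in which triangular fuzzy responses are represented by a centre and a spread, each regressed on X with observation weights w standing in for membership degrees. The function name, the weighting scheme, and the triangular representation are illustrative assumptions.

```python
import numpy as np

def weighted_fuzzy_ls(X, y_center, y_spread, w):
    """Illustrative weighted least-squares fit for triangular fuzzy
    responses: the centres and spreads are regressed separately, with
    observation weights w playing the role of membership degrees."""
    W = np.diag(w)
    XtWX = X.T @ W @ X
    beta_center = np.linalg.solve(XtWX, X.T @ W @ y_center)
    beta_spread = np.linalg.solve(XtWX, X.T @ W @ y_spread)
    return beta_center, beta_spread
```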
In this research, several estimators are introduced. These estimators are closely related to the hazard function and are obtained using one of the nonparametric methods, namely the kernel function, for censored data with varying bandwidth and boundary kernels. Two types of bandwidth are used: a local bandwidth and a global bandwidth. Moreover, four types of boundary kernel are used, namely: Rectangle, Epanechnikov, Biquadratic, and Triquadratic, and the proposed function was employed with all kernel functions. Two different simulation techniques are also used in two experiments to compare these estimators. In most of the cases, the results show that the local bandwidth is the best for all the
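For concreteness, the sketch below smooths the Nelson-Aalen increments of right-censored data with a single global bandwidth and the Epanechnikov kernel, one of the four kernels named above. The local (varying) bandwidth and the boundary-correction step are omitted, so this is only a baseline kernel hazard estimator under those simplifying assumptions.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, one of the kernels named in the abstract."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kernel_hazard(t_grid, times, events, h):
    """Kernel hazard estimate for right-censored data obtained by
    smoothing the Nelson-Aalen increments with a global bandwidth h.
    times: observed times; events: 1 for an event, 0 for censoring."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)                 # risk-set size at each ordered time
    increments = events / at_risk              # Nelson-Aalen jumps
    u = (t_grid[:, None] - times[None, :]) / h
    return (epanechnikov(u) * increments).sum(axis=1) / h
```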
Crude oil is one of the most important sources of energy in the world, and oil refineries are needed to extract its multiple components. Refineries consist of multiple parts, including heat exchangers, furnaces, and others. One of the initial operations in a refinery is gradually raising the temperature of crude oil to 370 degrees Celsius or higher. Hence, this investigation focuses on the furnaces and the corrosion in their tubes. The investigation was accomplished by reading the thickness of the tubes for the period from 2008 to 2020, with a test every two years after their introduction into service. The thickness of more than one point was measured on each tube in the same
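As a simple illustration of what can be extracted from such biennial thickness readings, the sketch below fits a straight line to a tube's thickness history to estimate a corrosion rate and the time remaining before a minimum allowable thickness is reached. The readings, the threshold t_min, and the linear-trend assumption are all made up for illustration and are not the paper's data or method.

```python
import numpy as np

def corrosion_rate(years, thickness):
    """Estimate a tube's corrosion rate (mm/year) by fitting a straight
    line to thickness readings taken every two years."""
    slope, intercept = np.polyfit(years, thickness, 1)
    return -slope, intercept          # positive loss rate, fitted intercept

def remaining_life(years, thickness, t_min):
    """Years until the fitted thickness trend reaches the minimum
    allowable thickness t_min (hypothetical threshold)."""
    rate, intercept = corrosion_rate(years, thickness)
    current = intercept - rate * years[-1]     # fitted thickness at last reading
    return (current - t_min) / rate if rate > 0 else np.inf

# Example with made-up readings (mm) every two years:
years = np.array([2008, 2010, 2012, 2014, 2016, 2018, 2020])
thick = np.array([8.0, 7.8, 7.5, 7.4, 7.1, 6.9, 6.7])
print(corrosion_rate(years, thick))
print(remaining_life(years, thick, t_min=5.0))
```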