Bayesian models are widely used in recent research across many scientific fields. This research presents a new Bayesian model for parameter estimation and forecasting using the Gibbs sampler algorithm. Posterior distributions are generated using the inverse gamma distribution and the multivariate normal distribution as prior distributions. The new method was used to investigate and summarize the posterior distribution in Bayesian statistics. The theory and derivation of the posterior distribution are explained in detail in this paper. The proposed approach is applied to three simulated datasets of sample sizes 100, 300, and 500, and the procedure is also extended to a real dataset, the rock intensity dataset, collected from the UCI Machine Learning Repository. The findings are discussed and summarized at the end. All calculations for this research were done using R software (version 4.2.2).
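As a companion to the abstract above, the conjugate updates it names (a multivariate normal prior on the regression coefficients and an inverse gamma prior on the error variance) can be sketched in a few lines. This is a minimal illustration, not the paper's actual model; the hyperparameters `a0`, `b0`, `tau2` and the simulated regression data are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_lm(X, y, n_iter=2000, a0=2.0, b0=1.0, tau2=100.0):
    """Gibbs sampler for y ~ N(X beta, sigma^2 I) with
    beta ~ N(0, tau2 I) and sigma^2 ~ InvGamma(a0, b0)."""
    n, p = X.shape
    beta, sigma2 = np.zeros(p), 1.0
    draws = {"beta": np.empty((n_iter, p)), "sigma2": np.empty(n_iter)}
    XtX, Xty = X.T @ X, X.T @ y
    for t in range(n_iter):
        # beta | sigma^2, y : multivariate normal conjugate update
        V = np.linalg.inv(XtX / sigma2 + np.eye(p) / tau2)
        m = V @ (Xty / sigma2)
        beta = rng.multivariate_normal(m, V)
        # sigma^2 | beta, y : inverse gamma conjugate update
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))
        draws["beta"][t], draws["sigma2"][t] = beta, sigma2
    return draws

# Simulated data of size 100, the smallest sample size used in the paper
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.7, size=100)
post = gibbs_lm(X, y)
print(post["beta"][1000:].mean(axis=0), post["sigma2"][1000:].mean())
```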
This paper presents an attempt to model the rate of penetration (ROP) for an Iraqi oil field with the aid of mud logging data. Data from the Umm Radhuma formation were selected for this modeling; they include weight on bit, rotary speed, flow rate, and mud density. A statistical approach was applied to these data to improve ROP modeling. As a result, an empirical linear ROP model was developed with good fitness when compared with the actual data. A nonlinear regression analysis of different functional forms was also attempted, and the results showed that the power model has good predictive capability relative to the other forms.
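To make the regression approach concrete, here is a minimal sketch of fitting both a linear and a log-linearized power ROP model by ordinary least squares. The synthetic data, the coefficient values, and the particular power form ROP = a·WOB^b1·RPM^b2·Q^b3·MW^b4 are illustrative assumptions, not the field data or the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mud-logging records: weight on bit (klb), rotary speed (rpm),
# flow rate (gpm), mud density (ppg), stand-ins for the field data.
n = 200
wob = rng.uniform(5, 30, n)
rpm = rng.uniform(60, 180, n)
q = rng.uniform(300, 800, n)
mw = rng.uniform(8.5, 12.0, n)
rop = 0.8 * wob**0.6 * rpm**0.4 * q**0.2 * mw**-0.5 * rng.lognormal(0, 0.05, n)

# Linear model: ROP = b0 + b1*WOB + b2*RPM + b3*Q + b4*MW (ordinary least squares)
X_lin = np.column_stack([np.ones(n), wob, rpm, q, mw])
b_lin, *_ = np.linalg.lstsq(X_lin, rop, rcond=None)

# Power model: ROP = a * WOB^b1 * RPM^b2 * Q^b3 * MW^b4,
# linearized by taking logs so OLS applies.
X_pow = np.column_stack([np.ones(n), np.log(wob), np.log(rpm),
                         np.log(q), np.log(mw)])
b_pow, *_ = np.linalg.lstsq(X_pow, np.log(rop), rcond=None)

print("linear coefficients:", b_lin)
print("power exponents:", b_pow[1:], "multiplier:", np.exp(b_pow[0]))
```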
The aim of this research is to estimate a hidden population: the number of male drug users in Baghdad in the age group 15-60 years, based on Bayesian models. These models are used to treat some of the bias in the Killworth method, which is relied upon in many countries of the world.
Four models were used: random degree, barrier effects, and transmission bias. The first model, random degree, is an extension of the Killworth model that adds random effects, such as variance and uncertainty, through the size of the personal network; when expanded by adding the fact that respondents have different tendencies, the mixture of non-random with random variables produces …
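For orientation, the baseline Killworth (network scale-up) estimator that these Bayesian models extend is N_hidden ≈ N · Σy_i / Σd_i, where d_i is respondent i's personal network size and y_i is the number of hidden-population members that respondent reports knowing. The sketch below uses simulated survey responses and an assumed frame-population size; it shows only the classical estimator, not the four Bayesian extensions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical survey of 500 respondents: d_i = personal network size,
# y_i = number of drug users each respondent reports knowing.
N = 4_000_000                        # assumed size of the male 15-60 population
d = rng.poisson(150, 500)            # reported network sizes
true_prev = 0.012
y = rng.binomial(d, true_prev)       # reported hidden-population contacts

# Killworth (MLE) scale-up estimator: N_hidden = N * sum(y_i) / sum(d_i)
n_hidden_hat = N * y.sum() / d.sum()
print(f"estimated hidden population: {n_hidden_hat:,.0f}")
print(f"true value in this simulation: {N * true_prev:,.0f}")
```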
The use of deep learning …
This research deals with building a probabilistic linear programming model representing the production operations of the Middle Refinery Company (Dura, Semawa, Najaif), considering that the demand for each product (gasoline, kerosene, gas oil, fuel oil) is a random variable following a certain probability distribution. The distributions were tested using the statistical program EasyFit and found to be the Cauchy, Erlang, Pareto, normal, and generalized extreme value distributions.
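One standard way to turn such a probabilistic LP into a solvable deterministic program is a chance-constrained reformulation: a constraint P(x_j ≥ D_j) ≥ α becomes x_j ≥ F_j⁻¹(α), the α-quantile of the fitted demand distribution. The sketch below assumes a toy two-product model with made-up profits, capacity, and stand-in distributions; it is not the company's actual model.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical two-product refinery model: maximize profit subject to a
# capacity limit and chance constraints P(production_j >= demand_j) >= 0.95.
alpha = 0.95
profit = np.array([40.0, 55.0])             # assumed profit per unit
demand_dists = [stats.norm(100, 15),        # stand-ins for the fitted
                stats.erlang(4, scale=30)]  # EasyFit distributions

# Deterministic equivalent: x_j >= quantile_j(alpha)
min_prod = np.array([d.ppf(alpha) for d in demand_dists])

# linprog minimizes, so negate the profit vector; one shared capacity row.
res = optimize.linprog(
    c=-profit,
    A_ub=[[1.0, 1.0]],       # total capacity usage
    b_ub=[400.0],
    bounds=[(min_prod[0], None), (min_prod[1], None)],
)
print("production plan:", res.x, "profit:", -res.fun)
```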
In this paper, the maximum likelihood estimates for the parameters of the two-parameter Weibull distribution are studied, as well as White estimators and Bain & Antle estimators, along with the Bayes estimator for the scale parameter. Simulation procedures are used to obtain the estimators and to compare them using MSE. The methods are also applied to data for 20 patients suffering from headache disease.
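A minimal version of such a simulation comparison is easy to reproduce: draw repeated Weibull samples of size 20, fit by maximum likelihood, and compute the Monte Carlo MSE of the scale estimate. The true parameter values and replication count below are assumptions for illustration, and only the MLE (not the White, Bain & Antle, or Bayes estimators) is shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_shape, true_scale = 1.5, 2.0
n, reps = 20, 1000   # sample size matching the 20-patient application

mle_scale = np.empty(reps)
for r in range(reps):
    x = true_scale * rng.weibull(true_shape, n)
    # Maximum likelihood fit of the two-parameter Weibull (location fixed at 0)
    shape_hat, _, scale_hat = stats.weibull_min.fit(x, floc=0)
    mle_scale[r] = scale_hat

# Monte Carlo MSE of the MLE of the scale parameter
mse = np.mean((mle_scale - true_scale) ** 2)
print(f"MLE scale MSE over {reps} simulations: {mse:.4f}")
```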
Scheduling is considered one of the most fundamental and essential bases of project management. Several methods are used for project scheduling, such as CPM, PERT, and GERT. Since too many uncertainties are involved in estimating the duration and cost of activities, these methods lack the capability of modeling practical projects. Although schedules can be developed for construction projects at an early stage, there is always a possibility of unexpected material or technical shortages during the construction stage. The objective of this research is to build a fuzzy mathematical model that includes time-cost trade-off and resource-constraint analysis applied concurrently. The proposed model has been formulated using fuzzy …
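A common building block of such fuzzy scheduling models is arithmetic on triangular fuzzy numbers: activity durations are represented as (optimistic, most likely, pessimistic) triples, added along a path, and defuzzified to a crisp value. The sketch below uses hypothetical activity durations and illustrates only this representation, not the paper's full time-cost trade-off model.

```python
# Triangular fuzzy numbers (a, b, c) = (optimistic, most likely, pessimistic),
# a common representation of uncertain durations in fuzzy scheduling.
def tfn_add(x, y):
    """Fuzzy addition of two triangular fuzzy numbers."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def defuzzify(x):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(x) / 3.0

# Hypothetical critical path of three activities with fuzzy durations (days)
activities = [(4, 6, 9), (10, 12, 16), (3, 5, 8)]
path = (0, 0, 0)
for a in activities:
    path = tfn_add(path, a)

print("fuzzy project duration:", path)            # (17, 23, 33)
print("crisp (defuzzified) duration:", round(defuzzify(path), 2))
```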
A database is characterized as an arrangement of data that is organized and distributed in a way that allows the client to access the stored data simply and conveniently. However, in the era of big data, traditional data analytics methods may not be able to manage and process such large amounts of data. In order to develop an efficient way of handling big data, this work studies the use of the MapReduce technique to handle big data distributed on the cloud. The approach was evaluated using a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear enhancement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r…
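The essence of the MapReduce pattern applied here can be illustrated without a Hadoop cluster: a map step computes partial statistics over chunks of the signal in parallel, and a reduce step combines them. The fake single-channel EEG record and the mean-power statistic below are stand-ins for the actual EEG big-data pipeline.

```python
from multiprocessing import Pool
from functools import reduce
import numpy as np

# Toy stand-in for the MapReduce pattern: map a per-chunk statistic over
# pieces of an EEG record in parallel, then reduce the partial results.
def map_chunk(chunk):
    """Map step: per-chunk sum and count of the signal power."""
    return np.square(chunk).sum(), chunk.size

def reduce_pair(a, b):
    """Reduce step: combine partial (sum, count) results."""
    return a[0] + b[0], a[1] + b[1]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    eeg = rng.normal(size=1_000_000)          # fake single-channel EEG record
    chunks = np.array_split(eeg, 16)
    with Pool(4) as pool:
        partials = pool.map(map_chunk, chunks)
    total_power, n = reduce(reduce_pair, partials)
    print("mean signal power:", total_power / n)
```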
Recently, financial mathematics has emerged to interpret and predict the underlying mechanism that generates an incident of concern. A system of differential equations can reveal the dynamical development of a financial mechanism across time. A multivariate Wiener process represents the stochastic term in a system of stochastic differential equations (SDEs). The standard Wiener process follows a Markov chain, and hence it is a martingale, which is a good integrator. The fractional Wiener process, though, does not follow a Markov chain, and hence it is not a good integrator. This problem produces arbitrage (non-equilibrium in the market) in the predicted series, an undesired property that leads to erroneous conclusions …
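The contrast the abstract draws can be seen by simulating both processes. A standard construction generates fractional Brownian motion via a Cholesky factorization of its covariance kernel; Hurst index H = 0.5 recovers the standard Wiener process with uncorrelated increments, while H ≠ 0.5 yields the correlated increments that break the Markov/martingale property. The grid size and H values below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def fbm(n, hurst, t_max=1.0):
    """Fractional Brownian motion on a grid via Cholesky factorization of
    its covariance: Cov(B_s, B_t) = 0.5*(s^2H + t^2H - |t - s|^2H)."""
    t = np.linspace(t_max / n, t_max, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2*hurst) + u**(2*hurst) - np.abs(s - u)**(2*hurst))
    L = np.linalg.cholesky(cov)
    return t, L @ rng.standard_normal(n)

# H = 0.5 gives the standard (Markovian, martingale) Wiener process;
# H != 0.5 gives the correlated-increment fractional case discussed above.
_, bm = fbm(500, hurst=0.5)
_, fbm_path = fbm(500, hurst=0.8)
for name, path in [("H=0.5", bm), ("H=0.8", fbm_path)]:
    inc = np.diff(path)
    print(f"increment autocorrelation, {name}:",
          np.corrcoef(inc[:-1], inc[1:])[0, 1])
```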
The purpose of this paper is to apply robustness in linear programming (LP) to deal with the uncertainty problem in constraint parameters and to find the robust optimal solution that maximizes the profits of the general productive company of vegetable oils for the year 2019. This is done by modifying the linear programming mathematical model when some of its parameters have uncertain values, and processing it using the robust counterpart of linear programming to obtain results that are robust against the random changes in the uncertain values of the problem, assuming these values belong to an uncertainty set and selecting the values that cause the worst results …
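A minimal sketch of the robust counterpart idea under box (interval) uncertainty: for a ≤ constraint with nonnegative variables and coefficients a_ij ∈ [ā_ij − δ_ij, ā_ij + δ_ij], the worst case is attained at the upper endpoints, so the robust counterpart simply replaces the nominal matrix A with A + δ. All numbers below are hypothetical, not the company's 2019 data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy robust LP under box uncertainty: each constraint coefficient lies in
# [A - dA, A + dA]; with x >= 0 the worst case for A x <= b is A + dA,
# i.e. the values that cause the worst results in the uncertainty set.
profit = np.array([30.0, 50.0])          # hypothetical per-unit profits
A = np.array([[2.0, 4.0],                # nominal resource usage
              [3.0, 2.0]])
dA = np.array([[0.2, 0.5],               # uncertainty half-widths
               [0.3, 0.2]])
b = np.array([100.0, 90.0])

nominal = linprog(-profit, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
robust = linprog(-profit, A_ub=A + dA, b_ub=b, bounds=[(0, None)] * 2)
print("nominal plan:", nominal.x, "profit:", -nominal.fun)
print("robust  plan:", robust.x, "profit:", -robust.fun)
```

As expected, the robust plan sacrifices some nominal profit in exchange for a solution that remains feasible for every realization of the coefficients in the uncertainty set.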