The investigation of machine learning techniques for addressing missing well-log data has garnered considerable interest recently, especially as the oil and gas sector pursues novel approaches to improve data interpretation and reservoir characterization. However, for wells that have been in operation for several years, conventional measurement techniques frequently encounter availability challenges, including missing well-log data, cost constraints, and precision issues. This study aims to enhance reservoir characterization by automating well-log generation using machine-learning techniques, among them multi-resolution graph-based clustering and the similarity threshold method. By applying these techniques, our methodology shows a notable improvement in the precision and effectiveness of well-log predictions. Standard well logs from a reference well were used to train the machine learning models, and conventional wireline logs were used as input to estimate facies for unclassified wells lacking core data. R-squared analysis and goodness-of-fit tests provide a numerical assessment of model performance, strengthening the validation process. The multi-resolution graph-based clustering and similarity threshold approaches achieved an accuracy of nearly 98%. Applying these techniques to data from eighteen wells produced precise results, demonstrating the effectiveness of our approach in enhancing the reliability and quality of generated well logs.
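As a rough illustration of the workflow this abstract describes, the sketch below trains a supervised classifier on reference-well logs and scores it on held-out data. The multi-resolution graph-based clustering and similarity threshold methods are not standard library routines, so a random forest stands in for them here purely to show the pipeline; the curve names (GR, RHOB, NPHI, DT), the FACIES column, and the file name are assumptions, not the authors' schema.

```python
# Hedged sketch: facies estimation from conventional wireline logs.
# A random forest stands in for the paper's methods; all names are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

logs = pd.read_csv("reference_well_logs.csv")      # hypothetical file
features = logs[["GR", "RHOB", "NPHI", "DT"]]      # assumed log curves
labels = logs["FACIES"]                            # core-derived facies classes

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Accuracy on held-out data; the paper reports roughly 98% with its own methods.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```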
Cryptocurrency has become an important participant in the financial market, attracting large investments and broad interest. In this dynamic setting, the proposed cryptocurrency price prediction tool serves as a pivotal aid, providing direction to both enthusiasts and investors in a market shaped by the many complexities of digital currency. Employing feature selection and a trio of techniques, ARIMA, LSTM, and Linear Regression, the tool gives users a framework for analyzing data with artificial intelligence and producing real-time forecasts for the cryptocurrency market. As users navigate the tool, they are offered a wide selection of high-quality cryptocurrencies to choose from. The
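A minimal sketch of one leg of the trio, fitting an ARIMA model to a daily closing-price series with statsmodels. The model order (5, 1, 0) and the CSV layout are illustrative assumptions; the abstract does not specify them.

```python
# Hedged sketch: ARIMA price forecasting, one of the tool's three models.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical input: a CSV of daily closing prices indexed by date.
prices = pd.read_csv("btc_daily.csv", index_col="date",
                     parse_dates=True)["close"]

model = ARIMA(prices, order=(5, 1, 0))   # AR=5, difference once, no MA term
fitted = model.fit()

# Forecast the next 7 days of closing prices.
print(fitted.forecast(steps=7))
```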
Due to the easy access to satellite images, Google Earth (GE) images have become more popular than other online virtual globes. However, the popularity of GE is not an indication of its accuracy. A considerable amount of literature has been published on evaluating the positional accuracy of GE data; however, there are few studies that have investigated how to improve GE accuracy. In this paper, a practical method for enhancing the horizontal positional accuracy of GE is suggested by establishing ten reference points, in the University of Baghdad main campus, using different Global Navigation Satellite System (GNSS) observation techniques: Rapid Static, Post-Processing Kinematic, and Network. Then, the GE image for the study
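For context, horizontal positional accuracy in studies of this kind is typically summarized as a planimetric RMSE between GE-derived coordinates and the GNSS reference coordinates. The sketch below computes that statistic on made-up easting/northing pairs; the paper's ten actual reference points are not reproduced here.

```python
# Hedged sketch: horizontal RMSE of GE points against GNSS reference points.
import numpy as np

# Easting/northing pairs in metres (hypothetical values, not the paper's data).
gnss = np.array([[445120.31, 3690455.82], [445207.66, 3690511.09]])
ge   = np.array([[445123.90, 3690452.10], [445204.12, 3690514.75]])

diffs = ge - gnss                                # per-point coordinate errors
horizontal = np.hypot(diffs[:, 0], diffs[:, 1])  # planimetric error per point
rmse = np.sqrt(np.mean(horizontal ** 2))         # horizontal RMSE over all points
print(f"horizontal RMSE: {rmse:.2f} m")
```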
The vast advantages of the 3D modelling industry have urged competitors to improve capturing techniques and processing pipelines towards minimizing labour requirements, saving time, and reducing project risk. When it comes to digital 3D documentation and conservation projects, laser scanning and photogrammetry are compared to choose between the two. Since both techniques have pros and cons, this paper examines the potential issues of each technique in terms of time, budget, accuracy, density, methodology, and ease of use. Terrestrial laser scanning and close-range photogrammetry are tested to document a unique, invaluable artefact (Lady of Hatra) located in Iraq for future data fusion sc
In this paper, we propose a new method for selecting the smoothing parameter of a kernel estimator used to estimate a nonparametric regression function in the presence of missing values. The proposed method is based on the golden ratio and Surah Al-E-Imran in the Qur'an. Simulation experiments were conducted to study small-sample behavior. The results demonstrated the superiority of the proposed method over the competing method for selecting the smoothing parameter.
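To make the role of the smoothing parameter concrete, the sketch below implements a standard Nadaraya-Watson kernel regression estimator, where the bandwidth h is the quantity a selection rule such as the paper's golden-ratio-based method would choose. The fixed h used here is purely illustrative; the proposed rule itself is not reproduced.

```python
# Hedged sketch: Nadaraya-Watson kernel regression with a bandwidth h.
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def nadaraya_watson(x0, x, y, h):
    """Estimate m(x0) = E[Y | X = x0] with bandwidth h."""
    w = gaussian_kernel((x0 - x) / h)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 100)

# Illustrative bandwidth scaled by the golden ratio; not the paper's rule.
h = 0.05 * (1 + np.sqrt(5)) / 2
print(nadaraya_watson(0.5, x, y, h))
```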
This paper presents the grey model GM(1,1), a first-order model in one variable that forms the basis of grey system theory. The research addresses the properties of the grey model and a set of methods for estimating the parameters of GM(1,1): the least squares method (LS), the weighted least squares method (WLS), the total least squares method (TLS), and the gradient descent method (GD). These methods were compared using two criteria: mean square error (MSE) and mean absolute percentage error (MAPE). After comparison using simulation, the best method was applied to real data representing the consumption rates of two types of fuel, heavy fuel oil (HFO) and diesel fuel (D.O), and several tests were applied to
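GM(1,1) itself is well defined, so a sketch of the model fitted with ordinary least squares, one of the four estimators the paper compares, can be given directly. The consumption series below is hypothetical, not the paper's HFO/D.O data.

```python
# Hedged sketch: GM(1,1) fitted by ordinary least squares.
import numpy as np

def gm11(x0, horizon=1):
    """Fit GM(1,1) to series x0; return in-sample fit plus `horizon` forecasts."""
    n = len(x0)
    x1 = np.cumsum(x0)                           # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # least-squares parameter estimates
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)          # inverse AGO gives x0 predictions

x0 = np.array([2.87, 3.28, 3.34, 3.39, 3.68])    # hypothetical consumption rates
pred = gm11(x0, horizon=2)
mape = np.mean(np.abs((pred[:len(x0)] - x0) / x0)) * 100
print(pred, f"MAPE: {mape:.2f}%")
```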
Two oil wells were tested to locate abnormal pressure zones using the sonic log technique. We found that wells Abu-Jir-3 and Abu-Jir-5 had abnormal pressure zones from 4340 to 4520 feet and from 4200 to 4600 feet, respectively. The maximum difference between the obtained results and the field-measured results did not exceed 2.4%. In this paper, the formation pressures are expressed in terms of pressure gradient, which in places reached up to twice the normal pressure gradient. Drilling and developing such formations is dangerous and expensive. The plotted figures show a clear deviation from the normal trend, confirming the existence of abnormal pressure zones.
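The abstract does not state which sonic-log relation was used, but a common way to turn a deviation from the normal compaction trend into a pore-pressure estimate is Eaton's sonic equation, sketched below as a generic stand-in. All gradients and transit times are illustrative, not Abu-Jir field data.

```python
# Hedged sketch: Eaton's sonic relation for pore-pressure gradient.
def eaton_pore_pressure_gradient(obg, png, dt_normal, dt_observed, exponent=3.0):
    """Pore pressure gradient (psi/ft) from sonic transit times (us/ft).

    obg: overburden gradient; png: normal (hydrostatic) pressure gradient.
    """
    return obg - (obg - png) * (dt_normal / dt_observed) ** exponent

# Observed transit time slower than the normal trend signals undercompaction.
pp = eaton_pore_pressure_gradient(obg=1.0, png=0.465,
                                  dt_normal=90.0, dt_observed=115.0)
print(f"pore pressure gradient: {pp:.3f} psi/ft")  # > 0.465 -> abnormal pressure
```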
Deep Learning Techniques for Skull Stripping of Brain MR Images
Lung cancer is one of the leading causes of death worldwide and is considered among the most lethal diseases. Early detection and diagnosis are essential for lung cancer, enabling effective therapy and better outcomes for patients. In recent years, Deep Learning algorithms have demonstrated crucial promise for medical imaging analysis, especially lung cancer identification. This paper compares a number of Deep Learning models using Computed Tomography image datasets against traditional Convolutional Neural Network and SqueezeNet models using X-ray data for the automated diagnosis of lung cancer. Although the simple details p
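As a minimal sketch of the SqueezeNet baseline mentioned above, the snippet below adapts torchvision's SqueezeNet to a two-class (cancerous vs. normal) output. The pretrained weights, input size, and class count are assumptions; the paper's exact configuration is not given.

```python
# Hedged sketch: SqueezeNet adapted to a two-class lung cancer task.
import torch
import torch.nn as nn
from torchvision import models

model = models.squeezenet1_1(weights="IMAGENET1K_V1")  # assumed pretrained start

# SqueezeNet classifies through a final 1x1 convolution; replace it so the
# network outputs two classes instead of ImageNet's 1000.
model.classifier[1] = nn.Conv2d(512, 2, kernel_size=1)
model.num_classes = 2

x = torch.randn(4, 3, 224, 224)   # dummy batch of 4 RGB chest images
print(model(x).shape)             # torch.Size([4, 2])
```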
Text categorization refers to the process of grouping text or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to the English language, only a few studies have been done to categorize and classify the Arabic language. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research from the last five years based on the dataset, year, algorithms, and the accuracy th
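The three phases the survey names map naturally onto a small scikit-learn pipeline, sketched below as a TF-IDF plus linear SVM baseline. The two Arabic documents and their labels are illustrative; no specific surveyed system is being reproduced, and real systems would add Arabic-specific preprocessing (normalization, stemming, stop-word removal) ahead of feature extraction.

```python
# Hedged sketch: the preprocessing / feature extraction / classification
# pipeline as a TF-IDF + linear SVM baseline on two toy Arabic documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["الاقتصاد العراقي ينمو هذا العام",   # economy
        "فاز الفريق بالمباراة النهائية"]      # sports
labels = ["economy", "sports"]

# TF-IDF performs feature extraction; Arabic-specific preprocessing would
# normally run before this step.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["فاز الفريق في البطولة"]))  # likely ['sports'] via shared tokens
```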