Amplitude variation with offset (AVO) analysis is an efficient tool for hydrocarbon detection and for identifying elastic rock properties and fluid types. It has been applied in the present study using reprocessed pre-stack 2D seismic data (1992, Caulerpa) from the north-west of the Bonaparte Basin, Australia. The AVO response along the 2D pre-stack seismic data in the Laminaria High, NW Shelf of Australia, was also investigated. Three hypotheses were proposed to investigate the AVO behaviour of the amplitude anomalies, testing three different factors: fluid substitution, porosity, and thickness (wedge model). The AVO models with their synthetic gathers were analysed using log information to determine which of these factors controls the AVO response. AVO cross plots from the real pre-stack seismic data reveal AVO class IV (a negative intercept whose amplitude magnitude decreases with offset). This result matches the modelled fluid-substitution result for the seismic synthetics. It is concluded that fluid substitution is the controlling parameter in the AVO analysis, and that the high amplitude anomaly at the seabed and along the target horizon therefore results from changes in fluid content and lithology along the target horizons, whereas changing the porosity has little effect on the amplitude variation with offset within the AVO cross plot. Finally, results from the wedge models show that a small change in thickness causes a change in amplitude; however, this change in thickness gives a different AVO characteristic and a mismatch with the AVO result of the real 2D pre-stack seismic data. Therefore, a thin layer of constant thickness with changing fluids is the more likely cause of the high amplitude anomalies.
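For readers unfamiliar with intercept-gradient cross plotting, the quantities behind an AVO class assignment can be sketched with the two-term Shuey approximation, R(θ) ≈ A + B sin²θ, where A is the intercept (normal-incidence reflectivity) and B the gradient. The following is a minimal illustration, not the study's workflow; the interface properties are hypothetical values chosen to fall in the class IV quadrant (A < 0, B > 0):

```python
import numpy as np

def shuey_intercept_gradient(vp1, vs1, rho1, vp2, vs2, rho2):
    """Two-term Shuey approximation: R(theta) ~ A + B * sin^2(theta)."""
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    A = 0.5 * (dvp / vp + drho / rho)  # intercept (normal-incidence reflectivity)
    B = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)  # gradient
    return A, B

# Hypothetical hard shale over soft gas sand (Vp, Vs in m/s; rho in kg/m^3)
A, B = shuey_intercept_gradient(3000, 1800, 2400, 2600, 1500, 2100)
print(f"intercept A = {A:.3f}, gradient B = {B:.3f}")
if A < 0 and B > 0:
    print("A-B cross plot quadrant consistent with AVO class IV")
```

Plotting A against B for many such interfaces produces the intercept-gradient cross plot on which class IV anomalies of this kind are identified.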
The oil and gas industry relies heavily on IT innovations to manage business processes, but the exponential growth of data has raised concerns about processing big data, generating valuable insights, and making timely decisions. Many companies have adopted Big Data Analytics (BDA) solutions to address these challenges. However, deciding whether to adopt a BDA solution requires a thorough understanding of the contextual factors influencing such decisions. This research explores these factors using a new Technology-Organisation-Environment (TOE) framework that groups them into technological, organisational, and environmental categories. The study used a Delphi research method with seven heterogeneous panellists from an Omani oil and gas company.
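Delphi studies typically iterate questionnaire rounds until the panel converges, and a common way to gauge convergence is Kendall's coefficient of concordance (W) over the panellists' rankings. A minimal sketch follows; the panel ranks below are hypothetical and not taken from the study:

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's W for an (m panellists x n items) rank matrix (no ties):
    W = 12*S / (m^2 * (n^3 - n)), with S the sum of squared deviations
    of the column rank totals from their mean."""
    r = np.asarray(ranks, dtype=float)
    m, n = r.shape
    totals = r.sum(axis=0)
    s = ((totals - totals.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical ranks from 7 panellists over 5 candidate TOE factors
ranks = np.array([
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 2, 4, 3, 5],
    [1, 2, 3, 5, 4],
    [2, 1, 3, 4, 5],
    [1, 3, 2, 4, 5],
])
print(f"Kendall's W = {kendalls_w(ranks):.2f}")  # values near 1 => strong consensus
```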
The vast advantages of the 3D modelling industry have urged competitors to improve capture techniques and processing pipelines towards minimizing labour requirements, saving time, and reducing project risk. When it comes to digital 3D documentation and conservation projects, laser scanning and photogrammetry are commonly compared in order to choose between the two. Since both techniques have pros and cons, this paper examines the potential issues of each technique in terms of time, budget, accuracy, density, methodology, and ease of use. Terrestrial laser scanning and close-range photogrammetry are tested to document a unique, invaluable artefact (Lady of Hatra) located in Iraq for future data fusion scenarios.
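Data fusion between the two capture techniques usually begins by registering the photogrammetric point cloud against the laser scan. A minimal sketch using the open-source Open3D library is shown below; the file names and distance threshold are placeholders, and this is not necessarily the paper's pipeline:

```python
import open3d as o3d

# Placeholder file names; substitute the actual laser-scan and photogrammetry clouds
target = o3d.io.read_point_cloud("laser_scan.ply")      # reference cloud (laser scan)
source = o3d.io.read_point_cloud("photogrammetry.ply")  # cloud to be aligned

# Point-to-point ICP refines an initial alignment (identity assumed here)
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,  # metres; tune to cloud density
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("fitness:", result.fitness)        # fraction of matched points
source.transform(result.transformation)  # apply the alignment
o3d.io.write_point_cloud("fused.ply", source + target)  # naive fusion by merging
```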
The purpose of this paper is to apply different transportation models, in both their minimization and maximization forms, by finding a starting basic feasible solution and then the optimal solution. The requirements of transportation models are presented alongside one of their applications to minimizing the objective function, using real data collected by the researcher over one month in 2015 at a poultry farm for the production of eggs.
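To illustrate how a starting basic feasible solution is obtained, the sketch below implements the north-west corner rule, one of the standard starting methods, on a hypothetical balanced problem (the supplies and demands are invented, not the farm data):

```python
import numpy as np

def northwest_corner(supply, demand):
    """Starting basic feasible solution via the north-west corner rule:
    fill from the top-left cell, exhausting each row's supply or each
    column's demand before moving down or right."""
    supply, demand = list(supply), list(demand)
    alloc = np.zeros((len(supply), len(demand)))
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        alloc[i, j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1  # row exhausted, move down
        else:
            j += 1  # column satisfied, move right
    return alloc

# Hypothetical balanced problem: 3 sources, 4 destinations (total supply = total demand = 75)
print(northwest_corner([20, 30, 25], [10, 25, 15, 25]))
```

From this starting solution, the optimum is then reached iteratively, e.g. with the stepping-stone or MODI method, or directly with a linear-programming solver.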
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, but many applications have only small or inadequate datasets with which to train DL frameworks. Manual labeling is usually needed to provide labeled data, and it typically involves human annotators with extensive background knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically. Ultimately, a larger amount of data yields a better DL model, although performance is also application dependent. This issue is the main barrier for …
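One standard mitigation for this barrier is data augmentation, which stretches a small labeled set by generating randomized variants of each example. The minimal, generic sketch below (not necessarily the paper's approach) uses torchvision:

```python
import numpy as np
from PIL import Image
from torchvision import transforms

# Generic augmentation pipeline: each labeled image yields many random variants
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

dummy = Image.fromarray(np.zeros((256, 256, 3), dtype=np.uint8))  # stand-in image
for _ in range(4):
    variant = augment(dummy)  # a different random variant on every call
    print(variant.shape)      # torch.Size([3, 224, 224])
```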
High-performance self-consolidating concrete (HP-SCC) is one of the most complex types of concrete: it consolidates under its own weight, has excellent homogeneity, and offers high durability. This study focuses on the possibility of using industrial by-products such as silica fume (SF) in the preparation of HP-SCC enhanced with discrete steel fibers (DSF) and monofilament polypropylene fibers (PPF). Experimental results show that using DSF at a volume fraction of 0.50% yields marked improvements in the mechanical properties of HP-SCC: the compressive strength, splitting tensile strength, flexural strength, and elastic modulus improved by about 65.7%, 70.5%, 41.7%, and 80.3%, respectively, at 28 days of age.
Background. Dental implantation has become a standard procedure with high success rates, relying on achieving osseointegration between the implant surface and the surrounding bone tissue. Polyether ether ketone (PEEK) is a promising alternative to traditional dental implant materials such as titanium, but its osseointegration capability is limited by its hydrophobic nature and low surface roughness. Objective. The aim of the study is to increase the surface roughness and hydrophilicity of PEEK by treating the surface with piranha solution and then coating it with epigallocatechin-3-gallate (EGCG) using the electrospraying technique. Materials and Methods. The study includes four groups intended to investigate the effect of piranha …
The current study was performed to detect and quantify epicatechin in two tea samples of Camellia sinensis (black and green tea) by thin-layer chromatography (TLC) and high-performance liquid chromatography (HPLC). Epicatechin was extracted from black and green tea by two different methods, maceration (cold extraction) and decoction (hot extraction), each using three different solvents (absolute ethanol, 50% aqueous ethanol, and water) at room temperature and under direct heat, respectively. The crude extracts of the two tea samples obtained from the two methods were fractionated using two solvents of different polarity (chloroform and …
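HPLC quantification of this kind typically relies on an external-standard calibration curve relating peak area to known concentrations. The sketch below illustrates the arithmetic with entirely hypothetical numbers; none of the areas or concentrations come from the study:

```python
import numpy as np

# Hypothetical calibration standards: epicatechin concentration (ug/mL) vs peak area
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([120.0, 245.0, 610.0, 1220.0, 2450.0])

slope, intercept = np.polyfit(conc, area, 1)  # linear fit: area = slope*conc + intercept
r2 = np.corrcoef(conc, area)[0, 1] ** 2       # linearity check

sample_area = 830.0                           # placeholder peak area from a tea extract
sample_conc = (sample_area - intercept) / slope
print(f"R^2 = {r2:.4f}; epicatechin ~ {sample_conc:.1f} ug/mL")
```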