This article describes how to predict different types of multiple reflections in pre-stack seismic data. The characteristics of multiple reflections can be expressed as a combination of the characteristics of primary reflections. Multiples always have lower velocities than primaries, which is the basis for separating the two during normal moveout (NMO) correction. The muting procedure is applied in the time-velocity analysis domain, and the semblance plot is used to diagnose the presence of multiples and to judge the muting dimensions. This processing procedure is used to eliminate internal multiples from real 2D seismic data from southern Iraq in two stages: the first is conventional NMO correction with automatic velocity picking and stacking, and the second is muting. Many common depth point (CDP) gathers are tested to select the proper muting dimensions; auto-picking on the muted semblance is then performed for the whole 2D dataset, and the NMO-corrected data are stacked. Differences between the two stages are calculated, which greatly helps to locate the eliminated multiples within the sedimentary succession. This reduces the risk of interpreting these events as primary reflectors, especially within deep thin layers. The Madagascar open-source package is used in these processing steps; it is efficient, accurate, and makes it easy to modify any part of the Python code used in the two stages.
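A minimal sketch of how such a two-stage flow might look in Madagascar's Python (SCons) interface. The input name cmps.rsf, the scanning parameters, and the velocity cutoff used for muting are illustrative assumptions, not the paper's actual settings:

```python
# Hypothetical SConstruct sketch of the two-stage flow described above.
# File names and parameter values (v0, dv, nv, the 1800 m/s mute cutoff)
# are illustrative assumptions only.
from rsf.proj import *

# Stage 1: conventional velocity analysis, auto-picking, NMO, and stack.
Flow('semb', 'cmps',
     'vscan semblance=y half=y v0=1500 dv=25 nv=100')   # semblance panels
Flow('vpick', 'semb', 'pick rect1=50 rect2=20')          # auto-pick velocities
Flow('nmo1', 'cmps vpick', 'nmo velocity=${SOURCES[1]}')
Flow('stack1', 'nmo1', 'stack')

# Stage 2: zero out the low-velocity (multiple) region of the semblance
# before re-picking, then NMO and stack again.
Flow('sembmute', 'semb', 'cut max2=1800')                # trial mute dimension
Flow('vpick2', 'sembmute', 'pick rect1=50 rect2=20')
Flow('nmo2', 'cmps vpick2', 'nmo velocity=${SOURCES[1]}')
Flow('stack2', 'nmo2', 'stack')

# Difference section used to locate the eliminated multiples.
Flow('diff', 'stack1 stack2', 'add scale=1,-1 ${SOURCES[1]}')

End()
```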
The advances in Information and Communication Technology (ICT) over the previous decades have significantly changed the way people transmit and store their information over the Internet and networks, so one of the main challenges is to keep this information safe against attacks. Many researchers and institutions have realized the importance and benefits of cryptography in achieving efficient and effective secure communication. This work adopts a novel technique for a secure data cryptosystem based on chaos theory. The proposed algorithm generates a 2-dimensional key matrix, with the same dimensions as the original image, containing random numbers obtained from the 1-dimensional logistic chaotic map for given con…
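A minimal sketch of this idea, assuming XOR as the mixing operation and illustrative values for the map's initial condition x0 and control parameter r (neither is specified above):

```python
import numpy as np

def logistic_key(shape, x0=0.6137, r=3.9999):
    """Generate a 2-D key matrix of 8-bit values from the 1-D logistic
    map x_{n+1} = r * x_n * (1 - x_n). x0 and r are illustrative choices;
    r close to 4 keeps the map in its chaotic regime."""
    n = shape[0] * shape[1]
    x = x0
    key = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        key[i] = int(x * 256) % 256   # map [0, 1) onto a byte
    return key.reshape(shape)

def encrypt(image, x0=0.6137, r=3.9999):
    """XOR the image with the chaotic key matrix; applying encrypt twice
    with the same (x0, r) recovers the original image."""
    return image ^ logistic_key(image.shape, x0, r)

# Toy usage on a random 4x4 "image".
img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
cipher = encrypt(img)
assert np.array_equal(encrypt(cipher), img)
```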
Most medical datasets suffer from missing data, due to the expense of some tests or human error while recording them. This issue affects the performance of machine learning models because the values of some features will be missing. Therefore, specific methods are needed for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian diabetes disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, which are support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayes…
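A sketch of the standard SSA update equations applied to searching for imputed values. The fitness function here is a placeholder assumption; the actual ISSA objective (e.g. downstream classifier performance) is not given above:

```python
import numpy as np

def ssa_impute(fitness, n_missing, lb, ub, n_salps=30, n_iter=100, rng=None):
    """Minimal salp swarm algorithm (SSA) sketch: each salp is a candidate
    vector of imputed values for the n_missing cells, and the swarm
    minimizes a user-supplied fitness (a placeholder assumption here)."""
    rng = rng or np.random.default_rng(0)
    pop = rng.uniform(lb, ub, (n_salps, n_missing))
    fit = np.array([fitness(s) for s in pop])
    food, food_fit = pop[fit.argmin()].copy(), fit.min()   # best so far
    for l in range(1, n_iter + 1):
        c1 = 2.0 * np.exp(-(4.0 * l / n_iter) ** 2)        # exploration decay
        for i in range(n_salps):
            if i == 0:                                      # leader update
                c2 = rng.random(n_missing)
                c3 = rng.random(n_missing)
                step = c1 * ((ub - lb) * c2 + lb)
                pop[i] = np.where(c3 < 0.5, food + step, food - step)
            else:                                           # follower update
                pop[i] = 0.5 * (pop[i] + pop[i - 1])
            pop[i] = np.clip(pop[i], lb, ub)
            f = fitness(pop[i])
            if f < food_fit:
                food, food_fit = pop[i].copy(), f
    return food  # best imputed values found

# Toy usage: impute 3 missing values so they stay close to a column mean of 120.
best = ssa_impute(lambda v: np.sum((v - 120.0) ** 2), 3, lb=0.0, ub=200.0)
```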
Reliable data transfer and energy efficiency are essential considerations for network performance in resource-constrained underwater environments. One efficient approach for data routing in underwater wireless sensor networks (UWSNs) is clustering, in which data packets are transferred from sensor nodes to the cluster head (CH) and then forwarded to a sink node in a single-hop or multi-hop manner, which can increase energy depletion at the CH compared to other nodes. While several mechanisms have been proposed for cluster formation and CH selection to ensure efficient delivery of data packets, less attention has been given to massive data co…
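A toy sketch of energy-aware CH selection of the kind described. The scoring rule and the 0.7/0.3 weighting are assumptions for illustration, not the scheme evaluated above:

```python
import math

def select_cluster_head(nodes, sink, w_energy=0.7, w_dist=0.3):
    """Toy CH selection for a UWSN cluster: favor nodes with high residual
    energy and short (normalized) distance to the sink. The weighting is
    an illustrative assumption."""
    dmax = max(math.dist(n['pos'], sink) for n in nodes)
    def score(n):
        return w_energy * n['energy'] - w_dist * math.dist(n['pos'], sink) / dmax
    return max(nodes, key=score)

# Hypothetical nodes with residual energy in [0, 1] and 3-D positions (m).
nodes = [
    {'id': 1, 'energy': 0.9, 'pos': (0, 0, -50)},
    {'id': 2, 'energy': 0.4, 'pos': (10, 5, -40)},
    {'id': 3, 'energy': 0.8, 'pos': (3, 2, -20)},
]
ch = select_cluster_head(nodes, sink=(0, 0, 0))
```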
In this study, we compare the LASSO and SCAD methods, two penalized techniques for dealing with partial quantile regression models. The Nadaraya-Watson kernel estimator was used to estimate the nonparametric part; in addition, the rule-of-thumb method was used to estimate the smoothing bandwidth (h). The penalty methods proved efficient in estimating the regression coefficients, but the SCAD method was the best according to the mean squared error (MSE) criterion after the missing data were estimated using the mean imputation method.
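A sketch of the nonparametric ingredients named above: the Nadaraya-Watson estimator with a Gaussian kernel and one common form of the rule of thumb, Silverman's h = 1.06 * sigma * n^(-1/5). This illustrates only the kernel component, not the full penalized partial quantile regression; the data are synthetic:

```python
import numpy as np

def rule_of_thumb_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth h = 1.06 * sigma * n^(-1/5)."""
    return 1.06 * np.std(x, ddof=1) * len(x) ** (-0.2)

def nadaraya_watson(x, y, x0, h):
    """Nadaraya-Watson estimate m(x0) = sum(K((x0-xi)/h) * yi) / sum(K(...)),
    with a Gaussian kernel K."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
h = rule_of_thumb_bandwidth(x)
m_hat = nadaraya_watson(x, y, 0.5, h)   # estimate of E[y | x = 0.5]
```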
Modern civilization increasingly relies on sustainable and eco-friendly data centers as the core hubs of intelligent computing. However, these data centers, while vital, face heightened vulnerability to hacking due to their role as convergence points for numerous network connection nodes. Recognizing and addressing this vulnerability, particularly within green data centers, is a pressing concern. This paper proposes a novel approach to mitigate this threat by leveraging swarm intelligence techniques to detect prospective and hidden compromised devices within the data center environment. The core objective is to ensure sustainable intelligent computing through a colony strategy. The research primarily focuses on the…
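A highly simplified, ant-colony-inspired sketch of such a colony strategy, under the assumption that "ants" repeatedly probe devices, deposit pheromone on suspicious behavior, and pheromone evaporates so only persistently suspicious devices accumulate high scores. All names, thresholds, and rates are illustrative, not the paper's design:

```python
import random

def colony_scan(devices, is_suspicious, n_ants=50, n_rounds=20,
                deposit=1.0, evaporation=0.1):
    """Rank devices by accumulated pheromone: each ant probes a random
    device per round; suspicious probes deposit pheromone, and all
    pheromone evaporates between rounds."""
    pheromone = {d: 0.0 for d in devices}
    for _ in range(n_rounds):
        for _ in range(n_ants):
            d = random.choice(devices)           # each ant probes one device
            if is_suspicious(d):
                pheromone[d] += deposit
        for d in devices:
            pheromone[d] *= (1.0 - evaporation)  # evaporation step
    return sorted(pheromone, key=pheromone.get, reverse=True)

# Toy usage: flag devices whose (simulated) outbound rate exceeds a threshold.
rates = {'srv-1': 10, 'srv-2': 480, 'srv-3': 15}
ranking = colony_scan(list(rates), lambda d: rates[d] > 100)
```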
In data mining, classification is a form of data analysis that can be used to extract models describing important data classes. Two well-known algorithms used in data mining classification are the Backpropagation Neural Network (BNN) and Naïve Bayes (NB). This paper investigates the performance of these two classification methods on the Car Evaluation dataset. A model was built for each algorithm and the results were compared. Our experimental results indicated that the BNN classifier yields higher accuracy than the NB classifier, but it is less efficient because it is time-consuming and difficult to analyze due to its black-box implementation.
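A sketch of such a comparison using scikit-learn stand-ins: MLPClassifier (a backpropagation-trained network) and CategoricalNB. The UCI download URL, encoding, split, and hyperparameters are assumptions, not the paper's exact setup:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import CategoricalNB
from sklearn.metrics import accuracy_score

# Car Evaluation dataset (assumed UCI location); all features are categorical.
cols = ['buying', 'maint', 'doors', 'persons', 'lug_boot', 'safety', 'class']
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data'
df = pd.read_csv(url, names=cols)

X = OrdinalEncoder().fit_transform(df[cols[:-1]])   # ordinal-encode categories
y = df['class']
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

for name, clf in [('BNN', MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)),
                  ('NB', CategoricalNB())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```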
A large number of researchers have attempted to identify the pattern of the functional relationship between fertility on one side and the economic and social characteristics of the population on the other, along with the strength of each effect. This research aims to monitor and analyze changes in the level of fertility over time and space in recent decades, to estimate fertility levels in Iraq for the period 1977-2011, and then to forecast the level of fertility in Iraq at the national level (except for the Kurdistan region) for the period 2012-2031. To achieve this goal, the Lee-Carter model was used to estimate and forecast fertility rates, as this model has often been familiar…
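For reference, a sketch of the standard Lee-Carter fit, ln f(x,t) = a_x + b_x * k_t, via SVD, with the usual random-walk-with-drift forecast of k_t. The input matrix here is synthetic, not the Iraqi data analyzed above:

```python
import numpy as np

def lee_carter(rates):
    """Fit ln f(x,t) = a_x + b_x * k_t by SVD of the centered log rates.
    `rates` is an (age groups x years) matrix of fertility rates."""
    logm = np.log(rates)
    a = logm.mean(axis=1)                   # a_x: mean log rate per age group
    U, s, Vt = np.linalg.svd(logm - a[:, None])
    b = U[:, 0] / U[:, 0].sum()             # normalize so sum(b_x) = 1
    k = s[0] * Vt[0, :] * U[:, 0].sum()     # keeps the product b_x * k_t
    return a, b, k

def forecast_k(k, horizon):
    """Random-walk-with-drift forecast of k_t, the usual Lee-Carter step."""
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return k[-1] + drift * np.arange(1, horizon + 1)

# Synthetic example: 7 age groups x 35 years (1977-2011), 20-year forecast.
rng = np.random.default_rng(7)
rates = np.exp(rng.normal(-2.0, 0.2, (7, 35)))  # positive toy fertility rates
a, b, k = lee_carter(rates)
k_future = forecast_k(k, 20)                    # covers 2012-2031
```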
The main parameter that drives oil-industry contract investment, and on which the economic feasibility study for approving a field development plan is based, is the hydrocarbon reservoir potential. Considerable qualified experience is therefore needed to correctly evaluate hydrocarbon reserves by applying different techniques at each phase of field management, collecting and using valid and representative data sources starting from the exploration phase and tuning up through the development phase. Commonly, volumetric calculation is the main technique for estimating reservoir potential using the quite limited information available at the exploration stage; in most cases, this technique yields a large reserve figure. In this study…
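The volumetric calculation mentioned above is, in its standard oilfield-unit form, OOIP (STB) = 7758 * A * h * phi * (1 - Sw) / Bo. A minimal worked sketch with illustrative inputs (not the field data discussed above):

```python
def ooip_stb(area_acres, thickness_ft, porosity, sw, bo):
    """Standard volumetric estimate of original oil in place:
    OOIP (STB) = 7758 * A * h * phi * (1 - Sw) / Bo,
    with area A in acres and net pay thickness h in feet."""
    return 7758.0 * area_acres * thickness_ft * porosity * (1.0 - sw) / bo

# Example: 2,000 acres, 50 ft net pay, 18% porosity, 30% water saturation,
# Bo = 1.2 RB/STB  ->  roughly 81 million STB in place.
n = ooip_stb(2000, 50, 0.18, 0.30, 1.2)
```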
Most of the world pays close attention to the agricultural sector, which occupies a distinguished economic position, and Iraq is no exception; it is no wonder that the focus is on this sector, especially in developing countries. Descriptive analysis of the sample tables of wheat and barley crops in Iraq showed that adherence to the principles and rules of total quality, including modern irrigation methods, has a significant impact on increasing productivity, reducing costs, and improving quality compared to traditional irrigation methods. It has therefore become necessary for agricultural economic units to adopt all the procedures and means that will help in applying the rules of total quality to promote the reality of wheat and barley cultivation in Iraq…