A SPOT panchromatic satellite image was employed to study the relationship between ground and satellite data. The image's digital numbers (DN, with values ranging from 0 to 255) must first be converted to absolute radiance values through calibration equations and then to spectral reflectance values. In this study, the environmental effect of discharging sewage pollutants (industrial and domestic) into the Tigris river at Mosul was monitored. These pollutants mostly affect physical properties, especially color and turbidity, which alter the spectral reflectance of the river water and can therefore be detected with remote sensing techniques. The contaminated areas within the river, identified by differences in reflectance values, were isolated and marked. Field measurements made with a spectrometer showed acceptable agreement with the satellite data, given the time difference between the two sets of measurements. The satellite imagery analysis program ERDAS version 8.4 was used to determine the spectral reflectance values in the satellite image, and a geographic information system (ARC INFO) was used to draw a photo map of the study area and to mark the specific reflectance-measurement sites, which represent areas near the pollution sources as well as other locations along the river.
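The abstract does not spell out the conversion equations; a common form for SPOT data converts DN to at-sensor radiance using the absolute calibration gain from the image header, and then to top-of-atmosphere reflectance. A minimal sketch, in which the gain, solar irradiance (ESUN), sun elevation, and Earth-Sun distance values are placeholders, not values from the study:

```python
import math

def dn_to_radiance(dn, gain):
    """Convert a SPOT digital number (0-255) to at-sensor radiance.

    SPOT products are commonly calibrated as L = DN / gain, where the
    absolute calibration gain is taken from the image header.
    """
    return dn / gain

def radiance_to_reflectance(radiance, esun, sun_elev_deg, d=1.0):
    """Convert at-sensor radiance to top-of-atmosphere reflectance:
    rho = pi * L * d^2 / (ESUN * cos(theta_s)), with theta_s the solar
    zenith angle and d the Earth-Sun distance in astronomical units.
    """
    theta_s = math.radians(90.0 - sun_elev_deg)  # zenith from elevation
    return math.pi * radiance * d ** 2 / (esun * math.cos(theta_s))

# Placeholder values for illustration only; real values come from the
# image metadata and the sensor calibration documents.
L = dn_to_radiance(dn=128, gain=1.9)
rho = radiance_to_reflectance(L, esun=1680.0, sun_elev_deg=55.0)
print(f"radiance = {L:.2f} W/m^2/sr/um, reflectance = {rho:.3f}")
```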
The region-based association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, instead of individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. To tackle the problem of the sparse distribution, a two-stage approach was proposed in the literature: in the first stage, haplotypes are computationally inferred from genotypes, followed by a haplotype coclassification; in the second stage, the association analysis is performed on the inferred haplotype groups. If a haplotype is unevenly distributed between the case and control samples, this haplotype is labeled …
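As an illustration of the second stage, here is a minimal sketch under assumed data: the counts of one inferred haplotype group are compared between cases and controls with a chi-square test on a 2x2 contingency table. The counts and the choice of test are illustrative, not the paper's exact procedure:

```python
from scipy.stats import chi2_contingency

# rows: carries the haplotype group / does not
# columns: cases / controls (made-up counts for illustration)
table = [[120, 80],
         [380, 420]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```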
This research compared two methods for estimating the four parameters of the compound exponential Weibull-Poisson distribution: the maximum likelihood method and the Downhill Simplex algorithm. Two data cases were considered: the first assumed the original (uncontaminated) data, while the second assumed data contamination. Simulation experiments were conducted for different sample sizes, different initial parameter values, and different levels of contamination. The Downhill Simplex algorithm was found to be the best method for estimating the parameters, the probability function, and the reliability function of the compound distribution for both natural and contaminated data.
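To illustrate the optimization pattern, here is a minimal sketch of likelihood maximization with the downhill simplex (Nelder-Mead) method via scipy. Because the abstract does not give the four-parameter compound exponential Weibull-Poisson density, a two-parameter Weibull stands in:

```python
import numpy as np
from scipy import stats, optimize

# Synthetic sample from a known Weibull, standing in for real data.
rng = np.random.default_rng(0)
data = stats.weibull_min.rvs(c=1.5, scale=2.0, size=200, random_state=rng)

def neg_log_likelihood(params, x):
    shape, scale = params
    if shape <= 0 or scale <= 0:  # keep the simplex in the valid region
        return np.inf
    return -np.sum(stats.weibull_min.logpdf(x, c=shape, scale=scale))

# Nelder-Mead is the downhill simplex method: derivative-free, driven
# only by function values, as in the abstract's second estimator.
result = optimize.minimize(neg_log_likelihood, x0=[1.0, 1.0],
                           args=(data,), method="Nelder-Mead")
print("estimated shape, scale:", result.x)
```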
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have only small or inadequate datasets for training DL frameworks. Manual labeling is usually needed to provide labeled data, and it typically requires human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is usually fed a significant amount of labeled data in order to learn representations automatically. Ultimately, more data generally yields a better DL model, although performance also depends on the application. This issue is the main barrier for …
Quality control of well logs has always been an important objective in reservoir studies because of the key role well logs play as input data. This study aims to perform quality control on well-log data for two wells in the Yamama Formation in a southern Iraqi field, to ensure and enhance the accuracy of the measurements. First, the before- and after-survey calibration data are used as an initial evaluation of the quality of the density log in well R-1. Then, depth matching is applied to align the depths of all logs in each well. After that, the main and repeat sections are compared to check repeatability. Finally, all uncorrected logs are environmentally corrected to remove the effects of the borehole conditions.
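The abstract does not say how the depth matching is performed; one common approach estimates the depth shift between a reference log and a mismatched log by maximizing their cross-correlation over candidate lags. A minimal sketch on synthetic curves, where the shift and sampling interval are made up:

```python
import numpy as np

def estimate_depth_shift(reference, log, max_lag):
    """Return the lag (in samples) at which the correlation between the
    two logs peaks, searched over [-max_lag, max_lag]."""
    def corr_at(lag):
        if lag >= 0:
            a, b = reference[lag:], log[:len(log) - lag]
        else:
            a, b = reference[:lag], log[-lag:]
        return np.corrcoef(a, b)[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr_at)

# Synthetic gamma-ray-like curves: the second log is the first one
# shifted by 5 samples (e.g., 0.5 m at a 0.1 m sampling interval).
rng = np.random.default_rng(1)
ref = np.cumsum(rng.normal(size=500))
shifted = np.roll(ref, 5) + rng.normal(scale=0.1, size=500)

print("estimated shift (samples):", estimate_depth_shift(ref, shifted, 20))
```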
Malicious software (malware) performs malicious functions that compromise a computer system's security. Many methods have been developed to improve the security of computer system resources, among them firewalls, encryption, and Intrusion Detection Systems (IDS). An IDS can detect newly appearing, unrecognized attack attempts and raise an early alarm to inform the system about the suspicious intrusion attempt. This paper proposes a hybrid IDS for detecting intrusions, especially malware, that considers both network-packet and host features. The hybrid IDS is designed using Data Mining (DM) classification methods, chosen for their ability to detect new, previously unseen intrusions accurately and automatically. It uses both anomaly and misuse detection …
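As a sketch of the classification stage, the following trains a decision tree, one common DM classifier; the paper's actual classifier and feature set are not specified here, so the feature table mixing packet-level and host-level features is entirely illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
# Toy feature columns: packet_size, connection_duration, failed_logins.
X = rng.normal(size=(1000, 3))
y = (X[:, 2] > 1.0).astype(int)  # toy rule standing in for real labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit the classifier on labeled traffic/host records, then score it on
# the held-out split.
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```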
Anomaly detection is still a difficult task. To address this problem, we propose to strengthen the DBSCAN algorithm by converting all data into a graph concept frame (CFG). As is well known, DBSCAN groups data points of the same kind into clusters, while points that fall outside the behavior of any cluster are treated as noise or anomalies. DBSCAN can thus detect abnormal points that lie beyond a certain distance threshold (extreme values). However, anomalies are not only points that are unusual or far from a specific group; there is also data that does not occur repeatedly but is still considered abnormal relative to the known groups. The analysis showed that DBSCAN using the …
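The noise-labeling behavior described above can be seen with scikit-learn's DBSCAN, which assigns the label -1 to points that belong to no dense cluster; a minimal sketch on synthetic data, with illustrative eps and min_samples values:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
# One dense cluster plus a handful of scattered points.
cluster = rng.normal(loc=0.0, scale=0.3, size=(200, 2))
outliers = rng.uniform(low=-4, high=4, size=(10, 2))
X = np.vstack([cluster, outliers])

# Points unreachable from any dense region get the noise label -1,
# which is how DBSCAN flags anomalies.
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)
anomalies = X[labels == -1]
print(f"{len(anomalies)} points flagged as noise/anomalies")
```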