Most medical datasets suffer from missing data, owing to the expense of some tests or to human error while recording them. This issue degrades the performance of machine learning models because the values of some features are missing. There is therefore a need for dedicated methods to impute these missing data. In this research, the salp swarm algorithm (SSA) is used to generate and impute the missing values in the Pima Indian Diabetes Disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results show that the classification performance of three different classifiers, support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayesian classifier (NBC), is enhanced compared with the dataset before applying the proposed method. Moreover, the results indicate that ISSA performed better than statistical imputation techniques such as deleting the samples with missing values or replacing the missing values with zeros, the mean, or random values.
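The statistical baselines that ISSA is compared against (zero, mean, and random replacement) can be sketched in a few lines. The function below is a minimal illustration of those baselines only, not the paper's SSA-based method; the column-wise strategies and the tiny example matrix are assumptions for demonstration.

```python
import numpy as np

def impute(X, strategy="mean", rng=None):
    """Fill NaN entries column by column with one of the baseline
    strategies the abstract mentions: 'zero', 'mean', or 'random'
    (uniform between the column's observed min and max)."""
    X = np.asarray(X, dtype=float).copy()
    rng = rng or np.random.default_rng(0)
    for j in range(X.shape[1]):
        miss = np.isnan(X[:, j])
        if not miss.any():
            continue
        obs = X[~miss, j]          # observed values in this column
        if strategy == "zero":
            X[miss, j] = 0.0
        elif strategy == "mean":
            X[miss, j] = obs.mean()
        elif strategy == "random":
            X[miss, j] = rng.uniform(obs.min(), obs.max(), miss.sum())
        else:
            raise ValueError(strategy)
    return X

# In PIDD, physiologically impossible zeros (e.g. zero glucose) are
# commonly treated as missing before imputation.
X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [5.0, 8.0]])
print(impute(X, "mean")[0, 1])   # the NaN is replaced by (4+8)/2 = 6.0
```

A metaheuristic imputer such as ISSA would instead search for the fill-in values that optimize a downstream objective (e.g. classifier accuracy), rather than using a fixed per-column statistic.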
In low-latitude areas (latitude angle below 10°), the solar radiation entering the solar still increases as the cover slope approaches the latitude angle. However, the amount of water that condenses and then falls back toward the solar-still basin also increases in this case. Consequently, the solar-still yield is significantly decreased, and the accuracy of the prediction method is affected. This reduction in yield and prediction accuracy is inversely proportional to the time that the condensed water remains on the inner side of the condensing cover without being collected, because more drops fall back into the basin of the solar still. Different numbers of scraper motions per hour (NSM), that is
Grabisch and Labreuche have recently proposed a generalization of capacities, called bi-capacities. More recently, a new approach for studying bi-capacities through a notion of ternary-element sets was proposed by the author. In this paper, we present several results based on this approach, including the bipolar Möbius transform, the importance index, and the interaction index of bi-capacities.
Akaike’s Information Criterion (AIC) is a popular method for estimating the number of sources impinging on an array of sensors, a problem of great interest in several applications. The performance of AIC degrades under a low signal-to-noise ratio (SNR). This paper is concerned with the development and application of quadrature mirror filters (QMF) for improving the performance of AIC. A new system is proposed that estimates the number of sources by applying AIC to the outputs of a filter bank composed of QMFs. The proposed system can estimate the number of sources under low SNR.
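The AIC step itself follows the classical eigenvalue-based formulation: for each candidate model order k, it compares the geometric and arithmetic means of the p − k smallest eigenvalues of the sample covariance matrix. The sketch below shows only this eigenvalue-domain decision, not the proposed QMF filter-bank front end; the eigenvalues and snapshot count in the example are illustrative assumptions.

```python
import numpy as np

def aic_num_sources(eigvals, num_snapshots):
    """AIC estimate of the number of sources from the eigenvalues of a
    p x p sample covariance matrix built from `num_snapshots` snapshots."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]  # descending
    p = lam.size
    scores = []
    for k in range(p):                     # candidate number of sources
        tail = lam[k:]                     # the p - k smallest eigenvalues
        g = np.exp(np.mean(np.log(tail)))  # geometric mean
        a = np.mean(tail)                  # arithmetic mean
        # log-likelihood term plus 2 * (number of free parameters)
        scores.append(-2.0 * num_snapshots * (p - k) * np.log(g / a)
                      + 2.0 * k * (2 * p - k))
    return int(np.argmin(scores))

# Two strong eigenvalues above a flat noise floor -> two sources.
print(aic_num_sources([10.0, 8.0, 1.0, 1.0, 1.0, 1.0], num_snapshots=200))  # -> 2
```

Under low SNR the signal eigenvalues sink toward the noise floor, which is why the geometric/arithmetic-mean ratio becomes less discriminative and a pre-filtering front end such as the proposed QMF bank can help.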
Image compression is a serious issue in computer storage and transmission. It makes efficient use of the redundancy embedded within an image itself and, in addition, may exploit the limitations of human visual perception to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques, and the second incorporates near-lossless compression
The evaluation of the quality of construction projects is a topic that has become necessary because of the absence of quantitative standards for measuring control works and of quality-evaluation standards in construction projects. At present, evaluation depends on the experience of the workers, which leads to apparent differences in the results.
The idea of this research is to establish standards for evaluating the quality of projects in a dedicated system that depends on a quantitative scale rather than qualitative judgement, and to prepare an expert system, “Crystal”, that applies this system to enable engineers to evaluate the quality of their projects easily and more accurately.
Image compression plays an important role in reducing the size and storage of data while significantly increasing the speed of its transmission over the Internet. It has been an important research topic for several decades, and recently, with the great successes achieved by deep learning in many areas of image processing, its use in image compression has been increasing gradually. Deep neural networks have also achieved great success in processing and compressing various images of different sizes. In this paper, we present a structure for image compression based on a Convolutional AutoEncoder (CAE) for deep learning, inspired by the diversity of the human eye
The synthesis of a new substituted cobalt phthalocyanine (CoPc) was carried out using naphthalene-1,4,5,8-tetracarboxylic acid dianhydride (NDI) as the starting material, employing a dry-process method. A metal-oxide (MO) alloy of (60% Ni3O4, 40% Co3O4) was functionalized with multiwall carbon nanotubes (F-MWCNTs) to produce an (F-MWCNTs/MO) nanocomposite (E2) and mixed with CoPc to yield (F-MWCNT/CoPc/MO) (E3). These composites were investigated using different analytical and spectrophotometric methods such as 1H-NMR (0-18 ppm), FTIR spectroscopy in the range of 400-4000 cm-1, powder X-ray diffraction (PXRD, 2θ = 10-80°), Raman spectroscopy (0-4000 cm-1), and UV-Visible
In regression testing, test case prioritization (TCP) is a technique for ordering the available test cases. TCP techniques can improve fault-detection performance, which is measured by the average percentage of faults detected (APFD). History-based TCP is a family of TCP techniques that consider past execution data to prioritize test cases. The allocation of equal priority to several test cases is a common problem for most TCP techniques; however, this problem has not been explored in history-based TCP techniques. To resolve such ties, most researchers resort to random ordering of the tied test cases. This study aims to investigate equal priority in history-based TCP techniques. The first objective is to implement
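APFD, the metric mentioned above, rewards orderings that expose each fault early: with n tests, m faults, and TF_i the 1-based position of the first test that reveals fault i, APFD = 1 − (ΣTF_i)/(n·m) + 1/(2n). A minimal sketch follows; the fault-detection matrix is invented for illustration, and every fault is assumed to be caught by at least one test.

```python
def apfd(order, detects):
    """Average Percentage of Faults Detected for a test ordering.
    order   : list of test indices, in execution order
    detects : detects[t][f] is True if test t reveals fault f
    Assumes every fault is detected by at least one test in `order`.
    """
    n, m = len(order), len(detects[0])
    first_positions = 0
    for f in range(m):
        # 1-based position of the first executed test revealing fault f
        first_positions += next(i + 1 for i, t in enumerate(order)
                                if detects[t][f])
    return 1.0 - first_positions / (n * m) + 1.0 / (2 * n)

# Three tests, two faults: test 0 finds fault 0, test 1 finds fault 1.
detects = [[True, False],
           [False, True],
           [False, False]]
print(apfd([0, 1, 2], detects))  # revealing tests run first -> ~0.667
print(apfd([2, 1, 0], detects))  # revealing tests run last  -> ~0.333
```

A history-based technique would compute `order` from past verdicts; the tie-breaking problem arises when the history assigns identical scores to several tests, leaving their relative order arbitrary.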