Most medical datasets suffer from missing data, due to the expense of some tests or to human error while recording them. This issue degrades the performance of machine learning models because the values of some features are missing. Therefore, there is a need for specific methods to impute these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian Diabetes Disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, namely support vector machine (SVM), K-nearest neighbour (KNN), and naïve Bayesian classifier (NBC), was enhanced compared with the dataset before applying the proposed method. Moreover, the results indicated that ISSA performed better than statistical imputation techniques such as deleting the samples with missing values or replacing the missing values with zeros, the mean, or random values.
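As a rough illustration of the metaheuristic-imputation idea, the minimal sketch below treats the missing entries as decision variables and uses the standard SSA position updates to search for fill-in values that maximise the cross-validated accuracy of a KNN classifier. The fitness function, bounds, and parameter values are illustrative assumptions, not the exact ISSA formulation.

```python
# Hedged sketch: SSA searching for imputation values that maximise
# cross-validated KNN accuracy (illustrative, not the exact ISSA).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def impute_with_ssa(X, y, missing_mask, n_salps=20, n_iter=50):
    lb, ub = np.nanmin(X, axis=0), np.nanmax(X, axis=0)    # per-feature bounds
    idx = np.argwhere(missing_mask)                        # positions to fill
    dim = len(idx)
    feat = idx[:, 1]                                       # feature index of each gap
    # each salp is one candidate vector of fill-in values
    pop = rng.uniform(lb[feat], ub[feat], size=(n_salps, dim))

    def fitness(vec):                                      # assumed fitness: CV accuracy
        Xf = X.copy()
        Xf[missing_mask] = vec
        return cross_val_score(KNeighborsClassifier(5), Xf, y, cv=3).mean()

    scores = np.array([fitness(p) for p in pop])
    food = pop[scores.argmax()].copy()                     # best solution so far

    for t in range(1, n_iter + 1):
        c1 = 2 * np.exp(-(4 * t / n_iter) ** 2)            # standard SSA coefficient
        for i in range(n_salps):
            if i == 0:                                     # leader salp moves around the food
                c2, c3 = rng.random(dim), rng.random(dim)
                step = c1 * ((ub[feat] - lb[feat]) * c2 + lb[feat])
                pop[i] = np.where(c3 < 0.5, food + step, food - step)
            else:                                          # follower salps chain behind
                pop[i] = 0.5 * (pop[i] + pop[i - 1])
            pop[i] = np.clip(pop[i], lb[feat], ub[feat])
        scores = np.array([fitness(p) for p in pop])
        if scores.max() > fitness(food):
            food = pop[scores.argmax()].copy()

    X_imputed = X.copy()
    X_imputed[missing_mask] = food
    return X_imputed
```

In practice, `missing_mask` could simply be `np.isnan(X)` after recoding physiologically impossible zero readings in the PIDD features as NaN.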
In the present research, a crane frame has been investigated using the finite element method. Damage is simulated by reducing the stiffness of selected elements by 10% and 20% at the mid-span of the vertical column of the crane frame. A cracked beam with a single-edge, non-propagating crack has been used. Six damage cases are modelled for the crane frame by introducing cracked elements at different locations with crack-depth-to-beam-height ratios (a/h) of 0.1 and 0.2. A FEM program coded in MATLAB 6.5 was used to run the numerical simulation of the damage scenarios. The results showed a decrease in the five natural frequencies relative to the undamaged beam, which means …
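A minimal sketch of the damage-simulation idea (in Python rather than the MATLAB 6.5 code used in the paper) is given below: a beam is assembled from standard 2-node Euler-Bernoulli elements, the flexural stiffness of a mid-span element is reduced by 10% or 20% to mimic the crack, and the natural frequencies come from the generalised eigenvalue problem. The geometry, section, and boundary conditions are arbitrary placeholders, not the paper's crane frame.

```python
# Hedged sketch: natural frequencies of a beam with one element's stiffness
# reduced to represent a crack (placeholder properties, cantilever support).
import numpy as np
from scipy.linalg import eigh

E, I, rho, A, L_total, n_el = 210e9, 8.3e-6, 7850.0, 0.01, 3.0, 10
Le = L_total / n_el

def beam_matrices(EI):
    k = EI / Le**3 * np.array([[ 12,    6*Le,    -12,    6*Le],
                               [ 6*Le,  4*Le**2, -6*Le,  2*Le**2],
                               [-12,   -6*Le,     12,   -6*Le],
                               [ 6*Le,  2*Le**2, -6*Le,  4*Le**2]])
    m = rho*A*Le/420 * np.array([[156,    22*Le,    54,   -13*Le],
                                 [ 22*Le,  4*Le**2, 13*Le, -3*Le**2],
                                 [ 54,    13*Le,   156,   -22*Le],
                                 [-13*Le, -3*Le**2, -22*Le, 4*Le**2]])
    return k, m

def natural_frequencies(stiffness_reduction=0.0, damaged_el=n_el // 2):
    ndof = 2 * (n_el + 1)
    K, M = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
    for e in range(n_el):
        EI = E * I * (1 - stiffness_reduction if e == damaged_el else 1.0)
        k, m = beam_matrices(EI)
        dofs = slice(2*e, 2*e + 4)
        K[dofs, dofs] += k
        M[dofs, dofs] += m
    free = np.arange(2, ndof)                 # clamp the first node (cantilever)
    w2, _ = eigh(K[np.ix_(free, free)], M[np.ix_(free, free)])
    return np.sqrt(w2[:5]) / (2*np.pi)        # first five natural frequencies in Hz

print(natural_frequencies(0.0))               # undamaged reference
print(natural_frequencies(0.10))              # 10 % stiffness loss at mid-span
print(natural_frequencies(0.20))              # 20 % stiffness loss at mid-span
```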
Decision making is a vital and important activity in operations research, engineering, administrative science, and economics for any industrial or service company or organization, because it is the core of the management process and improves its performance. The research covers the decision-making process when the objective function is a fractional function, solving fractional programming models by several fractional programming methods together with the goal programming method, aided by the WinQSB software. The results explain the effect of using the goal programming method on the decision-making process when the objective function is fractional.
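Although the paper solves its models with the WinQSB package, the core of linear-fractional programming can be sketched in a few lines: the fractional objective is linearised with the Charnes-Cooper transformation and handed to an ordinary LP solver. All coefficients below are made-up placeholders.

```python
# Hedged sketch: maximising (c'x + c0) / (d'x + d0) subject to Ax <= b, x >= 0
# via the Charnes-Cooper transformation (y = t*x, t = 1 / (d'x + d0)).
import numpy as np
from scipy.optimize import linprog

c, c0 = np.array([3.0, 2.0]), 1.0           # numerator coefficients (placeholders)
d, d0 = np.array([1.0, 4.0]), 2.0           # denominator coefficients (placeholders)
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([10.0, 15.0])

n = len(c)
# Variables of the transformed LP: [y_1, ..., y_n, t]
obj = -np.append(c, c0)                     # linprog minimises, so negate
A_ub = np.hstack([A, -b.reshape(-1, 1)])    # A y - b t <= 0
b_ub = np.zeros(len(b))
A_eq = np.append(d, d0).reshape(1, -1)      # d'y + d0 t = 1
b_eq = np.array([1.0])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
y, t = res.x[:n], res.x[n]
x = y / t                                   # recover the original variables
print("x =", x, "objective =", (c @ x + c0) / (d @ x + d0))
```

In a goal-programming variant, the linearised objective would instead be rewritten as a goal constraint with deviation variables, and the sum of the unwanted deviations would be minimised.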
In this paper, a method based on the modified Adomian decomposition method (MADM) is presented for solving seventh-order integro-differential equations. The distinctive feature of the method is that it can be used to find the analytic solution without transformation of the boundary value problem. To test the efficiency of the presented method, two examples are solved by the proposed method.
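The decomposition idea behind MADM can be illustrated on a much simpler problem than the seventh-order equations treated in the paper. The sketch below applies the basic Adomian recursion to y' = y, y(0) = 1: each component is the integral of the previous one, and the partial sums converge to exp(x). This is a generic illustration of the recursion, not the paper's modified scheme.

```python
# Hedged sketch of the Adomian recursion on y' = y, y(0) = 1
# (a deliberately simple stand-in for the seventh-order problems in the paper).
import sympy as sp

x, t = sp.symbols("x t")
terms = [sp.Integer(1)]                    # y_0 comes from the initial condition
for n in range(7):                         # y_{n+1}(x) = integral_0^x y_n(t) dt
    terms.append(sp.integrate(terms[-1].subs(x, t), (t, 0, x)))

partial_sum = sp.expand(sum(terms))
print(partial_sum)                         # 1 + x + x**2/2 + ... (Taylor series of exp(x))
print(sp.simplify(partial_sum - sp.exp(x).series(x, 0, 8).removeO()))  # 0
```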
In this paper, the collocation method is used to solve ordinary differential equations with retarded arguments; some examples are presented in order to illustrate this approach.
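A minimal sketch of the collocation idea for a retarded argument is given below, on the assumed test problem y'(x) = -y(x - 1) over [0, 1] with history y(x) = exp(x) for x <= 0: the solution is approximated by a polynomial whose derivative matches the right-hand side at a set of collocation points, and on this first interval the delayed argument always falls inside the known history. The equation and polynomial degree are illustrative choices, not those of the paper.

```python
# Hedged sketch: polynomial collocation on the first interval of a delay ODE
#   y'(x) = -y(x - 1) on [0, 1],  with history y(x) = exp(x) for x <= 0.
import numpy as np

degree = 5
history = np.exp                                    # known solution for x <= 0
nodes = np.linspace(0.0, 1.0, degree)               # collocation points for y'

# Unknowns: coefficients a_0..a_degree of p(x) = sum_i a_i x**i
A = np.zeros((degree + 1, degree + 1))
rhs = np.zeros(degree + 1)

A[0, 0] = 1.0                                       # p(0) = y(0) = history(0)
rhs[0] = history(0.0)
for r, xj in enumerate(nodes, start=1):             # p'(x_j) = -y(x_j - 1)
    for i in range(1, degree + 1):
        A[r, i] = i * xj ** (i - 1)
    rhs[r] = -history(xj - 1.0)                     # retarded argument hits the history

a = np.linalg.solve(A, rhs)
p = np.polynomial.Polynomial(a)

xs = np.linspace(0, 1, 5)
exact = 1.0 + np.exp(-1.0) - np.exp(xs - 1.0)       # exact solution on [0, 1]
print(np.max(np.abs(p(xs) - exact)))                # small collocation error
```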
Characterization of a heterogeneous reservoir is a complex task of representing and evaluating petrophysical properties, and the porosity-permeability relationships within the framework of hydraulic flow units are used to estimate permeability in uncored wells. Flow unit or hydraulic flow unit (HFU) techniques divide the reservoir laterally and vertically into zones that can be managed and that control fluid flow within each flow unit, each differing considerably from the other flow units throughout the reservoir. Each flow unit can be distinguished by applying the relationships of the flow zone indicator (FZI) method. Supporting the relationship between porosity and permeability by using the flow zone indicator is ca…
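As a hedged illustration of the FZI workflow, the sketch below computes the reservoir quality index, normalised porosity, and flow zone indicator from core porosity-permeability data, groups samples into flow units, and predicts permeability at an "uncored" depth from log porosity and a unit's mean FZI. The core and log values are placeholders invented for demonstration.

```python
# Hedged sketch of the flow-zone-indicator (FZI) calculation
#   RQI = 0.0314 * sqrt(k / phi),  phi_z = phi / (1 - phi),  FZI = RQI / phi_z
# with k in millidarcy and phi as a fraction (all input data are placeholders).
import numpy as np

phi = np.array([0.12, 0.18, 0.22, 0.15, 0.20])   # core porosity (fraction)
k   = np.array([5.0, 60.0, 250.0, 15.0, 120.0])  # core permeability (mD)

rqi   = 0.0314 * np.sqrt(k / phi)                # reservoir quality index (um)
phi_z = phi / (1.0 - phi)                        # normalised porosity index
fzi   = rqi / phi_z                              # flow zone indicator (um)

# Samples with similar FZI belong to the same hydraulic flow unit; a simple
# grouping here rounds log10(FZI) (cluster analysis would normally be used).
hfu = np.round(np.log10(fzi), 1)

# Permeability prediction in an uncored interval from log porosity and the
# mean FZI of its flow unit:  k = 1014 * FZI**2 * phi**3 / (1 - phi)**2
fzi_mean = fzi.mean()
phi_log = 0.17                                   # porosity from well logs (assumed)
k_pred = 1014.0 * fzi_mean**2 * phi_log**3 / (1.0 - phi_log)**2
print(fzi, hfu, k_pred)
```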
The research aimed at measuring the compatibility of big data with the organizational ambidexterity dimensions of the Asia cell mobile telecommunications company in Iraq, in order to determine the possibility of adopting the big data triple as an approach to achieving organizational ambidexterity.
The study adopted the descriptive analytical approach to collect and analyse the data gathered by a questionnaire tool developed on the Likert scale. After a comprehensive review of the literature related to the two basic study dimensions, the data were subjected to many statistical treatments in accordance with res…
In this research, several estimators of the hazard function are introduced. These estimators are closely related to the hazard function through one of the nonparametric methods, namely the kernel function for censored data, with varying bandwidth and boundary kernels. Two types of bandwidth are used: local bandwidth and global bandwidth. Moreover, four types of boundary kernel are used, namely rectangular, Epanechnikov, biquadratic, and triquadratic, and the proposed function was employed with all kernel functions. Two different simulation techniques are also used in two experiments to compare these estimators. In most of the cases, the results have proved that the local bandwidth is the best for all types of boundary kernel functions.
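A minimal sketch of a kernel-smoothed hazard estimator for right-censored data is given below: Nelson-Aalen increments are smoothed with an Epanechnikov kernel, and the bandwidth may be a single global value or a local, observation-dependent one. This is a generic illustration under those assumptions, not the boundary-corrected estimator proposed in the research.

```python
# Hedged sketch: kernel-smoothed hazard for right-censored data
# ('global' = one bandwidth, 'local' = one bandwidth per observation).
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def kernel_hazard(t_grid, times, events, bandwidth):
    """Smooth the Nelson-Aalen increments dN_i / Y(t_i) with a kernel.

    times: observed times, events: 1 = event, 0 = censored,
    bandwidth: scalar (global) or array aligned with sorted times (local)."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)                    # Y(t_i) for the sorted times
    increments = events / at_risk                 # Nelson-Aalen jump at each event
    b = np.broadcast_to(np.asarray(bandwidth, dtype=float), (n,))
    # hazard(t) = sum_i K((t - t_i) / b_i) * dN_i / (b_i * Y(t_i))
    u = (t_grid[:, None] - times[None, :]) / b[None, :]
    return (epanechnikov(u) * increments[None, :] / b[None, :]).sum(axis=1)

rng = np.random.default_rng(1)
t_true = rng.exponential(1.0, 200)                # placeholder survival times
cens = rng.exponential(2.0, 200)
times = np.minimum(t_true, cens)
events = (t_true <= cens).astype(float)

grid = np.linspace(0.05, 2.0, 50)
h_global = kernel_hazard(grid, times, events, bandwidth=0.3)
h_local = kernel_hazard(grid, times, events,
                        bandwidth=0.2 + 0.3 * np.sort(times))  # wider in the tail
print(h_global[:5], h_local[:5])
```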
This paper proposes two hybrid feature subset selection approaches based on the combination (union or intersection) of both supervised and unsupervised filter approaches before using a wrapper, aiming to obtain low-dimensional features with high accuracy and interpretability and low time consumption. Experiments with the proposed hybrid approaches have been conducted on seven high-dimensional feature datasets. The classifiers adopted are support vector machine (SVM), linear discriminant analysis (LDA), and K-nearest neighbour (KNN). Experimental results have demonstrated the advantages and usefulness of the proposed methods in feature subset selection in high-dimensional space in terms of the number of selected features and time spent.
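A hedged sketch of the filter-then-wrapper idea is shown below: a supervised filter (mutual information) and an unsupervised filter (variance) each rank the features, their top-k sets are combined by union or intersection, and a sequential wrapper with an SVM refines the reduced set. The particular filters, k, dataset, and classifier settings are illustrative assumptions rather than the exact configuration in the paper.

```python
# Hedged sketch: hybrid (filter union/intersection + wrapper) feature selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, SequentialFeatureSelector
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           random_state=0)
k = 20

# Supervised filter: top-k features by mutual information with the class label.
mi = mutual_info_classif(X, y, random_state=0)
sup = set(np.argsort(mi)[-k:])

# Unsupervised filter: top-k features by variance (labels not used).
var = X.var(axis=0)
unsup = set(np.argsort(var)[-k:])

for name, combined in (("union", sup | unsup), ("intersection", sup & unsup)):
    cols = sorted(combined)
    if len(cols) < 2:                      # the intersection can be very small
        continue
    # Wrapper: forward sequential selection with an SVM on the reduced set.
    sfs = SequentialFeatureSelector(SVC(kernel="linear"),
                                    n_features_to_select=min(5, len(cols) - 1),
                                    direction="forward", cv=3)
    sfs.fit(X[:, cols], y)
    selected = np.array(cols)[sfs.get_support()]
    print(name, "->", selected)
```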