Most medical datasets suffer from missing data, due to the expense of some tests or human error while recording them. This issue degrades the performance of machine learning models because the values of some features are missing. Therefore, dedicated methods are needed for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used to generate and impute the missing values in the Pima Indian Diabetes Disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, support vector machine (SVM), K-nearest neighbour (KNN), and naïve Bayes classifier (NBC), was enhanced compared to the dataset before applying the proposed method. Moreover, the results indicated that ISSA performed better than statistical imputation techniques such as deleting the samples with missing values or replacing the missing values with zeros, the mean, or random values.
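For readers who want the flavor of swarm-based imputation, the following is a minimal sketch (not the authors' exact ISSA): the missing entries are treated as decision variables of a standard salp swarm search, and the fitness of a candidate filling is the cross-validated accuracy of a KNN classifier on the completed data. The function names, swarm parameters, and the choice of KNN as the fitness classifier are illustrative assumptions.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(candidate, X, y, miss_idx):
    Xc = X.copy()
    Xc[miss_idx] = candidate                  # plug candidate values into the gaps
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, Xc, y, cv=3).mean()

def impute_ssa(X, y, n_salps=20, n_iter=50, rng=np.random.default_rng(0)):
    miss_idx = np.where(np.isnan(X))          # positions of missing entries
    lb = np.nanmin(X, axis=0)[miss_idx[1]]    # per-entry bounds from each column
    ub = np.nanmax(X, axis=0)[miss_idx[1]]
    dim = len(miss_idx[0])
    salps = rng.uniform(lb, ub, size=(n_salps, dim))
    scores = np.array([fitness(s, X, y, miss_idx) for s in salps])
    food, best = salps[scores.argmax()].copy(), scores.max()   # best solution so far
    for t in range(n_iter):
        c1 = 2 * np.exp(-(4 * (t + 1) / n_iter) ** 2)          # SSA exploration decay
        for i in range(n_salps):
            if i == 0:                        # leader moves around the food source
                c2, c3 = rng.uniform(size=dim), rng.uniform(size=dim)
                step = c1 * ((ub - lb) * c2 + lb)
                salps[i] = np.where(c3 < 0.5, food + step, food - step)
            else:                             # followers average with predecessor
                salps[i] = (salps[i] + salps[i - 1]) / 2
            salps[i] = np.clip(salps[i], lb, ub)
            s = fitness(salps[i], X, y, miss_idx)
            if s > best:
                food, best = salps[i].copy(), s
    Ximp = X.copy()
    Ximp[miss_idx] = food                     # return the best imputed dataset
    return Ximp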
In this research, the semiparametric Bayesian method is compared with the classical method for estimating the reliability function of three systems: a k-out-of-n system, a series system, and a parallel system. Each system consists of three components: the first represents the parametric component, whose failure times are exponentially distributed, whereas the second and third are nonparametric components, whose reliability estimates depend on the kernel method (using two approaches to estimate the bandwidth parameter h) and on the Kaplan-Meier method. To indicate the better method for estimating the system reliability function, it has been …
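As an illustration of the nonparametric side of this setup, here is a minimal sketch of the Kaplan-Meier reliability estimator on made-up failure data (the kernel-smoothed variant and the Bayesian machinery are omitted). For a series system of independent components, the system reliability is the product of the component reliabilities, e.g. exp(-λt) for the exponential component times the nonparametric estimates.

import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier reliability curve; events: 1 = failure, 0 = censored."""
    order = np.argsort(times)
    t_sorted, d_sorted = times[order], events[order]
    at_risk = len(times)
    R, curve = 1.0, []
    for ti, di in zip(t_sorted, d_sorted):
        if di:                       # each failure shrinks the survival estimate
            R *= 1.0 - 1.0 / at_risk
        curve.append((ti, R))
        at_risk -= 1                 # one fewer unit remains at risk
    return np.array(curve)

times = np.array([2.0, 3.5, 4.1, 5.0, 7.2])   # hypothetical component lifetimes
events = np.array([1, 1, 0, 1, 1])            # the third observation is censored
print(kaplan_meier(times, events))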
The study aimed to recommend a new spectrophotometric-kinetic method for the determination of carbamazepine (CABZ) in its pure and pharmaceutical forms. The proposed procedure is based on the coupling of CABZ with diazotized sulfanilic acid in basic medium to yield a colored azo dye. Factors affecting the reaction yield were studied and the conditions were optimized. The colored product was followed spectrophotometrically by monitoring its absorbance at 396 nm. Under the optimized conditions, two methods (the initial-rate and fixed-time (10 minutes) methods) were applied for constructing the calibration graphs. The graphs were linear in the concentration range of 2.0 to 18.0 µg.mL-1 for both methods. The proposed method was applied successfully in …
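As a small illustration of how such a fixed-time calibration graph is used, the sketch below fits a line to absorbance readings taken at t = 10 min and inverts it to estimate an unknown; the absorbance values are made-up placeholders, not the paper's measurements.

import numpy as np

conc = np.array([2.0, 6.0, 10.0, 14.0, 18.0])          # standards, µg/mL
abs_10min = np.array([0.11, 0.33, 0.54, 0.77, 0.98])   # hypothetical A at 396 nm, t = 10 min

slope, intercept = np.polyfit(conc, abs_10min, 1)      # linear calibration fit
r = np.corrcoef(conc, abs_10min)[0, 1]                 # correlation coefficient

def concentration(absorbance):
    """Invert the calibration line to estimate an unknown's concentration."""
    return (absorbance - intercept) / slope

print(f"A = {slope:.4f}*C + {intercept:.4f}, r = {r:.4f}")
print(f"Unknown with A = 0.45 -> {concentration(0.45):.2f} ug/mL")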
The paper presents a highly accurate power flow solution, reducing the possibility of ending at local minima, by using a real-coded genetic algorithm (RCGA) with system reduction and restoration. The proposed method is modified to reduce the total computing time by reducing the system to the generator buses, which, for any realistic system, are smaller in number, while the load buses are eliminated. Solving the power flow problem for the generator buses only, using the real-coded GA to calculate the voltage phase angles while the voltage magnitudes are specified, reduces the computation time of the solution. The system is then restored by calculating the voltages of the load buses in terms of …
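The abstract does not spell out the GA operators, so the following is only a generic real-coded GA sketch (blend crossover plus Gaussian mutation) with a stand-in objective; the paper's reduced power flow mismatch equations would replace the placeholder.

import numpy as np

rng = np.random.default_rng(1)

def mismatch(theta):
    # placeholder objective standing in for the sum of squared power mismatches
    return np.sum((np.sin(theta) - 0.3) ** 2)

def rcga(dim=3, pop_size=40, gens=200, lo=-np.pi, hi=np.pi):
    pop = rng.uniform(lo, hi, (pop_size, dim))
    for _ in range(gens):
        fit = np.array([mismatch(p) for p in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]       # truncation selection
        alpha = rng.uniform(size=(pop_size // 2, dim))
        shuffled = parents[rng.permutation(len(parents))]
        children = alpha * parents + (1 - alpha) * shuffled   # blend crossover
        children += rng.normal(0, 0.05, children.shape)       # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)
    fit = np.array([mismatch(p) for p in pop])
    return pop[fit.argmin()]                                  # best phase-angle vector

print(rcga())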
To identify and explore the factors nurses perceive as influencing their knowledge acquisition in relation to diabetes care and its management in Saudi Arabia.
Diabetes continues to pose major healthcare challenges despite advances in diabetes management. Nurses have a crucial role in diabetes care, but diabetes knowledge deficits deter effective collaboration with other healthcare providers in educating patients about diabetes self-management.
An exploratory descriptive qualitative design.
In this paper, the computational complexity is reduced using a revised version of the selected mapping (SLM) algorithm, in which a partial SLM is applied to cut the mathematical operations by around 50%. Although the peak-to-average power ratio (PAPR) reduction gain is slightly degraded, the dramatic reduction in computational complexity is an outstanding achievement. MATLAB simulation is used to evaluate the results, where the PAPR results show the capability of the proposed method.
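For context, this is a sketch of the conventional (full) SLM baseline that the partial variant economizes on: U phase-rotated copies of one OFDM symbol are generated and the lowest-PAPR copy is transmitted. The parameters N and U are illustrative.

import numpy as np

rng = np.random.default_rng(2)
N, U = 64, 8                                              # subcarriers, phase sequences

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)     # QPSK frequency-domain symbol
phases = rng.choice([1, -1, 1j, -1j], size=(U, N))        # candidate phase sequences
candidates = np.fft.ifft(phases * X, axis=1)              # U time-domain signals
paprs = np.array([papr_db(c) for c in candidates])
best = paprs.argmin()                # transmit the lowest-PAPR candidate plus its index
print(f"best candidate {best}: PAPR = {paprs[best]:.2f} dB "
      f"(worst = {paprs.max():.2f} dB)")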
The aim of this paper is to propose an efficient three-step iterative method for finding the zeros of the nonlinear equation f(x) = 0. Starting with a suitably chosen initial guess x0, the method generates a sequence of iterates converging to the root. The convergence analysis is carried out to establish its fifth order of convergence. Several examples are given to illustrate the efficiency of the proposed new method and its comparison with other methods.
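The abstract does not give the authors' update formulas, so the sketch below only shows the generic structure of a three-step Newton-type iteration (here a frozen-derivative composition); it should not be read as the paper's fifth-order scheme.

def three_step(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / df(x)          # step 1: Newton step
        z = y - f(y) / df(x)          # step 2: reuse f'(x) to save a derivative
        x_new = z - f(z) / df(x)      # step 3: one more frozen-derivative step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of f(x) = x^3 - 2x - 5 near x0 = 2 (the classical Newton test).
root = three_step(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(root)   # ~2.0945514815423265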
A numerical method (F.E.) was derived for incompressible viscoelastic materials; aging and environmental phenomena, especially the temperature effect, were considered in this method. A treatment of incompressibility was made for all permissible values of Poisson's ratio. A mechanical model represents the incompressible viscoelastic materials, so the properties can be derived using the Laplace transform technique. A comparison was made with other methods concerned with viscoelastic materials by applying the method to a cylinder of viscoelastic material surrounded by a steel casing and subjected to a constant internal pressure, as well as a comparison with another viscoelastic method for asphalt concrete pro…
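A tiny illustration of the Laplace-transform route the abstract alludes to, applied to a one-dimensional Maxwell model (spring E in series with dashpot η) under a step stress; the symbols are illustrative and the finite-element machinery is omitted.

import sympy as sp

t, s = sp.symbols("t s", positive=True)
E, eta, sigma0 = sp.symbols("E eta sigma_0", positive=True)

# Maxwell law in the Laplace domain: eps(s) = sigma(s) * (1/E + 1/(eta*s))
sigma_s = sigma0 / s                       # step stress sigma(t) = sigma0
eps_s = sigma_s * (1 / E + 1 / (eta * s))
eps_t = sp.inverse_laplace_transform(eps_s, s, t)
print(sp.simplify(eps_t))    # sigma0*(1/E + t/eta)*Heaviside(t): elastic strain plus creep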
Estimating multivariate location and scatter with both affine equivariance and positive breakdown has always been difficult. A well-known estimator which satisfies both properties is the minimum volume ellipsoid estimator (MVE). Computing the exact MVE is often not feasible, so one usually resorts to an approximate algorithm. In the regression setup, algorithms for positive-breakdown estimators like least median of squares typically recompute the intercept at each step to improve the result. This approach is called intercept adjustment. In this paper we show that a similar technique, called location adjustment, can be applied to the MVE. For this purpose we use the minimum volume ball (MVB). In order …
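The usual approximate algorithm referred to here resamples small subsets and keeps the smallest covering ellipsoid; the sketch below shows that generic baseline (without the paper's location adjustment), with helper names as illustrative assumptions.

import numpy as np

def approx_mve(X, n_trials=500, rng=np.random.default_rng(3)):
    n, p = X.shape
    h = (n + p + 1) // 2                           # coverage needed for positive breakdown
    best_vol, best = np.inf, None
    for _ in range(n_trials):
        idx = rng.choice(n, p + 1, replace=False)  # elemental subset
        mu = X[idx].mean(axis=0)
        S = np.cov(X[idx].T)
        try:
            Sinv = np.linalg.inv(S)
        except np.linalg.LinAlgError:
            continue                               # degenerate subset, skip
        d2 = np.einsum("ij,jk,ik->i", X - mu, Sinv, X - mu)   # Mahalanobis distances
        m2 = np.sort(d2)[h - 1]                    # inflate to cover h points
        vol = np.sqrt(np.linalg.det(S) * m2 ** p)  # ellipsoid volume up to a constant
        if vol < best_vol:
            best_vol, best = vol, (mu, m2 * S)     # keep the smallest ellipsoid
    return best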
A substantial portion of today's multimedia data exists in the form of unstructured text. However, the unstructured nature of text poses a significant challenge in meeting users' information requirements. Text classification (TC) has been extensively employed in text mining to facilitate multimedia data processing. However, accurately categorizing texts becomes challenging due to the increasing presence of non-informative features within the corpus. Several reviews on TC, encompassing various feature selection (FS) approaches to eliminate non-informative features, have been published previously. However, these reviews do not adequately cover the recently explored approaches to TC problem-solving that utilize FS, such as optimization techniques.
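As a concrete instance of the FS-for-TC pipeline such reviews survey, here is a minimal sklearn sketch in which chi-square scoring discards non-informative terms before a naïve Bayes classifier is trained; the dataset, k, and classifier choices are illustrative.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
test = fetch_20newsgroups(subset="test", categories=["sci.med", "sci.space"])

clf = make_pipeline(
    TfidfVectorizer(stop_words="english"),   # raw text -> sparse term weights
    SelectKBest(chi2, k=2000),               # drop non-informative features
    MultinomialNB(),
)
clf.fit(train.data, train.target)
print(f"accuracy: {clf.score(test.data, test.target):.3f}")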