Most medical datasets suffer from missing data, due to the expense of some tests or human error while recording them. This issue degrades the performance of machine learning models because the values of some features are missing. Therefore, specific methods are needed for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian diabetes disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results show that the classification performance of three different classifiers, namely support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayesian classifier (NBC), was enhanced compared with the dataset before applying the proposed method. Moreover, the results indicate that ISSA performed better than statistical imputation techniques such as deleting the samples with missing values or replacing the missing values with zeros, the mean, or random values.
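The statistical baselines the abstract compares against (zeros, per-feature mean, random draws) can be sketched as follows. This is an illustration of those baselines only, not the SSA-based ISSA method itself; the `impute` function, its parameters, and the toy matrix are our own assumptions, including the common PIDD convention that impossible zeros mark missing values.

```python
import numpy as np

def impute(X, strategy="mean", missing=0.0, rng=None):
    """Baseline imputation of entries equal to `missing`.

    'zero' leaves the zeros in place, 'mean' substitutes the
    per-feature mean of observed values, and 'random' draws
    uniformly between each feature's observed min and max.
    (Hypothetical sketch; not the authors' ISSA algorithm.)
    """
    X = X.astype(float).copy()
    rng = rng or np.random.default_rng(0)
    mask = X == missing  # in PIDD, impossible zeros mark missing values
    for j in range(X.shape[1]):
        obs = X[~mask[:, j], j]          # observed values of feature j
        if strategy == "mean":
            X[mask[:, j], j] = obs.mean()
        elif strategy == "random":
            X[mask[:, j], j] = rng.uniform(obs.min(), obs.max(),
                                           mask[:, j].sum())
        # 'zero': leave the marker values as they are
    return X

# Toy feature matrix with 0.0 marking a missing value in column 1
X = np.array([[1.0, 0.0], [3.0, 4.0], [5.0, 8.0]])
X_mean = impute(X, "mean")
print(X_mean[0, 1])  # mean of the observed column-1 values (4 and 8) -> 6.0
```

Any such baseline fills values independently of the classifier, which is precisely the limitation a search-based method like SSA aims to overcome by optimizing the imputed values against downstream performance.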
In today's world, the science of bioinformatics is developing rapidly, especially with regard to the analysis and study of biological networks. Scientists have used various nature-inspired algorithms to find protein complexes in protein-protein interaction (PPI) networks. These networks help scientists predict the molecular functions of unknown proteins and show how cells operate. It is very common in PPI networks for a protein to participate in multiple functions and belong to many complexes; as a result, complexes may overlap in the PPI networks. However, developing an efficient and reliable method to address the problem of detecting overlapping protein complexes remains a challenge, since it is considered a complex and har
Blockchain technology relies on cryptographic techniques that provide various advantages, such as trustworthiness, collaboration, organization, identification, integrity, and transparency. Meanwhile, data analytics refers to the process of utilizing techniques to analyze big data and comprehend the relationships between data points to draw meaningful conclusions. The field of data analytics in Blockchain is relatively new, and few studies have been conducted to examine the challenges involved in Blockchain data analytics. This article presents a systematic analysis of how data analytics affects Blockchain performance, with the aim of investigating the current state of Blockchain-based data analytics techniques in research fields and
Left bundle branch block (LBBB) is a common finding in electrocardiography, and it has many causes.
The aim of this study is to determine the true prevalence of coronary artery disease (CAD) in patients with LBBB and its association with risk factors in the form of hypertension and diabetes mellitus.
Patients with LBBB were admitted to the Iraqi heart center for cardiac disea
BACKGROUND: Diabetes mellitus is a complex chronic illness whose prevalence has increased significantly around the world; it is expected to affect 628 million people by 2045. Undiagnosed type 2 diabetes may affect 24%–62% of people with diabetes, while the prevalence of prediabetes is estimated to reach 470 million cases by 2030. AIM OF STUDY: To find the percentage of undiagnosed diabetes and prediabetes in a sample of people aged ≥ 45 years, and to relate it to age, gender, central obesity, hypertension, and family history of diabetes. METHODS: A cross-sectional study that included 712 healthy individuals living in Baghdad who agreed to take part in this study and fulfilled the inclusion and exclusion criteria.
Abstract:
Research topic: the ruling on the sale of big data.
Its objectives: to explain what big data is, its importance, its sources, and the ruling governing it.
Methodology: inductive, comparative, and critical.
One of the most important results: big data is valuable property that may not be infringed upon, and its sale is permissible as long as it does not contain data of users who have not consented to its sale.
Recommendation: follow up on studies dealing with the rulings on this issue.
Subject Terms
Judgment, Sale, Data, Mega, Sayings, Jurists
In this article, we aim to define a universal set consisting of the subscripts of the fuzzy differential equation (5), except the two elements and ; subsets of this universal set are defined according to certain conditions. Then, we use the constructed universal set with its subsets to suggest an analytical method that facilitates solving fuzzy initial value problems of any order using the strongly generalized H-differentiability. In addition, valid sets, with graphs, for solutions of higher-order fuzzy initial value problems are found.
Objectives: Teenage pregnancy with gestational diabetes mellitus (GDM) poses a real challenge to the health system and needs special care. We aimed to evaluate possible obstetrical and neonatal adverse events of different treatment protocols in adolescent GDM, including lifestyle, metformin (MTF), and insulin. Methods: All teenage pregnant women ≤ 19 years old visiting Baghdad Teaching Hospital over four years (from June 1, 2016 to May 31, 2020) diagnosed with GDM were included in this cohort study and followed up closely throughout pregnancy and after delivery. Included adolescents were put on lifestyle modification alone during the first week of presentation. Adolescents who reached target glucose measurements were categorized i
Different MLP ANN architectures have been trained by backpropagation (BP) and used to analyze Landsat TM images. Two different training approaches have been applied: an ordinary approach (one hidden layer, M-H1-L, and two hidden layers, M-H1-H2-L) and a one-against-all strategy (one hidden layer, (M-H1-1)xL, and two hidden layers, (M-H1-H2-1)xL). Classification accuracy of up to 90% has been achieved using the one-against-all strategy with the two-hidden-layer architecture. The performance of the one-against-all approach is slightly better than that of the ordinary approach.
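The contrast between the two training approaches can be sketched as below: a single network with L output units versus L binary networks, one per class. The dataset, class count, and hidden-layer size are illustrative assumptions, not values taken from the study, and scikit-learn stands in for the original BP implementation.

```python
# Hypothetical sketch: ordinary M-H1-L network vs. one-against-all
# (M-H1-1)xL networks, using synthetic data in place of Landsat TM.
from sklearn.neural_network import MLPClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=4, random_state=0)

# Ordinary approach: one network whose output layer has L units
ordinary = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, y)

# One-against-all: L binary networks, each with a single output unit;
# the class whose network responds most strongly wins
ova = OneVsRestClassifier(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                  random_state=0)).fit(X, y)

print(ordinary.score(X, y), ova.score(X, y))
```

The one-against-all decomposition turns one L-class problem into L simpler binary problems, which is one plausible reason for the slightly better performance the abstract reports.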
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional data mining and machine learning algorithms do not scale well with data size. Mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple sources. Data are aggregated at multiple resolutions, and the resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining an
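The idea of build-once, incrementally updated, multi-resolution aggregation can be illustrated with a minimal sketch. This is our own toy illustration under assumed details (a 1-D stream, fixed bin widths, per-bin count/sum summaries), not the authors' actual structure.

```python
from collections import defaultdict

class MultiResAggregate:
    """Toy multi-resolution summary of a 1-D data stream.

    Each resolution level bins values at a different width and stores
    only (count, sum) per bin, so memory is bounded by the number of
    occupied bins, not the number of instances. (Illustrative sketch.)
    """
    def __init__(self, widths=(1.0, 5.0, 25.0)):
        self.levels = {w: defaultdict(lambda: [0, 0.0]) for w in widths}

    def add(self, x):
        # Incremental update: every level absorbs the instance in O(1)
        for w, bins in self.levels.items():
            cell = bins[int(x // w)]
            cell[0] += 1
            cell[1] += x

    def mean(self, x, width):
        # Query at a chosen resolution: coarser width = cheaper, blurrier
        count, total = self.levels[width][int(x // width)]
        return total / count if count else None

agg = MultiResAggregate()
for v in [1.2, 1.7, 4.9, 12.0]:
    agg.add(v)

print(agg.mean(1.5, 1.0))   # fine resolution: bin [1, 2) holds 1.2 and 1.7
print(agg.mean(1.5, 5.0))   # coarse resolution: bin [0, 5) also includes 4.9
```

Choosing the query width is exactly the efficiency/accuracy trade-off the abstract describes: coarse bins are fewer and cheaper to scan but mix together more instances.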