Recurrent strokes can be devastating, often resulting in severe disability or death. However, nearly 90% of the causes of recurrent stroke are modifiable, meaning recurrent strokes can be averted by controlling risk factors, which are mainly behavioral and metabolic in nature. Previous works have shown promising results in predicting first-time stroke cases with machine learning approaches, which suggests that a recurrent stroke prediction model could help minimize the possibility of recurrent stroke. However, there are few works on recurrent stroke prediction using machine learning methods. Hence, this work proposes an empirical analysis investigating the implementation of machine learning algorithms in recurrent stroke prediction models. This research aims to investigate and compare the performance of machine learning algorithms on public recurrent stroke clinical datasets. In this study, Artificial Neural Network (ANN), Support Vector Machine (SVM), and Bayesian Rule List (BRL) are applied and their performance compared in the domain of recurrent stroke prediction. The empirical experiments show that ANN scores the highest accuracy at 80.00%, followed by BRL with 75.91% and SVM with 60.45%.
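The kind of head-to-head comparison this abstract describes can be sketched with off-the-shelf classifiers. The snippet below is a minimal illustration, not the authors' pipeline: it trains an ANN (multi-layer perceptron) and an SVM on a synthetic binary dataset and reports test accuracy. The dataset, hyperparameters, and train/test split are all assumptions; BRL is omitted because it is not part of scikit-learn.

```python
# Minimal sketch: comparing an ANN and an SVM on a synthetic
# binary-classification task, standing in for a recurrent stroke
# dataset (which is not reproduced here).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: {scores[name]:.2%}")
```

In a real study the accuracy comparison would be run on the clinical dataset, ideally with cross-validation rather than a single split.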
This research aims at a number of objectives, including developing the tax examination process and raising its efficiency without relying on the comprehensive examination method, by using some statistical methods in tax examination; and discussing the most important concepts related to the statistical methods used in tax examination, showing their importance and how they are applied. The research represents an applied study in the General Commission of Taxes. In order to achieve its objectives, the research used the descriptive (analytical) approach on the theoretical side, and on the practical side applied some statistical methods to a sample of the final accounts of the contracting company (limited) and the pharmaceutical industry (
Botnet detection poses a challenging problem in numerous fields such as law and order, cybersecurity, finance, healthcare, and so on. A botnet signifies a group of compromised Internet-connected devices controlled by cybercriminals for launching coordinated attacks and carrying out various malicious activities. While botnets seamlessly adapt to the countermeasures deployed by both network-based and host-based detection techniques, conventional techniques fail to attain sufficient protection against botnet threats. Thus, machine learning approaches have been established for detecting and classifying botnets for cybersecurity. This article presents a novel dragonfly algorithm with multi-class support vector machines enabled botnet
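As a rough illustration of the multi-class SVM component mentioned above (the dragonfly-algorithm feature-selection step is beyond this sketch), the snippet below trains a multi-class SVC on synthetic "traffic-feature" vectors labelled with three hypothetical traffic classes. All names and data are invented for illustration and do not come from the article.

```python
# Hedged sketch: a multi-class SVM over synthetic flow features,
# standing in for the botnet classifier described in the abstract.
# The dragonfly-algorithm feature selection is intentionally omitted.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Three hypothetical classes: 0 = benign, 1 = IRC botnet, 2 = P2P botnet.
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=1)

clf = SVC(kernel="rbf", decision_function_shape="ovr")
clf.fit(X, y)
pred = clf.predict(X[:5])
print(pred)
```

In the method the abstract describes, a metaheuristic such as the dragonfly algorithm would typically select the feature subset fed into the SVM before training.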
Objective: The study aims to test the effect of using the appropriate quantitative method of demand forecasting on improving the performance of the supply chain of the aviation fuel product (the study sample), one of the products of the Doura refinery (the study site), by testing a set of quantitative demand forecasting methods using forecasting error measurements, choosing the least erroneous, most accurate and reliable method, and adopting it in building the chain.
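The "forecasting error measurements" used to rank candidate methods typically include the mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE). A minimal sketch, with invented demand figures rather than the study's refinery data, is:

```python
import math

def mae(actual, forecast):
    """Mean absolute error: average magnitude of forecast misses."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error: penalizes large misses more heavily."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent (actuals must be nonzero)."""
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

# Invented monthly demand (actual) vs. two candidate forecasting methods.
actual   = [100, 120, 130, 125]
method_a = [110, 115, 128, 124]
method_b = [ 90, 140, 120, 130]

for name, fc in [("A", method_a), ("B", method_b)]:
    print(name, round(mae(actual, fc), 2), round(rmse(actual, fc), 2),
          round(mape(actual, fc), 2))
```

The method with the lowest errors across such measures would be the one adopted, per the study's selection criterion.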
The study approaches the problem starting with the fol
Marketing is one of the most important pillars on which most industrial and commercial sectors depend for evaluating their performance, improving their financial position, development, and economic growth. The presence of effective marketing activities in any industrial or commercial organization (activities that work to meet the requirements of customers, ensure the integration of trading and handling chains with consumers, and ensure that the marketing process grows regularly and does not retreat) effectively contributes to maintaining the company's position among its competitors and its customers. It is necessary to have these marketing activities in order to meet the requirements of the organization on the one hand and to
Some auditors may think that the audit process ends with discovering misstatements and informing management about them, whereas the discovery of misstatements may be classified as only the first step in the phase of addressing these misstatements: the auditor should collect the misstatements, evaluate them, and classify them into misstatements involving errors or misstatements involving fraud, then evaluate them as material or immaterial according to what is stated in the international auditing standards and direct management to amend the material ones. The importance of this research lies in identifying the concept of misstatements and their types, and identifying the method of evaluating misstatements as substantial and non-essent
Abstract
The study aims to examine the relationships between cognitive absorption and e-learning readiness in the preparatory stage. The study sample consisted of (190) students who were chosen randomly. The researcher developed the cognitive absorption and e-learning readiness scales. A correlational descriptive approach was adopted. The research revealed that there is a positive statistically significant relationship between cognitive absorption and e-learning readiness.
Iraq suffers from a continuing lack of water resources in general, whether surface water, groundwater, or rain. The study of rain is of utmost importance in order to understand the direction of rainfall in Iraq, and in Mosul in particular, and what it will be in the future. It also shows the wet as well as the dry seasons, the possibility of predicting them, and estimating their quantities in order to invest in them and preserve this vital resource. The research predicts the wet and dry rainy seasons in Mosul using the Standardized Precipitation Index (SPI), extracted by converting a Gamma distribution to the standardized normal distribution, depending on data of monthly rain amounts for 1940-2013. Results showed the existence of 31 w
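The SPI computation described above (fit a Gamma distribution to precipitation totals, then map each value's Gamma CDF through the inverse standard normal CDF) can be sketched as below. The rainfall figures are invented, and the handling of zero-rain months (which the full SPI method treats with a mixed distribution) is omitted.

```python
# Hedged sketch of the SPI transform: Gamma CDF -> standard normal quantile.
# Invented monthly precipitation totals (mm); the study uses 1940-2013
# records and handles zero-rain months, both omitted here.
import numpy as np
from scipy import stats

rain = np.array([12.0, 35.0, 50.0, 8.0, 61.0, 27.0, 44.0, 19.0, 73.0, 30.0])

# Fit a two-parameter Gamma (location fixed at 0, as is standard for SPI).
shape, loc, scale = stats.gamma.fit(rain, floc=0)

# The Gamma CDF gives each total's cumulative probability; the inverse
# normal CDF converts that probability to a z-score, the SPI value.
cdf = stats.gamma.cdf(rain, shape, loc=loc, scale=scale)
spi = stats.norm.ppf(cdf)
print(np.round(spi, 2))
```

Wet months map to positive SPI and dry months to negative SPI; since both transforms are monotone, the wettest month always receives the largest SPI value.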
Recommender systems are tools to understand the huge amount of data available in the internet world. Collaborative filtering (CF) is one of the knowledge discovery methods used successfully in recommendation systems. Memory-based collaborative filtering emphasizes using facts about present users to predict new items for the target user. Similarity measures are the core operations in collaborative filtering, and prediction accuracy mostly depends on similarity calculations. In this study, a combination of weighted parameters and traditional similarity measures is used to calculate relationships among users over the MovieLens dataset rating matrix. The advantages and disadvantages of each measure are spotted. From the study, a n
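As a concrete example of the similarity calculations at the heart of memory-based CF, the snippet below computes cosine similarity, one of the traditional measures, between users' rating vectors. The ratings are invented; MovieLens stores ratings in the same user-item matrix form, with 0 here standing for "not rated".

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two rating vectors (0 = unrated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

# Invented ratings for five movies by three users.
alice = [5, 3, 0, 4, 4]
bob   = [4, 2, 0, 5, 3]
carol = [1, 5, 4, 0, 1]

print(round(cosine_similarity(alice, bob), 3))
print(round(cosine_similarity(alice, carol), 3))
```

A memory-based CF system would use such scores to pick the target user's nearest neighbours and weight their ratings when predicting unseen items; the weighted-parameter variants the study mentions adjust these raw scores before prediction.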