This book is intended as a textbook for an undergraduate course in multivariate analysis and is designed for use in a semester system. To achieve its goals, the book is divided into the following chapters (as in the first edition, 2019). Chapter One introduces matrix algebra. Chapter Two is devoted to the solution of systems of linear equations, together with quadratic forms and characteristic roots and vectors. Chapter Three discusses partitioned matrices and how to obtain the inverse, Jacobian, and Hessian matrices. Chapter Four deals with the multivariate normal distribution (MVN). Chapter Five concerns joint, marginal, and conditional normal distributions, independence, and correlations. New chapters have been added in the current second edition (2024): Chapter Six introduces estimation of the mean vector and the covariance matrix. Chapter Seven is devoted to tests concerning the mean: the one-sample and two-sample cases. Chapter Eight discusses principal components analysis, a special case of factor analysis. Chapter Nine deals with discriminant analysis, while Chapter Ten deals with cluster analysis. Many solved examples are included in this book, in addition to a variety of unsolved problems at the end of each chapter to enrich the reader's statistical knowledge.
Researchers have shown increased interest in recent years in determining the optimum sample size needed to obtain sufficiently accurate, high-precision parameter estimates when evaluating a large number of tests in the field of diagnosis at the same time. In this research, two methods were used to determine the optimum sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is estimated, with the sample size given by each method, on high-dimensional data using artificial intelligence, namely an artificial neural network (ANN), as it gives a high-precision estimate commensurate with the data.
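The Bennett-inequality approach to sample-size determination can be sketched as follows. For observations bounded by b with variance sigma^2, the two-sided Bennett bound P(|sample mean - true mean| >= eps) <= 2*exp(-(n*sigma^2/b^2)*h(b*eps/sigma^2)), with h(u) = (1+u)ln(1+u) - u, can be inverted for the smallest n meeting a target confidence. This is a generic textbook inversion of the inequality, not the paper's exact formulation, and the parameter names below are illustrative:

```python
import math

def bennett_sample_size(b, sigma2, eps, delta):
    """Smallest n for which Bennett's two-sided inequality guarantees
    P(|sample mean - true mean| >= eps) <= delta, for observations
    bounded by b with variance sigma2."""
    u = b * eps / sigma2
    h = (1.0 + u) * math.log(1.0 + u) - u   # Bennett's h-function
    return math.ceil(b * b * math.log(2.0 / delta) / (sigma2 * h))
```

For instance, `bennett_sample_size(1.0, 0.25, 0.05, 0.05)` gives the sample size for precision 0.05 at 95% confidence; tightening `eps` increases the required n, as expected.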
Interest in developing accurate automatic facial emotion recognition methodologies is growing vastly, and it remains an ever-growing research field in computer vision, artificial intelligence, and automation. However, building an automated system that equals the human ability to recognize facial emotion is challenging, because of the lack of an effective facial feature descriptor and the difficulty of choosing a proper classification method. In this paper, a geometry-based feature vector is proposed. For the classification purpose, three different types of classification methods are tested: statistical, artificial neural network (NN), and support vector machine (SVM). A modified K-Means clustering algorithm
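As one illustration of the clustering component, a standard (unmodified) K-Means in pure Python is sketched below. The paper's specific modification and its geometric feature extraction are not reproduced here, so treat this as textbook Lloyd's algorithm only:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Standard K-Means (Lloyd's algorithm) on tuples of floats.
    Returns final centers and the clusters from the last assignment."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialise from random data points
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # update step: each center moves to its cluster mean
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters
```

On two well-separated point groups this converges to the group means within a few iterations.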
This research investigates the problem of estimating the reliability of the two-parameter Weibull distribution using the maximum likelihood method and White's method. The comparison is carried out through a simulation process with three choices of model parameters (α=0.8, β=0.9), (α=1.2, β=1.5), and (α=2.5, β=2), and sample sizes n = 10, 70, 150. The mean square error (MSE) is used as the statistical criterion for comparison among the methods.
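The maximum-likelihood step of such a simulation study can be sketched as below: the Weibull shape k is obtained from the profile likelihood equation by bisection, the scale λ then follows in closed form, and the reliability is R(t) = exp(-(t/λ)^k). This is a generic illustration, not the paper's code, and White's method is not shown:

```python
import math
import random

def weibull_mle(xs):
    """Pure-Python MLE for the two-parameter Weibull (shape k, scale lam).
    Solves the monotone profile equation for k by bisection."""
    logs = [math.log(x) for x in xs]
    mlog = sum(logs) / len(xs)

    def g(k):  # increasing in k; root is the MLE of the shape
        s1 = sum(x ** k * lx for x, lx in zip(xs, logs))
        s0 = sum(x ** k for x in xs)
        return s1 / s0 - 1.0 / k - mlog

    lo, hi = 0.01, 50.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in xs) / len(xs)) ** (1.0 / k)
    return k, lam

def reliability(t, k, lam):
    """Weibull reliability (survival) function R(t)."""
    return math.exp(-((t / lam) ** k))
```

In a simulation study, one would repeat this over many samples drawn at each (shape, scale, n) setting and average the squared errors to obtain the MSE.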
Physical and chemical adsorption analyses were carried out with nitrogen gas using an ASTM apparatus at 77 K and with hydrogen gas using a volumetric apparatus at room temperature, respectively. These analyses were used to determine the effect of coke deposition and metal poisoning on the surface area, pore size distribution, and metal surface area of fresh and spent hydrodesulphurization catalyst Co-Mo/Al2O3. Samples of catalyst (fresh and spent) used in this study were taken from the AL-Dura refinery. The results of physical adsorption show that the surface area of the spent catalyst was reduced to a third compared with the fresh catalyst, and these catalysts exhibit type IV behavior according to the BET classification; thus, the pores of these samples are cylindrical, and the
Aspect-based sentiment analysis is an important research topic concerned with extracting and categorizing aspect terms from online reviews. Recent efforts have shown that topic modelling is widely used for this task. In this paper, we integrated word embeddings into collapsed Gibbs sampling in Latent Dirichlet Allocation (LDA). Specifically, the conditional distribution in the topic model is improved using a word embedding model trained on a customer-review training dataset. Semantic similarity (the cosine measure) was leveraged to assign the aspect terms to their related aspect categories. The experiment was conducted to extract and categorize the aspect terms from the SemEval 2014 dataset.
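The cosine-similarity assignment of aspect terms to categories can be illustrated with toy vectors. The 3-dimensional embeddings below are made-up stand-ins for trained word vectors, not real model output:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def assign_category(term_vec, category_vecs):
    """Return the aspect category whose embedding is most cosine-similar
    to the aspect term's embedding."""
    return max(category_vecs, key=lambda c: cosine(term_vec, category_vecs[c]))

# Toy, hand-made embeddings (illustrative only):
categories = {"food": (1.0, 0.1, 0.0), "service": (0.0, 1.0, 0.1)}
pizza_like = (0.9, 0.2, 0.0)   # a vector imagined to lie near "food"
```

Here `assign_category(pizza_like, categories)` picks the category with the highest cosine score, mirroring how extracted aspect terms are routed to aspect categories.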
The single-photon detection concept is the most crucial factor determining the performance of quantum key distribution (QKD) systems. In this paper, a simulator with time-domain visualizers and configurable parameters, using a continuous-time simulation approach, is presented for modeling and investigating the performance of single-photon detectors operating in Geiger mode at a wavelength of 830 nm. The widely used C30921S silicon avalanche photodiode was modeled in terms of the avalanche pulse and the effect of experimental conditions such as excess voltage, temperature, and average photon number on the photon detection efficiency, dark count rate, and afterpulse probability. This work shows a general repeatable modeling process for significant perform
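A common starting point for such a simulator is the per-gate click probability of a Geiger-mode APD under Poissonian illumination. The sketch below uses the standard textbook relation P(click) = 1 - (1 - p_dark)·exp(-η·μ); it is not the paper's full time-domain model, and afterpulsing and temperature dependence are omitted:

```python
import math

def click_probability(mu, eta, p_dark):
    """Per-gate click probability of a Geiger-mode APD for a Poissonian
    source with mean photon number mu, detection efficiency eta, and
    dark-count probability p_dark (standard model; afterpulsing ignored)."""
    return 1.0 - (1.0 - p_dark) * math.exp(-eta * mu)
```

With no light (`mu = 0`) the click probability reduces to the dark-count probability, and it increases monotonically with the average photon number, as expected physically.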
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data that we might come across in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components. A comparison of these methods has been made according to their results in estimating the component parameters. Also, observation membership has been inferred and assessed for these methods. The results showed that the flexible mixture model outperformed the others.
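One common member of this family, a two-component mixture of simple linear regressions fitted by EM, can be sketched in pure Python. This is an illustrative implementation, not any of the paper's exact methods; the initialisation here naively splits the data at the median response:

```python
import math
import random

def normal_pdf(y, mu, sig):
    """Gaussian density at y with mean mu and standard deviation sig."""
    return math.exp(-0.5 * ((y - mu) / sig) ** 2) / (sig * math.sqrt(2.0 * math.pi))

def em_mixture_regression(xs, ys, iters=50):
    """EM for a 2-component mixture of simple linear regressions
    y = a_j + b_j * x + N(0, sig_j), j = 1, 2 (illustrative sketch)."""
    n = len(xs)
    med = sorted(ys)[n // 2]
    # naive initialisation: responsibilities from a median split of y
    resp = [[1.0, 0.0] if y >= med else [0.0, 1.0] for y in ys]
    params, pis = [], []
    for _ in range(iters):
        # M-step: weighted least squares per component
        params, pis = [], []
        for j in range(2):
            w = [r[j] for r in resp]
            sw = sum(w) or 1e-300
            mx = sum(wi * x for wi, x in zip(w, xs)) / sw
            my = sum(wi * y for wi, y in zip(w, ys)) / sw
            sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs)) or 1e-300
            sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
            b = sxy / sxx
            a = my - b * mx
            s2 = sum(wi * (y - a - b * x) ** 2 for wi, x, y in zip(w, xs, ys)) / sw
            params.append((a, b, max(math.sqrt(s2), 1e-6)))
            pis.append(sw / n)
        # E-step: posterior membership probabilities (responsibilities)
        resp = []
        for x, y in zip(xs, ys):
            ps = [pis[j] * normal_pdf(y, params[j][0] + params[j][1] * x, params[j][2])
                  for j in range(2)]
            tot = sum(ps) or 1e-300
            resp.append([p / tot for p in ps])
    return params, pis
```

The final responsibilities give each observation's inferred component membership, which is exactly the quantity the comparison above assesses across methods.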
Background: COVID-19 is a disease that started in Wuhan, China in late 2019 and continued through 2020 worldwide. Scientists worldwide continue to research vaccines, treatments, and medication for this disease; studies also continue to investigate its pathogenicity and epidemiology. Materials and Methods: In this work, we analyzed cases obtained from the Alshifaa center in Baghdad, Iraq for the period 23/2/2020 to 31/5/2020, with a total of 797 cases, 393 positive cases, and 30 deaths. Results: The results showed that the highest number of infections was among people aged 41-45. It was also found that the number of cases in males was higher than in females; moreover, deaths were significantly higher in males than in females. It was not