In this research, a comparison is made between robust M-estimators for the cubic smoothing splines technique, used to avoid the problem of non-normality or contamination in the errors, and the traditional estimation method for cubic smoothing splines, using two comparison criteria (MADE, WASE) across different sample sizes and dispersion levels, in order to estimate the time-varying coefficient functions for balanced longitudinal data. Such data are characterized by observations obtained from (n) independent subjects, each of which is measured repeatedly at a set of specific time points (m); the repeated measurements within a subject are typically correlated, while measurements from different subjects are independent.
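As a rough illustration of the comparison described above, the sketch below fits a classical smoothing spline and an M-type robust alternative (iteratively reweighted with Huber weights) to contaminated data, then evaluates both with a MADE-style criterion. Everything here is an assumption for illustration: the simulated data, the smoothing parameter `lam`, the Huber tuning constant, and the reading of MADE as a median absolute deviation error are not taken from the paper. Requires SciPy ≥ 1.10 for `make_smoothing_spline`.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline  # SciPy >= 1.10

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
truth = np.sin(2 * np.pi * t)
y = truth + rng.normal(0.0, 0.1, t.size)
y[::10] += 2.0                      # contaminate every 10th error with an outlier

def huber_weights(u, c=1.345):
    """Huber psi(u)/u weights used in the M-estimation step."""
    a = np.abs(u)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

lam = 1e-3                          # fixed smoothing parameter (illustrative choice)
classical = make_smoothing_spline(t, y, lam=lam)

# M-type fit: iteratively reweighted smoothing spline with Huber weights
w = np.ones_like(y)
for _ in range(20):
    robust = make_smoothing_spline(t, y, w=w, lam=lam)
    r = y - robust(t)
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust MAD scale
    w = huber_weights(r / max(scale, 1e-12))

# MADE criterion, read here as the median absolute deviation from the truth
made_classical = np.median(np.abs(classical(t) - truth))
made_robust = np.median(np.abs(robust(t) - truth))
```

With contaminated errors, the outlying observations end up sharply down-weighted and the robust fit tracks the underlying curve more closely than the classical one, which is the behavior the comparison above is measuring.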
Visual analytics has become an important approach for discovering patterns in big data. As visualization already suffers from the high dimensionality of data, issues like the concept hierarchy on each dimension add further difficulty and can make visualization a prohibitive task. A data cube offers multi-perspective aggregated views of large data sets and has important applications in business and many other areas. It has high dimensionality, concept hierarchies, a vast number of cells, and comes with special exploration operations such as roll-up, drill-down, slicing, and dicing. All of these issues make data cubes very difficult to explore visually. Most existing approaches visualize a data cube in 2D space and require preprocessing steps. In this paper, we propose a visualization approach.
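The cube operations named above can be made concrete with a small aggregation sketch. The toy `sales` table, its dimensions, and its time hierarchy (year > quarter) are invented for illustration and are not the paper's data or method:

```python
import pandas as pd

# Toy sales cube with a concept hierarchy on the time dimension (year > quarter)
sales = pd.DataFrame({
    "year":    [2023, 2023, 2023, 2024, 2024, 2024],
    "quarter": ["Q1", "Q1", "Q2", "Q1", "Q2", "Q2"],
    "region":  ["north", "south", "north", "south", "north", "south"],
    "amount":  [100, 150, 120, 90, 200, 160],
})

# Base cuboid: the finest granularity
base = sales.groupby(["year", "quarter", "region"])["amount"].sum()

# Roll-up along the time hierarchy: quarter -> year
rolled_up = sales.groupby(["year", "region"])["amount"].sum()

# Slice: fix one dimension to a single value
slice_2024 = sales[sales["year"] == 2024].groupby(["quarter", "region"])["amount"].sum()
```

Drill-down is simply the inverse of the roll-up (moving from `rolled_up` back to `base`), and dicing generalizes the slice to a sub-cube selected on several dimensions at once.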
The use of parametric models and their estimation methods requires many preliminary conditions to be met for those models to represent the population under study adequately. This prompted researchers to search for more flexible alternatives, namely nonparametric models. Many researchers are interested in the survival (permanence) function and its estimation methods, nonparametric methods being one such approach.
For the purpose of statistical inference about the parameters of the statistical distribution of lifetimes with censored data, the experimental section of this thesis compares nonparametric estimation methods for the survival function in the presence of censored data.
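One widely used nonparametric estimator of the survival (permanence) function under right censoring is the Kaplan–Meier estimator. The sketch below is a minimal implementation with invented lifetimes; it is offered as a generic example of the nonparametric approach, not as the thesis's actual estimator or data:

```python
import numpy as np

def kaplan_meier(times, observed):
    """Kaplan-Meier estimate of the survival (permanence) function from
    right-censored data: observed[i] is True for a failure, False for censoring."""
    times = np.asarray(times, float)
    observed = np.asarray(observed, bool)
    order = np.argsort(times, kind="stable")
    times, observed = times[order], observed[order]
    at_risk = len(times)
    surv = 1.0
    steps = []
    for t, failed in zip(times, observed):
        if failed:  # the survival curve drops only at observed failures
            surv *= (at_risk - 1) / at_risk
            steps.append((float(t), surv))
        at_risk -= 1
    return steps

# Hypothetical lifetimes; False marks right-censored observations
steps = kaplan_meier([2, 3, 3, 5, 8], [True, True, False, True, False])
```

Censored observations leave the curve unchanged but still shrink the risk set, which is exactly how the estimator uses incomplete lifetimes without discarding them.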
Transmitting and receiving data consume the most resources in Wireless Sensor Networks (WSNs). The energy supplied by the battery is the most important resource affecting a WSN's lifespan at the sensor node. Because sensor nodes run on limited batteries, energy saving is necessary. Data aggregation can be defined as a procedure for eliminating redundant transmissions; it provides fused information to the base station, which in turn improves energy effectiveness and increases the lifespan of energy-constrained WSNs. In this paper, a Perceptually Important Points Based Data Aggregation (PIP-DA) method for Wireless Sensor Networks is suggested to reduce redundant data before sending it to the base station.
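The core idea behind Perceptually Important Points can be sketched as follows: keep the end points of a sensor series, then repeatedly keep the reading farthest from the line joining its already-selected neighbors, so a node transmits only k representative points instead of all m readings. The function name, the distance measure, and the sample readings below are illustrative assumptions, not the paper's PIP-DA implementation:

```python
def vertical_distance(x, y, x1, y1, x2, y2):
    """Vertical distance from (x, y) to the line through (x1, y1) and (x2, y2)."""
    if x2 == x1:
        return abs(y - y1)
    slope = (y2 - y1) / (x2 - x1)
    return abs(y - (y1 + slope * (x - x1)))

def select_pips(series, k):
    """Select the indices of k Perceptually Important Points from a 1-D series."""
    n = len(series)
    pips = [0, n - 1]                      # always keep the end points
    while len(pips) < k:
        best_idx, best_dist = None, -1.0
        anchors = sorted(pips)
        for left, right in zip(anchors, anchors[1:]):
            for i in range(left + 1, right):
                d = vertical_distance(i, series[i],
                                      left, series[left], right, series[right])
                if d > best_dist:
                    best_dist, best_idx = d, i
        if best_idx is None:
            break
        pips.append(best_idx)
    return sorted(pips)

# A sensor node could transmit only these k points instead of all m readings
readings = [20.1, 20.2, 20.2, 25.7, 25.9, 26.0, 21.0, 20.4, 20.3, 20.2]
pips = select_pips(readings, 4)
```

On this toy series the selected points capture the peak and its descent, so the base station can reconstruct the shape of the signal from a fraction of the transmissions.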
Abstract
A surface-fitting model is developed based on calorimeter data for two well-known brands of household compressors. Correlation equations with ten-coefficient polynomials were found as functions of the refrigerant saturation and evaporating temperatures in the range of (−35 °C to −10 °C), using MATLAB software, for cooling capacity, power consumption, and refrigerant mass flow rate.
Additional correlation equations for these variables were derived as a quick selection guide for choosing a proper compressor at ASHRAE standard conditions, covering a swept-volume range of (2.24–11.15) cm³.
The results indicated that these surface-fitting models are accurate to within ±15% for the 72 compressor models for cooling capacity.
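A ten-coefficient compressor polynomial of the kind described above (the form used by the AHRI 540 rating standard) can be fitted by ordinary least squares. In the sketch below the temperature grid, the "measured" capacities, and the coefficient values are synthetic stand-ins for calorimeter data; they are not the paper's results:

```python
import numpy as np

def design_matrix(te, tc):
    """AHRI 540-style ten-term cubic polynomial basis in evaporating (te)
    and condensing (tc) temperatures, both in degrees C."""
    te, tc = np.asarray(te, float), np.asarray(tc, float)
    return np.column_stack([
        np.ones_like(te), te, tc, te**2, te * tc, tc**2,
        te**3, te**2 * tc, te * tc**2, tc**3,
    ])

# Hypothetical calorimeter grid: evaporating -35..-10 degC, condensing 40..55 degC
te_grid, tc_grid = np.meshgrid(np.arange(-35, -9, 5), np.arange(40, 56, 5))
te, tc = te_grid.ravel(), tc_grid.ravel()

# Synthetic "measured" cooling capacity (W) standing in for calorimeter readings
true_coeffs = np.array([2500.0, 50.0, -10.0, 0.5, -0.1, 0.05,
                        0.002, 0.0005, -0.0002, 0.0001])
capacity = design_matrix(te, tc) @ true_coeffs

# Least-squares fit of the ten polynomial coefficients
coeffs, *_ = np.linalg.lstsq(design_matrix(te, tc), capacity, rcond=None)
predicted = design_matrix(te, tc) @ coeffs
max_rel_err = np.max(np.abs(predicted - capacity) / np.abs(capacity))
```

The same design matrix and fit apply unchanged to power consumption and refrigerant mass flow rate; only the response vector changes.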
The research aims to estimate missing values using the analysis-of-covariance (Coons) method for the response (dependent) variable, which represents the main characteristic studied, in a type of multi-factor experimental design called the split-block design (SBED), so as to increase the accuracy of the analysis results and of the statistical tests based on this type of design. The theoretical part presents the split-block design and its statistical analysis, the analysis of variance for an SBED experiment, and the use of the Coons covariance method in two ways to estimate the missing value. In the practical part, a field experiment was implemented on a wheat crop in
Background: Accurate measurement of a patient's height and weight is an essential part of diagnosis and therapy, but there is some controversy as to how to calculate the height and weight of patients with disabilities. Objective: This study aims to use anthropometric measurements (arm span, leg length, chest circumference, and waist circumference) to find alternative models that allow calculation of the height and body weight of patients with disabilities. Additionally, a model for predicting the weight and height of patients with disabilities was established. Method: Four hundred patients aged 20-80 years were enrolled in this study and divided into two groups: 210 (52.5%) male and 190 (47.5%) female.
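A prediction model of the kind described can be as simple as a least-squares line relating one anthropometric measurement to height. The (arm span, height) pairs below are invented for illustration; the study's actual model would be fitted to its own patients' measurements:

```python
import numpy as np

# Hypothetical (arm span, height) pairs in cm, for illustration only
arm_span = np.array([160.0, 165.0, 170.0, 175.0, 180.0, 185.0])
height   = np.array([158.0, 164.0, 169.0, 173.0, 179.0, 184.0])

# Simple linear model height = a + b * arm_span, fitted by least squares
b, a = np.polyfit(arm_span, height, 1)

def predict_height(span_cm):
    """Predicted height (cm) from arm span (cm) under the fitted linear model."""
    return a + b * span_cm
```

The same structure extends to multiple predictors (leg length, chest and waist circumference) by replacing `polyfit` with a multiple-regression design matrix.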
The estimation of the reliability function depends on the accuracy of the data used to estimate the parameters of the probability distribution. Because some data are skewed, estimating the parameters and calculating the reliability function in the presence of skewness requires a distribution flexible enough to deal with such data. In the data of Diyala Company for Electrical Industries, a positive skew was observed in the data collected from the Power and Machinery Department, which required a distribution that handles those data and methods that accommodate this problem and lead to accurate estimates of the reliability function.
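A common choice for positively skewed lifetime data is the Weibull distribution, whose reliability function is R(t) = exp(−(t/η)^β). The sketch below fits a Weibull by maximum likelihood and evaluates R(t); the simulated failure times are a hypothetical stand-in for the company's data, and the Weibull itself is one flexible candidate, not necessarily the distribution the paper selects:

```python
import numpy as np
from scipy import stats

# Hypothetical positively skewed failure times (hours), standing in for the
# Power and Machinery Department data described above
times = stats.weibull_min.rvs(c=1.5, scale=1000.0, size=200, random_state=42)

# Maximum-likelihood Weibull fit with the location fixed at zero
shape, loc, scale = stats.weibull_min.fit(times, floc=0)

def reliability(t):
    """R(t) = P(T > t) = exp(-(t/scale)**shape) for the fitted Weibull."""
    return np.exp(-(np.asarray(t, float) / scale) ** shape)

r500 = reliability(500.0)  # estimated probability of surviving past 500 hours
```

Because the shape parameter is free, the same fit accommodates increasing, constant, or decreasing failure rates, which is what makes the family suitable for skewed reliability data.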
Researchers have shown increased interest in recent years in determining the optimal sample size needed to obtain sufficient accuracy in estimation and high-precision parameters, in order to evaluate a large number of tests in the field of diagnosis at the same time. In this research, two methods were used to determine the optimal sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is then estimated with the sample size from each method using artificial intelligence, namely an artificial neural network (ANN), as it gives a high-precision estimate commensurate with the data.
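One common reading of the Bennett-inequality approach to sample size is to find the smallest n for which the two-sided Bennett tail bound on the sample mean falls below a target risk. The function below implements that reading for variables bounded by |X − μ| ≤ b with variance σ²; it is a generic sketch, not the paper's exact procedure:

```python
import math

def bennett_sample_size(sigma2, b, eps, delta):
    """Smallest n such that Bennett's inequality guarantees
    P(|sample mean - mu| >= eps) <= delta, for i.i.d. variables with
    variance sigma2 and |X - mu| <= b (illustrative two-sided reading)."""
    u = b * eps / sigma2
    h = (1.0 + u) * math.log(1.0 + u) - u          # Bennett's h(u)
    n = (b * b) * math.log(2.0 / delta) / (sigma2 * h)
    return math.ceil(n)

# e.g. variance 1, bound 1, tolerance 0.1, risk 5%
n_needed = bennett_sample_size(1.0, 1.0, 0.1, 0.05)
```

As expected, loosening the tolerance eps shrinks the required sample size sharply, since h(u) grows faster than linearly in u.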
This research aims primarily to highlight personal tax exemptions in a comparative study with some Arab and European regulations, by conducting both theoretical and comparative analyses. The most important finding of the study is the need to grant personal and family exemptions that differ according to the civil status of the taxpayer (single or married); in other words, the exemption should increase with the number of dependent family members, in keeping with its social purpose. Incomes that require particular effort should also be taken into account when looking at tax rates: it is unreasonable for wages to be subject to the same rates applied to commercial profits.
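The family-based exemption scheme recommended above can be illustrated with a toy calculation. The amounts, the spouse allowance, and the per-dependent step are entirely hypothetical and are not drawn from any of the regulations compared in the study:

```python
def personal_exemption(base, per_dependent, status, dependents):
    """Illustrative exemption that grows with the taxpayer's family size:
    a base personal amount, a spouse allowance if married, and a fixed
    allowance per dependent family member (all amounts hypothetical)."""
    exempt = base
    if status == "married":
        exempt += per_dependent          # spouse allowance
    exempt += per_dependent * dependents
    return exempt

# A married taxpayer with three dependents and a hypothetical annual income
income = 12_000_000
taxable = max(0, income - personal_exemption(2_500_000, 500_000, "married", 3))
```

Under such a scheme two taxpayers with equal incomes but different family burdens face different taxable bases, which is the social effect the study argues for.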
Background: Atherosclerosis is well known to be related to age and certain cardiovascular diseases. Aging is one cause of deterioration in arterial function, which can lead to loss of compliance and plaque accumulation; this effect increases in the presence of diseases such as hypertension and diabetes. Aim: To investigate the reduction of blood supply to the brain with age in patients with diabetes and hypertension, and the role of the resistive index in the diagnosis of reduced blood flow. Method: Patients with both diabetes and hypertension were classified according to their age to identify the progression of the disease and the factors influencing carotid artery blood flow, using ultrasound and standard Doppler techniques.
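The Doppler resistive index used in studies of this kind is computed from the peak systolic and end-diastolic velocities of the waveform. The formula below is the standard definition; the example velocities are hypothetical, not measurements from this study:

```python
def resistive_index(psv, edv):
    """Doppler Resistive Index: RI = (PSV - EDV) / PSV, where PSV is the
    peak systolic and EDV the end-diastolic velocity (e.g. in cm/s)."""
    if psv <= 0:
        raise ValueError("peak systolic velocity must be positive")
    return (psv - edv) / psv

# Hypothetical carotid velocities: a higher RI suggests higher distal resistance
ri = resistive_index(80.0, 20.0)  # -> 0.75
```

Because diastolic flow falls as distal vascular resistance rises, a larger RI is the waveform signature of the reduced cerebral blood supply the study investigates.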