Region-based association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, rather than of individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. To tackle the problem of this sparse distribution, a two-stage approach was proposed in the literature: in the first stage, haplotypes are computationally inferred from genotypes, followed by a haplotype co-classification; in the second stage, the association analysis is performed on the inferred haplotype groups. If a haplotype is unevenly distributed between the case and control samples, it is labeled a risk haplotype. Unfortunately, the in-silico reconstruction of haplotypes can produce a proportion of false haplotypes that hampers the detection of rare but true haplotypes. Here, to address this issue, we propose an alternative approach: in Stage 1, we cluster genotypes instead of inferred haplotypes and estimate the risk genotypes based on a finite mixture model; in Stage 2, we infer risk haplotypes from the risk genotypes identified in the previous stage. To estimate the finite mixture model, we propose an EM algorithm with a novel data partition-based initialization. The performance of the proposed procedure is assessed by simulation studies and a real data analysis. Compared with the existing multiple Z-test procedure, we find that the power of genome-wide association studies can be increased by using the proposed procedure.
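To illustrate the general idea of fitting a finite mixture by EM with a data partition-based initialization, here is a minimal sketch for a two-component Gaussian mixture. The sorted-split initialization, the function name, and the simulated data are assumptions for illustration only; they are not the authors' exact model or initialization scheme.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=100):
    """EM for a 2-component 1-D Gaussian mixture with a
    partition-based initialization (sorted data split in half)."""
    xs = np.sort(x)
    half = len(xs) // 2
    mu = np.array([xs[:half].mean(), xs[half:].mean()])
    var = np.array([xs[:half].var() + 1e-6, xs[half:].var() + 1e-6])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 200)])
pi, mu, var = em_gaussian_mixture(x)
```

With well-separated components, the partition-based start places each initial mean inside one true cluster, so EM converges quickly to the correct component parameters.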
The current study aims to compare estimates of the Rasch model's parameters obtained from missing and completed data under various methods of handling missing data. To achieve this aim, the researcher followed these steps: preparing the Philip Carter test of spatial ability, which consists of 20 items, and administering it to a group of 250 sixth scientific stage students in the Baghdad Education Directorates of Al-Rusafa (1st, 2nd, and 3rd) for the academic year 2018-2019. The researcher then relied on a one-parameter model to analyze the data and used the Bilog-MG3 software to check the hypotheses and the data and to assess their fit to the model. In addition
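For reference, the one-parameter (Rasch) model expresses the probability of a correct response through a single item-difficulty parameter. A minimal sketch in Python; the function name and example values are illustrative, not taken from the study.

```python
import math

def rasch_prob(theta, b):
    """Rasch (1PL) model: P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is the person's ability and b is the item's difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability equals the item's difficulty answers correctly
# with probability 0.5; higher ability raises that probability.
p_equal = rasch_prob(0.0, 0.0)
p_abler = rasch_prob(1.0, 0.0)
```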
Poly(vinyl alcohol) (PVA) of 72,000 g mol⁻¹ molar mass was cross-linked through cold-cast esterification with different mol % of MA and EDTA (10%, 20%, and 30%), and two different mol % mixtures of MA/EDTA (20%/5% and 20%/10%).
The issue of increasing the area covered by a wireless sensor network (WSN) with a restricted number of sensors is addressed using an improved cuckoo search (CS) employing the particle swarm optimization (PSO) algorithm and opposition-based learning (ICS-PSO-OBL). First, the iteration is carried out by updating the old solution dimension by dimension, achieving independent updating across dimensions in the high-dimensional optimization problem. The PSO operator is then incorporated to lessen the imbalance between exploration and exploitation ability in the preference random walk stage. Exceptional individuals are selected from the population using OBL to boost the chance of finding the optimal solution based on the fitness value. The ICS-PSO-OBL is used to maximize coverage in the WSN by converting r
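Opposition-based learning keeps, for each candidate, the fitter of the solution and its "opposite" within the search bounds. A minimal sketch of this selection step in Python; the sphere objective, bounds, and random population are illustrative assumptions, not the paper's WSN coverage objective.

```python
import numpy as np

def opposition_based_selection(pop, fitness, lb, ub):
    """For each row of pop, compare it with its opposite point
    (lb + ub - x) and keep the one with lower fitness (minimization)."""
    opp = lb + ub - pop  # opposite solution within the bounds [lb, ub]
    f_pop = np.apply_along_axis(fitness, 1, pop)
    f_opp = np.apply_along_axis(fitness, 1, opp)
    keep = f_pop <= f_opp
    return np.where(keep[:, None], pop, opp)

# Illustrative setup: minimize the sphere function over [0, 5]^3.
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(1)
lb, ub = 0.0, 5.0
pop = rng.uniform(lb, ub, size=(6, 3))
new_pop = opposition_based_selection(pop, sphere, lb, ub)
```

Because each kept individual is at least as fit as both the original and its opposite, the population's best fitness can only improve or stay the same after this step.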
The deep drawing process for producing a square cup is very complex because many process parameters control it, and it is therefore associated with many defects, such as earing, wrinkling, and fracture. The aim of this study is to examine the effect of selected process parameters and determine the values that give the best result: the thickness distributions and depths of the cup were used to estimate the effect of the parameters numerically, with experimental verification performed only for the conditions that gave the best numerical predictions, in order to reduce the time, effort, and cost of producing a square cup with fewer defects. The numerical analysis is used to study
This paper is concerned with combining two different transforms to present a new joint transform, FHET, and its inverse transform, IFHET. The most important property of FHET, the finite Hankel–Elzaki transform of the Bessel differential operator, was also stated and proved; this property was discussed for two different boundary conditions, Dirichlet and Robin. The importance of this property lies in solving axisymmetric partial differential equations by transitioning directly to an algebraic equation. The joint finite Hankel–Elzaki transform method was also applied to solve a mathematical-physical problem, the Hotdog Problem. A steady state which does not depend on time was discussed f
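For context, joint-transform properties of this kind typically build on the classical operational property of the finite Hankel transform; a sketch of the zero-order Dirichlet case follows (notation is assumed for illustration and is not taken from the paper).

```latex
% Zero-order finite Hankel transform on [0, a], with k_i the roots of
% J_0(k_i a) = 0 (Dirichlet boundary condition):
F(k_i) = \int_0^a r\, f(r)\, J_0(k_i r)\, dr .
% Integrating by parts twice, the Bessel differential operator maps to
% multiplication by -k_i^2 plus a boundary term:
\int_0^a r \left( f''(r) + \frac{1}{r}\, f'(r) \right) J_0(k_i r)\, dr
  = -\,k_i^2\, F(k_i) + a\, k_i\, f(a)\, J_1(k_i a) ,
% so an axisymmetric PDE in r reduces to an algebraic relation in F(k_i).
```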
The basic orientation is to examine, from a conceptual and descriptive perspective, the marketing of the self, a topic that has only recently received practical attention. In recent years, marketing has become integrated and holistic, encompassing many areas and no longer limited to goods and services: it now includes the marketing of religion, of politics, and of individuals marketing themselves. The awareness and concepts that seep into a person from infancy until reaching a level of skill, expertise, or scientific attainment mix with ambition and the aspiration for self-realization, leading the individual to search for opportunities or to create them; often one observes individual
Linear discriminant analysis and logistic regression are the most widely used multivariate statistical methods for analyzing data with categorical outcome variables. Both are appropriate for developing linear classification models. Linear discriminant analysis requires that the explanatory variables follow a multivariate normal distribution, while logistic regression makes no assumptions about the distribution of the explanatory data. Hence, logistic regression is assumed to be the more flexible and more robust method when these assumptions are violated.
In this paper, we focus on the comparison between three forms of classification for data belonging
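A minimal sketch of such a comparison on simulated data, using scikit-learn (an assumed library choice; the dataset parameters are illustrative, not the paper's data):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Simulated data with a binary (categorical) outcome variable.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit both linear classifiers and compare held-out accuracy.
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc_lda = lda.score(X_te, y_te)
acc_logit = logit.score(X_te, y_te)
```

On data that actually satisfies the multivariate-normality assumption, the two methods tend to give similar decision boundaries; the gap widens when that assumption fails.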
This paper provides an attempt at modeling the rate of penetration (ROP) for an Iraqi oil field with the aid of mud logging data. Data from the Umm Radhuma formation were selected for this modeling; they include weight on bit, rotary speed, flow rate, and mud density. A statistical approach was applied to these data to improve rate of penetration modeling. As a result, an empirical linear ROP model was developed with good fitness when compared with actual data. A nonlinear regression analysis of different forms was also attempted, and the results showed that the power model has good predicting capability relative to the other forms.
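A power-model fit of this kind can be obtained by a log transform followed by linear least squares, since ROP = a · WOB^b becomes log(ROP) = log(a) + b · log(WOB). A minimal single-variable sketch in Python; the weight-on-bit values and ROP readings below are invented for illustration and are not field data.

```python
import numpy as np

# Hypothetical drilling records: weight on bit (klb) and measured ROP (ft/hr).
wob = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
rop = np.array([12.0, 17.0, 22.0, 26.0, 30.0, 33.0])

# Power model ROP = a * WOB^b, linearized by taking logs:
# log(ROP) = log(a) + b * log(WOB), fit with ordinary least squares.
b, log_a = np.polyfit(np.log(wob), np.log(rop), 1)
a = np.exp(log_a)
pred = a * wob ** b  # back-transformed predictions
```

In practice the multivariate version would regress log(ROP) on the logs of all the logged parameters (weight on bit, rotary speed, flow rate, mud density) at once.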
Today, problems of spatial data integration have been further complicated by the rapid development of communication technologies and the increasing number of data sources available on the World Wide Web. Web-based geospatial data sources can thus be managed by different communities, and the data themselves can vary in quality, coverage, and purpose. Integrating such multiple geospatial datasets remains a challenge for geospatial data consumers. This paper concentrates on the integration of geometric and classification schemes for official data, such as Ordnance Survey (OS) national mapping data, with volunteered geographic information (VGI), such as the data derived from the OpenStreetMap (OSM) project. Useful descriptions o