Region-based association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, rather than individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. To tackle the problem of sparse distributions, a two-stage approach has been proposed in the literature: in the first stage, haplotypes are computationally inferred from genotypes and then grouped by co-classification; in the second stage, the association analysis is performed on the inferred haplotype groups. If a haplotype is unevenly distributed between the case and control samples, it is labeled a risk haplotype. Unfortunately, the in-silico reconstruction of haplotypes may produce a proportion of false haplotypes that hamper the detection of rare but true haplotypes. To address this issue, we propose an alternative approach: in Stage 1, we cluster genotypes instead of inferred haplotypes and estimate the risk genotypes based on a finite mixture model; in Stage 2, we infer risk haplotypes from the risk genotypes identified in the previous stage. To estimate the finite mixture model, we propose an EM algorithm with a novel data-partition-based initialization. The performance of the proposed procedure is assessed by simulation studies and a real data analysis. Compared with the existing multiple Z-test procedure, the proposed procedure increases the power of genome-wide association studies.
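As a minimal illustration of the Stage 1 machinery, the sketch below fits a two-component finite mixture by EM with a partition-based initialization (splitting the sample at its median). The Gaussian components, the synthetic data, and the median split are simplifying assumptions for exposition; the paper's mixture over genotype clusters and its initialization scheme are not reproduced here.

```python
# Hedged sketch: EM for a two-component Gaussian mixture with a simple
# data-partition-based initialization (split at the median). The paper's
# model mixes genotype components; Gaussians are an illustrative stand-in.
import numpy as np

def em_mixture(x, n_iter=100, tol=1e-8):
    # Partition-based initialization: split the sample at its median and
    # use each half's statistics as the starting component parameters.
    lo, hi = x[x <= np.median(x)], x[x > np.median(x)]
    mu = np.array([lo.mean(), hi.mean()])
    sd = np.array([lo.std() + 1e-6, hi.std() + 1e-6])
    pi = np.array([0.5, 0.5])
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
                  / (sd * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of mixing weights, means, and spreads.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        ll = np.log(dens.sum(axis=1)).sum()
        if ll - ll_old < tol:       # stop once the log-likelihood stabilizes
            break
        ll_old = ll
    return pi, mu, sd

x = np.concatenate([np.random.normal(0, 1, 300), np.random.normal(3, 1, 100)])
print(em_mixture(x))
```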
Objective: To study the effect of risk factors such as age, smoking, and diabetes mellitus (DM) among patients with certain cardiovascular diseases (angina pectoris and myocardial infarction), and to assess C-reactive protein (CRP) in the sera of these patients.
Methodology: The study was carried out on 100 subjects hospitalized in the Iraqi Center of Heart Diseases in Baghdad who were suffering from myocardial infarction (MI) (16), angina pectoris (AP) (79), or both (5), over the period from September 2009 to June 2010. The results of patients were compared with those of 30 healthy, age-matched individuals as a control group. Data were obtained from patients who were alr
Nonlinear time series analysis is among the most challenging problems, especially for the nonlinear autoregressive model with exogenous variables (NARX), where model identification and the determination of the correct orders are the central difficulties. In this paper, we propose a spline-based estimation method for model identification and use three criteria for determining the correct orders. The proposed method estimates additive splines for model identification, and the order determination relies on the additive structure to avoid the curse of dimensionality. The proposed method is nonparametric, and the simulation results give a
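As a rough sketch of this identification pipeline, the code below fits candidate NARX orders with an additive truncated-power spline basis and selects the pair that minimizes an information criterion. The cubic basis, the AIC used for selection, and the candidate order grid are illustrative assumptions; the excerpt does not name the paper's three criteria.

```python
# Hedged sketch of additive-spline NARX identification. The spline basis,
# candidate orders, and the AIC criterion are illustrative assumptions.
import numpy as np

def spline_basis(v, knots):
    # Cubic truncated-power basis: 1, v, v^2, v^3, (v - k)_+^3 per knot.
    cols = [np.ones_like(v), v, v**2, v**3]
    cols += [np.clip(v - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def narx_criterion(y, x, p, q, n_knots=3):
    # Additive model: each lag of y and x gets its own spline term,
    # all fitted jointly by ordinary least squares.
    t0 = max(p, q)
    lags = [y[t0 - j:len(y) - j] for j in range(1, p + 1)]
    lags += [x[t0 - k:len(x) - k] for k in range(1, q + 1)]
    design = np.column_stack([
        spline_basis(v, np.quantile(v, np.linspace(0.2, 0.8, n_knots)))
        for v in lags
    ])
    target = y[t0:]
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    rss = ((target - design @ beta) ** 2).sum()
    n, k = len(target), design.shape[1]
    return n * np.log(rss / n) + 2 * k   # AIC

# Order determination: pick (p, q) with the smallest criterion value.
rng = np.random.default_rng(0)
x = rng.normal(size=400)
y = np.zeros(400)
for t in range(2, 400):
    y[t] = 0.5 * np.sin(y[t - 1]) + 0.3 * x[t - 1] ** 2 + 0.1 * rng.normal()
best = min(((p, q) for p in (1, 2, 3) for q in (1, 2)),
           key=lambda pq: narx_criterion(y, x, *pq))
print("selected orders (p, q):", best)
```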
Lost circulation, or loss of drilling fluid, is one of the oldest and most important problems in the oil and gas industry; it has troubled drilling operations since the industry began and can force the well to be closed and the drilling process to stop. Drilling muds are relatively expensive, especially oil-based muds or muds containing special additives, so wasting and losing them is not economical. Treating drilling fluid losses is also somewhat costly, both because of the time lost and because of the high cost of the materials used in treatment, such as heavy materials, cement, and others. The best way to deal with drilling fluid losses
In many scientific fields, Bayesian models are commonly used in recent research. This paper presents a new Bayesian model for parameter estimation and forecasting using the Gibbs sampler algorithm. Posterior distributions are generated using the inverse-gamma distribution and the multivariate normal distribution as prior distributions. The new method was used to investigate and summarize the posterior distribution, and the theory and derivation of the posterior distribution are explained in detail. The proposed approach is applied to three simulated datasets of 100, 300, and 500 observations, and the procedure is extended to a real dataset, the rock intensity dataset. The actual dataset is collecte
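Below is a minimal sketch of the kind of Gibbs sampler described here, for a Bayesian linear regression with a multivariate normal prior on the coefficients and an inverse-gamma prior on the noise variance; the specific model, hyperparameters, and data are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch: Gibbs sampler for Bayesian linear regression with a
# multivariate-normal prior on beta and an inverse-gamma prior on sigma^2.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.8, size=n)

# Priors: beta ~ N(m0, V0), sigma^2 ~ InvGamma(a0, b0). Vague choices here.
m0, V0_inv = np.zeros(p), np.eye(p) / 100.0
a0, b0 = 2.0, 1.0

beta, sigma2 = np.zeros(p), 1.0
draws = []
for it in range(3000):
    # Full conditional of beta: multivariate normal.
    Vn = np.linalg.inv(V0_inv + X.T @ X / sigma2)
    mn = Vn @ (V0_inv @ m0 + X.T @ y / sigma2)
    beta = rng.multivariate_normal(mn, Vn)
    # Full conditional of sigma^2: inverse gamma (draw via 1/Gamma).
    resid = y - X @ beta
    an = a0 + n / 2.0
    bn = b0 + 0.5 * resid @ resid
    sigma2 = 1.0 / rng.gamma(an, 1.0 / bn)
    if it >= 1000:                      # discard burn-in
        draws.append(np.append(beta, sigma2))

draws = np.array(draws)
print("posterior means:", draws.mean(axis=0))   # ~ [1, 2, -0.5, 0.64]
```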
Asphalt binder is a thermoplastic material that behaves as an elastic solid at lower service temperatures or under fast loading rates, and as a viscous liquid at high temperatures or slow loading rates. This dual behavior creates a need to assess the mechanical properties of asphalt concrete at the anticipated service temperatures to reduce stress cracking, which occurs at lower temperatures, as well as fatigue and plastic deformation (rutting) at higher temperatures. In this study, an attempt was made to assess the effect of temperature on the mechanical characteristics of asphalt concrete mixes. A total of 132 asphalt concrete samples were prepared using two asphalt cement grades (40-50) a
The world is witnessing major developments in the opportunities for organizations and administrative units to adopt information and communication technology in administrative work, given its importance in accomplishing work with greater efficiency and speed and in facilitating communication with individuals and companies through various Internet-based means. This research therefore studies electronic systems designed and adopted for creating or building a database for archiving data, which is the principal method used by organizations and administrative units in developed countries. This system converts documents, and manual processes and t
This article explores the process of VGI collection by assessing the relative usability and accuracy of a range of data collection methods (smartphone GPS, tablet, and analogue maps) among different demographic and educational groups and in different geographical contexts. Assessments are made of positional accuracy, completeness, and data collectors' experiences, with reference to the official cadastral data and administration system in a case-study region of Iraq. Ownership data were validated by crowd agreement. The results show that successful VGI projects need access to a variety of data collection methods.
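As a small illustration of the accuracy assessment described above, the sketch below computes horizontal RMSE and completeness for collected points against reference cadastral coordinates; the chosen metrics and the sample coordinates are assumptions for exposition, not the article's data.

```python
# Hedged sketch: positional-accuracy assessment of VGI points against
# reference cadastral coordinates. Metrics and coordinates are illustrative.
import numpy as np

def horizontal_rmse(collected, reference):
    # Root-mean-square of 2D point-to-point distances (same CRS, metres).
    d = np.linalg.norm(collected - reference, axis=1)
    return np.sqrt((d ** 2).mean())

def completeness(n_collected, n_reference):
    # Share of reference features actually captured by the VGI campaign.
    return n_collected / n_reference

reference = np.array([[445120.0, 3685240.0], [445180.0, 3685210.0]])
smartphone = reference + np.array([[3.2, -2.1], [-4.0, 1.5]])   # GPS noise
print("RMSE (m):", round(horizontal_rmse(smartphone, reference), 2))
print("completeness:", completeness(48, 50))
```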
Due to the easy access to satellite images, Google Earth (GE) imagery has become more popular than other online virtual globes. However, the popularity of GE is not an indication of its accuracy. A considerable amount of literature has been published on evaluating the positional accuracy of GE data; however, few studies have investigated improving GE accuracy. In this paper, a practical method for enhancing the horizontal positional accuracy of GE is suggested by establishing ten reference points, in the University of Baghdad main campus, using different Global Navigation Satellite System (GNSS) observation techniques: Rapid Static, Post-Processing Kinematic, and Network. Then, the GE image for the study
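One plausible way to realize such an enhancement, sketched below under stated assumptions, is a least-squares 2D conformal (Helmert) transformation that maps GE-derived coordinates onto the GNSS reference points; the abstract does not specify the paper's adjustment model, and the coordinates here are invented for illustration.

```python
# Hedged sketch: least-squares 2D conformal (Helmert) transformation fitting
# GE coordinates to GNSS-surveyed reference points. One common adjustment
# model; not necessarily the paper's, and the coordinates are made up.
import numpy as np

def fit_conformal(ge, ref):
    # Solve [x'; y'] = [a -b; b a][x; y] + [tx; ty] for (a, b, tx, ty).
    n = len(ge)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([ge[:, 0], -ge[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([ge[:, 1],  ge[:, 0], np.zeros(n), np.ones(n)])
    params, *_ = np.linalg.lstsq(A, ref.reshape(-1), rcond=None)
    return params  # a, b, tx, ty

def apply_conformal(params, pts):
    a, b, tx, ty = params
    R = np.array([[a, -b], [b, a]])
    return pts @ R.T + np.array([tx, ty])

ge  = np.array([[44.37512, 33.27301], [44.37690, 33.27288], [44.37555, 33.27410]])
ref = ge + np.array([[1.2e-5, -0.9e-5]] * 3)          # pretend GNSS truth
p = fit_conformal(ge, ref)
print(apply_conformal(p, ge) - ref)                   # residuals ~ 0
```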