Region-based association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, rather than of individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. To tackle the problem of this sparse distribution, a two-stage approach was proposed in the literature: in the first stage, haplotypes are computationally inferred from genotypes, followed by a haplotype coclassification; in the second stage, the association analysis is performed on the inferred haplotype groups. If a haplotype is unevenly distributed between the case and control samples, it is labeled as a risk haplotype. Unfortunately, the in-silico reconstruction of haplotypes may produce a proportion of false haplotypes, which hampers the detection of rare but true haplotypes. Here, to address this issue, we propose an alternative approach: in Stage 1, we cluster genotypes instead of inferred haplotypes and estimate the risk genotypes based on a finite mixture model; in Stage 2, we infer risk haplotypes from the risk genotypes identified in the previous stage. To estimate the finite mixture model, we propose an EM algorithm with a novel data partition-based initialization. The performance of the proposed procedure is assessed by simulation studies and a real data analysis. Compared to the existing multiple Z-test procedure, we find that the power of genome-wide association studies can be increased by using the proposed procedure.
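As a minimal sketch of the EM-with-partition-initialization idea, the snippet below fits a generic two-component Gaussian mixture, initializing the components from a deterministic split of the sorted data rather than a random start. This is an illustrative stand-in, not the authors' genotype clustering model; the mixture family and the partition rule here are assumptions.

```python
# Generic two-component mixture EM with a partition-based initialization
# (split the sorted data in half); illustrative only, not the paper's model.
import numpy as np

def em_two_component(x, n_iter=100, tol=1e-8):
    # Partition-based initialization: each half of the sorted data
    # supplies the starting mean/variance of one component.
    x = np.sort(np.asarray(x, dtype=float))
    halves = np.array_split(x, 2)
    mu = np.array([h.mean() for h in halves])
    var = np.array([h.var() + 1e-6 for h in halves])
    pi = np.array([0.5, 0.5])

    loglik_old = -np.inf
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        weighted = pi * dens
        resp = weighted / weighted.sum(axis=1, keepdims=True)

        # M-step: update mixing weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6

        loglik = np.log(weighted.sum(axis=1)).sum()
        if loglik - loglik_old < tol:
            break
        loglik_old = loglik
    return pi, mu, var

# Example: a "background" group mixed with a smaller shifted group.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 100)])
print(em_two_component(x))
```

The deterministic split makes the starting point reproducible, which is one practical motivation for partition-based initialization over random restarts.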
In this paper, the method of singular value decomposition is used to estimate the ridge parameter of the ridge regression estimator, which is an alternative to the ordinary least squares estimator when the general linear regression model suffers from near multicollinearity.
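To make the connection concrete, the sketch below computes ridge regression through the SVD of the design matrix, using a Hoerl–Kennard–Baldwin-style ridge parameter as one common choice; the paper's exact estimator of the ridge parameter may differ.

```python
# Ridge regression via the SVD X = U S V^T: the ridge solution shrinks
# each SVD component by s/(s^2 + k). The HKB-style k below is one common
# data-driven choice, used here for illustration.
import numpy as np

def ridge_via_svd(X, y):
    n, p = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    # OLS via SVD: beta_ols = V diag(1/s) U^T y (assumes full column rank).
    beta_ols = Vt.T @ ((U.T @ y) / s)

    # Residual variance estimate from the OLS fit.
    resid = y - X @ beta_ols
    sigma2 = resid @ resid / (n - p)

    # HKB-style ridge parameter: k = p * sigma^2 / ||beta_ols||^2.
    k = p * sigma2 / (beta_ols @ beta_ols)

    # Ridge solution from the same SVD factors.
    beta_ridge = Vt.T @ (s / (s**2 + k) * (U.T @ y))
    return k, beta_ridge

# Example with nearly collinear predictors.
rng = np.random.default_rng(1)
z = rng.normal(size=(100, 1))
X = np.hstack([z + 0.01 * rng.normal(size=(100, 1)) for _ in range(3)])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=100)
print(ridge_via_svd(X, y))
```

Working in the SVD basis avoids explicitly inverting the near-singular matrix X'X, which is precisely where near multicollinearity causes numerical trouble.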
A restrictive relative clause (RRC hereafter), also known as a defining relative clause, gives essential information about the noun that comes before it: without this clause the sentence would not make much sense. An RRC can be introduced by that, which, whose, who, or whom. Givon (1993, 1995), Fox (1987), and Fox and Thompson (1990) state that an RRC serves two main functions: grounding and description. When an RRC links the current referent to the preceding utterance in the discourse, it performs a grounding function; and when the information coded in an RRC is associated with the prior proposition frame, the RRC performs a proposition-linking grounding function. Furthermore, when an RRC is not used to ground a new di…
This study focuses on the modeling of manufactured dampers used in steel buildings. The main aim of manufactured dampers is to protect steel buildings from the damaging effects of earthquakes by introducing extra damping in addition to the traditional damping.
Only Pure Manufactured Dampers have been considered in this study. Viscous modeling of damping is generally preferred in structural engineering because it leads to a linear model; it has therefore been used in this study to simulate the behavior of the Pure Manufactured Damper.
Once the structural parameters of a manufactured damper (its stiffness and its damping) are defined, it can be used as a structural element that can be added to a mathematica…
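As a minimal sketch of how an added viscous damper enters such a linear model, the snippet below integrates a single-degree-of-freedom system m·x'' + (c + c_d)·x' + (k + k_d)·x = -m·a_g(t), where c_d and k_d are the damper's damping and stiffness. All parameter values and the ground motion are illustrative assumptions, not data from the study.

```python
# SDOF response with an added viscous damper, integrated with the
# average-acceleration Newmark method (beta=1/4, gamma=1/2).
# Parameter values are illustrative, not from the study.
import numpy as np

def sdof_peak_drift(m, c, k, c_d, k_d, a_g, dt):
    """Return the peak displacement of the damped SDOF system."""
    c_tot, k_tot = c + c_d, k + k_d
    x = v = 0.0
    a = (-m * a_g[0] - c_tot * v - k_tot * x) / m
    peak = 0.0
    k_eff = k_tot + 2 * c_tot / dt + 4 * m / dt**2
    for i in range(1, len(a_g)):
        # Effective load for the Newmark step, then update x, v, a.
        p_eff = (-m * a_g[i]
                 + m * (4 * x / dt**2 + 4 * v / dt + a)
                 + c_tot * (2 * x / dt + v))
        x_new = p_eff / k_eff
        v_new = 2 * (x_new - x) / dt - v
        a = 4 * (x_new - x) / dt**2 - 4 * v / dt - a
        x, v = x_new, v_new
        peak = max(peak, abs(x))
    return peak

# Example: compare peak drift with and without the added damper under
# a synthetic decaying-sinusoid ground acceleration.
dt = 0.01
t = np.arange(0, 10, dt)
a_g = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)
m, c, k = 1.0e4, 2.0e3, 4.0e5                       # bare frame
print(sdof_peak_drift(m, c, k, 0.0, 0.0, a_g, dt))   # no damper
print(sdof_peak_drift(m, c, k, 8.0e3, 0.0, a_g, dt)) # with viscous damper
```

Because the damper only adds terms to the damping and stiffness matrices, the model stays linear, which is the practical advantage the abstract points to.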
This study aims to identify the reality of alternative assessment for teachers of the first cycle of basic education in the Sultanate of Oman, with respect to the degree of teachers' use of alternative assessment strategies, their level of self-efficacy for alternative assessment strategies, and their attitude towards alternative assessment, and the relationship of these with other variables. To achieve the aims of the study, a descriptive research approach was utilized. A 5-point self-rated questionnaire was developed, consisting of three sections: actual use of alternative assessment strategies (21 items), self-efficacy for alternative assessment strategies (21 items), and attitude towards alternative assessment (27 items). The psychometric proper…
The research aims to explain the role of big data analytics in measuring quality costs in the Iraqi Company for Seed Production. The research problem was diagnosed as the weakness of the approved method for measuring quality costs and the weakness of the traditional systems of data analysis. In the theoretical aspect, the researcher relied on collecting sources and previous studies; in the practical aspect, the applied analytical approach was adopted, in which a set of financial analyses were applied to the measurement of quality costs, demonstrating the role of data analytics in practice. The research concluded with a set of conc…
Financial markets are one of the sectors whose data are characterized by continuous movement most of the time and constant change, so their trends are difficult to predict. This creates a need for methods, tools, and techniques for decision making, and it pushes investors and analysts in the financial markets to use various methods to predict the direction of market movement. To reach the goal of decision making in different investments, the support vector machine algorithm and the CART regression tree algorithm are used to classify stock data in order to determine…
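The sketch below shows the two classifiers named above (a support vector machine and a CART decision tree) applied to classifying next-day stock direction from lagged-return features; the price series and features here are synthetic stand-ins, not the study's market data.

```python
# SVM and CART tree classifying the direction of the next return from the
# three previous returns; synthetic random-walk prices stand in for data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(0.05, 1.0, 1000)) + 100
returns = np.diff(prices) / prices[:-1]

# Features: the three previous returns; label: sign of the next return.
X = np.column_stack([returns[i:len(returns) - 3 + i] for i in range(3)])
y = (returns[3:] > 0).astype(int)

# Keep time order (no shuffling), as is usual for market data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=False)

for name, model in [("SVM", SVC(kernel="rbf")),
                    ("CART", DecisionTreeClassifier(max_depth=4))]:
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```

On a pure random walk both models should hover near chance accuracy; any real predictive signal in the study's data would show up as a gap above that baseline.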
This study investigates the impact of spatial resolution enhancement on supervised classification accuracy using Landsat 9 satellite imagery, achieved through pan-sharpening techniques leveraging Sentinel-2 data. Various methods were employed to synthesize a panchromatic (PAN) band from Sentinel-2 data, including dimension reduction algorithms and weighted averages based on correlation coefficients and standard deviation. Three pan-sharpening algorithms (Gram-Schmidt, Principal Components Analysis, Nearest Neighbour Diffusion) were employed, and their efficacy was assessed using seven fidelity criteria. Classification tasks were performed utilizing Support Vector Machine and Maximum Likelihood algorithms. Results reveal that specifi…
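As a minimal sketch of one PAN-synthesis idea mentioned above, the snippet below builds a synthetic panchromatic band as a weighted average of Sentinel-2 10 m bands, with weights taken from each band's correlation with a reference signal (here, the simple band mean). The weighting rule is one plausible reading for illustration, not the study's exact formulation.

```python
# Synthetic PAN band as a correlation-weighted average of co-registered
# multispectral bands; the weighting scheme is an illustrative assumption.
import numpy as np

def synthesize_pan(bands):
    """bands: array of shape (n_bands, rows, cols), co-registered."""
    flat = bands.reshape(len(bands), -1).astype(float)
    reference = flat.mean(axis=0)
    # Correlation of each band with the reference, normalized to weights.
    weights = np.array([np.corrcoef(b, reference)[0, 1] for b in flat])
    weights = np.clip(weights, 0, None)
    weights /= weights.sum()
    return np.tensordot(weights, bands, axes=1)

# Example with random stand-ins for Sentinel-2 blue/green/red/NIR tiles.
rng = np.random.default_rng(3)
b2, b3, b4, b8 = (rng.random((64, 64)) for _ in range(4))
pan = synthesize_pan(np.stack([b2, b3, b4, b8]))
print(pan.shape)  # (64, 64)
```

The resulting band would then feed a standard pan-sharpening algorithm such as Gram-Schmidt or PCA in place of a native panchromatic channel.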
Natural Language Processing (NLP) has seen significant advances in handling the increasing complexity of how humans interact with and process information, transitioning from task-specific architectures to generalized frameworks applicable across multiple tasks. Despite their success, challenges persist in specialized domains such as translation, where instruction tuning may prioritize fluency over accuracy. Against this backdrop, the present study conducts a comparative evaluation of ChatGPT-Plus and DeepSeek (R1) on a high-fidelity bilingual retrieval-and-translation task. A single standardized prompt directs each model to access the Arabic-language news section of the College of Medicine, University of Baghdad, retrieve the three most r…