The development of information systems in recent years has produced various methods of gathering information to evaluate IS performance. The most common approach used to collect such information is the survey. This method, however, suffers from one major drawback: decision makers spend considerable time transferring data from survey sheets into analytical programs. This paper therefore proposes a method called 'survey algorithm based on the R programming language', or SABR, for transforming data from survey sheets inside the R environment by treating the arrangement of the data as a relational format. R and the relational data format provide an excellent opportunity to manage and analyse the accumulated data. Moreover, a survey system
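The abstract works in R; as a language-neutral illustration of the idea of treating survey-sheet data as a relational format, the sketch below (plain Python, hypothetical column names) melts a wide sheet, one row per respondent and one column per question, into long-form (respondent, question, answer) tuples that relational tools can group and join on.

```python
import csv
import io

# Hypothetical survey sheet: one row per respondent, one column per question.
raw = """respondent,q1,q2
R1,4,5
R2,3,2
"""

def to_relational(text):
    """Melt a wide survey sheet into (respondent, question, answer) tuples."""
    rows = list(csv.DictReader(io.StringIO(text)))
    long_form = []
    for row in rows:
        rid = row.pop("respondent")
        for question, answer in row.items():  # field order is preserved
            long_form.append((rid, question, int(answer)))
    return long_form

print(to_relational(raw))
```

In this long form, each tuple is one observation, so per-question summaries become a simple group-by rather than column-wise bookkeeping.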
This research compared two methods for estimating the four parameters of the compound exponential Weibull-Poisson distribution: the maximum likelihood method and the Downhill Simplex algorithm. Two data cases were considered: the first assumed the original (uncontaminated) data, while the second assumed data contamination. Simulation experiments were conducted for different sample sizes, initial parameter values, and levels of contamination. The Downhill Simplex algorithm was found to be the best method for estimating the parameters, the probability function, and the reliability function of the compound distribution for both natural and contaminated data.
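The Downhill Simplex (Nelder-Mead) algorithm minimizes a function using only function evaluations, which suits likelihood surfaces without tractable derivatives. The sketch below is a minimal, simplified variant applied to a toy maximum-likelihood problem (a one-parameter exponential fit on made-up data), not the paper's four-parameter compound distribution.

```python
import math

def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal Downhill Simplex: reflect, expand, contract, or shrink."""
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [centroid[j] + alpha * (centroid[j] - worst[j]) for j in range(n)]
        if f(refl) < f(best):
            # Reflection improved on the best vertex: try expanding further.
            exp = [centroid[j] + gamma * (refl[j] - centroid[j]) for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            # Contract toward the centroid; if even that fails, shrink.
            cont = [centroid[j] + rho * (worst[j] - centroid[j]) for j in range(n)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:
                simplex = [best] + [
                    [best[j] + sigma * (p[j] - best[j]) for j in range(n)]
                    for p in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

# Toy MLE: fit an exponential rate by minimizing the negative log-likelihood.
data = [0.8, 1.1, 0.4, 2.0, 0.7, 1.5, 0.9, 1.2]

def nll(params):
    lam = params[0]
    if lam <= 0:
        return float("inf")
    return -sum(math.log(lam) - lam * x for x in data)

est = nelder_mead(nll, [0.1])
```

For the exponential, the analytic MLE is n divided by the sample sum, which gives a direct check on the simplex result.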
Wireless sensor applications are susceptible to energy constraints. Most of the energy is consumed in communication between wireless nodes. Clustering and data aggregation are the two most widely used strategies for reducing energy usage and increasing the lifetime of wireless sensor networks. In target tracking applications, a large amount of redundant data is produced regularly, so deploying effective data aggregation schemes is vital for eliminating this redundancy. This work conducts a comparative study of research approaches that employ clustering techniques for efficiently aggregating data in target tracking applications, as the selection of an appropriate clustering algorithm can positively affect the data aggregation
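To make the energy argument concrete, the sketch below (hypothetical node and cluster IDs, made-up readings) shows in-network aggregation: each cluster head averages its members' readings, so five raw transmissions toward the sink shrink to two aggregated ones.

```python
from collections import defaultdict

# Hypothetical readings: (node_id, cluster_id, temperature). In a clustered
# WSN, each node reports to its cluster head, which forwards one aggregated
# value instead of every raw (often redundant) reading.
readings = [
    ("n1", "c1", 21.0), ("n2", "c1", 21.4), ("n3", "c1", 21.2),
    ("n4", "c2", 25.0), ("n5", "c2", 25.2),
]

def aggregate_by_cluster(readings):
    """Average the readings per cluster: 5 transmissions shrink to 2."""
    clusters = defaultdict(list)
    for _, cluster, value in readings:
        clusters[cluster].append(value)
    return {c: sum(v) / len(v) for c, v in clusters.items()}

summary = aggregate_by_cluster(readings)
```

The aggregation function here is a mean, but min, max, or a tracked-target position estimate fit the same cluster-head pattern.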
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have only small or inadequate datasets for training DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is typically fed a significant amount of labeled data from which to automatically learn representations; ultimately, more data generally yields a better DL model, although performance is also application dependent. This issue is the main barrier for
The data preprocessing step is important in web usage mining because of the nature of log data, which are heterogeneous, unstructured, and noisy. Given the scalability and efficiency requirements of pattern-discovery algorithms, a preprocessing step must be applied first. In this study, the sequential methodologies used to preprocess web server logs are comprehensively evaluated, with an emphasis on sub-phases such as data cleansing, user identification, and session identification.
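The three sub-phases can be sketched on simplified records (a hypothetical (ip, timestamp, url) triple per request; real server logs carry more fields): cleansing drops static-asset noise, users are grouped by IP (a common heuristic), and sessions are split on a 30-minute inactivity gap.

```python
from datetime import datetime, timedelta

# Hypothetical simplified log records: (ip, timestamp, url).
log = [
    ("10.0.0.1", "2024-01-01 10:00:00", "/index.html"),
    ("10.0.0.1", "2024-01-01 10:05:00", "/style.css"),   # noise: static asset
    ("10.0.0.1", "2024-01-01 10:10:00", "/products"),
    ("10.0.0.1", "2024-01-01 11:30:00", "/checkout"),    # >30 min gap: new session
    ("10.0.0.2", "2024-01-01 10:02:00", "/index.html"),
]

def preprocess(log, timeout=timedelta(minutes=30)):
    # 1. Data cleansing: drop requests for non-page resources.
    cleaned = [r for r in log if not r[2].endswith((".css", ".js", ".png"))]
    # 2. User identification: group requests by IP address.
    users = {}
    for ip, ts, url in cleaned:
        users.setdefault(ip, []).append((datetime.fromisoformat(ts), url))
    # 3. Session identification: split each user's requests on an idle gap.
    sessions = []
    for ip, requests in users.items():
        requests.sort()
        current = [requests[0]]
        for prev, nxt in zip(requests, requests[1:]):
            if nxt[0] - prev[0] > timeout:
                sessions.append((ip, current))
                current = []
            current.append(nxt)
        sessions.append((ip, current))
    return sessions

sessions = preprocess(log)
```

On this input the five raw records reduce to three sessions: two for the first IP (split by the 80-minute gap) and one for the second.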
Gypseous soils are distributed across many regions of the world, including Iraq, where they cover more than 31% of the country's surface area. These soils, often with high gypsum content, cause serious problems for buildings and strategic projects due to dissolution and leaching of the gypsum by water flowing through the soil mass. For this study, gypseous soil with 42% gypsum content was brought from Bahr Al-Najaf, Al-Najaf Governorate, in central Iraq, and a model pile was embedded in it. Axial compression load tests were carried out on the model pile at an initial degree of saturation of 7%, before and after soil saturation
Reducing the drag force has become one of the most important concerns in the automotive industry. This study concentrated on reducing drag through external passive flow-control modifications such as vortex generators, rear underbody diffuser slices, and a rear wing spoiler. The study was performed at inlet velocities V = 10, 20, 30 and 40 m/s, which correspond to incompressible car-model-length Reynolds numbers Re = 2.62×10^5, 5.23×10^5, 7.85×10^5 and 10.46×10^5, respectively, and their effect on the drag force was examined. We also present a theoretical finite volume method (FVM) study of solving
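The quoted Reynolds numbers can be reproduced from Re = V·L/ν. Assuming air at roughly 20 °C (ν ≈ 1.5×10⁻⁵ m²/s, an assumption not stated in the abstract), the implied car-model length is about 0.39 m:

```python
# Reproduce the model-length Reynolds numbers from Re = V * L / nu.
nu = 1.5e-5   # kinematic viscosity of air, m^2/s (assumed value)
L = 0.3927    # model length, m, implied by Re = 2.62e5 at V = 10 m/s
for V in (10, 20, 30, 40):
    Re = V * L / nu
    print(f"V = {V:2d} m/s  ->  Re = {Re:.3e}")
```

Since Re scales linearly with V, the four quoted values are simply integer multiples of the Re at 10 m/s, which matches the list in the abstract.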
The objective of this paper is to study the stability of an SIS epidemic model involving treatment. Two types of such eco-epidemiological models are introduced and analyzed. Boundedness of the system is established, the local and global dynamical behaviors are analyzed, and the conditions for persistence of the models are derived.
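The abstract does not state the model equations. For context, a common SIS formulation with a treatment term T(I) (an illustrative textbook form, not necessarily the one analyzed in the paper) is:

```latex
\begin{aligned}
\frac{dS}{dt} &= \Lambda - \beta S I - \mu S + \gamma I + T(I),\\
\frac{dI}{dt} &= \beta S I - (\mu + \gamma) I - T(I),
\end{aligned}
```

where Λ is the recruitment rate, β the transmission rate, μ the natural death rate, γ the natural recovery rate, and T(I) the treatment function (for example a saturated form T(I) = rI/(1 + kI)). Treated and recovered individuals return to the susceptible class, which is what makes the model SIS rather than SIR; adding the two equations gives d(S+I)/dt = Λ − μ(S+I), from which boundedness of the total population follows.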