It has increasingly been recognised that future developments in geospatial data handling will centre on geospatial data on the web: Volunteered Geographic Information (VGI). The evaluation of VGI data quality, including positional and shape similarity, has become a recurrent subject in the scientific literature over the last ten years. The OpenStreetMap (OSM) project is the most popular of the leading VGI platforms: an online geospatial database that produces and supplies free, editable geospatial datasets for worldwide use. The goal of this paper is to present a comprehensive overview of the quality assurance of OSM data. In addition, the credibility of open-source geospatial data is discussed, highlighting the difficulties and challenges of VGI data quality assessment. The conclusion is that the quality of the OSM dataset is quite difficult to control; it therefore makes sense to use OSM data for applications that do not require high-quality spatial datasets.
Introduction
Financial and banking organizations deal primarily with customers, which requires them to collect huge amounts of data about those customers. Together with the data that arrives daily, this leaves them facing large piles of data that demand great effort to handle well and to exploit in the organization's interest.
Handling such data manually, without the use of modern techniques, keeps the organization away from …
The need for means of transmitting data in a confidential and secure manner has become one of the most important subjects in the world of communications. The search therefore began for techniques that would achieve not only the confidentiality of information sent through communication channels, but also high transmission speed and minimal energy consumption. Thus, encryption using DNA was developed, which fulfils all of these requirements [1]. The proposed system achieves high protection of data sent over the Internet by applying the following objectives: 1. The message is encrypted using one of the DNA methods with a key generated by the Diffie-Hellman Ephemeral algorithm; part of this key is secret, and this makes the pro…
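The ephemeral Diffie-Hellman key agreement mentioned in the abstract can be sketched as follows. This is a toy illustration only: the group parameters below are chosen for readability, not security, and a real deployment would use standardized groups (e.g. the RFC 3526 MODP groups).

```python
# Toy sketch of Ephemeral Diffie-Hellman key agreement: each side generates
# a fresh private/public pair per session, and both derive the same shared
# secret. Parameters here are illustrative, NOT secure, and are assumptions
# of this sketch rather than values from the paper.
import secrets

p = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a small prime (far too small for real use)
g = 5                   # generator, also illustrative

def ephemeral_keypair():
    """Generate a fresh (private, public) pair for a single session."""
    priv = secrets.randbelow(p - 3) + 2          # private exponent in [2, p-2]
    return priv, pow(g, priv, p)                 # public value g^priv mod p

# Each party creates ephemeral keys for this session only.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Each side combines its private key with the other's public value;
# (g^a)^b = (g^b)^a mod p, so both arrive at the same shared secret.
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b
```

Because the key pairs are regenerated per session ("ephemeral"), compromise of one session's secret does not expose earlier sessions, which is presumably why the abstract pairs this scheme with the DNA-based cipher.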
ABSTRACT
In this paper, some semi-parametric spatial models were estimated: the semi-parametric spatial error model (SPSEM), which suffers from the problem of spatial error dependence, and the semi-parametric spatial autoregressive model (SPSAR). The method of maximum likelihood was used to estimate the spatial error parameter (λ) in the SPSEM model and the spatial dependence parameter (ρ) in the SPSAR model, while non-parametric methods were used to estimate the smoothing function m(x) for these two models. These non-parametric methods include the local linear estimator (LLE), which requires finding the smoo…
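The local linear estimator (LLE) named in the abstract can be sketched in a few lines: at each evaluation point x, a kernel-weighted least-squares line is fitted to the data, and its intercept is the estimate m(x). The Gaussian kernel and the bandwidth value below are illustrative assumptions, not choices from the paper.

```python
# Sketch of a local linear estimator: fit y ≈ a + b*(x_i - x) by weighted
# least squares with kernel weights, and return the intercept a as m(x).
import math

def local_linear(xs, ys, x, h):
    """Local linear estimate of m(x) with a Gaussian kernel and bandwidth h."""
    w = [math.exp(-0.5 * ((xi - x) / h) ** 2) for xi in xs]  # kernel weights
    s0 = sum(w)
    s1 = sum(wi * (xi - x) for wi, xi in zip(w, xs))
    s2 = sum(wi * (xi - x) ** 2 for wi, xi in zip(w, xs))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * (xi - x) * yi for wi, xi, yi in zip(w, xs, ys))
    # Closed-form intercept of the weighted least-squares line.
    return (s2 * t0 - s1 * t1) / (s0 * s2 - s1 ** 2)

# On noiseless linear data the local linear fit recovers the line exactly,
# which is one reason it is preferred over simple kernel averaging.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2 * xi + 1 for xi in xs]
est = local_linear(xs, ys, 1.0, 0.5)  # recovers 2*1 + 1 = 3
```

In the semi-parametric setting of the abstract, an estimator like this would supply the non-parametric component m(x) while λ or ρ is estimated by maximum likelihood.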
Multilocus haplotype analysis of candidate variants with genome wide association studies (GWAS) data may provide evidence of association with disease, even when the individual loci themselves do not. Unfortunately, when a large number of candidate variants are investigated, identifying risk haplotypes can be very difficult. To meet the challenge, a number of approaches have been put forward in recent years. However, most of them are not directly linked to the disease-penetrances of haplotypes and thus may not be efficient. To fill this gap, we propose a mixture model-based approach for detecting risk haplotypes. Under the mixture model, haplotypes are clustered directly according to their estimated d…
Measuring the efficiency of postgraduate and undergraduate programs is one of the essential elements of the educational process. In this study, the colleges of Baghdad University and data for the academic year 2011-2012 were chosen to measure the relative efficiencies of postgraduate and undergraduate programs in terms of their inputs and outputs. A relevant method for analysing such data is Data Envelopment Analysis (DEA). The effect of academic staff on the numbers of enrolled and alumni students in the postgraduate and undergraduate programs is the main focus of the study.
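The intuition behind a DEA efficiency score can be illustrated in the single-input, single-output case: each unit's output-to-input ratio is normalized by the best-performing unit, so the frontier unit scores 1.0. Full DEA (e.g. the CCR model) instead solves a linear program per unit over multiple inputs and outputs; the college names and figures below are invented purely for illustration.

```python
# Single-input/single-output illustration of DEA-style relative efficiency:
# input = academic staff, output = alumni. All numbers are hypothetical.
colleges = {
    "College A": {"staff": 50, "alumni": 200},
    "College B": {"staff": 40, "alumni": 220},
    "College C": {"staff": 60, "alumni": 180},
}

# Output per unit of input for each decision-making unit (DMU).
ratios = {name: d["alumni"] / d["staff"] for name, d in colleges.items()}

# Normalize by the best ratio: the frontier DMU gets efficiency 1.0,
# the others are scored relative to it.
best = max(ratios.values())
efficiency = {name: r / best for name, r in ratios.items()}
```

Here College B (220/40 = 5.5 alumni per staff member) defines the frontier, while A and C receive efficiencies below 1.0; with several inputs and outputs, DEA finds per-unit weights by linear programming rather than a fixed ratio.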
Crime is a threat to any nation's security administration and jurisdiction. Crime analysis therefore becomes increasingly important, because it identifies the time and place of crimes from the collected spatial and temporal data. However, older techniques, such as paperwork, investigative judges, and statistical analysis, are not efficient enough to predict accurately when and where a crime will take place. When machine learning and data mining methods were deployed in crime analysis, prediction accuracy increased dramatically. In this study, various types of criminal analysis and prediction using several machine learning and data mining techniques, based o…
Software-Defined Networking (SDN) has transformed network management by detaching the control plane from the data forwarding plane, resulting in unparalleled flexibility and efficiency in network administration. However, the heterogeneity of traffic in SDN makes it difficult to meet Quality of Service (QoS) demands and to manage network resources efficiently. SDN traffic flows are often divided into elephant flows (EFs) and mice flows (MFs). EFs, distinguished by their large packet volumes and long durations, account for a small share of total traffic but consume disproportionate network resources, causing congestion and delays for the smaller MFs. MFs, on the other hand, are short-lived and latency-sensitive, but they accou…
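A common baseline for the EF/MF split described above is a simple threshold heuristic over flow statistics: flows whose byte count or duration exceeds a cutoff are treated as elephants, the rest as mice. The thresholds and flow records below are illustrative assumptions, not values from the paper, which may well use a learned classifier instead.

```python
# Threshold heuristic for separating elephant flows (EFs) from mice flows
# (MFs) using per-flow counters such as an SDN controller could poll from
# switches. Cutoff values are hypothetical.
BYTES_THRESHOLD = 1_000_000   # flows above 1 MB are candidate elephants
DURATION_THRESHOLD = 10.0     # or flows lasting longer than 10 seconds

def classify(flow):
    """Label a flow record as 'elephant' or 'mouse' by size/duration."""
    if flow["bytes"] >= BYTES_THRESHOLD or flow["duration"] >= DURATION_THRESHOLD:
        return "elephant"
    return "mouse"

flows = [
    {"id": 1, "bytes": 5_000_000, "duration": 42.0},  # bulk transfer
    {"id": 2, "bytes": 2_400, "duration": 0.3},       # short, latency-sensitive
]
labels = {f["id"]: classify(f) for f in flows}
```

Once labelled, the controller can route EFs onto high-capacity paths while keeping MFs on low-latency ones, which is the QoS trade-off the abstract describes.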
Children realize that meal times provide golden opportunities to get the attention they need. However, many of the difficulties that may arise around eating can be avoided if the family deals with the problem in the right way. The study accordingly aims at investigating the reasons behind a child's obstinacy in eating and at examining some variables related to the research topic.
The results are summed up as follows:
- The sample comprised 3 children of both sexes, aged between 3 and 6 years
The COVID-19 pandemic has necessitated new methods for controlling the spread of the virus, and machine learning (ML) holds promise in this regard. Our study aims to explore the latest ML algorithms utilized for COVID-19 prediction, with a focus on their potential to optimize decision-making and resource allocation during peak periods of the pandemic. Our review stands out from others as it concentrates primarily on ML methods for disease prediction. To conduct this scoping review, we performed a Google Scholar literature search using "COVID-19," "prediction," and "machine learning" as keywords, with a custom range from 2020 to 2022. Of the 99 articles screened for eligibility, we selected 20 for the final review. Our system…