Underwater Wireless Sensor Networks (UWSNs) have emerged as a promising technology for a wide range of ocean monitoring applications. UWSNs face the unique challenges of the underwater environment, such as a dynamic and sparse network topology, which can easily lead to a partitioned network. This results in hotspot formation and the absence of a routing path from the source to the destination. Therefore, to optimize network lifetime and limit the possibility of hotspot formation along the data transmission path, a traffic-aware protocol is needed. In this research, we propose a traffic-aware routing protocol called PG-RES, which is based on the concepts of Pressure Gradient and RESistance. The proposed PG-RES protocol first discovers its neighborhood using a node request message to build a routing directory that includes the communication cost to each neighboring node. Then, by proactively adjusting the routing process according to network conditions, PG-RES mitigates the traffic burden on the nodes along the transmission path to the sink, so the chance of hotspot occurrence in the underwater environment is reduced. The simulation results reveal that the proposed PG-RES protocol achieves superior performance over the other techniques in terms of average energy usage, packet delivery ratio, network lifetime, and transmission delay. The PG-RES protocol demonstrated reliable data transmission, with a packet drop ratio 13.92% lower than EEDOR-VA and 3.66% lower than VHARD-FS. The development of this protocol aims to support real-time applications in highly isolated ocean environments, where reliable data forwarding and hotspot handling are essential for timely data transmission.
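To make the pressure-gradient/resistance idea concrete, the following minimal Python sketch shows how a node might rank the entries of its routing directory: it favours neighbours closer to the surface sink (a stronger pressure gradient) while penalising neighbours whose buffered traffic and low residual energy raise their "resistance". The cost formula, field names, and weights are illustrative assumptions, not the published PG-RES specification.

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    depth_m: float          # shallower neighbours are closer to the surface sink
    queue_load: int         # buffered packets, a proxy for traffic "resistance"
    residual_energy: float  # joules remaining

def next_hop(own_depth_m, directory, alpha=1.0, beta=0.5):
    """Pick the neighbour with the best pressure gradient per unit resistance."""
    best, best_score = None, float("-inf")
    for n in directory:
        gradient = own_depth_m - n.depth_m        # > 0 means closer to the sink
        if gradient <= 0:
            continue                              # never forward away from the sink
        resistance = 1.0 + beta * n.queue_load / max(n.residual_energy, 1e-6)
        score = alpha * gradient / resistance     # favour low-traffic, high-energy nodes
        if score > best_score:
            best, best_score = n, score
    return best

# A node at 300 m depth choosing between two discovered neighbours:
directory = [Neighbor(7, 250.0, 20, 10.0), Neighbor(9, 260.0, 2, 55.0)]
print(next_hop(300.0, directory).node_id)  # 9: the lightly loaded node wins despite the smaller gradient
```

In a full protocol the directory would be refreshed by the node request handshake described above, and the weights would be tuned against the traffic model of the deployment.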
This paper describes the use of a microcomputer as a laboratory instrument system. The system focuses on the measurement of three weather variables: temperature, wind speed, and wind direction. The instrument is a type of data acquisition system; this paper deals with the design and implementation of a data acquisition system based on a personal computer (Pentium) using the Industry Standard Architecture (ISA) bus. The design of this system mainly involves a hardware implementation, together with the software programs used for testing, measurement, and control. The system can be used to display the required information transferred from the external field and processed by the system. A Visual Basic language with Microsoft Foundation cl…
With the continuous downscaling of semiconductor processes, the growing power density and thermal issues in multicore processors become more and more challenging, so reliable dynamic thermal management (DTM) is required to prevent severe degradation of system performance. The accuracy of the thermal profile delivered to the DTM manager plays a critical role in the efficiency and reliability of DTM; different sources of noise and variation in deep submicron (DSM) technologies severely affect the thermal data, which can lead to significant degradation of DTM performance. In this article, we propose a novel fault-tolerance scheme that exploits approximate computing to mitigate the DSM effects on DTM efficiency. Approximate computing in hardw…
Portable devices such as smartphones, tablet PCs, and PDAs are a useful combination of hardware and software aimed at mobile workers. While they offer the ability to review documents, communicate via electronic mail, and manage appointments and meetings, they usually lack a variety of essential security features. To address the security concerns around sensitive data, many individuals and organizations, aware of the associated threats, mitigate them by improving user authentication, encrypting content, and deploying malware protection, firewalls, intrusion prevention, and so on. However, no standards have yet been developed to determine whether such mobile data management systems adequately provide the fu…
Software-defined networking (SDN) is an innovative network paradigm that offers substantial control of network operation through the network's architecture. SDN is an ideal platform for implementing projects involving distributed applications, security solutions, and decentralized network administration in a multitenant data center environment, owing to its programmability. As its usage rapidly expands, network security threats are becoming more frequent, making SDN security a significant concern. Machine-learning (ML) techniques for intrusion detection of DDoS attacks in SDN networks utilize standard datasets that fail to cover all classification aspects, resulting in under-coverage of attack diversity. This paper proposes a hybr…
Recently, the increasing demand to transfer data through the Internet has pushed the Internet infrastructure to the very edge of these networks' capability. This high demand causes a deficiency in rapid response to emergencies and disasters, limiting the ability to control or reduce their devastating effects. As one of the main cornerstones of addressing the data traffic forwarding issue, Internet networks need to impose the highest priority on the special networks for Security, Health, and Emergency (SHE) data traffic. These networks work in closed and private domains to serve a group of users for specific tasks. Our novel proposed network flow priority management, based on ML and SDN, provides the high level of control needed to give the required flow priority to SHE dat…
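As a rough illustration of how a controller could enforce such a priority, the sketch below classifies a flow and maps SHE traffic onto a high-priority egress queue. The class names, the trivial port-based classifier, and the queue numbering are illustrative placeholders; in a real SDN deployment the decision would come from the trained ML model and be installed on the switches as flow rules rather than evaluated in Python.

```python
# Illustrative sketch: map a classified flow onto a switch priority queue.
SHE_CLASSES = {"security", "health", "emergency"}

def classify_flow(features: dict) -> str:
    """Stand-in for the ML classifier; a trivial port-based rule is used here."""
    return "emergency" if features.get("dst_port") == 5060 else "best_effort"

def queue_for_class(flow_class: str) -> int:
    """Queue 0 is served first, so SHE traffic is forwarded ahead of ordinary flows."""
    return 0 if flow_class in SHE_CLASSES else 1

flow = {"src": "10.0.0.5", "dst": "10.0.0.9", "dst_port": 5060}
print(queue_for_class(classify_flow(flow)))   # -> 0, the high-priority queue
```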
Artificial Neural Networks (ANNs) are one of the important statistical methods widely used in a range of applications across various fields. They simulate the work of the human brain in terms of receiving a signal, processing the data in a cell, and sending it on to the next cell. An ANN is a system consisting of a number of modules (layers) linked together (input, hidden, output). A comparison was made between three types of neural networks: the Feed-Forward Neural Network (FFNN), the Back-Propagation network (BPL), and the Recurrent Neural Network (RNN). The study found that the lowest false prediction rate was achieved by the recurrent network architecture, using data on graduate students at the College of Administration and Economics, Univer…
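For readers unfamiliar with the architectures being compared, the sketch below defines a minimal feed-forward network (trained with back-propagation, which covers both the FFNN and BPL cases) and a simple recurrent network in PyTorch. The layer sizes and random inputs are placeholders, not the models or the graduate-student dataset used in the study.

```python
import torch
import torch.nn as nn

# Feed-forward network: input -> hidden -> output, trained with back-propagation.
ffnn = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Recurrent network: the hidden state carries information across time steps.
class SimpleRNN(nn.Module):
    def __init__(self, n_features=8, hidden=16, n_classes=2):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time_steps, n_features)
        _, h = self.rnn(x)                 # h: (1, batch, hidden), the final hidden state
        return self.out(h.squeeze(0))

x_tabular = torch.randn(4, 8)              # 4 samples with 8 features each
x_sequence = torch.randn(4, 5, 8)          # 4 samples, 5 time steps, 8 features
print(ffnn(x_tabular).shape, SimpleRNN()(x_sequence).shape)  # both give (4, 2) class scores
```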
The road network is considered one of the determinants that shape the areas of human activity: it is relied upon to determine the cost of access, it helps achieve the connectivity needed for interaction and human activity, and it shortens the distance and time between the population and places of service. The density of the road network in any space is directly affected by the density of the population and by the type of economic activities and administrative functions performed by that space. On this basis, the subject of this study is the quantitative analysis of the road network in the Governorate of Karbala. The study consists of a quantitative analysis of the road network and the urban nodes in th…
This research aims to analyze and simulate real biochemical test data in order to uncover the relationships among the tests and how each of them affects the others. The data were acquired from a private Iraqi biochemical laboratory. However, these data have many dimensions, a high rate of null values, and a large number of patients. Several experiments were then applied to these data, beginning with unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was therefore performed to make the dataset analyzable by supervised techniques such as Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), Logistic Regression (LR), K-Nearest Neighbor (K-NN), and Naïve Bayes (NB) …
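The workflow described above, preprocessing the null-heavy data and then comparing several supervised classifiers, could look roughly like the following scikit-learn snippet. The synthetic data, the mean-imputation strategy, and the cross-validation setup are assumptions, since the laboratory dataset itself is not public.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Stand-in for the biochemical tests: 200 patients, 10 tests, ~20% missing values.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X[rng.random(X.shape) < 0.2] = np.nan
y = rng.integers(0, 2, size=200)          # e.g. normal vs. abnormal result

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "NB": GaussianNB(),
}

for name, model in models.items():
    # Preprocessing: fill nulls with the column mean, then standardise the features.
    pipe = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler(), model)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {score:.2f}")
```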
Multiple elimination (de-multiple) is one of the seismic processing steps used to remove the effects of multiples and delineate the correct primary reflectors. Applying normal move-out to flatten the primaries is the way multiples are eliminated once the data are transformed to the frequency-wavenumber domain: the flattened primaries align with the zero-wavenumber axis of the frequency-wavenumber domain, while any other reflection types (multiples and random noise) are distributed elsewhere. Applying a dip filter that passes the aligned data and rejects the rest, and then transforming the data back from the frequency-wavenumber domain to the time-distance domain, separates the primaries from the multiples. For that reason, a suggested name for this technique is normal move-out frequency-wavenumber domain …
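A minimal NumPy sketch of the described workflow is shown below: the NMO-corrected gather is transformed with a 2-D FFT, a dip filter keeps the fan around the zero-wavenumber axis where the flattened primaries lie, and the result is transformed back to the time-distance domain. The synthetic gather, sampling intervals, and cut-off dip are placeholders chosen only to demonstrate the mechanics, not values from the study.

```python
import numpy as np

def fk_dip_filter(gather, dt, dx, max_dip_s_per_m=2e-4):
    """Keep events flatter than max_dip (primaries after NMO), reject steeper dips."""
    nt, nx = gather.shape
    spec = np.fft.fft2(gather)                       # time-offset -> f-k domain
    f = np.fft.fftfreq(nt, d=dt)[:, None]            # temporal frequencies (Hz)
    k = np.fft.fftfreq(nx, d=dx)[None, :]            # spatial wavenumbers (1/m)
    # Pass the fan |k| <= max_dip * |f| around the zero-wavenumber axis.
    mask = np.abs(k) <= max_dip_s_per_m * np.abs(f)
    return np.real(np.fft.ifft2(spec * mask))        # back to the time-distance domain

# Synthetic NMO-corrected gather: a flat primary plus a steeply dipping residual multiple.
nt, nx, dt, dx = 500, 60, 0.004, 25.0
gather = np.zeros((nt, nx))
gather[200, :] += 1.0                                # flattened primary (zero dip)
for trace in range(nx):
    gather[300 + 2 * trace, trace] += 1.0            # dipping multiple (two samples per trace)
filtered = fk_dip_filter(gather, dt, dx)
print(filtered[200, :].mean(), filtered[300, 0])     # primary preserved, multiple attenuated
```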