Technological development in information and communication has been accompanied by security challenges related to the transmission of information, and encryption is a well-established solution. Encryption is one of the traditional methods of protecting plain text by converting it into an unintelligible form; it can be implemented using substitution techniques, shifting techniques, or mathematical operations. This paper proposes a two-branch method for encrypting text. The first branch is a new mathematical model for creating and exchanging keys: the proposed key-exchange method is a development of Diffie-Hellman, a new mathematical model for exchanging keys based on prime numbers with the possibility of using ordinary integers as well. The second branch of the proposal is a multi-key encryption algorithm that can use more than two keys. Keys can be integers of any kind (at least the last key must be a prime number) and need not be of the same length. Encryption converts the text characters into suggested integer values and then transforms these values through a multilevel mathematical model applied repeatedly (the number of levels depends on the number of keys used), whereas decryption is a one-level process that uses a single main key, with the remaining keys serving as secondary keys. Messages are encoded (in ASCII or any suggested system) before encryption. The algorithm can use an unlimited number of keys of very large size (more than 7500 bytes), at least one of which is a prime number, and exponentiation of the keys is used to increase complexity. The experiments demonstrated the robustness and security of both the key-exchange protocol and the encryption algorithm, and comparison with other methods shows that the proposed method is more secure, more flexible, and easier to implement.
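Since the abstract does not reproduce the modified protocol itself, the sketch below shows the classic finite-field Diffie-Hellman exchange that the proposed key-exchange model develops; `p` and `g` are textbook toy parameters, not values from the paper.

```python
# A minimal sketch of classic Diffie-Hellman key exchange over a prime
# field, the baseline protocol the proposed model extends. p and g are
# toy demonstration values; real use requires a large safe prime.
import secrets

p, g = 23, 5                        # public prime modulus and generator (toy)

a = secrets.randbelow(p - 2) + 1    # Alice's private key
b = secrets.randbelow(p - 2) + 1    # Bob's private key

A = pow(g, a, p)                    # Alice's public value, sent to Bob
B = pow(g, b, p)                    # Bob's public value, sent to Alice

shared_a = pow(B, a, p)             # Alice computes (g^b)^a mod p
shared_b = pow(A, b, p)             # Bob computes (g^a)^b mod p
assert shared_a == shared_b         # both sides derive the same secret
```

The security of the exchange rests on the difficulty of recovering `a` or `b` from the public values, which is why the modulus must be a large prime in practice.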
Realizing and understanding semantic segmentation is a demanding task not only in computer vision but also in earth-science research. Semantic segmentation decomposes compound architectures into individual elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information about each object, making it a method for labeling and clustering point clouds automatically. Classifying three-dimensional natural scenes requires a point-cloud dataset as the input data representation, and working with 3D data raises many challenges, such as the small number, low resolution, and limited accuracy of three-dimensional datasets. Deep learning now is the po…
The goal of this research is to develop a numerical model that can be used to simulate the sedimentation process under two scenarios: first, with the flocculation unit in service, and second, with the flocculation unit out of service. The general equations of flow and sediment transport were solved using the finite difference method and then coded in MATLAB. The removal efficiencies of the coded model and the operational model were very close for each particle-size dataset, with a difference of +3.01%, indicating that the model can be used to predict the removal efficiency of a rectangular sedimentation basin. The study also revealed…
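The paper's full flow and sediment-transport solver is not reproduced in the abstract; the sketch below illustrates the finite-difference idea on a deliberately reduced model, a one-dimensional settling balance for a rectangular basin, with all parameter values assumed for illustration.

```python
# A minimal sketch (not the paper's model) of a 1-D finite-difference
# sediment march for a rectangular settling basin: the depth-averaged
# concentration C is advanced along the basin while mass is removed at
# the settling-velocity rate, then removal efficiency is reported.
# All parameter values below are hypothetical, not from the study.
import numpy as np

L, H, u = 30.0, 3.0, 0.02      # basin length (m), depth (m), flow velocity (m/s)
w_s = 0.5e-3                   # particle settling velocity (m/s), assumed
nx = 1000                      # number of grid cells along the basin
dx = L / nx

C = np.empty(nx + 1)
C[0] = 1.0                     # normalized inlet concentration
for i in range(nx):            # forward-Euler march: dC/dx = -(w_s/(u*H)) * C
    C[i + 1] = C[i] - dx * (w_s / (u * H)) * C[i]

efficiency = 1.0 - C[-1] / C[0]
print(f"predicted removal efficiency: {efficiency:.1%}")
# The analytic solution 1 - exp(-w_s*L/(u*H)) can be used to verify the march.
```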
This research aims to distinguish the reef environment from the non-reef environment. The Oligocene-Miocene succession in western Iraq was selected as a case study, represented by the reefal limestone facies of the Anah Formation (Late Oligocene), deposited in reef to back-reef environments; the dolomitic limestone of the Euphrates Formation (Early Miocene), deposited in open-sea environments; and the gypsiferous marly limestone of the Fatha Formation (Middle Miocene), deposited in a lagoonal environment. The content of rare earth elements (REEs) (La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Er, Ho, Tm, Yb, Lu, and Y) in the reef facies appears to be much lower than in the non-reef facies. The open-sea facies have a low REE content due to bein…
Most of the known cases of strong gravitational lensing involve multiple imaging of an active galactic nucleus. The properties of lensed active galactic nuclei make them promising systems for astrophysical applications of gravitational lensing. We therefore present a simple model for strong lensing in gravitationally lensed systems to calculate the ages of four lensed galaxies. In the present work we adopt the Friedmann model with curvature index k = 0 (the Euclidean case), and the results show good agreement with other models.
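In the flat, matter-dominated Friedmann model the age at redshift z has the closed form t(z) = (2/3) H0^(-1) (1 + z)^(-3/2); the sketch below evaluates it for a few hypothetical redshifts (H0 and the z values are assumed, not the four systems studied in the paper).

```python
# A minimal sketch of the age calculation in a flat (k = 0), matter-dominated
# Friedmann model (Einstein-de Sitter), where the age at redshift z is
# t(z) = (2/3) * (1/H0) * (1 + z)**-1.5. H0 and the redshifts below are
# illustrative assumptions.

H0_KM_S_MPC = 70.0                        # Hubble constant, km/s/Mpc (assumed)
MPC_KM = 3.0857e19                        # kilometres per megaparsec
SEC_PER_GYR = 3.156e16                    # seconds per gigayear

def eds_age_gyr(z: float) -> float:
    """Age of a k = 0 matter-dominated universe at redshift z, in Gyr."""
    hubble_time_s = MPC_KM / H0_KM_S_MPC  # 1/H0 in seconds
    return (2.0 / 3.0) * hubble_time_s / SEC_PER_GYR * (1.0 + z) ** -1.5

for z in (0.0, 0.5, 1.0, 2.0):            # hypothetical lens redshifts
    print(f"z = {z:.1f}: age = {eds_age_gyr(z):.2f} Gyr")
```

At z = 0 this gives roughly 9.3 Gyr for H0 = 70 km/s/Mpc, the well-known (2/3) Hubble-time age of the Einstein-de Sitter universe.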
The adaptive optics (AO) technique has been developed to correct for atmospheric seeing. The purpose of this study is to use MATLAB to investigate the performance of an AO system with one of the most recent AO simulation tools, Object-Oriented MATLAB Adaptive Optics (OOMAO). This was achieved by studying the variables that affect image-quality correction, such as the observation wavelength bands, atmospheric parameters, telescope parameters, deformable-mirror parameters, wavefront-sensor parameters, and noise parameters. The results present a detailed analysis of the factors that influence the image-correction process as well as the impact of the AO components on that process.
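OOMAO itself is a MATLAB class library, so as a language-neutral illustration the sketch below shows two standard relations behind the parameters the study varies: the wavelength scaling of the Fried parameter, r0 ∝ λ^(6/5), and a Maréchal Strehl estimate from deformable-mirror fitting error; the seeing value, actuator pitch, and fitting coefficient are assumed.

```python
# A minimal sketch (not OOMAO) of two standard adaptive-optics scalings:
# the Fried parameter r0 grows with wavelength as lambda**(6/5), and the
# deformable-mirror fitting-error variance sigma2 = k * (d / r0)**(5/3)
# sets an approximate Strehl ratio via the Marechal relation S = exp(-sigma2).
# r0 at 500 nm, the actuator pitch d, and k = 0.3 are assumed values.
import math

def fried_parameter(r0_500nm: float, wavelength_nm: float) -> float:
    """Fried parameter at the observing wavelength, in metres."""
    return r0_500nm * (wavelength_nm / 500.0) ** 1.2

def strehl_fitting(d: float, r0: float, k: float = 0.3) -> float:
    """Marechal Strehl estimate from DM fitting error alone."""
    sigma2 = k * (d / r0) ** (5.0 / 3.0)  # residual phase variance, rad^2
    return math.exp(-sigma2)

r0_500, pitch = 0.10, 0.20               # 10 cm seeing at 500 nm, 20 cm pitch
for band_nm in (500.0, 1650.0, 2200.0):  # V, H, K observation bands
    r0 = fried_parameter(r0_500, band_nm)
    print(f"{band_nm:6.0f} nm: r0 = {r0:.2f} m, Strehl ~ {strehl_fitting(pitch, r0):.2f}")
```

The output reflects why AO correction improves toward longer wavelengths: the same turbulence produces a larger r0, hence a smaller residual variance and a higher Strehl ratio.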
Optical Mark Recognition (OMR) is an important technology for applications that require fast, highly accurate processing of large volumes of hand-filled forms. The aim of this technology is to reduce manual work and human effort, achieve high accuracy in assessment, and minimize the time needed to evaluate answer sheets. This paper proposes an OMR method using a Modified Bidirectional Associative Memory (MBAM). MBAM has two phases (learning and analysis): it learns the answer sheets that contain the correct answers, giving each its own code representing the number of correct answers, and then detects the marks on answer sheets using the analysis phase. The proposal is able to detect questions with no selection or with more than one choice selected; in addition, using M…
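The modification itself is not given in the abstract; the sketch below implements the classic Kosko Bidirectional Associative Memory that MBAM builds on, with toy bipolar patterns standing in for mark patterns and answer codes.

```python
# A minimal sketch of a classic Kosko Bidirectional Associative Memory,
# the base model the paper's MBAM modifies (the modification itself is
# not reproduced here). Patterns are bipolar (+1/-1) vectors: x could
# encode a scanned mark pattern and y its associated answer code.
import numpy as np

def train_bam(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Learning phase: Hebbian sum of outer products, W = sum x y^T."""
    return X.T @ Y

def recall(W: np.ndarray, x: np.ndarray, steps: int = 10) -> np.ndarray:
    """Analysis phase: iterate x -> y -> x until the pair stabilizes."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(steps):
        y = sign(x @ W)
        x = sign(y @ W.T)
    return y

# Two toy associations: 6-bit mark patterns -> 4-bit answer codes (assumed).
X = np.array([[ 1, -1,  1, -1,  1, -1],
              [-1,  1, -1,  1, -1,  1]])
Y = np.array([[ 1,  1, -1, -1],
              [-1, -1,  1,  1]])

W = train_bam(X, Y)
print(recall(W, X[0]))   # recovers the code paired with the first pattern
```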
The Enhanced Thematic Mapper Plus (ETM+) sensor carried on board the Landsat-7 satellite was launched on 15 April 1999. Four years later, the images collected by this sensor were severely affected by the failure of the system's Scan Line Corrector (SLC), a radiometric error. The median filter is one of the basic building blocks in many image-processing situations. Digital images are often corrupted by impulse noise due to errors generated by the sensor, errors that occur during analog-to-digital conversion of signals, and errors generated in communication channels. These errors inevitably change the intensity of some pixels while leaving others unchanged. To remove impulse noise and improve the quality of the…
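A median filter replaces each pixel with the median of its neighbourhood, which discards isolated impulse values while preserving edges; the sketch below demonstrates this on a synthetic gradient image with hypothetical salt-and-pepper noise, not a Landsat-7 scene.

```python
# A minimal sketch of 3x3 median filtering for impulse (salt-and-pepper)
# noise, the operation the abstract describes. The test image is a
# synthetic gradient with assumed noise parameters.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

# Synthetic 8-bit image: a smooth horizontal gradient.
img = np.tile(np.linspace(0, 255, 64, dtype=np.uint8), (64, 1))

# Corrupt 5% of the pixels with salt (255) or pepper (0) impulse noise.
noisy = img.copy()
mask = rng.random(img.shape) < 0.05
noisy[mask] = rng.choice([0, 255], size=mask.sum()).astype(np.uint8)

# Each output pixel becomes the median of its 3x3 neighbourhood, which
# rejects isolated extreme values while leaving smooth regions intact.
restored = median_filter(noisy, size=3)

print("mean abs error, noisy:   ", np.abs(noisy.astype(int) - img.astype(int)).mean())
print("mean abs error, restored:", np.abs(restored.astype(int) - img.astype(int)).mean())
```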