Technological development in information and communication has been accompanied by security challenges related to the transmission of information, and encryption is a well-established solution. Encryption is one of the traditional methods of protecting plain text by converting it into an unintelligible form, and it can be implemented using substitution techniques, shifting techniques, or mathematical operations. This paper proposes a method with two branches for encrypting text. The first branch is a new mathematical model for creating and exchanging keys; the proposed key exchange method is a development of Diffie-Hellman, a new mathematical model for exchanging keys based on prime numbers with the possibility of using integer numbers. The second branch of the proposal is a multi-key encryption algorithm. The algorithm provides the ability to use more than two keys; the keys can be any kind of integer (at least the last key must be a prime number) and need not be of the same length. The encryption process is based on converting the text characters into suggested integer numbers and then transforming those numbers repeatedly through a multilevel mathematical model (the number of levels depends on the number of keys used), while the decryption process is a one-level process that uses just one key as the main key, the other keys serving as secondary keys. Messages are encoded before encryption (using ASCII or any suggested coding system). The algorithm can use an unlimited number of keys of very large size (more than 7500 bytes), at least one of them prime, and exponentiation is applied to the keys to increase complexity. The experiments demonstrated the robustness and security of both the key exchange protocol and the encryption algorithm.
Comparing the suggested method with other methods shows that it is more secure, more flexible, and easier to implement.
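For context, the abstract describes its key exchange as a development of Diffie-Hellman. The sketch below shows the classic Diffie-Hellman exchange that the proposal builds on, using toy parameters; the paper's modified equations and multi-key scheme are not reproduced here.

```python
# Classic Diffie-Hellman key exchange, shown only as background for the
# abstract's "development of Diffie-Hellman". The prime p and generator g
# below are toy values for illustration, far too small for real security.

def dh_public(g: int, private: int, p: int) -> int:
    """Public value sent over the channel: g^private mod p."""
    return pow(g, private, p)

def dh_shared(peer_public: int, private: int, p: int) -> int:
    """Shared secret derived from the peer's public value: peer^private mod p."""
    return pow(peer_public, private, p)

p, g = 23, 5          # toy prime modulus and generator
a, b = 6, 15          # Alice's and Bob's private keys

A = dh_public(g, a, p)   # Alice transmits A
B = dh_public(g, b, p)   # Bob transmits B

# Both sides compute the same secret without ever sending it.
assert dh_shared(B, a, p) == dh_shared(A, b, p)
print(dh_shared(B, a, p))  # → 2
```

The security of the exchange rests on the difficulty of recovering `a` or `b` from the public values, which is why the abstract's emphasis on large prime keys (thousands of bytes) matters.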
Scientific development has occupied a prominent place in the field of diagnosis, moving away from traditional procedures. Scientific progress and the growth of cities have brought diseases that have spread as a result of this development, perhaps the most prominent of which is diabetes. This study aims at accurate diagnosis without examining blood samples, using image analysis to compare two images of the affected person taken no less than ten years apart. Artificial intelligence programs were used to analyze the images and validate the study, with samples collected from both infected and healthy people, using the Python library OpenCV, which is specialized in measuring changes to the human face.
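The core idea above is comparing two facial images of the same person taken years apart. As a minimal, library-free sketch of that idea, the function below computes the mean absolute per-pixel difference between two same-sized grayscale images represented as nested lists; the actual study uses OpenCV, so this illustration (and any threshold one might apply to it) is an assumption, not the paper's method.

```python
# Minimal sketch of image-change measurement: mean absolute difference
# between two equal-sized grayscale images (nested lists of 0-255 values).
# This is a stand-in illustration, NOT the study's OpenCV pipeline.

def mean_abs_diff(img_a, img_b):
    """Average absolute per-pixel difference between two grayscale images."""
    total, count = 0, 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

# Toy 2x2 "images" standing in for the earlier and later photographs.
before = [[10, 20], [30, 40]]
after  = [[12, 18], [33, 47]]
print(mean_abs_diff(before, after))  # → 3.5
```

A real pipeline would first align the faces and normalize lighting before any per-pixel comparison is meaningful.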
The aim of this work is to study the reverse osmosis characteristics of copper sulfate hexahydrate (CuSO4.6H2O), nickel sulfate hexahydrate (NiSO4.6H2O), and zinc sulfate hexahydrate (ZnSO4.6H2O) removal from aqueous solutions discharged by some Iraqi factories, such as the Alnasser Company for mechanical industries. The reverse osmosis unit was operated in a mode where the permeate is removed and the metal concentrate is recycled back to the feed vessel. A spiral-wound thin-film composite (TFC) membrane was used to conduct this study. The variables studied were metal concentration (50–150 ppm) and time (15–90 min). It was found that increasing the time results in an increase in the concentration of metal in the permeate.
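A standard way to report reverse-osmosis performance in studies like this is the observed rejection coefficient, R = 1 − Cp/Cf, where Cp is the permeate concentration and Cf the feed concentration. The helper below illustrates that textbook formula only; the numbers are hypothetical, not results from this paper.

```python
# Observed rejection coefficient, a textbook RO performance metric:
# R = 1 - Cp/Cf. Illustrative values only, not data from the study.

def rejection(c_feed_ppm: float, c_permeate_ppm: float) -> float:
    """Fraction of solute retained by the membrane (0 = none, 1 = complete)."""
    return 1.0 - c_permeate_ppm / c_feed_ppm

# e.g. a 100 ppm metal feed yielding a 5 ppm permeate
print(rejection(100.0, 5.0))  # ≈ 0.95
```

The abstract's finding that permeate concentration rises with time corresponds to a rejection coefficient that declines over the run.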
Document analysis of images captured by camera is a growing challenge. These photos are often poor-quality compound images, composed of various objects and text, which makes automatic analysis complicated. OCR is one of the image processing techniques used to perform automatic identification of text. Existing image processing techniques need to manage many parameters in order to recognize the text in such pictures clearly, and segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images were first filtered using the Wiener filter, and then the active contour algorithm could be applied.
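The abstract's pipeline filters with a Wiener filter and segments with active contours, both of which need image-processing libraries. As a much simpler stand-in that conveys what segmentation does before OCR, the sketch below binarizes a grayscale page with a global threshold, separating dark "text" pixels from light background; the threshold value is an assumption for illustration.

```python
# Simplified segmentation step: global-threshold binarization of a grayscale
# page (nested lists of 0-255 values). This is a conceptual stand-in for the
# paper's Wiener-filter + active-contour pipeline, not its actual method.

def binarize(img, threshold=128):
    """Label each pixel 1 (text, dark) or 0 (background, light)."""
    return [[1 if px < threshold else 0 for px in row] for row in img]

# Toy 3x3 "page": mostly light background with a few dark text pixels.
page = [[250, 240, 30],
        [245, 20, 25],
        [240, 235, 230]]
print(binarize(page))  # → [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```

Active contours improve on this by letting a curve lock onto object boundaries, which is why the paper treats the choice of segmentation as decisive for recognition accuracy.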
Smart water flooding (low salinity water flooding) has mainly been investigated in sandstone reservoirs. The main reasons for using low salinity water flooding are to improve oil recovery and to support the reservoir pressure.
In this study, two sandstone core plugs with different permeabilities, taken from southern Iraq, were used to examine the effect of injecting water with different ion concentrations on oil recovery. The water types used were formation water, seawater, modified low salinity water, and deionized water.
The effects of water salinity, the flow rate of the injected water, and the permeability of the core plugs were studied in order to determine the best conditions for low salinity water flooding.
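Core-flood experiments like those above are typically summarized by the recovery factor, the fraction of the oil initially in the plug that each injected water type displaces. The helper below shows that standard definition with hypothetical volumes; it is not a computation from this study's data.

```python
# Standard oil recovery factor used to compare flooding experiments:
# RF = produced oil volume / oil initially in place. Illustrative numbers
# only, not measurements from the study.

def recovery_factor(oil_produced_ml: float, oil_in_place_ml: float) -> float:
    """Fraction of the original oil recovered by the flood (0 to 1)."""
    return oil_produced_ml / oil_in_place_ml

# e.g. a plug initially holding 10 ml of oil yielding 4.5 ml after injection
print(recovery_factor(4.5, 10.0))  # ≈ 0.45
```

Comparing this factor across formation water, seawater, modified low salinity water, and deionized water floods is how the benefit of low salinity injection would be quantified.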