Data security is an important component of data communication and transmission systems. Its main role is to keep sensitive information safe and intact from sender to receiver. The proposed system secures text messages through two security principles: encryption and steganography. It introduces a novel encryption method based on graph theory properties: a graph is formed from a password, and the weight matrix of that graph is used as the encryption key. The Least Significant Bit (LSB) method is then employed to hide the encrypted message in the green component of a colored image. Practical experiments on perceptibility, capacity, and robustness were evaluated using similarity measures such as PSNR, MSE, and SSIM. These measures confirmed the efficiency of the system for image quality and message hiding, with a PSNR above 85 dB, an MSE ranging from 4.537e-05 to 5.27546e-04, and SSIM = 1.0, for cover images ranging from 256×300 to 1200×760 pixels and messages ranging from 16 to 300 characters.
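The LSB hiding step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names `embed_lsb_green` and `extract_lsb_green` are hypothetical, and the message bits are assumed to have already been encrypted with the paper's graph-derived key.

```python
import numpy as np

def embed_lsb_green(image, bits):
    """Hide a bit sequence in the LSBs of the green channel (index 1).

    `image` is an H x W x 3 uint8 array; `bits` is a sequence of 0/1 values.
    Only the green plane is modified, so red and blue stay untouched.
    """
    stego = image.copy()
    flat = stego[:, :, 1].ravel()          # copy of the green plane, flattened
    if len(bits) > flat.size:
        raise ValueError("message too long for this cover image")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b     # clear the LSB, then set it to the bit
    stego[:, :, 1] = flat.reshape(stego.shape[:2])
    return stego

def extract_lsb_green(stego, n_bits):
    """Recover the first n_bits message bits from the green channel."""
    flat = stego[:, :, 1].ravel()
    return [int(flat[i] & 1) for i in range(n_bits)]
```

Because each pixel's green value changes by at most 1, the distortion is tiny, which is consistent with the very high PSNR the abstract reports.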
Modern textual studies have researched the textuality of texts and defined seven well-known standards for it, relying throughout on the main elements of the text (the speaker, the text, and the recipient). This study investigated textuality in philology and in the jurisprudence of the science of the text.
Tissue joining is a growing challenge in surgery as technology advances and more precise and difficult operations are performed. Laser tissue welding is a promising technique that may further advance surgical practice. Objectives: To study the ability of lasers to join tissues and the optimum parameters for good tissue welding. Methods: An in-vitro study conducted at the Institute of Laser, Baghdad University, from October 2008 to February 2009. Diode and Nd:YAG lasers were applied, in different sessions, to sheep small intestine with or without solder to weld a 2-mm-long full-thickness incision. Different powers and energies were used to obtain the maximum effect. Re
The aesthetic content of data visualization is one of the contemporary areas through which data scientists and designers have been able to link data to humans. Even after successful attempts to model data visualization, it was not clear how aesthetic content came to be chosen as an input for humanizing these models. The goal of the current research is therefore to use the descriptive analytical approach to identify the aesthetic contents in data visualization, which the researchers interpret through pragmatist philosophy and Kantian philosophy, and to analyze a sample of data visualization models to reveal their aesthetic entry points and explain how to humanize them. The two researchers reached seve
Data communication has been growing steadily in recent years. Data encryption has therefore become essential for secure data transmission and storage and for protecting data contents from intruders and unauthorized persons. In this paper, a fast technique for text encryption based on a genetic algorithm is presented. Encryption is achieved with the genetic operators crossover and mutation: the plaintext characters are divided into pairs, the crossover operation is applied between the members of each pair, and the mutation operation is then applied to produce the encrypted text. The experimental results show that the proposal provides an important improvement in encryption rate with a comparatively high-speed process
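The pair-wise crossover-then-mutation scheme described above can be sketched as follows. The crossover point and the XOR key byte are illustrative assumptions, since the abstract does not specify them; both operators are self-inverse, which makes decryption straightforward.

```python
def crossover_pair(a, b, point=4):
    """Single-point crossover on the 8-bit codes of two characters:
    swap the low `point` bits between them. Applying it twice restores
    the originals, so the same function decrypts."""
    mask = (1 << point) - 1
    x, y = ord(a), ord(b)
    return (x & ~mask) | (y & mask), (y & ~mask) | (x & mask)

def encrypt(text, key_byte=0x2A, point=4):
    if len(text) % 2:                      # pad so the characters pair up
        text += " "
    out = []
    for i in range(0, len(text), 2):
        x, y = crossover_pair(text[i], text[i + 1], point)
        out.extend([x ^ key_byte, y ^ key_byte])   # mutation: flip key-selected bits
    return bytes(out)

def decrypt(data, key_byte=0x2A, point=4):
    out = []
    for i in range(0, len(data), 2):
        x, y = data[i] ^ key_byte, data[i + 1] ^ key_byte  # undo mutation
        a, b = crossover_pair(chr(x), chr(y), point)       # crossover is its own inverse
        out.extend([a, b])
    return "".join(chr(c) for c in out)
```

Note that this toy version is only a demonstration of the operator pipeline; the paper's actual technique may differ in how pairs, crossover points, and mutation masks are chosen.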
Building a system to identify individuals through their speech recordings finds application in diverse areas, such as telephone shopping, voice mail, and security control. However, building such systems is a tricky task because of the vast range of differences in the human voice; selecting strong features is therefore crucial for the recognition system. Accordingly, a speaker recognition system based on new spin-image descriptors (SISR) is proposed in this paper. In the proposed system, circular windows (spins) are extracted from the frequency domain of the spectrogram image of the sound, and a run-length matrix is then built for each spin to serve as a base for the feature extraction tasks. Five different descriptors are generated fro
... Show MoreThis research attempted to take advantage of modern techniques in the study of the superstructural phonetic features of spoken text in language using phonetic programs to achieve more accurate and objective results, far from being limited to self-perception and personal judgment, which varies from person to person.
It should be noted that these phonological features (nabr, waqf, and intonation) are performance controls that determine the meaning of a word or sentence. In the modern era, however, they have received little attention, and what little attention some of them did receive addressed issues of composition or style. We therefore recommend that more attention be given to the study of
Compressing speech reduces data storage requirements, which in turn reduces the time needed to transmit digitized speech over long-haul links such as the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation methods introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (one- and two-dimensional) on speech compression. DWT and MCT performances in terms of comp
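The wavelet-based compression idea above can be illustrated with a minimal sketch. Since the MCT/GHM construction is not reproduced in the abstract, a single-level Haar DWT stands in for the transform, with compression achieved by keeping only the largest coefficients; all function names and parameters are illustrative.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthogonal Haar wavelet transform: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def compress(signal, keep=0.5):
    """Zero out all but the largest `keep` fraction of wavelet coefficients."""
    a, d = haar_dwt(signal)
    coeffs = np.concatenate([a, d])
    k = int(len(coeffs) * keep)
    thresh = np.sort(np.abs(coeffs))[-k]       # magnitude of the k-th largest coefficient
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return coeffs[:len(a)], coeffs[len(a):]

# Toy "speech" signal: the sum of two tones, 1024 samples.
t = np.linspace(0, 1, 1024, endpoint=False)
sig = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
a, d = compress(sig, keep=0.5)
rec = haar_idwt(a, d)
```

For smooth signals most of the energy lies in the approximation coefficients, so half the coefficients can be discarded with only a small reconstruction error; this is the basic trade-off the paper measures for the DWT and MCT.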
In probability theory, generalizing distributions is an important area. Many distributions are inappropriate for modeling data that is symmetric, semi-symmetric, or heavily skewed. In this paper, a new compound distribution with four parameters, called Marshall Olkin Marshall Olkin Weibull (MOMOWe), is introduced. Several important statistical properties of the new distribution were studied and examined. The four unknown parameters were estimated by the maximum likelihood method. The flexibility of the MOMOWe distribution is demonstrated on two real datasets (semi-symmetric and right-skewed) with different goodness-of-fit criteria. Su
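The maximum likelihood workflow mentioned above can be sketched as follows. Because the MOMOWe density is not reproduced in the abstract, a plain two-parameter Weibull serves as a placeholder, and a simple grid search stands in for a numerical optimizer; all names and parameter values here are illustrative.

```python
import numpy as np

def weibull_logpdf(x, k, lam):
    """Log-density of the Weibull(k, lam): log(k/lam) + (k-1)log(x/lam) - (x/lam)^k."""
    z = x / lam
    return np.log(k / lam) + (k - 1) * np.log(z) - z ** k

def neg_log_likelihood(params, data):
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    return -np.sum(weibull_logpdf(data, k, lam))

def fit_mle(data, k_grid, lam_grid):
    """Minimize the negative log-likelihood over a parameter grid."""
    best_nll, best_params = np.inf, None
    for k in k_grid:
        for lam in lam_grid:
            nll = neg_log_likelihood((k, lam), data)
            if nll < best_nll:
                best_nll, best_params = nll, (k, lam)
    return best_params

# Simulate Weibull(k=1.5, lam=2.0) data by the inverse-transform method.
rng = np.random.default_rng(42)
u = rng.uniform(1e-9, 1.0, size=2000)
data = 2.0 * (-np.log(u)) ** (1 / 1.5)

k_hat, lam_hat = fit_mle(data,
                         np.arange(0.5, 3.01, 0.05),
                         np.arange(0.5, 4.01, 0.05))
```

For the actual MOMOWe fit, `weibull_logpdf` would be replaced by the four-parameter MOMOWe log-density and the optimization would typically be done with a gradient-based routine rather than a grid.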
Codes of red, green, and blue (RGB) data extracted from a lab-fabricated colorimeter device were used to build a proposed classifier for classifying object colors into defined categories of fundamental colors. Primary, secondary, and tertiary colors, namely red, green, orange, yellow, pink, purple, blue, brown, grey, white, and black, were employed in machine learning (ML) by applying an artificial neural network (ANN) algorithm in Python. The classifier, based on the ANN algorithm, required the mentioned eleven colors to be defined as RGB codes in order to acquire the capability of classification. The software's capacity to forecast the color of the code belonging to an ob
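The ANN-on-RGB-codes approach can be sketched with a tiny self-contained network. This is not the paper's setup: it uses a hypothetical one-hidden-layer net written in NumPy, synthetic noisy RGB samples instead of colorimeter readings, and only three of the eleven color classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prototype RGB codes for three of the eleven color classes (illustrative subset).
PROTOTYPES = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}
NAMES = list(PROTOTYPES)

def make_data(n_per_class=200, noise=25.0):
    """Synthetic training set: noisy samples around each prototype, scaled to [0, 1]."""
    X, y = [], []
    for label, rgb in enumerate(PROTOTYPES.values()):
        samples = rng.normal(rgb, noise, size=(n_per_class, 3))
        X.append(np.clip(samples, 0, 255))
        y.extend([label] * n_per_class)
    return np.vstack(X) / 255.0, np.array(y)

class TinyANN:
    """One hidden tanh layer, softmax output, trained by full-batch gradient descent."""
    def __init__(self, n_in=3, n_hidden=16, n_out=3, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        z = self.h @ self.W2 + self.b2
        e = np.exp(z - z.max(axis=1, keepdims=True))   # stable softmax
        return e / e.sum(axis=1, keepdims=True)

    def train(self, X, y, epochs=500):
        Y = np.eye(self.W2.shape[1])[y]                # one-hot targets
        for _ in range(epochs):
            P = self.forward(X)
            dZ = (P - Y) / len(X)                      # softmax + cross-entropy gradient
            dH = dZ @ self.W2.T * (1 - self.h ** 2)    # backprop through tanh
            self.W2 -= self.lr * (self.h.T @ dZ)
            self.b2 -= self.lr * dZ.sum(axis=0)
            self.W1 -= self.lr * (X.T @ dH)
            self.b1 -= self.lr * dH.sum(axis=0)

    def predict(self, X):
        return self.forward(np.atleast_2d(X)).argmax(axis=1)

X, y = make_data()
net = TinyANN()
net.train(X, y)
acc = (net.predict(X) == y).mean()
```

The real system would define all eleven colors as RGB prototypes and train on the colorimeter's actual readings; the network structure and training loop stay the same in spirit.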
Deep learning algorithms have recently achieved considerable success, especially in the field of computer vision. This research describes a classification method applied to a dataset of multiple image types (Synthetic Aperture Radar (SAR) images and non-SAR images). Transfer learning was used for this classification, followed by fine-tuning. Pre-trained architectures trained on the well-known ImageNet database were used: the VGG16 model served as a feature extractor, and a new classifier was trained on the extracted features. The input data consisted of five classes, including the SAR image class (houses) and the non-SAR image classes (cats, dogs, horses, and humans). The Conv
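The VGG16-as-feature-extractor pattern can be sketched in Keras. This is a hedged sketch, not the paper's exact model: the head sizes are assumptions, and `weights=None` is used here only to keep the example offline; the paper's approach corresponds to `weights="imagenet"`, which downloads the pre-trained weights.

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Convolutional base of VGG16, frozen so only the new head is trained.
# In practice pass weights="imagenet" to load the pre-trained ImageNet weights.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# New classifier head for the five classes
# (SAR houses, cats, dogs, horses, humans); layer sizes are illustrative.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The frozen base maps a 224x224 RGB image to a 7x7x512 feature volume.
features = base.predict(np.zeros((1, 224, 224, 3)), verbose=0)
```

Fine-tuning, as mentioned in the abstract, would follow by unfreezing the top convolutional blocks (`base.trainable = True` on selected layers) and continuing training with a small learning rate.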