Steganography is the technique of concealing secret data within other everyday files of the same or a different type. Hiding data has long been essential to digital information security. This work aims to design a steganographic method that can effectively hide a message inside the images (frames) of a video file. A video steganography model is proposed in which a convolutional neural network (CNN) is trained to hide a video (or images) within another video. Using a CNN serves the two main goals of any steganographic method. The first is increased security (resistance to detection and breaking by steganalysis programs): because the weights and architecture are randomized, the exact way in which the network hides the information cannot be known to anyone who does not possess the weights. The second is increased hiding capacity, achieved by using the CNN to decide which areas of the cover are redundant and can therefore carry more hidden data. In the proposed model, the hiding and revealing networks are trained concurrently and are designed to work as a pair. The model learns image patterns that help it decide which parts of the cover image are redundant, so that more pixels can be hidden there. The CNN is implemented in Keras with a TensorFlow backend, and random RGB images from the ImageNet dataset (about 45,000 images of size 256x256) are used for training; the trained model therefore works on images taken from a wide range of sources. By exploiting redundant areas of an image, the quantity of hidden data can be raised (improved capacity).
Furthermore, additional block shuffling is incorporated as an encryption step to improve security, and image enhancement methods are used to improve the output quality. The results show that the proposed method achieves a high security level and a high embedding capacity. They also show that the system performs well in terms of visibility and resistance to attacks, successfully deceiving both human observers and steganalysis programs.
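The block-shuffling encryption step can be sketched as follows. This is a minimal illustration only, with an assumed 8x8 tile size and a seeded pseudo-random permutation acting as the shared key; it is not the exact procedure used in the proposed model.

```python
import numpy as np

def block_shuffle(img, block=8, seed=0):
    """Split an image into block x block tiles and permute them with a
    seeded pseudo-random permutation (the seed acts as the shared key)."""
    h, w = img.shape[:2]
    assert h % block == 0 and w % block == 0
    tiles = [img[r:r+block, c:c+block]
             for r in range(0, h, block) for c in range(0, w, block)]
    perm = np.random.default_rng(seed).permutation(len(tiles))
    out = np.empty_like(img)
    for i, p in enumerate(perm):
        r, c = divmod(i, w // block)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = tiles[p]
    return out, perm

def block_unshuffle(img, perm, block=8):
    """Invert block_shuffle given the same permutation (i.e. the same key)."""
    h, w = img.shape[:2]
    out = np.empty_like(img)
    for i, p in enumerate(perm):
        r, c = divmod(i, w // block)
        rp, cp = divmod(int(p), w // block)
        out[rp*block:(rp+1)*block, cp*block:(cp+1)*block] = \
            img[r*block:(r+1)*block, c*block:(c+1)*block]
    return out
```

Without the seed, an attacker sees only a scrambled arrangement of tiles, which complements the hiding network's randomized weights.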
In this study, ultraviolet (UV) and ozone techniques with a hydrogen peroxide oxidant were used to treat the wastewater produced by the South Baghdad Power Station, using a lab-scale system. The UV/H2O2 experiments showed that the optimum exposure time was 80 min; at this time, the highest removal percentages of oil, COD, and TOC were 84.69%, 56.33%, and 50%, respectively. The effect of pH on contaminant removal was studied over the range 2-12. The best oil, COD, and TOC removal percentages using H2O2/UV (69.38%, 70%, and 52%) were obtained at pH = 12. The H2O2/ozone experiments exhibited better performance compared to
It is well known that drilling fluid is a key parameter for optimizing drilling operations, cleaning the hole, and managing rig hydraulics and the margins of surge and swab pressures. Although experimental work yields valid and reliable results, it is expensive and time consuming. In contrast, continuous and regular determination of the rheological fluid properties allows the fluid to perform its essential functions during well construction. The aim of this study is to develop empirical models that estimate the rheological properties of water-based drilling fluids with less need for laboratory measurements. This study provides two predictive techniques, multiple regression analysis and artificial neural networks, to determine the rheological
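The multiple-regression approach can be sketched as ordinary least squares on mud measurements. The features below (density and solids fraction predicting plastic viscosity) and all numeric values are purely illustrative assumptions, not the study's actual data or model.

```python
import numpy as np

# Hypothetical mud samples: [density (g/cc), solids fraction], and the
# measured plastic viscosity (cP). All values are illustrative only.
X = np.array([[1.05, 0.04], [1.10, 0.06], [1.15, 0.08],
              [1.20, 0.10], [1.25, 0.12], [1.30, 0.15]])
y = np.array([8.0, 10.5, 13.0, 15.5, 18.0, 21.5])

# Fit y = b0 + b1*density + b2*solids by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(density, solids):
    """Estimate plastic viscosity from the fitted empirical model."""
    return coef[0] + coef[1] * density + coef[2] * solids
```

Once fitted, such a model replaces repeated viscometer runs with a cheap evaluation of the regression equation.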
The study uses nonparametric methods for robust estimation of location and scatter, depending on the minimum covariance determinant (MCD) of a multivariate regression model. Owing to the presence of outlier values, the increase in sample size, and the presence of more than one response in the multivariate regression model, it is difficult to find the median location.
The genetic algorithm Fast-MCD-Nested Extension was used and compared with a multilayer back-propagation neural network in terms of accuracy of the results and speed in finding the median location, while the best sample was determined by relying on the smallest distance (Mahalanobis distance) has the stu
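The Mahalanobis distance used to rank observations can be sketched as below. The data, the use of the coordinate-wise median as a robust center, and the plain sample covariance are simplifying assumptions for illustration; the actual method relies on the MCD estimate of scatter.

```python
import numpy as np

def mahalanobis(X, center, cov):
    """Squared Mahalanobis distance of each row of X from `center`."""
    d = X - center
    inv = np.linalg.inv(cov)
    # sum_j sum_k d_ij * inv_jk * d_ik for each row i
    return np.einsum('ij,jk,ik->i', d, inv, d)

# Toy data: three inliers along a line and one clear outlier.
X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0], [10.0, -5.0]])
center = np.median(X, axis=0)   # robust location sketch
cov = np.cov(X.T)               # scatter estimate (MCD in the actual method)
d2 = mahalanobis(X, center, cov)
```

Observations with the largest distances are flagged as outliers and excluded from the best subset.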
The use of the data envelopment analysis (DEA) method helps organizations improve their performance by exploiting their resources efficiently in order to improve service quality. The study addresses the need of the Iraqi Middle East Investment Bank to assess the performance of its branches according to the service quality provided. The importance of the study thus lies in contributing a scientific and systematic method, applying data envelopment analysis to assess the service quality provided by the bank's branches. The study focused on achieving the goal of determining the efficiency of the service quality provided by the bank branches in a manner that reflects the extent of utilization of a
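The DEA efficiency idea can be illustrated in the simplest case. With a single input and a single output, the CCR efficiency score reduces to each unit's output/input ratio divided by the best ratio observed; the branch data below are hypothetical, and a full DEA with multiple inputs and outputs would require solving a linear program per branch.

```python
import numpy as np

# Hypothetical branch data: one input (staff count) and one output
# (transactions served). Values are illustrative only.
staff  = np.array([10.0, 8.0, 12.0, 9.0])
served = np.array([500.0, 480.0, 540.0, 360.0])

# Single-input, single-output DEA: efficiency is each branch's
# productivity ratio relative to the best-performing branch.
ratio = served / staff
efficiency = ratio / ratio.max()   # 1.0 marks the efficient frontier
```

Branches scoring below 1.0 can, in principle, serve the same output with proportionally less input.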
Deep learning convolutional neural networks have been widely used to recognize or classify voice. Various techniques have been used together with convolutional neural networks to prepare voice data before the training process when developing a classification model. However, not every model produces good classification accuracy, as there are many types of voice and speech. Classification of Arabic alphabet pronunciation is one such type, and accurate pronunciation is required when learning to read the Qur'an. Thus, processing the pronunciation data and training on the processed data require a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed to
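The padding step can be sketched as follows: recordings of different durations are zero-padded (or truncated) to one fixed length so they share a single network input shape. The target length and handling of over-long signals here are assumptions for illustration, not the paper's exact preprocessing.

```python
import numpy as np

def pad_to_length(signal, target_len):
    """Zero-pad (or truncate) a 1-D audio signal to a fixed length so that
    variable-length recordings share one CNN input shape."""
    signal = np.asarray(signal, dtype=np.float32)
    if len(signal) >= target_len:
        return signal[:target_len]          # truncate over-long recordings
    out = np.zeros(target_len, dtype=np.float32)
    out[:len(signal)] = signal              # pad short ones with silence
    return out
```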
In this work, a laser wireless video communication system using intensity modulation / direct detection (IM/DD) over a 1 km range between transmitter and receiver is experimentally investigated and demonstrated. A beam expander and a beam collimator were implemented to collimate the laser beam at the transmitter and to focus the beam at the receiver, respectively. The results show that an IM/DD communication system using a laser diode is quite attractive for transmitting video signals. A signal-to-noise ratio (S/N) higher than 20 dB is achieved in this work.
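The reported S/N figure follows the standard decibel definition, S/N = 10 log10(P_signal / P_noise); a minimal sketch of the computation from sample arrays:

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB: 10 * log10(P_signal / P_noise),
    where power is the mean square of the samples."""
    p_s = np.mean(np.square(signal))
    p_n = np.mean(np.square(noise))
    return 10.0 * np.log10(p_s / p_n)
```

A signal whose power is 100 times the noise power yields exactly 20 dB, the threshold the system exceeds.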
In this paper, we derive and prove the stability bounds of the momentum coefficient µ and the learning rate η of the back-propagation updating rule in artificial neural networks. The theoretical upper bound of the learning rate η is derived and its practical approximation is obtained.
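The updating rule in question is gradient descent with momentum, v ← µv − η∇f(w), w ← w + v; a minimal sketch on a simple quadratic (the objective, step count, and parameter values below are illustrative assumptions, not the paper's derivation):

```python
import numpy as np

def momentum_descent(grad, w0, lr=0.1, mu=0.9, steps=200):
    """Gradient descent with momentum:
        v <- mu * v - lr * grad(w);  w <- w + v.
    Stability depends jointly on the momentum coefficient mu
    and the learning rate lr."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mu * v - lr * grad(w)
        w = w + v
    return w

# Minimize f(w) = 0.5 * ||w||^2, whose gradient is w; the minimizer is 0.
w_star = momentum_descent(lambda w: w, [4.0, -3.0])
```

With µ and η inside the stability bounds the iterates spiral into the minimum; pushing η past its upper bound makes them diverge instead.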
Artificial neural networks (ANNs) are among the important statistical methods widely used in a range of applications in various fields. They simulate the work of the human brain in receiving a signal, processing the data in a cell, and sending it on to the next cell. An ANN is a system consisting of a number of layers linked together (input, hidden, output). A comparison was made between three types of neural networks: the feed-forward neural network (FFNN), the back-propagation network (BPL), and the recurrent neural network (RNN). The study found that the lowest false prediction rate was for the recurrent network architecture, using data on graduate students at the College of Administration and Economics, Univer
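The input-hidden-output structure described above can be sketched as a single forward pass; the layer sizes and sigmoid activation are assumptions for illustration, not the architectures compared in the study.

```python
import numpy as np

def sigmoid(z):
    """Squashing activation applied inside each cell."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One forward pass through input -> hidden -> output layers: each
    layer weights its inputs, adds a bias, and applies an activation."""
    h = sigmoid(W1 @ x + b1)      # hidden layer
    return sigmoid(W2 @ h + b2)   # output layer
```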
In this study, we compare the LASSO and SCAD methods, two special methods for dealing with models in partial quantile regression. The Nadaraya-Watson kernel was used to estimate the nonparametric part, and the rule-of-thumb method was used to estimate the smoothing bandwidth (h). The penalty methods proved efficient in estimating the regression coefficients, but the SCAD method was the best according to the mean squared error (MSE) criterion, after estimating the missing data using the mean imputation method.
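Two of the ingredients above can be sketched directly: the soft-thresholding operator that gives LASSO its variable-selection behavior, and mean imputation of missing values. Both are generic illustrations, not the study's full estimation procedure.

```python
import numpy as np

def soft_threshold(z, lam):
    """LASSO's soft-thresholding operator: shrinks coefficients toward
    zero and sets small ones exactly to zero, which is how the LASSO
    penalty selects variables."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def mean_impute(x):
    """Replace missing values (NaN) with the mean of the observed values."""
    x = np.asarray(x, dtype=float)
    x[np.isnan(x)] = np.nanmean(x)
    return x
```

SCAD uses a related but non-convex penalty that shrinks large coefficients less aggressively than LASSO, which is one reason it can win on MSE.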