Steganography is the technique of concealing secret data within other everyday files of the same or a different type, and hiding data in this way has become essential to digital information security. This work aims to design a stego method that can effectively hide a message inside the frames of a video file. A video steganography model is proposed in which a convolutional neural network (CNN) is trained to hide a video (or images) within another video. Using a CNN serves the two main goals of any steganographic method. The first is security (resistance to detection and breaking by steganalysis programs): because the weights and architecture are randomized, the exact way the network hides the information cannot be known to anyone who does not possess the weights. The second is hiding capacity: the CNN learns to identify the redundant areas of the cover and, as a result, gains more room for hidden data. In the proposed model, the hiding and revealing networks are trained concurrently and are designed to work as a pair. The model learns the patterns of images, which helps it decide which parts of the cover image are redundant so that more pixels can be hidden there. The CNN is implemented in Keras with a TensorFlow backend and trained on about 45,000 random 256x256 RGB images from the ImageNet dataset, so the model works on images taken from a wide range of sources. Additionally, block shuffling is incorporated as an encryption step to improve security, and image enhancement methods are used to improve output quality. The results show that the proposed method achieves a high security level and high embedding capacity, performs well in terms of visibility and resistance to attacks, and successfully deceives both human observers and steganalysis programs.
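As an illustration of how such a paired hiding/reveal network can be set up in Keras with a TensorFlow backend, here is a minimal sketch; the layer counts, filter sizes, and losses are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of paired hiding/reveal CNNs in Keras (TensorFlow backend).
# Layer depths and filter counts are assumptions, not the paper's architecture.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Plain Conv-ReLU block used by both sub-networks.
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_stego_pair(img_shape=(256, 256, 3)):
    cover = layers.Input(shape=img_shape, name="cover")
    secret = layers.Input(shape=img_shape, name="secret")

    # Hiding network: learns where the cover is redundant and embeds the secret there.
    h = layers.Concatenate()([cover, secret])
    for f in (64, 64, 32):
        h = conv_block(h, f)
    stego = layers.Conv2D(3, 1, padding="same", activation="sigmoid", name="stego")(h)

    # Reveal network: recovers the secret from the stego image alone.
    r = stego
    for f in (64, 64, 32):
        r = conv_block(r, f)
    revealed = layers.Conv2D(3, 1, padding="same", activation="sigmoid", name="revealed")(r)

    # Trained concurrently as a pair: one loss keeps the stego image close to the
    # cover, the other keeps the revealed image close to the secret.
    model = Model([cover, secret], [stego, revealed])
    model.compile(optimizer="adam", loss={"stego": "mse", "revealed": "mse"})
    return model

model = build_stego_pair()
model.summary()
```

Because the reveal network sees only the stego image, the pair can only be separated by someone holding the trained weights, which is the security property the abstract emphasizes.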
The purpose of this paper is to find the best multiplier approximation of unbounded functions in –space by using some discrete linear positive operators. We also estimate the degree of the best multiplier approximation in terms of the modulus of continuity and the averaged modulus.
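For orientation, hedged sketches of the quantities named in the abstract (the paper's exact space, operators and constants are not reproduced here): the modulus of continuity and the averaged (Sendov–Popov) modulus are typically defined as
\[
\omega(f,\delta)=\sup_{0<|h|\le\delta}\|f(\cdot+h)-f(\cdot)\|,\qquad
\tau(f,\delta)_p=\bigl\|\omega(f,\cdot\,;\delta)\bigr\|_p,
\]
where \(\omega(f,x;\delta)=\sup\{|f(t)-f(s)|: t,s\in[x-\delta/2,\,x+\delta/2]\}\) is the local modulus, and a typical degree-of-approximation estimate for a sequence of positive linear operators \(L_n\) reads
\[
\|L_n f - f\| \le C\,\omega(f,\delta_n)\quad\text{or}\quad \|L_n f - f\|_p \le C\,\tau(f,\delta_n)_p,\qquad \delta_n\to 0 .
\]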
The tax system, like any other system, is a set of interrelated, complementary elements and parts that interact to achieve specific goals, and it is a natural reflection of the economic, social and political conditions prevailing in society. The objectives of tax policy are therefore formulated in line with the objectives of economic policy in general, which means that any change in economic policy clearly affects fiscal policy measures and tax policy in particular.
The research problem concerning the impact of foreign direct investment on the Iraqi tax system focused on the study of foreign direct investment and the role it plays in developing and improving the economic reality and its implications.
The Weibull distribution is one of the most widely applied distributions in real life. It resembles the normal distribution in the breadth of its applications, and it can be applied in many fields, such as industrial engineering (to represent replacement and manufacturing times), weather forecasting, and reliability and survival-function studies in the medical and communication engineering fields.
In this paper, the scale parameter of the Weibull distribution has been estimated by a Bayesian method based on Jeffreys prior information as a first method; the Jeffreys prior information was then enhanced and used as a second method.
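As background, a hedged sketch of the standard Jeffreys-prior computation for the Weibull scale, under an assumed parameterization with known shape \(\beta\) (the paper's improved prior is not reproduced here):
\[
f(x;\theta,\beta)=\frac{\beta}{\theta}\,x^{\beta-1}e^{-x^{\beta}/\theta},\quad x>0,
\qquad g(\theta)\propto \frac{1}{\theta},
\]
so with \(T=\sum_{i=1}^{n} x_i^{\beta}\) the posterior of \(\theta\) is inverse-gamma with parameters \((n,T)\), and the Bayes estimator of the scale under squared-error loss is
\[
\hat\theta = \mathbb{E}[\theta \mid x_1,\dots,x_n] = \frac{T}{n-1}, \qquad n>1 .
\]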
Incremental forming is a flexible sheet-metal forming process performed by using simple tools to locally deform a sheet of metal along a predefined tool path without the use of dies. This work presents the single-point incremental forming process for producing a pyramid geometry and studies the effect of tool geometry, tool diameter, and spindle speed on the residual stresses. The residual stresses were measured with an ORIONRKS 6000 measuring instrument at four angles (0°, 15°, 30°, and 45°), and the average value of the residual stresses was determined; the residual stress in the original blanks was 10.626 MPa. X-ray diffraction technology was used to measure the residual stresses.
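Since the stresses were evaluated at four tilt angles, the standard sin²ψ evaluation is the usual way such instruments turn those measurements into a stress value; the sketch below assumes hypothetical lattice spacings and elastic constants, not the paper's data.

```python
# Sketch of the standard sin^2(psi) evaluation behind XRD residual-stress
# instruments; tilt angles match the text, but lattice spacings and elastic
# constants are illustrative assumptions.
import numpy as np

def residual_stress_sin2psi(psi_deg, d_psi, d0, E_mpa, nu):
    """Fit lattice strain vs sin^2(psi) and return the residual stress in MPa."""
    eps = (np.asarray(d_psi) - d0) / d0       # lattice strain at each tilt
    x = np.sin(np.radians(psi_deg)) ** 2      # sin^2(psi) abscissa
    slope = np.polyfit(x, eps, 1)[0]          # slope of the linear fit
    return E_mpa / (1.0 + nu) * slope         # sigma = E/(1+nu) * d(eps)/d(sin^2 psi)

# Tilts of 0, 15, 30 and 45 degrees as in the text; spacings are hypothetical.
psi = [0, 15, 30, 45]
d = [1.17010, 1.17013, 1.17021, 1.17032]      # measured d-spacings (angstrom)
sigma = residual_stress_sin2psi(psi, d, d0=1.17008, E_mpa=210e3, nu=0.29)
print(f"residual stress ~ {sigma:.1f} MPa")
```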
In this work, functionally graded materials were synthesized by a centrifugal technique at different volume fractions (0.5, 1, 1.5, and 2% Vf) with a rotation speed of 1200 rpm and a constant rotation time of T = 6 min. The mechanical properties were characterized to compare the graded and non-graded nanocomposites with the pure epoxy material. The mechanical tests showed that adding alumina (Al2O3) nanoparticles enhanced both the graded and non-graded materials relative to pure epoxy. The maximum difference in impact strength occurred in the FGM loaded from the side rich in nano-alumina, where the maximum value, at 1% Vf, was 133.33% of that of the epoxy side of the sample. The flexural strength and Young's modulus of the functionally graded samples were also evaluated.
An efficient modification and a novel technique combining the homotopy concept with the Adomian decomposition method (ADM) to obtain an accurate analytical solution for the Riccati matrix delay differential equation (RMDDE) is introduced in this paper. Both methods are very efficient and effective. The whole integral part of the ADM is used instead of the integral part of the homotopy technique. The major feature of the current technique is that it gives a large convergence region of iterative approximate solutions: the results acquired by this technique give better approximations over a larger region than previously. Finally, the suggested technique is efficient and easy, and may be extended to other nonlinear problems.
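For orientation, a hedged sketch of the plain ADM recursion for a scalar Riccati delay equation (the paper's matrix system and homotopy modification are not reproduced here). For
\[
y'(t) = p(t) + q(t)\,y(t) + r(t)\,y(t-\tau)^2, \qquad y(t)=\phi(t)\ \text{for } t\le 0,
\]
write \(y=\sum_{n\ge 0} y_n\) and expand the quadratic nonlinearity in Adomian polynomials \(y(t-\tau)^2=\sum_{n\ge 0} A_n\), where
\[
A_n=\frac{1}{n!}\,\frac{d^n}{d\lambda^n}\Bigl[\Bigl(\sum_{k\ge 0}\lambda^k y_k(t-\tau)\Bigr)^{2}\Bigr]_{\lambda=0},
\]
giving the recursion
\[
y_0(t)=\phi(0)+\int_0^t p(s)\,ds,\qquad
y_{n+1}(t)=\int_0^t \bigl[q(s)\,y_n(s)+r(s)\,A_n(s)\bigr]\,ds .
\]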
Electrical discharge machining (EDM) is a widespread nontraditional machining (NTM) process for manufacturing parts with complicated geometry, or parts made of very hard metals, that are difficult to machine by traditional operations. EDM is a material removal (MR) process characterized by the use of electrical discharge erosion. This paper discusses the optimal EDM parameters for high-speed steel (HSS) AISI M2 as a workpiece, using copper and brass as electrodes. The input parameters used in the experimental work are current (10, 24 and 42 A), pulse-on time (100, 150 and 200 µs), and pulse-off time (4, 12 and 25 µs), which affect the material removal rate (MRR), electrode wear rate (EWR) and wear ratio (WR).
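For context, MRR, EWR and WR are conventionally computed from weight loss per unit machining time; the densities, weight losses and time in this sketch are illustrative assumptions, not the experimental values.

```python
# Conventional weight-loss formulas for the EDM response measures named above.
# Densities, weight losses, and machining time are illustrative assumptions.
def mrr(delta_w_g, density_g_mm3, time_min):
    """Material removal rate of the workpiece in mm^3/min."""
    return delta_w_g / density_g_mm3 / time_min

def ewr(delta_w_g, density_g_mm3, time_min):
    """Electrode wear rate in mm^3/min."""
    return delta_w_g / density_g_mm3 / time_min

# Hypothetical run: HSS workpiece (~8.1e-3 g/mm^3), copper electrode (~8.96e-3 g/mm^3).
m = mrr(0.450, 8.1e-3, 10.0)
e = ewr(0.050, 8.96e-3, 10.0)
wr = e / m  # wear ratio is commonly taken as EWR/MRR
print(f"MRR = {m:.2f} mm^3/min, EWR = {e:.2f} mm^3/min, WR = {wr:.3f}")
```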
The university course timetable problem (UCTP) is typically a combinatorial optimization problem. Manually producing a useful timetable requires many days of effort, and the results are still unsatisfactory. Various state-of-the-art methods (heuristic, meta-heuristic) are used to solve UCTP satisfactorily; however, these approaches typically produce instance-specific solutions. The hyper-heuristic framework adequately addresses this complex problem. This research proposes a Particle Swarm Optimizer-based Hyper-Heuristic (HH PSO) to solve UCTP efficiently. PSO is used as the higher-level method that selects a sequence of low-level heuristics (LLH), which in turn generates an optimal solution.
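To make the higher-level/lower-level split concrete, here is a minimal, heavily stubbed sketch of a PSO hyper-heuristic in which each particle encodes an LLH sequence; the LLH names, the two stub routines, and all PSO constants are hypothetical.

```python
# Stubbed sketch of a PSO hyper-heuristic for UCTP: each particle is a
# real-valued vector decoded into a sequence of low-level heuristic (LLH)
# indices. LLH names, the stubs, and the constants are hypothetical.
import random

LLHS = ["move_event", "swap_events", "swap_timeslots", "change_room"]

def apply_llh(name, timetable):
    # Stub: a real implementation would mutate the timetable via the named LLH.
    return timetable

def penalty(timetable):
    # Stub: a real implementation would count hard/soft constraint violations.
    return 0

def decode(particle):
    # Round each coordinate to the nearest valid LLH index.
    return [min(len(LLHS) - 1, max(0, round(x))) for x in particle]

def fitness(particle, timetable):
    t = dict(timetable)  # work on a copy of the timetable
    for i in decode(particle):
        t = apply_llh(LLHS[i], t)
    return penalty(t)

def pso_hh(timetable, n=20, seq_len=10, iters=100, w=0.5, c1=1.5, c2=1.5):
    dim = len(LLHS) - 1
    pos = [[random.uniform(0, dim) for _ in range(seq_len)] for _ in range(n)]
    vel = [[0.0] * seq_len for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p, timetable) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(seq_len):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i], timetable)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return decode(gbest), gbest_f

best_seq, best_pen = pso_hh({"events": ()})
```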
The monthly time series of total suspended solids (TSS) concentrations in the Euphrates River at Nasria was analyzed as a time series. The data used for the analysis was the monthly series during 1977-2000.
The series was tested for nonhomogeneity and found to be nonhomogeneous; a significant positive jump was observed after 1988. This nonhomogeneity was removed using a method suggested by Yevjevich (7). The homogeneous series was then normalized using the Box and Cox (2) transformation. The periodic component of the series was fitted using harmonic analysis and removed from the series to obtain the dependent stochastic component. This component was then modeled using a first-order autoregressive model (Markov chain).
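The modelling chain (Box-Cox normalization, harmonic fit of the periodic component, AR(1) on the residual) can be sketched as follows on synthetic data; the Box-Cox exponent, the single annual harmonic, and the series values are assumptions.

```python
# Sketch of the described pipeline on synthetic monthly data: Box-Cox
# normalization, harmonic fit of the periodic component, then an AR(1)
# (Markov) model of the stochastic residual.
import numpy as np

rng = np.random.default_rng(0)
n, period = 288, 12                       # 24 years of synthetic monthly TSS
t = np.arange(n)
series = 50 + 10 * np.sin(2 * np.pi * t / period) + rng.gamma(2.0, 5.0, n)

lam = 0.3                                 # assumed Box-Cox exponent
z = (series**lam - 1) / lam               # Box-Cox transform

# Periodic component: least-squares fit of one annual harmonic.
X = np.column_stack([np.ones(n),
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
resid = z - X @ coef                      # dependent stochastic component

# AR(1): e_t = phi * e_{t-1} + a_t, with phi from lag-1 least squares.
phi = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
print(f"fitted AR(1) coefficient phi ~ {phi:.3f}")
```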