A deepfake is media produced with artificial-intelligence techniques to create convincing image, audio, and video hoaxes; it concerns celebrities and ordinary people alike because such fakes are easy to manufacture. Deepfakes, especially high-quality ones, are hard to recognize for both people and current detection approaches. As a defense against deepfake techniques, various methods for detecting deepfakes in images have been suggested. Most of them have limitations, such as working only with a single face per image, or requiring the face to be frontal with both eyes and the mouth visible, depending on which part of the face they analyze. Beyond that, few of them focus on the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect of the deepfake detection task and proposes pre-processing steps that improve accuracy and close the gap between training and validation results using simple operations. Additionally, it differs from prior work by handling faces oriented in various directions within the image, distinguishing the face of interest in an image containing multiple faces, and segmenting the face using facial landmark points. All of this is done using face detection, face-box attributes, facial landmarks, and key points from the MediaPipe tool together with a pre-trained model (DenseNet121). Finally, the proposed model was evaluated on the Deepfake Detection Challenge dataset and, after training for a few epochs, achieved an accuracy of 97% in detecting deepfakes.
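As a rough illustration of the pipeline this abstract describes, the sketch below detects faces with MediaPipe, crops them, and feeds the crops to a pre-trained DenseNet121 backbone with a binary real/fake head. It is a minimal sketch, not the paper's implementation: the `crop_face` helper, the margin value, and the classifier head are our own illustrative choices.

```python
# Minimal sketch of MediaPipe face cropping feeding a DenseNet121
# classifier; function names and parameters are illustrative, not the
# paper's actual configuration.
import cv2
import mediapipe as mp
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras import layers, Model

mp_face = mp.solutions.face_detection

def crop_face(bgr_image, margin=0.1):
    """Detect faces and return crops resized for DenseNet121 (224x224)."""
    h, w = bgr_image.shape[:2]
    crops = []
    with mp_face.FaceDetection(model_selection=1,
                               min_detection_confidence=0.5) as detector:
        results = detector.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
        for det in results.detections or []:      # handles multiple faces
            box = det.location_data.relative_bounding_box
            x0 = max(int((box.xmin - margin) * w), 0)
            y0 = max(int((box.ymin - margin) * h), 0)
            x1 = min(int((box.xmin + box.width + margin) * w), w)
            y1 = min(int((box.ymin + box.height + margin) * h), h)
            crops.append(cv2.resize(bgr_image[y0:y1, x0:x1], (224, 224)))
    return crops

# Binary real/fake classifier on top of a pre-trained DenseNet121 backbone.
backbone = DenseNet121(include_top=False, weights="imagenet",
                       pooling="avg", input_shape=(224, 224, 3))
output = layers.Dense(1, activation="sigmoid")(backbone.output)
model = Model(backbone.input, output)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```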
In this paper, a compression system with a hybrid architecture is introduced, based on the wavelet transform, polynomial representation, and quadtree coding. The biorthogonal (tap 9/7) wavelet transform is used to decompose the image signal, and a 2D polynomial representation is utilized to prune the high-scale variation present in the image signal. Quantization with quadtree coding, followed by shift coding, is applied to compress the detail bands and the residue part of the approximation subband. The test results indicate that the introduced system is simple and fast, and that it leads to better compression gain in comparison with the case of using a first-order polynomial approximation.
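A minimal sketch of the first two stages, under the assumption that PyWavelets' 'bior4.4' corresponds to the biorthogonal 9/7 filter pair: decompose the image, fit a polynomial to the approximation subband, and keep the residue. A first-order fit is shown as the simplest instance (the paper reports gains over this baseline); the quadtree and shift-coding stages are omitted.

```python
# Sketch of wavelet decomposition plus polynomial pruning of the
# approximation subband; 'bior4.4' is assumed to be the tap 9/7 pair.
import numpy as np
import pywt

def poly1_residue(band):
    """Fit p(x, y) = a + b*x + c*y to a subband, return (coeffs, residue)."""
    rows, cols = band.shape
    y, x = np.mgrid[0:rows, 0:cols]
    A = np.column_stack([np.ones(band.size), x.ravel(), y.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, band.ravel(), rcond=None)
    residue = band - (A @ coeffs).reshape(rows, cols)
    return coeffs, residue

image = np.random.rand(256, 256)        # stand-in for a grayscale image
LL, (LH, HL, HH) = pywt.dwt2(image, "bior4.4")
coeffs, residue = poly1_residue(LL)     # residue of the approximation band
# In the full system the detail bands (LH, HL, HH) and this residue would
# then be quantized and entropy-coded (quadtree + shift coding).
```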
A novel median filter based on the crow optimization algorithm (OMF) is suggested to reduce random salt-and-pepper noise and improve the quality of RGB-color and gray images. The fundamental idea of the approach is that the crow optimization algorithm first detects noisy pixels and then replaces them with an optimal median value chosen by maximizing a fitness function. Finally, the standard measures peak signal-to-noise ratio (PSNR), structural similarity, absolute square error, and mean square error were used to test the performance of the suggested filters (the original and the improved median filter) in removing noise from images. The simulation was carried out in MATLAB R2019b, and the results…
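For context, the snippet below reproduces only the baseline the paper compares against: salt-and-pepper noise, a plain median filter, and PSNR/SSIM scoring. It is a Python stand-in for the MATLAB simulation; the crow-optimization step itself is not reproduced, and the noise amount and filter size are our own illustrative values.

```python
# Baseline comparison for the proposed OMF: salt-and-pepper noise,
# plain median filtering, and PSNR/SSIM measurement.
from scipy.ndimage import median_filter
from skimage import data, util
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = util.img_as_float(data.camera())            # sample grayscale image
noisy = util.random_noise(clean, mode="s&p", amount=0.1)
denoised = median_filter(noisy, size=3)             # plain 3x3 median filter

print("PSNR:", peak_signal_noise_ratio(clean, denoised, data_range=1.0))
print("SSIM:", structural_similarity(clean, denoised, data_range=1.0))
```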
This research examines the issue of Internet banking services and their impact on the customer's perceived value, in order to assess the potential of Iraqi commercial banks to provide Internet banking services and to determine the customers' level of perception of the value of such services. The research aims to demonstrate the effect that Internet banking services have on the customer's perceived value and to find out how online banking services can be used to add value that the customer perceives. The main finding was that the sample banks do not have a sophisticated network of electronic windows through which banking services are provided that would allow customers to manage their accounts and perform various operations through the…
This paper discusses a method for determining the permeability values of the Tertiary reservoir units (Jeribe, Dhiban, Euphrates) in the Ajeel field. This study is important for determining the permeability values needed to assess the economic value of oil in the Tertiary formation. The study is based on core data from nine wells and log data from twelve wells: AJ-1, AJ-4, AJ-6, AJ-7, AJ-10, AJ-12, AJ-13, AJ-14, AJ-15, AJ-22, AJ-25, and AJ-54, of which three wells (AJ-4, AJ-6, and AJ-10) were chosen for study in this paper. Three methods are used in this work, and the study indicates that one of the best ways of obtaining permeability is the neural network method, because the values of permeability obtained…
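A hedged sketch of what the neural-network approach typically looks like: a small regressor mapping well-log curves to core permeability, trained in log space because permeability spans orders of magnitude. The log curves used here (GR, RHOB, NPHI, RT), the network size, and the synthetic data are all assumptions for illustration, not the study's actual inputs.

```python
# Illustrative log-to-permeability regression with a small neural network;
# feature set and data are stand-ins, not the study's.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))      # stand-in for [GR, RHOB, NPHI, RT] samples
y = rng.lognormal(size=500)        # stand-in for core permeability (mD)

X_tr, X_te, y_tr, y_te = train_test_split(X, np.log10(y), test_size=0.2)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000))
model.fit(X_tr, y_tr)
perm_pred = 10 ** model.predict(X_te)   # back-transform log10(k) to mD
```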
Abstract
Objectives: To find out the association between enhancing learning needs and the demographic characteristics of gender, education level, and age.
Methods: This study was conducted on a purposive sample, selected to obtain representative and accurate data, consisting of (90) patients recovering from myocardial infarction at the Missan Center for Cardiac Diseases and Surgery; (10) patients were excluded for the pilot study. Data were analyzed using a descriptive statistical approach of frequency and percentage, together with analysis of variance (ANOVA).
Results: The study findings show that there was a sign…
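For readers unfamiliar with the analysis named in the Methods, this is what a one-way ANOVA looks like in practice. The three groups and their scores are purely illustrative stand-ins (e.g., for age brackets and learning-need ratings), not the study's data.

```python
# One-way ANOVA sketch with SciPy; groups and values are illustrative.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
group_a = rng.normal(3.2, 0.5, 30)   # e.g., patients aged under 40
group_b = rng.normal(3.5, 0.5, 30)   # e.g., patients aged 40 to 60
group_c = rng.normal(3.4, 0.5, 30)   # e.g., patients aged over 60

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```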
The study aimed to identify the effect of using the electronic concept-maps method in learning some floor-exercise skills in artistic gymnastics for third-year students, as well as to identify the better of the two research groups (experimental and control) in learning and retaining some floor-exercise skills in artistic gymnastics. The experimental method was used, and the research sample comprised third-year students of the College of Physical Education and Sports Sciences, University of Baghdad; (10) students were selected for each of the experimental and control groups randomly by lottery, and after the completion of the period of implementation of the experiment wh…
Currently, one of the topical areas of application of machine learning methods is the prediction of material characteristics. The aim of this work is to develop machine learning models for determining the rheological properties of polymers from experimental stress-relaxation curves. The paper presents an overview of the main directions of metaheuristic approaches (local search, evolutionary algorithms) to solving combinatorial optimization problems. Metaheuristic algorithms for solving some important combinatorial optimization problems are described, with special emphasis on the construction of decision trees. A comparative analysis of algorithms for solving the regression problem with CatBoostRegressor has been carried out. The object of…
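A minimal sketch of the CatBoostRegressor comparison mentioned above, fitting a relaxation-like target. The input features (time, strain, temperature), the synthetic decay curve, and the hyperparameters are assumptions for illustration, not taken from the paper.

```python
# CatBoostRegressor sketch on synthetic stress-relaxation-like data;
# features and hyperparameters are illustrative.
import numpy as np
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.uniform(size=(1000, 3))         # stand-in: [time, strain, temperature]
y = np.exp(-3 * X[:, 0]) + 0.05 * rng.normal(size=1000)  # decaying target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = CatBoostRegressor(iterations=500, depth=6, learning_rate=0.05,
                          loss_function="RMSE", verbose=False)
model.fit(X_tr, y_tr, eval_set=(X_te, y_te))

pred = model.predict(X_te)
print("held-out RMSE:", np.sqrt(np.mean((pred - y_te) ** 2)))
```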
The widespread use of the Internet all over the world, together with the huge number of users exchanging important information over it, highlights the need for new methods to protect this information from corruption or modification by intruders. This paper suggests a new method that ensures the text of a given document cannot be modified by intruders. The method consists of a mixture of three steps. The first step borrows some concepts of the "Quran" security system to detect certain types of change in a given text: a key for each paragraph is extracted from the group of letters in that paragraph whose positions are multiples of a given prime number. This step cannot detect the ch…
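A toy sketch of that first step as described: collect the letters at positions that are multiples of a chosen prime and derive a per-paragraph key from them. Hashing the selected letters with SHA-256 is our own choice here; the abstract does not specify how the key is encoded.

```python
# Toy per-paragraph key from letters at prime-multiple positions;
# the SHA-256 hashing step is an assumption, not the paper's detail.
import hashlib

def paragraph_key(paragraph: str, prime: int = 7) -> str:
    letters = [ch for ch in paragraph if ch.isalpha()]
    # 1-based letter positions that are multiples of the prime
    selected = "".join(letters[i - 1]
                       for i in range(prime, len(letters) + 1, prime))
    return hashlib.sha256(selected.encode("utf-8")).hexdigest()

original = "This paragraph is protected against tampering by its key."
tampered = original.replace("protected", "protcted")   # one letter deleted
assert paragraph_key(original) != paragraph_key(tampered)
# Note: edits that leave every selected position unchanged would slip
# through this step, consistent with the limitation the abstract hints at.
```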
In the field of image processing and computer vision, it is important to represent an image by its information. Image information comes from the image's features, which are extracted using feature detection/extraction techniques and feature description. In computer vision, features are informative data. The human eye extracts information from a raw image effortlessly, but a computer cannot recognize image information directly, which is why various feature extraction techniques have been proposed and have progressed rapidly. This paper presents a general overview of the categories of feature extraction for images.
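As one concrete instance of the detect-and-describe pipeline such surveys cover, the snippet below runs OpenCV's ORB detector; ORB is our example choice among many such techniques, and the image path is a placeholder.

```python
# Example feature detection/description with OpenCV's ORB; the input
# file name is a placeholder.
import cv2

image = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("replace 'example.jpg' with a real image path")

orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(image, None)
print(f"{len(keypoints)} keypoints, descriptor shape: {descriptors.shape}")

# Visualize the detected keypoints on the image.
annotated = cv2.drawKeypoints(image, keypoints, None, color=(0, 255, 0))
cv2.imwrite("keypoints.jpg", annotated)
```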