The main reason for the emergence of the term deepfake (from "deep learning" and "fake") is the evolution of artificial intelligence techniques, especially deep learning. Deep learning algorithms, which solve problems automatically when given large sets of data, are used to swap faces in digital media and create fake media with a realistic appearance. To increase the accuracy of distinguishing a real video from a fake one, a new model has been developed based on deep learning and noise residuals. Using Steganalysis Rich Model (SRM) filters, we extract a low-level noise map that is used as input to a lightweight convolutional neural network (CNN) to classify a face as real or fake. The results of our work show that the training accuracy of the CNN model can be significantly enhanced by using noise residuals instead of RGB pixels. Compared to alternative methods, the advantages of our method include higher detection accuracy, lower training time, and fewer layers and parameters. Index Terms— Deepfake, Deep Learning, Steganalysis Rich Model, Convolutional Neural Network.
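The abstract does not specify which SRM filters are used; the sketch below illustrates the general idea with one commonly used SRM high-pass kernel (the 5×5 "KV" filter), assuming a grayscale input. The resulting residual map, rather than raw RGB pixels, would be fed to the CNN.

```python
import numpy as np
from scipy.signal import convolve2d

# One common SRM high-pass kernel (the 5x5 "KV" filter, scaled by 1/12).
# The paper's exact filter bank is not given here; this choice is illustrative.
SRM_KV = np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
], dtype=np.float64) / 12.0

def noise_residual(gray_image):
    """Return the low-level noise map obtained by high-pass filtering.

    Because the kernel's coefficients sum to zero, smooth image content
    is suppressed and only fine noise-like structure survives."""
    return convolve2d(gray_image, SRM_KV, mode="same", boundary="symm")
```

A perfectly flat image yields a zero residual, which is what makes such filters useful: any deviation in the map reflects local noise statistics rather than scene content.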
The present study discusses problem-based learning in the Iraqi classroom. This learner-centered method aims to involve all learners in collaborative activities. To fulfill the aims, the study tests the hypothesis, which reads as follows: "It is hypothesized that there are no statistically significant differences between the achievements of the experimental group and the control group." Thirty learners were selected as the sample of the present study. The Mann-Whitney test for two independent samples was used to analyze the results. The analysis shows that the experimental group's members, who were taught according to problem-based learning, obtained higher scores than the control group's members, who were taught according to the traditional method. This
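The Mann-Whitney test named above can be run with `scipy.stats.mannwhitneyu`. The scores below are hypothetical stand-ins (the study's actual data are not reproduced here); the pattern of higher experimental-group scores mirrors the reported finding.

```python
from scipy.stats import mannwhitneyu

# Hypothetical post-test scores for two independent groups of 15 learners
# each (illustrative only; not the study's actual data).
experimental = [88, 92, 75, 83, 90, 79, 85, 91, 78, 86, 84, 89, 77, 93, 82]
control      = [70, 65, 72, 68, 74, 61, 69, 73, 66, 71, 64, 74, 67, 62, 76]

# Two-sided Mann-Whitney U test for two independent samples.
stat, p = mannwhitneyu(experimental, control, alternative="two-sided")
print(f"U = {stat}, p = {p:.6f}")
if p < 0.05:
    print("Reject the null hypothesis: the groups differ significantly.")
```

The U statistic counts, over all cross-group pairs, how often an experimental score exceeds a control score; a value near the maximum (15 × 15 = 225 here) with a small p-value supports rejecting the null hypothesis of equal achievement.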
The present study investigates deep eutectic solvents (DESs) as potential media for enzymatic hydrolysis. A series of ternary ammonium- and phosphonium-based DESs were prepared at different molar ratios by mixing with aqueous glycerol (85%). The physicochemical properties, including surface tension, conductivity, density, and viscosity, were measured over the temperature range 298.15 K – 363.15 K. The eutectic points were strongly influenced by the variation of temperature. The eutectic points of choline chloride:glycerol:water (ratio 1:2.55:2.28) and methyltriphenylphosphonium bromide:glycerol:water (ratio 1:4.25:3.75) are 213.4 K and 255.8 K, respectively. The stability of the lipase enzyme isolated from porcine pancreas (PPL) a
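The reported molar ratios can be converted to mole fractions, which is often the form needed for phase-diagram or eutectic-composition work. A minimal arithmetic sketch, using the two ratios quoted in the abstract:

```python
# Convert a DES molar ratio to mole fractions (components in listed order).
def mole_fractions(ratios):
    total = sum(ratios)
    return [r / total for r in ratios]

# Choline chloride : glycerol : water = 1 : 2.55 : 2.28
chcl = mole_fractions([1.0, 2.55, 2.28])
# Methyltriphenylphosphonium bromide : glycerol : water = 1 : 4.25 : 3.75
mtpb = mole_fractions([1.0, 4.25, 3.75])

print([round(x, 3) for x in chcl])  # salt mole fraction is about 0.172
print([round(x, 3) for x in mtpb])  # salt mole fraction is about 0.111
```

So the choline chloride DES contains roughly 17 mol% salt, while the phosphonium DES contains roughly 11 mol% salt.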
Breast cancer is a heterogeneous disease characterized by molecular complexity. This research utilized three genetic expression profiles—gene expression, deoxyribonucleic acid (DNA) methylation, and micro ribonucleic acid (miRNA) expression—to deepen the understanding of breast cancer biology and contribute to the development of a reliable survival rate prediction model. During the preprocessing phase, principal component analysis (PCA) was applied to reduce the dimensionality of each dataset before computing consensus features across the three omics datasets. By integrating these datasets with the consensus features, the model's ability to uncover deep connections within the data was significantly improved. The proposed multimodal deep
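The per-dataset PCA step described above can be sketched as follows. The matrices here are synthetic stand-ins for the three omics datasets, and the component count and the simple concatenation of the reduced views are illustrative assumptions, not the paper's exact integration scheme.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-ins (samples x features) for the three omics matrices;
# real inputs would be gene expression, DNA methylation, and miRNA profiles.
gene_expr   = rng.normal(size=(100, 500))
methylation = rng.normal(size=(100, 300))
mirna       = rng.normal(size=(100, 200))

def reduce_dim(X, n_components=20):
    """Project one omics matrix onto its top principal components."""
    return PCA(n_components=n_components).fit_transform(X)

reduced = [reduce_dim(X) for X in (gene_expr, methylation, mirna)]
# Concatenating the reduced views is one simple form of multi-omics
# integration, on top of which consensus features could be computed.
fused = np.hstack(reduced)
print(fused.shape)  # 100 samples, 3 x 20 components
```

Reducing each modality separately before integration keeps the fused representation small and prevents the highest-dimensional dataset (here, gene expression) from dominating the joint feature space.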
Clinical keratoconus (KCN) detection is a challenging and time-consuming task. In the diagnosis process, ophthalmologists must review demographic and clinical ophthalmic examinations. The latter include slit-lamp examination, corneal topographic maps, and Pentacam indices (PI). We propose an Ensemble of Deep Transfer Learning (EDTL) based on corneal topographic maps. We consider four pretrained networks, SqueezeNet (SqN), AlexNet (AN), ShuffleNet (SfN), and MobileNet-v2 (MN), and fine-tune them on a dataset of KCN and normal cases, each including four topographic maps. We also consider a PI classifier. Then, our EDTL method combines the output probabilities of each of the five classifiers to obtain a decision b
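Combining the output probabilities of the five classifiers can be sketched as simple soft voting, shown below. The probability values are illustrative, and averaging followed by argmax is one common combination rule; the paper's exact rule may differ.

```python
import numpy as np

# Output probabilities of the five classifiers (four fine-tuned CNNs plus
# the Pentacam-indices classifier) for one case, over [KCN, normal].
# Values are illustrative, not taken from the paper.
probs = np.array([
    [0.80, 0.20],  # SqueezeNet
    [0.70, 0.30],  # AlexNet
    [0.65, 0.35],  # ShuffleNet
    [0.75, 0.25],  # MobileNet-v2
    [0.60, 0.40],  # PI classifier
])

# Soft voting: average the class probabilities, then take the argmax.
ensemble = probs.mean(axis=0)
decision = ["KCN", "normal"][int(np.argmax(ensemble))]
print(ensemble, decision)
```

Averaging tends to smooth out the errors of any single network, so the ensemble decision is usually more robust than the best individual classifier.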
A simple setup for a random number generator is proposed. Random number generation is based on shot-noise fluctuations in a p-i-n photodiode. These fluctuations, defined as shot noise, constitute a stationary random process whose statistical properties reflect the Poisson statistics associated with photon streams. Shot noise has its origin in the quantum nature of light and is related to vacuum fluctuations. Two photodiodes were used and their shot-noise fluctuations were subtracted; the difference was applied to a comparator to obtain the random sequence.
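The scheme can be simulated numerically: two independent Poisson photon counts stand in for the two photodiodes, their difference is formed, and a comparator at zero maps the sign of the difference to a bit. The mean photon count and the tie-handling rule below are modelling assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)  # pseudorandomness stands in for physical shot noise

def random_bits(n, mean_photons=1000.0):
    """Simulate the differential shot-noise scheme: subtract two
    independent Poisson photon counts and threshold the difference
    at zero, as the comparator does."""
    d1 = rng.poisson(mean_photons, size=n).astype(np.int64)
    d2 = rng.poisson(mean_photons, size=n).astype(np.int64)
    diff = d1 - d2
    # Discard exact ties so the comparator output stays unbiased.
    diff = diff[diff != 0]
    return (diff > 0).astype(np.uint8)

bits = random_bits(100_000)
print(bits[:16], bits.mean())  # the mean should be close to 0.5
```

Subtracting two statistically identical streams cancels any common-mode offset, which is why the resulting bit stream is balanced around 0.5 without calibration.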
The aim is to identify the effect of a deep learning strategy on mathematics achievement and practical intelligence among secondary school students during the 2022/2023 academic year. The experimental research method with two groups (experimental and control) and a post-test was adopted. The research community is represented by female students of the fifth scientific grade from the first Karkh Education Directorate. Sixty-one (61) female students were intentionally chosen and divided into two groups: an experimental group of (30) students taught according to the proposed strategy, and a control group of (31) students taught according to the usual method. For the purpose of collecting data for the experimen
In this paper, the weighted residual methods, namely the Collocation Method (CM), the Least Squares Method (LSM), and the Galerkin Method (GM), are used to solve the thin film flow (TFF) equation. The weighted residual methods were implemented to obtain an approximate solution to the TFF equation. The accuracy of the obtained results is checked by calculating the maximum error remainder functions (MER). Moreover, the outcomes were compared with the fourth-order Runge-Kutta method (RK4), and good agreement was achieved. All the evaluations were successfully implemented using the computer algebra system Mathematica® 10.
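The collocation variant of the weighted residual idea can be illustrated on a toy problem. The abstract's TFF equation is not reproduced here, so the sketch below instead solves the simple IVP y' + y = 0, y(0) = 1 on [0, 1] with a quadratic trial solution, forcing the residual to vanish at two assumed collocation points; the maximum error remainder (MER) is then the largest residual magnitude over the interval.

```python
import numpy as np

# Trial solution satisfying y(0) = 1:  y(x) = 1 + a1*x + a2*x**2
# Residual: R(x) = y'(x) + y(x) = 1 + a1*(1 + x) + a2*(2*x + x**2)

xc = np.array([1/3, 2/3])                     # assumed collocation points
A = np.column_stack([1 + xc, 2*xc + xc**2])   # coefficients of a1, a2 in R
b = -np.ones(2)                               # move the constant term right
a1, a2 = np.linalg.solve(A, b)                # enforce R(xc) = 0

y = lambda x: 1 + a1*x + a2*x**2
R = lambda x: 1 + a1*(1 + x) + a2*(2*x + x**2)

xs = np.linspace(0.0, 1.0, 201)
print("y(1) approx:", y(1.0), " exact:", np.exp(-1.0))
print("maximum error remainder:", np.abs(R(xs)).max())
```

Even with only two degrees of freedom the approximation lands within about 0.012 of the exact value e^(-1) at x = 1, which is the kind of MER-based accuracy check the paper applies to the TFF equation.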