General Background: Deep image matting is a fundamental task in computer vision, enabling precise foreground extraction from complex backgrounds, with applications in augmented reality, computer graphics, and video processing. Specific Background: Despite advances in deep learning-based methods, preserving fine details such as hair and transparency remains a challenge. Knowledge Gap: Existing approaches struggle to balance accuracy and efficiency, necessitating novel techniques to enhance matting precision. Aims: This study integrates deep learning with fusion techniques to improve alpha matte estimation, proposing a lightweight U-Net model that incorporates color-space fusion and preprocessing. Results: Experiments on the Adobe Composition-1k dataset demonstrate superior performance compared with traditional methods, achieving higher accuracy, faster processing, and improved boundary preservation. Novelty: The proposed model effectively combines deep learning with fusion techniques, enhancing matting quality while maintaining robustness across various environmental conditions. Implications: These findings highlight the potential of integrating fusion techniques with deep learning for image matting, offering valuable insights for future research in automated image processing applications, including augmented reality, gaming, and interactive video technologies. Highlights: Better precision: fusion techniques enhance fine-detail preservation. Faster processing: the lightweight U-Net improves speed and accuracy. Wide applications: useful for AR, gaming, and video processing. Keywords: Deep image matting, computer vision, deep learning, fusion techniques, U-Net
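The abstract describes the architecture only at a high level; the sketch below shows one way a lightweight U-Net with color-space fusion at the input could be organised. The channel counts, the RGB+HSV fusion choice, and names such as `LightMattingUNet` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption): a lightweight U-Net that fuses RGB and HSV
# representations of the input before predicting a single-channel alpha matte.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class LightMattingUNet(nn.Module):
    def __init__(self, base=16):
        super().__init__()
        # 6 input channels: 3 RGB + 3 HSV (the "color-space fusion" step).
        self.enc1 = conv_block(6, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # single-channel alpha matte

    def forward(self, rgb, hsv):
        x = torch.cat([rgb, hsv], dim=1)      # color-space fusion at the input
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))   # alpha values in [0, 1]


# Usage example on a dummy 256x256 input.
model = LightMattingUNet()
rgb = torch.rand(1, 3, 256, 256)
hsv = torch.rand(1, 3, 256, 256)
alpha = model(rgb, hsv)
print(alpha.shape)  # torch.Size([1, 1, 256, 256])
```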
In this research, an analysis of the behaviour of the standard Hueckel edge detection algorithm, using three-dimensional representations of the edge goodness criterion, is presented after applying the algorithm to a real high-texture satellite image; the edge goodness criterion is also analysed statistically. The Hueckel edge detection algorithm showed a direct exponential relationship between execution time and the disk radius used. The restrictions Hueckel stated in his papers are adopted in this research. A discussion of the resulting edge shape and its malformation is presented, since this is the first practical study applying the Hueckel edge detection algorithm to a real high-texture image containing ramp edges (a satellite image).
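The reported timing experiment measures execution time as a function of the operator's disk radius; the sketch below reproduces only that measurement pattern, substituting a simple least-squares plane fit for the full Hueckel basis-function fit, which is not reimplemented here. The radii, step size, and image are illustrative assumptions.

```python
# Sketch of the timing experiment (assumption): a placeholder per-disk
# least-squares plane fit stands in for the Hueckel basis-function fit;
# only the "execution time vs. disk radius" pattern is illustrated.
import time
import numpy as np


def disk_offsets(radius):
    """Pixel offsets inside a circular disk of the given radius."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = ys ** 2 + xs ** 2 <= radius ** 2
    return ys[mask], xs[mask]


def scan_with_disk(image, radius, step=8):
    """Slide the disk over the image and fit a plane to each neighbourhood."""
    dy, dx = disk_offsets(radius)
    A = np.column_stack([dy, dx, np.ones_like(dy)]).astype(float)
    h, w = image.shape
    for y in range(radius, h - radius, step):
        for x in range(radius, w - radius, step):
            values = image[y + dy, x + dx].astype(float)
            np.linalg.lstsq(A, values, rcond=None)  # placeholder for the edge fit


image = np.random.rand(256, 256)
for radius in (3, 5, 7, 9, 11):
    t0 = time.perf_counter()
    scan_with_disk(image, radius)
    print(f"radius={radius:2d}  time={time.perf_counter() - t0:.3f}s")
```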
In this research, a technique is proposed to enhance the performance of the frame-difference method for extracting moving objects from a video file. One of the main factors degrading performance is the presence of noise, which may cause moving objects to be identified incorrectly. It was therefore necessary to find a way to diminish the effect of this noise. Traditional average and median spatial filters can be used to handle such situations, but this work focuses on the spectral domain, using Fourier and wavelet transforms to reduce the noise effect. Experiments and statistical features (entropy and standard deviation) showed that these transforms can overcome such problems in an elegant way.
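As a rough illustration of the spectral-domain idea, the sketch below applies a Fourier low-pass filter to two frames before differencing and reports the entropy and standard deviation of the difference. The cut-off radius and binarisation threshold are illustrative assumptions; a wavelet variant (e.g. with PyWavelets) would follow the same pattern.

```python
# Minimal sketch (assumptions: cut-off radius and threshold are illustrative):
# Fourier low-pass denoising of two frames, then frame differencing.
import numpy as np


def fourier_lowpass(frame, cutoff=30):
    """Suppress high-frequency noise by zeroing FFT coefficients outside a radius."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    h, w = frame.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (ys - h / 2) ** 2 + (xs - w / 2) ** 2 <= cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))


def entropy(image, bins=256):
    """Shannon entropy of the grey-level histogram, used as a quality feature."""
    counts, _ = np.histogram(image, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))


# Two noisy dummy frames; in practice these are consecutive video frames.
rng = np.random.default_rng(0)
prev_frame = rng.normal(128, 20, (240, 320)).clip(0, 255)
curr_frame = (prev_frame + rng.normal(0, 20, prev_frame.shape)).clip(0, 255)

diff = np.abs(fourier_lowpass(curr_frame) - fourier_lowpass(prev_frame))
moving_mask = diff > 25                       # illustrative threshold
print("moving pixels:", int(moving_mask.sum()),
      "entropy:", round(entropy(diff), 3), "std:", round(float(diff.std()), 3))
```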
This paper introduces an innovative method for image encryption called "Two-Fold Cryptography," which leverages the Henon map in a dual-layer encryption framework. By applying two distinct encryption processes, this approach offers enhanced security for images. Key parameters generated by the Henon map dynamically shape both stages of encryption, creating a sophisticated and robust security system. The findings reveal that Two-Fold Cryptography provides a notable improvement in image protection, outperforming traditional single-layer encryption techniques.
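The abstract does not spell out the two encryption processes; the sketch below shows one plausible dual-layer arrangement driven by the Henon map, a pixel-permutation stage followed by a keystream XOR stage. The specific construction, map parameters, and key values are illustrative assumptions rather than the paper's exact scheme.

```python
# Minimal sketch (assumption): a Henon-map pixel permutation followed by a
# Henon-map keystream XOR; this dual-layer construction is illustrative only.
import numpy as np


def henon_sequence(n, x0, y0, a=1.4, b=0.3):
    """Iterate the Henon map and return n values of the x coordinate."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs[i] = x
    return xs


def encrypt(image, key1=(0.1, 0.2), key2=(0.3, 0.1)):
    flat = image.flatten()
    n = flat.size
    # Layer 1: permute pixel positions using the order induced by a Henon sequence.
    perm = np.argsort(henon_sequence(n, *key1))
    shuffled = flat[perm]
    # Layer 2: XOR with a keystream of bytes derived from a second Henon sequence.
    keystream = (np.abs(henon_sequence(n, *key2)) * 1e6 % 256).astype(np.uint8)
    return (shuffled ^ keystream).reshape(image.shape), perm, keystream


def decrypt(cipher, perm, keystream):
    flat = cipher.flatten() ^ keystream
    out = np.empty_like(flat)
    out[perm] = flat                      # undo the permutation
    return out.reshape(cipher.shape)


# Round-trip check on a random 8-bit "image".
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cipher, perm, ks = encrypt(image)
assert np.array_equal(decrypt(cipher, perm, ks), image)
```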
In this paper, the behavior of the quality of the gradient computed on an image, as a function of noise, is presented. The cross-correlation coefficient (ccc) between the derivatives of the original image before and after introducing noise shows a dramatic decline compared with the ccc of the corresponding images before taking derivatives. Mathematical equations have been constructed to model the relation between the ccc and the noise parameter.
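A minimal numerical illustration of the reported effect follows, assuming Gaussian noise and a simple finite-difference gradient in place of the paper's noise model and derivative operator; the synthetic test image is also an assumption.

```python
# Minimal sketch (assumptions: Gaussian noise, finite-difference gradient,
# synthetic test image): the gradient ccc drops faster than the image ccc.
import numpy as np


def ccc(a, b):
    """Cross-correlation coefficient between two images (Pearson r of pixel values)."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])


rng = np.random.default_rng(1)
ys, xs = np.mgrid[0:256, 0:256]
image = np.sin(xs / 20.0) * np.cos(ys / 30.0) * 100 + 128   # smooth test image

for sigma in (5, 10, 20, 40):                               # noise parameter
    noisy = image + rng.normal(0, sigma, image.shape)
    gy, gx = np.gradient(image)
    ny, nx = np.gradient(noisy)
    print(f"sigma={sigma:3d}  ccc(images)={ccc(image, noisy):.3f}  "
          f"ccc(gradients)={ccc(np.hypot(gx, gy), np.hypot(nx, ny)):.3f}")
```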
Image captioning is the process of adding an explicit, coherent description of the contents of an image. This is done using the latest deep learning techniques, combining computer vision and natural language processing, to understand the contents of the image and give it an appropriate caption. Multiple datasets suitable for many applications have been proposed. The biggest challenge for natural language processing researchers is that these datasets are not available for all languages. Researchers have therefore translated the most famous English datasets with Google Translate in order to work with the content of the images in their mother tongue. In this paper, the proposed review aims to enhance the understanding o
The majority of Arab EFL (English as a Foreign Language) learners struggle with fluency in spoken English. Iraqi students struggle to speak English confidently due to mispronunciation, grammatical errors, short and long pauses while speaking, or feeling confused in normal conversations. Collaborative learning is crucial to enhancing students' speaking skills in the long run. This study aims to state the importance of collaborative learning as a teaching method for EFL learners. In this quantitative and qualitative study, specific focus is placed on some of Barros's views of collaborative learning as teamwork and some of Pattanpichet's speaking achievements under four categories: academic benefits, social benefits,
The present research aims at revealing the semiotics of the advertising image in the American printed poster by tracing the image's significance and its transformations through poster design trends and indicating whether the image is explicit or implicit. The limits of the research were the American printed poster during the 2016-2018 period. The theoretical side was organised into two sections, the first on the semiotics of the advertising image and the second on design trends in the printed poster. The research procedures were represented by the method adopted in analysing the sample, identified as four models taken from a research community of (24) models. The selection was made according to the trend
This study calculated the surface roughness length (Zo), zero-plane displacement length (Zd), and height of the roughness elements (ZH) using GIS applications. The practical benefit of this study is to classify the development of Baghdad, choose appropriate places for installing wind turbines, improve urban planning, and estimate rates of turbulence, pollution, and other quantities. The surface roughness length (Zo) of Baghdad city was estimated based on wind speed data obtained from an automatic weather station installed at Al-Mustansiriyah University, digital elevation model (DEM) data from satellite images, and the digital surface model (DSM), utilizing remote sensing techniques. The study area w
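As an illustration of how ZH, Zd, and Zo can be derived from DSM and DEM rasters, the sketch below uses the common rule-of-thumb morphometric relations Zd ≈ 0.7·ZH and Zo ≈ 0.1·ZH; the study's own coefficients, or its anemometric estimate from the weather-station wind profile, may differ, and the dummy rasters are assumptions.

```python
# Minimal sketch (assumption): rule-of-thumb morphometric relations
# Zd ~= 0.7 * ZH and Zo ~= 0.1 * ZH, applied to dummy DSM/DEM rasters.
import numpy as np

rng = np.random.default_rng(2)
dem = rng.uniform(30, 40, (50, 50))          # bare-earth elevation (m)
dsm = dem + rng.uniform(0, 15, dem.shape)    # surface elevation incl. buildings (m)

zh = dsm - dem                               # height of the roughness elements ZH (m)
zd = 0.7 * zh                                # zero-plane displacement Zd (m)
zo = 0.1 * zh                                # surface roughness length Zo (m)

print("mean ZH = %.1f m, mean Zd = %.1f m, mean Zo = %.2f m"
      % (zh.mean(), zd.mean(), zo.mean()))
```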