General Background: Deep image matting is a fundamental task in computer vision, enabling precise foreground extraction from complex backgrounds, with applications in augmented reality, computer graphics, and video processing. Specific Background: Despite advances in deep learning-based methods, preserving fine details such as hair and transparency remains a challenge. Knowledge Gap: Existing approaches struggle to balance accuracy and efficiency, necessitating novel techniques to enhance matting precision. Aims: This study integrates deep learning with fusion techniques to improve alpha matte estimation, proposing a lightweight U-Net model that incorporates color-space fusion and preprocessing. Results: Experiments on the Adobe Composition-1k dataset demonstrate superior performance compared to traditional methods, achieving higher accuracy, faster processing, and improved boundary preservation. Novelty: The proposed model effectively combines deep learning with fusion techniques, enhancing matting quality while maintaining robustness across various environmental conditions. Implications: These findings highlight the potential of integrating fusion techniques with deep learning for image matting, offering valuable insights for future research in automated image processing applications, including augmented reality, gaming, and interactive video technologies. Highlights: Better precision: fusion techniques enhance fine-detail preservation. Faster processing: the lightweight U-Net improves speed and accuracy. Wide applications: useful for AR, gaming, and video processing. Keywords: deep image matting, computer vision, deep learning, fusion techniques, U-Net
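As a rough illustration only (not the authors' exact architecture), the sketch below shows a lightweight U-Net in PyTorch that fuses an RGB input with its YCbCr representation before encoding and predicts a single-channel alpha matte; the channel counts and the choice of YCbCr as the second color space are assumptions.

```python
# Minimal sketch of a lightweight U-Net with early color-space fusion.
# Assumptions: YCbCr as the fused color space, 2-level encoder/decoder.
import torch
import torch.nn as nn

def rgb_to_ycbcr(rgb: torch.Tensor) -> torch.Tensor:
    """Convert an (N, 3, H, W) RGB tensor in [0, 1] to YCbCr."""
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return torch.cat([y, cb, cr], dim=1)

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class LightMattingUNet(nn.Module):
    """Two-level U-Net that predicts a single-channel alpha matte."""
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = conv_block(6, base)          # 6 = RGB (3) + YCbCr (3)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, rgb):
        x = torch.cat([rgb, rgb_to_ycbcr(rgb)], dim=1)   # early color-space fusion
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b  = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))              # alpha matte in [0, 1]

if __name__ == "__main__":
    alpha = LightMattingUNet()(torch.rand(1, 3, 64, 64))
    print(alpha.shape)  # torch.Size([1, 1, 64, 64])
```

Concatenating the two color spaces at the input is the simplest fusion point; deeper fusion stages would follow the same encode/decode pattern.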
Krawtchouk polynomials (KPs) and their moments are promising tools for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification. The main challenge in existing KP recurrence algorithms is the numerical error that arises when computing the coefficients for large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of KPs at high orders. In particular, this paper discusses the development of a new algorithm and presents a new mathematical model for computing the …
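For context, here is a minimal Python sketch of the classical three-term recurrence for K_n(x; p, N) = 2F1(-n, -x; -N; 1/p), the kind of relation the proposed algorithm is intended to improve on; this is the standard recurrence, not the new one from the paper, and it is known to lose precision at high orders and for p far from 0.5.

```python
# Classical three-term recurrence for Krawtchouk polynomials (baseline only,
# not the paper's proposed relation).
import numpy as np

def krawtchouk_matrix(N: int, p: float) -> np.ndarray:
    """Return K[n, x] for n, x = 0..N using the classical recurrence over n."""
    x = np.arange(N + 1, dtype=float)
    K = np.zeros((N + 1, N + 1))
    K[0] = 1.0                      # K_0(x) = 1
    K[1] = 1.0 - x / (p * N)        # K_1(x) = 1 - x / (pN)
    for n in range(1, N):
        K[n + 1] = ((p * (N - n) + n * (1 - p) - x) * K[n]
                    - n * (1 - p) * K[n - 1]) / (p * (N - n))
    return K

if __name__ == "__main__":
    K = krawtchouk_matrix(8, 0.5)
    print(np.round(K[:3, :3], 4))
```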
This study looks into the many methods used in the risk assessment procedures applied in the construction industry today. Because novel assessment methods are adopted slowly, professionals frequently resort to strategies that have previously been validated as successful. For risk assessment, a precise analytical tool that uses the cost of risk as a measurement and draws on the knowledge of professionals could help bridge the gap between theory and practice. This step examines the relevant literature, sorts articles by publication year, and identifies domains and qualities. Consequently, the most significant findings have been presented in a manne…
This investigation presents an experimental and analytical study of the behavior of reinforced concrete deep beams before and after repair. The original beams were first loaded under two-point loading up to failure, then repaired with epoxy resin and tested again. Three of the test beams contain shear reinforcement and the other two have no shear reinforcement. The main variable in these beams was the percentage of longitudinal steel reinforcement (0, 0.707, 1.061, and 1.414%). The main objective of this research is to investigate the possibility of restoring the full load-carrying capacity of reinforced concrete deep beams, with and without shear reinforcement, by using epoxy resin as the repair material. All be…
This research aims to analyze and simulate real biochemical test data to uncover the relationships among the tests and how each of them affects the others. The data were acquired from an Iraqi private biochemical laboratory; however, they have many dimensions, a high rate of null values, and a large number of patients. Several experiments were applied to these data, beginning with unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was then performed to make the dataset analyzable by supervised techniques such as Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), Logistic Regression (LR), K-Nearest Neighbor (K-NN), Naïve Bayes (NB…
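A minimal sketch of such a comparison, assuming a scikit-learn workflow with median imputation for the null-heavy features and synthetic placeholders standing in for the laboratory data (not the authors' exact preprocessing):

```python
# Compare the supervised models named in the abstract on null-heavy data.
# X and y are random placeholders for the biochemical laboratory dataset.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                    # placeholder biochemical tests
X[rng.random(X.shape) < 0.15] = np.nan            # simulate the high null rate
y = rng.integers(0, 2, size=200)                  # placeholder diagnostic label

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "NB": GaussianNB(),
}
for name, clf in models.items():
    pipe = Pipeline([("impute", SimpleImputer(strategy="median")),
                     ("scale", StandardScaler()),
                     ("clf", clf)])
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```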
Researchers are increasingly using multimodal biometrics to strengthen the security of biometric applications. In this study, a robust multimodal human identification model was developed to address the growing problem of spoofing attacks in biometric security systems. Using metaheuristic optimization methods for feature selection, namely the Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO), the model incorporates three biometric modalities: face, iris, and fingerprint. Image pre-processing, feature extraction, selection of critical image features, and multibiometric recognition are the four main steps in the system's workflow. To determine its performance, the model wa…
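As an illustration of the wrapper-style feature selection described here, the sketch below uses a simple genetic algorithm with a 1-NN fitness function on placeholder fused features; ACO and PSO would plug into the same wrapper. The data, fitness choice, and GA settings are assumptions, not the authors' configuration.

```python
# GA wrapper feature selection over a placeholder fused biometric feature matrix.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 60))                 # placeholder fused face/iris/fingerprint features
y = rng.integers(0, 5, size=150)               # placeholder subject identities

def fitness(mask: np.ndarray) -> float:
    """Cross-validated accuracy of a 1-NN classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=1)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(n_feats, pop_size=20, gens=15, p_mut=0.05):
    pop = rng.random((pop_size, n_feats)) < 0.5          # random binary masks
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feats)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_feats) < p_mut           # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]

best = ga_select(X.shape[1])
print("selected features:", int(best.sum()), "fitness:", round(fitness(best), 3))
```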
Automation is one of the key systems in modern agriculture, offering potential solutions to challenges related to the growing world population, demographic shifts, and economic conditions. This article highlights the importance of precision agriculture (PA) and smart agriculture (SA) in increasing agricultural production, and the importance of environmental protection in raising output while reducing traditional production methods. For this purpose, different types of automation systems used in agricultural operations are discussed, along with smart agriculture technologies including the Internet of Things (IoT), artificial intelligence (AI), machine learning (ML), and big data analysis, in addition to agricultural robots, …
Multi-walled carbon nanotubes from the Cheap Tubes company (MWCNT-CP) were purified by an alcohol / H2O2 / separation-funnel procedure, a simple, easy, and scalable technique. The purification steps were characterized by X-ray diffraction, Raman spectroscopy, scanning electron microscopy (SEM) with energy-dispersive X-ray spectroscopy (EDX), and surface area measurements. The technique succeeded in removing most of the trace elements from the MWCNT-CP, which increased the surface area. The impurity ratios were reduced to less than 0.6% after the three-step treatment, with a loss of less than 5% of the MWCNT-CP.
Image segmentation is one of the significant stages in computer vision and is fundamental for many applications, for example robot control and military target recognition, as well as image analysis in remote sensing. Previous studies have dealt with improving the classification of all types of data, whether text, audio, or images; one of the latest built a simple, effective, and high-accuracy model capable of classifying emotions from speech data, while several others dealt with improving textual grouping. In this study, we seek to improve image segmentation using a novel approach that depends on two methods for segmenting the images. The first…
Building a system to identify individuals through their speech recordings can find application in diverse areas, such as telephone shopping, voice mail, and security control. However, building such systems is a tricky task because of the vast range of variation in the human voice, so selecting strong features becomes crucial for the recognition system. Therefore, a speaker recognition system based on new spin-image descriptors (SISR) is proposed in this paper. In the proposed system, circular windows (spins) are extracted from the frequency domain of the spectrogram image of the sound, and then a run-length matrix is built for each spin to serve as a basis for feature extraction. Five different descriptors are generated fro…
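A simplified sketch of the spin/run-length idea (the exact spin placement and the five SISR descriptors are not reproduced here): compute a log spectrogram, cut a circular window, quantize it, and build a horizontal run-length matrix from which descriptors could be derived.

```python
# Spectrogram -> circular "spin" window -> horizontal gray-level run-length matrix.
# Simplified illustration; spin placement and descriptor formulas are assumptions.
import numpy as np
from scipy.signal import spectrogram

def circular_window(img, cy, cx, radius):
    """Cut a circular 'spin' (centre must be at least `radius` from the border)."""
    patch = img[cy - radius: cy + radius + 1, cx - radius: cx + radius + 1].astype(float)
    yy, xx = np.ogrid[-radius: radius + 1, -radius: radius + 1]
    patch[yy ** 2 + xx ** 2 > radius ** 2] = np.nan       # mask pixels outside the spin
    return patch

def run_length_matrix(patch, levels=8):
    """Horizontal gray-level run-length matrix; NaN (masked) cells break runs."""
    finite = patch[np.isfinite(patch)]
    edges = np.linspace(finite.min(), finite.max() + 1e-9, levels + 1)
    q = np.digitize(patch, edges) - 1                      # quantize to `levels` gray levels
    q[~np.isfinite(patch)] = -1
    rlm = np.zeros((levels, patch.shape[1]), dtype=int)
    for row in q:
        j = 0
        while j < len(row):
            if row[j] < 0:
                j += 1
                continue
            k = j
            while k < len(row) and row[k] == row[j]:
                k += 1
            rlm[row[j], k - j - 1] += 1                    # run of length k - j
            j = k
    return rlm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = rng.normal(size=16000)                        # placeholder 1-second recording
    _, _, sxx = spectrogram(signal, fs=16000, nperseg=256)
    spec = np.log1p(sxx)                                   # log-magnitude spectrogram image
    spin = circular_window(spec, cy=40, cx=30, radius=10)
    print(run_length_matrix(spin).shape)                   # (8, 21)
```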