In many applications, and especially in real-time applications, image processing and compression play an important part in modern life, both in storage and in transmission over the internet. However, finding orthogonal matrices of different sizes to serve as filters or transforms is complex, and such matrices are important in applications such as image processing and communication systems. Recently, a new method for finding orthogonal transform matrices has been developed; these matrices are then used in mixed transforms generated by a technique known as the tensor product for data processing. The aim of this paper is to evaluate and analyze this new mixed technique in image compression, using the Discrete Wavelet Transform and the Slantlet Transform, both as 2D matrices, mixed by the tensor product. Performance parameters such as Compression Ratio, Peak Signal to Noise Ratio, and Root Mean Squared Error are evaluated for standard color and grayscale images. The simulation results show that the technique yields acceptable image quality, though further research is needed.
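To illustrate the tensor-product mixing described above, the sketch below (a minimal illustration with small matrices, not the authors' implementation) builds a 2×2 orthonormal Haar wavelet matrix and a second small orthogonal matrix standing in for a Slantlet block, mixes them with the Kronecker (tensor) product via `numpy.kron`, and checks that the mixed matrix is still orthogonal and invertible as a transform.

```python
import numpy as np

# 2x2 orthonormal Haar matrix (one DWT step on two samples)
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

# Small orthogonal rotation standing in for a Slantlet block
# (hypothetical stand-in; the real Slantlet matrix is larger)
theta = np.pi / 6.0
S = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# Mixed transform via the tensor (Kronecker) product: a 4x4 matrix
T = np.kron(H, S)

# The Kronecker product of orthogonal matrices is orthogonal,
# so T can be used directly as a transform filter.
assert np.allclose(T @ T.T, np.eye(4))

# Apply to a 4-sample signal and invert it losslessly
x = np.array([4.0, 1.0, 3.0, 2.0])
y = T @ x
assert np.allclose(T.T @ y, x)
print(T.shape)
```

The same construction scales to larger blocks: mixing an m×m and an n×n orthogonal matrix yields an mn×mn orthogonal transform.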
Woman has represented an existential dualism with man throughout history. This presence has been manifested throughout the history of art, from the arts of the ancient civilizations to modernism. It must be said that the history of art records her presence as an extension of this history in the oriental arts and in the Arab countries, including Iraq. Woman appears with varying outputs in terms of the content of her presence and the style of presentation, in characterizations such as maternity, fertility, and femininity, among others. Iraqi artists adopted these fields, among them the artist Jaber Alwan, who formulated his style of presentation and its units around the feminine presence and his experience in her formal and stylistic fields.
This research presents the concept of panel data models: dual data that capture the effect of change over time, obtained by repeatedly observing the measured phenomenon in different time periods. Panel data models of different types were defined: fixed, random, and mixed. They were compared by studying and analyzing the mathematical relationship between the influence of time and a set of basic variables, which form the main axes of the research: the monthly revenue of the working individual and the profits it generates, which represent the response variable, and its relationship to a set of explanatory variables represented by the
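As a minimal sketch of one of the model types mentioned above, the fixed-effects ("within") estimator for panel data can be computed by demeaning each individual's observations over time and then running ordinary least squares on the demeaned data. The simulated data, variable names, and coefficient values below are hypothetical and purely illustrative, not the study's data or estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, n_periods = 50, 12              # panel: 50 individuals, 12 months
unit = np.repeat(np.arange(n_units), n_periods)

# Simulated explanatory variable and unit-specific fixed effects
x = rng.normal(size=n_units * n_periods)
alpha = rng.normal(size=n_units)[unit]   # unobserved heterogeneity per unit
beta_true = 2.0
y = alpha + beta_true * x + rng.normal(scale=0.1, size=x.size)

# Within transformation: subtract each unit's time mean,
# which removes the fixed effect alpha_i from the model.
def demean_by_unit(v):
    means = np.bincount(unit, weights=v) / np.bincount(unit)
    return v - means[unit]

x_w, y_w = demean_by_unit(x), demean_by_unit(y)

# OLS on the demeaned data gives the fixed-effects estimate of beta
beta_hat = (x_w @ y_w) / (x_w @ x_w)
print(round(beta_hat, 3))  # close to 2.0
```

A random-effects model would instead treat the alpha_i as draws from a distribution and use a GLS-type estimator; the within transformation above is the standard fixed-effects baseline.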
The purpose of this article is to introduce a reverse engineering procedure (REP). It is achieved by reconstructing, with 3D scanners, an industrial mechanical product that had no design schemes. The aim of obtaining a geometric CAD model from a 3D scanner is to reproduce the physical model. Generally, this is used in specific applications, such as commercial planning and manufacturing tasks. The digital data is obtained in stereolithography (STL) format. The point cloud can then be developed into a workable model by producing triangles between points, a procedure known as triangulation. It is then easy to manufacture parts with unknown documentation and transfer the information to CNC machines. In this work, a modification was proposed and used in RE
This study explores the challenges faced by Artificial Intelligence (AI) systems in generating image captions, a task that requires effective integration of computer vision and natural language processing techniques. A comparative analysis is made between traditional approaches (such as retrieval-based methods and linguistic templates) and modern approaches based on deep learning (such as encoder-decoder models, attention mechanisms, and transformers). Theoretical results show that modern models perform better in accuracy and in the ability to generate more complex descriptions, while traditional methods excel in speed and simplicity. The paper proposes a hybrid framework that combines the advantages of both approaches, where conventional methods prod
The study aimed to examine the phonological processing profile of students with and without reading disabilities in cycle 1 schools of basic education in the Governorate of Muscat, Sultanate of Oman. The participants included 306 students: 165 students with reading disabilities and 141 students without reading disabilities. The Comprehensive Test of Phonological Processing (CTOPP) and a Working Memory Test (WMT) were administered to the participants. The results showed that the mean score of students without reading disabilities was higher than that of students with reading disabilities on all measures of phonological processing, and that there are statistically significant differences in the case of students in all
In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, in order to reduce heavy noise and obtain better image quality. To do this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to sharpen unclear edges and enhance the low contrast of echocardiograph images. After applying these techniques, traditional edge detection methods yield legible detection of heart boundaries and valve movement.
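A simplified stand-in for the pipeline described above (filtering and contrast adjustment followed by conventional edge detection) can be sketched in pure NumPy: a 3×3 median filter, min-max contrast stretching, and a thresholded Sobel gradient. The threshold value and the toy image below are arbitrary illustrations, not tuned for real echocardiograph data, and morphological operations are omitted for brevity.

```python
import numpy as np

def preprocess_and_detect(img, edge_thresh=0.5):
    """Denoise, stretch contrast, then detect edges with Sobel gradients."""
    img = img.astype(float)
    h, w = img.shape

    # 3x3 median filter to suppress speckle-like noise
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[r:r + h, c:c + w] for r in range(3) for c in range(3)]
    denoised = np.median(np.stack(stack), axis=0)

    # Min-max contrast stretching to the full [0, 1] range
    lo, hi = denoised.min(), denoised.max()
    stretched = (denoised - lo) / (hi - lo + 1e-12)

    # Sobel gradient magnitude, thresholded into a binary edge map
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    pad = np.pad(stretched, 1, mode="edge")
    gx = sum(kx[r, c] * pad[r:r + h, c:c + w]
             for r in range(3) for c in range(3))
    gy = sum(kx.T[r, c] * pad[r:r + h, c:c + w]
             for r in range(3) for c in range(3))
    return np.hypot(gx, gy) > edge_thresh

# Toy "image": dark left half, bright right half -> one vertical edge
toy = np.zeros((8, 8))
toy[:, 4:] = 1.0
edges = preprocess_and_detect(toy)
print(edges.any(), edges.shape)
```

On real echocardiograph frames the same ordering applies: denoising and contrast enhancement first, so that the edge detector responds to anatomical boundaries rather than speckle.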
Three-dimensional (3D) image and medical image processing, which are considered big data analysis, have attracted significant attention during the last few years. To this end, efficient 3D object recognition techniques could be beneficial to such image and medical image processing. However, to date, most of the proposed methods for 3D object recognition face major challenges of high computational complexity, because computational complexity and execution time increase with the dimensions of the object, as is the case in 3D object recognition. Therefore, finding an efficient method for obtaining high recognition accuracy with low computational complexity is essential.
This paper considers and proposes new estimators that depend both on the sample and on prior information, in the cases where the two are or are not equally important in the model. The prior information is described as linear stochastic restrictions. We study the properties and performance of these estimators compared to other common estimators, using the mean squared error as a criterion for goodness of fit. A numerical example and a simulation study are presented to illustrate the performance of the estimators.
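One standard way to combine sample information with linear stochastic restrictions of the form r = Rβ + v is the Theil-Goldberger mixed estimator, which stacks the restriction rows under the data and runs least squares on the augmented system. The sketch below is an illustrative example with simulated data and an equal-variance simplification, not the paper's specific estimators; the restriction "β₀ + β₁ ≈ 4" is a hypothetical prior.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 30, 2
beta = np.array([1.0, 3.0])              # true coefficients (simulation)
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(scale=1.0, size=n)

# Stochastic prior restriction r = R beta + v, here "beta_0 + beta_1 ~ 4"
R = np.array([[1.0, 1.0]])
r = np.array([4.0])

# OLS uses the sample only
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Mixed (Theil-Goldberger) estimator: stack the restriction under the
# data; assuming equal error variances, this reduces to plain least
# squares on the augmented system.
X_aug = np.vstack([X, R])
y_aug = np.concatenate([y, r])
beta_mixed = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

# Mean squared error against the known true beta, the comparison
# criterion used in the abstract
mse = lambda b: float(np.sum((b - beta) ** 2))
print(round(mse(beta_ols), 4), round(mse(beta_mixed), 4))
```

When the restrictions carry accurate information, the mixed estimator typically achieves a lower mean squared error than OLS; with unequal variances, the stacked rows would be reweighted by their error standard deviations before solving.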
In this paper, a fast lossless compression method is introduced for medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residual part of the image, which represents the error caused by the polynomial approximation. Finally, Huffman coding is applied to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method achieves promising performance.
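The stage ordering described above can be sketched on a 1D signal: fit a low-order polynomial, run-length encode the residual, then derive Huffman code lengths for the run-length symbols with a `heapq`-based routine. This is a minimal illustration of the pipeline's ordering, not the paper's block-splitting method; the example signal is exactly linear so the residual is all zeros.

```python
import heapq
import numpy as np

signal = np.array([10, 12, 14, 16, 18, 20, 22, 24], dtype=float)

# Stage 1: polynomial approximation (degree 1) of the signal
t = np.arange(signal.size)
coeffs = np.polyfit(t, signal, deg=1)
residual = np.round(signal - np.polyval(coeffs, t)).astype(int)

# Stage 2: run-length code the residual (mostly zeros after a good fit)
def rle(values):
    out, run = [], 1
    for prev, cur in zip(values, values[1:]):
        if cur == prev:
            run += 1
        else:
            out.append((prev, run))
            run = 1
    out.append((values[-1], run))
    return out

runs = rle(list(residual))

# Stage 3: Huffman code lengths for the RLE symbols
def huffman_lengths(symbols):
    freq = {}
    for s in symbols:
        freq[s] = freq.get(s, 0) + 1
    if len(freq) == 1:                  # degenerate single-symbol case
        return {next(iter(freq)): 1}
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

lengths = huffman_lengths(runs)
print(runs, lengths)
```

Because the fit is exact here, the residual collapses to a single run, which is the best case for the run-length stage; real image blocks would yield a mix of runs that Huffman coding then compresses by frequency.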
In this research, each positive integer is connected with a corresponding fixed point result: for any two suitable functions, a positive integer is derived such that there exists a point