Computer vision seeks to mimic the human visual system and plays an essential role in artificial intelligence. It is based on different signal preprocessing techniques; therefore, developing efficient techniques becomes essential to achieving fast and reliable processing. Various signal preprocessing operations have been used for computer vision, including smoothing, signal analysis, resizing, sharpening, and enhancement, for purposes such as reducing unwanted distortions, segmentation, and image feature improvement. For example, to reduce the noise in a disturbed signal, smoothing kernels can be used effectively. This is achieved by convolving the disturbed signal with smoothing kernels. In addition, orthogonal moments (OMs) are a crucial technique in signal preprocessing, serving as key descriptors for signal analysis and recognition. OMs are obtained by projecting orthogonal polynomials (OPs) onto the signal domain. However, when dealing with 3D signals, the traditional approach of convolving kernels with the signal and then computing OMs significantly increases the computational cost of computer vision algorithms. To address this issue, this paper develops a novel mathematical model that embeds the kernel directly into the OP functions, seamlessly integrating these two processes into a more efficient and accurate approach. The proposed model allows the OMs of smoothed versions of 3D signals to be computed directly, thereby reducing computational overhead. Extensive experiments conducted on 3D objects demonstrate that the proposed method outperforms traditional approaches across various metrics. The average recognition accuracy improves to 83.85% when the polynomial order is increased to 10. Experimental results show that the proposed method achieves higher accuracy and lower computational cost than the benchmark methods under various conditions and for a wide range of parameter values.
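As a minimal illustration of the embedding idea, the sketch below (1D rather than 3D, with an assumed Legendre basis and Gaussian kernel, neither of which the abstract specifies) shows that smoothing the polynomial basis once and projecting the raw signal yields the same moments as smoothing the signal first:

```python
import numpy as np

# Minimal 1D sketch: orthogonal moments of a smoothed signal. The Legendre
# basis and Gaussian kernel are illustrative assumptions, not the paper's
# exact polynomials or kernel.
N, order = 256, 10
x = np.linspace(-1.0, 1.0, N)
signal = np.sin(3 * np.pi * x) + 0.3 * np.random.default_rng(0).standard_normal(N)

# Odd-length, symmetric, normalized Gaussian smoothing kernel.
t = np.arange(-7, 8)
kernel = np.exp(-0.5 * (t / 2.0) ** 2)
kernel /= kernel.sum()

# Sampled Legendre basis: columns are P_0..P_order evaluated on the grid.
basis = np.polynomial.legendre.legvander(x, order)  # shape (N, order + 1)

# Traditional two-step approach: smooth the signal, then project.
moments_two_step = basis.T @ np.convolve(signal, kernel, mode="same")

# Kernel embedded into the basis: smooth each polynomial once, then project
# the raw signal. For a symmetric odd-length kernel the results coincide.
smoothed_basis = np.apply_along_axis(
    lambda p: np.convolve(p, kernel, mode="same"), 0, basis)
moments_embedded = smoothed_basis.T @ signal

assert np.allclose(moments_two_step, moments_embedded)
```

The payoff of the embedded form is that the smoothed basis can be precomputed once and reused across many signals, which is where the claimed savings for 3D data would come from.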
Background: Radiopacity is one of the prerequisites for dental materials, especially for composite restorations. It is essential for easy detection of secondary dental caries as well as observation of the radiographic interface between the materials and the tooth structure. The aim of this study was to assess the difference in radiopacity of different resin composites using a digital X-ray system. Materials and methods: Ten specimens (6 mm diameter and 1 mm thickness) of three types of composite resins (Evetric, Estelite Sigma Quick, and G-aenial) were fabricated using a Teflon mold. The radiopacity was assessed using dental radiography equipment in combination with a phosphor plate digital system and a grey-scale-value aluminum step wedge with thickness…
This study focused on spectral clustering (SC) and three-constraint affinity matrix spectral clustering (3CAM-SC) to simultaneously determine the number of clusters and the membership of the clusters of the COST 2100 channel model (C2CM) multipath dataset. Various multipath clustering approaches solve only for the number of clusters, without taking the membership of the clusters into consideration. The problem with giving only the number of clusters is that there is no assurance that the membership of the multipath clusters is accurate, even if the number of clusters is correct. SC and 3CAM-SC aim to solve this problem by determining the membership of the clusters. The clusters and the cluster count were then computed through the cluster-wise J…
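For orientation, a minimal spectral-clustering sketch on synthetic data is shown below; the paper's three-constraint affinity matrix and the C2CM dataset are not reproduced here, so a plain RBF affinity stands in for 3CAM:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Illustrative sketch only: plain spectral clustering on synthetic
# multipath-like points in a 2D (delay, angle)-style space.
rng = np.random.default_rng(1)
points = np.vstack([rng.normal(loc=c, scale=0.1, size=(50, 2))
                    for c in ([0, 0], [1, 1], [0, 2])])

sc = SpectralClustering(n_clusters=3, affinity="rbf", gamma=10.0,
                        assign_labels="kmeans", random_state=0)
labels = sc.fit_predict(points)  # cluster membership for every path
print(np.bincount(labels))       # cluster sizes
```

Note that the output is a membership label per point, not just a cluster count, which is exactly the gap in count-only approaches that the abstract highlights.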
Compression is the reduction in size of data in order to save space or transmission time. For data transmission, compression can be performed on just the data content or on the entire transmission unit (including header data), depending on a number of factors. In this study, we consider an audio compression method based on text coding, in which the audio file is converted into a text file in order to reduce the time needed to transfer the data over a communication channel. Approach: we propose two coding methods and apply them to optimize the solution using a CFG. Results: we tested our application using a 4-bit coding algorithm; since the results of this method were not satisfactory, we proposed a new approach to compress audio fil…
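The abstract does not define the 4-bit coding algorithm; the sketch below is one hedged reading of the audio-to-text step, quantizing each sample to 16 levels and writing it as a hex character:

```python
import numpy as np

# One plausible reading of the "4-bit coding" step, for illustration only:
# quantize audio samples to 16 levels and emit one hex character per sample,
# turning the audio stream into a text stream.
def audio_to_text_4bit(samples: np.ndarray) -> str:
    """Map float samples in [-1, 1] to 4-bit codes written as hex text."""
    levels = np.clip(((samples + 1.0) * 0.5 * 15).round(), 0, 15).astype(int)
    return "".join(format(v, "x") for v in levels)

def text_to_audio_4bit(text: str) -> np.ndarray:
    """Invert the mapping (lossy: only 16 amplitude levels survive)."""
    levels = np.array([int(ch, 16) for ch in text], dtype=float)
    return levels / 15 * 2.0 - 1.0

tone = np.sin(2 * np.pi * 440 * np.arange(100) / 8000)  # 440 Hz test tone
encoded = audio_to_text_4bit(tone)
decoded = text_to_audio_4bit(encoded)
print(len(encoded), np.abs(tone - decoded).max())  # coarse quantization error
```

The large reconstruction error visible here is consistent with the abstract's report that 4-bit coding alone was not satisfactory.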
Using a descriptive and analytical approach, this study discusses the concept of phenomenology as a way of seeing the aesthetics of ugliness as one of the concepts associated with the aesthetic experience in contemporary art designs, since that experience is the result of communication between artistic production and the recipient, which creates a state of aesthetic pleasure. A phenomenological method is used to uncover the aesthetics of the ugly and to connect ugliness and beauty, since ugliness in aesthetics incorporates beauty as a modern aesthetic vision from the phenomenological perspective. The study investigated aesthetics as a phenomenon of perception, sensory knowledge, and the aesthetic response to the production of designs in contemporary a…
Emotion can be expressed through a unimodal social behaviour, bimodally, or multimodally. This survey describes the background of facial emotion recognition and surveys emotion recognition using the visual modality. Some publicly available datasets are covered for performance evaluation. A summary of research efforts to classify emotion using the visual modality over the five years from 2013 to 2018 is given in tabular form.
The Braille Recognition System is the process of capturing a Braille document image and turning its content into equivalent natural-language characters. Braille cell recognition and cell transcription are its two basic, consecutive phases. The Braille Recognition System is a technique for locating and recognizing a Braille document stored as an image, such as a jpeg, jpg, tiff, or gif image, and converting the text into a machine-readable format, such as a text file. BCR translates an image's pixel representation into its character representation. Braille recognition benefits workers at schools and institutes for the visually impaired in a variety of ways. The Braille Recognition S…
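As a sketch of the cell-transcription phase, the snippet below maps detected 6-dot patterns to characters; the dot numbering and the handful of letters shown follow standard Grade 1 Braille, while the dot-detection step itself is assumed to have happened upstream:

```python
# Cell transcription sketch: each detected 6-dot pattern maps to a character.
# Dots are numbered 1-3 down the left column and 4-6 down the right.
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
}

def transcribe(cells):
    """Turn a sequence of detected dot sets into text ('?' if unknown)."""
    return "".join(BRAILLE_TO_CHAR.get(frozenset(c), "?") for c in cells)

print(transcribe([{1, 2}, {1}, {1, 4}]))  # -> "bac"
```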
Identification by biological features has gained tremendous importance with the proliferation of security systems in society. Among the various biometrics, such as face, fingerprint, iris, retina, voice, palm print, ear, and hand geometry, iris recognition is gaining attention because every person's iris is unique, never changes during a human lifetime, and is highly protected against damage. These properties make the iris a good security measure. An iris recognition system is regarded as a high-confidence biometric identification system; it is typically divided into four steps: acquisition, localization, segmentation, and normalization. This work reviews various iris recognition systems used by different researchers for each recognit…
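A skeleton of the four steps, using commonly chosen techniques (Hough-circle localization and rubber-sheet normalization, assumed here rather than taken from any particular surveyed system; the file name eye.png is hypothetical):

```python
import cv2
import numpy as np

def normalize_iris(gray, pupil, iris, radial=64, angular=256):
    """Unwrap the annulus between the pupil and iris circles to a rectangle."""
    (px, py, pr), (ix, iy, ir) = pupil, iris
    theta = np.linspace(0, 2 * np.pi, angular, endpoint=False)
    out = np.zeros((radial, angular), dtype=gray.dtype)
    for i, r in enumerate(np.linspace(0, 1, radial)):
        xs = (1 - r) * (px + pr * np.cos(theta)) + r * (ix + ir * np.cos(theta))
        ys = (1 - r) * (py + pr * np.sin(theta)) + r * (iy + ir * np.sin(theta))
        out[i] = gray[ys.astype(int).clip(0, gray.shape[0] - 1),
                      xs.astype(int).clip(0, gray.shape[1] - 1)]
    return out

gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)          # 1. acquisition
blur = cv2.medianBlur(gray, 5)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1,  # 2. localization
                           minDist=50, param1=100, param2=30,
                           minRadius=20, maxRadius=120)
# 3./4. segmentation + normalization, assuming the two strongest circles are
# the pupil and iris boundaries (a simplification):
if circles is not None and len(circles[0]) >= 2:
    pupil, iris = circles[0][0], circles[0][1]
    template = normalize_iris(gray, pupil, iris)
```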