Deaf and mute people often face difficulties communicating with society. They use sign language to communicate with each other and with hearing people, but hearing people find it difficult to understand the sign language and gestures made by deaf and mute people. Many techniques have therefore been developed to tackle this problem by converting sign language to text or voice and vice versa. In recent years, research on using computers to recognize and translate sign language has progressed steadily. This paper reviews significant projects in the field, beginning with the important steps of sign language translation. These projects can be classified by input device into image-based and device-based approaches. Image-based approaches use traditional image-processing and feature-extraction methods with a digital camera such as a webcam. Device-based approaches use devices such as the Microsoft Kinect sensor, electronic gloves, and the Leap Motion controller; these devices reduce the time spent on both image processing and feature extraction. Reported accuracy rates range between 90% and 99% for device-based approaches, whereas accuracy rates for image-based approaches range between 85% and 93%.
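As an illustration only (not taken from the reviewed projects), the recognition step of an image-based system can be sketched as a nearest-centroid classifier over feature vectors that an image-processing stage would extract from webcam frames. The gesture labels, feature values, and function names below are hypothetical.

```python
import math

# Hypothetical toy feature vectors (e.g., normalized hand-contour measurements)
# for two signs; a real system would extract these from webcam frames.
training = {
    "hello": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "thanks": [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}

def centroid(vectors):
    """Mean feature vector of one gesture class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

centroids = {label: centroid(vecs) for label, vecs in training.items()}

def classify(feature):
    """Return the label whose class centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(feature, centroids[label]))
```

A device-based system would replace the image-derived features with sensor readings (e.g., glove flex values), but the classification stage can stay the same.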
This study evaluated the structural changes of enamel treated with the Regenerate system and carbon dioxide (CO2) laser against acid challenge. Thirty human enamel slabs were prepared and assigned to three groups: Group I: untreated (control); Group II: treated with the Regenerate system; and Group III: exposed to CO2 laser. All specimens were subjected to an acid challenge (pH 4.5–7.0) for 14 days. Specimens were evaluated and compared at 120 points using five Raman microspectroscopic peaks: the phosphate vibrations ν1, ν2, ν3, and ν4 at 960, 433, 1029, and 579 cm−1, respectively, and the carbonate peak at 1070 cm−1, followed by a Vickers microhardness test. The ratio of carbonate to phosphate was correlated to the equivalent microhardness
Wireless sensor applications are susceptible to energy constraints, and most of the energy is consumed in communication between wireless nodes. Clustering and data aggregation are two widely used strategies for reducing energy usage and increasing the lifetime of wireless sensor networks. In target tracking applications, a large amount of redundant data is produced regularly; hence, deployment of effective data aggregation schemes is vital to eliminate data redundancy. This work conducts a comparative study of research approaches that employ clustering techniques for efficiently aggregating data in target tracking applications, as selection of an appropriate clustering algorithm may yield positive results in the data aggregation
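As a sketch only (assuming a generic cluster-head role not described in detail in the abstract), in-cluster aggregation can collapse near-duplicate target readings from member nodes into one summary before forwarding a single packet to the sink. The function name and tolerance threshold are hypothetical.

```python
# Hypothetical sketch: a cluster head merges near-duplicate (x, y) target
# positions reported by member nodes, forwarding only one mean per group.
def aggregate(readings, tolerance=1.0):
    """Group readings within `tolerance` of a group's first point, then average."""
    groups = []
    for x, y in readings:
        for g in groups:
            gx, gy = g[0]
            if abs(x - gx) <= tolerance and abs(y - gy) <= tolerance:
                g.append((x, y))
                break
        else:
            groups.append([(x, y)])
    return [
        (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
        for g in groups
    ]
```

Forwarding one tuple per group instead of one per node is where the energy saving comes from: fewer, shorter transmissions to the sink.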
The purpose of this paper is to apply different transportation models, in both their minimization and maximization forms, by finding a starting basic feasible solution and then the optimal solution. The requirements of transportation models are presented together with an application to minimizing the objective function, conducted by the researcher on real data collected over one month in 2015 at a poultry farm for egg production.
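The abstract does not say which method it uses for the starting basic feasible solution; as one standard illustration, the north-west corner rule can be sketched as follows (supply and demand values below are made up).

```python
def northwest_corner(supply, demand):
    """Starting basic feasible solution via the north-west corner rule.

    Allocate as much as possible to the top-left cell, then move right when
    a column's demand is met or down when a row's supply is exhausted.
    """
    supply, demand = supply[:], demand[:]  # work on copies
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1  # row exhausted: move down
        else:
            j += 1  # column satisfied: move right
    return alloc
```

The resulting allocation is then improved toward the optimum with a method such as stepping-stone or MODI.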
Transportability refers to the ease with which people, goods, or services may be transferred. When transportability is high, distance becomes less of a limitation on activities. Transportation networks are frequently represented by a set of locations and a set of links that indicate the connections between those places, which is usually called the network topology. Hence, each transportation network has a unique topology that distinguishes its structure; the most essential components of such a framework are the network architecture and the connection level. This research aims to demonstrate the efficiency of the road network in the Al-Karrada area, located in Baghdad. The analysis is based on a quantitative evaluation using graph theory
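The abstract does not list which graph-theoretic measures it applies; as an illustration, three connectivity indices commonly used in quantitative road-network evaluation can be computed from the edge count e and vertex count v (the formulas assume a planar network):

```python
# Connectivity indices for a planar transportation network with
# e edges (links) and v vertices (locations).
def beta_index(e, v):
    """Edges per vertex; higher values indicate a denser network."""
    return e / v

def gamma_index(e, v):
    """Observed edges as a fraction of the maximum possible, 3(v - 2)."""
    return e / (3 * (v - 2))

def alpha_index(e, v):
    """Observed circuits as a fraction of the maximum possible, 2v - 5."""
    return (e - v + 1) / (2 * v - 5)
```

For example, a network with 18 edges and 8 vertices reaches the planar maximum, so its gamma index is 1.0.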
Data scarcity is a major challenge when training deep learning (DL) models, since DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have small or inadequate data for training DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a vast background of knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to automatically learn representations, and a larger amount of data generally produces a better DL model, although performance is also application-dependent. This issue is the main barrier for
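One common workaround for the scarcity described above (a sketch only; the abstract does not name a specific technique) is data augmentation: stretching a small labeled set by generating jittered copies of each sample. The function and parameters below are hypothetical.

```python
import random

# Hypothetical sketch: produce noisy copies of one labeled feature vector,
# inflating a small training set; the label is preserved for every copy.
def augment(sample, n_copies=3, noise=0.05, seed=0):
    """Return `n_copies` jittered variants of `sample` (uniform noise)."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [
        [x + rng.uniform(-noise, noise) for x in sample]
        for _ in range(n_copies)
    ]
```

For images, the analogous operations would be flips, crops, and rotations rather than additive noise.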