In recent years, the vehicle sector has expanded considerably and the number of vehicles on the road has grown across all regions of the country. Vehicle number plate identification based on image processing is an active area of this work; the technique is used for security purposes such as tracking stolen cars and controlling access to restricted areas. The License Plate Recognition System (LPRS) exploits a digital camera to capture vehicle plate numbers, which are used as input to the proposed recognition system. The developed system consists of three phases: vehicle license plate localization, character segmentation, and character recognition. License Plate (LP) detection is performed using the Canny edge detection algorithm, and Connected Component Analysis (CCA) is exploited to segment the characters. Finally, a Multi-Layer Perceptron Artificial Neural Network (MLPANN) model is utilized to recognize the vehicle license plate characters, and the results are displayed as text on a GUI. The proposed system successfully identified and recognized multi-style Iraqi license plates under different image conditions and was evaluated with several performance metrics, achieving an overall system performance of 91.99%. These results show the effectiveness of the proposed method compared with other existing methods, whose average recognition rate is 86%; the average processing time of 0.242 s per image demonstrates the practicality of the proposed method.
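As a rough illustration of the three-stage pipeline summarized above, the following Python sketch chains Canny edge detection, connected component analysis, and an MLP classifier. The library choices (OpenCV, scikit-learn), the thresholds, and the helper names are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of the localization -> segmentation -> recognition pipeline.
# All parameter values are illustrative assumptions, not the paper's settings.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def locate_plate(image_bgr):
    """Rough plate localization via Canny edges and contour geometry."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        x, y, w, h = cv2.boundingRect(c)
        if 2.0 < w / float(h) < 6.0:           # plausible plate aspect ratio (assumed)
            return gray[y:y + h, x:x + w]
    return gray

def segment_characters(plate_gray):
    """Character segmentation with connected component analysis (CCA)."""
    _, binary = cv2.threshold(plate_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    chars = []
    for i in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 50:                           # drop small noise blobs (assumed threshold)
            chars.append(cv2.resize(binary[y:y + h, x:x + w], (20, 20)))
    return chars

# Recognition: a multi-layer perceptron trained on labelled character images.
# X_train / y_train stand for a real labelled character dataset.
mlp = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
# mlp.fit(X_train, y_train)
# text = "".join(mlp.predict([c.flatten() / 255.0])[0] for c in segment_characters(plate))
```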
Nowadays, people's expression on the Internet is no longer limited to text; with the rise of short video in particular, a large amount of modal data such as text, pictures, audio, and video has emerged. Compared with single-modal data, multi-modal data contains far richer information. Mining multi-modal information can help computers better understand human emotional characteristics. However, because multi-modal data exhibit obvious dynamic time-series features, the dynamic correlation problem within a single mode and between different modes in the same application scene must be solved during the fusion process. To solve this problem, this paper proposes a feature extraction framework of
Ge-Au infrared photoconductive detectors were prepared from germanium single crystals doped with different gold concentrations using thermal evaporation. The spectral responsivity (Rλ) and spectral detectivity (D*) were determined as functions of wavelength; the resistance, the conductivity in the dark and under infrared illumination, the gain, and the relative photoresponse were also measured for the different gold concentrations. Remarkable improvements in the photoresponse gain were observed for the highest-resistance specimen at the expense of the spectral detectivity values.
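The abstract does not state the working definitions of these figures of merit; for reference, the standard textbook relations for a photoconductive detector are given below, where I_ph is the photocurrent, P_λ the incident optical power at wavelength λ, A the detector area, Δf the noise bandwidth, i_n the noise current, and NEP the noise-equivalent power. These are general definitions, not formulas quoted from the paper.

```latex
R_\lambda = \frac{I_{ph}}{P_\lambda}, \qquad
D^{*} = \frac{R_\lambda \sqrt{A\,\Delta f}}{i_n} = \frac{\sqrt{A\,\Delta f}}{\mathrm{NEP}}
```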
The dependable and efficient identification of Qin seal script characters is pivotal to discovering, preserving, and passing on the distinctive cultural values embodied by these artifacts. This paper presents a character recognition model based on histogram of oriented gradients (HOG) image features and an SVM classifier for identifying partial and blurred Qin seal script characters. The model achieves accurate recognition on a small, imbalanced dataset. Firstly, a dataset of Qin seal script image samples is established, and Gaussian filtering is employed to remove image noise. Subsequently, a gamma transformation adjusts the image brightness and enhances the contrast between font structures and image backgrounds. After a s
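A compact Python sketch of the preprocessing and HOG+SVM steps described above is shown below; the filter sigma, gamma value, HOG cell sizes, and the use of scikit-image/scikit-learn are illustrative assumptions rather than the paper's actual configuration.

```python
# Gaussian denoising, gamma adjustment, HOG features, and an SVM classifier.
import numpy as np
from skimage.feature import hog
from skimage.filters import gaussian
from skimage.exposure import adjust_gamma
from sklearn.svm import SVC

def extract_features(image_gray):
    """Denoise, adjust brightness/contrast, then compute HOG descriptors."""
    denoised = gaussian(image_gray, sigma=1.0)        # Gaussian filtering
    enhanced = adjust_gamma(denoised, gamma=0.8)      # gamma transformation
    return hog(enhanced, orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Class weighting is one common way to cope with a small, imbalanced dataset.
clf = SVC(kernel="rbf", class_weight="balanced")
# X = np.array([extract_features(img) for img in train_images])
# clf.fit(X, train_labels)
```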
AA Abbass, HL Hussein, WA Shukur, J Kaabi, R Tornai, Webology, 2022. Individual eye recognition is an important issue in applications such as security systems, credit card control, and identification of offenders. Using video images removes the limitation of fixed images and makes it possible to capture users' images under any condition while performing eye recognition. These systems face several challenges: changes in individual gestures, changes in lighting, face coverage, low video image quality, and changes in personal characteristics from frame to frame. Eye recognition from images requires two phases, detection and recognition, which are used in security systems to identify persons. The mai
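The abstract names the two phases but not the algorithms behind them, so the sketch below is a generic stand-in: a Haar-cascade eye detector and a toy template matcher, not the authors' method.

```python
# Generic two-phase sketch (detection, then recognition) for video frames.
import cv2
import numpy as np

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eyes(frame_bgr):
    """Phase 1: locate candidate eye regions in a video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], (64, 64)) for x, y, w, h in boxes]

def recognize(eye_patch, gallery):
    """Phase 2: match a detected eye against enrolled templates (toy matcher)."""
    scores = {pid: float(np.mean((eye_patch.astype(np.float32) - tpl) ** 2))
              for pid, tpl in gallery.items()}
    return min(scores, key=scores.get)        # identity with the smallest distance
```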
This research introduces multi-wavelet transform and neural network techniques for recognizing 3-D objects from 2-D images using patches. The proposed techniques were tested on a database of different patch features and of the high-energy subband of the discrete multi-wavelet transform DMWT (gp) of the patches. The test set has two groups: group (1) contains images, their (gp) patches, and patch features of the same images as part of those in the dataset, besides other images, (gp) patches, and features; group (2) contains (gp) patches and patch features the same as part of those in the database, but after modifications such as rotation, scaling, and translation. Recognition by back propagation (BP) neural network as com
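The following sketch illustrates the idea of feeding a wavelet subband of a patch to a backpropagation-trained network. PyWavelets implements the standard scalar DWT, not the discrete multi-wavelet transform (DMWT) used in the paper, so the transform here is an explicitly named stand-in, and the network sizes are assumptions.

```python
# Wavelet-subband features plus a backpropagation (MLP) classifier.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def subband_features(patch_gray):
    """Keep only the high-energy (approximation) subband as the feature vector."""
    cA, (cH, cV, cD) = pywt.dwt2(patch_gray, "db2")    # single-level 2-D DWT (stand-in for DMWT)
    return cA.ravel() / (np.linalg.norm(cA) + 1e-8)    # energy-normalised subband

# A multilayer perceptron trained with backpropagation acts as the recogniser.
bp_net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)
# X = np.array([subband_features(p) for p in training_patches])
# bp_net.fit(X, training_labels)
```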
It is well known that understanding human facial expressions is a key component of understanding emotions, finds broad application in the field of human-computer interaction (HCI), and has been a long-standing issue. In this paper, we shed light on the utilisation of a deep convolutional neural network (DCNN) for facial emotion recognition from videos using Google's TensorFlow machine-learning library. This work was applied to ten emotions from the Amsterdam Dynamic Facial Expression Set-Bath Intensity Variations (ADFES-BIV) dataset and tested on two datasets.
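A minimal TensorFlow/Keras sketch of a DCNN classifier over video frames for ten emotion classes is given below; the layer sizes and the 48x48 grayscale input are assumptions, since the abstract does not specify the actual architecture used.

```python
# Small DCNN for frame-level classification into ten emotion classes.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dcnn(num_classes=10, input_shape=(48, 48, 1)):
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_dcnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(frame_batches, frame_labels, validation_split=0.1)
```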