A Braille recognition system captures an image of a Braille document and converts its content into the equivalent natural-language characters. The system consists of two basic phases that follow one another: Braille cell recognition and cell transcription. It locates and recognizes a Braille document stored as an image, such as a JPEG, TIFF, or GIF file, and converts the text into a machine-readable format such as a text file; BCR translates the image's pixel representation into its character representation. Braille recognition benefits staff at schools and institutes for the visually impaired in a variety of ways. A Braille recognition system comprises several stages, including image acquisition, image pre-processing, and character recognition. This review examines earlier studies on Braille cell recognition and transcription by other scholars, compares their detection techniques and reported results, and should be useful and illuminating for researchers in Braille recognition, especially newcomers.
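The stages named above (image acquisition, pre-processing, character recognition) can be illustrated with a minimal sketch, assuming OpenCV and NumPy are available and that the input is a clean, deskewed scan. The file name, cell coordinates, and the three-letter dot table are hypothetical and only illustrate the recognition step; they do not reproduce any specific system reviewed here.

```python
# Minimal Braille-cell recognition sketch (illustrative only).
# Assumes a clean, deskewed scan where dots appear as dark blobs and a
# hand-picked cell crop; a real system needs robust dot detection,
# grid estimation, and a complete Braille-to-text table.
import cv2
import numpy as np

# Partial Braille table: frozenset of raised dot positions (1-6) -> letter.
BRAILLE_TABLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
}

def recognize_cell(cell_img, dot_threshold=0.2):
    """Split one Braille cell into a 3x2 grid and report which dots are raised."""
    h, w = cell_img.shape
    raised = set()
    for row in range(3):
        for col in range(2):
            patch = cell_img[row * h // 3:(row + 1) * h // 3,
                             col * w // 2:(col + 1) * w // 2]
            # Dots are numbered 1-3 in the left column and 4-6 in the right column.
            if (patch < 128).mean() > dot_threshold:
                raised.add(row + 1 + 3 * col)
    return BRAILLE_TABLE.get(frozenset(raised), "?")

# Image acquisition and pre-processing: grayscale conversion plus Otsu binarization.
image = cv2.imread("braille_scan.png")          # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Character recognition on one manually cropped cell (coordinates are placeholders).
cell = binary[10:40, 10:30]
print(recognize_cell(cell))
```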
In this paper, a precision agriculture system based on a Wireless Sensor Network (WSN) is introduced. Soil moisture is one of the environmental factors that affect a crop, so the irrigation period must be monitored. A neural network is capable of learning the behavior of agricultural soil in the absence of a mathematical model. This paper introduces a modified type of neural network known as the Spiking Neural Network (SNN). In this work, the precision agriculture system is modeled with two SNNs that have been identified off-line from logged data: one represents the monitor located at the sink, where the irrigation period is calculated, and the other represents the soil. In addition, to reduce p…
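As background for how a spiking node can track a slowly varying quantity such as soil moisture, the following is a minimal leaky integrate-and-fire (LIF) simulation in Python/NumPy. The parameter values and the soil-moisture drive are hypothetical, and the sketch does not reproduce the off-line identified SNN models described in the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of a spiking
# neural network. Parameters and the "soil dryness" input are hypothetical.
import numpy as np

dt = 1.0          # time step (ms)
tau = 20.0        # membrane time constant (ms)
v_rest = 0.0      # resting potential
v_thresh = 1.0    # firing threshold
v_reset = 0.0     # reset potential after a spike

steps = 500
rng = np.random.default_rng(0)
# Hypothetical drive: drier soil -> larger input current -> higher spike rate.
soil_dryness = np.clip(np.linspace(0.2, 1.5, steps)
                       + 0.05 * rng.standard_normal(steps), 0, None)

v = v_rest
spike_times = []
for t in range(steps):
    # Leaky integration of the input current.
    v += dt / tau * (-(v - v_rest) + soil_dryness[t])
    if v >= v_thresh:
        spike_times.append(t)
        v = v_reset

# The spike rate is the kind of quantity a sink node could monitor.
print(f"{len(spike_times)} spikes; mean rate = {len(spike_times) / (steps * dt):.4f} spikes/ms")
```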
Modeling data acquisition systems (DASs) can support the vehicle industry in the development and design of sophisticated driver-assistance systems. Modeling DASs on the basis of multiple criteria is considered a multicriteria decision-making (MCDM) problem. Although literature reviews have provided models for DASs, the issue of imprecise, unclear, and ambiguous information remains unresolved. Compared with existing MCDM methods, the robustness of the fuzzy decision by opinion score method II (FDOSM II) and fuzzy weighted with zero inconsistency II (FWZIC II) is demonstrated for modeling DASs. However, these methods are implemented in an intuitionistic fuzzy set environment, which restricts the ability of experts to provide mem…
Gypseous soil covers approximately 30% of Iraqi land and is widely used in geotechnical and construction engineering in its natural state. The demand for residential complexes has increased, and one of the significant challenges in studying gypseous soil, owing to its unique behavior, is understanding its interaction with foundations such as strip and square footings. This is because there is a lack of experiments that provide total-displacement diagrams or failure envelopes, which are well established for non-problematic soils. The aim is to develop a comprehensive understanding of the micromechanical properties of dry, saturated, and treated gypseous sandy soils and to analyze the interaction of a strip footing with this type of soil using particle image…
Embedding identifying data into digital media such as video, audio, or images is known as digital watermarking. In this paper, a non-blind watermarking algorithm based on the Berkeley Wavelet Transform is proposed. First, the watermark image is scrambled using the Arnold transform for higher security, and then the embedding is performed in the transform domain of the host image. The experimental results show that the watermark is invisible and that the algorithm has good robustness against common image-processing operations.
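The Arnold scrambling step can be sketched as below. This is a generic NumPy implementation of the standard Arnold (cat map) transform for a square watermark image, not the authors' full embedding pipeline, and the iteration count is an assumed parameter.

```python
# Arnold transform (cat map) scrambling of a square N x N watermark.
# The map (x, y) -> ((x + y) mod N, (x + 2y) mod N) is area-preserving and
# periodic, so the scrambling can be undone by completing the period.
import numpy as np

def arnold_scramble(img, iterations):
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# Tiny demo on a random 8 x 8 "watermark" (iteration count chosen arbitrarily).
rng = np.random.default_rng(1)
watermark = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
scrambled = arnold_scramble(watermark, 3)
print("identical to original:", np.array_equal(watermark, scrambled))
```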
In this paper, an efficient method for compressing color images is presented. It allows progressive transmission and zooming of the image without the need for extra storage. The proposed method uses a cubic Bezier surface representation over wide areas of the image in order to capture the image component that shows large-scale variation. The produced cubic Bezier surface is then subtracted from the image signal to obtain the residue component, and a bi-orthogonal wavelet transform is applied to decompose this residue. Scalar quantization and quad-tree coding are then applied to the produced wavelet subbands. Finally, adaptive shift coding is applied to handle the remaining statistical redundancy and attain e…
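To make the first stage concrete, here is a minimal NumPy sketch of evaluating a bicubic Bezier patch from a 4 x 4 grid of control points and subtracting it from an image block to obtain the residue. Taking the control points as a coarse subsample of the block is an assumption for illustration, not the fitting procedure of the proposed method.

```python
# Bicubic Bezier surface evaluation and residue extraction (illustrative).
# Control points here are a coarse 4 x 4 subsample of the block, an assumed
# stand-in for a proper Bezier surface fit.
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis B_0..B_3 evaluated at t (vectorized)."""
    t = np.asarray(t, dtype=float)
    return np.stack([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3], axis=-1)

def bezier_surface(ctrl, height, width):
    """Evaluate S(u, v) = sum_ij B_i(u) B_j(v) P_ij on a height x width grid."""
    bu = bernstein3(np.linspace(0, 1, height))   # (height, 4)
    bv = bernstein3(np.linspace(0, 1, width))    # (width, 4)
    return bu @ ctrl @ bv.T                      # (height, width)

# Hypothetical smooth 32 x 32 image block.
y, x = np.mgrid[0:32, 0:32]
block = 100 + 0.5 * x + 0.3 * y + 2 * np.sin(x / 5.0)

# Coarse control points sampled from the block.
idx = np.linspace(0, 31, 4).astype(int)
ctrl = block[np.ix_(idx, idx)]

surface = bezier_surface(ctrl, *block.shape)
residue = block - surface          # large-scale variation removed
print(f"block std = {block.std():.2f}, residue std = {residue.std():.2f}")
```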
Recently, a new secure steganography algorithm has been proposed, namely the secure Block Permutation Image Steganography (BPIS) algorithm. The algorithm consists of five main steps: convert the secret message to a binary sequence, divide the binary sequence into blocks, permute each block using a key-based randomly generated permutation, concatenate the permuted blocks to form a permuted binary sequence, and then use a plane-based Least-Significant-Bit (LSB) approach to embed the permuted binary sequence into a BMP image file. The performance of the algorithm was given a preliminary evaluation by estimating the PSNR (Peak Signal-to-Noise Ratio) of the stego image for a limited number of experiments comprising hiding…
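The five steps can be sketched roughly as follows in Python/NumPy. The key handling (seeding a pseudo-random generator), the block size, and the in-memory cover array are assumptions for illustration rather than the exact BPIS specification.

```python
# Rough sketch of the five BPIS steps (key handling and block size are assumed).
import numpy as np

def bpis_embed(cover, message, key=12345, block_size=64):
    # Step 1: convert the secret message to a binary sequence.
    bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
    # Step 2: divide the binary sequence into fixed-size blocks (pad the last one).
    pad = (-len(bits)) % block_size
    bits = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)])
    blocks = bits.reshape(-1, block_size)
    # Step 3: permute each block with a key-seeded random permutation.
    rng = np.random.default_rng(key)
    permuted = [blk[rng.permutation(block_size)] for blk in blocks]
    # Step 4: concatenate the permuted blocks into one permuted sequence.
    sequence = np.concatenate(permuted)
    # Step 5: LSB embedding into the flattened cover image.
    stego = cover.copy().ravel()
    if len(sequence) > stego.size:
        raise ValueError("message too long for this cover image")
    stego[:len(sequence)] = (stego[:len(sequence)] & 0xFE) | sequence
    return stego.reshape(cover.shape)

# Demo on a random 64 x 64 grayscale "cover" (stand-in for a BMP file).
cover = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = bpis_embed(cover, "hidden text")
print("changed pixels:", int((stego != cover).sum()))
```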
In digital images, protecting sensitive visual information against unauthorized access is a critical issue, and robust encryption methods are the best way to preserve such information. This paper introduces a model designed to enhance the performance of the Tiny Encryption Algorithm (TEA) in encrypting images. Two approaches are suggested for ciphering the image as a preprocessing step before applying TEA; this step aims to de-correlate and weaken the relationships between adjacent pixel values in preparation for encryption. The first approach applies an Affine transformation to the image at two layers, utilizing two different key sets, one for each layer. Th…
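For reference, the standard TEA round function and an affine pixel pre-transform of the kind described can be sketched as follows. The affine coefficients, key values, and demo data are arbitrary examples, and the paper's two-layer, two-key-set scheme is not reproduced here.

```python
# TEA block cipher (standard 32-round encryption) plus a simple affine
# pixel pre-transform. Key, coefficients, and demo data are arbitrary.
import numpy as np

MASK32 = 0xFFFFFFFF
DELTA = 0x9E3779B9

def tea_encrypt_block(v0, v1, key):
    """Standard TEA encryption of a 64-bit block given as two 32-bit words."""
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK32
        v0 = (v0 + (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & MASK32
        v1 = (v1 + (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & MASK32
    return v0, v1

def affine_pixels(img, a=7, b=113):
    """Affine pre-transform p -> (a*p + b) mod 256; a must be odd so it is invertible."""
    return ((a * img.astype(np.uint16) + b) % 256).astype(np.uint8)

# Demo: affine-transform an 8-byte row, pack it into two 32-bit words, encrypt.
key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
row = np.arange(8, dtype=np.uint8)              # stand-in for 8 image pixels
pre = affine_pixels(row)
v0 = int.from_bytes(pre[:4].tobytes(), "big")
v1 = int.from_bytes(pre[4:].tobytes(), "big")
print([hex(w) for w in tea_encrypt_block(v0, v1, key)])
```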
A common field-development task is the object of the present research: specifying the best locations for new horizontal re-entry wells within the AB unit of the South Rumaila Oil Field. One of the key parameters in the success of a new well is its location in the reservoir, especially when several wells are planned to be drilled from existing wells. This paper demonstrates an application of a neural network combined with a reservoir simulation technique as a decision tool. A fully trained predictive artificial feed-forward neural network (FFNNW) for efficient selection of horizontal re-entry well locations in the AB unit has been developed while maintaining reasonable accuracy. Sets of available input data were collected from the exploited g…
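As a rough illustration of using a feed-forward network to screen candidate well locations, the sketch below trains scikit-learn's MLPRegressor on a synthetic dataset and ranks hypothetical candidates by predicted performance. The feature set, the synthetic response, and the candidate points are all invented for illustration and bear no relation to the South Rumaila data or the paper's workflow.

```python
# Feed-forward NN as a screening tool for candidate well locations (illustrative).
# The synthetic features and response stand in for reservoir-simulation outputs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 400
# Hypothetical features: x, y coordinates, net pay thickness, permeability.
X = np.column_stack([rng.uniform(0, 10, n),
                     rng.uniform(0, 10, n),
                     rng.uniform(5, 30, n),
                     rng.uniform(50, 500, n)])
# Hypothetical response: cumulative production as a smooth function plus noise.
y = 2 * X[:, 2] + 0.01 * X[:, 3] + 5 * np.sin(X[:, 0]) + rng.normal(0, 1, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X, y)

# Rank a few hypothetical candidate locations by predicted performance.
candidates = np.array([[1.5, 2.0, 25.0, 300.0],
                       [8.0, 4.5, 12.0, 450.0],
                       [4.2, 9.1, 28.0, 120.0]])
pred = model.predict(candidates)
best = int(np.argmax(pred))
print(f"best candidate: {candidates[best]}, predicted response: {pred[best]:.2f}")
```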
In this study, multi-objective optimization of an aluminum oxide nanofluid in a water/ethylene glycol (40:60) mixture is carried out. In order to reduce the viscosity and increase the thermal conductivity of the nanofluid, the NSGA-II algorithm is used to vary the temperature and the volume fraction of nanoparticles. Neural-network modeling of experimental data is used to express viscosity and thermal conductivity as functions of temperature and nanoparticle volume fraction. To evaluate the optimization objective functions, the neural-network model is coupled to the NSGA-II algorithm and is called at every evaluation of the fitness function. Finally, the Pareto front and the corresponding optimum points are provided and…
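The coupling between a surrogate model and the multi-objective search can be illustrated with a brute-force Pareto screen over a (temperature, volume fraction) grid. The closed-form surrogate below merely stands in for the trained neural network, and this simple screen replaces the full NSGA-II loop for brevity; the coefficients and ranges are invented.

```python
# Surrogate-driven multi-objective screening (stand-in for the NSGA-II loop).
# The analytic "surrogate" replaces the trained neural-network model.
import numpy as np

def surrogate(temperature, phi):
    """Hypothetical surrogate returning (viscosity, thermal conductivity)."""
    viscosity = 3.0 - 0.02 * temperature + 40.0 * phi          # to be minimized
    conductivity = 0.4 + 0.002 * temperature + 5.0 * phi       # to be maximized
    return viscosity, conductivity

# Candidate design points: temperature 20-60 C, volume fraction 0-3 %.
T, PHI = np.meshgrid(np.linspace(20, 60, 41), np.linspace(0.0, 0.03, 31))
visc, cond = surrogate(T.ravel(), PHI.ravel())

# Convert to a pure minimization problem: minimize (viscosity, -conductivity).
objs = np.column_stack([visc, -cond])

def pareto_mask(objs):
    """True where no other point dominates (all objectives are minimized)."""
    mask = np.ones(len(objs), dtype=bool)
    for i, p in enumerate(objs):
        # p is dominated if some point is <= in every objective and < in at least one.
        dominates_p = np.all(objs <= p, axis=1) & np.any(objs < p, axis=1)
        if dominates_p.any():
            mask[i] = False
    return mask

front = pareto_mask(objs)
print(f"{front.sum()} Pareto-optimal designs out of {len(objs)}")
i = int(np.argmin(np.where(front, visc, np.inf)))   # lowest-viscosity point on the front
print(f"T = {T.ravel()[i]:.1f} C, phi = {PHI.ravel()[i] * 100:.2f} %, "
      f"viscosity = {visc[i]:.3f}, conductivity = {cond[i]:.3f}")
```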
In data mining, classification is a form of data analysis that can be used to extract models describing important data classes. Two well-known algorithms used in data-mining classification are the Backpropagation Neural Network (BNN) and Naïve Bayesian (NB) classifiers. This paper investigates the performance of these two classification methods using the Car Evaluation dataset. A model was built for each algorithm and the results were compared. Our experimental results indicate that the BNN classifier yields higher accuracy than the NB classifier but is less efficient, because it is time-consuming and difficult to analyze due to its black-box implementation.
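A comparable experiment can be sketched with scikit-learn, using MLPClassifier as the backpropagation network and CategoricalNB for Naïve Bayes. The local file path for the UCI Car Evaluation data, the train/test split, and the hyper-parameters are assumptions, not the settings used in the paper.

```python
# BNN (MLPClassifier) vs Naive Bayes (CategoricalNB) on the Car Evaluation data.
# Assumes car.data from the UCI repository has been downloaded locally;
# the hyper-parameters below are illustrative only.
import time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import CategoricalNB
from sklearn.metrics import accuracy_score

cols = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]
df = pd.read_csv("car.data", names=cols)        # assumed local copy of the UCI file

X = OrdinalEncoder().fit_transform(df[cols[:-1]])
y = df["class"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("BNN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
                  ("NB", CategoricalNB())]:
    start = time.perf_counter()
    clf.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}, training time = {elapsed:.2f} s")
```

The training-time printout makes the efficiency comparison in the abstract directly observable alongside the accuracy figures.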