Online communication on social networks has become an indispensable way of expressing and sharing views and opinions on virtually every topic. A central issue here is the limit beyond which "freedom of expression" should not be pushed, lest it turn into "hate speech". These two poles underpin the UN regulations pertaining to human rights: one is free to express, but not to hate through expression. Hereunder, a Critical Discourse Analysis in terms of Fairclough's dialectical-relational approach (2001) is made of Facebook posts (written by ordinary users rather than official bodies) targeting Islam and Muslims. This is done in order to recognize these instances of "speech" a
Multilayer reservoirs are currently modeled as a single-zone system by averaging the reservoir parameters of each zone. However, this type of modeling is rarely accurate because a single-zone system does not account for the fact that each zone's pressure declines independently. The pressure drop in each zone affects the total output and can result in inter-flow and the premature depletion of one of the zones. Understanding reservoir performance requires a precise estimate of each layer's permeability and skin factor. Multilayer Transient Analysis is a well-testing technique designed to determine formation properties in more than one layer, and its effectiveness over the past two decades has been
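As a rough illustration of why layer-by-layer properties matter (not part of the paper), the sketch below allocates a commingled rate across layers using the steady-state radial-flow proportionality q_i ∝ k_i·h_i / (ln(r_e/r_w) + s_i); all layer properties, radii, and the total rate are assumed values for illustration only.

```python
import math

# Illustrative layer properties (permeability, thickness, skin) -- assumed, not from the paper.
layers = [
    {"name": "zone A", "k_md": 150.0, "h_ft": 20.0, "skin": 2.0},
    {"name": "zone B", "k_md": 40.0,  "h_ft": 35.0, "skin": 0.5},
]
re_ft, rw_ft = 1500.0, 0.35   # assumed drainage and wellbore radii
q_total = 1200.0              # assumed total commingled rate, STB/d

# Relative deliverability index of each layer: k*h / (ln(re/rw) + skin)
index = [l["k_md"] * l["h_ft"] / (math.log(re_ft / rw_ft) + l["skin"]) for l in layers]
total = sum(index)

for layer, j in zip(layers, index):
    share = j / total
    print(f"{layer['name']}: {share:5.1%} of flow -> {share * q_total:7.1f} STB/d")
```

A high-skin or low-kh layer takes a disproportionately small share of the total rate, which is exactly the behavior that averaging into a single zone hides.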
The area of character recognition has received considerable attention from researchers all over the world during the last three decades. This research explores the best sets of feature-extraction techniques and studies the accuracy of well-known statistical classifiers for Arabic numerals using two methods, with a comparative study between them. The first method, a linear discriminant function, yields an accuracy as high as 90% of the original grouped cases correctly classified. In the second method, a new algorithm is proposed; the results show the efficiency of the proposed algorithms, which achieve recognition accuracies of 92.9% and 91.4%, outperforming the first method.
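For orientation, here is a minimal sketch of the first, discriminant-based approach (not the authors' implementation), using scikit-learn's LinearDiscriminantAnalysis on a generic digits dataset as a stand-in for the Arabic-numeral features used in the paper.

```python
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in data: 8x8 handwritten digits; the paper uses Arabic-numeral features instead.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis()   # linear discriminant function classifier
lda.fit(X_train, y_train)

acc = accuracy_score(y_test, lda.predict(X_test))
print(f"correctly classified cases: {acc:.1%}")
```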
Document analysis of images captured by camera is a growing challenge. These photos are often poor-quality compound images composed of various objects and text, which makes automatic analysis complicated. OCR is one of the image-processing techniques used to perform automatic identification of text. Existing image-processing techniques need to manage many parameters in order to recognize the text in such pictures clearly, and segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images are first filtered using the Wiener filter, then the active contour algorithm could b
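A minimal preprocessing sketch along these lines (not the paper's exact pipeline) is shown below, assuming SciPy's Wiener filter and scikit-image's active-contour (snake) implementation; the input path and the initial circular contour are illustrative assumptions, and the OCR step itself is omitted.

```python
import numpy as np
from scipy.signal import wiener
from skimage import io, color
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Load a camera-captured document image (path is an assumption for illustration).
image = color.rgb2gray(io.imread("document_photo.jpg"))

# Step 1: Wiener filtering to suppress noise before segmentation.
denoised = wiener(image, mysize=5)

# Step 2: active contour (snake), initialized as a circle around the text region.
rows, cols = denoised.shape
theta = np.linspace(0, 2 * np.pi, 400)
init = np.column_stack([rows / 2 + 0.4 * rows * np.sin(theta),
                        cols / 2 + 0.4 * cols * np.cos(theta)])
snake = active_contour(gaussian(denoised, sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)

print("contour points:", snake.shape)  # the region inside `snake` would be passed to OCR
```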
A method is developed for the determination of iron(III) in pharmaceutical preparations by coupling cloud point extraction (CPE) with UV-Vis spectrophotometry. The method is based on the reaction of Fe(III) with an excess of the drug ciprofloxacin (CIPRO) in dilute H2SO4, forming a hydrophobic Fe(III)-CIPRO complex that can be extracted into the non-ionic surfactant Triton X-114; the iron ions are then determined spectrophotometrically at an absorption maximum of 437 nm. Several variables that affect the extraction and determination of Fe(III) are optimized in order to maximize the extraction efficiency and improve the sensitivity of the method. An interference study is also carried out to check the accuracy of the procedure. The results hav
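To show how the spectrophotometric quantitation step works in principle, the sketch below fits a linear Beer's-law calibration (absorbance at 437 nm vs. Fe(III) concentration) and back-calculates an unknown; all numbers are invented illustrative values, not results from the paper.

```python
import numpy as np

# Illustrative calibration data: Fe(III) concentration (ug/mL) vs. absorbance at 437 nm.
conc = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
absorbance = np.array([0.062, 0.121, 0.239, 0.475, 0.712])

# Least-squares Beer's-law fit: A = slope * C + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)

# Back-calculate the concentration of an unknown sample from its measured absorbance.
a_sample = 0.305
c_sample = (a_sample - intercept) / slope
print(f"slope = {slope:.4f} AU per ug/mL, Fe(III) in sample = {c_sample:.2f} ug/mL")
```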
Feature selection (FS) constitutes a series of processes used to decide which relevant features/attributes to include and which irrelevant features to exclude from predictive modeling. It is a crucial task that helps machine learning classifiers reduce error rates, computation time, and overfitting, and improve classification accuracy. It has demonstrated its efficacy in a myriad of domains, ranging from text classification (TC) and text mining to image recognition. While there are many traditional FS methods, recent research efforts have been devoted to applying metaheuristic algorithms as FS techniques for the TC task. However, there are few literature reviews concerning TC. Therefore, a comprehensive overview was systematicall
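As a baseline for what FS does in a TC setting, here is a minimal sketch of a traditional filter-style method (chi-square scoring with scikit-learn); the tiny corpus and labels are assumed for illustration, and the metaheuristic wrappers surveyed in the review would replace this scoring step with a search over feature subsets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# Tiny illustrative corpus and class labels (assumed, not from the review).
docs = [
    "cheap pills buy now limited offer",
    "meeting agenda attached for project review",
    "win money prize click here now",
    "quarterly project budget and review notes",
]
labels = [1, 0, 1, 0]  # 1 = spam-like, 0 = work-like

# Bag-of-words features, then keep only the k terms most associated with the label.
vec = CountVectorizer().fit(docs)
X = vec.transform(docs)

selector = SelectKBest(chi2, k=5).fit(X, labels)
print("selected features:", vec.get_feature_names_out()[selector.get_support()])
```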
In recent years, there has been rapid growth in the vehicular sector and in the number of vehicles moving on the roads in all parts of the country. Vehicle number-plate identification based on image processing is an active area of this work; the technique is used for security purposes such as tracking stolen cars and controlling access to restricted areas. The License Plate Recognition System (LPRS) exploits a digital camera to capture vehicle plate images, which are used as input to the proposed recognition system. Basically, the developed system consists of three phases: vehicle license plate localization, character segmentation, and character recognition. License Plate (LP) detection is performed using the Canny edge detection algo
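A minimal sketch of the first phase only (plate localization from Canny edges plus a simple contour filter) is given below, assuming OpenCV and an illustrative input path; this is not the authors' implementation, and the segmentation and recognition phases are omitted.

```python
import cv2

# Illustrative input; the actual system reads frames from a digital camera.
image = cv2.imread("car.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)   # smooth while preserving edges

edges = cv2.Canny(gray, 30, 200)               # phase 1: Canny edge map

# Look for a roughly rectangular, plate-proportioned contour among the largest ones.
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:20]

plate_box = None
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    x, y, w, h = cv2.boundingRect(approx)
    if len(approx) == 4 and 2.0 < w / float(h) < 6.0:   # four corners, plate-like aspect
        plate_box = (x, y, w, h)
        break

print("candidate plate region:", plate_box)
```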
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Signal compression is based on the concept of selecting a small number of approximation coefficients produced by the wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample; these files are very small compared to the size of the original signals. The compression ratio is calculated from the size of th
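A compact sketch of the core steps (db4 approximation coefficients followed by Levinson-Durbin LP analysis) is shown below, assuming PyWavelets and a short synthetic stand-in signal rather than real speech; framing, quantization, and the compressed file format of the actual algorithm are omitted.

```python
import numpy as np
import pywt

def levinson_durbin(r, order):
    """Standard Levinson-Durbin recursion on autocorrelation r[0..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    reflection = np.zeros(order)
    for i in range(1, order + 1):
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k = -acc / err
        reflection[i - 1] = k
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a, reflection, err

# Synthetic stand-in for a speech frame (assumed, for illustration only).
fs = 8000
t = np.arange(0, 0.2, 1.0 / fs)
speech = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)

# Wavelet decomposition (db4); keep the approximation, discard the details.
coeffs = pywt.wavedec(speech, "db4", level=3)
approx = coeffs[0]

# Autocorrelation of the (rectangular-windowed) approximation, then LP analysis.
order = 10
r = np.correlate(approx, approx, mode="full")[len(approx) - 1: len(approx) + order]
lpc, refl, pred_err = levinson_durbin(r, order)

print("LP coefficients:", np.round(lpc, 3))
print("store", order + 1, "coefficients instead of", speech.size, "samples")
```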