In this paper, we investigate the automatic recognition of emotion in text. We perform experiments with a new classification method based on the PPM character-based text compression scheme. These experiments involve both coarse-grained classification (whether a text is emotional or not) and fine-grained classification, such as recognising Ekman’s six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results with three datasets show that the new method significantly outperforms traditional word-based text classification methods. The results show that the PPM compression-based classification method is able to distinguish between emotional and non-emotional text with high accuracy, between texts involving the Happiness and Sadness emotions (with 80% accuracy for Aman’s dataset and 76.7% for Alm’s dataset), and between texts involving Ekman’s six basic emotions for the LiveJournal dataset (87.8% accuracy). Results also show that the method outperforms traditional feature-based classifiers such as Naïve Bayes and SMO in most cases in terms of accuracy, precision, recall and F-measure.
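A minimal sketch of the compression-based classification idea: a document is assigned to the class whose training corpus compresses it most cheaply (lowest cross-entropy under that class's model). PPM itself is not available in the Python standard library, so bz2 stands in for it here, and the corpora are placeholders; this illustrates the principle rather than the paper's exact setup.

```python
import bz2

def compressed_size(data: bytes) -> int:
    """Length in bytes of the compressed representation."""
    return len(bz2.compress(data))

def classify(text: str, class_corpora: dict) -> str:
    """Assign `text` to the class whose training corpus compresses it best.

    For each class, measure how many extra bytes the unseen text adds when
    appended to that class's corpus; the class with the smallest increase
    (lowest cross-entropy under the compressor's model) wins.
    """
    best_label, best_cost = None, float("inf")
    for label, corpus in class_corpora.items():
        base = compressed_size(corpus.encode("utf-8"))
        joint = compressed_size((corpus + "\n" + text).encode("utf-8"))
        cost = joint - base
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

# Example: coarse-grained emotional vs. non-emotional classification
# (the corpora below are placeholder training text, not the paper's data)
corpora = {
    "emotional": "I am so thrilled and overjoyed about the wonderful news ...",
    "non-emotional": "The meeting is scheduled for 3 pm in the main office ...",
}
print(classify("What a wonderful surprise!", corpora))
```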
Soil compaction is one of the most harmful factors affecting soil structure, limiting plant growth and agricultural productivity. It is crucial to assess the degree of soil penetration resistance in order to find solutions to the harmful consequences of compaction. Obtaining representative values with a soil cone penetrometer requires time- and labor-intensive measurements. Currently, satellite technologies, electronic measurement and control systems, and computer software help to measure soil penetration resistance quickly and easily within the precision agriculture approach. The quantitative relationships between soil properties and the factors affecting their variability contribute to digital soil mapping. Digital soil maps use
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error introduced by the polynomial approximation. Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can achieve promising performance.
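A minimal sketch of the polynomial-approximation-plus-run-length idea for a single image block, assuming a first-order 2-D polynomial and a simple (value, count) run-length coder; the paper's actual block splitting, polynomial order and Huffman stage are not reproduced here.

```python
import numpy as np

def block_poly_residue(block: np.ndarray):
    """Fit a first-order 2-D polynomial a + b*x + c*y to one block (assumed
    order) and return the coefficients plus the integer residue block."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    approx = (A @ coeffs).reshape(h, w)
    residue = block.astype(int) - np.rint(approx).astype(int)
    return coeffs, residue

def run_length_encode(values):
    """Simple (value, count) run-length coding of a flattened residue."""
    out = []
    for v in values:
        v = int(v)
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

# Example on one 8x8 block of a synthetic image
block = (np.arange(64).reshape(8, 8) // 4).astype(np.uint8)
coeffs, residue = block_poly_residue(block)
rle = run_length_encode(residue.ravel())
# coeffs and rle would then be entropy-coded (e.g. Huffman) in a final stage.
```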
Currently, with the huge increase in modern communication and network applications, the speed of transferring data and storing it in compact form is a pressing issue. An enormous number of images are stored and shared among people every moment, especially in the social media realm, but even with these marvelous applications, the limited size of transmitted data is still the main restriction. Essentially all of these applications use the well-known Joint Photographic Experts Group (JPEG) standard techniques, so the construction of universally accepted standard compression systems is urgently required to play a key role in this immense revolution. This review is concerned with different
In this paper, a method is proposed to increase the compression ratio for color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions that contain important information, the compression ratio is reduced to prevent loss of information, while in smooth regions that do not contain important information, a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
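A minimal sketch of block-adaptive compression under assumed parameters: block variance stands in for the importance measure, and the quantization step size stands in for the compression ratio, so important (high-variance) blocks are preserved more faithfully than smooth ones. The paper's actual importance criterion and its wavelet/DCT coding are not reproduced here.

```python
import numpy as np

def block_importance(block: np.ndarray) -> float:
    """Local variance as a stand-in importance measure (assumed criterion)."""
    return float(np.var(block))

def adaptive_quantize(image: np.ndarray, block_size: int = 8,
                      threshold: float = 100.0) -> np.ndarray:
    """Apply coarse quantization (higher compression) to smooth blocks and
    fine quantization to important blocks of a grayscale image."""
    out = image.astype(np.float32).copy()
    h, w = image.shape
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            blk = out[y:y + block_size, x:x + block_size]
            step = 4 if block_importance(blk) > threshold else 32  # fine vs. coarse
            out[y:y + block_size, x:x + block_size] = np.round(blk / step) * step
    return out.clip(0, 255).astype(np.uint8)

# Example on a synthetic 64x64 image
img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
quantized = adaptive_quantize(img)
```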
The effect of the initial pressure on the laminar flame speed of methane-air mixtures has been investigated experimentally over a wide range of equivalence ratios. In this work, a measurement system is designed to measure the laminar flame speed using a constant volume method with a thermocouple technique. The laminar burning velocity is measured using the density ratio method. The comparison of the present results with previous ones shows good agreement, indicating that the measurements and calculations employed in the present work are successful and precise.
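For reference, the density ratio method commonly relates the laminar burning velocity to the observed flame-front speed through the ratio of burned to unburned gas densities; the relation below is a standard textbook form for an expanding flame and is stated here as a general illustration, not necessarily the exact expression used in this work:

```latex
S_u = S_f \, \frac{\rho_b}{\rho_u}
```

where $S_f$ is the measured flame-front speed, and $\rho_u$ and $\rho_b$ are the unburned and burned gas densities, respectively.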
In this paper, the Heun method has been used to find the numerical solution of a first-order nonlinear functional differential equation. Moreover, the method has been modified in order to treat systems of nonlinear functional differential equations. Two numerical examples are given to illustrate the results of this method.
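For clarity, the classical Heun predictor-corrector step is sketched below for a plain first-order equation y' = f(t, y); the paper's adaptation to nonlinear functional (delay) differential equations is not reproduced here.

```python
def heun(f, t0, y0, h, n):
    """Heun's predictor-corrector method for y' = f(t, y)."""
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n):
        k1 = f(t, y)                   # slope at the current point
        y_pred = y + h * k1            # Euler predictor
        k2 = f(t + h, y_pred)          # slope at the predicted point
        y = y + 0.5 * h * (k1 + k2)    # trapezoidal corrector
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

# Example: y' = -2*t*y**2 with y(0) = 1 (exact solution 1/(1 + t**2))
ts, ys = heun(lambda t, y: -2.0 * t * y * y, 0.0, 1.0, 0.1, 10)
```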
One of the principal concepts for understanding any hydrocarbon field is the scale of heterogeneity; this becomes particularly challenging in supergiant oil fields with medium to low lateral connectivity and carbonate reservoir rocks.
The main objective of this study is to quantify the heterogeneity for any well in question and to propagate it across the full reservoir. This is particularly useful prior to conducting detailed water-flooding or full-field development studies, in order to prepare design and exploitation requirements that fit the level of heterogeneity of the formation.
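As one concrete way to quantify per-well heterogeneity, the sketch below computes the Lorenz coefficient from layer permeability and thickness data. This index is assumed here for illustration; the study's actual heterogeneity measure and propagation workflow may differ.

```python
import numpy as np

def lorenz_coefficient(perm, thickness):
    """Lorenz coefficient: a common per-well heterogeneity index built from the
    flow-capacity vs. storage-capacity curve (0 = homogeneous, 1 = extreme
    heterogeneity). Thickness is used as a proxy for storage capacity."""
    perm = np.asarray(perm, dtype=float)
    thickness = np.asarray(thickness, dtype=float)
    order = np.argsort(perm)[::-1]                    # layers sorted by permeability, descending
    kh = np.cumsum(perm[order] * thickness[order])    # cumulative flow capacity
    h = np.cumsum(thickness[order])                   # cumulative storage capacity
    F = np.concatenate(([0.0], kh / kh[-1]))
    C = np.concatenate(([0.0], h / h[-1]))
    area = np.trapz(F, C)                             # area under the Lorenz curve
    return 2.0 * (area - 0.5)

# Example with layer permeabilities (mD) and thicknesses (m)
print(lorenz_coefficient([500, 120, 40, 5], [2.0, 3.0, 4.0, 1.0]))
```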
In this study, plain concrete simply supported beams subjected to two-point loading were analyzed for flexure. The numerical model of the beam was constructed in a meso-scale representation of concrete as a two-phase material (aggregate and mortar). The fracture process of the concrete beams under loading was investigated in the laboratory as well as with the numerical models. The Extended Finite Element Method (XFEM) was employed to treat the discontinuities that appeared during the fracture process in concrete. The finite element method with the standard/explicit feature was utilized for the numerical analysis. Aggregate particles were assumed to be of elliptic shape. Other properties such as grading and sizes of the aggr
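A rough sketch of how elliptic aggregate particles might be randomly placed in a 2-D meso-scale domain, using a conservative bounding-circle overlap check; the grading curve, particle sizes and mesh generation used in the study are assumptions and are not reproduced here.

```python
import random

def place_elliptic_aggregates(width, height, n, a_range=(4.0, 10.0),
                              aspect=0.7, max_tries=2000):
    """Randomly place non-overlapping elliptic aggregate particles inside a
    2-D beam domain (illustrative sketch only)."""
    particles = []
    tries = 0
    while len(particles) < n and tries < max_tries:
        tries += 1
        a = random.uniform(*a_range)          # semi-major axis
        b = a * aspect                        # semi-minor axis
        x = random.uniform(a, width - a)
        y = random.uniform(a, height - a)
        # crude overlap check using bounding circles of radius a
        if all((x - px) ** 2 + (y - py) ** 2 > (a + pa) ** 2
               for px, py, pa, _ in particles):
            particles.append((x, y, a, b))
    return particles

# Example: 40 aggregates in a 400 mm x 100 mm beam section (assumed dimensions)
aggregates = place_elliptic_aggregates(400.0, 100.0, 40)
```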