Estimating the semantic similarity between short texts plays an increasingly prominent role in many fields related to text mining and natural language processing, especially with the large daily increase in the volume of textual data. Traditional approaches that calculate the degree of similarity between two texts from the words they share do not perform well on short texts, because two similar texts may be written with different terms by employing synonyms; short texts therefore need to be compared semantically. This paper presents a method for measuring semantic similarity between texts that combines knowledge-based and corpus-based semantic information to build a semantic network representing the relationship between the compared texts, from which the degree of similarity is extracted. Representing a text as a semantic network is a knowledge representation that comes close to the human mind's understanding of text: the network reflects the sentence's semantic, syntactic, and structural knowledge, and serves as a visual representation of knowledge objects, their qualities, and their relationships. The WordNet lexical database is used as the knowledge-based source, while GloVe pre-trained word-embedding vectors are used as the corpus-based source. The proposed method was tested on three datasets: DSCS, SICK, and MOHLER. Good results were obtained in terms of RMSE and MAE.
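As a rough illustration of the hybrid idea, the sketch below scores a word pair with a knowledge-based signal (WordNet path similarity via NLTK) and a corpus-based signal (cosine similarity of GloVe vectors), then mixes the two. The mixing weight alpha, the plain dict of pre-loaded GloVe vectors, and the choice of path similarity are illustrative assumptions, not the paper's exact semantic-network construction.

```python
import numpy as np
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet') once

def wordnet_sim(w1: str, w2: str) -> float:
    """Knowledge-based score: best path similarity over all synset pairs."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

def glove_sim(w1: str, w2: str, vectors: dict) -> float:
    """Corpus-based score: cosine similarity of pre-trained GloVe vectors."""
    if w1 not in vectors or w2 not in vectors:
        return 0.0
    v1, v2 = vectors[w1], vectors[w2]
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def hybrid_sim(w1: str, w2: str, vectors: dict, alpha: float = 0.5) -> float:
    # alpha is a hypothetical mixing weight, not taken from the paper
    return alpha * wordnet_sim(w1, w2) + (1 - alpha) * glove_sim(w1, w2, vectors)
```

With GloVe vectors loaded into a dict (word to numpy array), a synonym pair such as "car" and "automobile" scores high on both signals, which is exactly the case that word-overlap measures miss.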
Steganography is the art of hiding the very presence of communication by embedding a secret message into an innocuous-looking cover document, such as a digital image, video, sound file, or other computer file containing perceptually irrelevant or redundant information that serves as the cover or carrier for the secret message.
In this paper, a new Least Significant Bit (LSB) non-sequential embedding technique for wave audio files is introduced. To strengthen the immunity of the proposed hiding system, and to remedy some weaknesses inherent in a plain implementation of stego-systems, several auxiliary processes were suggested and investigated, including a hidden-text jumping process and a stream-ciphering algorithm. Besides, the suggested …
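For context, the sketch below shows the plain sequential LSB embedding that such stego-systems start from, using only Python's standard-library wave module. The paper's non-sequential jumping and stream-ciphering stages are not reproduced, and the file paths are placeholders.

```python
import wave

def embed_lsb(cover_path: str, stego_path: str, message: bytes) -> None:
    """Hide `message` in the least significant bits of a WAV file's samples."""
    with wave.open(cover_path, 'rb') as cover:
        params = cover.getparams()
        frames = bytearray(cover.readframes(cover.getnframes()))

    # Flatten the message into individual bits, least significant first.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(frames):
        raise ValueError("message too long for this cover file")

    for i, bit in enumerate(bits):
        frames[i] = (frames[i] & 0xFE) | bit  # overwrite the lowest bit only

    with wave.open(stego_path, 'wb') as stego:
        stego.setparams(params)
        stego.writeframes(bytes(frames))
```

Because only the lowest bit of each sample byte changes, the audible difference is negligible; the non-sequential variant additionally scatters the bit positions so the payload is harder to locate.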
Cuneiform images require many processing steps before their contents can be recognised; image enhancement is used to clarify the objects (symbols) found in the image. The vector used to classify each symbol, called the symbol structural vector (SSV), is built from the information wedges within the symbol. The experimental tests cover a number of samples of varying relevance, including various drawings, using an online method. The results show high accuracy, and the methods and algorithms were programmed in Visual Basic 6.0. More than one method was applied to extract information from the digital images of cuneiform tablets, in order to identify most of the signs of Sumerian cuneiform.
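A minimal sketch of the enhancement step, assuming OpenCV and a hypothetical image path: histogram equalisation clarifies faint wedges, and Otsu binarisation isolates the symbols from the tablet surface. The SSV construction itself is not described in enough detail to reproduce here.

```python
import cv2

tablet = cv2.imread("cuneiform_tablet.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

enhanced = cv2.equalizeHist(tablet)                  # boost contrast of faint wedges
_, symbols = cv2.threshold(enhanced, 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # separate symbols from background
```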
Chaotic systems have proved to be useful and effective for cryptography. In this work, a new Feistel cipher based on chaotic systems and the Feistel network structure is proposed, with a secret-key size that adapts dynamically to the message size. In classical Feistel-structure ciphers, of which the Data Encryption Standard (DES) is the common example, the permutation choice boxes and substitution choice boxes that provide confusion and diffusion are generated once and are therefore static. The suggested system instead uses chaotic maps, called …
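The sketch below illustrates the general idea with a toy Feistel network whose round keys are derived from a logistic chaotic map. The round function, the map parameters x0 and r, and the 64-bit block size are illustrative assumptions; the proposed cipher's dynamic substitution and permutation boxes are not reproduced.

```python
def logistic_keys(x0: float, r: float, rounds: int) -> list:
    """Derive one 32-bit round key per round from the logistic map x = r*x*(1-x)."""
    keys, x = [], x0
    for _ in range(rounds):
        x = r * x * (1 - x)
        keys.append(int(x * 2**32) & 0xFFFFFFFF)
    return keys

def feistel_encrypt(block: int, keys: list) -> int:
    """Encrypt a 64-bit block split into two 32-bit halves."""
    L, R = block >> 32, block & 0xFFFFFFFF
    for k in keys:
        f = ((R ^ k) * 0x9E3779B1) & 0xFFFFFFFF  # simple round function (assumption)
        L, R = R, L ^ f
    return (L << 32) | R

def feistel_decrypt(block: int, keys: list) -> int:
    """Invert the rounds by applying the round keys in reverse order."""
    L, R = block >> 32, block & 0xFFFFFFFF
    for k in reversed(keys):
        f = ((L ^ k) * 0x9E3779B1) & 0xFFFFFFFF
        L, R = R ^ f, L
    return (L << 32) | R

keys = logistic_keys(x0=0.61803, r=3.99, rounds=16)  # r near 4 keeps the map chaotic
assert feistel_decrypt(feistel_encrypt(0xDEADBEEF01234567, keys), keys) == 0xDEADBEEF01234567
```

Because the key stream depends on the chaotic seed (x0, r), changing either value regenerates all round keys, which is the property that makes the boxes dynamic rather than static.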
The aim of this research is to investigate the effect of cold plasma on bacteria grown on the texture of sesame paste at both its normal particle and nanoparticle sizes. First, threshold-based image segmentation is used to remove the reflections of the glass slides on which the sesame samples are placed. A classification process then separates normal sesame-paste texture from abnormal texture; the abnormal texture appears after bacteria have grown on the paste, once it has been left in the open air for two days. Unsupervised k-means classification is used to distinguish the infected region, the normal region, and the treated region. The bacteria were treated with cold plasma …
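A minimal sketch of these two steps, assuming OpenCV and a hypothetical grayscale micrograph: thresholding masks out the bright glass-slide reflections, then unsupervised k-means with k = 3 partitions the remaining pixels into three intensity classes standing in for the infected, normal, and treated regions. All parameter values are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("sesame_sample.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# 1) Threshold segmentation: mask out bright glass-slide reflections.
_, mask = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)
texture = cv2.bitwise_and(img, img, mask=mask)

# 2) Unsupervised k-means over the remaining pixel intensities, k = 3 classes.
pixels = texture[mask > 0].reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)
```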
The aim of this study is to examine three short stories from the first collection of Aharon Appelfeld, which is considered the official declaration of his standing as the first Holocaust writer in Israel. Common to the three stories is the tone of protagonists who set out on a physical and emotional journey to discover what happened to them. The study seeks to examine the distinctive way in which Appelfeld characterised his protagonists, who attest that the Holocaust was the bitterest and hardest experience among the Jews, and that its consequences remained embedded within them even after the war and after many years.
Natural gas and oil are mainstays of the global economy. However, the pipelines that transport these resources face many issues, including aging infrastructure, environmental impacts, and vulnerability to sabotage. Such issues can result in leakages, and detecting and pinpointing their locations requires significant effort. The objective of this project is to develop and implement a method for detecting oil spills caused by leaking oil pipelines, using aerial images captured by a drone equipped with a Raspberry Pi 4. Using the Message Queuing Telemetry Transport Internet of Things (MQTT IoT) protocol, the acquired images and the Global Positioning System (GPS) coordinates of each image's acquisition point are …
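A hedged sketch of the transmission step using the paho-mqtt client (1.x API): the drone publishes a captured frame and its GPS fix to an MQTT broker. The broker address, topic names, and payload layout are assumptions for illustration, not the project's exact protocol.

```python
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()                        # paho-mqtt 1.x constructor
client.connect("broker.example.com", 1883)    # placeholder broker address

with open("frame_0001.jpg", "rb") as f:       # image captured by the drone camera
    client.publish("pipeline/images", f.read(), qos=1)

fix = {"lat": 33.3152, "lon": 44.3661}        # illustrative GPS coordinates
client.publish("pipeline/gps", json.dumps(fix), qos=1)
client.disconnect()
```

QoS 1 gives at-least-once delivery, a reasonable choice over an unreliable aerial link where dropped images would leave spill locations unreported.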
Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text-categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to English, only a few studies have been done on categorizing and classifying the Arabic language. Arabic text representation is a difficult task for applications such as text classification and clustering, because the language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis of, and comparison between, studies from the last five years based on the dataset, year, algorithms, and the accuracy they achieved.
Skill testing in most sport activities monitors many of the dynamic behaviours that arise in playing situations, helping experts in the sports field to evaluate performance and devise appropriate solutions. Soccer is one of the games studied in the third stage at the college, where the skills of dribbling, passing, and shooting are taught; these skills help players execute game plans. The researchers noticed that no test existed at the beginning of the first semester to measure the game's skills, particularly in the soccer course at the College of Physical Education. The research problem was therefore posed as a question: is there a test linking one or more skills that measures students' ability to execute plans in soccer? The conclusion was the building of …
In this paper, we investigate the automatic recognition of emotion in text. We perform experiments with a new classification method based on the PPM character-based text-compression scheme. These experiments involve both coarse-grained classification (whether a text is emotional or not) and fine-grained classification, such as recognising Ekman's six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results on three datasets show that the new method significantly outperforms traditional word-based text-classification methods. The results show that the PPM compression-based classification method is able to distinguish between emotional and non-emotional text with high accuracy, and between texts invo…
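A minimal sketch of compression-based classification follows. The paper uses a character-based PPM model; here zlib stands in as the compressor purely so the sketch runs on the standard library. A test text is assigned the class whose training corpus "explains" it best, i.e. whose compression cost grows least when the text is appended.

```python
import zlib

def extra_bytes(corpus: bytes, text: bytes) -> int:
    """Cost of encoding `text` given `corpus` (a rough cross-entropy proxy)."""
    return len(zlib.compress(corpus + text)) - len(zlib.compress(corpus))

def classify(text: str, corpora: dict) -> str:
    """Pick the class whose training corpus best predicts the test text."""
    return min(corpora,
               key=lambda c: extra_bytes(corpora[c].encode(), text.encode()))

# Toy usage with two hypothetical classes:
corpora = {"emotional": "I am so happy and thrilled and delighted today",
           "neutral":   "The meeting is scheduled for Tuesday at nine"}
print(classify("What a wonderful, happy surprise!", corpora))
```

The same scheme extends to fine-grained labels by keeping one training corpus per emotion; a character-level model such as PPM captures affixes and punctuation that word-based features miss.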
Automatic recognition of individuals is very important in the modern era, and biometric techniques have emerged as an answer to the problem of automatic individual recognition. This paper presents a pupil-detection technique that combines simple morphological operations with the Hough Transform (HT). The circular area of the eye and pupil is segmented by the morphological filter together with the HT, and the local iris area is converted into a rectangular block for the purpose of calculating inconsistencies in the image. The method was implemented and tested on the Chinese Academy of Sciences (CASIA V4) iris-image database (249 persons) and the IIT Delhi (IITD) iris …
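A hedged sketch of such a pipeline with OpenCV: a morphological opening cleans eyelash specks, then a circular Hough Transform locates the pupil. All parameter values and the image path are illustrative, not the paper's settings.

```python
import cv2
import numpy as np

eye = cv2.imread("iris_image.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Morphological opening removes small dark specks (eyelashes) before circle detection.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
clean = cv2.morphologyEx(eye, cv2.MORPH_OPEN, kernel)

# Circular Hough Transform over an assumed pupil radius range.
circles = cv2.HoughCircles(clean, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30, minRadius=20, maxRadius=80)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print(f"pupil centre=({x}, {y}), radius={r}")
```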