Estimating the semantic similarity between short texts plays an increasingly prominent role in many fields of text mining and natural language processing, especially given the large volume of textual data produced daily. Traditional approaches that calculate the degree of similarity between two texts from the words they share do not perform well on short texts, because two similar texts may be written with different terms through the use of synonyms. As a result, short texts should be compared semantically. In this paper, a semantic similarity measurement method between texts is presented which combines knowledge-based and corpus-based semantic information to build a semantic network that repre…
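As a toy illustration of why knowledge-based information helps with short texts, the sketch below maps synonyms to a canonical form before computing a cosine similarity, so two texts written with different words can still match. This is not the paper's method; the synonym table and tokenizer are invented for illustration.

```python
import math
from collections import Counter

# Toy synonym table standing in for a knowledge base such as WordNet
# (hypothetical entries, for illustration only).
SYNONYMS = {"car": "automobile", "buy": "purchase"}

def normalize(text):
    # Lowercase, tokenize on whitespace, and map synonyms to a canonical form.
    return [SYNONYMS.get(w, w) for w in text.lower().split()]

def cosine_similarity(a, b):
    # Cosine similarity over bag-of-words vectors of the normalized tokens.
    va, vb = Counter(normalize(a)), Counter(normalize(b))
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("I will buy a car", "I will purchase an automobile"))  # → 0.8
```

Without the synonym step, "buy a car" and "purchase an automobile" would share no content words and score near zero; the knowledge-based mapping recovers their semantic overlap.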
The new media scene reveals that the unprecedented overlap of a number of technical, economic, and political factors has made new media a very complicated issue, and the focus of specialized and public debates about its impact on traditional means of communication and on forms of social media and social relations. The same scene also discloses the reality of the relationship between the new and the traditional. These are the axes that will be discussed in this study.
The researchers of the present study have conducted a genre analysis of two political debates between American presidential nominees in the 2016 and 2020 elections. The study analyzes the cognitive construction of political debates to evaluate the typical moves and strategies politicians use to express their communicative intentions and to reveal the linguistic manifestations of those moves and strategies. To achieve the study's aims, the researchers adopt Bhatia's (1993) framework of cognitive construction, supported by van Eemeren's (2010) pragma-dialectical framework. The study demonstrates that both presidents adhere to this genre structuring to further their political agendas. For a positive and promising image…
In modern technology, ownership of electronic data is the key to securing one's privacy and identity against tracing or interference. A new class of identity management system, Digital Identity Management, implemented in recent years, acts as a holder of identity data to maintain the holder's privacy and prevent identity theft. However, an overwhelming number of users face two major problems: users who own their data must still let third-party applications handle it, and other users have no ownership of their data at all. Maintaining these identities is a challenge today. This paper proposes a system that addresses the problem using blockchain technology for Digital Identity Management systems. Blockchain is a powerful techniqu…
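The tamper-evidence property that makes blockchain attractive for identity management can be sketched with a minimal hash chain, in which each block binds a record to the hash of its predecessor. This is a simplified illustration, not the proposed system; the identity records are invented.

```python
import hashlib
import json

def _digest(body):
    # Deterministic SHA-256 digest of a block body.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(record, prev_hash):
    # Bind an identity record to the hash of the previous block.
    body = {"record": record, "prev": prev_hash}
    return {**body, "hash": _digest(body)}

def verify(chain):
    # A chain is valid iff every block's hash matches its contents
    # and every block points at the hash of its predecessor.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != _digest(body):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical identity records (invented for illustration).
chain = [make_block({"user": "alice", "claim": "born 1990"}, prev_hash="0" * 64)]
chain.append(make_block({"user": "alice", "claim": "licensed driver"}, chain[-1]["hash"]))
print(verify(chain))  # → True
```

Altering any stored record changes its digest and breaks every later link, which is why the ledger can detect interference with identity data.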
Cassava, a significant crop in Africa, Asia, and South America, is a staple food for millions. However, classifying cassava species using conventional color, texture, and shape features is inefficient, as cassava leaves exhibit similarities across different types, including toxic and non-toxic varieties. This research aims to overcome the limitations of traditional classification methods by employing deep learning techniques with a pre-trained AlexNet as the feature extractor to accurately classify four types of cassava: Gajah, Manggu, Kapok, and Beracun. The data was collected from local farms in Lamongan, Indonesia. Gathered in cooperation with agricultural research experts, the dataset consists of 1,400 images, and each type of cassava has…
In the present work, bentonite clay was used as an adsorbent for the removal of a newly prepared mono azo dye, 4-[6-bromo benzothiazolyl azo] thymol (BTAT), using the batch adsorption method. The effects of several factors, such as adsorption time, adsorbent weight, initial BTAT concentration, and temperature, were studied. The equilibrium adsorption data were described using the Langmuir and Freundlich adsorption isotherms. Based on the kinetics study, it was found that the adsorption process follows pseudo-second-order kinetics. Thermodynamic parameters, namely the Gibbs free energy ∆Gᵒ, entropy ∆Sᵒ, and enthalpy ∆Hᵒ, were also determined using the van't Hoff plot.
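For reference, the standard linearized forms behind the models named in this abstract (the abstract does not state which linearizations were used, so these are the conventional textbook expressions) are:

```latex
% Langmuir isotherm (linearized):
\frac{C_e}{q_e} = \frac{1}{K_L\, q_{\max}} + \frac{C_e}{q_{\max}}

% Freundlich isotherm (linearized):
\ln q_e = \ln K_F + \frac{1}{n}\ln C_e

% Pseudo-second-order kinetics (linearized):
\frac{t}{q_t} = \frac{1}{k_2\, q_e^2} + \frac{t}{q_e}

% van't Hoff plot and Gibbs relation:
\ln K = \frac{\Delta S^\circ}{R} - \frac{\Delta H^\circ}{R\,T},
\qquad \Delta G^\circ = \Delta H^\circ - T\,\Delta S^\circ
```

Here $C_e$ is the equilibrium dye concentration, $q_e$ and $q_t$ the amounts adsorbed at equilibrium and at time $t$, and $K$ the equilibrium constant; plotting $\ln K$ against $1/T$ yields $\Delta H^\circ$ from the slope and $\Delta S^\circ$ from the intercept.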
A resume is the first impression between you and a potential employer. Therefore, the importance of a resume cannot be overstated. Selecting the right candidates for a job within a company can be a daunting task for recruiters when they have to review hundreds of resumes. To reduce time and effort, we can use NLTK and Natural Language Processing (NLP) techniques to extract essential data from a resume. NLTK is a free, open-source, community-driven project and the leading platform for building Python programs that work with human language data. To select the best resume according to the company's requirements, an algorithm such as KNN is used. To be selected from hundreds of resumes, your resume must be one of the best. Theref…
Composition, depiction, antibacterial, antioxidant, and cytotoxicity activity studies of new nano-sized binuclear metal(II) Schiff base complexes