In the present paper, the researcher attempts to shed light on the objective behind Al-Zahraa's (Peace Be Upon Her) insertion of Qur'anic verses in her revered speech. Besides, it tries to investigate the hidden meaning of these verses and to study them in the light of pragmatic references. This task is supported by Books of Tafseer as well as the books that explain this speech, in order to arrive at its intended meaning. It is possible to say that this is a step towards studying the speeches of 'Ahlul Bayt' (People of the Prophet's Household) in terms of modern linguistic studies, as well as employing modern methods to explore the aesthetic values of these texts.
Aspect categorisation and its utmost importance in the field of Aspect-based Sentiment Analysis (ABSA) have encouraged researchers to improve topic-model performance for grouping aspects into categories. In general, the majority of current methods implement parametric models that require a pre-determined number of topics beforehand. However, this is not efficiently undertaken with unannotated text data, as such data lack any class label. Therefore, the current work presented a novel non-parametric model that draws the number of topics from the semantic association between opinion targets (i.e., aspects) and their respective expressed sentiments. The model incorporated the Semantic Association Rules (SAR) into the Hierarchical Dirichlet Process
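The non-parametric behaviour described in this abstract (no fixed topic count) is typically obtained through a Dirichlet-process prior. As an illustrative toy, and not the authors' model, the Chinese Restaurant Process below shows how the number of clusters grows with the data rather than being fixed in advance; the concentration parameter `alpha` and the seed are arbitrary assumptions:

```python
import random

def crp(n_customers, alpha, seed=0):
    """Chinese Restaurant Process: each customer joins an existing table
    with probability proportional to its occupancy, or opens a new table
    with probability proportional to alpha. Tables ~ topics."""
    rng = random.Random(seed)
    tables = []       # occupancy count per table
    assignments = []  # table index chosen by each customer
    for i in range(n_customers):
        r = rng.random() * (i + alpha)
        acc, chosen = 0.0, None
        for t, count in enumerate(tables):
            acc += count
            if r < acc:
                chosen = t
                break
        if chosen is None:          # open a new table (new topic)
            chosen = len(tables)
            tables.append(0)
        tables[chosen] += 1
        assignments.append(chosen)
    return tables, assignments
```

The number of tables returned varies with the data and with `alpha`, which is exactly the property that lets HDP-style models avoid pre-specifying a topic count.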
Advances in digital technology and the World Wide Web have led to an increase in digital documents used for various purposes such as publishing and digital libraries. This phenomenon raises awareness of the need for effective techniques that can help during the search and retrieval of text. One of the most needed tasks is clustering, which categorizes documents automatically into meaningful groups. Clustering is an important task in data mining and machine learning, and its accuracy depends tightly on the choice of text representation method. Traditional methods of text representation model documents as bags of words using term frequency-inverse document frequency (TF-IDF). This method ignores the relationship an
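The TF-IDF weighting mentioned above can be sketched in a few lines of pure Python. This is a minimal textbook formulation (raw term frequency times log inverse document frequency), not the paper's implementation, and the toy documents are invented for illustration:

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per document."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return vecs
```

Note that a term occurring in every document (e.g. a stop word) receives weight `log(n/n) = 0`, which is why TF-IDF downweights uninformative words; it still treats each term independently, which is the limitation the abstract alludes to.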
Secure information transmission over the internet is becoming an important requirement in data communication. These days, authenticity, secrecy, and confidentiality are the most important concerns in securing data communication. For that reason, information-hiding methods such as cryptography, steganography, and watermarking are used to secure data transmission: cryptography encrypts the information into an unreadable form, steganography conceals the information within images, audio, or video, and watermarking protects information from intruders. This paper proposed a new cryptography method by using thre
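Of the three techniques named above, steganography is the easiest to illustrate compactly. The sketch below is a generic least-significant-bit (LSB) scheme on raw pixel bytes, offered only as an illustration of "covering information within images"; it is not the method proposed in the paper:

```python
def embed_byte(pixels, byte):
    """Hide one byte in the LSBs of 8 cover-pixel bytes (MSB of `byte` first)."""
    out = list(pixels)
    for i in range(8):
        bit = (byte >> (7 - i)) & 1
        out[i] = (out[i] & 0xFE) | bit   # clear LSB, then set it to the hidden bit
    return out

def extract_byte(pixels):
    """Recover the hidden byte from the LSBs of 8 stego-pixel bytes."""
    b = 0
    for i in range(8):
        b = (b << 1) | (pixels[i] & 1)
    return b
```

Each cover byte changes by at most 1, which is why LSB embedding is visually imperceptible in typical 8-bit images.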
Hate speech (henceforth HS) has recently spread and become an important issue. This type of speech in children's writings has a particular formulation and specific objectives that the authors intend to convey. Thus, the study aims at examining, qualitatively and quantitatively, classist HS and its pragmatic functions by identifying the speech acts used to express classist HS, the implicature instigated, as well as impoliteness. Since pragmatics is the study of language in context, which is closely related to situations and the speaker's intention, this study depends on pragmatic theories (speech acts, impoliteness, and conversational implicature) to analyze the data, which are taken from Katherine Mansfield's short story (The D
This study examines strategies of cultural domestication in Muravyov and Kistyakovsky's Russian translation of The Fellowship of the Ring. It documents transformations of character names, toponyms, dialogues, and cultural references, highlighting the systematic Russification and infusion of Soviet political commentary that reshape the text into a cultural rewriting.
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding (LPC). Compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while discarding the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample, and are very small in size compared to the original signals. The compression ratio is calculated from the size of th
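The Levinson-Durbin step named above can be sketched in pure Python. This is the standard textbook recursion over an autocorrelation sequence, not the authors' implementation; the toy autocorrelation values in the test are arbitrary assumptions:

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for LPC.
    r: autocorrelation sequence, r[0..order]. Returns
    (a, refl, err): prediction polynomial A(z) = 1 + a[1]z^-1 + ...,
    reflection coefficients, and final prediction-error power."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    refl = []
    for i in range(1, order + 1):
        # correlation of the current predictor with the next lag
        acc = sum(a[j] * r[i - j] for j in range(i))
        k = -acc / err
        refl.append(k)
        new_a = a[:]                    # update coefficients symmetrically
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)            # error shrinks at every order
    return a, refl, err
```

For an AR(1)-like autocorrelation such as `r = [1.0, 0.5, 0.25]`, the order-2 coefficient comes out as zero: higher-order terms add nothing once the true model order is reached, which is what makes the recursion useful for choosing the predictor order.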