Today, with the increasing use of social media, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured, and noisy, which makes it difficult to discover topics in them. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from long documents such as articles and books, so they are often less effective when applied to short texts such as tweets. Fortunately, Twitter offers many features that capture interaction between users; in particular, tweets carry rich user-generated hashtags that act as keywords. In this paper, we exploit the hashtag feature to improve the topics learned from Twitter content without modifying the underlying LSA and LDA models, on the assumption that users who share the same hashtag usually discuss the same topic. We compare the performance of the two methods (LSA and LDA) using topic coherence, with and without hashtags. The experimental results on a Twitter dataset show that LSA achieves a better coherence score with hashtags than without them, whereas LDA achieves a better coherence score without incorporating hashtags. Overall, LDA achieves a better coherence score than LSA: the best coherence obtained by LDA was 0.6047 versus 0.4744 for LSA, but LDA required a larger number of topics than LSA, so it may split tweets that discuss the same subject into different clusters.
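The hashtag idea described in this abstract (tweets sharing a hashtag tend to discuss the same topic, so they can be merged into longer pseudo-documents before LSA/LDA is run) can be sketched as follows. The function name and whitespace tokenization are illustrative assumptions, not the paper's actual code.

```python
from collections import defaultdict

def pool_by_hashtag(tweets):
    """Merge tweets that share a hashtag into one pseudo-document so the
    topic model sees longer, more coherent texts (illustrative sketch)."""
    pools = defaultdict(list)   # hashtag -> pooled word list
    singles = []                # tweets with no hashtag stay separate
    for tweet in tweets:
        tokens = tweet.lower().split()
        tags = [t for t in tokens if t.startswith("#")]
        words = [t for t in tokens if not t.startswith("#")]
        if tags:
            for tag in tags:
                pools[tag].extend(words)
        else:
            singles.append(words)
    return list(pools.values()) + singles

# Two #ai tweets collapse into one document; the third stays alone.
docs = pool_by_hashtag(["#ai deep learning", "#ai neural nets", "nice weather"])
```
The resulting documents would then be fed to LSA or LDA exactly as ordinary documents, which is why the base models need no modification.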
Non-Orthogonal Multiple Access (NOMA) is a multiple-access technique that allows multiple users to share the same communication resources, increasing spectral efficiency and throughput. NOMA has been shown to provide significant performance gains over orthogonal multiple access (OMA) in terms of spectral efficiency and throughput. In this paper, two NOMA scenarios are analyzed and simulated, involving two users and multiple users (four users), to evaluate NOMA's performance. The simulation results indicate that the achievable sum rate is 16.7 bps/Hz for the two-user scenario and 20.69 bps/Hz for the multi-user scenario at a transmit power of 25 dBm. The BER for the two-user scenario is 0.004202 and 0.001564 for
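As a hedged illustration of the two-user downlink NOMA computation this abstract evaluates, the sketch below computes a sum rate from superposition coding and successive interference cancellation (SIC). The channel gains, noise level, and power split are illustrative assumptions, not the paper's simulation parameters.

```python
import math

def noma_sum_rate(p_tx_dbm, h_near, h_far, alpha_near=0.25, noise_dbm=-90.0):
    """Two-user downlink NOMA sum rate in bps/Hz (illustrative sketch).
    alpha_near: fraction of power allocated to the near (strong) user."""
    p = 10 ** ((p_tx_dbm - 30) / 10)      # transmit power in watts
    n0 = 10 ** ((noise_dbm - 30) / 10)    # noise power in watts
    p_near, p_far = alpha_near * p, (1 - alpha_near) * p
    # Far user decodes its own signal, treating the near user's as noise.
    sinr_far = (p_far * abs(h_far) ** 2) / (p_near * abs(h_far) ** 2 + n0)
    # Near user cancels the far user's signal via SIC, then decodes its own.
    snr_near = (p_near * abs(h_near) ** 2) / n0
    return math.log2(1 + sinr_far) + math.log2(1 + snr_near)
```
The near user gets the smaller power share because its stronger channel lets it first decode and subtract the far user's high-power signal, which is the core trade-off NOMA exploits over OMA.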
Computer systems and networks are used in almost every aspect of our daily life; as a result, security threats to computers and networks have also increased significantly. Traditionally, password-based user authentication is widely used to authenticate legitimate users, but this method has many loopholes, such as password sharing, shoulder surfing, brute-force attacks, dictionary attacks, guessing, phishing, and many more. The aim of this paper is to enhance the password authentication method by presenting keystroke dynamics with a back-propagation neural network as a transparent layer of user authentication. Keystroke dynamics is one of the well-known and inexpensive behavioral biometric technologies, which identi
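The features a keystroke-dynamics system typically feeds to a back-propagation network are dwell times (how long a key is held) and flight times (the gap between releasing one key and pressing the next). A minimal sketch of that feature extraction, with a hypothetical event format, follows; it is not the paper's implementation.

```python
def keystroke_features(events):
    """events: list of (key, press_time_ms, release_time_ms), ordered by
    press time. Returns dwell times followed by flight times, the usual
    input vector for a keystroke-dynamics classifier (illustrative)."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]   # next press - this release
              for i in range(len(events) - 1)]
    return dwell + flight
```
Because these timings are collected while the user types a normal password, the biometric layer stays transparent: the login flow is unchanged, and the vector is simply checked against the user's trained typing profile.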
In this work we present a technique to extract heart contours from noisy echocardiographic images. Our technique improves the image before applying contour detection, in order to reduce heavy noise and obtain better image quality. To do this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiographic images. After applying these techniques, we obtain clear detection of heart boundaries and valve movement using traditional edge-detection methods.
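The filtering step in such a pipeline is commonly a median filter, which suppresses the impulse-like speckle noise of ultrasound frames while preserving edges better than averaging. A minimal pure-Python sketch of a 3x3 median filter (assumed here as the filtering stage; the paper does not specify its exact filter) is:

```python
def median_filter3(img):
    """3x3 median filter on a 2D list of gray values; border pixels are
    left unchanged in this sketch."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sorted(window)[4]   # median of the 9 values
    return out
```
After denoising, morphological opening/closing and contrast stretching would be applied before the traditional edge detector, matching the order the abstract describes.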
Image fusion is one of the most important techniques in digital image processing; it involves developing software to integrate multiple data sets for the same location. It is one of the newer approaches adopted to solve problems in digital imaging and to produce high-quality images that contain more information for the purposes of interpretation, classification, segmentation, compression, etc. In this research, problems faced by digital images, such as multi-focus images, are addressed through a simulation process using a camera to fuse various digital images based on previously adopted fusion techniques such as arithmetic techniques (BT, CNT, and MLT), statistical techniques (LMM,
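The simplest member of the arithmetic fusion family mentioned above is pixel-wise averaging of two registered images; the named techniques (BT, CNT, MLT) replace the mean with other pixel-level combinations. A minimal sketch, assuming images as 2D lists of gray values, follows; it is an illustration, not the paper's method.

```python
def average_fusion(img_a, img_b):
    """Pixel-wise arithmetic mean of two registered, same-size images,
    the simplest arithmetic fusion rule (illustrative sketch)."""
    return [[(a + b) // 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

fused = average_fusion([[100, 200]], [[50, 100]])   # one-row example
```
For multi-focus inputs, such pixel-level rules aim to carry the in-focus detail of each source image into a single, fully informative output.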
This work aims to develop a secure lightweight cipher algorithm for constrained devices. Secure communication among constrained devices is a critical issue during data transmission from client to server devices. Lightweight cipher algorithms are defined as a security solution for constrained devices, requiring low computational cost and little memory. However, most lightweight algorithms suffer from a trade-off between complexity and speed when producing a robust cipher. The PRESENT cipher has been successfully experimented on as a lightweight cryptographic algorithm, which outperforms other ciphers in terms of computational processing, requiring only low-complexity operations. The mathematical model of
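PRESENT's low cost comes from two simple round operations defined in its published specification: a single 4-bit S-box applied to all 16 nibbles of the 64-bit state, and a fixed bit permutation. The sketch below shows only these two layers; the full cipher additionally needs the key schedule and 31 rounds of addRoundKey, which are omitted here.

```python
# PRESENT's 4-bit S-box, from the published specification.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    """Apply the S-box to each of the 16 nibbles of the 64-bit state."""
    out = 0
    for i in range(16):
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    return out

def p_layer(state):
    """PRESENT bit permutation: bit i moves to position 16*i mod 63,
    with bit 63 fixed in place."""
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out
```
Both layers are table lookups and bit moves, which is exactly the kind of low-complexity operation the abstract credits for PRESENT's suitability on constrained hardware.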
The aim of this work is to design an algorithm that combines steganography and cryptography and can hide a text in an image in a way that prevents, as much as possible, any suspicion of the hidden text. The proposed system prepares the image data for the next step (DCT quantization) through the steganographic process and uses two levels of security: the RSA algorithm and a digital signature; the image is then stored in JPEG format. In this case, the secret message is treated as plaintext with a digital signature, while the cover is a colored image. The results of the algorithm are then evaluated against several criteria in order to prove the sufficiency of the algorithm and its effectiveness. Thus, the proposed algorit
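One of the two security levels named above, an RSA digital signature, can be illustrated in textbook form: sign with the private exponent, verify with the public one. The tiny key below (n = 3233) is a standard teaching example and in no way reflects the key sizes or padding a real system, including the paper's, would use.

```python
def rsa_sign(msg_hash, d, n):
    """Textbook RSA signature: s = hash^d mod n (no padding; toy only)."""
    return pow(msg_hash, d, n)

def rsa_verify(msg_hash, sig, e, n):
    """Accept iff sig^e mod n recovers the message hash."""
    return pow(sig, e, n) == msg_hash

# Tiny illustrative key: p=61, q=53, n=3233, phi=3120, e=17, d=2753.
n, e, d = 3233, 17, 2753
sig = rsa_sign(42, d, n)
assert rsa_verify(42, sig, e, n)
```
In the described system, the signed, RSA-protected message would then be embedded among the quantized DCT coefficients before the JPEG is written, so that even a detected payload remains confidential and tamper-evident.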
