Various speech enhancement algorithms (SEAs) have been developed over the last few decades. Each algorithm has its advantages and disadvantages because the speech signal is affected by environmental conditions. Distortion of speech causes the loss of important features, making the signal difficult to understand. SEAs aim to improve the intelligibility and quality of speech that has been degraded by different types of noise. In most applications, quality improvement is highly desirable because it reduces listener fatigue, especially when the listener is exposed to high noise levels for extended periods (e.g., in manufacturing). Because SEAs reduce or suppress background noise to some degree, they are sometimes called noise suppression algorithms. In this research, SEAs based on different speech models (the Laplacian model and the Gaussian model) were implemented using two types of discrete transforms: the Discrete Tchebichef Transform and the Discrete Tchebichef-Krawtchouk Transform. The proposed estimator consists of dual stages of a Wiener filter that can effectively estimate the clean speech signal. The evaluation results show the proposed SEA's ability to enhance the noisy speech signal, based on a comparison with other speech models and a self-comparison across different types and levels of noise. The presented algorithm's improvement ratios in average segmental SNR (SNRseg) are 1.96, 2.12, and 2.03 for Buccaneer, White, and Pink noise, respectively.
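As a rough illustration of the single-stage building block behind a dual-stage Wiener estimator, the sketch below applies a frequency-domain Wiener gain to one analysis frame. The function names and the spectral-subtraction SNR estimate are assumptions for illustration only; the paper's actual estimator operates in the Tchebichef and Tchebichef-Krawtchouk transform domains, not the Fourier domain used here.

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=1e-3):
    """Classic Wiener gain G = SNR / (1 + SNR), with the a-priori SNR
    estimated here by simple spectral subtraction (illustrative only)."""
    snr = np.maximum(noisy_psd - noise_psd, 0.0) / (noise_psd + 1e-12)
    gain = snr / (1.0 + snr)
    return np.maximum(gain, floor)  # floor avoids musical-noise artifacts

def enhance_frame(noisy_frame, noise_psd):
    """Apply the Wiener gain in the frequency domain to one frame."""
    spectrum = np.fft.rfft(noisy_frame)
    gain = wiener_gain(np.abs(spectrum) ** 2, noise_psd)
    return np.fft.irfft(gain * spectrum, n=len(noisy_frame))
```

In a dual-stage arrangement, the output of a first pass such as this would feed a second gain computation with a refined noise estimate; the exact cascading used in the paper is not reproduced here.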
Dual-species biofilms of Pseudomonas aeruginosa and Staphylococcus aureus cause difficult-to-treat infections. Nutritional stress in biofilms affects physiology, microbial metabolism, and interspecies interactions, impacting bacterial growth and survival. Furthermore, the role of alginate, encoded by the algD gene, in biofilm production has been established. The present study aimed to investigate the impact of starvation on algD gene expression in single-species biofilms of P. aeruginosa and in dual-species biofilms of P. aeruginosa and S. aureus from hospital sewage. A total of six P. aeruginosa and six S. aureus isolates were obtained from the microbiology laboratory at the Department of Biology, College of Science, Universit
The primary purpose of this paper is to introduce the N-coprobabilistic normed space and the coprobabilistic dual space of an N-coprobabilistic normed space, and to give some facts related to them.
The majority of DDoS attacks use the TCP protocol, and the TCP SYN flooding attack is the most common among them. The SYN Cookie mechanism is used to defend against the TCP SYN flooding attack. It is an effective defense, but it has the disadvantage of high computational cost, and it does not differentiate spoofed packets from legitimate packets. Therefore, filtering spoofed packets can effectively enhance the SYN Cookie's operation. Hop Count Filtering (HCF) is another mechanism, used on the server side, to filter spoofed packets. Its drawback is that it is not a complete and final solution against the TCP SYN flooding attack. An enhanced mechanism integrating the SYN Cookie with the Hop Count Filtering (HCF) mech
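The core idea of Hop Count Filtering can be sketched as follows: infer how many hops a packet travelled from its remaining TTL, then compare against a table of hop counts learned from prior legitimate traffic. The function names and the IP-to-hop-count table are hypothetical, and the initial-TTL list reflects common operating-system defaults rather than anything specified in this abstract.

```python
# Initial TTL values chosen by most operating systems (common defaults).
COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def infer_hop_count(observed_ttl):
    """Infer hops travelled: the smallest common initial TTL that is
    >= the observed TTL, minus the observed TTL."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def is_spoofed(src_ip, observed_ttl, ip_to_hops):
    """Flag a packet whose inferred hop count disagrees with the stored
    IP-to-hop-count mapping built from previously observed legitimate traffic."""
    expected = ip_to_hops.get(src_ip)
    if expected is None:
        return False  # unknown source: no baseline, so accept the packet
    return infer_hop_count(observed_ttl) != expected
```

A spoofer can forge the source IP but usually does not know the hop count from the claimed source to the victim, which is why a TTL-derived hop count mismatch is a useful spoofing signal.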
Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. As dependence on computers and computer networks grows, the need for user authentication has increased. A user's claimed identity can be verified by one of several methods. One of the most popular of these methods is based on something the user knows, such as a password or Personal Identification Number (PIN). Biometrics is the science and technology of authentication by identifying an individual's physiological or behavioral attributes. Keystroke authentication is a behavioral access control method that identifies legitimate users by their typing behavior. The objective of this paper is to provide user
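Keystroke dynamics systems typically build a timing profile from dwell times (how long a key is held) and flight times (the gap between releasing one key and pressing the next). The sketch below shows one simple way such features could be extracted and compared; the function names and the mean-absolute-deviation score are illustrative assumptions, not the method of this paper.

```python
def keystroke_features(events):
    """Compute dwell and flight times from (key, press_ms, release_ms) tuples.
    Dwell = release - press for each key; flight = next press - this release."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

def timing_distance(sample, template):
    """Mean absolute deviation between two equal-length timing vectors;
    a deliberately simple matching score for illustration."""
    return sum(abs(a - b) for a, b in zip(sample, template)) / len(sample)
```

In a real system the template would be averaged over many enrollment sessions and the threshold tuned to balance false accepts against false rejects.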
Iris research is focused on developing techniques for identifying and locating relevant biometric features, achieving accurate segmentation and efficient computation, and lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces the effectiveness of the system as a real-time system. This paper introduces a novel parameterized technique for iris segmentation. The method consists of a number of steps, starting with converting the grayscale eye image to a bit-plane representation, followed by selection of the most significant bit planes and a parameterization of the iris location, resulting in an accurate segmentation of the iris from the origin
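The first two steps the abstract mentions (bit-plane representation and selection of the most significant planes) can be sketched as below. The decomposition itself is standard 8-bit bit-plane slicing; the choice of keeping the top k planes is an illustrative assumption, since the abstract does not state how many planes the method selects.

```python
import numpy as np

def bit_planes(gray):
    """Decompose an 8-bit grayscale image into its 8 binary bit planes;
    index 0 is the least significant bit, index 7 the most significant."""
    return [((gray >> b) & 1).astype(np.uint8) for b in range(8)]

def significant_planes(gray, k=3):
    """Keep only the k most significant bit planes (illustrative choice:
    high-order planes carry most of the coarse image structure)."""
    return bit_planes(gray)[8 - k:]
```

Summing the planes back with their bit weights reconstructs the original image exactly, which is why the representation is lossless until planes are discarded.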
Today, with the increasing use of social media, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured, and messy texts, which makes finding topics in them challenging. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from large documents such as articles and books, and they are often less effective when applied to short-text content like Twitter. Fortunately, Twitter has many features that represent the interaction between users, and tweets carry rich user-generated hashtags as keywords. In this paper, we exploit the hashtag feature to improve the topics learned
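One common way to exploit hashtags for short-text topic modeling is to pool all tweets sharing a hashtag into a single pseudo-document, so LDA or LSA sees enough word co-occurrence statistics. Whether this paper uses pooling or another hashtag-based scheme is not stated in the truncated abstract; the sketch below, with its hypothetical function name, only illustrates the pooling idea.

```python
import re
from collections import defaultdict

HASHTAG = re.compile(r"#(\w+)")

def pool_by_hashtag(tweets):
    """Aggregate short tweets into longer pseudo-documents, one per hashtag
    (case-insensitive), ready to be fed to a topic model such as LDA."""
    pools = defaultdict(list)
    for tweet in tweets:
        for tag in HASHTAG.findall(tweet.lower()):
            pools[tag].append(tweet)
    return {tag: " ".join(texts) for tag, texts in pools.items()}
```

A tweet with several hashtags contributes to several pools, which mirrors the intuition that hashtags act as author-supplied topic labels.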