Background: Expectoration of blood originating in the lungs or bronchial tubes is a frightening symptom for patients and is often a manifestation of significant, possibly dangerous underlying disease. Tuberculosis was, and still is, one of the most common causes, followed by bronchiectasis, bronchitis, and lung cancer. Objectives: The aim of this study is to determine the frequency of the causes of respiratory tract bleeding in 100 patients attending Al-Kindy Teaching Hospital. Type of the study: Prospective descriptive observational study. Methods: One hundred consecutive adult patients with lower respiratory tract bleeding were studied. History, physical examination, and a group of selected investigations were performed, including complete blood examination and blood film, PT, PTT, sputum direct Gram and AFB stains, cytology, chest radiography, CT scan, and bronchoscopy when indicated. Results: Pulmonary tuberculosis, acute bronchitis, lung carcinoma, and bronchiectasis were the major causes of hemoptysis in our study, at 27%, 23%, 23%, and 20% respectively. Of the included patients, 63% were male, especially in the 41-60 age group, while 37% were female. Primary malignancy was more common than secondary cancer, with squamous cell carcinoma and adenocarcinoma the most common types. Conclusions: Tuberculosis is the main cause of lower respiratory tract bleeding, followed by lung carcinoma, bronchitis, and bronchiectasis. Most of the patients were middle-aged males, severe bleeding was uncommon, and among malignancies squamous cell carcinoma was the most common cause, followed by adenocarcinoma.
The classification of imbalanced data is an important issue. Many algorithms have been developed for classification, such as Back Propagation (BP) neural networks, decision trees, and Bayesian networks, and they have been used repeatedly in many fields. These algorithms suffer from the problem of imbalanced data, where some classes contain far more instances than others. Imbalanced data results in poor performance and a bias toward one class at the expense of the others. In this paper, we propose three techniques based on the Over-Sampling (O.S.) technique for processing an imbalanced dataset, redistributing it and converting it into a balanced dataset. These techniques are (Improved Synthetic Minority Over-Sampling Technique (Improved SMOTE), Border
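The abstract above is truncated before detailing the proposed variants, so the following is only a minimal sketch of the classic SMOTE idea they build on (not the paper's improved versions): each synthetic minority sample is an interpolation between a minority instance and one of its k nearest minority-class neighbours. The function name and parameters are illustrative.

```python
import numpy as np

def smote(X_min, n_synthetic, k=5, rng=None):
    """Generate synthetic minority-class samples by interpolating between
    each minority sample and one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    k = min(k, n - 1)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    # indices of the k nearest neighbours (column 0 is the sample itself)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(n)                 # pick a random minority sample
        j = nn[i, rng.integers(k)]          # pick one of its neighbours
        gap = rng.random()                  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)
```

To balance a dataset, the majority class is left untouched and `smote(X_min, len(X_maj) - len(X_min))` is stacked onto the minority class.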
In the field of data security, the critical challenge of preserving sensitive information during its transmission through public channels takes centre stage. Steganography, a method of concealing data within various carrier objects such as text, can be proposed to address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganographic ventures. Arabic text steganography harn
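The abstract above is cut off before describing its specific method, so the sketch below shows only one common generic approach to text steganography (not necessarily the paper's): hiding message bits as zero-width Unicode characters inserted after the letters of a cover text. All names here are illustrative.

```python
# zero-width space encodes bit 0, zero-width non-joiner encodes bit 1
ZW0, ZW1 = "\u200b", "\u200c"

def embed(cover: str, message: bytes) -> str:
    """Hide message bits as invisible zero-width characters after each letter."""
    bits = "".join(f"{b:08b}" for b in message)
    out, i = [], 0
    for ch in cover:
        out.append(ch)
        if i < len(bits) and not ch.isspace():
            out.append(ZW0 if bits[i] == "0" else ZW1)
            i += 1
    if i < len(bits):
        raise ValueError("cover text too short for this message")
    return "".join(out)

def extract(stego: str) -> bytes:
    """Recover the hidden bytes by reading back the zero-width characters."""
    bits = "".join("0" if c == ZW0 else "1" for c in stego if c in (ZW0, ZW1))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))
```

The stego text renders identically to the cover text, since zero-width characters are invisible; removing them restores the original cover exactly.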
Blockchain has garnered attention as the most important new technology supporting recent digital transactions via e-government. The most critical challenge for public e-government systems is reducing bureaucracy and increasing the efficiency and performance of administrative processes, since blockchain technology can operate in a decentralized environment and execute transactions with a high level of security and transparency. The main objectives of this work are therefore to survey different proposed models for e-government system architecture based on blockchain technology and to examine how these models are validated. This work studies and analyzes some research trends focused on blockchain
Background/Objectives: The purpose of this study was to classify Alzheimer's disease (AD) patients from Normal Control (NC) subjects using Magnetic Resonance Imaging (MRI). Methods/Statistical analysis: The performance evaluation is carried out on 346 MR images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. A Deep Belief Network (DBN) classifier is used for classification. The network is trained on a sample training set, and the resulting weights are then used to check the system's recognition capability. Findings: This paper presents a novel automated classification system for AD determination. The suggested method offers good performance; the experiments carried out show that the
Free-Space Optics (FSO) can provide high-speed communications when the effect of turbulence is not serious, and Space-Time Block Coding (STBC) is a good candidate for mitigating this seriousness. This paper proposes a hybrid of Optical Code Division Multiple Access (OCDMA) and STBC in FSO communication for last-mile solutions, where access to remote areas is complicated. The main weakness affecting an FSO link is atmospheric turbulence, and STBC is employed in OCDMA to mitigate its effects. The current work evaluates the Bit-Error-Rate (BER) performance of OCDMA operating under the scintillation effect, which can be described by the gamma-gamma model. The most obvious finding to emerge from the analysis
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy, and a Support Vector Machine (SVM) classifies the reduced moments between two classes. The proposed method's performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method surpasses the accuracy of the other methods, with best accuracies of 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it
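The pipeline above (orthogonal-polynomial moments, feature reduction, SVM) can be sketched as follows. This is a simplified stand-in, not the paper's implementation: Legendre polynomials serve as the OP basis, a crude variance-based selection stands in for the sparse filter, and the SVM is a minimal linear hinge-loss model trained by subgradient descent. All function names and parameters are illustrative.

```python
import numpy as np

def legendre_moments(signal, n_moments):
    """Project a 1-D signal onto Legendre polynomials over [-1, 1]
    (a stand-in orthogonal-polynomial basis)."""
    t = np.linspace(-1, 1, len(signal))
    V = np.polynomial.legendre.legvander(t, n_moments - 1)  # (len, n_moments)
    return V.T @ signal / len(signal)

def select_moments(M, k):
    """Keep the k highest-variance moments (a crude stand-in for the sparse filter)."""
    idx = np.argsort(M.var(axis=0))[::-1][:k]
    return M[:, idx], idx

def train_linear_svm(X, y, lam=1e-2, epochs=500, lr=0.1):
    """Minimal linear SVM via subgradient descent on the hinge loss; y in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        mask = y * (X @ w + b) < 1                      # margin violators
        gw = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / len(y)
        gb = -y[mask].sum() / len(y)
        w -= lr * gw
        b -= lr * gb
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)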
Fingerprints are the most widely used biometric feature for person identification and verification. A fingerprint is easy to acquire and interpret compared to other existing biometric types such as voice or face, and it can yield a very high recognition rate for human recognition. In this paper, a geometric rotation transform is applied to the fingerprint image to obtain a new level of features that represent the finger's characteristics and are used for personal identification; local features are used for their ability to reflect the statistical behavior of fingerprint variation within the fingerprint image. The proposed fingerprint system contains three main stages: (i) preprocessing, (ii) feature extraction, and (iii) matching. The preprocessi
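The abstract is truncated before the stages are detailed, so the following is only a generic illustration of the idea it names: applying geometric rotations to the image and collecting local (block-wise) statistics as a feature vector, with a simple similarity score for the matching stage. The angles, grid size, and cosine matcher are assumptions, not the paper's design.

```python
import numpy as np
from scipy import ndimage

def rotation_features(img, angles=(0, 90, 180, 270), grid=4):
    """For each rotation of the image, compute block-wise mean and variance
    over a grid x grid partition and concatenate them into one feature vector."""
    feats = []
    for a in angles:
        r = ndimage.rotate(img, a, reshape=False, order=1)  # bilinear rotation
        for rows in np.array_split(r, grid, axis=0):
            for block in np.array_split(rows, grid, axis=1):
                feats.extend([block.mean(), block.var()])   # local statistics
    return np.asarray(feats)

def match(f1, f2):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))
```

Matching then reduces to comparing the similarity score against an acceptance threshold.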