The rapid development of telemedicine services and the requirements for exchanging medical information between physicians, consultants, and health institutions have made the protection of patients’ information an important priority for any future e-health system. The protection of medical information, including the cover (i.e. the medical image), has requirements that differ slightly from those for protecting other kinds of information. The cover must be preserved with high fidelity because of its importance on the receiving side, where medical staff use it to provide a diagnosis that may save a patient's life. If the cover is tampered with, the goal of telemedicine is not achieved. Therefore, this work investigates information security techniques in medical imaging, focusing on security goals. Encrypting a message before hiding it adds an extra layer of security and thus provides an excellent way to protect patients' sensitive information while medical information is shared. Medical image steganography is a special case of image steganography, while Digital Imaging and Communications in Medicine (DICOM) is the backbone of all medical imaging departments and the standard most broadly used to store and transmit medical images. The main objective of this study is to provide a general idea of what Least Significant Bit (LSB)-based steganography techniques have achieved in medical images.
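As a minimal, hedged illustration of spatial-domain LSB embedding (a generic sketch, not any specific scheme from the surveyed literature; the helper names `embed_lsb` and `extract_lsb` are hypothetical), an already-encrypted byte string can be written into the least significant bit plane of a grayscale cover image as follows:

```python
import numpy as np

def embed_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Embed payload bits into the least significant bit of each pixel.

    cover   : 2-D uint8 array (e.g. a grayscale medical image slice)
    payload : encrypted message bytes; must fit in cover.size bits
    """
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for this cover image")
    stego = flat.copy()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits  # overwrite LSBs only
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    """Read back n_bytes from the LSB plane of the stego image."""
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Usage: hide 16 bytes of (already encrypted) data in a dummy 64x64 cover.
cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
secret = b"ciphertext-16-by"
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
```

Because only the lowest bit of each pixel changes, the cover remains visually almost unchanged; practical medical-image schemes typically also keep the embedding away from the diagnostically important region of interest.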
Cryptography is a method used to mask text using an encryption method, so that only the authorized user can decrypt and read the message. An intruder may try to attack the communication channel in many ways, such as impersonation, repudiation, denial of service, modification of data, threatening confidentiality, and breaking the availability of services. The high volume of electronic communication between people requires ensuring that transactions remain confidential, and cryptographic methods give the best solution to this problem. This paper proposes a new cryptography method based on Arabic words; the method is carried out in two steps, where the first step is binary encoding generation used t…
Arabic text categorization for pattern recognition is challenging. We propose, for the first time, a novel holistic clustering-based method for classifying Arabic writers. The categorization is accomplished stage-wise. Firstly, the document images are segmented into lines, words, and characters. Secondly, structural and statistical features are extracted from the segmented portions. Thirdly, the F-measure is used to evaluate the performance of the extracted features and their combinations under different linkage methods, distance measures, and numbers of groups. Finally, experiments are conducted on the standard KHATT dataset of Arabic handwritten text, comprising varying samples from 1000 writers. The results in the generatio…
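For reference, the F-measure used in the evaluation above is the standard harmonic mean of precision and recall (generic definition, not an equation quoted from the paper):

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F = \frac{2\,P\,R}{P + R}
```

where TP, FP, and FN are the true positives, false positives, and false negatives for a given writer class (or cluster-to-class assignment).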
... Show MoreToday with increase using social media, a lot of researchers have interested in topic extraction from Twitter. Twitter is an unstructured short text and messy that it is critical to find topics from tweets. While topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) are originally designed to derive topics from large documents such as articles, and books. They are often less efficient when applied to short text content like Twitter. Luckily, Twitter has many features that represent the interaction between users. Tweets have rich user-generated hashtags as keywords. In this paper, we exploit the hashtags feature to improve topics learned
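One common way to exploit hashtags for short-text topic modeling is hashtag pooling, where tweets sharing a hashtag are merged into a longer pseudo-document before LDA is fitted. The sketch below illustrates that idea with scikit-learn; it is an illustrative assumption, not necessarily the aggregation scheme used in this paper, and the toy tweets and hashtags are made up.

```python
from collections import defaultdict
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy tweets with user-generated hashtags (illustrative data only).
tweets = [
    "new vaccine trial results look promising #health",
    "hospital waiting times keep growing #health #policy",
    "late goal wins the derby #football",
    "transfer window rumours everywhere #football",
]

# Hashtag pooling: merge tweets that share a hashtag into one pseudo-document,
# so LDA sees longer, more coherent texts than individual tweets.
pools = defaultdict(list)
for t in tweets:
    for tag in (w for w in t.split() if w.startswith("#")):
        pools[tag].append(t)
pseudo_docs = [" ".join(v) for v in pools.values()]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(pseudo_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words of each learned topic.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
```

Pooling gives LDA longer documents than individual tweets, which is the main reason hashtag signals help with short-text topic quality.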
The quality of Global Navigation Satellite System (GNSS) networks is considerably influenced by the configuration of the observed baselines. This study therefore aims to find an optimal configuration for GNSS baselines, in terms of the number and distribution of baselines, that improves the quality criteria of the GNSS network. The first-order design (FOD) problem was applied in this research to optimize the GNSS network baseline configuration, with the sequential adjustment method used to solve its objective functions.
FOD for optimum precision (FOD-p) was the proposed model, based on the design criteria of A-optimality and E-optimality. These design criteria were selected as objective functions of precision, whic…
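For context, the two precision criteria named above are conventionally defined on the cofactor (covariance) matrix Q_xx of the estimated station coordinates (standard definitions, not equations reproduced from this study):

```latex
\text{A-optimality:}\quad \min \operatorname{trace}(Q_{xx}),
\qquad
\text{E-optimality:}\quad \min \lambda_{\max}(Q_{xx})
```

Minimizing the trace reduces the average coordinate variance of the network, while minimizing the largest eigenvalue limits the worst-case variance in any direction, so the two criteria together target both overall and extreme precision.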
The adsorption of malonic acid, succinic acid, adipic acid, and azelaic acid from their aqueous solutions onto a zinc oxide surface was investigated. The adsorption efficiency was studied as a function of several factors, including adsorbent amount, contact time, initial concentration, and temperature. The optimum conditions for removing the acids from their aqueous solutions were found to be an adsorbent dose of 0.2 g, an equilibrium contact time of 40 minutes, and an initial acid concentration of 0.005 M. Studying adsorption efficiency as a function of temperature showed that increasing the temperature decreases the adsorption ability. Kinetic modeling showed that the pseudo-second-order model provides a better fit of the data, with a greater correlation coefficient.
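The pseudo-second-order model mentioned above is conventionally fitted in its linearized form (standard expression, not quoted from this study):

```latex
\frac{t}{q_t} \;=\; \frac{1}{k_2\,q_e^{2}} \;+\; \frac{t}{q_e}
```

where q_t and q_e are the amounts adsorbed (mg/g) at time t and at equilibrium, and k_2 is the pseudo-second-order rate constant; a plot of t/q_t against t is linear when the model holds, and the correlation coefficient of that line is what is compared between kinetic models.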
The current research aims to prepare a proposed programme, based on sensory integration theory, for remediating some developmental learning disabilities among children. The researchers prepared the programme by reviewing studies related to the research topic; it can be practised through several active teaching strategies (cooperative learning, peer learning, role-playing, and educational stories). The final format consists of (39) training sessions.
Lafutidine (LAF), a newly developed histamine H2-receptor antagonist with an absorption window, is a good candidate to be prepared as a floating drug delivery system. The current study involves the formulation and in vitro evaluation of lafutidine as floating microspheres. Different formulation variables that affect the physicochemical properties of the prepared microspheres, in addition to the drug release behavior, were investigated. Fourteen formulas were prepared by the oil-in-water (o/w) emulsion solvent evaporation method using ethyl cellulose (EC) as the polymeric matrix and Tween 80 (TW80) as an emulsifying agent. The prepared formulas were evaluated for their percentage buoyancy (%), percentage yield (%), and entrapment efficiency (EE%). The results obtained…
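The three evaluation measures listed above are conventionally computed as follows (standard definitions from floating-microsphere work, not equations taken from this study):

```latex
\text{Buoyancy}\,(\%) = \frac{\text{mass of floating microspheres}}{\text{total mass of microspheres}} \times 100
\qquad
\text{Yield}\,(\%) = \frac{\text{mass of microspheres recovered}}{\text{mass of drug} + \text{mass of polymer}} \times 100
\qquad
\text{EE}\,(\%) = \frac{\text{actual drug content}}{\text{theoretical drug content}} \times 100
```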
This work presents the use of a laser diode in Fiber Distributed Data Interface (FDDI) networks. FDDI uses optical fiber as the transmission medium, which solves the problems resulting from EMI and noise; in addition, it increases the security of transmission. A network with a ring topology consisting of three computers was designed and implemented. The timed token protocol was used to achieve and control communication over the ring. Non-return-to-zero inverted (NRZI) modulation was carried out as part of the physical (PHY) sublayer. The optical system consists of a laser diode with a wavelength of 820 nm and a maximum output power of 2.5 mW as the source, an optical fiber as the channel, and a positive-intrinsic-negative (PIN) photodiode as the detector.
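As a small illustration of the NRZI line code mentioned above (a generic sketch, not the authors' PHY implementation): in NRZI a logical 1 is signalled by a transition of the output level and a logical 0 by the absence of a transition.

```python
def nrzi_encode(bits, initial_level=0):
    """NRZI: a '1' toggles the line level, a '0' keeps it unchanged."""
    level = initial_level
    out = []
    for b in bits:
        if b == 1:
            level ^= 1          # transition on every 1
        out.append(level)       # 0/1 = low/high optical power
    return out

def nrzi_decode(levels, initial_level=0):
    """Recover bits: a transition between samples means '1', none means '0'."""
    prev = initial_level
    bits = []
    for lv in levels:
        bits.append(1 if lv != prev else 0)
        prev = lv
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
encoded = nrzi_encode(data)
assert nrzi_decode(encoded) == data
```

Because the receiver only looks for transitions, NRZI makes the link insensitive to the absolute optical level, which suits an intensity-modulated laser-diode source.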
In this work, the detection of zinc (Zn) ions, which cause water pollution, is studied using a CSNPs-linker-alkaloids compound prepared by linking alkaloids extracted from the Iraqi Catharanthus roseus plant to chitosan nanoparticles (CSNPs) using maleic anhydride. The compound is characterized by X-ray diffraction (XRD), which shows that it has an orthorhombic structure with a crystallite size in the nanometre range. Zeta potential results show that the CSNPs-linker-alkaloids compound carries a positive charge of 54.4 mV, which means it possesses high stability. Fourier transform infrared (FTIR) spectroscopy shows a new distinct band at 1708.93 cm⁻¹ due to C=O esterification. Scanning electron microscope (SEM) images…
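Crystallite size from XRD line broadening, as referenced above, is commonly estimated with the Scherrer equation (the standard relation; it is an assumption that this, rather than another estimator, was used in the study):

```latex
D = \frac{K\,\lambda}{\beta\,\cos\theta}
```

where D is the crystallite size, K ≈ 0.9 is the shape factor, λ is the X-ray wavelength, β is the full width at half maximum of the diffraction peak (in radians), and θ is the Bragg angle.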