User confidentiality protection is a topic of growing concern in surveillance and monitoring settings. In images, securing users' faces is especially significant when sensitive information, abuse scenarios, sharing over global transmission media, and real-world exposure are involved. To reduce the computational load of processing large volumes of image data, and to shorten the time needed to process an image, partial encryption of the user's face is adopted. This study presents a technique designed to partially encrypt the user's face. First, dlib is used for face detection. SUSAN, one of the best edge detectors, with valuable localization of marked edges, is used to extract …
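As a rough illustration of the partial-encryption idea, the sketch below XOR-encrypts only a rectangular face region of a grayscale image with a key-seeded keystream. The bounding box stands in for a detector result (the abstract names dlib for detection); the function names, the box, and the XOR scheme are illustrative assumptions, not the paper's actual method.

```python
import random

def xor_keystream(length, key):
    # Reproducible pseudo-random keystream (illustrative only; not cryptographically strong).
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(length))

def encrypt_face_region(image, box, key):
    """XOR-encrypt only the pixels inside box = (top, left, height, width).

    image is a list of rows of 0-255 grayscale values; box stands in for a
    face bounding box such as one a dlib detector would return. XOR is its
    own inverse, so applying this function twice restores the original image.
    """
    top, left, h, w = box
    out = [row[:] for row in image]          # leave pixels outside the box untouched
    ks = xor_keystream(h * w, key)
    i = 0
    for r in range(top, top + h):
        for c in range(left, left + w):
            out[r][c] ^= ks[i]
            i += 1
    return out
```

Because only the face region is transformed, the cost scales with the box size rather than the full image, which is the motivation the abstract gives for partial encryption.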
Background: Hyperlipidemia is an elevated level of fats (lipids), mostly cholesterol and triglycerides, in the blood. These lipids usually bind to proteins, forming so-called lipoproteins, to remain in circulation. Aims of the study: To determine the taste detection threshold, to estimate the trace element zinc in the serum and saliva of these patients, and to compare all of these with healthy control subjects. Methods: Eighty subjects were included in this study; they were divided into two groups: forty patients on simvastatin treatment aged between 35 and 60 years, and forty healthy controls in the same age range. Saliva was collected by a non-stimulated technique within 10 minutes. Serum was obtained from each subject. Zinc was estimated in serum and saliva …
Evolutionary algorithms are better than heuristic algorithms at finding protein complexes in protein-protein interaction networks (PPINs). Many of these algorithms rely on standard frameworks that are based on topology, and many have been examined exclusively on networks with only reliable interaction data. The main objective of this paper is to extend the design of the canonical, topology-based evolutionary algorithms suggested in the literature to cope with noisy PPINs. The design of the evolutionary algorithm is extended based on the functional domain of the proteins rather than on the topological domain of the PPIN. The gene ontology annotation in each molecular function, biological process …
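The functional-domain idea can be illustrated with a toy fitness function: instead of scoring a candidate complex by edge density in the (possibly noisy) PPIN, score it by the mean pairwise overlap of the member proteins' GO annotations. The scoring rule (mean pairwise Jaccard similarity) and the sample data below are assumptions for illustration, not the paper's formulation.

```python
from itertools import combinations

def functional_cohesion(complex_proteins, go_annotations):
    """Mean pairwise Jaccard similarity of GO term sets across a candidate complex.

    complex_proteins: list of protein IDs in the candidate complex.
    go_annotations: dict mapping protein ID -> set of GO term IDs.
    Returns a value in [0, 1]; higher means the proteins share more annotations.
    """
    pairs = list(combinations(complex_proteins, 2))
    if not pairs:
        return 0.0  # a singleton has no pairwise similarity to measure
    total = 0.0
    for a, b in pairs:
        ta = go_annotations.get(a, set())
        tb = go_annotations.get(b, set())
        union = ta | tb
        total += len(ta & tb) / len(union) if union else 0.0
    return total / len(pairs)
```

A fitness of this shape is robust to spurious or missing edges, since it never consults the interaction topology at all.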
Background: The number of clinical applications of computed tomography has increased significantly in the oral and maxillofacial field, supplying dentists with sufficient data to play a main role in screening for osteoporosis; therefore, Hounsfield units from mandibular computed tomography views were used as a main indicator to predict general skeletal osteoporosis and fracture risk. Materials and Methods: Thirty subjects (7 males and 23 females) with a mean age of 60.1 years underwent computed tomographic scanning for different diagnostic assessments in the head and neck region. Their mandibular bone quality was determined through Hounsfield units of CT scan images and was correlated with the bone mineral density …
A nonlinear filter for smoothing color and gray images corrupted by Gaussian noise is presented in this paper. The proposed filter is designed to reduce the noise in the R, G, and B bands of color images while preserving the edges. The filter is applied in order to prepare images for further processing such as edge detection and image segmentation. The results of computer simulations show that the proposed filter gives satisfactory results when compared with conventional filters such as the Gaussian low-pass filter and the median filter, using the cross-correlation coefficient (CCC) criterion.
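The comparison criterion mentioned above can be sketched as the normalized cross-correlation between two equal-size images. The abstract does not give the exact formula, so the Pearson-style normalization below is an assumption; a value near 1.0 indicates the filtered image closely matches the reference.

```python
import math

def cross_correlation_coefficient(a, b):
    """Normalized cross-correlation between two equal-length flattened
    grayscale images (lists of pixel values). Returns a value in [-1, 1];
    1.0 means the images agree up to an affine intensity change.
    Assumes neither image is constant (non-zero variance)."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)
```

In a comparison like the one described, the filter whose output scores a higher coefficient against the clean reference image is judged the better denoiser.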
The meniscus has a crucial function in human anatomy, and Magnetic Resonance Imaging (M.R.I.) plays an essential role in meniscus assessment. It is difficult to identify cartilage lesions using typical image-processing approaches because M.R.I. data are so diverse. An M.R.I. data sequence comprises numerous images, and the attribute area we are searching for may differ from one image in the series to the next. Feature extraction therefore becomes more complicated, and traditional image processing in particular becomes very complex. In traditional image processing, a human tells a computer what should be there, whereas a deep learning (D.L.) algorithm automatically extracts the features of what is already there. The surface changes become valuable when …