A spectrophotometric method is proposed for the determination of two phenol-containing drugs, phenylephrine hydrochloride (PHP) and salbutamol sulphate (SLB), in pharmaceutical dosage forms. The method is based on the diazotization of metoclopramide hydrochloride (MCP) and coupling of the diazotized reagent with the drugs in alkaline medium to give an intense orange-colored product (λmax at 470 nm for both PHP and SLB). Variable parameters such as temperature, reaction time, and reactant concentrations were analyzed and optimized. Under the optimum conditions, Beer's law was obeyed in the concentration ranges of 1-32 and 1-14 μg mL-1 for PHP and SLB, respectively. The limits of detection (LOD) were 0.60 and 0.52 μg mL-1, and the limits of quantification (LOQ) were 2.02 and 1.72 μg mL-1, for PHP and SLB respectively. No interference was observed from common excipients present in pharmaceutical preparations. The good correlation coefficients and low relative standard deviations support the applicability of the method. The method was further applied to the determination of the drugs in commercial pharmaceutical preparations and compared statistically with reference methods by means of the t-test and F-test; the results did not differ significantly at the 95% confidence level. The procedure is characterized by its simplicity, accuracy, and precision.
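The abstract does not reproduce the LOD/LOQ formulas; a minimal sketch follows, assuming the standard ICH definitions LOD = 3.3σ/S and LOQ = 10σ/S (σ = standard error of the calibration regression, S = calibration slope) and hypothetical calibration data rather than the paper's own:

```python
import numpy as np

# Hypothetical calibration points: concentration (ug/mL) vs. absorbance at 470 nm.
conc = np.array([1, 4, 8, 16, 24, 32], dtype=float)
absorbance = np.array([0.031, 0.122, 0.248, 0.501, 0.748, 0.995])

# Least-squares calibration line A = S*c + b (Beer's law region).
S, b = np.polyfit(conc, absorbance, 1)
sigma = (absorbance - (S * conc + b)).std(ddof=2)  # regression std. error

lod = 3.3 * sigma / S   # ICH limit of detection
loq = 10.0 * sigma / S  # ICH limit of quantification
r = np.corrcoef(conc, absorbance)[0, 1]
print(f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL, r = {r:.4f}")
```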
The study examines the relationship between imported inflation and international trade in the Iraqi economy over the period 1990-2015, using annual data. To achieve this aim, statistical and econometric methods are applied through a NARDL model, a model designed to measure non-linear relations (most economic relationships are non-linear), which also separates the positive and negative effects of imported inflation. A deductive approach was adopted, using the descriptive method to describe and delimit the phenomenon, alongside an inductive approach using statistical and econometric tools to obtain a model that explains the
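NARDL captures asymmetry by splitting a regressor into partial sums of its positive and negative changes before estimation. A minimal sketch of that decomposition, using a short hypothetical series in place of the study's 1990-2015 data (which is not reproduced here):

```python
import numpy as np

# Hypothetical imported-inflation series (the study's actual data is not shown).
x = np.array([4.2, 5.1, 3.8, 6.0, 7.5, 6.9, 8.1, 7.2])

dx = np.diff(x, prepend=x[0])          # first differences (first change set to 0)
x_pos = np.cumsum(np.maximum(dx, 0))   # partial sum of positive changes
x_neg = np.cumsum(np.minimum(dx, 0))   # partial sum of negative changes

# In a NARDL regression, x_pos and x_neg enter as separate regressors so that
# positive and negative shocks can carry different long-run coefficients.
print(x_pos)
print(x_neg)
```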
In recent years, researchers have shown increased interest in determining the optimum sample size needed to obtain sufficient accuracy in estimation and high-precision parameters, in order to evaluate a large number of tests in the field of diagnosis at the same time. In this research, two methods were used to determine the optimum sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is then estimated, at the sample size given by each method, in high-dimensional data using an artificial neural network (ANN), as it gives a high-precision estimate commensurate with the data
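As an illustration of the first method, a minimal sketch of a sample-size calculation from one common two-sided form of Bennett's inequality, P(|X̄ − μ| ≥ t) ≤ 2·exp(−n(σ²/a²)h(at/σ²)) with h(u) = (1+u)ln(1+u) − u, for variables bounded by a with variance σ²; the numbers below are hypothetical, not from the paper:

```python
import math

def bennett_sample_size(sigma2, a, t, delta):
    """Smallest n for which the two-sided Bennett bound
    2*exp(-n * (sigma2/a**2) * h(a*t/sigma2)) <= delta,
    where h(u) = (1+u)*log(1+u) - u, for i.i.d. variables
    bounded by a with variance sigma2 and mean deviation t."""
    u = a * t / sigma2
    h = (1 + u) * math.log(1 + u) - u
    return math.ceil(math.log(2 / delta) / ((sigma2 / a**2) * h))

# Hypothetical inputs: variance 0.25, bound 1, tolerance 0.1, 95% confidence.
print(bennett_sample_size(sigma2=0.25, a=1.0, t=0.1, delta=0.05))
```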
This study was conducted at the Poultry Research Station of the Agricultural Research Department, Ministry of Agriculture, in Abu Ghraib, for the period from 25/2/2019 to 7/4/2019 (42 days), with the aim of using several levels of Spirulina (SP)
The study aims to test the effect of using an appropriate quantitative demand-forecasting method on improving the performance of the supply chain of the aviation fuel product (the study sample), one of the products of the Doura refinery (the study site), by testing a set of quantitative demand-forecasting methods using forecasting error measurements, choosing the least erroneous, most accurate, and most reliable method, and adopting it in building the chain.
The study problem is approached starting with the following
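The abstract names forecasting error measurements without listing them; a minimal sketch of metrics commonly used for such comparisons (MAE, RMSE, MAPE), with hypothetical demand figures standing in for the aviation-fuel data:

```python
import numpy as np

def forecast_errors(actual, forecast):
    """Common forecast-error measures used to rank competing methods."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    e = actual - forecast
    return {
        "MAE": np.mean(np.abs(e)),            # mean absolute error
        "RMSE": np.sqrt(np.mean(e**2)),       # root mean squared error
        "MAPE": 100 * np.mean(np.abs(e / actual)),  # mean abs. % error
    }

# Hypothetical monthly demand vs. one candidate method's forecast.
print(forecast_errors([120, 135, 128, 140], [118, 139, 125, 144]))
```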
ECG is an important tool for the primary diagnosis of heart diseases, as it shows the electrophysiology of the heart. In our method, a single maternal abdominal ECG signal is taken as the input signal, and the maternal P-QRS-T complexes of the original signal are averaged and repeated to form a reference signal. LMS and RLS adaptive filtering algorithms are then applied. The results showed that the fetal ECGs were successfully detected. The accuracy on the Daisy database was up to 84% for LMS and 88% for RLS, while on PhysioNet it was up to 98% and 96% for LMS and RLS, respectively.
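A minimal sketch of the LMS half of this setup: the averaged maternal complexes serve as the reference, the abdominal recording as the primary signal, and the adaptive filter's residual approximates the fetal ECG. The filter order and step size here are illustrative, not the paper's values:

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.01):
    """LMS noise canceller: FIR weights adapt so the filtered reference
    tracks the maternal component of the primary (abdominal) signal;
    the residual error then approximates the fetal ECG."""
    w = np.zeros(order)
    fetal = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]  # most recent reference samples first
        e = primary[n] - w @ x            # residual ~ fetal component
        w += 2 * mu * e * x               # LMS weight update
        fetal[n] = e
    return fetal

# Toy demo: maternal-like sinusoid plus a weaker, faster fetal-like component.
t = np.arange(2000) / 500.0
maternal = np.sin(2 * np.pi * 1.2 * t)
recovered = lms_cancel(maternal + 0.2 * np.sin(2 * np.pi * 2.3 * t), maternal)
```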
Content-based image retrieval (CBIR) is a technique used to retrieve images from an image database. However, the CBIR process suffers from low accuracy when retrieving images from an extensive image database, and from weak privacy guarantees for the images. This paper addresses the accuracy issue using deep learning, namely a convolutional neural network (CNN), and provides the necessary privacy for images using the fully homomorphic encryption method of Cheon, Kim, Kim, and Song (CKKS). To achieve these aims, a system named RCNN_CKKS is proposed, comprising two parts. The first part (offline processing) extracts automated high-level features from the flattening layer of a convolutional neural network (CNN) and then stores these features in a
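A minimal sketch of the offline part under two assumptions not stated in the excerpt: TensorFlow/Keras for the CNN (with a small illustrative architecture) and the TenSEAL library for CKKS encryption of the flatten-layer features:

```python
import numpy as np
import tensorflow as tf
import tenseal as ts  # assumed CKKS library; not named in the excerpt

# Small illustrative CNN; the flatten layer's output is the feature vector.
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(name="flatten"),
])
extractor = tf.keras.Model(cnn.input, cnn.get_layer("flatten").output)
features = extractor(np.random.rand(1, 32, 32, 3).astype("float32")).numpy()[0]

# Encrypt the feature vector with CKKS so retrieval can be computed
# without the server ever seeing the plaintext features.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()
enc_features = ts.ckks_vector(ctx, features.tolist())
```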
Data mining plays a major role in healthcare for discovering hidden relationships in big datasets, especially in breast cancer diagnostics; breast cancer is one of the most common causes of death in the world. In this paper, two algorithms, decision tree and K-Nearest Neighbour, are applied to diagnose breast cancer grade in order to reduce its risk to patients. In the decision tree with feature selection, the Gini index gives an accuracy of 87.83%, while entropy gives an accuracy of 86.77%. In both cases, Age appeared as the most effective parameter, particularly when Age < 49.5, whereas Ki67 appeared as the second most effective parameter. Furthermore, K-Nearest Neighbour is based on the minimum
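A minimal sketch of the decision-tree comparison using scikit-learn's gini and entropy criteria; the two features mirror the parameters the paper highlights (Age, Ki67), but the data and labels below are synthetic:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; values and grade labels are invented.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(25, 80, 200),   # Age (years)
                     rng.uniform(0, 60, 200)])   # Ki67 index (%)
y = ((X[:, 0] < 49.5) | (X[:, 1] > 30)).astype(int)  # synthetic grade label

# Fit one tree per split criterion and compare training accuracy.
for criterion in ("gini", "entropy"):
    tree = DecisionTreeClassifier(criterion=criterion, max_depth=3,
                                  random_state=0)
    print(criterion, tree.fit(X, y).score(X, y))
```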
In digital images, protecting sensitive visual information against unauthorized access is a critical issue, and robust encryption methods are the best way to preserve such information. This paper introduces a model designed to enhance the performance of the Tiny Encryption Algorithm (TEA) for encrypting images. Two approaches are suggested for the image ciphering process as a preprocessing step before applying TEA; this step aims to de-correlate and weaken adjacent pixel values in preparation for encryption. The first approach applies an Affine transformation to the image at two layers, utilizing two different key sets, one for each layer. The
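For reference, a minimal sketch of the standard 32-round TEA block cipher the model builds on (the paper's preprocessing layers are separate and not shown here); the test block and key are arbitrary:

```python
def tea_encrypt(v, k):
    """Tiny Encryption Algorithm: encrypts one 64-bit block (two 32-bit
    words v0, v1) under a 128-bit key (four 32-bit words k0..k3)."""
    v0, v1 = v
    delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(32):                     # 32 cycles = 64 Feistel rounds
        s = (s + delta) & mask
        v0 = (v0 + (((v1 << 4) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & mask
        v1 = (v1 + (((v0 << 4) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & mask
    return v0, v1

# Arbitrary test block and key (hexadecimal 32-bit words).
print(tea_encrypt((0x01234567, 0x89ABCDEF),
                  (0xDEADBEEF, 0xCAFEBABE, 0x0BADF00D, 0xFEEDFACE)))
```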
Text-based image clustering (TBIC) is an insufficient approach for clustering related web images; it is a challenging task to abstract the visual features of images with the support of textual information in a database. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, color, boundaries, and shape. In this paper, an effective CBIC technique is presented, which uses texture and statistical features of the images. The statistical features, or color moments (mean, skewness, standard deviation, kurtosis, and variance), are extracted from the images. These features are collected in a one-dimensional array, and then a genetic algorithm (GA) is applied for image clustering.
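A minimal sketch of the moment-extraction step: each color channel is flattened and its five moments are concatenated into the one-dimensional feature vector that the GA then clusters. The image below is random, standing in for a real database image:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def color_moments(image):
    """Flatten each color channel and compute the five moments the paper
    lists (mean, skewness, std. deviation, kurtosis, variance), then
    concatenate them into a single 1-D feature vector."""
    feats = []
    for c in range(image.shape[2]):
        ch = image[:, :, c].ravel().astype(float)
        feats += [ch.mean(), skew(ch), ch.std(), kurtosis(ch), ch.var()]
    return np.array(feats)

# Hypothetical 8x8 RGB image in place of a database image.
print(color_moments(np.random.rand(8, 8, 3)))
```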