Estimating the standard regression model requires several assumptions to be satisfied, such as linearity. One problem arises when the regression curve is partitioned into two (or more) parts that are then joined at threshold point(s); this situation is regarded as a violation of the linearity assumption of regression. The multiphase regression model has therefore received increasing attention as an alternative approach that describes changes in the behavior of a phenomenon through threshold-point estimation. The maximum likelihood estimator (MLE) has been used for both model and threshold-point estimation. However, the MLE is not resistant to violations such as the presence of outliers or heavy-tailed error distributions. The main goal of this paper is to propose a new hybrid estimator obtained by an ad-hoc algorithm that relies on a data-driven strategy to overcome outliers. A secondary goal is to introduce a new use of an unweighted estimation method called "winsorization", which achieves robustness in regression estimation through a special technique that reduces the effect of outliers. Another specific contribution of this paper is to propose employing a "kernel" function as a new weight (to the best of the researchers' knowledge, for the first time). Moreover, two weighted estimations are based on the robust weight functions named "Cauchy" and "Talworth". Simulations were constructed with contamination levels (0%, 5%, and 10%) combined with sample sizes (n = 40, 100). A real-data application showed the superior performance of the suggested method compared with other methods using the RMSE and R² criteria.
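The winsorization step and the robust weight functions mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the tuning constants are common defaults, and reading "Talworth" as the Talwar-type hard-threshold weight is an assumption.

```python
import numpy as np

def winsorize(r, p=0.1):
    """Clip the lowest and highest p-fraction of values to the p and 1-p quantiles."""
    lo, hi = np.quantile(r, [p, 1 - p])
    return np.clip(r, lo, hi)

def cauchy_weight(r, c=2.385):
    """Cauchy weight function: w(r) = 1 / (1 + (r/c)^2); c=2.385 is a common default."""
    return 1.0 / (1.0 + (r / c) ** 2)

def talwar_weight(r, c=2.795):
    """Hard-rejection (Talwar-type) weight: 1 inside the threshold, 0 outside.
    Interpreting the abstract's "Talworth" as this family is an assumption."""
    return (np.abs(r) <= c).astype(float)

# Demo: residuals with one gross outlier
r = np.array([-1.0, 0.0, 0.5, 1.0, 100.0])
w = winsorize(r, p=0.2)        # the outlier is pulled in toward the 80% quantile
w_c = cauchy_weight(r)         # the outlier receives a near-zero weight
w_t = talwar_weight(r)         # the outlier is rejected entirely (weight 0)
```

In an iteratively reweighted least-squares loop, these weights would be recomputed from scaled residuals at each iteration; the winsorized residuals can instead be used directly in an unweighted fit.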
This research aims to measure and evaluate the efficiency of the directorates of Anbar Municipalities using the Data Envelopment Analysis (DEA) method. The municipality sector is an important one: it is in direct contact with citizens' lives and provides essential services to them. The researcher used a case-study method, with monthly reports as the source of data. The research population is represented by the Directorate of Anbar Municipalities, and the research sample consists of seven municipalities that differ in category and size. The most important conclusion reached by the research i…
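An input-oriented CCR DEA model of the kind used to score such decision-making units can be sketched as a linear program. This is a minimal illustration with made-up toy data (one input, one output, three units), not the study's actual inputs and outputs.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m, n) inputs, Y: (s, n) outputs; columns are DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # decision vector: [theta, lam_1..lam_n]
    A_in = np.hstack([-X[:, [j0]], X])             # X @ lam - theta * x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])      # -Y @ lam <= -y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun                                 # efficiency score theta in (0, 1]

# Toy data: three hypothetical municipalities
X = np.array([[2.0, 4.0, 4.0]])   # e.g. budget consumed
Y = np.array([[2.0, 2.0, 4.0]])   # e.g. services delivered
scores = [dea_ccr_input(X, Y, j) for j in range(3)]
```

The second unit uses twice the input per unit of output of the other two, so it scores 0.5 while the efficient units score 1.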
Information pollution is regarded as a major problem facing journalists working in the editing section, as journalistic material is exposed to such pollution on its way through the editing pyramid. This research is an attempt to define the concept of journalistic information pollution and to identify the causes and sources of this pollution. The research applied the descriptive method to achieve its objectives, and a questionnaire was used to collect data. The findings indicate that journalists are aware of the existence of information pollution in journalism, and that this pollution has identifiable causes and sources.
Two-dimensional numerical simulations are carried out to study the elements of observing a Dirac point source and a Dirac binary system. The essential features of the simulation are demonstrated in terms of the point spread function and the modulation transfer function. Two mathematical equations have been derived: the first relates the radius of the optical telescope to the distance between the central frequency and the cut-off frequency of the telescope; the second relates the radius of the optical telescope to the average frequency components of the modulation transfer function.
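The relationship between the telescope pupil, the point spread function (PSF), and the modulation transfer function (MTF) can be illustrated numerically. This is a generic Fraunhofer-diffraction sketch, not the paper's simulation; the grid size and aperture radius are arbitrary choices.

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= 0.25**2).astype(float)   # circular pupil, radius 0.25

# PSF: squared magnitude of the Fourier transform of the pupil (Fraunhofer regime)
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
psf = np.abs(field) ** 2
psf /= psf.sum()                                    # normalise total energy to 1

# MTF: magnitude of the Fourier transform of the PSF, normalised at zero frequency
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
mtf = np.abs(otf)
mtf /= mtf.max()
```

A binary source would be modelled by convolving two shifted Dirac impulses with this PSF; the MTF falls to zero at the cut-off frequency set by the pupil radius.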
Cloud point extraction (CPE) is a simple, safe, and environmentally friendly technique for preparing many different kinds of samples. In this review, we discuss the CPE method and how to apply it to environmental samples. We also cover the benefits, problems, and likely developments of CPE. The technique has received a great deal of attention for preconcentration and extraction: it has been used as a separation and preconcentration step prior to the determination of organic compounds (nutrients, polybrominated biphenyl ethers, pesticides, polycyclic aromatic hydrocarbons, polychlorinated compounds, and aromatic amines) and of inorganic species, including many metals (silver, lead, cadmium, mercury, and so on). We also find …
It is well known that drilling fluid is a key parameter for optimizing drilling operations, cleaning the hole, and managing rig hydraulics and the margins of surge and swab pressures. Although experimental work yields valid and reliable results, it is expensive and time-consuming. In contrast, continuous and regular determination of the rheological fluid properties allows the fluid to perform its essential functions during well construction. The aim of this study is to develop empirical models that estimate the rheological properties of water-based drilling fluids with less need for laboratory measurements. The study provides two predictive techniques, multiple regression analysis and artificial neural networks, to determine the rheological …
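A multiple-regression model of the kind described can be sketched with an ordinary least-squares fit. The predictor names, units, and coefficients below are fabricated for illustration on synthetic data; they are not the study's actual empirical model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical easy-to-measure predictors (illustrative only)
density = rng.uniform(8.5, 12.0, n)     # mud weight, ppg
funnel = rng.uniform(35.0, 70.0, n)     # Marsh funnel time, s/quart

# Synthetic "plastic viscosity" generated from known coefficients plus noise
pv = 1.5 * density + 0.4 * funnel - 10.0 + rng.normal(0.0, 0.5, n)

# Design matrix with an intercept column; fit by least squares
X = np.column_stack([np.ones(n), density, funnel])
beta, *_ = np.linalg.lstsq(X, pv, rcond=None)

# Goodness of fit (R^2), one of the usual model-selection criteria
pred = X @ beta
r2 = 1.0 - np.sum((pv - pred) ** 2) / np.sum((pv - pv.mean()) ** 2)
```

Because the data are generated from the model itself, the recovered coefficients land close to (-10, 1.5, 0.4); on real mud data the fit quality would be judged by the same R² and RMSE criteria.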
A method is developed for the determination of iron(III) in pharmaceutical preparations by coupling cloud point extraction (CPE) with UV-Vis spectrophotometry. The method is based on the reaction of Fe(III) with an excess of the drug ciprofloxacin (CIPRO) in dilute H2SO4, forming a hydrophobic Fe(III)-CIPRO complex that can be extracted into the non-ionic surfactant Triton X-114; iron ions are then determined spectrophotometrically at the absorption maximum of 437 nm. Several variables that affect the extraction and determination of Fe(III) are optimized to maximize the extraction efficiency and improve the sensitivity of the method. An interference study is also carried out to check the accuracy of the procedure. The results hav…
This book is intended as a textbook for an undergraduate course in financial statistics in the Department of Financial Sciences and Banking, designed for use in a semester system. To achieve its goals, the book is divided into the following chapters. Chapter one introduces basic concepts. Chapter two is devoted to frequency distributions and data representation. Chapter three discusses measures of central tendency (all types of means, the mode, and the median). Chapter four deals with measures of dispersion (standard deviation, variance, and the coefficient of variation). Chapter five is concerned with correlation and regression analysis, while chapter six covers hypothesis testing (one-population mean test, two "independent" populati…
Crime is unlawful activity of any kind and is punished by law. Crime affects a society's quality of life and economic development. With a large rise in crime globally, there is a need to analyze crime data to bring down the crime rate; this helps the police and the public take the required measures and restrict crime more effectively. The purpose of this research is to develop predictive models that can aid in crime-pattern analysis and thus support the Boston police department's crime-prevention efforts. The geographical location factor has been adopted in our model because it is influential in several situations, whether traveling to a specific area or livin…
Bis-anthraquinones with a unique molecular backbone, (+)-2,2'-epicytoskyrin A (epi) and (+)-1,1′-bislunatin (bis), were produced by the endophytic fungus Diaporthe sp. GNBP-10 associated with the Gambir plant (Uncaria gambier). Epi and bis possess robust antimicrobial activity toward various pathogens. This study focuses on determining the optimum conditions for epi and bis production by Diaporthe sp. GNBP-10. A series of culture media with various nutrient compositions was investigated for epi and bis production. The content of epi and bis was determined by measuring the area under the curve in TLC-densitometric (scanner) experiments, and linear regression analysis was then applied to obtain the results. The optimi…
This paper deals with the prediction of spatial random data involving two kinds of properties: the first is called the primary variable and the second the secondary variable. The method used in the prediction process for this type of data is the co-kriging technique. This method is typically used when measurements of the primary variable at particular locations are few (because of the cost or difficulty of obtaining them) compared with the secondary variable, whose measurements are abundantly available and highly correlated with the primary variable, as was the …