Regression Discontinuity (RD) is a study design in which a defined group is exposed to a treatment. Its distinctive feature is that the study population is classified into two groups at a specific threshold (cutoff) point, determined in advance according to the terms and requirements of the study. Accordingly, attention focused on the problem of workers' retirement, proposing a scenario in which an end-of-service reward is granted to fill the gap (discontinuity point) where it had not been granted. The regression discontinuity method was used to estimate the effect of the end-of-service reward at the cutoff for insured workers, as well as the resulting increase in revenues. The research showed that this reward has a clear effect on increasing revenues, owing to workers' regularity and continuity in their work. It also applied a Local Linear Smoother (LLS) with three bandwidth-selection models; after the analysis, the results were as follows: the CCT (Calonico, Cattaneo & Titiunik) bandwidth gave the best performance, followed by local linear regression with the IK (Imbens and Kalyanaraman) bandwidth. Real data with a sample size of 71 were used, with compensation as the explanatory variable X and revenue as the response variable Y, while the results of the traditional OLS estimation method were not good enough.
The electronic characteristics, including the density of states and bond lengths, together with spectroscopic properties such as the IR spectrum and Raman scattering as functions of frequency, were studied for Sn10O16, C24O6, and the hybrid junction (Sn10O16/C24O6). The methodology uses all-electron DFT with the hybrid functional B3LYP (Becke, 3-parameter, Lee–Yang–Parr), the 6-311G(d,p) basis set, and the Stuttgart/Dresden (SDD) basis set, with theoretical calculations performed in Gaussian 09. The geometric structures were built with GaussView 5 as a supplementary program. The band gap was calculated and compared to the measured value.
The logistic regression model is an important statistical model that describes the relationship between a binary response variable and explanatory variables. The large number of explanatory variables usually used to describe the response leads to the problem of multicollinearity among them, which makes the estimation of the model's parameters inaccurate.
ABSTRACT
Agricultural production, food security and safety, public health, animal welfare, access to markets, and the alleviation of rural poverty are achieved by relying on veterinary services to prevent animal disease. The World Organisation for Animal Health guidelines focus on animal disease control, which depends on good governance and the quality of veterinary services. The aim of veterinary services is to control and prevent animal disease, among other responsibilities; they are responsible for early detection of and rapid response to outbreaks of emerging or re-emerging animal diseases, and for optimizing the quality and effectiveness of disease control.
Background: Obesity is increasingly prevalent in modern societies and constitutes a significant public health problem, carrying an increased risk of cardiovascular disease.
Objective: This study aims to determine the agreement between actual and perceived body image in the general population.
Methods: A descriptive cross-sectional study was conducted with a sample size of 300. The data were collected from eight major populated areas of the Northern district of Karachi, Sindh, over a period of six months (10 January 2020 to 21 June 2020). The Figure Rating Scale (FRS) questionnaire was applied to collect demographic data and perceptions of body weight. Body mass index (BMI) was used for assessment.
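For reference, BMI is weight in kilograms divided by the square of height in metres; the sketch below uses the standard WHO adult cut-points (an illustration only, not the study's instrument):

```python
def bmi(weight_kg, height_m):
    """Body mass index: kg / m^2."""
    return weight_kg / height_m ** 2

def who_category(b):
    """Standard WHO adult BMI categories."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

print(who_category(bmi(82.0, 1.70)))  # 82 / 1.70^2 ~ 28.4 -> "overweight"
```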
This study employs wavelet transforms to address the issue of boundary effects. It also uses probit-transform techniques, based on probit functions, to estimate the copula density function; this estimation depends on the empirical distribution function of the variables, and the density is estimated in a transformed domain. Recent research indicates that early implementations of this strategy may have been more efficient. Nevertheless, in this work we implemented two novel methodologies using the probit transform and the wavelet transform, and then evaluated and contrasted them using three criteria: root mean square error (RMSE), Akaike information criterion (AIC), and log-likelihood.
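The probit-transform step can be sketched generically: map each observation into (0,1) through the (rank-based) empirical distribution function, then apply the inverse standard normal CDF so that density estimation happens in the transformed domain. This is a hedged illustration of the general technique, not the paper's exact estimator:

```python
import statistics

def probit_transform(sample):
    """Map data to the probit scale via the empirical CDF:
    u_i = rank_i / (n + 1), kept strictly inside (0, 1),
    then z_i = Phi^{-1}(u_i)."""
    n = len(sample)
    nd = statistics.NormalDist()
    order = sorted(range(n), key=lambda i: sample[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return [nd.inv_cdf(r / (n + 1)) for r in ranks]

data = [3.1, 0.4, 7.8, 2.2, 5.5]
z = probit_transform(data)
```

Dividing ranks by n + 1 keeps the transformed points away from ±∞, which is what makes subsequent density estimation in the probit domain well-behaved.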
In the field of data security, the critical challenge of preserving sensitive information during its transmission through public channels takes centre stage. Steganography, a method of concealing data within carrier objects such as text, can be proposed to address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in its linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganographic ventures. Arabic text steganography harnesses these features to hide data.
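One classical Arabic text steganography technique (offered here only as an illustration; the abstract is truncated before naming its own method) hides bits with the kashida/tatweel character U+0640, which stretches letter joins without changing meaning. A minimal sketch, which for simplicity ignores which letter pairs are actually connectable in Arabic script:

```python
TATWEEL = "\u0640"  # kashida / Arabic tatweel

def embed(cover_word, bits):
    """Hide one bit per letter: insert a kashida after the letter for
    a 1-bit, nothing for a 0-bit. (Real schemes only stretch
    connectable letter pairs; that constraint is ignored here.)"""
    out = []
    for ch, b in zip(cover_word, bits):
        out.append(ch)
        if b == "1":
            out.append(TATWEEL)
    out.extend(cover_word[len(bits):])  # remaining letters carry no bits
    return "".join(out)

def extract(stego_word):
    """Recover one bit per letter gap: 1 if a kashida follows, else 0."""
    bits = []
    chars = list(stego_word)
    for i, ch in enumerate(chars):
        if ch != TATWEEL:
            follows = i + 1 < len(chars) and chars[i + 1] == TATWEEL
            bits.append("1" if follows else "0")
    return "".join(bits)

stego = embed("كتاب", "101")
```

Because every letter gap yields a bit, the receiver must know the message length; here the unused trailing letter contributes a padding 0. Stripping the tatweels recovers the original cover word exactly.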
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples into moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is then used to classify the reduced moments into two classes. The proposed method's performance is tested and compared with two other methods using two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods, with best accuracies of 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it
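The orthogonal-polynomial moment step can be sketched in a generic form: build discrete orthonormal polynomials over the sample grid by Gram–Schmidt and project the signal onto them. This is an assumption-laden illustration (the polynomial family and parameters below are not the paper's), showing only the signal-to-moments conversion:

```python
import math

def orthonormal_basis(n_points, order):
    """Gram-Schmidt on 1, t, t^2, ... over the sample grid t in [0, 1],
    giving discrete orthonormal polynomial vectors."""
    t = [i / (n_points - 1) for i in range(n_points)]
    basis = []
    for k in range(order + 1):
        v = [ti ** k for ti in t]
        for b in basis:                        # remove components along
            c = sum(vi * bi for vi, bi in zip(v, b))   # earlier polynomials
            v = [vi - c * bi for vi, bi in zip(v, b)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        basis.append([vi / norm for vi in v])
    return basis

def op_moments(signal, order):
    """Moments = projections of the signal onto the orthonormal basis."""
    basis = orthonormal_basis(len(signal), order)
    return [sum(si * bi for si, bi in zip(signal, b)) for b in basis]

sig = [math.sin(2 * math.pi * i / 64) for i in range(64)]
moments = op_moments(sig, 5)
```

In the paper's pipeline these moments would then pass through the sparse filter before reaching the SVM.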
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages. The first stage is for preprocessing the data, and the second stage is for feature extraction, which is based on the Discrete Wavelet Transform (DWT). The third stage is for classification and is based on a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved a high classification accuracy rate, with 99.1% for the MADBase database and 99.9% for the MNIST database.
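The DWT feature-extraction stage can be illustrated with the simplest wavelet, the Haar transform (an assumed example; the abstract does not state which wavelet the system uses): one decomposition level splits the signal into pairwise averages (approximation coefficients) and pairwise differences (detail coefficients), preserving the signal's energy.

```python
def haar_dwt(signal):
    """One level of the Haar DWT: pairwise sums (approximation) and
    differences (detail), scaled by 1/sqrt(2) to preserve energy.
    Assumes an even-length input."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

a, d = haar_dwt([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])
```

For images such as handwritten digits, the same operation is applied along rows and then columns, and the approximation band is typically what feeds the classifier.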