In beta decay, a neutron converts into a proton or vice versa, so the decaying atom transforms into a more stable isobar. In the present study, the semi-empirical Bethe-Weizsäcker mass formula was used to find the most stable isobar for the isobaric families with mass numbers A = 165-175. Within a group of isobars, the most stable isobar can be determined in two ways. The first is to calculate the binding energy (B.E.) of each nuclide in the family and plot these values as a function of the atomic number (Z) to obtain the mass parabolas; the most stable isobar corresponds to the vertex of the parabola. The second is to calculate directly the atomic number of the most stable isobar (ZA). The results show that the mass parabolas of isobars with even mass number (A even) differ from those with odd mass number (A odd): odd-A families have a single parabola, and hence one stable isobar, while even-A families show two parabolas and can therefore have more than one stable isobar. Comparing the two methods used in this study, we found that for odd-A isobars the two techniques mostly identify the same nuclide as the most stable, whereas for even-A families with two stable isobars, only one of the two is identified by both methods.
In this research, an X-ray diffraction pattern, obtained experimentally after preparation of barium oxide powder, was analysed. A program was used to analyse the X-ray diffraction lines of the barium oxide nanoparticles, and the particle size was then calculated using the Williamson-Hall method, giving a value of 25.356 nm. The dislocation density was also calculated and equals 1.555 × 10^15 (lines/nm²), and the number of unit cells was found to be 23831.
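The Williamson-Hall method fits β cos θ = Kλ/D + 4ε sin θ across the diffraction peaks: the intercept of the linear fit gives the crystallite size D, and the dislocation density follows as δ = 1/D². A minimal sketch of that calculation is below; the peak positions and widths are illustrative placeholders, not the barium oxide measurements from the paper, and the shape factor K = 0.9 and Cu Kα wavelength are conventional assumptions.

```python
import numpy as np

# Williamson-Hall: beta*cos(theta) = K*lam/D + 4*eps*sin(theta)
# Peak data below are plausible placeholders, NOT the paper's measurements.
K = 0.9                # Scherrer shape factor (assumed)
lam = 0.15406          # Cu K-alpha wavelength in nm (assumed)
two_theta = np.array([28.0, 33.0, 47.5, 56.3])     # peak positions, degrees
beta = np.array([0.0062, 0.0065, 0.0071, 0.0078])  # FWHM, radians

theta = np.deg2rad(two_theta) / 2.0
x = 4.0 * np.sin(theta)         # abscissa of the W-H plot
y = beta * np.cos(theta)        # ordinate of the W-H plot

slope, intercept = np.polyfit(x, y, 1)  # strain from slope, size from intercept
D = K * lam / intercept                 # crystallite size (nm)
delta = 1.0 / D**2                      # dislocation density (lines/nm^2)
print(f"D = {D:.2f} nm, delta = {delta:.3e} lines/nm^2")
```

With real data, one point per indexed reflection is used, and the strain ε is read off the fitted slope.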
The current research aims to shed light on the Global Reporting Initiative (GRI), which helps economic units in general, and those listed on the Iraq Stock Exchange in particular, report financial and non-financial information. The research is based on the main premise that applying the criteria of the Global Reporting Initiative (GRI) would provide useful information that helps users make appropriate decisions. To achieve the research goal, both descriptive and quantitative analysis methods were used. For the descriptive analysis, a desk survey was conducted; the quantitative analysis relied on applied data collected through a questionnaire form as the research tool, and the
In this study, we compared the LASSO and SCAD methods, two penalised approaches for dealing with partial quantile regression models. The Nadaraya-Watson kernel was used to estimate the nonparametric part, and the rule-of-thumb method was used to choose the smoothing bandwidth (h). The penalty methods proved efficient in estimating the regression coefficients, but according to the mean squared error (MSE) criterion, the SCAD method was the best after estimating the missing data using mean imputation.
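The nonparametric stage described above, a Nadaraya-Watson estimator with a rule-of-thumb bandwidth, can be sketched as follows. This is a generic illustration on synthetic data, not the study's dataset; the Gaussian kernel and Silverman-style bandwidth constant 1.06 are common conventions assumed here.

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)  # kernel weights
    return np.sum(w * y) / np.sum(w)        # locally weighted average

# Synthetic data standing in for the nonparametric part of the model
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + rng.normal(0.0, 0.2, 200)

# Rule-of-thumb (Silverman-style) bandwidth: h = 1.06 * sigma * n^(-1/5)
h = 1.06 * x.std() * len(x) ** (-0.2)
est = nadaraya_watson(np.pi / 2.0, x, y, h)  # estimate at the peak of sin(x)
```

The penalised (LASSO/SCAD) stage then operates on the parametric coefficients, with the nonparametric component handled by this kernel smoother.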
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we encounter in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on the similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components, and the methods were compared according to their results in estimating the component parameters. Observation membership was also inferred and assessed for these methods. The results showed that the flexible mixture model outperformed the others.
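A standard way to fit a mixture of regressions and recover observation membership is the EM algorithm: the E-step computes each observation's responsibility for each component, and the M-step refits each component by weighted least squares. The sketch below illustrates this for two linear components on synthetic data; it is a generic EM illustration, not the specific flexible mixture method the paper evaluates.

```python
import numpy as np

# Synthetic two-component mixture of regressions
rng = np.random.default_rng(1)
n = 300
x = rng.uniform(-2.0, 2.0, n)
z = rng.random(n) < 0.5                      # latent component labels
y = np.where(z, 2.0 * x + 1.0, -1.5 * x)     # two regression lines
y = y + rng.normal(0.0, 0.3, n)

X = np.column_stack([np.ones(n), x])
beta = np.array([[1.0, 1.0], [0.0, -1.0]])   # initial (intercept, slope) guesses
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):                          # EM iterations
    # E-step: responsibility of each component for each observation
    dens = np.empty((n, 2))
    for k in range(2):
        resid = y - X @ beta[k]
        dens[:, k] = pi[k] * np.exp(-0.5 * (resid / sigma[k]) ** 2) / sigma[k]
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component
    for k in range(2):
        w = r[:, k]
        beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        resid = y - X @ beta[k]
        sigma[k] = np.sqrt((w * resid**2).sum() / w.sum())
        pi[k] = w.mean()

membership = r.argmax(axis=1)                # inferred observation membership
```

The `membership` vector is the quantity assessed in the paper's comparison: how well each method assigns observations to their generating component.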
Cryptosporidiosis mainly causes persistent diarrhoea in immunocompromised patients. BALB/c mice were immunosuppressed with dexamethasone, and the tissue concentrations of Th1, Th2, and Th17 cytokines in the ileum were significantly diminished in infected and immunosuppressed mice. The levels of IFN-γ, TNF-α, IL-12, IL-6, and IL-17A increased in both ileal and spleen tissue, while IL-4 did not increase. The levels of the above cytokines were examined in the spleen in order to follow the proliferation of CD4+ T cells during C. parvum infection.
Image classification is the process of finding common features in images from various classes and using them to categorise and label the images. The key obstacles in image classification are the abundance of images, the high complexity of the data, and the shortage of labelled data. The cornerstone of the approach is extracting convolutional features from deep learning models and training machine learning classifiers on them. This study proposes a new "hybrid learning" approach, combining deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class
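The hybrid pipeline splits into two stages: a frozen convolutional backbone produces feature vectors, and a classical classifier is trained on them. The sketch below shows only the second stage; the synthetic 512-dimensional features stand in for real VGG-16 outputs (which in practice would come from something like `tf.keras.applications.VGG16(include_top=False)` followed by global pooling), and logistic regression stands in for whichever classical classifier is used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for VGG-16 convolutional features: three classes,
# each clustered around its own centre in a 512-dim feature space.
rng = np.random.default_rng(0)
n_per_class, dim = 100, 512
centers = rng.normal(0.0, 1.0, (3, dim))
feats = np.vstack([c + 0.5 * rng.normal(0.0, 1.0, (n_per_class, dim))
                   for c in centers])
labels = np.repeat([0, 1, 2], n_per_class)

# Classical ML stage trained on the (stand-in) deep features
Xtr, Xte, ytr, yte = train_test_split(feats, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

Because the backbone is frozen, only the final classifier is trained, which is what makes this hybrid approach workable when labelled data are scarce.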
The bi-level programming problem is to minimise or maximise an objective function while another objective function is embedded within the constraints. This problem has received a great deal of attention in the mathematical programming community owing to the proliferation of applications and the use of evolutionary algorithms for this kind of problem. Two non-linear bi-level programming methods are used in this paper, with the goal of reaching the optimal solution through simulation using the Monte Carlo method with different small and large sample sizes. The research found that the branch-and-bound algorithm was preferable for solving the non-linear bi-level programming problem because it gave better results.
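The Monte Carlo idea for bi-level problems is to sample the leader's variable, let the follower respond optimally, and keep the best upper-level value found. The toy problem below (with a closed-form lower level so the follower's response is exact) is purely illustrative and is not the paper's test problem: the leader minimises (x - 1)² + y² while the follower solves min_y (y - x)², so y = x and the optimum is at x = 0.5.

```python
import random

def lower_level(x):
    # Follower's problem min_y (y - x)^2 has the closed-form solution y = x
    return x

def upper_obj(x, y):
    # Leader's objective, evaluated at the follower's optimal response
    return (x - 1.0) ** 2 + y ** 2

random.seed(0)
best_x, best_f = None, float("inf")
for _ in range(10000):                  # Monte Carlo samples of the leader's variable
    x = random.uniform(-2.0, 2.0)
    y = lower_level(x)                  # follower reacts optimally
    f = upper_obj(x, y)
    if f < best_f:
        best_x, best_f = x, f
```

The contrast with branch-and-bound is that sampling only approaches the optimum as the sample size grows, which is consistent with the paper's finding that branch-and-bound gave better results.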
In this paper, the magnetohydrodynamic (MHD) flow of a Williamson fluid with varying temperature and concentration in an inclined channel with variable viscosity is examined. The perturbation technique, in terms of the Weissenberg number, is used to obtain explicit forms for the velocity field. The effects of the physical parameters (the Darcy parameter, Reynolds number, Peclet number, and magnetic parameter) are discussed for different values, as shown in the plots.
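A perturbation solution in the Weissenberg number typically takes the form of a regular expansion; a generic sketch (the paper's exact expansion variables and orders may differ) is

```latex
u = u_0 + \mathrm{We}\, u_1 + O(\mathrm{We}^2), \qquad
\theta = \theta_0 + \mathrm{We}\, \theta_1 + O(\mathrm{We}^2),
```

where substituting into the governing equations and collecting like powers of We gives a Newtonian-type problem at O(1) and a non-Newtonian correction at O(We), each solvable in closed form for the velocity field.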
This work is concerned with the vibration attenuation of a smart beam interacting with fluid using proportional-derivative (PD) control and an adaptive approximation compensator (AAC). The role of the AAC is to improve the PD performance by compensating for unmodelled dynamics using the function approximation technique (FAT). The key idea is to represent the unknown parameters using weighting-coefficient and basis-function matrices/vectors, with the weighting-coefficient vector updated using Lyapunov theory. This controller is applied to a flexible beam fitted with surface-bonded piezo-patches while the vibrating beam system is submerged in fluid. Two main effects are considered: 1) axial stretching of the vibrating beam, which leads
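The PD-plus-FAT structure can be illustrated on a single-mode toy model: the unknown disturbance is approximated as W^T φ(t) with a truncated Fourier basis, and the weight vector adapts via a Lyapunov-style gradient law. All gains, the basis choice, and the disturbance below are illustrative assumptions, not the beam-fluid model or tuning from the paper.

```python
import numpy as np

# Toy single-mode model: q_dd = -k*q - c*q_d + u + d(t), with d(t) an
# "unknown" disturbance to be cancelled by the FAT compensator W^T phi(t).
k, c = 4.0, 0.1          # modal stiffness and damping (illustrative)
Kp, Kd = 25.0, 10.0      # PD gains (illustrative)
gamma = 5.0              # adaptation gain
dt, T = 1e-3, 20.0

def phi(t):              # basis: constant term + first Fourier pair
    return np.array([1.0, np.sin(2.0 * t), np.cos(2.0 * t)])

def dist(t):             # disturbance, representable in the chosen basis
    return 1.0 + 0.5 * np.sin(2.0 * t)

q, qd = 0.2, 0.0         # initial modal deflection and velocity
W = np.zeros(3)          # FAT weight estimates
for i in range(int(T / dt)):
    t = i * dt
    s = q + qd                            # filtered error (regulation to zero)
    u = -Kp * q - Kd * qd - W @ phi(t)    # PD control + adaptive compensator
    qdd = -k * q - c * qd + u + dist(t)
    q += qd * dt                          # explicit Euler integration
    qd += qdd * dt
    W += gamma * phi(t) * s * dt          # Lyapunov-style weight update
```

The update law drives W toward the true basis coefficients of the disturbance, so the residual vibration shrinks below what PD control alone would achieve.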