Currently, one of the topical areas of application of machine learning methods is the prediction of material characteristics. The aim of this work is to develop machine learning models for determining the rheological properties of polymers from experimental stress relaxation curves. The paper presents an overview of the main directions of metaheuristic approaches (local search, evolutionary algorithms) to solving combinatorial optimization problems. Metaheuristic algorithms for solving several important combinatorial optimization problems are described, with special emphasis on the construction of decision trees. A comparative analysis of algorithms for solving the regression problem with the CatBoost Regressor has been carried out. The objects of the study are generated data sets obtained from theoretical stress relaxation curves. Tables of initial data for training the models for all samples are presented, and a statistical analysis of the characteristics of the initial data sets is carried out. The total number of numerical experiments for all samples was 346020 variations. When developing the models, CatBoost methods were applied, regularization techniques (Weight Decay, Decoupled Weight Decay Regularization, Augmentation) were used to improve model accuracy, and the Z-Score method was used to normalize the data. As a result of the study, intelligent models were developed to determine the rheological parameters of polymers entering the generalized nonlinear Maxwell-Gurevich equation (initial relaxation viscosity, velocity modulus), using generated data sets for the EDT-10 epoxy binder as an example. Based on the test results, the quality of the models was assessed, and graphs of forecasts for the training and test samples, as well as graphs of forecast errors, were plotted. The intelligent models are based on the CatBoost algorithm and implemented in the Jupyter Notebook environment in Python. The constructed models passed the quality assessment according to the following metrics: MAE, MSE, RMSE, and MAPE. The maximum prediction error was 0.86 for the MAPE metric, and the minimum was 0.001 for the MSE metric. The model performance estimates obtained during testing are valid.
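A minimal sketch of such a pipeline in Python is given below; it assumes synthetic stand-in data, an arbitrary feature layout, and illustrative hyperparameters rather than the paper's generated data sets, and only shows how CatBoost training, Z-Score normalization, and the MAE/MSE/RMSE/MAPE metrics fit together.

```python
import numpy as np
from catboost import CatBoostRegressor
from sklearn.metrics import (mean_absolute_error, mean_absolute_percentage_error,
                             mean_squared_error)
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: the real work trains on generated stress relaxation
# curves; the feature layout, target, and hyperparameters here are assumptions.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))                                        # stand-in relaxation-curve features
y = 5.0 + X @ rng.normal(size=5) + rng.normal(scale=0.1, size=1000)   # stand-in rheological target

# Z-Score normalization of the inputs
X = (X - X.mean(axis=0)) / X.std(axis=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = CatBoostRegressor(iterations=500, learning_rate=0.05,
                          l2_leaf_reg=3.0,            # built-in L2 (weight-decay-style) regularization
                          random_seed=0, verbose=0)
model.fit(X_train, y_train, eval_set=(X_test, y_test))

pred = model.predict(X_test)
mse = mean_squared_error(y_test, pred)
print("MAE :", mean_absolute_error(y_test, pred))
print("MSE :", mse)
print("RMSE:", np.sqrt(mse))
print("MAPE:", mean_absolute_percentage_error(y_test, pred))
```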
In this research, we studied multiple linear regression models with two explanatory variables in the presence of autocorrelation in the error term, when the errors follow a general logistic distribution. The autoregressive model is used to study and analyze the relationship between the variables, and through this relationship forecasting of the variable values is carried out. A simulation technique is used to compare the methods depending
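A hedged sketch of such a simulation study is shown below; the coefficients, AR(1) parameter, and the Cochrane-Orcutt-style correction used for comparison are illustrative assumptions, not the methods of the paper.

```python
import numpy as np

# Two-regressor linear model whose errors follow an AR(1) process with
# logistic innovations; parameter values below are illustrative only.
rng = np.random.default_rng(0)
n, rho, beta = 100, 0.6, np.array([1.0, 0.5, -0.3])    # sample size, autocorrelation, true coefficients

def simulate():
    X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
    eps = rng.logistic(loc=0.0, scale=1.0, size=n)      # logistic innovations
    u = np.zeros(n)
    for t in range(1, n):                               # AR(1) error term
        u[t] = rho * u[t - 1] + eps[t]
    return X, X @ beta + u

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def cochrane_orcutt(X, y):
    r = y - X @ ols(X, y)                               # step 1: OLS residuals
    rho_hat = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])      # step 2: estimate the autocorrelation
    Xs = X[1:] - rho_hat * X[:-1]                       # step 3: quasi-differencing
    ys = y[1:] - rho_hat * y[:-1]
    return ols(Xs, ys)

mse = {"OLS": [], "CO": []}
for _ in range(500):                                    # Monte Carlo replications
    X, y = simulate()
    mse["OLS"].append(np.mean((ols(X, y) - beta) ** 2))
    mse["CO"].append(np.mean((cochrane_orcutt(X, y) - beta) ** 2))
print({k: round(float(np.mean(v)), 4) for k, v in mse.items()})
```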
Background: Type 2 diabetes mellitus is a condition characterized by elevated oxidative stress, which has been implicated in diabetic progression and its vascular complications. Aim: To assess the impact of gliclazide modified release (MR) versus glimepiride on oxidative stress markers, glycemic indices, lipid profile, and estimated glomerular filtration rate in uncontrolled type 2 diabetic patients on metformin monotherapy. Methods: This was an observational comparative study conducted in the Thi-Qar specialized diabetic, endocrine, and metabolism center. Sixty-six patients were randomized into two groups based on the added sulfonylurea (SU). Group 1 (33 patients) was on gliclazide MR, whereas Group 2 (33 patients)
Genome sequencing has significantly improved the understanding of HIV and AIDS through accurate data on viral transmission, evolution, and anti-therapeutic processes. Deep learning algorithms, like the Fine-Tuned Gradient Descent Fused Multi-Kernel Convolutional Neural Network (FGD-MCNN), can predict strain behaviour and evaluate complex patterns. Using genotypic-phenotypic data obtained from the Stanford University HIV Drug Resistance Database, three data files covering various antiretroviral drug classes were created for HIV prediction and drug resistance; these include PIs, NRTIs, and NNRTIs. FGD-MCNNs classify genetic sequences as vulnerable or resistant to antiretroviral drugs by analyzing chromosomal information and id
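The exact FGD-MCNN architecture is not reproduced here; the sketch below is only a minimal, hypothetical 1D convolutional classifier (in PyTorch) for labelling encoded sequences as susceptible or resistant, with the sequence length and one-hot encoding assumed for illustration.

```python
import torch
import torch.nn as nn

# Assumed encoding: one-hot amino-acid channels over sequence positions;
# these dimensions are illustrative, not taken from the paper.
SEQ_LEN, N_AMINO = 99, 21

class ResistanceCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_AMINO, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.classifier = nn.Linear(64, 2)    # vulnerable (susceptible) vs. resistant

    def forward(self, x):                     # x: (batch, N_AMINO, SEQ_LEN)
        return self.classifier(self.features(x).squeeze(-1))

model = ResistanceCNN()
dummy = torch.randn(8, N_AMINO, SEQ_LEN)      # stand-in for encoded sequences
print(model(dummy).shape)                     # torch.Size([8, 2]), one logit per class
```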
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have small or inadequate data sets for training DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with a broad background of knowledge, and this annotation process is costly, time-consuming, and error-prone. Every DL framework is fed a significant amount of labeled data to learn representations automatically; ultimately, a larger amount of data generates a better DL model, although performance is also application dependent. This issue is the main barrier for
In our article, three iterative methods are applied to solve the nonlinear differential equations that represent straight and radial fins affected by thermal conductivity. The iterative methods are the Daftardar-Jafari method (DJM), the Temimi-Ansari method (TAM), and the Banach contraction method (BCM), used to obtain the approximate solutions. For comparison purposes, numerical solutions were further obtained by using the fourth-order Runge-Kutta (RK4) method, the Euler method, and previous analytical methods that are available in the literature. Moreover, the convergence of the proposed methods was discussed and proved. In addition, the maximum error remainder values are also evaluated, which indicates that the propo
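A simplified, hedged illustration of the comparison idea, using a Picard-type successive-approximation iteration (in the spirit of BCM/DJM) against an RK4 reference on a toy initial value problem rather than the fin equations, is given below.

```python
import numpy as np

# Hypothetical test problem: u'(t) = -u(t)**2, u(0) = 1 (exact solution 1/(1+t)).
t = np.linspace(0.0, 1.0, 101)

def picard(n_iter=8):
    u = np.ones_like(t)                        # initial guess u0(t) = u(0)
    for _ in range(n_iter):
        integrand = -u**2
        # u_{k+1}(t) = u(0) + \int_0^t f(u_k(s)) ds   (cumulative trapezoid rule)
        integral = np.concatenate(
            ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))
        u = 1.0 + integral
    return u

def rk4(f, u0, t):
    u = np.empty_like(t); u[0] = u0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(u[i]); k2 = f(u[i] + h * k1 / 2)
        k3 = f(u[i] + h * k2 / 2); k4 = f(u[i] + h * k3)
        u[i + 1] = u[i] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return u

u_iter = picard()
u_ref = rk4(lambda u: -u**2, 1.0, t)
print("max |difference| =", np.max(np.abs(u_iter - u_ref)))   # analogue of a maximum error remainder
```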
This paper considers the approximate solution of the hyperbolic one-dimensional wave equation with nonlocal mixed boundary conditions by improved methods based on the assumption that the solution can be expressed as a double power series in orthogonal polynomials, such as the Bernstein, Legendre, and Chebyshev polynomials. The solution is ultimately compared with that of the original method, which is based on standard polynomials, by calculating the absolute error to verify the validity and accuracy of the performance.
The primary objective of the current paper is to suggest and implement effective computational methods (DECMs) to calculate analytic and approximate solutions to the nonlocal one-dimensional parabolic equation, which is utilized to model specific real-world applications. Powerful and elegant methods that use orthogonal basis functions, namely the Bernstein, Legendre, Chebyshev, Hermite, and Bernoulli polynomials, to describe the solution as a double power series have been developed. Hence, a specified partial differential equation is reduced to a system of linear algebraic equations that can be solved by using Mathematica®12. The techniques of effective computational methods (DECMs) have been applied to solve some s
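A minimal one-dimensional sketch of this reduction is given below; it assumes a Chebyshev basis, a manufactured solution, and a single nonlocal integral condition, none of which are taken from the paper, and it solves the resulting linear algebraic system with NumPy rather than Mathematica.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Expand u(x) = sum_i c_i T_i(2x - 1) on [0, 1], collocate u''(x) = f(x) at
# interior points, and impose one local and one nonlocal (integral) condition,
# giving a linear system A c = b. Manufactured solution: u(x) = sin(pi x).
n = 8                                          # number of basis functions T_0..T_{n-1}
x_col = np.linspace(0.05, 0.95, n - 2)         # interior collocation points

def second_derivative(c, x):
    # d^2/dx^2 of sum c_i T_i(2x - 1): the chain rule contributes a factor 2**2 = 4
    return 4.0 * C.chebval(2 * x - 1, C.chebder(c, 2))

def integral_0_1(c):
    # \int_0^1 u dx = 0.5 * \int_{-1}^{1} u(s) ds with s = 2x - 1
    antider = C.chebint(c)
    return 0.5 * (C.chebval(1.0, antider) - C.chebval(-1.0, antider))

A = np.zeros((n, n)); b = np.zeros(n)
for i in range(n):                             # apply each operator to basis vector e_i
    e = np.zeros(n); e[i] = 1.0
    A[:n - 2, i] = second_derivative(e, x_col)     # equation residual rows
    A[n - 2, i] = C.chebval(-1.0, e)               # local condition u(0) = 0
    A[n - 1, i] = integral_0_1(e)                  # nonlocal integral condition
b[:n - 2] = -np.pi**2 * np.sin(np.pi * x_col)      # f(x) for u = sin(pi x)
b[n - 1] = 2.0 / np.pi                             # \int_0^1 sin(pi x) dx

c = np.linalg.solve(A, b)
xs = np.linspace(0, 1, 201)
print("max abs error:", np.max(np.abs(C.chebval(2 * xs - 1, c) - np.sin(np.pi * xs))))
```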
The study aims to detect the relationship between the teaching modalities adopted at Jerash University and classroom management. To achieve the goal of the study, a questionnaire consisting of (39) items was developed, and its validity and reliability were verified; it was then sent to a sample of (122) university professors out of the total of (172) professors who make up the study population.
The study found a relationship between the four teaching methods and their strategies and classroom management, with an average overall level as a result; critical thinking showed the highest correlation with the classroom discipline management strategy, while the relatio
Porosity is important because it reflects the presence of oil reserves; hence, the volume of underground reserves, as well as essential petrophysical parameters such as permeability and saturation, are directly related to the connected pores, as are the selection of perforation intervals and recommendations for drilling additional infill wells. Two distinct methods are used to obtain the porosity estimates: the first is based on conventional equations that utilize porosity logs, while the second relies on a statistical approach that builds matrices based on rock and fluid composition and solves the equations (matrices) simultaneously, in which the log readings are entered as equations, and the matrix is sol
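The sketch below illustrates both approaches on made-up numbers; the matrix and fluid response values are generic textbook figures, not parameters from this study.

```python
import numpy as np

# (1) Conventional density-log porosity: phi = (rho_ma - rho_b) / (rho_ma - rho_fl)
rho_matrix, rho_fluid = 2.71, 1.0        # e.g. limestone matrix and fresh-water filtrate (g/cc)
rho_bulk = 2.45                          # measured bulk density (g/cc)
phi_density = (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)
print(f"density-log porosity: {phi_density:.3f}")

# (2) Statistical (matrix) approach: each log reading is modeled as a volume-weighted
# mix of component responses; solving the linear system gives the volume fractions.
# Columns: calcite, dolomite, fluid; rows: density, neutron, unity (fractions sum to 1).
R = np.array([[2.71, 2.87, 1.00],
              [0.00, 0.02, 1.00],
              [1.00, 1.00, 1.00]])
logs = np.array([2.45, 0.16, 1.00])      # measured density, neutron porosity, closure
volumes = np.linalg.solve(R, logs)
print("calcite, dolomite, fluid volumes:", np.round(volumes, 3))
print("matrix-method porosity:", round(volumes[2], 3))
```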
In this study, we focused on the estimation of random coefficients in the general regression and Swamy models for panel data. Using this type of data gives a better chance of obtaining better methods and indicators. Entropy methods have been used to estimate the random coefficients of the general regression and Swamy panel-data models in two ways: the first represents the maximum dual Entropy and the second the general maximum Entropy, and a comparison between them has been made by using simulation to choose the optimal method.
The results have been compared by using the mean squared error and the mean absolute percentage error for different cases in terms of the correlation valu
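As a rough sketch only, a cross-sectional generalized maximum entropy estimator with illustrative support points (not the panel-data formulation or the specific entropy variants of the study) could be set up as follows.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative only: a small cross-sectional data set, assumed support points,
# and a generic maximum entropy objective; not the paper's estimators.
rng = np.random.default_rng(1)
n, k, m = 20, 2, 3
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

z = np.array([-5.0, 0.0, 5.0])        # support points for each coefficient
v = np.array([-1.5, 0.0, 1.5])        # support points for each error term

def unpack(theta):
    p = theta[: k * m].reshape(k, m)   # probabilities over coefficient supports
    w = theta[k * m:].reshape(n, m)    # probabilities over error supports
    return p, w

def neg_entropy(theta):
    t = np.clip(theta, 1e-10, None)
    return np.sum(t * np.log(t))       # minimizing this maximizes entropy

def data_residual(theta):
    p, w = unpack(theta)
    return y - X @ (p @ z) - w @ v     # data constraints: y = X(Zp) + Vw

constraints = [
    {"type": "eq", "fun": data_residual},
    {"type": "eq", "fun": lambda t: unpack(t)[0].sum(axis=1) - 1.0},
    {"type": "eq", "fun": lambda t: unpack(t)[1].sum(axis=1) - 1.0},
]
theta0 = np.full(k * m + n * m, 1.0 / m)
res = minimize(neg_entropy, theta0, method="SLSQP",
               bounds=[(0.0, 1.0)] * theta0.size, constraints=constraints)
beta_gme = unpack(res.x)[0] @ z
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("GME:", np.round(beta_gme, 3), " OLS:", np.round(beta_ols, 3))
```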