Rutting significantly affects pavement performance, and rut depth is often used as a parameter to assess pavement quality. The Asphalt Institute (AI) design method prescribes a maximum allowable rut depth of 13 mm, whereas the AASHTO design method stipulates a critical serviceability index of 2.5, which is equivalent to an average rut depth of 15 mm. In this research, static and repeated compression tests were performed to evaluate permanent strain as a function of (1) mix properties (asphalt content and type) and (2) testing temperature. The results indicated that the accumulated plastic strain was higher during the repeated load test than during the static load test; notably, temperature played a major role. The power-law model was used to describe the relationship between the accumulated permanent strain and the number of load repetitions. Furthermore, graphical analysis was performed using VESYS 5W to predict the rut depth of the asphalt concrete layer; the α and µ parameters affected the predicted rut depth significantly. The results show a substantial difference between the two tests, indicating that the repeated load test is more adequate, useful, and accurate than the static load test for evaluating rut depth.
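The power-law model mentioned above is conventionally written as ε_p = a·N^b, where ε_p is accumulated permanent strain and N the number of load repetitions. A minimal sketch of fitting it by linear regression in log-log space follows; the coefficient values are illustrative synthetic data, not results from the study:

```python
import numpy as np

def fit_power_law(N, eps_p):
    """Fit eps_p = a * N**b by linear regression of log(eps_p) on log(N)."""
    b, log_a = np.polyfit(np.log(N), np.log(eps_p), 1)
    return np.exp(log_a), b

# Illustrative synthetic data (a and b are NOT values from the study)
N = np.logspace(1, 5, 50)        # load repetitions
eps_p = 0.002 * N**0.35          # accumulated permanent strain
a, b = fit_power_law(N, eps_p)   # recovers a ≈ 0.002, b ≈ 0.35
```

Fitting in log-log space is the standard way to estimate both power-law parameters with one ordinary least-squares pass.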
Presented herein are the results of a comparison between the theoretical equation of Huang and Menq and laboratory model tests used to study the bearing capacity of a square footing on geogrid-reinforced loose sand. The effects of several parameters were examined in order to characterize the general behavior of soil improvement with geogrid reinforcement. These parameters include the depth of the first reinforcement layer, the vertical spacing of reinforcement layers, the number of reinforcement layers, and the type of reinforcement. The results show that the theoretical equation can be used to estimate the bearing capacity of loose sand.
This study conducts an exhaustive comparison between the performance of human translators and artificial intelligence-powered machine translation systems, specifically examining three leading systems: Spider-AI, Metacate, and DeepL. A variety of texts from distinct categories were evaluated to gain a thorough understanding of the qualitative differences, as well as the strengths and weaknesses, between human and machine translation. The results demonstrated that human translation significantly outperforms machine translation, with larger gaps in literary texts and texts characterized by high linguistic complexity. However, the performance of machine translation systems, particularly DeepL, has improved and, in some contexts, narrows that gap.
Today, artificial intelligence has become one of the most important sciences for creating intelligent computer programs that simulate the human mind. The goal of artificial intelligence in the medical field is to assist doctors and health care workers in diagnosing diseases and delivering clinical treatment, reducing the rate of medical error, and saving lives. The main and most widely used technologies are expert systems, machine learning, and big data. This article provides a brief overview of these three techniques to help readers understand them and their importance.
Forest fires continue to rise during the dry season and are difficult to stop. High temperatures in the dry season increase the drought index, which can potentially ignite the forest at any time. Thus, the government must conduct surveillance throughout the dry season. Continuous surveillance without focusing on particular high-risk periods is ineffective and inefficient, because preventive measures are carried out without knowledge of the potential fire risk. In the Keetch-Byram Drought Index (KBDI), the drought factor formulation calculates only today's drought, based on current weather conditions and yesterday's drought index. However, determining the drought factor a day ahead requires forecast weather data.
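The daily KBDI update described above can be sketched as follows. This is a minimal sketch of one commonly cited metric adaptation of the Keetch–Byram formulation; the constants (5.1 mm rainfall threshold, the exponential drought-factor coefficients) are assumptions from that adaptation, not values taken from this study:

```python
import math

def kbdi_update(q_yesterday, t_max_c, rain_mm, annual_rain_mm):
    """One daily KBDI step (metric form; constants assumed from common
    metric adaptations of Keetch & Byram). Index is bounded to 0-800."""
    # Only rainfall beyond a 5.1 mm threshold reduces the index.
    net_rain = max(0.0, rain_mm - 5.1)
    q = max(0.0, q_yesterday - net_rain)
    # Today's drought factor, driven by max daily temperature (deg C)
    # and mean annual rainfall (mm).
    df = (1e-3 * (800.0 - q)
          * (0.968 * math.exp(0.0875 * t_max_c + 1.5552) - 8.30)
          / (1.0 + 10.88 * math.exp(-0.001736 * annual_rain_mm)))
    return min(800.0, q + df)

today = kbdi_update(q_yesterday=400.0, t_max_c=33.0,
                    rain_mm=0.0, annual_rain_mm=2500.0)
```

Chaining this update with forecast temperature and rainfall inputs is how a next-day drought factor could be estimated, which is the gap the abstract points at.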
Traumatic Brain Injury (TBI) remains a worldwide leading cause of mortality and morbidity. Over the last decades, different modalities have been used to assess severity and outcome, including the Glasgow Coma Scale (GCS), imaging modalities, and even genetic polymorphisms. However, determining the prognosis of TBI victims is still challenging, requiring the emergence of more accurate and more applicable tools to supersede the older modalities.
In this paper, a statistical model of the Saudi financial market is built using GARCH models, which account for price volatility during trading periods. The effect of the random-error distribution of the time series on the accuracy of the statistical model was also studied; two distributions were considered, the normal distribution and Student's t distribution. Application to measured data showed that the best model for the Saudi market is GARCH(1,1) with Student's t-distributed random errors.
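The GARCH(1,1) conditional-variance recursion is σ²_t = ω + α·ε²_{t-1} + β·σ²_{t-1}. A minimal simulation sketch with standardized Student's t innovations follows; the parameter values are illustrative, not the fitted Saudi-market estimates:

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, df, seed=0):
    """Simulate eps_t = sigma_t * z_t, z_t ~ standardized Student's t,
    with sigma2_t = omega + alpha*eps_{t-1}**2 + beta*sigma2_{t-1}."""
    rng = np.random.default_rng(seed)
    # Scale t innovations to unit variance (Var of t_df is df/(df-2)).
    z = rng.standard_t(df, size=n) * np.sqrt((df - 2) / df)
    eps = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)   # unconditional variance
    eps[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1]**2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * z[t]
    return eps, sigma2

# Illustrative parameters (NOT the fitted Saudi-market estimates)
returns, sigma2 = simulate_garch11(10_000, omega=0.05,
                                   alpha=0.08, beta=0.90, df=6)
```

The heavy-tailed t innovations let the model reproduce the extreme returns that a normal-error GARCH understates, which is consistent with the paper's finding that the t distribution fits better.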
In the present study, an attempt has been made to experimentally investigate the flexural performance of ten simply supported reinforced concrete gable roof beams, including a solid control specimen (i.e., without openings) and nine beams with web openings of different dimensions and configurations. The nine beams with openings have identical reinforcement details. All beams were monotonically loaded to failure under mid-span loading. The main variables were the number of created openings, the total area of the created openings, and the inclination angle of the posts between openings. Of interest are the load-carrying capacity, cracking resistance and propagation, deformability, failure mode, and strain development that represent the behavior of the beams.
Today’s academics face a major hurdle in solving real-world combinatorial problems. It is nevertheless possible to use optimization techniques to find, design, and solve a genuine optimal solution to a particular problem, despite the limitations of the applied approach. A surge in interest in population-based optimization methodologies has spawned a plethora of new and improved approaches to a wide range of engineering problems. Optimizing test suites is a combinatorial testing challenge that has been shown to be an extremely difficult combinatorial optimization problem. The authors have proposed a highly reliable method for selecting combinatorial test cases. It uses a hybrid whale–gray wolf optimization algorithm.
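As background on one half of that hybrid: the grey wolf optimizer moves each candidate toward the three best solutions found so far (alpha, beta, delta) under a control parameter that decays from 2 to 0. A minimal sketch on a toy continuous objective follows; this is the plain GWO, not the authors' whale–grey wolf hybrid or their test-suite encoding:

```python
import numpy as np

def gwo(objective, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimize `objective` with a basic grey wolf optimizer."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
    for it in range(iters):
        fitness = np.apply_along_axis(objective, 1, wolves)
        # Three best wolves lead the pack this iteration.
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 - 2.0 * it / iters          # exploration decays 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            # New position is the average pull toward the three leaders.
            wolves[i] = np.clip(new / 3.0, lb, ub)
    fitness = np.apply_along_axis(objective, 1, wolves)
    best = wolves[np.argmin(fitness)]
    return best, float(objective(best))

# Toy objective: sphere function, global minimum 0 at the origin
best, best_val = gwo(lambda x: float(np.sum(x**2)), dim=5)
```

Applying such a metaheuristic to test-suite optimization additionally requires a discrete encoding of candidate test cases and a coverage-based fitness, which the sketch above does not attempt.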