Within the framework of big data, energy issues are highly significant. Despite this significance, theoretical studies that focus on energy within big data analytics in relation to computational intelligence algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics in relation to computational intelligence algorithms, since this is critical for exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligence algorithms in big data analytics. This work highlights that big data analytics using computational intelligence algorithms consumes a very large amount of energy, especially during the training phase. The transmission of big data between service providers, users and data centres emits carbon dioxide as a result of high power consumption. This chapter proposes a theoretical framework for big data analytics using computational intelligence algorithms that has the potential to reduce energy consumption and enhance performance. We suggest that researchers should focus more attention on the issue of energy within big data analytics in relation to computational intelligence algorithms before it becomes a widespread and urgent problem.
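As a rough illustration of the scale involved, the back-of-the-envelope sketch below converts an assumed training power draw and duration into energy consumed and the associated carbon dioxide emissions. The power, duration, device count and grid carbon-intensity figures are hypothetical placeholders, not values reported in this chapter.

```python
# Illustrative estimate of training energy and the resulting CO2 emissions.
# All numeric inputs below are hypothetical placeholders, not figures from the chapter.

def training_energy_kwh(avg_power_watts: float, hours: float, num_devices: int = 1) -> float:
    """Energy consumed during training: power (kW) x time (h) x number of devices."""
    return (avg_power_watts / 1000.0) * hours * num_devices

def co2_emissions_kg(energy_kwh: float, grid_intensity_kg_per_kwh: float = 0.5) -> float:
    """CO2 emitted for a given energy use at an assumed grid carbon intensity."""
    return energy_kwh * grid_intensity_kg_per_kwh

if __name__ == "__main__":
    # Hypothetical scenario: 8 accelerators at 300 W average draw for 72 hours of training.
    energy = training_energy_kwh(avg_power_watts=300, hours=72, num_devices=8)
    print(f"Energy: {energy:.1f} kWh, CO2: {co2_emissions_kg(energy):.1f} kg")
```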
In the present work, the concentration of radon gas (CRn) was measured for 10 samples of cement used in construction, before and after painting them with enamel paint purchased from local markets, to assess the paint's ability to reduce emissions of Rn-222 into the air. These samples were obtained from different sources available in the local markets in Baghdad and other provinces. The measurements were made with the American-made RAD7 detector. The results showed that the highest CRn in the air emitted from cement samples after coating was in the cement sample of Iranian origin, where the concentration was 58.27 Bq/m3, while the lowest CRn was found in building material samples
The purpose of this paper is to apply different transportation models in their minimum and maximum values by finding a starting basic feasible solution and then finding the optimal solution. The requirements of transportation models are presented along with one of their applications in the case of minimizing the objective function; this application was conducted by the researcher on real data, collected over one month in 2015, from a poultry farm for egg production.
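The sketch below illustrates the two steps named above on a small, made-up balanced problem: the north-west corner rule gives a starting basic feasible solution, and a linear-programming solve gives the optimal solution. The cost matrix, supplies and demands are illustrative values, not the poultry-farm data used in the paper.

```python
# Minimal sketch of a balanced transportation problem: a north-west corner
# starting basic feasible solution, then the optimal solution via linear programming.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])      # unit shipping costs (hypothetical)
supply = np.array([120.0, 80.0])        # availability at each source
demand = np.array([70.0, 90.0, 40.0])   # requirement at each destination

def northwest_corner(supply, demand):
    """Starting basic feasible solution: allocate greedily from the top-left cell."""
    s, d = supply.copy(), demand.copy()
    x = np.zeros((len(s), len(d)))
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])
        x[i, j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            i += 1
        else:
            j += 1
    return x

start = northwest_corner(supply, demand)
print("Starting BFS cost:", (cost * start).sum())

# Optimal solution: minimise total cost subject to supply and demand equality constraints.
m, n = cost.shape
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1       # each source ships exactly its supply
for j in range(n):
    A_eq[m + j, j::n] = 1                # each destination receives exactly its demand
res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([supply, demand]),
              bounds=(0, None), method="highs")
print("Optimal cost:", res.fun)
```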
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have too little data to train DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with broad background knowledge. This annotation process is costly, time-consuming and error-prone. Every DL framework is usually fed a significant amount of labeled data in order to learn representations automatically. Ultimately, a larger amount of data generally yields a better DL model, although performance is also application dependent. This issue is the main barrier for
The settlement evaluation of jet grouted columns (JGC) in soft soils is a problematic matter, because it is influenced by a number of factors such as soil type, the mixing of soil and grouting materials, nozzle energy, jet grouting water flow rate, and rotation and lifting speed. Most methods for designing a jet-grouting column are based on experience. In this study, prototype single and group jet grouting models (single, 1×2, and 2×2), with a total length of 2000 mm, a diameter of 150 mm and a clear spacing of 3D, were constructed in soft clay and subjected to vertical axial loads. Furthermore, different theoretical methods have been used for the estimation
Survival analysis is a type of data analysis that describes the time period until the occurrence of an event of interest, such as death or other events important in determining what will happen to the phenomenon under study. There may be more than one endpoint for the event, in which case they are called competing risks. The purpose of this research is to apply a dynamic approach to the analysis of discrete survival time in order to estimate the effect of covariates over time, as well as to model the nonlinear relationship between the covariates and the discrete hazard function through the use of the multinomial logistic model and the multivariate Cox model. For the purpose of conducting the estimation process for both the discrete
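The sketch below illustrates the discrete-time, competing-risks setup with a multinomial logistic model: subjects are expanded into person-period records, the outcome at each period is either no event or one of two competing event types, and the fitted model yields cause-specific discrete hazards. The toy data, the single covariate and the two event types are assumptions made for illustration, not the data or model specification of this research.

```python
# Minimal sketch of discrete-time competing-risks estimation with a multinomial
# logistic model on person-period data. The subjects below are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical subjects: observed time, event type (0 = censored, 1 or 2 = competing causes), one covariate.
subjects = pd.DataFrame({
    "time":  [3, 5, 2, 4, 5],
    "event": [1, 0, 2, 1, 2],
    "x":     [0.4, 1.2, -0.3, 0.8, 0.1],
})

# Person-period expansion: one row per subject per discrete period at risk.
rows = []
for _, s in subjects.iterrows():
    for t in range(1, int(s["time"]) + 1):
        outcome = int(s["event"]) if t == s["time"] else 0
        rows.append({"t": t, "x": s["x"], "outcome": outcome})
pp = pd.DataFrame(rows)

# Multinomial logistic regression of the discrete hazard on period and covariate.
X = pp[["t", "x"]].to_numpy()
y = pp["outcome"].to_numpy()
model = LogisticRegression(max_iter=1000).fit(X, y)   # multinomial for the 3 outcome classes

# Cause-specific discrete hazards for a new subject at period t = 3:
# columns are P(no event), P(cause 1), P(cause 2).
print(model.predict_proba(np.array([[3, 0.5]])))
```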
Providers of external audit on the social responsibility of profit organizations. The present era is characterized by the activities of large economic organizations, because there are many transactions between these organizations and the various techniques for developing financial markets.
This encourages businessmen to increase their efforts to invest in these markets. Because accounting, in general terms, represents a language of these organizations' activities and translates them into factual numbers, there is a need for accounting records to be certain of these organizations' behavior and its harmonization with their objectives.
In this respect, the audit function comes to che
This research presents an experimental and theoretical investigation of 15 reinforced concrete spliced and non-spliced girder models. Splices of hooked dowels and cast-in-place joints, with or without strengthening steel plates, were used. Post-tensioning was used to enhance the splice strength of some spliced girders. The ANSYS computer program was used to analyze the spliced and non-spliced girders. A nonlinear three-dimensional element was used to represent all test girders. The experimental results have shown that, for a single-span girder, using steel plate connectors in the splice zone gives sufficient continuity to resist flexural stresses in this region. The experimental results have shown that the deflection of hooked do
A three-stage learning algorithm for a deep multilayer perceptron (DMLP) with effective weight initialisation based on a sparse auto-encoder is proposed in this paper, which aims to overcome the difficulties of training deep neural networks with limited training data in a high-dimensional feature space. At the first stage, unsupervised learning with a sparse auto-encoder is adopted to obtain the initial weights of the feature extraction layers of the DMLP. At the second stage, error back-propagation is used to train the DMLP while fixing the weights obtained at the first stage for its feature extraction layers. At the third stage, all the weights of the DMLP obtained at the second stage are refined by error back-propagation. Network structures an
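A minimal PyTorch sketch of such a three-stage scheme is given below; the layer sizes, sparsity penalty, optimiser settings and random toy data are illustrative assumptions rather than the configuration used in the paper.

```python
# Sketch of three-stage training: (1) sparse auto-encoder pre-training,
# (2) supervised training with the feature layer frozen, (3) end-to-end fine-tuning.
# Sizes, penalties and toy data are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(256, 100)                      # toy high-dimensional inputs
y = torch.randint(0, 3, (256,))                # toy labels, 3 classes

# Stage 1: unsupervised sparse auto-encoder pre-training of the feature layer.
encoder = nn.Sequential(nn.Linear(100, 32), nn.Sigmoid())
decoder = nn.Linear(32, 100)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(100):
    h = encoder(X)
    loss = F.mse_loss(decoder(h), X) + 1e-3 * h.abs().mean()   # reconstruction + L1 sparsity penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

# DMLP: feature-extraction layer initialised from the trained encoder, plus a classifier head.
features = nn.Sequential(nn.Linear(100, 32), nn.Sigmoid())
features.load_state_dict(encoder.state_dict())
classifier = nn.Linear(32, 3)
ce = nn.CrossEntropyLoss()

# Stage 2: error back-propagation with the pre-trained feature layer frozen.
for p in features.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for _ in range(100):
    loss = ce(classifier(features(X)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 3: refine all DMLP weights end-to-end by error back-propagation.
for p in features.parameters():
    p.requires_grad = True
opt = torch.optim.Adam(list(features.parameters()) + list(classifier.parameters()), lr=1e-4)
for _ in range(100):
    loss = ce(classifier(features(X)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```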