Within the framework of big data, energy issues are highly significant. Despite this significance, theoretical studies focusing primarily on the issue of energy within big data analytics in relation to computational intelligence algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics in relation to computational intelligence algorithms, since this is critical for exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligence algorithms in big data analytics. This work highlights that big data analytics using computational intelligence algorithms consumes a very large amount of energy, especially during the training phase. The transmission of big data between service providers, users, and data centres emits carbon dioxide as a result of high power consumption. This chapter proposes a theoretical framework for big data analytics using computational intelligence algorithms that has the potential to reduce energy consumption and enhance performance. We suggest that researchers should focus more attention on the issue of energy within big data analytics in relation to computational intelligence algorithms before it becomes a widespread and urgent problem.
There is evidence that channel estimation in communication systems plays a crucial role in recovering the transmitted data. In recent years, there has been increasing interest in solving problems of channel estimation and equalization, especially when the channel impulse response follows a fast time-varying Rician fading distribution, meaning the channel impulse response changes rapidly. Optimal channel estimation and equalization are therefore required to recover the transmitted data. To that end, this paper compares the epsilon normalized least mean square (ε-NLMS) and recursive least squares (RLS) algorithms by evaluating their ability to track multiple fast time-varying Rician fading channels with different values of Doppler
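The ε-NLMS recursion mentioned above can be sketched for a real-valued FIR channel as follows; the filter length `M`, step size `mu`, and regularizer `eps` below are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def epsilon_nlms(x, d, M=4, mu=0.5, eps=1e-6):
    """Track an M-tap channel from input x and desired output d with eps-NLMS."""
    w = np.zeros(M)                          # adaptive weights (channel estimate)
    e = np.zeros(len(d))                     # a priori error history
    for n in range(M - 1, len(d)):
        u = x[n - M + 1:n + 1][::-1]         # regressor: newest sample first
        e[n] = d[n] - w @ u                  # a priori estimation error
        w = w + mu * e[n] * u / (eps + u @ u)  # normalized (regularized) update
    return w, e
```

With noiseless data generated by a fixed channel, the weights converge to the channel taps; for a time-varying Rician channel, the same recursion re-estimates the taps sample by sample, which is what makes its tracking ability comparable against RLS.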
In this work, we calculate and analyze the photon emission from quark and anti-quark interaction during the annihilation process, using a simple model based on the phenomenology of quantum chromodynamics (QCD). The parameters, which include the running coupling strength, the temperature of the system, and the critical temperature, carry information regarding photon emission and have a significant impact on the photon yield. The emission of photons from strange and anti-strange quark interaction is highly sensitive to decreases or increases in the running coupling strength: the photon emission increases as the running coupling strength decreases, and vice versa. We introduce the influence of the critical temperature on the photon emission rate in o
The present project involves photodegrading the dye solochrom violet using advanced oxidation techniques at 25 °C under UV light. Zinc oxide (ZnO) and UV radiation at a wavelength of 580 nm were used to conduct the photocatalytic reaction of the solochrom violet dye. The impact of the initial conditions was among the factors investigated: pH, the initial amount of hydrogen peroxide, the dye concentration, and the irradiation time. The kinetics and degradation percentages were examined at various intervals over several hours. In general, it was found that the photodegradation rates of the dye were greater when H2O2 and ZnO were combined with UV light. The best wavelength to use was determined. Modern oxidation techni
Atherosclerosis is one of the most common causes of vascular disease and is associated with a restriction in the lumen of blood vessels. Therefore, the study of blood flow in arteries is very important for understanding the relation between the hemodynamic characteristics of blood flow and the occurrence of atherosclerosis.
This study looks for the physical factors and correlations that explain why atherosclerosis occurs in the proximal site of the LAD artery in some people rather than others. This is achieved by analysing data from coronary angiography, as well as by estimating blood velocity from coronary angiography scans, without requiring measured velocity data, using mathematical equations and physical laws. Fif
The debate on the methodology of media and communication research is no longer subject to the logic of the contradiction between the quantitative and the qualitative approach, nor to the logic of comparison between them. The nature of the topics presented for research, the problems they raise, the goals to be achieved from the research, and the epistemological positioning of researchers are among the critical factors that dictate the appropriate approach or methodological approaches for conducting their research. This positioning means the implicit philosophical principles upon which any researcher relies and which determine the path he or she takes to produce scientifically approved knowledge. The method of the researcher's access to the phe
In this research, the program SEEP/W was used to compute the seepage through homogeneous and non-homogeneous earth dams of known dimensions. The results show that, for a homogeneous dam with saturated soil, the relationship between the seepage and the ratio of upstream water height to dam length was nonlinear. For the non-homogeneous dam, the relationship was linear, and the amount of seepage increases with the ratio of upstream water height to dam length. The quantity of seepage was also calculated using the methods of Fredlund and Xing (1994) and Van Genuchten (1980) for saturated–unsaturated soil; the results indicated that seepage is higher when the soil is saturated and the lowe
To transfer data safely from sender to receiver, cryptography is one approach used for this purpose. However, to increase the level of data security, DNA was introduced to cryptography as a new concept. DNA can easily be used to store and transfer data; it has become an effective procedure for such aims and can be used to implement the computation. A new cryptography system is proposed, consisting of two phases: an encryption phase and a decryption phase. The encryption phase includes six steps, starting by converting the plaintext to its equivalent ASCII values and then converting these to binary values. After that, the binary values are converted to DNA characters and then converted to their equivalent complementary DN
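The binary-to-DNA encoding steps described above can be sketched as follows; the 2-bit-per-base mapping and the Watson–Crick complement rule used here are illustrative assumptions, since the chapter's exact encoding tables are not given in this abstract:

```python
# Hypothetical 2-bit-per-base mapping; the proposed system's actual table may differ.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}  # Watson-Crick pairing

def encrypt(plaintext):
    bits = "".join(f"{ord(ch):08b}" for ch in plaintext)  # ASCII -> 8-bit binary
    dna = "".join(BITS_TO_BASE[bits[i:i + 2]]             # binary -> DNA bases
                  for i in range(0, len(bits), 2))
    return "".join(COMPLEMENT[b] for b in dna)            # complementary strand

def decrypt(cipher):
    dna = "".join(COMPLEMENT[b] for b in cipher)          # undo complement
    bits = "".join(BASE_TO_BITS[b] for b in dna)          # DNA -> binary
    return "".join(chr(int(bits[i:i + 8], 2))             # binary -> ASCII
                   for i in range(0, len(bits), 8))
```

Under this mapping, `encrypt("A")` yields `"GTTG"` (65 → `01000001` → `CAAC` → complement `GTTG`), and `decrypt` reverses every step to recover the plaintext.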