Modern civilization increasingly relies on sustainable, eco-friendly data centers as the core hubs of intelligent computing. However, these data centers, while vital, are also highly vulnerable to hacking because they serve as convergence points for numerous network connection nodes. Recognizing and addressing this vulnerability, particularly within green data centers, is a pressing concern. This paper proposes a novel approach to mitigating this threat by leveraging swarm intelligence techniques to detect prospective and hidden compromised devices within the data center environment. The core objective is to ensure sustainable intelligent computing through a colony strategy. The research primarily focuses on applying sigmoid fish swarm optimization (SiFSO) for early detection of compromised devices and subsequent alerting of other network nodes. Additionally, our data center implements an innovative ant skyscape architecture (ASA) cooling mechanism, departing from traditional, unsustainable cooling strategies that harm the environment. Extensive simulations were conducted to validate the effectiveness of these approaches. The evaluations centered on the fish colony's ability to detect compromised devices, focusing on source tracing, realistic modelling, and a 98% detection accuracy rate under the ASA cooling solution at 0.16 °C within 1,300 seconds. Compromised devices pose a substantial risk to green data centers, as attackers could manipulate and disrupt network equipment. Therefore, incorporating cyber enhancements into the green data center concept is imperative to foster more adaptable and efficient smart networks.
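The abstract does not specify the internals of SiFSO, so the following is only an illustrative sketch of a fish-swarm-style search with a sigmoid-scaled step size, applied to a toy objective; the function `sifso_sketch` and all its parameters (`visual`, `n_fish`, the step schedule) are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sifso_sketch(objective, dim=2, n_fish=20, visual=1.0, iters=200, seed=0):
    """Toy artificial-fish-swarm search with a sigmoid-scaled step.

    Illustrative sketch only; the paper's SiFSO behaviours and
    parameters are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    fish = rng.uniform(-5, 5, size=(n_fish, dim))
    scores = np.array([objective(f) for f in fish])
    for t in range(iters):
        # step size decays along a sigmoid schedule over the iterations
        step = sigmoid(3 - 6 * t / iters)
        for i in range(n_fish):
            # "prey" behaviour: probe a random point inside the visual range
            cand = fish[i] + rng.uniform(-visual, visual, dim)
            if objective(cand) < scores[i]:
                fish[i] = fish[i] + step * (cand - fish[i])
                scores[i] = objective(fish[i])
    best = fish[np.argmin(scores)]
    return best, scores.min()

# Minimize a simple sphere function as a stand-in objective
best, val = sifso_sketch(lambda x: np.sum(x**2))
```

In the paper's setting the objective would score candidate devices by indicators of compromise rather than a geometric function.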
In this research, a nonparametric technique is presented to estimate the time-varying coefficient functions for balanced longitudinal data, characterized by observations obtained from n independent subjects, each measured repeatedly at a set of m specific time points. Although the measurements are independent across different subjects, they are mostly correlated within each subject. The applied technique is the local linear kernel (LLPK) technique. To avoid the problems of dimensionality and heavy computation, the two-step method is used to estimate the coefficient functions using the former technique. Since the two-…
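A minimal sketch of the two-step idea for a balanced varying-coefficient model, assuming the standard formulation y_ij = β(t_j)·x_ij + e_ij: step one computes a raw estimate of β at each time point across subjects, and step two smooths those raw estimates with a local linear kernel fit. The simulated data, bandwidth, and Epanechnikov kernel are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def local_linear_smooth(t_grid, t_obs, v_obs, h):
    """Local linear kernel smoother with an Epanechnikov kernel (assumed)."""
    out = np.empty(len(t_grid))
    for k, t0 in enumerate(t_grid):
        u = (t_obs - t0) / h
        w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)
        # weighted linear fit v ~ a + b*(t - t0); intercept a is the fit at t0
        X = np.column_stack([np.ones_like(t_obs), t_obs - t0])
        coef = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * v_obs))
        out[k] = coef[0]
    return out

# Balanced design: n subjects, each measured at the same m time points
rng = np.random.default_rng(1)
n, m = 50, 20
t = np.linspace(0, 1, m)
beta_true = np.sin(2 * np.pi * t)          # time-varying coefficient
x = rng.normal(size=(n, m))
y = beta_true * x + 0.2 * rng.normal(size=(n, m))

# Step 1: raw OLS estimate of beta at each time point, across subjects
beta_raw = np.array([np.sum(x[:, j] * y[:, j]) / np.sum(x[:, j]**2)
                     for j in range(m)])
# Step 2: smooth the raw estimates with the local linear kernel
beta_hat = local_linear_smooth(t, t, beta_raw, h=0.15)
```

Because each raw estimate uses only one time point, step one is cheap; the smoothing step then pools information across neighbouring time points.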
… with an organized propaganda campaign. Many institutions, research centers, and knowledge and intelligence circles helped this military campaign formulate its discourse in order to mobilize public opinion, gain supporters, and confront opponents by different means, drawing on a variety of styles to achieve its intended effects.
After the US occupation of Iraq, US media operatives sought to influence Iraqi public opinion and convince the public of the importance of the US military presence in Iraq, which necessitated justifying that presence through the use of persuasive techniques in intensive propaganda campaigns.
This research discusses the most important …
In recent years, Global Navigation Satellite System (GNSS) technology has been frequently employed for monitoring deformation and movement of the Earth's crust. Such applications necessitate high positional accuracy, which can be achieved by processing GPS/GNSS data with scientific software such as BERNESE, GAMIT, and GIPSY-OASIS. Nevertheless, these scientific software packages are sophisticated and have not been published as free open-source software. Therefore, this study was conducted to evaluate an alternative solution, GNSS online processing services, which are freely available. In this study, eight years of GNSS raw data for the TEHN station, which is located in Iran, were downloaded from the UNAVCO website.
Generally, statistical methods are used in various fields of science, especially in research, where statistical analysis is carried out by adopting several techniques according to the nature of the study and its objectives. One of these techniques is building statistical models, which is done through regression models. This technique is considered one of the most important statistical methods for studying the relationship between a dependent variable, also called the response variable, and other variables, called covariates. This research describes the estimation of the partial linear regression model, as well as the estimation of values that are "missing at random" (MAR). Regarding the …
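The abstract does not give the estimator's details, so the following is a minimal sketch of one standard way to estimate the parametric part of a partial linear model y = xβ + g(t) + e, using a double-residual (Robinson-type) approach on the complete cases; the simulated data, the Gaussian kernel smoother, and the MAR mechanism (observation probability depending only on the covariate x) are all illustrative assumptions.

```python
import numpy as np

def nw_smooth(t_eval, t_obs, v_obs, h):
    """Nadaraya-Watson kernel smoother with a Gaussian kernel (assumed)."""
    out = np.empty(len(t_eval))
    for k, t0 in enumerate(t_eval):
        w = np.exp(-0.5 * ((t_obs - t0) / h) ** 2)
        out[k] = np.sum(w * v_obs) / np.sum(w)
    return out

# Partial linear model y = x*beta + g(t) + e, with responses missing at random
rng = np.random.default_rng(2)
n = 400
t = rng.uniform(0, 1, n)
x = rng.normal(size=n)
y = 2.0 * x + np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=n)

# MAR missingness: probability of observing y depends only on the covariate x
obs = rng.uniform(size=n) < 1 / (1 + np.exp(-(1 + 0.5 * x)))

# Double-residual estimate of beta on the complete cases:
# remove the kernel-smoothed trend in t from both x and y, then regress residuals
to, xo, yo = t[obs], x[obs], y[obs]
h = 0.08
x_res = xo - nw_smooth(to, to, xo, h)
y_res = yo - nw_smooth(to, to, yo, h)
beta_hat = np.sum(x_res * y_res) / np.sum(x_res**2)
```

Here complete-case analysis remains consistent because the missingness depends only on the fully observed covariate.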
This research aims to study the mechanism of applying the requirements of the international standard ISO 9001:2015 at the Iraqi-Korean Vocational Training Center of the Vocational Training Department at the Ministry of Labour and Social Affairs, with the purpose of preparing the center to obtain a certificate of conformity with the requirements of ISO 9001:2015, which would raise the level of performance and services provided at the center. This is done by studying the reality of the quality management system, identifying strengths and weaknesses in the system to diagnose the gap, and finding ways to address that gap. The researchers adopted the case study method to conduct …
Full-waveform airborne laser scanning data has shown its potential to enhance available segmentation and classification approaches through the additional information it can provide. However, this additional information cannot directly provide a valid physical representation of surface features, due to the many variables affecting the backscattered energy during travel between the sensor and the target. Effectively, this produces a mismatch between signals from overlapping flightlines. Therefore, direct use of this information is not recommended without adopting a comprehensive radiometric calibration strategy that accounts for all these effects. This paper presents a practical and reliable radiometric calibration …
The cross-section evaluation for the (α,n) reaction was calculated according to available International Atomic Energy Agency (IAEA) data and other published experimental data. These cross sections are the most recent data, alongside the well-known international libraries such as ENDF, JENDL, and JEFF. We considered an energy range from threshold to 25 MeV in intervals of 1 MeV. The weighted average cross sections over all available experimental and theoretical (JENDL) data were calculated for all the considered isotopes. The cross section of each element is then calculated from the cross sections of the isotopes of that element, taking their abundances into account. A mathematical representative equation for each of the element…
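The abundance-weighted combination described above can be sketched as follows; the isotope abundances and cross-section values below are placeholders for illustration, not evaluated nuclear data.

```python
def element_cross_section(isotopes):
    """Elemental cross section as the abundance-weighted average of its
    isotopic cross sections.

    isotopes: list of (abundance_fraction, sigma_barns) pairs at one
    incident energy.
    """
    total_abundance = sum(a for a, _ in isotopes)
    return sum(a * s for a, s in isotopes) / total_abundance

# Hypothetical element with two isotopes at a single incident alpha energy
sigma = element_cross_section([(0.60, 1.2), (0.40, 0.8)])  # -> 1.04 barns
```

Repeating this at each 1 MeV step from threshold to 25 MeV yields the elemental excitation function from the isotopic ones.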
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences. This condition places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Consequently, clustering is a common approach that divides an input space into several homogeneous zones; it can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, which …
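A minimal sketch of fuzzy c-means, the first of the clustering algorithms named above, run on toy two-dimensional data standing in for expression profiles; the fuzzifier m = 2, the iteration count, and the synthetic data are illustrative assumptions, not the study's dataset or settings.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership
    matrix U, where U[i, k] is point i's degree of membership in cluster k."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)      # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        # centers are membership-weighted means of the points
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard FCM membership update: U_ik = 1 / sum_j (d_ik/d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, U

# Two well-separated toy "expression" clusters
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, U = fcm(X)
labels = U.argmax(axis=1)   # hard labels from the fuzzy memberships
```

Unlike k-means, each point carries a graded membership in every cluster, which suits genes whose expression straddles cluster boundaries.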