Software-Defined Networking (SDN) has transformed network management by decoupling the control plane from the data forwarding plane, providing unparalleled flexibility and efficiency in network administration. However, the heterogeneity of traffic in SDN makes it difficult to meet Quality of Service (QoS) demands and to manage network resources efficiently. SDN traffic flows are often divided into elephant flows (EFs) and mice flows (MFs). EFs, which are distinguished by their large packet sizes and long durations, account for a small fraction of total flows but consume a disproportionate share of network resources, causing congestion and delays for the smaller MFs. MFs, on the other hand, have short lifetimes and are latency-sensitive, yet they account for the vast bulk of traffic in data center networks. The disproportionate consumption of network resources by EFs frequently degrades the performance of MFs. To address these issues, precise classification of network traffic has become crucial, as it enables traffic-aware routing techniques. This paper offers a novel model for classifying SDN traffic into MFs and EFs using a spiking neural network. Once identified, traffic is routed according to the classification result: MFs are routed with the Dijkstra algorithm, while EFs are routed with the Widest Dijkstra algorithm. By integrating advanced classification techniques with strategic routing algorithms, the model addresses the difficulties of traffic heterogeneity in SDNs, enabling better resource allocation, reducing congestion, and increasing network performance and dependability. The proposed model outperforms the traditional Software-Defined Network and other algorithms in terms of throughput by 60% and 20%, bandwidth utilization by 5% and 7%, packet loss by 50%, and latency by 60%, respectively.
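As an illustration of the routing step, the sketch below shows a widest-path (maximum-bottleneck-bandwidth) variant of Dijkstra of the kind used for EF routing; the topology, node names, and bandwidth values are hypothetical placeholders, not taken from the paper.

```python
import heapq

def widest_dijkstra(graph, src, dst):
    """Widest-path variant of Dijkstra: maximizes the bottleneck
    (minimum residual bandwidth) along the path from src to dst.

    graph: {node: {neighbor: available_bandwidth_mbps}}
    Returns (bottleneck_bandwidth, path) or (0, []) if unreachable.
    """
    # Max-heap keyed on bottleneck bandwidth (negated for heapq).
    heap = [(-float("inf"), src, [src])]
    best = {src: float("inf")}
    while heap:
        neg_bw, node, path = heapq.heappop(heap)
        bw = -neg_bw
        if node == dst:
            return bw, path
        for nbr, link_bw in graph.get(node, {}).items():
            cand = min(bw, link_bw)          # bottleneck so far
            if cand > best.get(nbr, 0):      # found a wider path to nbr
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr, path + [nbr]))
    return 0, []

# Hypothetical topology: residual bandwidth per link in Mbps.
topology = {
    "s1": {"s2": 100, "s3": 40},
    "s2": {"s4": 30},
    "s3": {"s4": 80},
    "s4": {},
}
print(widest_dijkstra(topology, "s1", "s4"))   # (40, ['s1', 's3', 's4'])
```

For MFs, the same structure with the comparison reversed (minimizing hop count or delay) reduces to ordinary Dijkstra.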
Support vector machines (SVMs) are supervised learning models that analyze data for classification or regression. For classification, an SVM is widely used by selecting an optimal hyperplane that separates two classes. SVM achieves very good accuracy and is extremely robust compared with other classification methods such as logistic regression, random forest, k-nearest neighbors, and the naïve model. However, working with large datasets can cause problems such as long training times and inefficient results. In this paper, the SVM has been modified by using a stochastic gradient descent process. The modified method, stochastic gradient descent SVM (SGD-SVM), was evaluated using two simulated datasets. Since the classification of different ca
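A minimal sketch of an SGD-trained linear SVM of the kind described, using scikit-learn's SGDClassifier with hinge loss on a synthetic dataset; the dataset size and hyperparameters are assumptions, not the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic (simulated) binary classification data, loosely mirroring
# the kind of simulation datasets mentioned in the abstract.
X, y = make_classification(n_samples=50_000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Linear SVM (hinge loss) fitted with stochastic gradient descent.
sgd_svm = make_pipeline(
    StandardScaler(),
    SGDClassifier(loss="hinge", alpha=1e-4, max_iter=1000, tol=1e-3,
                  random_state=0),
)
sgd_svm.fit(X_train, y_train)
print("SGD-SVM test accuracy:", sgd_svm.score(X_test, y_test))
```

Because each update touches only a small batch of samples, this scales to large datasets where fitting a kernel SVM would be impractical.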
The limitations of wireless sensor nodes are power, computational capability, and memory. This paper suggests a method to reduce the power consumed by a sensor node. The work is based on an analogy between the routing problem and the distribution of an electric field in a physical medium with a given charge density. From this analogy, a set of partial differential equations (Poisson's equation) is obtained. A finite difference method is used to solve this set numerically, and a parallel implementation is then presented. The parallel implementation is based on domain decomposition, where the original computational domain is decomposed into several blocks, each of which is assigned to a processing element. All nodes then execute computations in parallel.
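The sketch below illustrates the kind of finite-difference solution of Poisson's equation described above, using a simple Jacobi iteration on a single (non-decomposed) grid; the grid size, charge placement, and zero boundary conditions are assumed for illustration.

```python
import numpy as np

def solve_poisson_jacobi(rho, h=1.0, n_iter=5000):
    """Solve -laplacian(u) = rho on a square grid with u = 0 on the
    boundary, using the 5-point finite-difference stencil and Jacobi
    iteration:
        u[i,j] = (u[i+1,j] + u[i-1,j] + u[i,j+1] + u[i,j-1] + h*h*rho[i,j]) / 4
    """
    u = np.zeros_like(rho)
    for _ in range(n_iter):
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                    u[1:-1, 2:] + u[1:-1, :-2] +
                                    h * h * rho[1:-1, 1:-1])
        u = u_new
    return u

# Hypothetical "charge density": a single sink placed in the middle
# of a 50 x 50 sensor field.
rho = np.zeros((50, 50))
rho[25, 25] = 1.0
potential = solve_poisson_jacobi(rho)
# The negative gradient of the potential gives candidate routing directions.
gy, gx = np.gradient(-potential)
```

In the parallel version, the grid would be split into blocks and each processing element would run the same update on its block, exchanging boundary rows and columns with its neighbours after every iteration.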
In this paper, a precision agriculture system based on a Wireless Sensor Network (WSN) is introduced. Soil moisture is considered one of the environmental factors that affect crops, so the irrigation period must be monitored. A neural network is capable of learning the behavior of agricultural soil in the absence of a mathematical model. This paper introduces a modified type of neural network known as the Spiking Neural Network (SNN). In this work, the precision agriculture system is modeled with two SNNs that have been identified off-line from logged data: one represents the monitor located at the sink, where the irrigation period is calculated, and the other represents the soil. In addition, to reduce p
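To hint at the building block behind an SNN, the sketch below simulates a single leaky integrate-and-fire neuron; the paper's actual network structure and off-line identification procedure are not reproduced here, and the parameters and input encoding are assumptions.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_reset=0.0, v_threshold=1.0):
    """Leaky integrate-and-fire neuron: the basic unit of a spiking
    neural network. Integrates input current over time and emits a
    spike whenever the membrane potential crosses the threshold."""
    v = v_rest
    spikes = np.zeros(len(input_current))
    for t, i_t in enumerate(input_current):
        # Discretized membrane dynamics: dv/dt = (-(v - v_rest) + I) / tau
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_threshold:
            spikes[t] = 1.0
            v = v_reset
    return spikes

# Hypothetical input: a soil-moisture reading encoded as a constant current.
current = np.full(200, 1.5)
spike_train = lif_neuron(current)
print("firing rate:", spike_train.mean())
```

The firing rate rises with the input current, which is how an analogue quantity such as soil moisture can be carried by a spike train.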
This work addresses the assignment problem (AP) with fuzzy costs, where the objective in this study is to minimize the cost. Triangular or trapezoidal fuzzy numbers were assigned to each fuzzy cost. In addition, the assignment models were applied to linguistic variables, which were first converted into quantitative fuzzy data using Yager's ranking method. The results show that the quantitative data have a considerable effect when considered in fuzzy mathematical models.
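A small sketch of the workflow described above, assuming triangular fuzzy costs: each cost is defuzzified with Yager's ranking index (which reduces to (a + 2b + c)/4 for a triangular number) and the resulting crisp assignment problem is solved; the cost values below are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def yager_rank_triangular(a, b, c):
    """Yager's ranking index for a triangular fuzzy number (a, b, c):
    the average of the alpha-cut midpoints, i.e. (a + 2b + c) / 4."""
    return (a + 2 * b + c) / 4.0

# Hypothetical fuzzy cost matrix: each cell is a triangular fuzzy cost,
# e.g. obtained from linguistic terms such as "low" or "high".
fuzzy_costs = [
    [(2, 4, 6), (5, 7, 9), (1, 3, 5)],
    [(4, 6, 8), (2, 3, 4), (6, 8, 10)],
    [(3, 5, 7), (1, 2, 3), (4, 5, 6)],
]

# Defuzzify with Yager's ranking, then solve the crisp assignment problem.
crisp = np.array([[yager_rank_triangular(*c) for c in row]
                  for row in fuzzy_costs])
rows, cols = linear_sum_assignment(crisp)
print("assignment:", list(zip(rows, cols)),
      "total crisp cost:", crisp[rows, cols].sum())
```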
Iris research is focused on developing techniques for identifying and locating relevant biometric features, with accurate segmentation and efficient computation that lend themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces the effectiveness of the system when used in real time. This paper introduces a novel parameterized technique for iris segmentation. The method consists of a number of steps, starting with converting the grayscale eye image to a bit-plane representation and selecting the most significant bit planes, followed by a parameterization of the iris location, resulting in an accurate segmentation of the iris from the original image.
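The bit-plane step can be sketched as below; the placeholder image, the choice of the top three planes, and the way the mask feeds the later parameterization are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def bit_planes(gray_image):
    """Decompose an 8-bit grayscale image into its 8 bit planes.
    Plane 7 is the most significant bit; the upper planes carry most of
    the coarse structure used to localize the iris."""
    img = np.asarray(gray_image, dtype=np.uint8)
    return [((img >> bit) & 1).astype(np.uint8) for bit in range(8)]

# Hypothetical eye image (random placeholder instead of a real capture).
eye = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)
planes = bit_planes(eye)

# Keep only the most significant planes (an assumed choice of the top 3)
# as a rough mask for the subsequent parameterized iris localization step.
msb_mask = planes[7] | planes[6] | planes[5]
```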
The study aims to examine the relationship between cognitive absorption and E-Learning readiness at the preparatory stage. The study sample consisted of (190) students who were chosen randomly. The researcher developed the cognitive absorption and E-Learning readiness scales, and a correlational descriptive approach was adopted. The research revealed a positive statistical relationship between cognitive absorption and E-Learning readiness.
This research studies dimension-reduction methods that overcome the curse of dimensionality, which arises when traditional methods fail to provide good parameter estimates, so the problem must be dealt with directly. Two approaches were used to handle the high-dimensional data: the non-classical sliced inverse regression (SIR) method together with the proposed weighted standard SIR (WSIR) method, and principal component analysis (PCA), which is the general method used for dimension reduction. SIR and PCA are based on forming linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear
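A compact sketch of sliced inverse regression as commonly formulated, for orientation only; the data, the slicing scheme, and the WSIR weighting proposed in the paper are assumptions and are not reproduced here.

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_components=2):
    """Minimal SIR: estimate effective dimension-reduction directions
    from the covariance of slice means of the standardized predictors."""
    n, p = X.shape
    # Whiten X via the inverse square root of its covariance matrix.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    cov_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ cov_inv_sqrt

    # Slice the response and form the weighted covariance of slice means.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)

    # Leading eigenvectors of M, mapped back to the original scale.
    _, evecs_m = np.linalg.eigh(M)
    directions = cov_inv_sqrt @ evecs_m[:, ::-1][:, :n_components]
    return X @ directions, directions  # reduced data and EDR directions

# Hypothetical high-dimensional data driven by a single index.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + X[:, 1]) ** 3 + 0.5 * rng.normal(size=500)
X_reduced, B = sliced_inverse_regression(X, y, n_components=1)
```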
The idea of carrying out research on incomplete data came from the circumstances of our dear country and the horrors of war, which caused many important data to go missing across all aspects of economic, natural, health, and scientific life. The reasons for missing data differ: some are beyond the control of those concerned, while others are planned, whether because of cost, risk, or a lack of inspection capabilities. The missing data in this study were processed using principal component analysis and self-organizing map methods, using simulation. The variables of child health and the variables affecting children's health were taken into account: breastfeeding
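As one concrete way PCA can be applied to incomplete data, the sketch below performs iterative low-rank (PCA-style) imputation on a simulated matrix; this is an assumption-laden illustration, not the study's exact PCA/SOM procedure or its child-health data.

```python
import numpy as np

def pca_impute(X, n_components=3, n_iter=50):
    """Iterative PCA imputation: fill missing entries (NaN) with column
    means, then repeatedly fit a low-rank reconstruction and replace
    only the missing entries with the reconstructed values."""
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])

    for _ in range(n_iter):
        mean = X.mean(axis=0)
        Xc = X - mean
        # Low-rank reconstruction via truncated SVD (equivalent to PCA).
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        recon = U[:, :n_components] * s[:n_components] @ Vt[:n_components] + mean
        X[missing] = recon[missing]
    return X

# Hypothetical low-rank data matrix with ~10% values missing at random.
rng = np.random.default_rng(1)
full = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 6))
data = full.copy()
data[rng.random(full.shape) < 0.1] = np.nan
imputed = pca_impute(data, n_components=3)
mask = np.isnan(data)
print("RMSE on missing cells:",
      np.sqrt(np.mean((imputed[mask] - full[mask]) ** 2)))
```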
Analyzing X-ray and computed tomography (CT) scan images using a convolutional neural network (CNN) is a very interesting subject, especially after the coronavirus disease 2019 (COVID-19) pandemic. In this paper, a study is made on CT scan images of 423 patients from Al-Kadhimiya (Madenat Al Emammain Al Kadhmain) hospital in Baghdad, Iraq, to diagnose whether they have COVID-19 or not using a CNN. The total data being tested comprise 15,000 CT scan images chosen in a specific way to give a correct diagnosis. The activation function used in this research is the wavelet function, which differs from the standard CNN activation functions. The convolutional wavelet neural network (CWNN) model proposed in this paper is compared with regular convolutional neural networks.
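A minimal PyTorch sketch of a CNN whose activations are replaced by a wavelet function; the specific wavelet (Mexican hat), layer sizes, and input resolution are assumptions, since the paper's exact CWNN architecture is not given here.

```python
import torch
import torch.nn as nn

class MexicanHat(nn.Module):
    """Wavelet activation (Mexican-hat / Ricker wavelet):
    psi(x) = (1 - x^2) * exp(-x^2 / 2), used in place of ReLU."""
    def forward(self, x):
        return (1.0 - x.pow(2)) * torch.exp(-0.5 * x.pow(2))

class CWNN(nn.Module):
    """Small convolutional network with wavelet activations, for binary
    (COVID / non-COVID) classification of single-channel CT slices."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), MexicanHat(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), MexicanHat(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Hypothetical batch of 128 x 128 grayscale CT slices.
model = CWNN()
logits = model(torch.randn(4, 1, 128, 128))
print(logits.shape)   # torch.Size([4, 2])
```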
People have been trying to use clean energy instead of combustion-based fuels. The choice here is liquefied petroleum gas (LPG) for domestic use, especially for cooking, because it is a light gas, has a lower cost, and burns cleanly. Residential complexes are supplied with LPG for each housing unit, transported by pipes from LPG tanks to the appliances. This research aims to simulate the design and performance of the LPG system in a building, applied to a residential complex in Baghdad taken as a case study with eight buildings. Each building has 11 floors, and each floor has four apartments. The design in this study has been done in two parts; part one is the design of an LPG system for one building, an
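To give a flavor of the per-pipe calculations involved in such a design, the sketch below estimates a Darcy-Weisbach pressure drop for LPG vapour in a riser; the demand per apartment, diversity factor, pipe dimensions, gas density, and friction factor are all assumed figures for illustration, not values from the study.

```python
import math

def lpg_pressure_drop(flow_m3_per_h, diameter_m, length_m,
                      density_kg_m3=2.0, friction_factor=0.02):
    """Rough Darcy-Weisbach pressure drop (Pa) for LPG vapour in a
    straight pipe: dP = f * (L / D) * rho * v^2 / 2.
    A constant friction factor is assumed for simplicity."""
    area = math.pi * diameter_m ** 2 / 4.0
    velocity = (flow_m3_per_h / 3600.0) / area          # m/s
    return (friction_factor * (length_m / diameter_m)
            * density_kg_m3 * velocity ** 2 / 2.0)

# Hypothetical riser serving 44 apartments (11 floors x 4 apartments),
# each with an assumed peak demand of 0.5 m^3/h of LPG vapour and an
# assumed diversity factor of 0.4.
total_flow = 44 * 0.5 * 0.4          # m^3/h
dp = lpg_pressure_drop(total_flow, diameter_m=0.05, length_m=35.0)
print(f"estimated pressure drop: {dp:.1f} Pa")
```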