Optical burst switching (OBS) is a new-generation optical communication technology. In an OBS network, an edge node first sends a control packet, called a burst header packet (BHP), which reserves the necessary resources for the upcoming data burst (DB). Once the reservation is complete, the DB travels to its destination through the reserved path. A notable attack on OBS networks is the BHP flooding attack, in which an edge node sends BHPs to reserve resources but never actually sends the associated DBs. As a result, the reserved resources are wasted, and when this happens on a sufficiently large scale, a denial of service (DoS) may take place. In this study, we propose a semi-supervised machine learning approach using the k-means algorithm to detect malicious nodes in an OBS network. The proposed semi-supervised model was trained and validated with a small amount of data from a selected dataset. Experiments show that the model can classify the nodes into either behaving or not-behaving classes with 90% accuracy when trained with just 20% of the data. When the nodes are classified into behaving, not-behaving and potentially not-behaving classes, the model shows 65.15% and 71.84% accuracy when trained with 20% and 30% of the data, respectively. Comparison with some notable works revealed that the proposed model outperforms them in many respects.
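The clustering idea above can be illustrated with a minimal sketch (not the paper's exact pipeline): k-means groups edge-node traffic statistics, and the cluster with the lower DB-to-BHP ratio is flagged as "not behaving". The feature names (BHP count, DB count) and the synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Behaving nodes: DB count tracks BHP count; flooding nodes: many BHPs, few DBs.
behaving = rng.normal(loc=[100, 95], scale=5, size=(40, 2))
flooding = rng.normal(loc=[100, 10], scale=5, size=(10, 2))
X = np.vstack([behaving, flooding])  # rows 40..49 are the flooding nodes

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Flag the cluster whose mean DB/BHP ratio is lower as "not behaving".
ratios = [X[km.labels_ == k][:, 1].mean() / X[km.labels_ == k][:, 0].mean()
          for k in range(2)]
bad_cluster = int(np.argmin(ratios))
suspects = np.where(km.labels_ == bad_cluster)[0]
```

The ratio heuristic is one simple way to decide which unlabeled cluster corresponds to misbehaving nodes; the paper's semi-supervised variant instead uses the small labeled subset for this step.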
Abstract: The aim of the research was to identify the effect of using the five-finger strategy on learning a movement chain on the balance beam apparatus for third-stage students in the College of Physical Education and Sports Science, as well as to identify which of the groups (experimental and control) is better at learning the kinematic chain on the balance beam apparatus. The experimental approach was used, with a design of experimental and control groups with pre- and post-tests. The research sample was represented by third-stage students: division (j) was chosen by lot to represent the experimental group and division (i) to represent the control group, after which (10) students from each division were chosen by lot to repr
The uptake of Cd(II) ions from simulated wastewater onto olive pips was modeled using an artificial neural network (ANN) consisting of three layers. Based on 112 batch experiments, the effects of contact time (10-240 min), initial pH (2-6), initial concentration (25-250 mg/l), biosorbent dosage (0.05-2 g/100 ml), agitation speed (0-250 rpm) and temperature (20-60 °C) were studied. The maximum uptake of Cd(II) (92%) was achieved at the optimum parameters of 60 min, pH 6, 50 mg/l, 1 g/100 ml, 250 rpm and 25 °C, respectively.
A tangent sigmoid transfer function for the hidden layer and a linear transfer function for the output layer, with 7 hidden neurons, were sufficient to give good predictions of cadmium removal efficiency, with a coefficient of correlation
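The network topology described above (6 process inputs, 7 tangent-sigmoid hidden neurons, 1 linear output) can be sketched as a plain forward pass. The weights below are random placeholders, not the trained values from the paper, and the input scaling to [-1, 1] is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(7, 6)), rng.normal(size=7)   # hidden layer (tansig)
W2, b2 = rng.normal(size=(1, 7)), rng.normal(size=1)   # output layer (linear)

def predict_removal(x):
    """x = [time, pH, C0, dosage, rpm, temperature], scaled to [-1, 1]."""
    h = np.tanh(W1 @ x + b1)     # tangent-sigmoid hidden activations
    return float(W2 @ h + b2)    # linear output: predicted removal efficiency

y = predict_removal(np.array([0.1, 0.6, -0.5, 0.3, 1.0, -0.2]))
```

In practice the weights would come from training against the 112 batch experiments; the point here is only the layer structure and transfer functions.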
In this work, we introduce the Jacobson radical (shortly, Rad(Ș)) of the endomorphism semiring Ș of a principal P.Q.-injective semimodule, together with some related concepts; we study some of their properties and add the conditions that we need. The most prominent result is obtained in Section 3:
- If the semimodule is a principal self-generator, then Rad(ȘȘ) = W(Ș).
Subject Classification: 16Y60
Convolutional neural networks (CNNs) are among the most widely used neural networks in various applications, including deep learning. In recent years, the continuing extension of CNNs into increasingly complicated domains has made their training process more difficult. Thus, researchers have adopted optimized hybrid algorithms to address this problem. In this work, a novel chaotic black hole algorithm-based approach was created for training CNNs to optimize their performance by avoiding entrapment in local minima. The logistic chaotic map was used to initialize the population instead of the uniform distribution. The proposed training algorithm was developed on a specific benchmark problem for optical character recognition
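The logistic-map initialization mentioned above can be sketched as follows: instead of drawing candidate solutions from a uniform distribution, successive iterates of the logistic map x_{k+1} = r·x_k·(1 − x_k) are rescaled to the search range. The parameter r = 4, the seed x0 = 0.7, and the population shape are illustrative choices, not values from the paper.

```python
import numpy as np

def logistic_map_population(n_agents, dim, r=4.0, x0=0.7, lo=-1.0, hi=1.0):
    """Generate an (n_agents, dim) population whose entries follow the
    logistic chaotic map, rescaled from (0, 1) to [lo, hi]."""
    x = x0
    pop = np.empty((n_agents, dim))
    for i in range(n_agents):
        for j in range(dim):
            x = r * x * (1.0 - x)           # chaotic iteration in (0, 1)
            pop[i, j] = lo + (hi - lo) * x  # rescale to the search range
    return pop

pop = logistic_map_population(30, 10)
```

At r = 4 the logistic map is fully chaotic, so the population covers the range without the clustering that a poor pseudo-random seed can produce, which is the usual motivation for chaotic initialization in metaheuristics.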
Although reverse engineering techniques have developed rapidly and 3D laser scanners can be considered the modern technology for digitizing 3D objects, some problems may accompany this process due to environmental noise and the limitations of the scanners used. Therefore, in the present paper a data pre-processing algorithm is proposed to obtain the necessary geometric features and a mathematical representation of a scanned object from its point cloud, which is obtained using a 3D laser scanner (Matter and Form), by isolating the noisy points. The proposed algorithm is based on continuous calculation of the chord angle between each adjacent pair of points in the point cloud. A MATLAB program has been built
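The chord-angle calculation at the core of the algorithm can be sketched as follows: for each interior point, compute the turning angle between the incoming chord and the outgoing chord, and flag points where the direction changes sharply. The 60-degree threshold and the toy 2D point sequence are illustrative assumptions, not the paper's values.

```python
import numpy as np

def chord_angles(points):
    """points: (n, 2) or (n, 3) ordered point cloud. Returns the turning
    angle (degrees) at each interior point between its two adjacent chords."""
    a = points[1:-1] - points[:-2]   # incoming chord vectors
    b = points[2:] - points[1:-1]    # outgoing chord vectors
    cosang = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Nearly straight sequence with one noisy spike at index 3.
pts = np.array([[0, 0], [1, 0.05], [2, 0.0], [2.5, 3.0], [3, 0.05], [4, 0.0]],
               dtype=float)
ang = chord_angles(pts)
noisy = np.where(ang > 60.0)[0] + 1   # point indices with sharp turns
```

A smooth scan line yields small turning angles, so large angles isolate the noised points; the spike at index 3 (and the chords touching it) is flagged while the straight run is kept.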
The continuous advancement in the use of the IoT has greatly transformed industries, but at the same time it has made IoT networks vulnerable to highly advanced cybercrime. Traditional security measures for the IoT have several limitations, and the protection of distributed and adaptive IoT systems requires new approaches. This research presents a novel deep-learning-based threat intelligence approach for IoT networks that maintains compliance with IEEE standards. The goal of the study is to interweave artificial intelligence with standardization frameworks and thus improve the identification of, protection against, and mitigation of cyber threats affecting IoT environments. The study is systematic and begins by examining IoT-specific threats
... Show MoreIn this paper, we focus on designing feed forward neural network (FFNN) for solving Mixed Volterra – Fredholm Integral Equations (MVFIEs) of second kind in 2–dimensions. in our method, we present a multi – layers model consisting of a hidden layer which has five hidden units (neurons) and one linear output unit. Transfer function (Log – sigmoid) and training algorithm (Levenberg – Marquardt) are used as a sigmoid activation of each unit. A comparison between the results of numerical experiment and the analytic solution of some examples has been carried out in order to justify the efficiency and the accuracy of our method.
The objective of this study was to introduce a recursive least squares (RLS) parameter estimator enhanced by a neural network (NN) to reduce the bit error rate (BER) during channel estimation of a multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system over a Rayleigh multipath fading channel. Recursive least squares is an efficient approach to neural network training: first, the neural network estimator learns to adapt to the channel variations, then it estimates the channel frequency response. Simulation results show that the proposed method performs better than the conventional least squares (LS) method and the original RLS, and it is more robust
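The RLS recursion underlying the estimator above can be sketched as the standard exponentially weighted update. The forgetting factor 0.98, the 3-tap scalar channel, and the noise level are illustrative assumptions rather than the paper's simulation settings.

```python
import numpy as np

def rls_step(w, P, x, d, lam=0.98):
    """One exponentially weighted RLS update.
    w: weight estimate, P: inverse correlation matrix,
    x: regressor vector, d: desired (observed) output."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a-priori estimation error
    w = w + k * e                    # weight update
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation update
    return w, P, e

# Identify a fixed 3-tap "channel" from noisy input/output pairs.
rng = np.random.default_rng(3)
h_true = np.array([0.8, -0.4, 0.2])
w, P = np.zeros(3), np.eye(3) * 100.0
for _ in range(500):
    x = rng.normal(size=3)
    d = h_true @ x + 1e-3 * rng.normal()
    w, P, _ = rls_step(w, P, x, d)
```

In the paper's scheme this recursion serves as the training rule for the NN channel estimator; the forgetting factor lets the estimate track Rayleigh fading as the channel varies over time.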