Traffic classification refers to the task of categorizing traffic flows into application-aware classes such as chat, streaming, VoIP, etc. Most network traffic identification systems are based on features, which may be static signatures, port numbers, statistical characteristics, and so on. Although current methods of data-flow classification are effective, they still lack inventive approaches to meet critical requirements such as real-time traffic classification, low power consumption, low Central Processing Unit (CPU) utilization, etc. Our novel Fast Deep Packet Header Inspection (FDPHI) traffic classification proposal employs a one-dimensional Convolutional Neural Network (1D-CNN) to automatically learn more representative characteristics of traffic flow types, considering only the positions of selected bits from the packet header. The proposal is a learning approach based on deep packet inspection that integrates the feature extraction and classification phases into one system. The results show that FDPHI performs very well for feature learning. It also presents strong traffic classification results in terms of energy consumption (about 70% less power, around 48% less CPU utilization) and processing time (310% faster for IPv4 and 595% faster for IPv6).
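As a rough illustration of the kind of model the FDPHI proposal describes, the sketch below builds a small 1D-CNN classifier over selected packet-header bytes. The layer sizes, the 40-byte header slice, and the five traffic classes are assumptions for illustration, not the paper's actual configuration.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

HEADER_BYTES = 40   # assumed: number of header bytes kept per packet
NUM_CLASSES = 5     # assumed: chat, streaming, VoIP, ...

# 1D-CNN that learns features directly from raw header bytes,
# merging the feature extraction and classification phases.
model = keras.Sequential([
    layers.Input(shape=(HEADER_BYTES, 1)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy batch: header bytes scaled to [0, 1].
x = np.random.randint(0, 256, (8, HEADER_BYTES, 1)) / 255.0
y = np.random.randint(0, NUM_CLASSES, 8)
model.fit(x, y, epochs=1, verbose=0)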
The heritage of human peoples remains a beacon of data that, once awareness of it is renewed, opens itself to new readings born from the womb of its intertwined elements of customs, knowledge, arts, and literature. To avoid misleading the reception process, it is worth engaging cautiously with the text sample selected by the present study, which was prepared according to a descriptive approach along three dimensions, together with a comprehensive textual treatment of that specific communicative dimension within the anthology of Andalusian-Moroccan poetry in the book of pleasure in the perception of the five senses, the Tahdib of Ibn Manzoor (d. 711 AH), based on the original by Tivashi (d. 651 AH).
A skip list data structure is essentially a simulation of a binary search tree. Skip list algorithms are simpler, faster, and use less space. Conceptually, this data structure uses parallel sorted linked lists. Searching in a skip list is faster than searching in a regular sorted linked list because a skip list is a two-dimensional data structure, implemented using a two-dimensional network of nodes with four pointers. The search, insert, and delete operations take expected O(log n) time. The skip list can be modified to implement the order-statistic operations RANK and SEARCH BY RANK while maintaining the same expected time. Keywords: skip list, parallel linked list, randomized algorithm, rank.
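A minimal sketch of the core idea follows, assuming a classic randomized skip list (forward-pointer arrays rather than the four-pointer node layout the abstract mentions): search drops down a level whenever the next key would overshoot, which yields the expected O(log n) behavior.

import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)  # one pointer per level

class SkipList:
    MAX_LEVEL = 16
    P = 0.5  # probability of promoting a node to the next level

    def __init__(self):
        self.level = 0
        self.head = Node(None, self.MAX_LEVEL)

    def _random_level(self):
        lvl = 0
        while random.random() < self.P and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        node = self.head
        # Drop down a level each time the next key would overshoot.
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(key, lvl)
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

# Usage: both insert and search are O(log n) expected.
sl = SkipList()
for k in [3, 1, 7, 5]:
    sl.insert(k)
print(sl.search(5), sl.search(4))  # True False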
Solar photovoltaic (PV) systems have emerged as one of the most promising technologies for generating clean energy. In this work, the performance of a monocrystalline silicon photovoltaic module is studied by observing the effect of the essential parameters: solar irradiation and ambient temperature. The single-diode model with series resistance is selected to characterize the current-voltage (I-V) and power-voltage (P-V) curves by determining the values of its five parameters (photocurrent, diode saturation current, diode ideality factor, series resistance, and shunt resistance). This model shows high accuracy in modeling the solar PV module under various weather conditions. The modeling is simulated using MATLAB/Simulink software. The performance of the selected solar PV module is also tested experimentally under different weather conditions.
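Although the paper's modeling is done in MATLAB/Simulink, the single-diode equation itself is compact; the Python sketch below solves it for the I-V curve by damped fixed-point iteration. All parameter values here are illustrative assumptions, not the module studied in the paper.

import numpy as np

# Single-diode model with series and shunt resistance:
#   I = Iph - I0*(exp((V + I*Rs) / Vt) - 1) - (V + I*Rs) / Rsh
q, k = 1.602e-19, 1.381e-23

def iv_curve(Iph, I0, a, Rs, Rsh, Ns=60, T=298.15, points=200):
    Vt = Ns * a * k * T / q                  # modified thermal voltage
    V = np.linspace(0.0, 0.95 * Ns * 0.6, points)
    I = np.full_like(V, Iph)                 # initial guess: photocurrent
    for _ in range(300):                     # damped fixed-point iteration
        I_new = Iph - I0 * np.expm1((V + I * Rs) / Vt) - (V + I * Rs) / Rsh
        I = 0.5 * I + 0.5 * I_new
    return V, I

# Illustrative five-parameter set (assumed, not fitted to real data).
V, I = iv_curve(Iph=8.2, I0=5e-8, a=1.3, Rs=0.2, Rsh=300.0)
P = V * I
print(f"Max power ≈ {P.max():.1f} W at {V[P.argmax()]:.1f} V")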
The aesthetic content of data visualization is one of the contemporary areas through which data scientists and designers have been able to link data to humans. Yet even after successful attempts to model data visualization, it was not clear how the aesthetic content contributed, as an input, to humanizing these models. The goal of the current research is therefore to use the analytical descriptive approach to identify the aesthetic contents in data visualization, which the researchers interpret through pragmatic philosophy and Kantian philosophy, and to analyze a sample of data visualization models to reveal their aesthetic entry points and explain how to humanize them. The two researchers reached several conclusions.
Cloud storage provides scalable and low-cost resources, featuring economies of scale based on a cross-user architecture. As the amount of outsourced data grows explosively, data deduplication, a technique that eliminates data redundancy, becomes essential. The most important cloud service is data storage. To protect the privacy of data owners, data are stored in the cloud in encrypted form. However, encrypted data introduce new challenges for cloud data deduplication, which is crucial for data storage. Traditional deduplication schemes cannot work on encrypted data, and existing solutions for encrypted-data deduplication suffer from security weaknesses. This paper proposes combining compressive sensing and video deduplication to maximize storage efficiency.
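The paper's own scheme is only summarized above; as background on why encryption normally defeats deduplication, the sketch below shows convergent encryption, a standard workaround in which the key is derived from the content itself, so identical plaintexts yield identical ciphertexts that the cloud can deduplicate. The SHA-256/AES-CTR choices and function names are illustrative assumptions, not the paper's construction.

import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def convergent_encrypt(data: bytes) -> tuple[bytes, bytes]:
    # Key is derived from the content itself, so identical blocks
    # encrypt to identical ciphertexts and can be deduplicated.
    key = hashlib.sha256(data).digest()
    iv = hashlib.sha256(key).digest()[:16]   # deterministic IV (illustrative)
    enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return key, enc.update(data) + enc.finalize()

store: dict[bytes, bytes] = {}  # fingerprint -> ciphertext (the cloud side)

def upload(data: bytes) -> bytes:
    key, ct = convergent_encrypt(data)
    fingerprint = hashlib.sha256(ct).digest()
    store.setdefault(fingerprint, ct)  # duplicate blocks stored only once
    return key                         # owner keeps the key for decryption

# Two users uploading the same video chunk consume storage once.
k1 = upload(b"same video chunk")
k2 = upload(b"same video chunk")
assert len(store) == 1 and k1 == k2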
In this paper, an image compression technique based on the zonal transform method is presented. The DCT, Walsh, and Hadamard transform techniques are also implemented. These different transforms are applied to SAR images using different block sizes, and the effects of implementing them are investigated. The main shortcoming associated with this radar imagery system is the presence of speckle noise, which affects the compression results.
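A minimal sketch of zonal coding with the DCT follows (random data stands in for a SAR image; the 8x8 block size and the triangular low-frequency zone are assumptions): each block is transformed and only the coefficients inside the retained zone survive.

import numpy as np
from scipy.fft import dctn, idctn

def zonal_compress(block: np.ndarray, keep: int) -> np.ndarray:
    """Keep only the low-frequency zone (u + v < keep) of the block's 2D DCT."""
    coeffs = dctn(block, norm="ortho")
    u, v = np.indices(block.shape)
    coeffs[u + v >= keep] = 0.0        # zonal mask: discard high frequencies
    return idctn(coeffs, norm="ortho")

# Illustrative 8x8 blocks over random data standing in for a SAR image.
img = np.random.rand(64, 64)
out = np.zeros_like(img)
B = 8
for i in range(0, img.shape[0], B):
    for j in range(0, img.shape[1], B):
        out[i:i+B, j:j+B] = zonal_compress(img[i:i+B, j:j+B], keep=4)

print("reconstruction MSE:", np.mean((img - out) ** 2))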
In many scientific fields, Bayesian models are commonly used in recent research. This research presents a new Bayesian model for estimating parameters and forecasting using the Gibbs sampler algorithm. Posterior distributions are generated using the inverse gamma distribution and the multivariate normal distribution as prior distributions. The new method was used to investigate and summarize the posterior distribution of the Bayesian statistics. The theory and derivation of the posterior distribution are explained in detail in this paper. The proposed approach is applied to three simulated datasets with sample sizes of 100, 300, and 500. The procedure was also extended to a real dataset, the rock intensity dataset.
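The paper's exact model is not reproduced here; the sketch below is a minimal Gibbs sampler for Bayesian linear regression under the same family of priors (multivariate normal on the coefficients, inverse gamma on the noise variance). The data, prior hyperparameters, and iteration counts are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = X @ beta + noise (stand-in for the paper's datasets).
n, p = 100, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.8, size=n)

# Priors: beta ~ N(0, tau2 * I), sigma2 ~ Inverse-Gamma(a0, b0).
tau2, a0, b0 = 100.0, 2.0, 1.0

beta, sigma2 = np.zeros(p), 1.0
draws = []
XtX, Xty = X.T @ X, X.T @ y

for it in range(2000):
    # 1) beta | sigma2, y  ~  multivariate normal (conjugate update)
    cov = np.linalg.inv(XtX / sigma2 + np.eye(p) / tau2)
    mean = cov @ (Xty / sigma2)
    beta = rng.multivariate_normal(mean, cov)
    # 2) sigma2 | beta, y  ~  inverse gamma (conjugate update)
    resid = y - X @ beta
    sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))
    draws.append(beta)

posterior_mean = np.mean(draws[500:], axis=0)  # discard burn-in
print("posterior mean of beta:", posterior_mean)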
In the field of civil engineering, the adoption and use of Falling Weight Deflectometers (FWDs) is seen as a response to an ever-changing, technology-driven world. Specifically, FWDs are devices that aid in evaluating the physical properties of a pavement. This paper assesses the concepts of data processing, storage, and analysis via FWDs. The device has been found to play an important role in enabling operators and field practitioners to understand vertical deflection responses when pavements are subjected to impulse loads. In turn, the resultant data and its analysis outcomes lead to the backcalculation of the state of stiffness, with initial analyses of the deflection bowl occurring in conjunction with the measured or assumed layer thicknesses.
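As one concrete example of the backcalculation step, the sketch below computes the Boussinesq surface modulus from the center deflection of the bowl, assuming a uniformly loaded flexible plate on an elastic half-space. The load, plate radius, deflection, and Poisson's ratio are illustrative values, not data from this paper.

import math

def surface_modulus(load_kN: float, plate_radius_m: float,
                    center_deflection_um: float, poisson: float = 0.35) -> float:
    """Boussinesq surface modulus E0 = 2*(1 - v^2)*sigma*a / d0, in MPa."""
    area = math.pi * plate_radius_m ** 2
    sigma = (load_kN * 1e3) / area            # contact pressure, Pa
    d0 = center_deflection_um * 1e-6          # center deflection, m
    e0 = 2.0 * (1.0 - poisson ** 2) * sigma * plate_radius_m / d0
    return e0 / 1e6                           # Pa -> MPa

# Illustrative FWD drop: 50 kN on a 150 mm radius plate, 400 um deflection.
print(f"E0 ≈ {surface_modulus(50.0, 0.15, 400.0):.0f} MPa")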