Optical burst switching (OBS) is a new-generation optical communication technology. In an OBS network, an edge node first sends a control packet, called a burst header packet (BHP), which reserves the necessary resources for the upcoming data burst (DB). Once the reservation is complete, the DB starts travelling to its destination through the reserved path. A notable attack on OBS networks is the BHP flooding attack, in which an edge node sends BHPs to reserve resources but never actually sends the associated DBs. As a result, the reserved resources are wasted, and when this happens on a sufficiently large scale, a denial of service (DoS) may take place. In this study, we propose a semi-supervised machine learning approach using the k-means algorithm to detect malicious nodes in an OBS network. The proposed semi-supervised model was trained and validated with a small amount of data from a selected dataset. Experiments show that the model can classify the nodes into either behaving or not-behaving classes with 90% accuracy when trained with just 20% of the data. When the nodes are classified into behaving, not-behaving, and potentially not-behaving classes, the model shows 65.15% and 71.84% accuracy when trained with 20% and 30% of the data, respectively. Comparison with some notable works revealed that the proposed model outperforms them in many respects.
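A minimal cluster-then-label sketch of this semi-supervised idea is shown below: k-means groups the node records, and a small labelled subset (here assumed to be 20%) names each cluster by majority vote. The feature columns, dataset layout, and split are illustrative assumptions, not the paper's exact pipeline.

```python
# Cluster-then-label sketch: fit k-means on all node records, then assign each
# cluster the majority label seen in a small labelled subset. Feature names,
# the data layout and the 20% labelled split are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def cluster_then_label(X, y, labelled_fraction=0.2, n_clusters=2, seed=42):
    """X: node feature matrix, y: integer labels (0 = behaving, 1 = not-behaving)."""
    X = StandardScaler().fit_transform(np.asarray(X, dtype=float))
    y = np.asarray(y)
    X_lab, _, y_lab, _ = train_test_split(
        X, y, train_size=labelled_fraction, stratify=y, random_state=seed)

    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)

    # Name each cluster by the most common label among its labelled members.
    lab_clusters = km.predict(X_lab)
    cluster_to_class = {}
    for c in range(n_clusters):
        members = y_lab[lab_clusters == c]
        cluster_to_class[c] = int(np.bincount(members).argmax()) if members.size else 0

    return np.array([cluster_to_class[c] for c in km.labels_])

# Hypothetical usage, e.g. X_nodes = per-node traffic statistics, y_nodes = labels:
# predictions = cluster_then_label(X_nodes, y_nodes, labelled_fraction=0.2)
```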
Confocal microscope imaging has become popular in biotechnology labs. Confocal imaging technology utilizes fluorescence optics, where laser light is focused onto a specific spot at a defined depth in the sample. A considerable number of images are produced regularly during the course of research, and these images require unbiased quantification methods to support meaningful analyses. Increasing efforts to tie reimbursement to outcomes will likely increase the need for objective data in analyzing confocal microscope images in the coming years. Visual quantification of confocal images with the naked human eye is an essential but often underreported outcome measure, due to the time required for manual counting and …
Nowadays, information systems constitute a crucial part of organizations; by losing security, these organizations will also lose plenty of competitive advantages. The core of information security (InfoSecu) is risk management. There is a great deal of research work and many standards on information security risk management (ISRM), including NIST 800-30 and ISO/IEC 27005. However, only a few research works focus on InfoSecu risk reduction, while the standards explain general principles and guidelines; they do not provide any implementation details regarding ISRM, and as such, reducing InfoSecu risks in uncertain environments is painstaking. Thus, this paper applies a genetic algorithm (GA) for InfoSecu risk reduction under uncertainty. Finally, the ef…
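As a rough illustration of how a GA can be applied to such a risk-reduction problem, the sketch below evolves a binary selection of security countermeasures to maximize risk reduction under a budget constraint. The chromosome encoding, fitness function, and budget penalty are assumptions for illustration; the paper's actual formulation under uncertainty is not reproduced here.

```python
# Minimal GA sketch: each chromosome is a 0/1 selection of countermeasures;
# fitness rewards total risk reduction and penalises plans that exceed the budget.
import random

def run_ga(risk_reduction, cost, budget, pop_size=40, generations=100,
           crossover_p=0.8, mutation_p=0.02, seed=1):
    n = len(cost)
    rng = random.Random(seed)

    def fitness(ind):
        total_cost = sum(c for bit, c in zip(ind, cost) if bit)
        if total_cost > budget:                      # penalise infeasible plans
            return -total_cost
        return sum(r for bit, r in zip(ind, risk_reduction) if bit)

    def pick():                                      # binary tournament selection
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < crossover_p:           # one-point crossover
                cut = rng.randrange(1, n)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):                   # bit-flip mutation
                nxt.append([1 - g if rng.random() < mutation_p else g for g in child])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# Hypothetical usage: five candidate controls with assumed risk reduction and cost.
best_plan = run_ga(risk_reduction=[5, 3, 8, 2, 6], cost=[4, 2, 7, 1, 5], budget=10)
```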
In many applications, such as production and planning, the decision maker is interested in optimizing an objective function that is a ratio of two fuzzy functions, which can be handled using the fuzzy fractional programming technique. This work considers a special class of optimization problem, the fuzzy fractional programming problem, in which the coefficients of the objective function are fuzzy. A new ranking function is proposed and used to convert the data of the fuzzy fractional programming problem from fuzzy numbers to crisp numbers, so that the shortcomings of treating the original fuzzy problem directly can be avoided. Here, a novel ranking function approach for ordinary fuzzy numbers is adopted for ranking triangular fuzzy numbers with simpler an…
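To make the defuzzification step concrete, the sketch below ranks triangular fuzzy numbers with a standard centroid ranking so that fuzzy coefficients become crisp values; this textbook ranking is only a stand-in and is not the new ranking function proposed in the work.

```python
# Centroid ranking of triangular fuzzy numbers (a standard textbook choice,
# used here only to illustrate the fuzzy-to-crisp conversion step).
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    a: float  # lower bound
    b: float  # peak (membership = 1)
    c: float  # upper bound

    def centroid_rank(self) -> float:
        """Centroid (a + b + c) / 3 of the triangular membership function."""
        return (self.a + self.b + self.c) / 3.0

# Example: converting fuzzy objective coefficients to crisp ones before solving
# the resulting ordinary fractional programming problem.
coefficients = [TriangularFuzzyNumber(2, 3, 5), TriangularFuzzyNumber(1, 4, 6)]
crisp = [t.centroid_rank() for t in coefficients]
```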
A resume is the first impression between you and a potential employer; therefore, its importance can never be underestimated. Selecting the right candidates for a job within a company can be a daunting task for recruiters when they have to review hundreds of resumes. To reduce time and effort, we can use NLTK and Natural Language Processing (NLP) techniques to extract essential data from a resume. NLTK is a free, open-source, community-driven project and the leading platform for building Python programs to work with human language data. To select the best resume according to the company's requirements, an algorithm such as KNN is used. To be selected from hundreds of resumes, your resume must be one of the best. Theref…
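A minimal sketch of such a pipeline is shown below: NLTK-based preprocessing, TF-IDF features, and a KNN classifier over resume text. The job categories, sample resumes, and preprocessing choices are illustrative assumptions rather than the system's actual implementation.

```python
# Resume screening sketch: NLTK preprocessing + TF-IDF features + KNN classifier.
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

nltk.download("stopwords", quiet=True)
tokenizer = RegexpTokenizer(r"[a-z]+")   # keep alphabetic tokens only

def preprocess(text: str) -> str:
    """Lower-case, tokenize with NLTK, and drop English stop words."""
    stops = set(stopwords.words("english"))
    return " ".join(t for t in tokenizer.tokenize(text.lower()) if t not in stops)

# Hypothetical training data: resume text paired with a job-category label.
resumes = ["Experienced Python developer, Django and REST APIs ...",
           "Registered nurse with ICU experience ..."]
labels = ["software", "healthcare"]

model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
model.fit([preprocess(r) for r in resumes], labels)
print(model.predict([preprocess("Backend engineer, Python, Flask")]))
```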
The Field Programmable Gate Array (FPGA) approach is the most recent category to take its place in the implementation of most Digital Signal Processing (DSP) applications. It has proved capable of handling such problems and supports all the necessary requirements, such as scalability, speed, size, cost, and efficiency.
In this paper, a new circuit design is proposed and implemented for evaluating the coefficients of the two-dimensional Wavelet Transform (WT) and Wavelet Packet Transform (WPT) using an FPGA.
In this implementation, the evaluation of the WT and WPT coefficients depends on filter-tree decomposition using the 2-D discrete convolution algorithm. This implementation w…
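As a software reference for the filter-tree decomposition just described, the sketch below computes one level of 2-D wavelet coefficients via separable discrete convolution followed by downsampling. The Haar filter pair is an illustrative assumption; the paper's chosen wavelet and the FPGA circuit itself are not reproduced here.

```python
# One level of 2-D wavelet decomposition: 2-D discrete convolution with separable
# low/high-pass filters, then downsampling by 2 in each direction.
import numpy as np
from scipy.signal import convolve2d

# Haar analysis filters (low-pass and high-pass), assumed for illustration.
LO = np.array([1.0, 1.0]) / np.sqrt(2.0)
HI = np.array([1.0, -1.0]) / np.sqrt(2.0)

def dwt2_level(image: np.ndarray):
    """Return the LL, LH, HL and HH coefficient sub-bands of one DWT level."""
    def analyse(data, row_filter, col_filter):
        kernel = np.outer(col_filter, row_filter)     # separable 2-D kernel
        return convolve2d(data, kernel, mode="same")[::2, ::2]
    ll = analyse(image, LO, LO)
    lh = analyse(image, LO, HI)
    hl = analyse(image, HI, LO)
    hh = analyse(image, HI, HI)
    return ll, lh, hl, hh

# For the wavelet *packet* transform, the same decomposition is applied
# recursively to all four sub-bands, not only to LL.
subbands = dwt2_level(np.random.rand(64, 64))
```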
This research aims to discuss the impact of investment in human capital dimensions (training, education, knowledge management, skills development) and its components (knowledge, skills, abilities, value) among the staff of the Office of the Inspector General - Ministry of Culture in Iraq. It relied on a questionnaire as the tool for collecting data and information, which was subjected to validity and reliability testing and distributed to a sample of (63) individuals across the positions of (director, director of the employees' division). The research data were analyzed using the ready-made statistical software (SPSS), which was used for hypothesis testing and correlati…
Problem: Cancer is regarded as one of the world's deadliest diseases. Machine learning and its newer branch, deep learning, can facilitate the way cancer is dealt with, especially in the field of cancer prevention and detection. Traditional ways of analyzing cancer data have their limits, and cancer data is growing quickly, which makes it possible for deep learning to move forward with its powerful abilities to analyze and process cancer data. Aims: In the current study, a deep-learning medical support system for the prediction of lung cancer is presented. Methods: The study uses three different deep learning models (EfficientNetB3, ResNet50, and ResNet101) with the transfer learning concept. The three models are trained using a…
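The sketch below illustrates the transfer-learning setup with one of the named backbones (EfficientNetB3) in Keras: the ImageNet-pretrained convolutional layers are frozen and a new classification head is trained. The input size, number of classes, and training settings are assumptions for illustration and do not reflect the study's actual configuration or results.

```python
# Transfer-learning sketch with an ImageNet-pretrained EfficientNetB3 backbone.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB3

NUM_CLASSES = 2           # e.g. malignant vs. benign (assumed)
IMG_SIZE = (300, 300)     # EfficientNetB3's default input resolution

base = EfficientNetB3(include_top=False, weights="imagenet",
                      input_shape=IMG_SIZE + (3,))
base.trainable = False    # freeze the pretrained backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_dataset, validation_data=val_dataset, epochs=10)
# Swapping EfficientNetB3 for ResNet50 or ResNet101 only changes the `base` line.
```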
Recently, the internet has made users able to transmit digital media in the easiest manner. In spite of this facility, it may lead to several threats concerning the confidentiality of transferred media content, such as media authentication and integrity verification. For these reasons, data hiding methods and cryptography are used to protect the contents of digital media. In this paper, an enhanced method of image steganography combined with visual cryptography is proposed. A secret logo (binary image) of size (128x128) is encrypted by applying (2 out of 2 share) visual cryptography to it to generate two secret shares. During the embedding process, a cover red, green, and blue (RGB) image of size (512…
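To illustrate the (2 out of 2) visual cryptography step, the sketch below expands each secret pixel into a 2x2 block in two shares so that combining the shares reveals the logo. The block patterns and the XOR-style recovery are common textbook simplifications, and the subsequent embedding of the shares into the RGB cover image is not shown; this is not the paper's exact scheme.

```python
# (2 out of 2) visual cryptography sketch: each secret pixel becomes a 2x2 block
# in both shares; identical blocks encode white, complementary blocks encode black.
import numpy as np

PATTERNS = [np.array([[0, 1], [1, 0]]), np.array([[1, 0], [0, 1]])]

def make_shares(secret: np.ndarray, seed: int = 0):
    """secret: binary array (0 = white, 1 = black). Returns two shares, each
    twice the size of the secret in both dimensions."""
    rng = np.random.default_rng(seed)
    h, w = secret.shape
    share1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    share2 = np.zeros_like(share1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            share1[2*i:2*i+2, 2*j:2*j+2] = p
            share2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
    return share1, share2

logo = np.random.randint(0, 2, size=(128, 128))   # stand-in for the secret logo
s1, s2 = make_shares(logo)
recovered = s1 ^ s2   # black secret pixels become all-ones blocks, white become zeros
```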