Recently, a new secure steganography algorithm has been proposed, namely, the secure Block Permutation Image Steganography (BPIS) algorithm. The algorithm consists of five main steps: convert the secret message to a binary sequence, divide the binary sequence into blocks, permute each block using a key-based randomly generated permutation, concatenate the permuted blocks to form a permuted binary sequence, and then use a plane-based Least-Significant-Bit (LSB) approach to embed the permuted binary sequence into a BMP image file. The performance of the algorithm was given a preliminary evaluation by estimating the PSNR (Peak Signal-to-Noise Ratio) of the stego image for a limited number of experiments comprising hiding…
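As a rough illustration of the embedding pipeline described above, the following Python sketch converts a message to bits, permutes fixed-size blocks with a key-seeded random permutation, and writes the result into image LSBs. The block size, key handling, and flat (rather than plane-based) LSB traversal are illustrative assumptions, not the BPIS authors' exact implementation.

```python
import numpy as np

def bpis_style_embed(cover, message, key, block_size=64):
    """Sketch: keyed block permutation of the message bits followed by LSB embedding."""
    bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
    pad = (-len(bits)) % block_size                      # pad to a whole number of blocks
    bits = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)])
    rng = np.random.default_rng(key)                     # key-based random permutations
    permuted = np.concatenate([
        block[rng.permutation(block_size)]               # permute each block
        for block in bits.reshape(-1, block_size)
    ])
    flat = cover.flatten()                               # copy of the cover pixels
    if len(permuted) > len(flat):
        raise ValueError("message too large for this cover image")
    flat[:len(permuted)] = (flat[:len(permuted)] & 0xFE) | permuted   # set the LSBs
    return flat.reshape(cover.shape)

cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in for a BMP image
stego = bpis_style_embed(cover, "secret message", key=1234)
```

Extraction would invert the same keyed permutations, which is why the key must be shared with the receiver.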
The purpose of this paper is to develop a hybrid conceptual model for building information modelling (BIM) adoption in facilities management (FM) through the integration of the task-technology fit (TTF) and unified theory of acceptance and use of technology (UTAUT) theories. The study also aims to identify the factors that influence BIM adoption and usage in FM, identify gaps in the existing literature, and provide a holistic picture of recent research on technology acceptance and adoption in the construction industry and the FM sector.
Machine learning models have recently shown great promise in the diagnosis of several ophthalmic disorders, including keratoconus (KCN). Keratoconus, a non-inflammatory ectatic corneal disorder characterized by progressive corneal thinning, is challenging to detect because its signs may be subtle. Several machine learning models have been proposed to detect KCN; however, most of these models are supervised and thus require large, well-annotated datasets. This paper proposes a new unsupervised model to detect KCN, based on an adapted flower pollination algorithm (FPA) and the k-means algorithm. We evaluate the proposed model using corneal data collected from 5430 eyes at different stages of KCN severity (1520 healthy, 331 KCN1, 1319 KCN2, 1699 KCN3 a…
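A minimal sketch of the unsupervised clustering half of this idea, using plain k-means on a placeholder feature matrix. The adapted FPA that the paper pairs with the clustering, and the real corneal feature set, are not reproduced here; the feature columns, cluster count, and k-means++ initialisation below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder corneal feature matrix: rows are eyes, columns are hypothetical
# topography/pachymetry-derived features (the paper's actual features are not shown here).
rng = np.random.default_rng(0)
X = rng.normal(size=(5430, 8))

# Unsupervised grouping into severity-like clusters; five groups (healthy plus four
# KCN stages) is an assumption. In the paper the clustering is driven by an adapted
# flower pollination algorithm; plain k-means++ is used here as a stand-in.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # number of eyes assigned to each cluster
```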
The drill bit is the most essential tool in the drilling operation, and optimum bit selection is one of the main challenges in planning and designing new wells. Conventional bit selection is mostly based on the historical performance of similar bits in offset wells; it can also be done with different techniques based on offset well logs. However, these methods are time-consuming and do not depend on actual drilling parameters. The main objective of this study is to optimize bit selection in order to achieve the maximum rate of penetration (ROP). In this work, a model that predicts the ROP was developed using artificial neural networks (ANNs) based on 19 input parameters. For the…
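To make the modelling step concrete, here is a hedged Python sketch of an ANN regressor mapping 19 drilling-related inputs to ROP; the synthetic data, network architecture, and preprocessing are illustrative assumptions rather than the study's actual model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder training data: 19 drilling/bit parameters per sample (the study's real
# inputs, e.g. weight on bit, RPM, flow rate, formation properties, are not listed here).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 19))
y = rng.normal(loc=20.0, scale=5.0, size=1000)   # stand-in ROP values

# Feed-forward ANN regressor; the hidden-layer sizes are an assumption.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))   # predicted ROP for the first three samples
```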
Virtual decomposition control (VDC) is an efficient tool for dealing with the full-dynamics-based control problem of complex robots. However, the regressor-based adaptive control used by VDC to control every subsystem and to estimate the unknown parameters demands specific knowledge of the system physics. Therefore, in this paper, we focus on reorganizing the VDC equations for a serial-chain manipulator using the adaptive function approximation technique (FAT), which does not require specific knowledge of the system physics. The dynamic matrices of each subsystem's dynamic equation (e.g. link and joint) are approximated by orthogonal functions because of the minimal approximation errors they produce. The contr…
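As a small illustration of the function-approximation idea only (not VDC's adaptive update laws), the sketch below represents an unknown time-varying quantity, standing in for one entry of a subsystem's dynamic matrix, as a weighted sum of orthogonal Chebyshev basis functions; the trajectory and basis size are assumptions.

```python
import numpy as np

# Hedged sketch of the FAT representation: an unknown quantity is expressed as a
# weighted sum of orthogonal basis functions, and only the weights need to be estimated.
t = np.linspace(-1.0, 1.0, 200)
m_true = 2.0 + 0.5 * np.sin(3.0 * t) + 0.1 * t**2     # stand-in for a matrix entry M_ij(t)

N = 8                                                 # number of basis functions (assumption)
weights = np.polynomial.chebyshev.chebfit(t, m_true, deg=N - 1)   # least-squares weights
m_hat = np.polynomial.chebyshev.chebval(t, weights)               # reconstruction from the basis

print("max approximation error:", np.max(np.abs(m_true - m_hat)))
```

In the adaptive-control setting the weights would be updated online by an adaptation law rather than fitted in one batch as done here.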
In this paper, an algorithm that can embed more data than regular spatial-domain methods is introduced. The secret data are first compressed using Huffman coding, and the compressed data are then embedded using a Laplacian sharpening method. Laplacian filters are used to determine effective hiding places; based on a threshold value, the locations with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding data in the locations with the strongest, least noticeable edges. The perform…
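A hedged Python sketch of the edge-guided embedding step described above: pixels whose Laplacian response exceeds a threshold are treated as hiding places and receive the payload bits in their LSBs. The Huffman compression stage is omitted (the payload below simply stands in for Huffman-coded bits), and the threshold value and scan order are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def embed_in_edges(cover, bits, threshold):
    """Sketch: hide bits in the LSBs of pixels with a strong Laplacian (edge) response."""
    response = np.abs(laplace(cover.astype(np.int16)))    # edge strength per pixel
    rows, cols = np.nonzero(response > threshold)         # candidate hiding places
    if len(bits) > len(rows):
        raise ValueError("not enough high-edge pixels for this payload")
    stego = cover.copy()
    for bit, r, c in zip(bits, rows, cols):
        stego[r, c] = (stego[r, c] & 0xFE) | bit           # LSB embedding
    return stego

cover = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # stand-in grayscale image
payload = np.random.randint(0, 2, 200, dtype=np.uint8)          # stands in for Huffman-coded bits
stego = embed_in_edges(cover, payload, threshold=40)
```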
In this paper, we present a proposed enhancement of image compression using the RLE algorithm. The proposal decreases the size of the compressed image, whereas the original method, used primarily for compressing binary images [1], tends to increase the size of the original image when applied to color images. The enhanced algorithm is tested on a sample of ten 24-bit true-color BMP images, and an application was built in Visual Basic 6.0 to show the image size before and after compression and to compute the compression ratio for both RLE and the enhanced RLE algorithm.
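For reference, here is a minimal Python sketch of plain run-length encoding on a byte sequence, i.e. the baseline the paper enhances; the enhancement for 24-bit true-color images is not reproduced here.

```python
def rle_encode(data):
    """Plain run-length encoding of a byte sequence into (value, run length) pairs."""
    if not data:
        return []
    runs, current, count = [], data[0], 1
    for value in data[1:]:
        if value == current and count < 255:   # cap runs so counts fit in one byte
            count += 1
        else:
            runs.append((current, count))
            current, count = value, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    return bytes(value for value, count in runs for _ in range(count))

sample = bytes([0, 0, 0, 255, 255, 7])
encoded = rle_encode(sample)
assert rle_decode(encoded) == sample
print(encoded)   # [(0, 3), (255, 2), (7, 1)]
```

On data with few repeated values (typical of true-color images), the (value, count) pairs can exceed the original size, which is the weakness the enhanced algorithm targets.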
In recent years there has been a profound evolution in computer science and technology, spanning several fields. Within this evolution, Content-Based Image Retrieval (CBIR) belongs to the image processing field. As image retrieval methods have progressed, several of them can now extract features with ease. Finding effective image retrieval tools has therefore become a major area of concern for researchers. An image retrieval technique refers to a system used to search for and retrieve images from a huge database of digital images. In this paper, the author proposes a new method for image retrieval. For multi-representation of an image in a Convolutional Neural Network (CNN),…
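For context only, a generic Python sketch of CNN-feature-based retrieval: it uses an off-the-shelf ResNet-18 backbone and cosine similarity, not the paper's multi-representation scheme, and the random tensors stand in for real image batches.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Backbone CNN; weights are left unloaded so the sketch stays self-contained
# (in practice a pretrained model would be used as the feature extractor).
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()      # drop the classifier, keep the 512-d feature vector
backbone.eval()

def extract(images):             # images: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        feats = backbone(images)
    return nn.functional.normalize(feats, dim=1)   # unit length, so dot product = cosine similarity

database = extract(torch.rand(20, 3, 224, 224))    # stand-in image database
query = extract(torch.rand(1, 3, 224, 224))        # stand-in query image
scores = query @ database.T                        # cosine similarities to every database image
print(scores.argsort(descending=True)[0][:5])      # indices of the 5 best matches
```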