The field of autonomous robotic systems has advanced tremendously in recent years, allowing robots to perform complicated tasks in various contexts. One of the most important applications of guide robots is support for the blind, which requires an accurate and robust self-localization system for indoor environments. This paper proposes such a self-localization system for guide robots, formulated as an image-classification problem and trained with a deep learning model, a convolutional neural network (CNN). Treating indoor localization as image classification provides a more accurate solution for indoor robot navigation and a more reliable interface between humans and robots. The study demonstrates how a robot determines its initial position inside a room: the robot was placed in the room and collected images with a panoramic camera, and two datasets were created from these images according to whether the camera height was above or below chest height. The proposed method achieved a localization accuracy of 98.98%.
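As a rough illustration of the classification setup described above, the following is a minimal sketch in Keras; the input resolution, the number of position classes, and the network depth are assumptions for illustration, since the abstract does not specify the architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical settings: the abstract does not give image size or class count.
IMG_SIZE = (128, 128)      # input resolution after resizing the panoramic frames
NUM_POSITIONS = 16         # number of discrete positions (classes) inside the room

def build_localization_cnn():
    """Plain CNN that classifies an image into one of the room positions."""
    model = models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_POSITIONS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch: model.fit(train_images, train_position_labels, epochs=20, validation_split=0.1)
```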
Image segmentation can be defined as the process of partitioning a digital image into meaningful regions (segments), each consisting of image elements that share attributes distinguishing them from the pixels that constitute other parts. The researcher in this paper followed two phases of image processing. In the first phase, the images were pre-processed before segmentation using statistical confidence intervals, which can be used to estimate unknown observations, as suggested by Acho & Buenestado in 2018. In the second phase, the images were segmented using Bernsen's thresholding technique. The researcher drew the conclusion that in case of utilizing
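Bernsen's technique itself is a local (adaptive) thresholding rule: each pixel is compared with the mid-point of the minimum and maximum intensity in a window around it. A minimal sketch follows; the window size, contrast limit, and the fall-back rule for low-contrast windows are common choices rather than values taken from the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def bernsen_threshold(gray, window=15, contrast_limit=15, global_threshold=128):
    """Bernsen local thresholding (illustrative sketch).

    gray: 2-D uint8 array. Each pixel is thresholded at the mid-point of the
    local min and max; low-contrast windows fall back to a global threshold.
    """
    local_max = maximum_filter(gray, size=window).astype(np.int16)
    local_min = minimum_filter(gray, size=window).astype(np.int16)
    mid = (local_max + local_min) // 2
    contrast = local_max - local_min

    binary = np.where(contrast >= contrast_limit,
                      gray > mid,                 # high-contrast: local mid-point rule
                      gray > global_threshold)    # low-contrast: global fallback (one common variant)
    return binary.astype(np.uint8) * 255
```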
This paper introduces an innovative method for image encryption called "Two-Fold Cryptography," which leverages the Henon map in a dual-layer encryption framework. By applying two distinct encryption processes, this approach offers enhanced security for images. Key parameters generated by the Henon map dynamically shape both stages of encryption, creating a sophisticated and robust security system. The findings reveal that Two-Fold Cryptography provides a notable improvement in image protection, outperforming traditional single-layer encryption techniques.
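The abstract does not describe the exact construction, but a minimal sketch of how a Henon map can drive two successive encryption layers might look as follows; the keystream derivation and the XOR-based layers are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def henon_keystream(length, x0, y0, a=1.4, b=0.3):
    """Generate a byte keystream from the Henon map (illustrative)."""
    x, y = x0, y0
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x, y = 1.0 - a * x * x + y, b * x
        # Map the chaotic state onto a byte value.
        stream[i] = int(abs(x) * 1e6) % 256
    return stream

def two_layer_encrypt(image, key1=(0.1, 0.3), key2=(0.2, 0.4)):
    """Apply two successive keystream layers to a flattened 8-bit image."""
    flat = image.astype(np.uint8).ravel()
    layer1 = flat ^ henon_keystream(flat.size, *key1)
    layer2 = layer1 ^ henon_keystream(flat.size, *key2)
    return layer2.reshape(image.shape)
```

Decryption in this sketch simply repeats the same two XOR layers, since XOR is its own inverse.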
In this paper, we designed a new efficient stream cipher cryptosystem that depends on a chaotic map to encrypt (decrypt) different types of digital images. The designed encryption system passed all basic efficiency criteria (such as randomness, MSE, PSNR, histogram analysis, and key space), which were applied both to the key extracted from the random generator and to the digital images after completing the encryption process.
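Two of the listed criteria, MSE and PSNR, have standard definitions that can be computed directly between the plain image and the cipher image; a short sketch is given below (the ~10 dB figure in the comment is a common rule of thumb for encrypted images, not a result from the paper).

```python
import numpy as np

def mse(original, encrypted):
    """Mean squared error between two equally sized 8-bit images."""
    diff = original.astype(np.float64) - encrypted.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, encrypted, peak=255.0):
    """Peak signal-to-noise ratio in dB; lower values indicate stronger scrambling."""
    m = mse(original, encrypted)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# Between a plain image and its ciphertext, a good image cipher typically yields a
# large MSE and a low PSNR (often below ~10 dB), plus a near-uniform histogram.
```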
In this study, dynamic encryption techniques are explored as an image cipher method: S-boxes similar to the AES S-box are generated with the help of a private key belonging to the user, and images are encrypted or decrypted using these S-boxes. The study consists of two stages: the dynamic S-box generation method and the encryption-decryption method. S-boxes should have a non-linear structure, and for this reason the Knuth-Durstenfeld shuffle algorithm (K/DSA), one of the pseudo-random techniques, is used to generate S-boxes dynamically. The biggest advantage of this approach is that the inverse S-box is produced together with the S-box. Compared to the methods in the literature, the need to store the S-box is eliminated. Also, the fabr
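A minimal sketch of a key-dependent S-box built with the Knuth-Durstenfeld (Fisher-Yates) shuffle is shown below, with the inverse table derived in the same pass; Python's PRNG seeded by the key stands in for whatever pseudo-random generator the paper actually uses.

```python
import random

def generate_sbox(key: bytes):
    """Key-dependent 8-bit S-box via the Knuth-Durstenfeld (Fisher-Yates) shuffle."""
    rng = random.Random(key)            # seed the PRNG with the user's private key
    sbox = list(range(256))
    for i in range(255, 0, -1):         # Durstenfeld's in-place shuffle
        j = rng.randint(0, i)
        sbox[i], sbox[j] = sbox[j], sbox[i]

    inv_sbox = [0] * 256                # the inverse S-box comes for free, as noted above
    for i, v in enumerate(sbox):
        inv_sbox[v] = i
    return sbox, inv_sbox

# Usage sketch with a hypothetical key:
sbox, inv_sbox = generate_sbox(b"user-private-key")
assert all(inv_sbox[sbox[b]] == b for b in range(256))
```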
Fractal image compression offers desirable properties such as fast decoding and very good rate-distortion curves, but it suffers from a high encoding time. Fractal image compression requires partitioning the image into ranges. In this work, we introduce a good partitioning process by means of a merge approach, since some ranges are connected to others. This paper presents a method to reduce the encoding time of this technique by reducing the number of range blocks based on computing statistical measures between them. Experimental results on standard images show that the proposed method decreases the encoding time while keeping the visual quality of the results acceptable.
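The abstract does not name the statistical measures, but a sketch of the general idea, merging neighbouring range blocks whose block statistics nearly match, could look like this; the choice of mean and variance and the tolerance values are placeholders.

```python
import numpy as np

def block_stats(img, size=8):
    """Mean and variance of each size x size range block (illustrative)."""
    h, w = img.shape
    stats = {}
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            blk = img[r:r + size, c:c + size].astype(np.float64)
            stats[(r, c)] = (blk.mean(), blk.var())
    return stats

def mergeable(stats, a, b, mean_tol=2.0, var_tol=4.0):
    """Two neighbouring ranges can be merged when their statistics nearly match."""
    (ma, va), (mb, vb) = stats[a], stats[b]
    return abs(ma - mb) < mean_tol and abs(va - vb) < var_tol
```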
Fractal image compression represents an image using affine transformations. The main concern for researchers in the discipline of fractal image compression (FIC) is to decrease the encoding time needed to compress image data. The basic idea is that each portion of the image is similar to other portions of the same image. Many models have been developed for this process. The presence of fractals was initially noticed and handled using the Iterated Function System (IFS), which is used for encoding images. In this paper, a review of fractal image compression and its variants is presented along with other techniques. A summarized review of contributions is provided to assess the fulfillment of fractal image compression.
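At the core of every FIC variant is the same step: fitting a contractive affine map (a contrast scale and a brightness offset) that carries a domain block onto a range block. A minimal least-squares sketch is shown below; in practice the domain block is larger than the range block and is downsampled before the fit, a detail omitted here.

```python
import numpy as np

def affine_match(domain_block, range_block):
    """Least-squares contrast scale s and brightness offset o so that
    s * domain + o approximates the range block (the basic FIC mapping)."""
    d = domain_block.astype(np.float64).ravel()
    r = range_block.astype(np.float64).ravel()
    n = d.size
    denom = n * np.dot(d, d) - d.sum() ** 2
    s = 0.0 if denom == 0 else (n * np.dot(d, r) - d.sum() * r.sum()) / denom
    o = (r.sum() - s * d.sum()) / n
    err = np.mean((s * d + o - r) ** 2)   # matching error used to pick the best domain
    return s, o, err
```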
In this paper, we introduce a DCT-based steganographic method for grayscale images. The embedding approach is designed to reach an efficient trade-off among three conflicting goals: maximizing the amount of hidden message, minimizing distortion between the cover image and the stego-image, and maximizing the robustness of embedding. The main idea of the method is to create a safe embedding area in the middle- and high-frequency region of the DCT domain using a magnitude modulation technique. The magnitude modulation is applied using uniform quantization with magnitude Adder/Subtractor modules. The conducted tests indicated that the proposed method achieves high capacity and high preservation of the perceptual and statistical properties of the stego-image.
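A minimal sketch of magnitude modulation through uniform quantization of one mid-frequency DCT coefficient is given below; the chosen coefficient position and quantization step are assumptions for illustration and do not reproduce the paper's Adder/Subtractor modules exactly.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical choices: a single mid-frequency position and quantization step.
MID_FREQ = (3, 4)   # coefficient position inside the 8x8 DCT block
STEP = 12.0         # uniform quantization step for the magnitude

def embed_bit(block, bit):
    """Hide one bit in the magnitude of a mid-frequency DCT coefficient."""
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    c = coeffs[MID_FREQ]
    sign = -1.0 if c < 0 else 1.0
    # Uniform quantization of the magnitude; the parity of the level encodes the bit.
    level = int(round(abs(c) / STEP))
    if level % 2 != bit:
        level += 1            # adjust the level (the "Adder" role) to match the bit
    coeffs[MID_FREQ] = sign * level * STEP
    return idctn(coeffs, norm="ortho")

def extract_bit(block):
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    return int(round(abs(coeffs[MID_FREQ]) / STEP)) % 2
```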
In this paper, an algorithm is introduced through which we can embed more data than regular spatial-domain methods. The secret data are compressed using Huffman coding, and the compressed data are then embedded using a Laplacian sharpening method. Laplacian filters are used to determine effective hiding places; based on a threshold value, the positions with the highest values acquired from these filters are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding while increasing the security of the algorithm by hiding data in the places that have the highest edge values and are less noticeable.
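A minimal sketch of the two building blocks named above, selecting the strongest-edge positions with a Laplacian filter and estimating the Huffman code lengths of the secret data, is given below; the selection rule and the plain heap-based Huffman construction are generic illustrations, not the exact procedure of the paper.

```python
import heapq
import numpy as np
from scipy.ndimage import laplace

def strongest_edge_positions(gray, num_positions):
    """Coordinates with the largest Laplacian magnitude, i.e. the strong-edge
    locations where embedded changes are least noticeable."""
    response = np.abs(laplace(gray.astype(np.float64)))
    flat_idx = np.argpartition(response.ravel(), -num_positions)[-num_positions:]
    return np.column_stack(np.unravel_index(flat_idx, gray.shape))

def huffman_code_lengths(data: bytes):
    """Huffman code length per byte value, to estimate the compressed payload size.
    Assumes at least two distinct byte values in the data."""
    freq = {b: data.count(b) for b in set(data)}
    heap = [[f, [sym, ""]] for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]       # extend codes on the lighter subtree
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]       # extend codes on the heavier subtree
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: len(code) for sym, code in heap[0][1:]}
```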
Currently, with the huge increase in modern communication and network applications, the speed of transmission and the storage of data in compact forms are pressing issues. An enormous number of images is stored and shared every moment, especially in the social media realm, yet even with these marvelous applications the limited size of sent data remains the main restriction, and essentially all of these applications utilize the well-known Joint Photographic Experts Group (JPEG) standard techniques. In the same way, the construction of universally accepted standard compression systems is urgently required to play a key role in this immense revolution. This review is concerned with different