In this paper, an algorithm is introduced through which more data can be embedded than with regular spatial-domain methods. The secret data are compressed using Huffman coding, and the compressed data are then embedded using the Laplacian sharpening method.
We used Laplace filters to determine the effective hiding places; based on a threshold value, we then selected the places with the highest values produced by these filters for embedding the watermark. Our aim in this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding data in the places with the highest edge values, where changes are less noticeable.
The performance of the proposed algorithm is evaluated using detection techniques such as the Peak Signal-to-Noise Ratio (PSNR) to measure distortion, the similarity correlation between the cover image and the watermarked image, and the Bit Error Rate (BER) to measure robustness. The sensitivity of the watermarked image to attacks is also investigated. The types of attacks applied are Laplacian sharpening, median filtering, salt-and-pepper noise, and rotation. The results show that the proposed algorithm can resist Laplacian sharpening with any sharpening parameter k, and it also achieves good results against several other types of attacks.
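The site-selection step described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' exact implementation: the 3x3 kernel, the threshold rule, and LSB embedding at the selected sites are all assumptions.

```python
import numpy as np

# 3x3 Laplacian kernel for edge-response estimation (an assumed choice;
# the abstract does not fix the exact mask)
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def laplacian_response(img):
    """Absolute Laplacian response per pixel of a 2-D uint8 image."""
    p = np.pad(img.astype(np.int32), 1, mode="edge")
    h, w = img.shape
    resp = np.zeros((h, w), dtype=np.int32)
    for dy in range(3):
        for dx in range(3):
            resp += LAPLACIAN[dy, dx] * p[dy:dy + h, dx:dx + w]
    return np.abs(resp)

def embed_bits(img, bits, threshold):
    """Hide bits in the LSBs of pixels whose edge response exceeds threshold.

    Note: a real scheme must make site selection reproducible at extraction,
    e.g. by computing the response with the LSB plane masked out.
    """
    out = img.copy()
    ys, xs = np.where(laplacian_response(img) > threshold)
    assert len(bits) <= len(ys), "not enough high-edge pixels for the payload"
    for bit, y, x in zip(bits, ys, xs):
        out[y, x] = (out[y, x] & 0xFE) | bit
    return out
```

Extraction would read the LSBs at the same coordinates and then Huffman-decode the recovered bitstream.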

The study focuses on assessing the quality of several image enhancement methods applied to renal X-ray images. The enhancement methods included Imadjust, Histogram Equalization (HE), and Contrast Limited Adaptive Histogram Equalization (CLAHE). Image quality was calculated to compare the input images with the output images of these three enhancement techniques. Eight renal X-ray images were collected to test these methods. In general, X-ray images lack contrast and are acquired at low radiation dosage; this lack of quality can be remedied by an enhancement process. Three image quality metrics were used to assess the resulting images: the Naturalness Image Quality Evaluator (NIQE), the Perception-based Image Qual…
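As a rough illustration of one of these methods, global histogram equalization can be written in a few lines of NumPy; CLAHE applies the same remapping per tile, with histogram clipping and interpolation between the tile mappings. This is a generic sketch, not the implementation evaluated in the study:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of a non-constant 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero CDF value
    # remap each grey level through the normalized cumulative distribution
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

A low-contrast image whose grey levels occupy a narrow band is stretched to the full 0-255 range by this remapping.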

This paper proposes a new method for object detection in skin cancer images: the minimum spanning tree (MST) detection descriptor. This object detection descriptor builds on the structure of the minimum spanning tree constructed on the target training set of skin cancer images only. The detection of test objects relies on their distances to the closest edge of that tree. Our experiments show that the MST performs especially well on foggy images and in high-noise spaces for skin cancer images.
The proposed method of object detection in skin cancer images was implemented and tested on different skin cancer images, and we obtained very good results. The experiment showed that…
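The MST idea, building a minimum spanning tree over the training points and scoring a test object by its distance to the nearest tree edge, can be sketched as follows. This is an illustrative NumPy implementation using Prim's algorithm; feature extraction from the images is assumed to have already produced the point coordinates.

```python
import numpy as np

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph of the training points."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    best = d[0].copy()            # cheapest known connection to the tree
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(visited, np.inf, best)))
        edges.append((int(parent[j]), j))
        visited[j] = True
        closer = d[j] < best
        parent[closer & ~visited] = j
        best = np.minimum(best, d[j])
    return edges

def seg_dist(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def mst_score(p, points, edges):
    """Detection score of a test object: distance to the closest MST edge."""
    return min(seg_dist(p, points[i], points[j]) for i, j in edges)
```

A low score means the test object lies close to the structure learned from the target training set; thresholding this score yields the detection decision.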

The hero traditionally has such admirable traits as courage, fortitude, chivalry, and patriotism. In literary works, the hero is the leading character and the pivot around which all the characters and events revolve. The characteristics of the hero usually reflect the cultural values of his time. Because Man's attitudes towards himself and the world change in each age, different images of the hero emerge.
In Greek mythology, the hero is frequently favoured by the gods; therefore, he is himself semi-divine. The Greek hero is of princely birth and is endowed with a good physique, exceptional strength, skill in athletics and battle, energy, and eloquence, like Odysseus, the hero of the Odyssey, long…

Artificial intelligence (AI) is entering many fields of life nowadays, one of which is biometric authentication. Palm print recognition is considered a fundamental aspect of biometric identification systems due to the inherent stability, reliability, and uniqueness of palm print features, coupled with their non-invasive nature. In this paper, we develop an approach to identify individuals from palm print images using the Orange software, in which a hybrid of AI methods, Deep Learning (DL) and traditional Machine Learning (ML), is used to enhance the overall performance metrics. The system comprises three stages: pre-processing, feature extraction, and feature classification or matching. The SqueezeNet deep le…
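The matching stage of such a pipeline can be illustrated with a simple cosine-similarity matcher over deep features. This is a hedged sketch only: the function name is hypothetical, and the random vectors below merely stand in for features that a pretrained CNN such as SqueezeNet would extract.

```python
import numpy as np

def match_palm(probe, gallery_feats, gallery_ids):
    """Return the identity whose enrolled template is most cosine-similar to probe."""
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    return gallery_ids[int(np.argmax(g @ p))]
```

In the full system, a trained ML classifier (e.g., an SVM over the same deep features) can replace this nearest-template rule to improve the performance metrics.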

The earth's surface comprises different kinds of land cover, water resources, and soil, which create environmental conditions for varied animals, plants, and humans. Knowing the significant effects of land cover is crucial for long-term development, climate change modeling, and preserving ecosystems. In this research, the Google Earth Engine platform and freely available Landsat imagery were used to investigate the impact of the expansion and degradation of urbanized areas, watersheds, and vegetative cover on the land surface temperature in Baghdad from 2004 to 2021. Land cover indices such as the Normalized Difference Vegetation Index, Normalized Difference Water Index, and Normalized Difference Built-up Index (NDVI, NDWI, an…
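The three indices named above all share the same normalized-difference form. The sketch below shows the standard formulas on tiny hypothetical reflectance arrays; the band pairings follow common Landsat usage and are not taken from the paper itself.

```python
import numpy as np

def normalized_difference(a, b):
    """(A - B) / (A + B), guarding against zero denominators."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    s = a + b
    return np.where(s == 0, 0.0, (a - b) / np.where(s == 0, 1.0, s))

# Hypothetical per-pixel surface reflectances (Landsat 8 band roles assumed:
# red = B4, green = B3, nir = B5, swir1 = B6)
red, green = np.array([0.10, 0.30]), np.array([0.20, 0.10])
nir, swir1 = np.array([0.50, 0.20]), np.array([0.30, 0.40])

ndvi = normalized_difference(nir, red)    # vegetation: high for healthy plants
ndwi = normalized_difference(green, nir)  # water: high over open water
ndbi = normalized_difference(swir1, nir)  # built-up: high over impervious surfaces
```

Each index lies in [-1, 1], so thresholding them yields per-pixel land cover masks that can be compared across years.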

The research aims to identify the effect of a training program based on integrating futuristic thinking skills with classroom interaction patterns on mathematics teachers, in order to provide their students with creative solution skills. The research sample consisted of 31 teachers (15 in the experimental group and 16 in the control group). The researcher developed a measure of academic self-efficacy consisting of 39 items; its validity, reliability, coefficient of difficulty, and discriminatory power were estimated. To analyze the findings, the researcher adopted the Mann-Whitney (U) test and the effect size. The findings were as follows: there is a statistically significant difference at the significance leve…
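The Mann-Whitney U test used here compares two independent groups through ranks rather than raw scores. A minimal NumPy sketch of the U statistic and a rank-biserial effect size follows; it is a generic illustration (no tie correction or normal approximation), not the researcher's analysis script.

```python
import numpy as np

def average_ranks(v):
    """1-based ranks; tied values receive the average of their positions."""
    order = np.argsort(v, kind="stable")
    sv = v[order]
    ranks = np.empty(len(v))
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and sv[j + 1] == sv[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def mann_whitney(x, y):
    """U statistic and rank-biserial effect size for two independent samples."""
    n1, n2 = len(x), len(y)
    ranks = average_ranks(np.concatenate([x, y]))
    r1 = ranks[:n1].sum()                 # rank sum of the first group
    u1 = r1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)
    effect = 1 - 2 * u / (n1 * n2)        # rank-biserial correlation
    return u, effect
```

With completely separated groups, U is 0 and the effect size reaches its maximum of 1.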

A substantial concern in the exchange of confidential messages over the internet is the safe transmission of information. For example, consumers and producers of digital products are keen to know that those products are genuine and can be distinguished from worthless ones. The science of encryption can be defined as the technique of embedding data in an image, audio, or video file in a style that meets the safety requirements. Steganography is a branch of data concealment science that aims to reach a desired security level in the exchange of private commercial and military data. This research offers a novel technique for steganography based on hiding data inside the clusters that result from fuzzy clustering. T…
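The fuzzy clustering step that produces those clusters can be sketched with the standard fuzzy c-means update rules. This is a generic NumPy implementation of fuzzy c-means, shown only to make the clustering stage concrete; the paper's embedding rule inside the clusters is not reproduced here.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means: soft memberships u (n x c) and centres v (c x d)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m                             # fuzzified weights
        v = (w.T @ X) / w.sum(axis=0)[:, None] # weighted cluster centres
        d = np.linalg.norm(X[:, None] - v[None, :], axis=-1) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))         # inverse-distance membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, v
```

Pixels (or coefficients) with high membership in a chosen cluster would then serve as the hiding locations.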

This paper presents a parametric audio compression scheme intended for scalable audio coding applications; it is particularly well suited to operation at low rates, in the vicinity of 5 to 32 kbps. The model consists of two complementary components: Sines plus Noise (SN). The principal component of the system is an overlap-add analysis-by-synthesis sinusoidal model based on conjugate matching pursuits. Perceptual information about human hearing is explicitly included in the model by psychoacoustically weighting the pursuit metric. Once analyzed, the SN parameters are efficiently quantized and coded. Our informal listening tests demonstrated that our coder gave competitive performance to the state-of-the-art Helix™ Producer Plus 9 from…
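The sinusoidal-analysis idea can be illustrated crudely by picking the strongest spectral peaks of a windowed frame, which is the degenerate, unweighted cousin of the psychoacoustically weighted matching pursuit described above. The sketch below is only that simplified stand-in, not the paper's analysis-by-synthesis loop.

```python
import numpy as np

def top_sinusoids(frame, sr, k=3):
    """Estimate the k strongest sinusoid frequencies of one audio frame (Hz)."""
    win = np.hanning(len(frame))
    spec = np.abs(np.fft.rfft(frame * win))
    # keep local maxima only, then take the k largest of them
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i] > spec[i - 1] and spec[i] >= spec[i + 1]]
    peaks.sort(key=lambda i: spec[i], reverse=True)
    return sorted(i * sr / len(frame) for i in peaks[:k])
```

In a sines-plus-noise coder, the residual left after subtracting the resynthesized sinusoids is then modeled and coded as shaped noise.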

Aspect categorisation, and its utmost importance in the field of Aspect-Based Sentiment Analysis (ABSA), has encouraged researchers to improve topic model performance for modelling aspects into categories. In general, the majority of current methods implement parametric models requiring a pre-determined number of topics beforehand. However, this is not efficiently undertaken with unannotated text data, as they lack any class label. Therefore, the current work presents a novel non-parametric model drawing the number of topics from the semantic association present between opinion targets (i.e., aspects) and their respective expressed sentiments. The model incorporates the Semantic Association Rules (SAR) into the Hierarchical Dirichlet Proce…
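The association-rule ingredient rests on the classic support and confidence measures between aspects and sentiments, which a few lines of Python can make concrete. The transactions below are invented examples; the actual SAR construction and its coupling to the Hierarchical Dirichlet Process are not reproduced here.

```python
# Each "transaction" is one review sentence reduced to its aspect and
# sentiment tokens -- illustrative data only.
transactions = [
    {"battery", "negative"},
    {"battery", "negative"},
    {"screen", "positive"},
    {"battery", "screen", "negative"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """conf(lhs -> rhs) = support(lhs | rhs) / support(lhs)."""
    return support(lhs | rhs) / support(lhs)
```

High-confidence aspect-to-sentiment rules like this signal a strong semantic association, which the non-parametric model can use instead of a fixed topic count.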