Human skin detection, which is usually performed before further image processing, is the task of discovering skin-colored pixels and regions that may belong to human faces or limbs in videos or photos. Many computer vision approaches have been developed for skin detection. A skin detector usually transforms a given pixel into a suitable color space and then uses a skin classifier to label the pixel as skin or non-skin. A skin classifier defines the decision boundary of the skin-color class in that color space based on skin-colored training pixels. The purpose of this research is to build a skin detection system that distinguishes between skin and non-skin pixels in colored still pictures. This is performed by introducing a metric that measures the distance of a pixel's color to the skin tones. Results showed that the YCbCr color space performed better skin pixel detection than regular Red Green Blue (RGB) images because it isolates the overall energy of an image in the luminance band. The RGB color space poorly classified images with wooden backgrounds or objects. A histogram-based image segmentation scheme is then utilized to distinguish between the skin and non-skin pixels. The need for a compact skin model representation should stimulate the development of parametric models of skin detection, which is a future research direction.
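As a rough illustration of the distance-based classification described above, the sketch below converts an image to YCbCr with OpenCV and thresholds each pixel's Euclidean distance in the Cb-Cr plane to a reference skin tone; the centroid value, distance threshold, and function name are illustrative assumptions, not the paper's actual parameters.

    # Minimal sketch: label skin pixels by their CbCr distance to an assumed
    # skin-tone centroid. Centroid and threshold values are hypothetical.
    import cv2
    import numpy as np

    def skin_mask_ycbcr(bgr_image, centroid=(120.0, 155.0), max_dist=20.0):
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
        cr = ycrcb[:, :, 1].astype(np.float32)   # OpenCV orders channels Y, Cr, Cb
        cb = ycrcb[:, :, 2].astype(np.float32)
        dist = np.sqrt((cb - centroid[0]) ** 2 + (cr - centroid[1]) ** 2)
        return (dist <= max_dist).astype(np.uint8) * 255   # 255 marks skin pixels

    # mask = skin_mask_ycbcr(cv2.imread("photo.jpg"))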
Natural gas and oil are among the mainstays of the global economy. However, many issues surround the pipelines that transport these resources, including aging infrastructure, environmental impacts, and vulnerability to sabotage. Such issues can result in leakages in these pipelines, requiring significant effort to detect and pinpoint their locations. The objective of this project is to develop and implement a method for detecting oil spills caused by leaking oil pipelines using aerial images captured by a drone equipped with a Raspberry Pi 4. Using the Message Queuing Telemetry Transport Internet of Things (MQTT IoT) protocol, the acquired images and the Global Positioning System (GPS) coordinates of the images' acquisition are ...
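The abstract is cut off before describing the transmission step, but as a hedged sketch of how images and GPS fixes might be published over MQTT from the Raspberry Pi, the snippet below uses the paho-mqtt client; the broker address, topic names, payload layout, and coordinates are assumptions, not details from the paper.

    # Hedged sketch: publish a captured image and its GPS fix over MQTT.
    # Broker, topics, and payload format are illustrative assumptions.
    import json
    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"                   # hypothetical broker address
    client = mqtt.Client()                          # paho-mqtt 1.x style constructor
    client.connect(BROKER, 1883, keepalive=60)

    with open("frame_0001.jpg", "rb") as f:         # image captured by the drone camera
        client.publish("pipeline/drone1/image", payload=f.read(), qos=1)

    gps_fix = {"lat": 30.5085, "lon": 47.7804, "alt_m": 50.0}   # illustrative fix
    client.publish("pipeline/drone1/gps", payload=json.dumps(gps_fix), qos=1)
    client.disconnect()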
In this work a fragile watermarking scheme is presented. The scheme is applied to digital color images in the spatial domain. The image is divided into blocks, and each block has its authentication mark embedded in it, so it is possible to determine which parts of the image are authentic and which parts have been modified. This authentication is carried out without the need for the original image. The results show that the quality of the watermarked image remains very good and that the watermark survives some types of unintended modification, such as processing with familiar compression software like WinRAR and ZIP.
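The abstract does not spell out how the per-block authentication mark is generated or embedded, so the following is only a generic sketch of a block-wise fragile watermark in the spatial domain: each block's mark is a hash of its pixel data (with the least-significant bits cleared) written back into those LSBs, so any pixel change inside a block invalidates its mark. The block size, hash function, and embedding rule are assumptions, not the paper's scheme.

    # Generic block-wise fragile watermark sketch (assumed 8x8 blocks,
    # SHA-256 mark, LSB embedding).
    import hashlib
    import numpy as np

    def embed_block_mark(block):
        """Return a copy of an 8x8 color block with its mark embedded in the LSBs."""
        carrier = block & 0xFE                       # clear LSBs before hashing
        digest = hashlib.sha256(carrier.tobytes()).digest()
        bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
        flat = carrier.reshape(-1)
        n = min(flat.size, bits.size)
        flat[:n] |= bits[:n]                         # write the mark into the LSBs
        return flat.reshape(block.shape)

    def verify_block_mark(block):
        """Return True if the block's LSB mark still matches its content."""
        carrier = block & 0xFE
        digest = hashlib.sha256(carrier.tobytes()).digest()
        bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
        lsb = block.reshape(-1) & 0x01
        n = min(lsb.size, bits.size)
        return bool(np.all(lsb[:n] == bits[:n]))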
Kidney tumors are of different types with different characteristics and remain a challenge in the field of biomedicine. It is very important to detect and classify a tumor at an early stage so that appropriate treatment can be planned. Accurate estimation of kidney tumor volume is essential for clinical diagnoses and therapeutic decisions related to renal diseases. The main objective of this research is to use Computer-Aided Diagnosis (CAD) algorithms to help the early detection of kidney tumors and to address the challenges of accurate kidney tumor volume estimation caused by extensive variations in kidney shape, size, and orientation across subjects.
In this paper, we have tried to implement an automated segmentation ...
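To make the volume-estimation goal above concrete, here is a minimal sketch of how a tumor volume could be read off an automated segmentation result: count the voxels carrying the tumor label and multiply by the voxel volume given by the scan spacing. The label value and spacing used here are illustrative assumptions.

    # Hedged sketch: tumor volume from a labelled 3-D segmentation mask.
    import numpy as np

    def tumor_volume_ml(mask, spacing_mm=(3.0, 0.78, 0.78), tumor_label=2):
        """Estimate tumor volume in millilitres (assumed label and spacing)."""
        voxel_mm3 = float(np.prod(spacing_mm))       # mm^3 per voxel
        n_voxels = int(np.count_nonzero(mask == tumor_label))
        return n_voxels * voxel_mm3 / 1000.0         # 1 mL = 1000 mm^3

    # Toy example: a 3-voxel "tumor" with the assumed spacing gives ~0.0055 mL.
    # mask = np.zeros((4, 4, 4), dtype=np.uint8); mask[1, 1, 1:4] = 2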
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital media ...
Many approaches of different complexity already exist for edge detection in color images. Nevertheless, the question remains of how different the results are when computationally costly techniques are employed instead of simple ones. This paper presents a comparative study of two approaches to color edge detection that reduce noise in images. The approaches are based on the Sobel operator and the Laplace operator. Furthermore, an efficient algorithm for implementing the two operators is presented. The operators have been applied to real images, and the results are presented in this paper. It is shown that the quality of the results increases when the second-derivative operator (Laplace operator) is used, and that noise is reduced well.
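As a rough companion to the comparison described above, the sketch below applies both operators to a color image with OpenCV; the Gaussian pre-smoothing, kernel sizes, and per-channel maximum are illustrative choices rather than the paper's exact algorithm.

    # Hedged sketch: Sobel (first derivative) vs. Laplacian (second derivative)
    # edges on a color image; parameters are illustrative.
    import cv2
    import numpy as np

    img = cv2.imread("test.jpg")                    # BGR color image
    blur = cv2.GaussianBlur(img, (3, 3), 0)         # mild pre-smoothing

    gx = cv2.Sobel(blur, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(blur, cv2.CV_32F, 0, 1, ksize=3)
    sobel_mag = np.sqrt(gx ** 2 + gy ** 2).max(axis=2)   # strongest channel response

    lap = cv2.Laplacian(blur, cv2.CV_32F, ksize=3)
    lap_mag = np.abs(lap).max(axis=2)

    cv2.imwrite("edges_sobel.png", cv2.convertScaleAbs(sobel_mag))
    cv2.imwrite("edges_laplace.png", cv2.convertScaleAbs(lap_mag))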
Mammography is at present one of the available methods for early detection of masses or abnormalities related to breast cancer. The most common abnormalities that may indicate breast cancer are masses and calcifications. The challenge lies in early and accurate detection to counter the development of breast cancer, which affects more and more women throughout the world. Breast cancer is diagnosed at advanced stages with the help of digital mammogram images. Masses appear in a mammogram as fine, granular clusters, which are often difficult to identify in a raw mammogram. The incidence of breast cancer in women has increased significantly in recent years.
This paper proposes a computer-aided diagnostic system for the extraction ...
The meniscus has a crucial function in human anatomy, and Magnetic Resonance Imaging (M.R.I.) plays an essential role in meniscus assessment. It is difficult to identify cartilage lesions using typical image processing approaches because M.R.I. data is so diverse. An M.R.I. data sequence comprises numerous images, and the attribute region we are searching for may differ from one image to the next in the series. Feature extraction therefore becomes more complicated, and traditional image processing in particular becomes very complex. In traditional image processing, a human tells the computer what should be there, whereas a deep learning (D.L.) algorithm automatically extracts the features of what is already there. The surface changes become valuable when ...
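To illustrate the learned-features point above, the sketch below defines a small convolutional network that extracts features from an M.R.I. slice automatically instead of relying on hand-designed filters; the architecture, single-channel input, and two-class output are assumptions, not the network used in the paper.

    # Hedged sketch: a tiny CNN that learns features from M.R.I. slices
    # (architecture and class count are assumed for illustration).
    import torch
    import torch.nn as nn

    class MeniscusNet(nn.Module):
        def __init__(self, num_classes=2):           # e.g. lesion vs. no lesion
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
            )

        def forward(self, x):                         # x: (batch, 1, H, W) slice
            return self.classifier(self.features(x))

    # logits = MeniscusNet()(torch.randn(4, 1, 256, 256))   # toy forward pass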