Image fusion gathers the important information from an array of input images and combines it into a single output image that is more meaningful and usable than any of the individual inputs. Image fusion improves the quality and applicability of data, and the accuracy of the fused image depends on the application. It is widely used in smart robotics, audio-camera fusion, photonics, system control and output, construction and inspection of electronic circuits, complex computer and software diagnostics, and smart assembly-line robots. This paper provides a literature review of image fusion techniques in the spatial and frequency domains, such as averaging, min-max, block substitution, Intensity-Hue-Saturation (IHS), Principal Component Analysis (PCA), pyramid-based techniques, and transform-based methods. Quality metrics for the quantitative analysis of these approaches are also discussed.
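As a minimal sketch of the two simplest spatial-domain methods named above (averaging and max-selection fusion), assuming two pre-registered grayscale inputs and using NumPy; the function names are illustrative, not taken from the paper:

```python
import numpy as np

def average_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Pixel-wise average of two registered grayscale images."""
    mean = (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0
    return mean.astype(img_a.dtype)

def max_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Pixel-wise maximum: keeps the stronger response at each pixel."""
    return np.maximum(img_a, img_b)
```

Averaging suppresses noise but can blur detail, while max-selection preserves the strongest local response, which is why surveys typically evaluate both against quality metrics such as PSNR or entropy.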
With the continuous progress of image retrieval technology, quickly finding a required image in a large amount of image data has become an important issue. Convolutional neural networks (CNNs) have been used for image retrieval; however, many CNN-based retrieval systems have poor ability to express image features. Content-Based Image Retrieval (CBIR) is a method of finding desired images in image databases, but it suffers from low accuracy when retrieving images from large-scale databases. This paper proposes a system that improves a convolutional neural network for greater accuracy, together with a machine learning tool for automatic image retrieval. It includes two phases …
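The abstract does not specify the architecture, but a common CBIR baseline, assumed here purely for illustration, uses a pretrained CNN as a fixed feature extractor and ranks database images by cosine similarity:

```python
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained ResNet-18 with its classifier removed acts as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> np.ndarray:
    """Map an image file to an L2-normalized 512-d feature vector."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        v = backbone(x).squeeze(0).numpy()
    return v / np.linalg.norm(v)

def rank(query_vec: np.ndarray, db_vecs: np.ndarray) -> np.ndarray:
    """Indices of db_vecs (one row per image) sorted by cosine similarity."""
    return np.argsort(db_vecs @ query_vec)[::-1]
```

A two-phase system along these lines would typically build the db_vecs matrix offline (indexing phase) and run embed plus rank at query time (retrieval phase).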
In this paper, membrane-computing-based image segmentation, both region-based and edge-based, is proposed for medical images, involving two types of neighborhood relations between pixels. These neighborhood relations, namely the 4-adjacency and 8-adjacency of a membrane computing approach, construct a family of tissue-like P systems for segmenting actual 2D medical images in a constant number of steps; the two types of adjacency were compared on different hardware platforms. The process involves the generation of membrane-based segmentation rules for 2D medical images. The rules are written in the P-Lingua format and appended to the input image for visualization. The findings show that the neighborhood relations between pixels …
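Tissue-like P systems apply their rules in parallel, which is hard to show compactly; the sketch below instead illustrates the 4-adjacency versus 8-adjacency distinction with a conventional sequential region-growing segmenter, a stand-in for the paper's P-Lingua rules rather than a reproduction of them:

```python
import numpy as np
from collections import deque

# Neighbor offsets for the two adjacency relations compared in the paper.
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def region_grow(img, seed, tol, neighbors=N4):
    """Grow a region from `seed`, absorbing neighbors whose intensity is
    within `tol` of the seed; pass neighbors=N8 for 8-adjacency."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    ref = float(img[seed])
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in neighbors:
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - ref) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

With 8-adjacency, a region can spread across diagonal pixel contacts that 4-adjacency treats as boundaries, one reason the two relations can produce different segmentations of the same image.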
The study discusses the marketing of the profile of electoral candidates and politicians. In the technological era, the image that takes root in the minds of voters has become more important than ideologies or party affiliations: voters no longer pay attention to concepts such as liberal, conservative, right-wing, or secular, while their interest in the candidates themselves has increased. Consultants and image experts are able to make a dramatic shift in electoral roles and, as specialists in the electoral arena, dominate the roles of political parties.
The importance of the study comes from the fact that, in our contemporary world, the image exceeds its normal framework to become political and cultural …
JPEG is the most popular image compression and encoding technique, and it is widely used in many applications (images, videos, and 3D animations). Researchers are therefore keen to develop this widespread technique so that images can be compressed at higher ratios while preserving as much image quality as possible. For this reason, this paper introduces a developed JPEG codec based on a fast DCT that removes most of the zeros in a transformed block while keeping their positions. Additionally, arithmetic coding is applied rather than Huffman coding. The results show that the proposed developed JPEG algorithm achieves better image quality than traditional JPEG techniques.
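A minimal sketch of the transform-and-quantize stage such a codec builds on, using a standard DCT and the standard JPEG luminance table; the paper's fast DCT variant and its arithmetic coder are not reproduced, and the zero-removal step shown is an assumption about the described scheme:

```python
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (quality 50).
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def encode_block(block: np.ndarray):
    """DCT + quantization of one 8x8 block, then drop zero coefficients
    while recording their positions for the entropy-coding stage."""
    coeffs = np.round(dctn(block.astype(np.float64) - 128.0, norm="ortho") / Q50)
    flat = coeffs.flatten()           # a full codec would zigzag-scan here
    positions = np.flatnonzero(flat)  # where the nonzero coefficients live
    return flat[positions], positions
```

The (values, positions) pair would then be passed to an arithmetic coder, which, unlike Huffman coding, can assign fractional bits per symbol and usually compresses slightly better.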
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method included five enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization, and Fusion. The Normalization process standardized the pixel intensities, which facilitated the subsequent enhancement stages. The Histogram Equalization technique then increased the contrast of the images. Furthermore, the Binarization and Skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one-pixel-wide ridge lines …
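A minimal sketch of the first four stages using scikit-image; Otsu thresholding is assumed for the binarization step since the abstract does not name a method, and the fusion stage is omitted:

```python
import numpy as np
from skimage import exposure, filters, morphology

def enhance_fingerprint(img: np.ndarray) -> np.ndarray:
    """Normalization -> histogram equalization -> binarization -> skeleton."""
    # 1. Normalization: standardize intensities, then rescale to [0, 1].
    norm = (img - img.mean()) / (img.std() + 1e-8)
    norm = (norm - norm.min()) / (norm.max() - norm.min())
    # 2. Histogram equalization: spread the intensity distribution.
    eq = exposure.equalize_hist(norm)
    # 3. Binarization: Otsu threshold separates ridges from valleys
    #    (ridges assumed darker than valleys).
    binary = eq < filters.threshold_otsu(eq)
    # 4. Skeletonization: thin ridges down to one-pixel-wide lines.
    return morphology.skeletonize(binary)
```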
The fields of the image and its kinetic signs constitute a semantic presence for semiotic communication and an expansion of the dialectical bond between signifiers and their signifieds, a bond that the directorial vision employs to produce concealed significations whose transitional essence passes through ideas as the givens of the performance. Visual encoding seeks to project a duality of meaning within the multiple fields of theatrical performance, and to understand the meaning emerging from these visual encodings, the need arose to study how these encodings are formed and how they …
Many image processing and machine learning applications require adequate image feature selection and representation. This can be achieved by imitating the human ability to process visual information; one such ability is that the human eye is much more sensitive to changes in intensity (luminance) than to color information. In this paper, we present how to exploit luminance information, organized in a pyramid structure, to transfer properties between two images. Two applications demonstrate the results of using the luminance channel in the similarity metric between two images: image generation, where a target image is generated from a source image, and image colorization, where color information is browsed from …
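A minimal sketch of the luminance-pyramid idea, assuming CIELAB luminance and a Gaussian pyramid from scikit-image; the per-level distance below is an illustrative stand-in, not the paper's actual similarity metric:

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.transform import pyramid_gaussian

def luminance_pyramid(rgb: np.ndarray, levels: int = 4):
    """Gaussian pyramid of the L (luminance) channel in CIELAB space."""
    lum = rgb2lab(rgb)[..., 0]
    return list(pyramid_gaussian(lum, max_layer=levels - 1))

def luminance_distance(pyr_a, pyr_b) -> float:
    """Multi-scale dissimilarity: sum of per-level mean squared errors
    (requires both input images to share the same dimensions)."""
    return sum(float(np.mean((a - b) ** 2)) for a, b in zip(pyr_a, pyr_b))
```

Comparing luminance across pyramid levels lets coarse levels capture overall lighting while fine levels capture texture, which is what makes the channel useful for both generation and colorization.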