The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This transformation is opening new frontiers in fields including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing authentic images from deepfakes, a task that has become critically important in a world increasingly reliant on digital media. Our investigation rigorously assesses the capabilities of these advanced models in identifying and differentiating manipulated imagery. We explore how they process visual data, their effectiveness in recognizing subtle alterations, and their potential for safeguarding against misleading representations. The implications of our findings are far-reaching, affecting security, media integrity, and the trustworthiness of information on digital platforms. The study also sheds light on the strengths and limitations of current LLMs in handling complex tasks such as image verification, thereby contributing valuable insights to the ongoing discourse on AI ethics and digital media reliability.
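As a concrete illustration of the kind of query used when probing such models, the short Python sketch below sends an image to a multimodal chat endpoint and asks for a real-versus-fake verdict. The OpenAI client, the gpt-4o model name, and the prompt wording are illustrative assumptions only and do not represent the study's actual evaluation protocol.

```python
# Hedged sketch: querying a vision-language chat model for an authenticity verdict.
# The model name, prompt, and client usage are assumptions, not the paper's protocol.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_authenticity(image_path: str, model: str = "gpt-4o") -> str:
    """Ask a multimodal chat model whether an image looks authentic or manipulated."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Is this image an authentic photograph or a manipulated/"
                         "AI-generated image? Answer REAL or FAKE with one short reason."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example (hypothetical file name):
# print(judge_authenticity("suspect_photo.jpg"))
```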
In this paper, RBF-based multistage auto-encoders are used to detect IDS attacks. RBF networks have numerous applications in a variety of real-life settings. The proposed technique consists of two parts: a multistage auto-encoder and an RBF network. The multistage auto-encoder is applied to select the most salient and sensitive features from the input data. The features selected by the multistage auto-encoder are fed as input to the RBF network, which is trained to classify the input data into two labels: attack or no attack. The experiment was carried out using MATLAB2018 on a dataset comprising 175,341 cases, each with 42 features, and validated using 82,332 cases. To the knowledge of the authors, the approach developed here is applied for the first time to detect IDS attacks.
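A minimal Python sketch of the described pipeline is given below, assuming two stacked auto-encoder stages for feature compression and an RBF-kernel SVM standing in for the RBF classifier; the original work used MATLAB2018, and the synthetic data here merely mirrors the 42-feature layout mentioned in the abstract.

```python
# Minimal sketch of a multistage auto-encoder + RBF classifier for attack / no-attack
# labelling. Synthetic data; an RBF-kernel SVM stands in for the RBF network.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_autoencoder_stage(x, code_dim, epochs=50):
    """Train one auto-encoder stage on tensor x and return its encoder."""
    in_dim = x.shape[1]
    enc = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
    dec = nn.Linear(code_dim, in_dim)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(x)), x)
        loss.backward()
        opt.step()
    return enc

# Synthetic stand-in for the 42-feature intrusion-detection records.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 42)).astype("float32")
y_train = rng.integers(0, 2, size=2000)        # 1 = attack, 0 = no attack
X_test = rng.normal(size=(500, 42)).astype("float32")
y_test = rng.integers(0, 2, size=500)

xt = torch.from_numpy(X_train)
enc1 = train_autoencoder_stage(xt, 20)                   # stage 1: 42 -> 20 features
enc2 = train_autoencoder_stage(enc1(xt).detach(), 8)     # stage 2: 20 -> 8 features

with torch.no_grad():
    z_train = enc2(enc1(xt)).numpy()
    z_test = enc2(enc1(torch.from_numpy(X_test))).numpy()

clf = SVC(kernel="rbf").fit(z_train, y_train)            # RBF stage: final classifier
print("accuracy:", accuracy_score(y_test, clf.predict(z_test)))
```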
The study includes the isolation and identification of Candida spp. causing urinary tract infections (UTIs) in patients attending Al-Yarmouk Hospital.
Community detection is useful for better understanding the structure of complex networks. It aids in the extraction of the required information from such networks and has a vital role in fields ranging from healthcare to regional geography, economics, human interactions, and mobility. Detecting community structure involves partitioning a complex network into groups of nodes with dense connections within each community and sparse connections to other communities. In the literature, two main measures, namely Modularity (Q) and Normalized Mutual Information (NMI), have been used for evaluating the validity and quality of detected community structures. Although many optimization algo
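As a brief illustration of the two measures named above, the following sketch computes Q for a partition found by a modularity-maximising heuristic and compares it against known labels with NMI; the karate-club graph and the greedy heuristic are illustrative choices, not the paper's data or algorithm.

```python
# Illustration of the two evaluation measures: modularity (Q) and NMI.
# The graph, detection heuristic, and ground-truth labels are toy examples.
import networkx as nx
from networkx.algorithms import community
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()

# Detect communities with a modularity-maximising greedy heuristic.
detected = community.greedy_modularity_communities(G)

# Modularity Q of the detected partition (dense inside, sparse between).
Q = community.modularity(G, detected)

# NMI compares the detected labels against known ground-truth labels.
true_labels = [0 if G.nodes[v]["club"] == "Mr. Hi" else 1 for v in G]
pred_labels = [next(i for i, c in enumerate(detected) if v in c) for v in G]
nmi = normalized_mutual_info_score(true_labels, pred_labels)

print(f"Q = {Q:.3f}, NMI = {nmi:.3f}")
```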
Coronavirus disease became a major public health issue in 2019. Because it is transmitted through contact, it spreads rapidly. The use of a face mask is among the most efficient methods of preventing the transmission of the Covid-19 virus; wearing a face mask alone can cut the chance of catching the virus by over 70%. Consequently, the World Health Organization (WHO) advised wearing masks in crowded places as a precautionary measure. Because of the incorrect use of face masks, the illness has spread rapidly in some locations. To address this challenge, a reliable mask monitoring system is needed. Numerous government entities are attempting to make wearing a face mask mandatory; this process can be facilitated by using face m
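A hedged sketch of the kind of mask-monitoring pipeline such a system implies is shown below: OpenCV's stock face detector locates faces in a frame, and a user-supplied binary classifier would then label each crop as mask or no mask; the paper's own detection model is not reproduced here.

```python
# Sketch of a face-mask monitoring step: detect faces, then classify each crop.
# The mask classifier is a user-supplied callable; this is not the paper's model.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def check_frame(frame, mask_classifier):
    """Return (x, y, w, h, has_mask) for every face detected in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        crop = cv2.resize(frame[y:y + h, x:x + w], (128, 128))
        results.append((x, y, w, h, mask_classifier(crop)))
    return results
```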
An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images. The objective of Content-Based Image Retrieval (CBIR) methods is essentially to extract, from large image databases, a specified number of images similar in visual and semantic content to a so-called query image. The researchers developed a new retrieval mechanism based mainly on two procedures. The first procedure relies on extracting the statistical features of both the original and the traditional image by using the histogram and statistical characteristics (mean, standard deviation). The second procedure relies on the T-
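A minimal sketch of the first, statistical procedure is given below, assuming a grey-level histogram combined with the mean and standard deviation as the feature vector and Euclidean distance for ranking; the file paths and bin count are illustrative assumptions.

```python
# Sketch of histogram + (mean, std) features with Euclidean-distance ranking.
# Paths and the number of histogram bins are illustrative assumptions.
import numpy as np
from PIL import Image

def statistical_features(path, bins=64):
    """Grey-level histogram plus mean and standard deviation of one image."""
    pixels = np.asarray(Image.open(path).convert("L"), dtype=np.float64).ravel()
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 255), density=True)
    return np.concatenate([hist, [pixels.mean() / 255.0, pixels.std() / 255.0]])

def retrieve(query_path, database_paths, top_k=5):
    """Rank database images by feature distance to the query image."""
    q = statistical_features(query_path)
    dists = [(np.linalg.norm(q - statistical_features(p)), p) for p in database_paths]
    return [p for _, p in sorted(dists)[:top_k]]
```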
The results show the presence of metals such as copper, iron, cadmium, lead, and zinc in most of the examined samples. The highest concentrations reached (2.26, 40.82, 282.5, 31.02, 19.26, 4.34) parts per million (ppm) in pasta hot (Zer brand), Indomie with chicken, granule (Zer brand), brand (Zer brand), and rice (Mahmood brand), respectively, with nickel present in spaghetti (Zer brand) and granule (Zer brand) at concentrations of 4.34 ppm and 1.06 ppm, respectively.
The results for the cereals group and its products show that two kinds of fungi, Aspergillus spp. and Penicillium spp., were found in rice (Mahmood brand), with counts reaching 1.5×10³ colony-forming units per gram (CFU/g), while Bacillus cereus and Staphylococcus aureus were isolated
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing video image content. Extracting edge information is the essential process in most pictorial pattern recognition problems. A new edge detection technique is introduced in this research for detecting boundaries.
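Since the paper's new detector is not detailed in this excerpt, the sketch below shows only a standard gradient-magnitude (Sobel) baseline of the kind such boundary detectors are typically compared against.

```python
# Standard Sobel gradient-magnitude edge baseline (not the paper's new method).
import numpy as np
from scipy import ndimage

def sobel_edges(image, threshold=50.0):
    """Binary edge map from the Sobel gradient magnitude of a greyscale image."""
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)   # horizontal gradient
    gy = ndimage.sobel(img, axis=0)   # vertical gradient
    return np.hypot(gx, gy) > threshold
```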
The selection of typical lossy techniques for encoding edge video images is also discussed in this research. The discussion concentrates on the Block-Truncation Coding technique and the Discrete Cosine Transform (DCT) coding technique. In order to reduce the volume of pictorial data which one may need to store or transmit,
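As a compact illustration of the Block-Truncation Coding idea discussed above, the numpy sketch below reduces each 4x4 block to a one-bit map plus two reconstruction levels that preserve the block's mean and standard deviation; it shows the classical method in general, not the paper's exact coder.

```python
# Classical Block Truncation Coding: 1 bit per pixel plus two levels per block,
# chosen so that the block mean and standard deviation are preserved.
import numpy as np

def btc_block(block):
    """Encode and immediately reconstruct one block."""
    m, s = block.mean(), block.std()
    bitmap = block >= m                       # 1-bit plane
    q, n = int(bitmap.sum()), block.size
    if q == 0 or q == n:                      # flat block: both levels equal the mean
        return np.full(block.shape, m)
    a = m - s * np.sqrt(q / (n - q))          # reconstruction level for "0" pixels
    b = m + s * np.sqrt((n - q) / q)          # reconstruction level for "1" pixels
    return np.where(bitmap, b, a)

def btc(image, bs=4):
    """Apply BTC block by block to a greyscale image whose sides are multiples of bs."""
    img = image.astype(np.float64)
    out = np.empty_like(img)
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            out[i:i + bs, j:j + bs] = btc_block(img[i:i + bs, j:j + bs])
    return out
```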