The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This transformation is opening new frontiers in fields including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing authentic images from deepfakes, a task that has become critically important in a world increasingly reliant on digital media. Our investigation rigorously assesses the capabilities of these advanced LLMs in identifying and differentiating manipulated imagery: how they process visual data, how effectively they recognize subtle alterations, and how well they can safeguard against misleading representations. The implications of our findings are far-reaching, affecting security, media integrity, and the trustworthiness of information on digital platforms. The study also sheds light on the strengths and limitations of current LLMs in handling complex tasks such as image verification, thereby contributing valuable insights to the ongoing discourse on AI ethics and digital media reliability.
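As a rough illustration of the kind of probe such an assessment involves, the sketch below sends an image to a vision-capable chat model and asks whether it looks authentic or manipulated. The model name (gpt-4o), the prompt wording, and the use of the OpenAI Python SDK are illustrative assumptions, not the study's actual evaluation protocol.

```python
# Minimal sketch: ask a multimodal chat model whether an image appears
# manipulated.  Model name and prompt are assumptions for illustration.
import base64
from openai import OpenAI

def ask_is_image_manipulated(image_path: str) -> str:
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Is this photograph authentic or manipulated (e.g. a deepfake)? "
                         "Answer 'authentic' or 'manipulated' and justify briefly."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example: print(ask_is_image_manipulated("suspect_photo.jpg"))
```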
A G-system composed of three isolates, G3 (Bacillus), G12 (Arthrobacter), and G27 (Brevibacterium), was used to detect the mutagenicity of the anticancer drug cyclophosphamide (CP) under conditions similar to those used for the standard mutagen, nitrosoguanidine (NTG). CP affected the survival fraction of the isolates after treatment for 15 min with gradually increasing concentrations, but to a lesser extent than NTG. The mutagenic effect of CP was higher than that of NTG when streptomycin resistance was used as a genetic marker, but the situation was reversed when rifampicin resistance was used as a reporter marker. The latter effect appeared upon recording the mutagen efficiency (i.e., number of induced mutants per microgram of mutagen). Measuring the R
The prevalence of Internet of Things (IoT) applications in many fields of human life, such as the economy, social life, and healthcare, has made IoT devices targets for many cyber-attacks. Moreover, the resource limitations of IoT devices, such as tiny battery power, small storage capacity, and low computation speed, make their security a big challenge for researchers. Therefore, in this study, a new technique is proposed, called the intrusion detection system based on spike neural network and decision tree (IDS-SNNDT). In this method, the DT is used to select the optimal samples that are fed as input to the SNN, while the SNN uses the non-leaky integrate-and-fire (NLIF) neuron model in order to reduce latency and minimize devices
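A minimal sketch of the two-stage idea, under the assumption that the decision tree acts as a high-confidence sample filter in front of a layer of non-leaky integrate-and-fire neurons; the toy data, confidence threshold, network size, and spike parameters below are illustrative, not the values used in the study.

```python
# Sketch of the DT-then-SNN pipeline: a decision tree filters the traffic
# records, and only the selected samples are passed to a layer of non-leaky
# integrate-and-fire (NLIF) neurons.  All parameters are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def select_samples_with_tree(X, y, confidence=0.9):
    """Keep only samples the decision tree classifies with high confidence."""
    tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)
    proba = tree.predict_proba(X).max(axis=1)
    mask = proba >= confidence
    return X[mask], y[mask]

def nlif_spike_counts(X, weights, threshold=1.0, steps=20):
    """Non-leaky integrate-and-fire: the membrane potential accumulates the
    weighted input every step (no decay) and emits a spike at the threshold."""
    potential = np.zeros((X.shape[0], weights.shape[1]))
    spikes = np.zeros_like(potential)
    drive = X @ weights / steps          # constant input current per step
    for _ in range(steps):
        potential += drive               # integrate, no leak term
        fired = potential >= threshold
        spikes += fired
        potential[fired] = 0.0           # reset after firing
    return spikes

# Toy usage with random data standing in for IoT traffic features.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # hypothetical attack label
X_sel, y_sel = select_samples_with_tree(X, y)
W = rng.normal(size=(10, 2))                 # one output neuron per class
pred = nlif_spike_counts(X_sel, W).argmax(axis=1)  # class = most active neuron
```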
Sensibly highlighting the hidden structures of many real-world networks has attracted growing interest and triggered a vast array of techniques for what is nowadays called the community detection (CD) problem. Non-deterministic metaheuristics have proved able to competitively transcend the limits of their deterministic heuristic counterparts in solving the community detection problem. Despite the increasing interest, most existing metaheuristic-based community detection (MCD) algorithms reflect one traditional design language. Generally, they tend to explicitly project some features of real communities into different definitions of single- or multi-objective optimization functions. The design of the other operators, however, remains canonical, lacking any inte
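To make the "optimization function" framing concrete, the sketch below evaluates one such single-objective function, Newman's modularity, and improves a random partition with a tiny hill-climbing loop. This is only a generic illustration of the metaheuristic idea, not the algorithm proposed here; the graph, community count, and move operator are assumptions.

```python
# Generic illustration: optimise modularity (a common MCD objective) with a
# naive hill-climbing metaheuristic on a standard benchmark graph.
import random
import networkx as nx
from networkx.algorithms.community import modularity

def to_communities(labels, k):
    comms = [set() for _ in range(k)]
    for node, c in labels.items():
        comms[c].add(node)
    return [c for c in comms if c]           # drop empty communities

def hill_climb(G, k=4, iters=2000, seed=0):
    rng = random.Random(seed)
    labels = {n: rng.randrange(k) for n in G.nodes()}   # random initial partition
    best_q = modularity(G, to_communities(labels, k))
    for _ in range(iters):
        node = rng.choice(list(G.nodes()))
        old, new = labels[node], rng.randrange(k)
        if new == old:
            continue
        labels[node] = new                    # tentative move of one node
        q = modularity(G, to_communities(labels, k))
        if q >= best_q:
            best_q = q                        # keep the improving move
        else:
            labels[node] = old                # otherwise undo it
    return labels, best_q

G = nx.karate_club_graph()
labels, q = hill_climb(G)
print(f"modularity of found partition: {q:.3f}")
```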
After baking of the flour, azodicarbonamide, an approved food additive, can be converted into carcinogenic semicarbazide hydrochloride (SEM) and biurea in flour products. Thus, determining SEM in commercial bread products has become mandatory and needs to be performed. Therefore, two accurate, precise, simple, and economical colorimetric methods have been developed for the visual detection and quantitative determination of SEM in commercial flour products. The first method is based on the formation of a blue-coloured product with λmax at 690 nm as a result of a reaction between the SEM and potassium ferrocyanide in an acidic medium (pH 6.0). In the second method, a brownish-green coloured product is formed due to the reaction between the SEM and phosph
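For the quantitative side, colorimetric assays of this kind are normally read against a calibration curve of absorbance versus standard concentration; the sketch below shows only that generic step. The standard concentrations and absorbance values are placeholders, not data from this work.

```python
# Generic calibration-curve quantitation: fit a line to absorbance readings
# (here imagined at the reported lambda_max of 690 nm) versus standard
# concentrations, then invert it for an unknown sample.  Values are placeholders.
import numpy as np

standards_ppm = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # assumed standards
absorbance    = np.array([0.06, 0.12, 0.23, 0.47, 0.93])   # assumed readings

slope, intercept = np.polyfit(standards_ppm, absorbance, 1)  # linear fit

def sem_concentration(a_sample):
    """Invert the calibration line: concentration = (A - intercept) / slope."""
    return (a_sample - intercept) / slope

print(f"sample at A690 = 0.30 -> {sem_concentration(0.30):.2f} ppm SEM")
```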
Background: In type 2 diabetes mellitus there is a progressive loss of beta-cell function. One new approach yielding promising results is the use of orally active dipeptidyl peptidase-4 (DPP-4) inhibitors for type 2 diabetes mellitus.
Objective: This study aims to compare the possible occurrence of macrovascular and microvascular complications in Iraqi patients with type 2 diabetes mellitus using two drug combinations: metformin + glibenclamide and metformin + sitagliptin.
Methodology: Sixty-eight T2DM patients, together with 34 normal healthy individuals as a control group, were enrolled in this study and categorized into two treatment groups. Group 1 (34 patients) received metformin 500 mg three times daily
In this paper, an algorithm is introduced through which more data can be embedded than with regular spatial-domain methods. The secret data are first compressed using Huffman coding, and the compressed data are then embedded using a Laplacian sharpening method. Laplace filters are used to determine the effective hiding places; based on a threshold value, the places with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding the data in the places with the highest edge values, where changes are less noticeable.
The perform
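A rough sketch of the pipeline described above follows, assuming Huffman coding of the message, a Laplacian filter to score edge strength, and least-significant-bit writes into the highest-scoring pixels; the filter choice, threshold, and LSB embedding are assumptions about the details rather than the paper's exact scheme.

```python
# Sketch: Huffman-compress the secret message, score every pixel with a
# Laplacian filter, and hide the bits in the LSB of the strongest-edge pixels.
import heapq
from collections import Counter
import numpy as np
from scipy.ndimage import laplace

def huffman_code(message: str) -> dict:
    """Build a Huffman table {symbol: bitstring} for the message."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

def embed(cover: np.ndarray, message: str, threshold: float = 30.0) -> np.ndarray:
    table = huffman_code(message)
    bits = "".join(table[s] for s in message)        # compressed bitstream
    edge_strength = np.abs(laplace(cover.astype(float)))
    # Visit pixels from the strongest Laplacian response downwards.
    candidates = np.argsort(edge_strength, axis=None)[::-1]
    stego = cover.copy()
    for bit, flat_idx in zip(bits, candidates):
        if edge_strength.flat[flat_idx] < threshold:
            raise ValueError("not enough high-edge pixels for this message")
        stego.flat[flat_idx] = (stego.flat[flat_idx] & 0xFE) | int(bit)  # LSB write
    return stego

cover = (np.random.default_rng(1).random((128, 128)) * 255).astype(np.uint8)
stego = embed(cover, "hello covert world")
```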
In this paper, a method is proposed to increase the compression ratio for color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions that contain important information, the compression ratio is reduced to prevent loss of that information, while in smooth regions that contain no important information, a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
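A minimal sketch of the block-adaptive idea on a single grayscale channel follows, assuming 8x8 blocks, block variance as the "importance" measure, and a variable number of retained DCT coefficients per block; the actual importance measure and codec used in the paper may differ.

```python
# Sketch of block-adaptive compression: split the image into 8x8 blocks and
# keep more DCT coefficients in busy (important) blocks than in smooth ones.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep):
    """Keep only the top-left `keep` x `keep` DCT coefficients of a block."""
    coeffs = dctn(block, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return idctn(coeffs * mask, norm="ortho")

def adaptive_compress(img, block=8, var_threshold=100.0):
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            tile = img[y:y + block, x:x + block].astype(float)
            # Busy blocks keep 6x6 coefficients, smooth blocks only 2x2
            # (assumed settings for illustration).
            keep = 6 if tile.var() > var_threshold else 2
            out[y:y + block, x:x + block] = compress_block(tile, keep)
    return np.clip(out, 0, 255).astype(np.uint8)

img = (np.random.default_rng(2).random((64, 64)) * 255).astype(np.uint8)
reconstructed = adaptive_compress(img)
```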