The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized scientific inquiry, particularly in the realm of large pre-trained vision-language models. This transformation is opening new frontiers in fields such as image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field of immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing authentic images from deep fakes, a task that has become critically important in a world increasingly reliant on digital media. Our investigation rigorously assesses the capabilities of these advanced models in identifying and differentiating manipulated imagery: how they process visual data, how effectively they recognize subtle alterations, and their potential for safeguarding against misleading representations. The implications of our findings are far-reaching, affecting security, media integrity, and the trustworthiness of information on digital platforms. The study also sheds light on the limitations and strengths of current LLMs in handling complex tasks such as image verification, contributing valuable insights to the ongoing discourse on AI ethics and digital media reliability.
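As an illustration only (not part of the study above), the following sketch shows one way such an authenticity-classification assessment could be organised. `query_vlm` is a hypothetical stand-in for whichever vision-language model interface is under test, and the prompt wording, labels, and accuracy metric are assumptions.

```python
# Minimal sketch of an authenticity-classification benchmark for a
# vision-language model.  `query_vlm` is a hypothetical stand-in for the
# API of the model being evaluated.
from typing import Callable

def evaluate_authenticity(
    samples: list[tuple[str, str]],        # (image_path, "real" or "fake")
    query_vlm: Callable[[str, str], str],  # hypothetical model interface
) -> float:
    """Return the fraction of images the model labels correctly."""
    prompt = ("Is this image an authentic photograph or a manipulated/"
              "deep-fake image? Answer with the single word 'real' or 'fake'.")
    correct = 0
    for image_path, label in samples:
        answer = query_vlm(image_path, prompt).strip().lower()
        predicted = "fake" if "fake" in answer else "real"
        correct += int(predicted == label)
    return correct / len(samples)
```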
Many problems were encountered during drilling operations in the Zubair oilfield. Stuck pipe, wellbore instability, breakouts, and washouts, which pushed conditions toward critical limits, were observed in many wells in this field, adding non-productive time to the total drilling time and therefore extra cost. A one-dimensional Mechanical Earth Model (1D MEM) was built to suggest solutions to these problems. An overpressured zone was identified, and an alternative mud weight window was predicted based on the results of the 1D MEM. The results of this study allow wellbore instability problems to be diagnosed and predicted efficiently using the 1D MEM, and suitable alternative solutions are presented ...
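To illustrate how 1D MEM outputs translate into a mud-weight recommendation, the sketch below derives a safe mud-weight window from collapse- and fracture-gradient profiles. The gradient values, depths, and safety margin are invented for illustration and are not results from the Zubair study.

```python
# Illustrative sketch: deriving a safe mud-weight window from 1D MEM outputs.
# Gradients are expressed as equivalent mud weight (ppg); all values are
# placeholders, not data from the Zubair field study.
collapse_gradient_ppg = {3000: 9.1, 3500: 9.6, 4000: 10.4}    # lower bound vs depth (ft)
fracture_gradient_ppg = {3000: 13.8, 3500: 13.2, 4000: 12.9}  # upper bound vs depth (ft)

def mud_weight_window(depth_ft: int, margin_ppg: float = 0.3) -> tuple[float, float]:
    """Return (min, max) safe mud weight at a depth, with a safety margin."""
    low = collapse_gradient_ppg[depth_ft] + margin_ppg
    high = fracture_gradient_ppg[depth_ft] - margin_ppg
    if low >= high:
        raise ValueError(f"No safe window at {depth_ft} ft - adjust well design")
    return low, high

for depth in sorted(collapse_gradient_ppg):
    lo_mw, hi_mw = mud_weight_window(depth)
    print(f"{depth} ft: keep mud weight between {lo_mw:.1f} and {hi_mw:.1f} ppg")
```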
Project management still depends on manual exchange of information based on paper documents. Design drawings are drafted with computer-aided design (CAD) software, but the data needed by project management software cannot be extracted directly from CAD and must be entered manually by the user. Calculating and collecting information from drawings and entering it into project management software takes effort and time, with the possibility of errors in transferring and entering the information. This research presents an integrated computer system for building projects in which quantities are extracted and imported by interpreting the AutoCAD drawing together with an MS Access database of unit costs and productivities for pricing and ...
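A minimal sketch of the pricing step described above, under assumptions: sqlite3 stands in for the MS Access database, and a plain dictionary of quantities stands in for values that would be extracted from the AutoCAD drawing.

```python
# Sketch of the pricing step: quantities taken off a drawing are joined with a
# unit-cost/productivity table.  sqlite3 stands in for the MS Access database,
# and the quantities dict stands in for values extracted from the drawing.
import sqlite3

quantities = {"concrete_m3": 120.0, "rebar_ton": 14.5, "brickwork_m2": 860.0}

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE unit_costs (item TEXT PRIMARY KEY, unit_cost REAL, productivity REAL)")
con.executemany(
    "INSERT INTO unit_costs VALUES (?, ?, ?)",
    [("concrete_m3", 95.0, 8.0), ("rebar_ton", 1100.0, 0.9), ("brickwork_m2", 22.0, 12.0)],
)

total_cost = 0.0
total_duration = 0.0
for item, qty in quantities.items():
    unit_cost, productivity = con.execute(
        "SELECT unit_cost, productivity FROM unit_costs WHERE item = ?", (item,)
    ).fetchone()
    total_cost += qty * unit_cost
    total_duration += qty / productivity          # crew-days, illustrative
print(f"Estimated cost: {total_cost:,.0f}, duration: {total_duration:.1f} crew-days")
```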
The research focuses on determining the best location for an elevated tank, using the required pump head as the measure for this purpose. Five types of network were used to find the effect of variation in discharge and node elevation on the best location. The weakest point was determined for each network. Preliminary tank locations were chosen for testing along the primary pipe at equal intervals. For each location, the water elevation in the tank and the pump head were calculated at each hour, based on the pump head required to achieve the minimum pressure at the weakest point. The sum of the pump heads over the day was then determined. The results proved that there is a most economical location ...
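The selection procedure can be sketched as follows, assuming a hypothetical head_loss() function in place of a full hydraulic simulation; the minimum pressure head, demand factors, node elevation, and candidate distances are illustrative.

```python
# Sketch of the tank-siting procedure: for every candidate location along the
# primary pipe, compute the pump head required each hour to hold the minimum
# pressure at the weakest node, then pick the location with the smallest
# 24-hour total.  head_loss() is a crude stand-in for the hydraulic model.
MIN_PRESSURE_HEAD = 15.0          # m, required at the weakest node (assumed)
hourly_demand = [0.6, 0.5, 0.5, 0.6, 0.8, 1.0, 1.3, 1.5, 1.4, 1.2, 1.1, 1.0,
                 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.4, 1.2, 1.0, 0.9, 0.8, 0.7]  # demand factors

def head_loss(distance_m: float, demand_factor: float) -> float:
    """Hypothetical head loss between pump and weakest node (m)."""
    return 2.0e-3 * distance_m * demand_factor ** 1.852   # Hazen-Williams-like shape

def total_daily_pump_head(tank_distance_m: float, node_elevation_m: float) -> float:
    total = 0.0
    for factor in hourly_demand:
        required = node_elevation_m + MIN_PRESSURE_HEAD + head_loss(tank_distance_m, factor)
        total += required
    return total

candidates = [500.0, 1000.0, 1500.0, 2000.0]     # distances along the primary pipe (m)
best = min(candidates, key=lambda d: total_daily_pump_head(d, node_elevation_m=35.0))
print(f"Most economical tank location: {best:.0f} m along the primary pipe")
```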
Support vector machine (SVM) is a popular supervised learning algorithm based on margin maximization. It has a high training cost and does not scale well to large numbers of data points. We propose a multiresolution algorithm, MRH-SVM, that trains an SVM on a hierarchical data aggregation structure, which also serves as a common data input to other learning algorithms. The proposed algorithm learns SVM models using high-level data aggregates and only visits data aggregates at more detailed levels where support vectors reside. In addition to performance improvements, the algorithm has advantages such as the ability to handle data streams and datasets with imbalanced classes. Experimental results show significant performance improvements in comparison ...
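A rough approximation of the multiresolution idea, using scikit-learn primitives: an SVM is fitted on cluster centroids (coarse aggregates) and then refined with the raw points of only those aggregates that supplied support vectors. This mimics the spirit of MRH-SVM rather than reproducing the paper's algorithm or its hierarchical data structure.

```python
# Coarse-to-fine SVM training: fit on aggregates (centroids), then descend only
# into the aggregates whose centroids became support vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# Coarse level: 20 centroids per class, remembering each aggregate's members.
coarse_X, coarse_y, members = [], [], []
for label in np.unique(y):
    Xc = X[y == label]
    km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(Xc)
    coarse_X.append(km.cluster_centers_)
    coarse_y.append(np.full(20, label))
    members.extend(Xc[km.labels_ == k] for k in range(20))
coarse_X, coarse_y = np.vstack(coarse_X), np.concatenate(coarse_y)

svm = SVC(kernel="rbf").fit(coarse_X, coarse_y)

# Fine level: expand only the aggregates that supplied support vectors.
expanded = [members[i] for i in svm.support_]
fine_X = np.vstack([coarse_X] + expanded)
fine_y = np.concatenate([coarse_y] + [np.full(len(m), coarse_y[i])
                                      for i, m in zip(svm.support_, expanded)])
svm_refined = SVC(kernel="rbf").fit(fine_X, fine_y)
print(f"coarse SVs: {len(svm.support_)}, refined training size: {len(fine_X)}")
```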
In this paper we used frequentist and Bayesian approaches to the linear regression model to predict future observations of unemployment rates in Iraq. Parameters are estimated using the ordinary least squares method for the frequentist approach and the Markov Chain Monte Carlo (MCMC) method for the Bayesian approach. Calculations are done using the R program. The analysis showed that the linear regression model using the Bayesian approach is better and can be used as an alternative to the frequentist approach. Two criteria, the root mean square error (RMSE) and the median absolute deviation (MAD), were used to compare the performance of the estimates. The results obtained show that unemployment rates will continue to increase over the next two decades ...
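The paper's calculations were done in R; the Python sketch below only illustrates the comparison on synthetic data: an OLS fit versus a Bayesian fit obtained with a simple random-walk Metropolis sampler, scored by RMSE and MAD (taken here as the median absolute prediction error) on held-out points.

```python
# Frequentist (OLS) vs Bayesian (random-walk Metropolis) linear regression,
# compared by RMSE and MAD on held-out points.  Synthetic data stand in for
# the Iraqi unemployment series.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(40, dtype=float)
y = 5.0 + 0.3 * t + rng.normal(0, 1.0, t.size)            # synthetic series
X = np.column_stack([np.ones_like(t), t])
train, test = slice(0, 32), slice(32, 40)

# Frequentist: ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# Bayesian: flat priors, sigma fixed at 1 for brevity.
def log_post(beta):
    resid = y[train] - X[train] @ beta
    return -0.5 * np.sum(resid ** 2)

beta, draws = beta_ols.copy(), []
for _ in range(20000):
    prop = beta + rng.normal(0, [0.2, 0.01])              # per-parameter step sizes
    if np.log(rng.uniform()) < log_post(prop) - log_post(beta):
        beta = prop
    draws.append(beta)
beta_bayes = np.mean(draws[5000:], axis=0)                # posterior mean after burn-in

for name, b in [("OLS", beta_ols), ("Bayes", beta_bayes)]:
    err = y[test] - X[test] @ b
    print(f"{name}: RMSE={np.sqrt(np.mean(err**2)):.3f}  MAD={np.median(np.abs(err)):.3f}")
```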
Suppose that ...
The selection and assessment of single-photon detection modules is a crucial problem in satellite-based QKD systems. The system's overall efficiency, secure key rate, and quantum bit error rate are all significantly influenced by the single-photon detection modules. There is a knowledge gap regarding the practical performance of commercially available single-photon detectors, because existing research frequently relies on theoretical characteristics. This paper presents a study of the effect of the parameters of three commercial single-photon detection modules from ID Quantique (ID Qube, ID100, and ID281) on certain Bennett-Brassard 1984 (BB84) protocol parameters such as secure key rate, mean photon number per pulse, and quantum bit error rate ...
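As background, a simplified weak-coherent-pulse BB84 model shows how detector efficiency and dark-count probability propagate into QBER, sifted rate, and an idealised secure rate. All numeric values are placeholders, not measured characteristics of the ID Qube, ID100, or ID281 modules, and the secure-rate bound ignores multi-photon (decoy-state/PNS) corrections.

```python
# Simplified BB84 link budget with weak coherent pulses.  Placeholder values;
# the secure-rate bound is an idealised Shor-Preskill-style estimate.
import math

def binary_entropy(q: float) -> float:
    return 0.0 if q <= 0 or q >= 1 else -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def bb84_rates(mu, det_eff, dark_prob, channel_trans, misalignment, rep_rate_hz):
    eta = channel_trans * det_eff                  # overall detection probability per photon
    p_signal = 1.0 - math.exp(-mu * eta)           # click caused by the pulse itself
    p_click = p_signal + dark_prob                 # add dark-count clicks (small-prob approx.)
    qber = (misalignment * p_signal + 0.5 * dark_prob) / p_click
    sifted = 0.5 * rep_rate_hz * p_click           # half the clicks survive basis sifting
    secure = max(0.0, sifted * (1.0 - 2.0 * binary_entropy(qber)))
    return qber, sifted, secure

qber, sifted, secure = bb84_rates(mu=0.1, det_eff=0.25, dark_prob=1e-5,
                                  channel_trans=0.1, misalignment=0.01, rep_rate_hz=100e6)
print(f"QBER={qber:.3%}, sifted={sifted:.3e} bit/s, secure={secure:.3e} bit/s")
```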
A method has been demonstrated to synthesise effective zeolite membranes from existing crystals without a hydrothermal synthesis step.
An automatic text summarization system mimics how humans summarize by picking the most significant sentences in a source text. However, the complexities of the Arabic language make it challenging to obtain information quickly and effectively. The main disadvantage of the traditional approaches is that they are strictly constrained (especially for the Arabic language) by the accuracy of sentence feature functions, weighting schemes, and similarity calculations. On the other hand, meta-heuristic search approaches have a feature that ...
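A toy sketch of the general approach: sentences are scored with simple feature functions, and a hill-climbing search (a stand-in for the meta-heuristics the paper considers) refines the selected subset, with a redundancy penalty so the search is non-trivial. The features, weights, and English toy text are illustrative; the paper targets Arabic.

```python
# Feature-scored extractive summarization refined by a simple hill-climbing
# search over sentence subsets.  Features and weights are illustrative.
import random
import re
from collections import Counter

def summarize(text, k=2, iterations=300, seed=0):
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freqs = Counter(re.findall(r"\w+", text.lower()))
    max_freq = max(freqs.values())

    def score(i):
        words = re.findall(r"\w+", sents[i].lower())
        term = sum(freqs[w] for w in words) / (max_freq * max(len(words), 1))  # salient terms
        position = 1.0 - i / len(sents)                                        # earlier preferred
        length = min(len(words) / 20.0, 1.0)                                   # avoid very short sentences
        return 0.5 * term + 0.3 * position + 0.2 * length

    def redundancy(subset):
        sel = [set(re.findall(r"\w+", sents[i].lower())) for i in subset]
        return sum(len(a & b) / max(len(a | b), 1)
                   for x, a in enumerate(sel) for y, b in enumerate(sel) if x < y)

    def fitness(subset):
        return sum(score(i) for i in subset) - 0.5 * redundancy(subset)

    rng = random.Random(seed)
    k = min(k, len(sents))
    current = set(rng.sample(range(len(sents)), k))
    for _ in range(iterations):            # meta-heuristic step: swap one sentence, keep improvements
        cand = set(current)
        cand.remove(rng.choice(sorted(cand)))
        cand.add(rng.randrange(len(sents)))
        if len(cand) == k and fitness(cand) > fitness(current):
            current = cand
    return " ".join(sents[i] for i in sorted(current))

print(summarize(
    "Automatic summarization selects the most significant sentences. "
    "Feature functions weigh term salience, sentence position, and length. "
    "A meta-heuristic search then refines the selected subset. "
    "This toy example uses English text in place of Arabic."))
```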