With the continuous progress of image retrieval technology, the speed of searching for a required image in a large amount of image data has become an important issue. Convolutional neural networks (CNNs) have been used in image retrieval; however, many CNN-based image retrieval systems have a poor ability to express image features. Content-Based Image Retrieval (CBIR) is a method of finding desired images in image databases, but CBIR suffers from lower accuracy when retrieving images from large-scale databases. In this paper, the proposed system improves the convolutional neural network for greater accuracy and serves as a machine learning tool for automatic image retrieval. It includes two phases. The first phase (offline processing) consists of two stages: stage 1 performs CNN model classification, while stage 2 extracts high-level features directly from the CNN through a flattening layer and stores them in a vector. In the second phase (online processing), retrieval depends on query by image (QBI): the system relies on the online CNN model stage to extract the features of the submitted image. Afterward, the extracted features are compared with the previously stored features using the Hamming distance to return all similar images. Finally, all matching images are retrieved and returned by the system. The deep learning model achieved 97.94% accuracy for image classification, while for image retrieval it achieved 98.94%. The work in this paper was carried out on the COREL image dataset. Training on the dataset images is more demanding than plain image classification because it requires more computational resources. In the experimental part, training images with the CNN achieved high accuracy, demonstrating that the model performs well in image retrieval.
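The Hamming-distance matching step described above can be sketched in a few lines. This is an illustrative example, not the paper's implementation; it assumes the CNN features have already been binarized into fixed-length vectors, and the image names and vectors are hypothetical:

```python
def hamming_distance(a, b):
    """Number of positions where two binary feature vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_features, stored, top_k=3):
    """Rank stored images by Hamming distance to the query's features."""
    ranked = sorted(stored.items(),
                    key=lambda kv: hamming_distance(query_features, kv[1]))
    return [name for name, _ in ranked[:top_k]]

# Hypothetical offline feature store: image name -> binarized feature vector.
stored = {
    "img_a": [1, 0, 1, 1, 0, 0, 1, 0],
    "img_b": [1, 1, 1, 1, 0, 0, 1, 0],
    "img_c": [0, 1, 0, 0, 1, 1, 0, 1],
}
query = [1, 0, 1, 1, 0, 0, 1, 1]
print(retrieve(query, stored))  # img_a is the closest match (distance 1)
```

In practice the stored vectors would come from the offline flattening-layer stage, and the query vector from the online CNN stage.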
Different additives are used in drilling fluids when the demanded properties cannot be obtained with clays alone. Drilling muds need several additives and materials to achieve good characteristics. There are local alternatives more suitable for enhancing the rheology and filtration of drilling fluids. Experimental work was conducted to assess the suitability of using potato starch to enhance the rheological properties and filtration of drilling mud. This study investigated potato starch as a viscosifier and fluid-loss agent in drilling fluid. Results from this study showed that the rheological properties of potato starch mud increased when the pH of the drilling fluid was increased. Potato starch could also be used to enhance gel strength at low pH.
The traditional centralized network management approach presents severe efficiency and scalability limitations in large-scale networks. The process of data collection and analysis typically involves huge transfers of management data to the manager, which consume considerable network throughput and create bottlenecks at the manager side. All these problems are addressed using agent technology as a solution to distribute the management functionality over the network elements. The proposed system consists of a server agent that works together with client agents to monitor the logging on and off of the client computers and which user is working on each of them. A file system watcher mechanism is used to indicate any change in files. The results were presente
Assessing water quality provides a scientific foundation for the development and management of water resources. The objective of the research is to evaluate the impact of treated effluent from the North Rustumiyia wastewater treatment plant (WWTP) on the quality of the Diyala river. Artificial neural network (ANN) and factor analysis (FA) models based on the Nemerow pollution index (NPI) were applied. To define the important water quality parameters for North Al-Rustumiyia for line F2, the Nemerow Pollution Index was introduced. The most important parameters for assessing the quality variation of the wastewater, used in the model, were: biochemical oxygen demand (BOD), chemical oxygen dem
The need for an efficient method to find the most appropriate document corresponding to a particular search query has become crucial due to the exponential growth in the number of papers that are now readily available to us on the web. The vector space model (VSM), a classical model used in information retrieval, represents these words as vectors in space and assigns them weights via a popular weighting method known as term frequency-inverse document frequency (TF-IDF). In this research, work has been proposed to retrieve the most relevant documents, focused on representing documents and queries as vectors comprising average term frequency-inverse sentence frequency (TF-ISF) weights instead of representing them as v
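As an illustration of the weighting idea, the following sketch computes standard TF-IDF weights over a toy corpus; the TF-ISF variant mentioned above is analogous, with sentences playing the role of documents. The corpus and function names are hypothetical, not from the paper:

```python
import math

def tfidf(docs):
    """TF-IDF weights: tf(t, d) * log(N / df(t)) for each term in each document."""
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)        # relative term frequency
            w[term] = tf * math.log(n / df[term])  # dampened by rarity
        weights.append(w)
    return weights

docs = [["image", "retrieval", "image"], ["text", "retrieval"]]
weights = tfidf(docs)
# "retrieval" appears in every document, so its IDF (and weight) is 0;
# "image" is specific to the first document and receives a positive weight.
```

A query would be weighted the same way and compared to the document vectors with a similarity measure such as cosine similarity.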
In this paper, membrane-based computing image segmentation, both region-based and edge-based, is proposed for medical images that involve two types of neighborhood relations between pixels. These neighborhood relations—namely, 4-adjacency and 8-adjacency of a membrane computing approach—construct a family of tissue-like P systems for segmenting actual 2D medical images in a constant number of steps; the two types of adjacency were compared using different hardware platforms. The process involves the generation of membrane-based segmentation rules for 2D medical images. The rules are written in the P-Lingua format and appended to the input image for visualization. The findings show that the neighborhood relations between pixels o
The basic solution to overcome the difficult issues related to the huge size of digital images is to recruit image compression techniques that reduce image size for efficient storage and fast transmission. In this paper, a new pixel-based scheme is proposed for grayscale image compression that implicitly utilizes a hybrid of a spatial-modelling-based minimum-residual technique and the transform-based Discrete Wavelet Transform (DWT), and that also mixes lossless and lossy techniques to ensure high performance in terms of compression ratio and quality. The proposed technique has been applied to a set of standard test images, and the results obtained are significantly encouraging compared with Joint P
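As a minimal illustration of the transform component, the following sketch implements one level of the 1-D Haar wavelet transform, the simplest DWT. It is not the paper's scheme, but it shows how the transform splits a signal into averages (low-pass) and details (high-pass) and can be inverted losslessly, which is the property that lets a hybrid scheme mix lossless and lossy processing:

```python
def haar_1d(signal):
    """One level of the 1-D Haar DWT: pairwise averages and details."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return avg, det

def haar_inverse(avg, det):
    """Exact reconstruction of the original signal from averages and details."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

avg, det = haar_1d([4, 2, 6, 8])
print(avg, det)                    # [3.0, 7.0] [1.0, -1.0]
print(haar_inverse(avg, det))      # [4.0, 2.0, 6.0, 8.0] -- lossless round trip
```

Lossy compression would quantize or discard small detail coefficients before the inverse step; for 2-D images the transform is applied along rows and then columns.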
Many image processing and machine learning applications require sufficient image feature selection and representation. This can be achieved by imitating the human ability to process visual information. One such ability is that human eyes are much more sensitive to changes in intensity (luminance) than to color information. In this paper, we present how to exploit luminance information, organized in a pyramid structure, to transfer properties between two images. Two applications are presented to demonstrate the results of using the luminance channel in the similarity metric of two images. These are image generation, where a target image is to be generated from a source one, and image colorization, where color information is to be browsed from o
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method included five enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization, and Fusion. The normalization process standardized the pixel intensities, which facilitated the subsequent image enhancement stages. Subsequently, the histogram equalization technique increased the contrast of the images. Furthermore, the binarization and skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one
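As an illustration of the contrast-enhancement stage, the following sketch applies histogram equalization to a flat list of 8-bit pixel intensities. It is a generic textbook version of the technique, not the paper's exact procedure:

```python
def equalize(pixels, levels=256):
    """Histogram equalization: map each intensity through the scaled CDF."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the intensities.
    cdf, total = [0] * levels, 0
    for v in range(levels):
        total += hist[v]
        cdf[v] = total
    cdf_min = min(c for c in cdf if c > 0)  # first nonzero CDF value
    denom = max(n - cdf_min, 1)             # guard against a constant image
    return [round((cdf[p] - cdf_min) / denom * (levels - 1)) for p in pixels]

print(equalize([50, 50, 100, 200]))  # -> [0, 0, 128, 255]
```

The mapping stretches the occupied intensity range to the full [0, 255] scale, which is what raises the contrast between ridge and valley regions before binarization.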