This paper presents a proposed method for content-based image retrieval (CBIR) using the Discrete Cosine Transform with the Kekre Wavelet Transform (DCT/KWT) and the Daubechies Wavelet Transform with the Kekre Wavelet Transform (D4/KWT) to extract features in a distributed database system in which clients and server form a star topology: the client sends the query image, and the server (which holds the database) performs all the processing and then sends the retrieved images back to the client. Two comparisons are made: DCT against DCT/KWT, and D4 against D4/KWT. The work was tested over an image database of 200 images in 4 categories, and retrieval performance was evaluated with respect to two similarity measures, namely Euclidean distance (ED) and sum of absolute differences (AD), and compared using the overall averages of precision and recall.
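As a sketch of the two similarity measures named in the abstract, ED and AD, applied to hypothetical feature vectors (the vectors and names below are illustrative, not taken from the paper):

```python
import numpy as np

def euclidean_distance(q, t):
    # ED: square root of the sum of squared differences (L2 distance)
    return float(np.sqrt(np.sum((q - t) ** 2)))

def absolute_difference(q, t):
    # AD: sum of absolute differences (L1 distance)
    return float(np.sum(np.abs(q - t)))

# hypothetical feature vectors for a query image and a database image
query = np.array([0.2, 0.5, 0.1])
target = np.array([0.1, 0.7, 0.4])
```

Lower values indicate a closer match; a retrieval step would rank all database feature vectors by one of these distances to the query's feature vector.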
This research discusses the subject of identity in the urban environment, attempting to answer a number of questions raised by the concept of identity. The first of these questions is: What is identity? Can a definition or conceptual framework be developed for it? What about individual, collective, cultural, ethnic, political, and regional identity? Is there a definition of identity in the urban environment in particular? If there is, what about the social mobility responsible for social change? How can we see identity through this mobility? Can we assume that identity in the urban environment has a variable structure, or that it is of variable shape with a more stable underlying structure? Can we determine the spatial-tempora
Cyber-attacks continue to grow, creating a need for stronger ways to protect images. This paper presents DGEN, a Dynamic Generative Encryption Network that combines Generative Adversarial Networks with a context-adaptive key system. Unlike a fixed cipher such as AES, the method can potentially adjust itself as new threats appear, aiming to resist brute-force, statistical, and quantum attacks. The design injects randomness, exploits learning, and derives keys that depend on each individual image, offering strong security and flexibility while keeping computational cost low. Tests were run on several public image datasets, and the results show DGEN outperforming AES, chaos-based schemes, and other GAN-based approaches. Entropy reached 7.99 bits per pixel
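For context, the entropy figure quoted above is the Shannon entropy of the cipher-image histogram, where an ideally encrypted 8-bit image approaches 8 bits per pixel. A minimal sketch of that metric (assuming 8-bit grayscale input; this is not the paper's own code):

```python
import numpy as np

def shannon_entropy(img):
    # Shannon entropy in bits per pixel, from the 8-bit grayscale histogram
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))
```

A perfectly uniform intensity distribution yields exactly 8.0 bits per pixel, which is why values such as 7.99 indicate a near-random cipher image.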
Background: Obesity is increasingly prevalent in modern societies and constitutes a significant public health problem, carrying an increased risk of cardiovascular disease.
Objective: This study aims to determine the agreement between actual and perceived body image in the general population.
Methods: A descriptive cross-sectional study was conducted with a sample size of 300. Data were collected from eight heavily populated areas of the Northern district of Karachi, Sindh, over a period of six months (10 January 2020 to 21 June 2020). The Figure Rating Scale (FRS) questionnaire was applied to collect demographic data and perceptions of body weight. Body mass index (BMI) was used for ass
This research delves into the role of satirical television programs in shaping the image of Iraqi politicians. The research problem is summarized in the main question: How does satire featured in television programs influence the portrayal of Iraqi politicians? The research adopts a descriptive approach and employs a survey methodology. The primary data collection tool is a questionnaire, complemented by observation and measurement techniques. The study draws on cultivation theory as its guiding theoretical framework. A total of 430 questionnaires were distributed among respondents who regularly watch satirical programs, selected through a multi-stage random sampling procedure.
Th
In this paper, membrane-computing-based image segmentation, both region-based and edge-based, is proposed for medical images, involving two types of neighborhood relations between pixels. These neighborhood relations, namely 4-adjacency and 8-adjacency, are used in a membrane computing approach to construct a family of tissue-like P systems that segment actual 2D medical images in a constant number of steps; the two types of adjacency were compared on different hardware platforms. The process generates membrane-based segmentation rules for 2D medical images. The rules are written in the P-Lingua format and appended to the input image for visualization. The findings show that the neighborhood relations between pixels o
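The two pixel adjacencies compared above can be illustrated with a small helper (an illustrative sketch of the neighborhood relations only, not the P-Lingua segmentation rules themselves):

```python
def neighbors(x, y, adjacency=4):
    # 4-adjacency: horizontal and vertical neighbors only;
    # 8-adjacency additionally includes the four diagonals
    offsets4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    offsets8 = offsets4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    offsets = offsets4 if adjacency == 4 else offsets8
    return [(x + dx, y + dy) for dx, dy in offsets]
```

Region-based rules grow a segment by absorbing adjacent pixels of similar intensity, while edge-based rules mark a pixel as boundary when a neighbor differs sufficiently, so the choice of adjacency directly affects both rule families.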
Subcutaneous vascularization has become a new solution for identity management over the past few years. Systems based on dorsal hand veins are particularly promising for high-security settings. A dorsal hand vein recognition system comprises the following steps: acquiring images from the database and preprocessing them, locating the region of interest, and extracting and recognizing information from the dorsal hand vein pattern. This paper reviews several techniques for obtaining the dorsal hand vein area and identifying a person, providing a comprehensive review of existing theories. The model aims to improve on the accuracy rates of the systems reported in previous studies and
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method comprises five enhancement techniques: normalization, histogram equalization, binarization, skeletonization, and fusion. The normalization process standardized the pixel intensities, which facilitated the subsequent image enhancement stages. The histogram equalization technique then increased the contrast of the images. Furthermore, the binarization and skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one
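The first two stages of the pipeline described above, normalization and histogram equalization, can be sketched as follows (a minimal NumPy illustration with hypothetical target mean and variance; this is not the paper's implementation):

```python
import numpy as np

def normalize(img, target_mean=100.0, target_var=100.0):
    # Standardize pixel intensities toward a desired mean and variance
    # (target values here are hypothetical, chosen for illustration).
    mean, var = img.mean(), img.var()
    adjusted = np.sqrt(target_var * (img - mean) ** 2 / max(var, 1e-9))
    return np.where(img > mean, target_mean + adjusted, target_mean - adjusted)

def histogram_equalize(img):
    # Spread the 8-bit intensity histogram via its cumulative
    # distribution to increase global contrast.
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = 255 * (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    return cdf[img.astype(np.uint8)]
```

Binarization would then threshold the equalized image into ridge and valley pixels, and skeletonization would thin the ridges before fusion of the results.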