This article presents a comprehensive study of edge detection methods and algorithms in digital images, a fundamental process in the field of image processing and analysis. The purpose of edge detection is to discover the boundaries that separate distinct regions of an image, which contributes to a better understanding of image content and to the extraction of structural information. The article begins by clarifying the concept of an edge and its importance in image analysis, then surveys the most prominent edge detection methods used in this field (e.g. the Sobel, Prewitt, and Canny filters), along with other schemes based on detecting abrupt changes in light intensity and color gradation. The research also discusses the advantages and limitations of each technique, highlighting their effectiveness on various kinds of images and the challenges they face in complex environments. Finally, the article offers a comparative analysis of the numerous approaches used in edge detection, which assists in selecting the appropriate technique for the requirements of applications such as video processing, object recognition, medical image analysis, and computer vision.
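As a minimal illustration of the gradient-based filters the abstract names, the sketch below applies the Sobel operator in pure Python. The 5x5 test image, its intensity values, and the step-edge layout are illustrative assumptions, not data from the article.

```python
# Minimal Sobel edge-detection sketch in pure Python (no external libraries).
# An image is a list of rows of grayscale intensities. The kernels below are
# the standard Sobel convolution masks for horizontal and vertical gradients.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Return the gradient magnitude |G| = sqrt(Gx^2 + Gy^2) per inner pixel."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]  # border pixels stay 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Illustrative 5x5 image with a vertical step edge between columns 1 and 2.
image = [[0, 0, 255, 255, 255]] * 5
mag = sobel_magnitude(image)
# The response is strongest on the step edge, and flat regions yield zero.
```

A full Canny pipeline adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of this gradient step.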
A substantial portion of today’s multimedia data exists in the form of unstructured text. However, the unstructured nature of text poses a significant challenge in meeting users’ information requirements. Text classification (TC) has been extensively employed in text mining to facilitate multimedia data processing. However, accurately categorizing texts becomes challenging due to the increasing presence of non-informative features within the corpus. Several reviews on TC, encompassing various feature selection (FS) approaches to eliminate non-informative features, have been published previously. However, these reviews do not adequately cover the recently explored approaches to TC problem-solving using FS, such as optimization techniques.
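As a hedged sketch of one classical filter-based FS method of the kind such reviews cover, the example below ranks terms by the chi-square statistic against a target class. The toy corpus, class labels, and top-k cutoff are illustrative assumptions, not material from the article.

```python
# Chi-square feature selection sketch for text classification (pure Python).
# Terms that score high are strongly (positively or negatively) associated
# with the target class; low-scoring, non-informative terms can be dropped.

def chi2_score(A, B, C, D):
    """Chi-square statistic for one term/class contingency table:
    A: docs in class containing term, B: docs outside class containing term,
    C: docs in class without term,    D: docs outside class without term."""
    N = A + B + C + D
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    return 0.0 if denom == 0 else N * (A * D - C * B) ** 2 / denom

def select_features(docs, labels, target, k):
    """Rank vocabulary terms by chi-square against a target class; keep top k."""
    vocab = set(t for d in docs for t in d)
    scores = {}
    for t in vocab:
        A = sum(1 for d, y in zip(docs, labels) if t in d and y == target)
        B = sum(1 for d, y in zip(docs, labels) if t in d and y != target)
        C = sum(1 for d, y in zip(docs, labels) if t not in d and y == target)
        D = sum(1 for d, y in zip(docs, labels) if t not in d and y != target)
        scores[t] = chi2_score(A, B, C, D)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Illustrative toy corpus: each document is a set of terms.
docs = [{"goal", "match"}, {"match", "team"}, {"stock", "market"}, {"market", "trade"}]
labels = ["sport", "sport", "finance", "finance"]
top = select_features(docs, labels, "sport", 2)
```

Optimization-based FS approaches replace this per-term filter ranking with a search over feature subsets guided by classifier performance.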
One of the main causes for concern is the widespread presence of pharmaceuticals in the environment, which may be harmful to living things. They are often referred to as emerging chemical pollutants in water bodies because they are either still unregulated or undergoing regulation. Pharmaceutical pollution of the environment may have detrimental effects on ecosystem viability, human health, and water quality. In this study, the amount of residual pharmaceutical compounds in environmental waters was determined through a straightforward review. Pharmaceutical production and consumption have increased due to medical advancements, leading to concerns about their environmental impact and potential harm to living things.
Chromene is a pyran ring fused with a benzene ring; it is found in many plants and forms part of many important compounds such as anthocyanidins, anthocyanins, catechins, and flavanones. These compounds fall under the headings "flavonoids" and "isoflavonoids" and are well known as bioactive molecules with wide medicinal uses. Owing to these pharmacokinetic characteristics, many researchers are giving increased attention to this type of compound and its derivatives. Many chromene derivatives have been synthesized in order to study their biological effects in the treatment of many diseases. Furthermore, researchers have shown wide interest in finding new methods for synthesizing chromene derivatives.
Distributed Denial of Service (DDoS) attacks on Web-based services have grown in both number and sophistication with the rise of advanced wireless technology and modern computing paradigms. Detecting these attacks in the sea of communication packets is very important. Initially, most DDoS attacks were directed at the network and transport layers. Over the past few years, however, attackers have changed their strategies to target the application layer. Application-layer attacks can be more harmful and stealthier because the attack traffic and normal traffic flows cannot be told apart. Distributed attacks are hard to counter because they can consume real computing resources as well as network bandwidth.
Lost circulation, the loss of drilling fluid into the formation, is one of the most important problems in the oil and gas industry. It has existed since the beginning of this industry and causes many difficulties during drilling, which may lead to closing the well and stopping the drilling process. Drilling muds are relatively expensive, especially oil-based muds or muds that contain special additives, so it is not economically acceptable to waste and lose them. Treating drilling fluid losses is also somewhat expensive, both because of the rig time wasted and because of the high cost of the materials used in the treatment, such as heavy materials, cement, and others.
The optical absorption data of hydrogenated amorphous silicon were analyzed using Dunstan's model of optical absorption in amorphous semiconductors. This model introduces disorder into the band-to-band absorption through a linear exponential distribution of local energy gaps, and it accounts for both the Urbach and Tauc regions of the optical absorption edge. Compared with other models of a similar basis, such as the O'Leary and Guerra models, it is mathematically simpler and has a clear physical meaning. The optical absorption data of Jackson et al. and Maurer et al. were successfully interpreted using Dunstan's model, and useful physical parameters were extracted, especially the band-to-band energy gap.
The meniscus has a crucial function in human anatomy, and Magnetic Resonance Imaging (M.R.I.) plays an essential role in meniscus assessment. It is difficult to identify cartilage lesions using typical image processing approaches because M.R.I. data are so diverse. An M.R.I. sequence comprises numerous images, and the attribute area being searched for may differ from image to image in the series. Feature extraction therefore becomes more complicated, and traditional image processing in particular becomes very complex. In traditional image processing, a human tells the computer what should be there, whereas a deep learning (D.L.) algorithm automatically extracts the features of what is already there.