One of the significant stages in computer vision is image segmentation, which is fundamental to many applications, such as robot control, military target recognition, and the analysis of remote sensing images. Many studies have sought to improve the classification of all types of data, whether text, audio, or images: one recent study built a simple, effective, high-accuracy model for classifying emotions from speech data, while several others addressed improving textual clustering. In this study, we seek to improve image segmentation using a novel approach based on two methods. The first method uses minimum-distance classification, and the second uses the density-based clustering algorithm DBSCAN. Both methods were tested with and without re-clustering by a self-organizing map (SOM). Comparing the segmented images and the time taken to perform the segmentation shows the effectiveness of these methods when combined with SOM.
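As an illustration of the minimum-distance idea mentioned in this abstract (not the authors' implementation), each pixel can be assigned to whichever reference intensity it lies closest to; the centers below are hypothetical values chosen for the example:

```python
def minimum_distance_segment(pixels, centers):
    """Label each pixel with the index of the nearest reference center.

    pixels  -- flat list of grayscale intensities
    centers -- representative intensity for each segment (hypothetical here)
    """
    labels = []
    for p in pixels:
        distances = [abs(p - c) for c in centers]
        labels.append(distances.index(min(distances)))
    return labels

# Example: three pixels classified against dark / mid / bright centers.
print(minimum_distance_segment([10, 200, 120], [0, 128, 255]))  # → [0, 2, 1]
```

In practice the centers would come from a clustering step (e.g. DBSCAN or the SOM the study describes) rather than being fixed by hand.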
Document analysis of images captured by camera is a growing challenge. These photos are often poor-quality compound images composed of various objects and text, which makes automatic analysis complicated. Optical character recognition (OCR) is one of the image processing techniques used to identify text automatically. Existing image processing techniques must manage many parameters in order to recognize the text in such pictures clearly, and segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images were first filtered using the Wiener filter, and then the active contour algorithm could be …
The meniscus has a crucial function in human anatomy, and magnetic resonance imaging (MRI) plays an essential role in meniscus assessment. It is difficult to identify cartilage lesions using typical image processing approaches because MRI data are so diverse. An MRI data sequence comprises numerous images, and the attribute area we are searching for may differ in each image of the series. Feature extraction therefore becomes more complicated, and traditional image processing in particular becomes very complex. In traditional image processing, a human tells a computer what should be there, whereas a deep learning (DL) algorithm automatically extracts the features of what is already there. The surface changes become valuable when …
Steganography is defined as hiding confidential information in some other chosen medium without leaving any clear evidence of changing the medium's features. Most traditional hiding methods embed the message directly in the cover media (text, image, audio, or video). Some hiding techniques leave a negative effect on the cover image, so the change in the carrier medium can sometimes be detected by humans and machines. The purpose of the proposed information hiding is to make this change undetectable. The current research focuses on using a complex method, based on a spiral search, to prevent the detection of hidden information by humans and machines; the Structural Similarity Index Measure (SSIM) is used to assess the accuracy and quality …
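For context, the SSIM comparison this abstract mentions can be sketched as a single global statistic over two images. Real evaluations (including scikit-image's `structural_similarity`) compute SSIM over local windows and average, so the function below is only an illustrative simplification using the standard constants:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified single-window SSIM between two images of equal shape.

    Uses the conventional stabilizing constants C1 = (0.01*R)^2 and
    C2 = (0.03*R)^2 for dynamic range R. Returns 1.0 for identical images.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

cover = np.array([[10.0, 20.0], [30.0, 40.0]])
print(global_ssim(cover, cover))         # identical images score 1.0
print(global_ssim(cover, cover + 50.0))  # a shifted image scores below 1.0
```

A stego image that barely perturbs the cover should score close to 1.0, which is why SSIM is a natural quality metric for hiding schemes.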
The experiment aimed to evaluate a locally manufactured dual-action device for measuring the durability of feed pellets. The device performs the dropping process of conventional devices and, simultaneously, a sifting process to separate the pellets from the crumbles, with the motor speed controlled using the pulse width modulation (PWM) technique. The device's performance was compared with a durability measuring device with a moving drop box. Rotational speed and die hole diameter were the factors studied. The results showed that increasing the rotational speed of the die from 280 to 300 and to 320 rpm, and increasing the die hole diameter from 3 to 4 and to 5 mm, led to …
A mathematical method, with a new algorithm implemented in Matlab, is proposed to compute the linear equivalence (or recursion length) of pseudo-random key-stream periodic sequences using the Fourier transform. The proposed method enables computation of the linear equivalence to determine the degree of complexity of any binary or real periodic sequence produced by a linear or nonlinear key-stream generator, and it can be applied with comparatively greater computational ease and efficiency. The results of this algorithm are compared with the Berlekamp-Massey (BM) method, and the Fourier transform results are found to be more accurate than those of the BM method for computing the linear equivalence.
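As a point of reference for the comparison above, the Berlekamp-Massey baseline the abstract benchmarks against can be sketched for binary sequences; this is a generic GF(2) implementation of the textbook algorithm, not the authors' Matlab code:

```python
def berlekamp_massey(s):
    """Return the linear complexity of a binary sequence s over GF(2).

    The linear complexity is the length L of the shortest LFSR that
    generates s; c holds the current connection polynomial.
    """
    n = len(s)
    c = [0] * n
    b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: does the current LFSR predict s[i]?
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]                      # save before updating
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]      # c(x) += x^shift * b(x)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# One period of the m-sequence from the 3-stage LFSR x^3 + x + 1:
print(berlekamp_massey([0, 0, 1, 0, 1, 1, 1]))  # → 3
```

An m-sequence of period 2^k - 1 has linear complexity k, which makes it a convenient sanity check for any linear-equivalence computation.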
The key objective of the study is to understand the best processes currently used to manage talent in Australian higher education (AHE) and to design a quantitative measurement of talent management processes (TMPs) for the higher education (HE) sector.
A qualitative multi-method design was adopted, combining three methods commonly used in empirical studies: brainstorming, focus group discussions, and semi-structured individual interviews. Twenty …
Cancer has a complicated pathophysiology and is one of the major causes of death and morbidity. Classical cancer therapies include chemotherapy, radiation therapy, and immunotherapy. A typical treatment is chemotherapy, which delivers cytotoxic medications to suppress the uncontrolled growth of cancerous cells. Conventional oral medication has a number of drawbacks, including a lack of selectivity, cytotoxicity, and multi-drug resistance, all of which pose significant obstacles to effective cancer treatment. Multidrug resistance (MDR) remains a major challenge for effective cancer chemotherapeutic interventions. The advent of the nanotechnology approach has advanced the field of tumor diagnosis and treatment. Cancer nanotechnology …