Autism is a lifelong developmental disorder that affects how people perceive the world and interact with each other. An estimated one in more than 100 people has autism, and it affects almost four times as many boys as girls. The tools commonly used to analyze autism data are fMRI, EEG, and, more recently, eye tracking. A preliminary study of the eye-tracking trajectories of patients showed that a rudimentary statistical analysis (principal component analysis) yields interesting results on statistical parameters such as the time spent in a region of interest. Another study, which applied tools from Euclidean and non-Euclidean geometry to the eye trajectories of patients, also showed interesting results. This research aims to confirm the results of the preliminary study and to advance understanding of the processes involved in these experiments. Two tracks are followed: the first concerns the development of classifiers based on the statistical data already provided by the eye-tracking system, and the second focuses on finding new descriptors from the eye trajectories. In this paper, K-means is used with the Vector Measure Constructor Method (VMCM). In addition, the support vector machine (SVM) technique is briefly considered as an alternative method. These methods play an important role in classifying people with and without autism spectrum disorder. The paper is a comparative study of the two methods.
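As an illustration of the classification track, the sketch below clusters hypothetical eye-tracking feature vectors (e.g., time spent in each region of interest) with K-means and trains an SVM on the same features for comparison. The feature layout, labels, and random placeholder data are assumptions for demonstration, not the study's actual dataset or pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical feature matrix: one row per participant, columns = statistical
# descriptors of the eye trajectory (e.g., time in each region of interest).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))          # placeholder features
y = rng.integers(0, 2, size=60)       # placeholder labels: 0 = control, 1 = ASD

# Unsupervised track: K-means with two clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))

# Supervised comparison: SVM classifier on the same descriptors.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```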
The most popular medium used by people on the internet nowadays is video streaming. Nevertheless, streaming video consumes much of the internet's traffic: video streaming accounts for nearly 70% of internet usage. Some constraints of interactive media, such as increased bandwidth usage and latency, might thereby be removed. The need for real-time transmission of live video streams leads to the use of Fog computing technologies, an intermediary layer between the cloud and the end user. The latter technology has been introduced to alleviate those problems by providing high real-time response and computational resources near to the
Magnetic Resonance Imaging (MRI) uses magnetization and radio waves, rather than x-rays, to make very detailed, cross-sectional pictures of the brain. In this work we explain some procedures related to contrast and brightness enhancement, which are very important for improving image quality, such as manipulation of the image histogram. Histogram shrinking, i.e., reducing the range of the gray levels, is explained: it produces a dim, low-contrast picture. With histogram stretching, the gray levels are distributed over a wide scale, but there is no increase in the number of pixels in the bright region. Histogram equalization is also discussed, together with its effects on image improvement.
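A minimal sketch of the histogram stretching and equalization operations mentioned above, using NumPy on an 8-bit grayscale array; the synthetic input image and intensity range are assumptions for illustration only.

```python
import numpy as np

def histogram_stretch(img, new_min=0, new_max=255):
    """Linearly map the image's gray levels onto [new_min, new_max]."""
    lo, hi = img.min(), img.max()
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1) * (new_max - new_min) + new_min
    return stretched.astype(np.uint8)

def histogram_equalize(img):
    """Redistribute gray levels so the cumulative histogram becomes roughly linear."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
    return cdf[img].astype(np.uint8)

# Example on a synthetic low-contrast 8-bit image.
img = np.clip(np.random.normal(100, 10, (64, 64)), 0, 255).astype(np.uint8)
print(histogram_stretch(img).min(), histogram_stretch(img).max())
print(histogram_equalize(img).dtype)
```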
This paper is concerned with introducing and studying the M-space by using the mixed degree systems, which are the core concept in this paper. The necessary and sufficient condition for the equivalence of two reflexive M-spaces is superimposed. In addition, the m-derived graphs, m-open graphs, m-closed graphs, m-interior operators, m-closure operators and M-subspace are introduced. From an M-space, a unique supratopological space is introduced. Furthermore, the m-continuous (m-open and m-closed) functions are defined and the fundamental theorem of the m-continuity is provided. Finally, the m-homeomorphism is defined and some of its properties are investigated.
In this study, dynamic encryption techniques are explored as an image cipher method to generate S-boxes similar to AES S-boxes with the help of a private key belonging to the user, enabling images to be encrypted or decrypted using these S-boxes. The study consists of two stages: the dynamic S-box generation method and the encryption-decryption method. S-boxes should have a non-linear structure, and for this reason the Knuth-Durstenfeld Shuffle Algorithm (K/DSA), one of the pseudo-random techniques, is used to generate S-boxes dynamically. The biggest advantage of this approach is that the inverse S-box is produced together with the S-box. Compared to the methods in the literature, the need to store the S-box is eliminated. Also, the fabr
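A minimal sketch of how a key-seeded Knuth-Durstenfeld (Fisher-Yates) shuffle can produce an S-box and its inverse in the same pass; the key derivation via SHA-256 and the placeholder key shown here are illustrative assumptions, not the paper's exact scheme.

```python
import hashlib
import random

def generate_sbox(private_key: bytes):
    """Shuffle 0..255 with a key-seeded PRNG (Knuth-Durstenfeld shuffle) and
    build the inverse S-box at the same time."""
    seed = int.from_bytes(hashlib.sha256(private_key).digest(), "big")
    rng = random.Random(seed)
    sbox = list(range(256))
    # Knuth-Durstenfeld: swap each position with a randomly chosen earlier one.
    for i in range(255, 0, -1):
        j = rng.randint(0, i)
        sbox[i], sbox[j] = sbox[j], sbox[i]
    inv_sbox = [0] * 256
    for i, v in enumerate(sbox):
        inv_sbox[v] = i
    return sbox, inv_sbox

sbox, inv_sbox = generate_sbox(b"user-private-key")   # hypothetical key
assert all(inv_sbox[sbox[b]] == b for b in range(256))  # substitution is invertible
```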
Fractal image compression offers some desirable properties, such as fast decoding and very good rate-distortion curves, but suffers from a high encoding time. Fractal image compression requires a partitioning of the image into ranges. In this work, we introduce a good partitioning process by means of a merge approach, since some ranges are connected to others. This paper presents a method to reduce the encoding time of this technique by reducing the number of range blocks based on computing statistical measures between them. Experimental results on standard images show that the proposed method decreases the encoding time while keeping the quality of the results visually acceptable.
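The sketch below illustrates the merge idea on a grayscale image split into fixed-size range blocks: adjacent blocks are merged when their means and variances differ by less than chosen thresholds, which reduces the number of ranges to encode. The block size, thresholds, and greedy left-to-right merging are assumed values for demonstration, not the paper's exact procedure.

```python
import numpy as np

def merge_ranges(img, block=8, mean_tol=4.0, var_tol=16.0):
    """Greedily merge horizontally adjacent range blocks whose mean and variance
    are close; returns a label map with one id per merged range."""
    h, w = img.shape
    rows, cols = h // block, w // block
    labels = np.arange(rows * cols).reshape(rows, cols)
    stats = np.empty((rows, cols, 2))
    for r in range(rows):
        for c in range(cols):
            blk = img[r*block:(r+1)*block, c*block:(c+1)*block].astype(np.float64)
            stats[r, c] = blk.mean(), blk.var()
    for r in range(rows):
        for c in range(1, cols):
            d_mean = abs(stats[r, c, 0] - stats[r, c-1, 0])
            d_var = abs(stats[r, c, 1] - stats[r, c-1, 1])
            if d_mean < mean_tol and d_var < var_tol:
                labels[r, c] = labels[r, c-1]   # merge with the left neighbour
    return labels

img = (np.arange(64 * 64) % 256).reshape(64, 64).astype(np.uint8)
print("ranges after merging:", len(np.unique(merge_ranges(img))))
```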
Fractal image compression depends on representing an image using affine transformations. The main concern for researchers in the discipline of fractal image compression (FIC) is to decrease the encoding time needed to compress image data. The basic idea is that each portion of the image is similar to other portions of the same image. Many models have been developed for this process. The presence of fractals was initially noticed and handled using the Iterated Function System (IFS), which is used for encoding images. In this paper, a review of fractal image compression and its variants is presented, along with other techniques. A summarized review of contributions is given to determine the fulfillment of fractal ima
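To make the affine-transformation idea concrete, the sketch below shows the textbook contrast/brightness fit used in FIC: a spatially shrunken domain block D is matched to a range block R by least squares, R ≈ s·D + o. The placeholder block contents are assumptions; this is the standard formulation rather than any specific paper's variant.

```python
import numpy as np

def fit_affine(range_block, domain_block):
    """Least-squares contrast s and brightness o so that s*D + o approximates R."""
    D = domain_block.astype(np.float64).ravel()
    R = range_block.astype(np.float64).ravel()
    n = D.size
    denom = n * (D @ D) - D.sum() ** 2
    s = (n * (D @ R) - D.sum() * R.sum()) / denom if denom else 0.0
    o = (R.sum() - s * D.sum()) / n
    err = np.sum((s * D + o - R) ** 2)   # collage error for this range/domain pairing
    return s, o, err

# Placeholder 8x8 blocks (domain block already averaged down to range size).
rng = np.random.default_rng(1)
D = rng.integers(0, 256, (8, 8))
R = np.clip(0.6 * D + 20 + rng.normal(0, 2, (8, 8)), 0, 255)
print(fit_affine(R, D))
```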