Autism is a lifelong developmental condition that affects how people perceive the world and interact with each other. An estimated one in every 100 people is autistic, and autism affects almost four times as many boys as girls. The tools commonly used to analyze autism datasets are fMRI, EEG, and, more recently, eye tracking. A preliminary study of the eye-tracking trajectories of the patients studied showed that a basic statistical analysis (principal component analysis) provides interesting results on the statistical parameters studied, such as the time spent in a region of interest. Another study, applying tools from Euclidean and non-Euclidean geometry to the patients' eye trajectories, also showed interesting results. This research aims both to confirm the results of the preliminary study and to go further in understanding the processes involved in these experiments. Two tracks are followed: the first concerns the development of classifiers based on statistical data already provided by the eye-tracking system, while the second focuses on finding new descriptors from the eye trajectories. In this paper, K-means is used with the Vector Measure Constructor Method (VMCM); in addition, the support vector machine (SVM) technique is briefly considered. These methods play an important role in classifying people with and without autism spectrum disorder, and the paper presents a comparative study of the two methods.
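As a rough sketch of the first technique the abstract names, the following toy K-means (plain Python, over hypothetical eye-tracking features such as time spent in a region of interest; not the paper's VMCM pipeline) groups subjects into two clusters:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal K-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # pick k initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        new = [tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl
               else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:                   # assignments stabilized
            break
        centroids = new
    return centroids

# Hypothetical features: (fraction of time in a face ROI, fixation count / 10)
features = [(0.82, 0.9), (0.78, 0.8), (0.75, 0.85),
            (0.30, 0.4), (0.25, 0.35), (0.35, 0.3)]
centroids = kmeans(features, k=2)
print(sorted(centroids))
```

With well-separated toy data like this, the two centroids converge to the per-group means regardless of which points are sampled as initial centroids.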
Several oil reservoirs have suffered a sudden or gradual decline in production due to asphaltene precipitation inside them. Asphaltene deposition inside oil reservoirs damages permeability, increases the skin factor, alters reservoir wettability, and requires greater drawdown pressure. These adverse changes lead to flow-rate reduction, so the economic profit drops. The aim of this study is to use local solvents (reformate, heavy naphtha, and a binary mixture of the two) to dissolve precipitated asphaltene inside the oil reservoir. Three sand-pack samples were prepared and mixed with a certain amount of asphaltene. The permeability of these samples was calculated before and after mixing with asphaltenes. Then, the
In this paper, we designed a new, efficient stream cipher cryptosystem that depends on a chaotic map to encrypt (decrypt) different types of digital images. The designed encryption system passed all of the basic efficiency criteria (randomness, MSE, PSNR, histogram analysis, and key space) that were applied to the key extracted from the random generator as well as to the digital images after completing the encryption process.
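The abstract does not specify which chaotic map the cryptosystem uses. As a generic illustration of the idea only (not the paper's cipher), a keystream can be drawn from logistic-map iterates and XOR-ed with the image bytes; because XOR is an involution, the same map seed decrypts:

```python
def logistic_keystream(x0, r=3.99, n=16):
    """Generate n keystream bytes from logistic-map iterates x -> r*x*(1-x)."""
    x = x0
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantize each iterate to a byte
    return bytes(out)

def xor_cipher(data, key_seed):
    """Encrypt or decrypt: XOR the data with the chaotic keystream."""
    ks = logistic_keystream(key_seed, n=len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = b"pixel block data"                 # stand-in for image bytes
cipher = xor_cipher(plain, key_seed=0.613)  # hypothetical key seed
assert cipher != plain
assert xor_cipher(cipher, key_seed=0.613) == plain
```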
In this paper, the sky radio emission background level associated with radio storm bursts from the Sun and Jupiter is determined at a frequency of 20.1 MHz. The observation data for the Sun and Jupiter radio storms are loaded from the NASA Radio Jove telescope website, and the sunspot-number data are loaded from the National Geophysical Data Center (NGDC). Two Radio Jove stations, (Sula, MT) and (Lamy, NM), were chosen from the data website for these extensive observations. For the Sun, twelve figures are used to determine the relation between the radio background emission and the daily sunspot number. For Jupiter, twenty-four figures are used to determine the relation between the radio background emission and diffraction betwe
Among the different passive techniques, the heat pipe heat exchanger (HPHE) seems to be the most effective for energy saving in heating, ventilation, and air conditioning (HVAC) systems. Nanofluids with high conductivity are favorable for increasing the thermal performance of an HPHE. Even though a nanofluid has a higher heat conduction coefficient and theoretically dissipates more heat, higher concentrations cause clustering. Clustering is a problem that must be solved before nanofluids can be considered for long-term practical use. Results showed that the maximum relative power of 0.13 mW was obtained with the nanofluid compared with other concentrations, due to the low density of the nanofluid at this concentration. For highe
Despite the expanding use of dummy variables as explanatory variables, their use as dependent variables is still limited, which may be due to the problems that arise when dummy variables are used as dependent variables. The study aimed to use qualitative response models to measure the efficiency of cow farms, based on a random sample of 19 farms from the Abi Gherak district. The study estimated the transcendental logarithmic production function using stochastic frontier analysis (SFA) to interpret the relation between the return achieved by the cow farms, as the dependent variable, and labor and capital as the independent variables. The function indicates that increasing labor by 100% will
Abstract:
The study aims at building a mathematical model for aggregate production planning at the Baghdad Soft Drinks Company. The study is based on a set of aggregate planning strategies (control of working hours, and a storage-level control strategy) for the purpose of exploiting the available resources and productive capacities in an optimal manner and minimizing production costs, using the Matlab program. The most important finding of the research is the importance of exploiting the available production capacity during the months when demand is less than that capacity, for investment in the subsequent months when demand exceeds the available capacity, and of minimizing the use of overti
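To illustrate how the two strategies named above trade storage cost against overtime cost, here is a minimal sketch with invented demand figures and cost rates (the paper's actual data and Matlab model are not reproduced here):

```python
# Hypothetical monthly demand (units) and cost rates -- illustrative only.
demand = [800, 900, 1200, 1500, 1100, 700]
regular_cap = 1000          # units producible in regular time per month
hold_cost = 2.0             # cost to carry one unit in stock for a month
overtime_cost = 5.0         # extra cost per unit produced in overtime

def level_strategy_cost(demand, cap, hold, ot):
    """Storage-level control: produce at full regular capacity every month,
    stock the surpluses, and cover any remaining shortfall with overtime."""
    inventory, cost = 0, 0.0
    for d in demand:
        inventory += cap - d
        if inventory < 0:                # shortfall -> overtime units
            cost += -inventory * ot
            inventory = 0
        cost += inventory * hold         # carrying cost on month-end stock
    return cost

def chase_strategy_cost(demand, cap, hold, ot):
    """Working-hours control: match production to demand each month;
    demand above regular capacity is made in overtime, no stock carried."""
    return sum(max(d - cap, 0) * ot for d in demand)

print(level_strategy_cost(demand, regular_cap, hold_cost, overtime_cost))
print(chase_strategy_cost(demand, regular_cap, hold_cost, overtime_cost))
```

With these invented rates the chase strategy comes out slightly cheaper; changing the holding or overtime rates flips the comparison, which is the trade-off an aggregate-planning model optimizes.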
Survival analysis is the analysis of data in the form of times from an origin of time until the occurrence of an end event. In medical research, the origin of time is the date the individual or patient is registered in a study, such as a clinical trial comparing two or more types of medicine, where the endpoint is the death of the patient or the disappearance of the individual. The data resulting from this process are called survival times. If the end event is not death, the resulting data are called time-to-event data. That is, survival analysis comprises the statistical steps and procedures for analyzing data when the adopted variable is the time to an event. It could be d
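As a concrete illustration of estimating a survival curve from censored time-to-event data of the kind described above, here is a sketch of the Kaplan-Meier estimator (used only as a generic example; the abstract does not say which estimator the paper adopts, and the follow-up times below are invented):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: observed follow-up time for each subject.
    events: 1 = the end event occurred, 0 = censored
            (the subject left the study event-free)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        n = at_risk                       # subjects still at risk at time t
        deaths = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]    # censored subjects add 0
            at_risk -= 1
            i += 1
        if deaths:                        # survival drops only at event times
            s *= 1 - deaths / n
            curve.append((t, s))
    return curve

# Hypothetical follow-up times (months); 0 marks a censored observation
times  = [3, 5, 5, 8, 10, 12]
events = [1, 1, 0, 1, 0,  1]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

Censored subjects leave the risk set without lowering the curve, which is exactly how the estimator uses the incomplete observations the abstract describes.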