Astronomy images are regarded as a main source of information for discovering outer space. To identify the basic content of the galaxy (the Milky Way), a galaxy image was classified using the Variable Precision Rough Sets (VPRS) technique to determine the different regions within the galaxy according to the different colors in the image. From the classified image, the percentage of each class can be determined, along with what that percentage means. This technique produced a well-classified image and required less time to complete the classification process.
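The core of the VPRS technique is the β-approximation: an equivalence class of pixels is assigned to a region only when a large enough fraction of it belongs to that region. A minimal sketch of these approximations, assuming a toy pixel set (the pixel data, color attribute, and β value here are hypothetical illustrations, not taken from the paper):

```python
from collections import defaultdict

def vprs_lower_upper(objects, target, beta=0.8):
    """Variable Precision Rough Set approximations.
    objects: dict obj -> attribute tuple (defines equivalence classes)
    target:  set of objects forming the concept X
    beta:    precision threshold (0.5 < beta <= 1.0)
    """
    classes = defaultdict(set)
    for obj, attrs in objects.items():
        classes[attrs].add(obj)
    lower, upper = set(), set()
    for block in classes.values():
        inclusion = len(block & target) / len(block)
        if inclusion >= beta:
            lower |= block          # confidently inside X
        if inclusion > 1 - beta:
            upper |= block          # possibly inside X
    return lower, upper

# Toy pixels keyed by a quantized color attribute (hypothetical data)
pixels = {1: ("red",), 2: ("red",), 3: ("red",), 4: ("blue",), 5: ("blue",)}
X = {1, 2, 4}                       # pixels labeled as one galaxy region
low, up = vprs_lower_upper(pixels, X, beta=0.6)
```

With β lowered from 1.0 to 0.6, the "red" class (majority inside X) enters the lower approximation even though one of its pixels lies outside X; classical rough sets would exclude it.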
Among several separation processes, air flotation stands out as a separation process with remarkably high potential, owing to its high separation efficiency and throughput, energy efficiency, simple process, cost-effectiveness, applicability to a wide range of oily wastewaters, and lack of by-products. The current study aimed to investigate the effect of the type and concentration of surfactant on the stability of the oil-water emulsion and the efficiency of the separation process. For this purpose, three types of surfactant were used (anionic SDS, mixed nonionic Span 85/Tween 80, and cationic CTAB). The results demonstrated that the Span 85/Tween 80 surfactant gives the best stability, which increases as the surfactant concentration rises. The removal ef…
Wheat straw was modified with malonic acid in order to obtain a low-cost adsorbent with a good ability to remove copper and ferric ions from aqueous solutions; the chemical modification temperature was 120°C and the time was 12 h. The parameters affecting the adsorption experiments were studied: the optimum pH was found to be 6 for copper and 5 for iron, the contact time was 120 min, and the adsorbent mass was 0.1 g. The adsorption isotherm parameters were determined according to Langmuir [qmax was 54.64 and 61.7 mg/g, while b was 0.234 and 0.22 mg/l], Freundlich [Kf was 16.07 and 18.89 mg/g and n was 2.77 and 3.16], Temkin [B was 0.063 and 0.074 J/mol and At was 0.143 and 1.658 l/g], and for Dubinin-Radushkev…
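As a quick check of what such isotherm parameters imply, the Langmuir and Freundlich equations can be evaluated at a few equilibrium concentrations. A minimal sketch using the copper parameters reported above (the equilibrium concentrations are hypothetical, and b is assumed to carry the conventional Langmuir units of l/mg):

```python
import numpy as np

# Reported parameters for copper (qmax in mg/g; b assumed to be in l/mg)
qmax, b = 54.64, 0.234
Kf, n = 16.07, 2.77

def langmuir(Ce, qmax, b):
    """Langmuir isotherm: qe = qmax * b * Ce / (1 + b * Ce)."""
    return qmax * b * Ce / (1 + b * Ce)

def freundlich(Ce, Kf, n):
    """Freundlich isotherm: qe = Kf * Ce**(1/n)."""
    return Kf * Ce ** (1.0 / n)

Ce = np.array([5.0, 20.0, 100.0])   # hypothetical equilibrium conc., mg/l
q_lang = langmuir(Ce, qmax, b)
q_freu = freundlich(Ce, Kf, n)
```

The Langmuir curve saturates toward qmax as Ce grows, while the Freundlich curve keeps rising; comparing the two against measured qe values is how the fit quality of each model is usually judged.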
Support vector machines (SVMs) are supervised learning models that analyze data for classification or regression. For classification, the SVM is widely used; it selects an optimal hyperplane that separates the two classes. The SVM has very good accuracy and is extremely robust compared with other classification methods such as logistic regression, random forest, k-nearest neighbors, and the naïve Bayes model. However, working with large datasets can cause problems such as long training times and inefficient results. In this paper, the SVM has been modified by using a stochastic gradient descent process. The modified method, stochastic gradient descent SVM (SGD-SVM), was checked using two simulated datasets. Since the classification of different ca…
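The idea behind SGD-SVM is to minimize the regularized hinge loss one sample at a time instead of solving the full quadratic program, which is what makes it scale to large datasets. A minimal sketch of Pegasos-style updates (the toy data, λ, and epoch count are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def sgd_svm(X, y, lam=0.01, epochs=50, seed=None):
    """Train a linear SVM by stochastic gradient descent on the
    regularized hinge loss (Pegasos-style updates). y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)              # decaying learning rate
            if y[i] * (X[i] @ w + b) < 1:      # point violates the margin
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                              # only shrink the weights
                w = (1 - eta * lam) * w
    return w, b

# Toy linearly separable data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
w, b = sgd_svm(X, y, seed=1)
acc = np.mean(np.sign(X @ w + b) == y)
```

Each update touches a single sample, so one pass over the data costs O(nd) rather than the superlinear cost of batch SVM solvers.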
The existing investigation explains the effect of violet laser irradiation on the structural properties of Mawsonite Cu6Fe2SnS8 [CFTS] thin films. The films were prepared using a semi-computerized spray pyrolysis technique (SCSPT); this is the first time that this technique has been used for preparation combined with laser irradiation. The as-deposited films were processed with a continuous red laser (700 nm) at a power of >1000 mW for different irradiation times, using different numbers of laser scans (0, 6, 9, 12, 15, and 18 times) with total irradiation times of (0, 30, 45, 60, 75, and 90 min), respectively, at room temperature. The XRD diffraction gave polycrysta…
Compressing speech reduces data storage requirements, which in turn reduces the time needed to transmit digitized speech over long-haul links such as the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation algorithms introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (in one and two dimensions) on speech compression. DWT and MCT performances in terms of comp…
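Wavelet-based speech compression generally works by transforming a frame, discarding the smallest coefficients, and reconstructing from the rest. A minimal sketch of this pipeline using the simple Haar wavelet as a stand-in for the GHM/MCT bases discussed in the paper (the test signal and the keep ratio are illustrative assumptions):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform."""
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar DWT level."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def compress(signal, keep=0.2):
    """Zero out the smallest (1 - keep) fraction of detail coefficients."""
    a, d = haar_dwt(signal)
    thresh = np.quantile(np.abs(d), 1 - keep)
    return a, np.where(np.abs(d) >= thresh, d, 0.0)

t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 5 * t)      # stand-in for a speech frame
a, d = compress(signal, keep=0.2)
recon = haar_idwt(a, d)
snr = 10 * np.log10(np.sum(signal**2) / np.sum((signal - recon)**2))
```

Only the retained coefficients need to be stored or transmitted; the compression ratio versus reconstruction SNR trade-off is exactly what the paper evaluates for the DWT and MCT.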
As we live in the era of the fourth technological revolution, it has become necessary to use artificial intelligence to generate electric power through sustainable solar energy, especially in Iraq, given the crises it has gone through and the severe shortage of electric power it suffers because of wars and calamities. The impact of that period is still evident in all aspects of the daily life of Iraqis, owing to the remnants of wars, siege, terrorism, the wrong policies of earlier and later governments, and regional interventions and their consequences, such as the destruction of electric power stations, alongside the population increase, which must be matched by an increase in electric power stations,
Neural cryptography deals with the problem of key exchange between two neural networks by using the mutual learning concept. The two networks exchange their outputs (in bits), and the key shared between the two communicating parties is eventually represented in the final learned weights once the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process.
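The standard construction behind this kind of scheme is the tree parity machine (TPM): both parties feed the same public random inputs to their private networks and update weights only when their output bits agree, until the weight matrices coincide and can serve as the key. A minimal sketch of mutual learning with the Hebbian rule (K, N, L, the seed, and the step cap are illustrative choices, not the paper's parameters):

```python
import numpy as np

K, N, L = 3, 4, 3   # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    """Tree parity machine: tau is the product of hidden-unit signs."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Update only the hidden units that agree with the overall output."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + x[k] * tau, -L, L)

rng = np.random.default_rng(42)
wA = rng.integers(-L, L + 1, (K, N))    # party A's private weights
wB = rng.integers(-L, L + 1, (K, N))    # party B's private weights

steps = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], (K, N))     # shared public random input
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                    # update only when outputs agree
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)
    steps += 1
    if steps > 100000:
        break
```

Only the output bits tauA and tauB cross the channel; the attack mentioned above corresponds to a third TPM that eavesdrops on these bits and tries to synchronize faster than the legitimate parties.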
Shadow detection and removal is an important task when dealing with color outdoor images. Shadows are generated by a local and relative absence of light. Shadows are, first of all, a local decrease in the amount of light that reaches a surface. Secondly, they are a local change in the amount of light reflected by a surface toward the observer. Most shadow detection and segmentation methods are based on image analysis. However, some factors will affect the detection result due to the complexity of the circumstances. In this paper, a segmentation-based method is presented to detect shadows in an image, and a function concept is used to remove the shadow from the image.
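A generic illustration of the detect-then-correct pipeline is sketched below. This is a simple brightness-threshold stand-in, not the paper's segmentation method; the synthetic image and the factor k are assumptions:

```python
import numpy as np

def shadow_mask(gray, k=0.6):
    """Very rough shadow detector: mark pixels much darker than the
    image mean as shadow (a stand-in for full segmentation methods)."""
    return gray < k * gray.mean()

def remove_shadow(gray, mask):
    """Compensate the shadow region by scaling it up to the mean
    brightness of the lit region (simple gain correction)."""
    out = gray.astype(float).copy()
    out[mask] *= out[~mask].mean() / out[mask].mean()
    return np.clip(out, 0, 255)

# Synthetic grayscale image: uniform surface with a darker shadow band
img = np.full((64, 64), 200.0)
img[:, 20:40] *= 0.4            # shadowed strip at 40% brightness
mask = shadow_mask(img)
fixed = remove_shadow(img, mask)
```

Real outdoor scenes need the segmentation step because shadow and dark-object pixels overlap in brightness, which is exactly the complexity the abstract alludes to.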
The penalized least squares method is a popular method for dealing with high-dimensional data, where the number of explanatory variables is larger than the sample size. The penalized least squares method offers high prediction accuracy and performs estimation and variable selection at once. It gives a sparse model, that is, a model with few variables, which can therefore be interpreted easily. The penalized least squares method is not robust, meaning it is very sensitive to the presence of outlying observations. To deal with this problem, a robust loss function can be used to obtain the robust penalized least squares method, yielding a robust penalized estimator and
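One standard way to combine a robust loss with a penalty is to replace the squared error with the Huber loss while keeping an L1 penalty, solved by proximal gradient descent. A minimal sketch under these assumptions (the simulated data, λ, δ, and iteration count are illustrative, not taken from the paper):

```python
import numpy as np

def huber_grad(r, delta=1.0):
    """Gradient of the Huber loss with respect to the residuals r."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def soft_threshold(w, t):
    """Proximal operator of the L1 penalty."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def robust_lasso(X, y, lam=0.05, delta=1.0, iters=1000):
    """Huber loss + L1 penalty fitted by proximal gradient (ISTA)."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the smooth part
    w = np.zeros(p)
    for _ in range(iters):
        g = X.T @ huber_grad(X @ w - y, delta) / n
        w = soft_threshold(w - step * g, step * lam)
    return w

# Simulated sparse model with gross outliers (hypothetical data)
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:3] = [3.0, -2.0, 1.5]
y = X @ true_w + 0.1 * rng.normal(size=n)
y[:5] += 15.0                               # outlying observations
w_hat = robust_lasso(X, y)
```

Because the Huber gradient is bounded, the five contaminated responses contribute only a fixed-size pull to each update, while the soft-threshold step keeps the irrelevant coefficients at exactly zero, so the estimator stays both sparse and robust.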