Used automobile oils were filtered to remove solid material and dehydrated by vacuum distillation at moderate pressure to remove water, gasoline, and light components; the dehydrated waste oil was then subjected to extraction with liquid solvents. Two solvents, n-butanol and n-hexane, were used to extract base oil from the used automobile oil so that the expensive base oil could be reused.
The base oil recovered with n-butanol gave an 88.67% reduction in carbon residue, a 75.93% reduction in ash content, 93.73% oil recovery, 95% solvent recovery, and a viscosity index of 100.62 at a 5:1 solvent-to-used-oil ratio and an extraction temperature of 40 °C. n-Hexane gave a 60.25% reduction in carbon residue, a 76.54% reduction in ash content, 89.06% oil recovery, 94.78% solvent recovery, and a viscosity index of 100.3 at a 6:1 solvent-to-used-oil ratio and an extraction temperature of 50 °C.
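The reported reduction and recovery figures are simple before/after percentages; a minimal sketch, using hypothetical before/after values chosen purely for illustration (they are not measurements from the paper):

```python
def percent_reduction(before, after):
    """Percent reduction of a property (e.g. carbon residue) after re-refining."""
    return 100.0 * (before - after) / before

def percent_recovery(recovered, feed):
    """Percent of the feed recovered as base oil (or solvent)."""
    return 100.0 * recovered / feed

# Hypothetical illustrative values, not data from the paper:
print(round(percent_reduction(2.65, 0.30), 2))    # carbon residue, wt% -> 88.68
print(round(percent_recovery(937.3, 1000.0), 2))  # recovered oil per 1000 g feed -> 93.73
```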
Dr. Fatima Hamid, Asst. Prof. Dr. Wafaa Sabah Mohammed Al-Khafaji, International Journal of Psychosocial Rehabilitation, 2020 - Cited by 1
Coronavirus disease (COVID-19) is an acute disease that affects the respiratory system and first appeared in Wuhan, China, in late 2019. In early 2020 the disease began to spread swiftly throughout the world, causing significant health, social, and economic problems. Time series analysis is an important statistical method used to study a particular phenomenon, identify its pattern and driving factors, and predict its future values. The main focus of the research is the study of SARIMA, NARNN, and hybrid models, on the expectation that the series comprises both linear and non-linear components, so that the ARIMA model can deal with the linear component and the NARNN model can deal with the non-linear component. The models
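The hybrid idea (fit a linear time-series model first, then hand its residuals to a non-linear model) can be sketched in plain numpy. Here a least-squares AR fit stands in for SARIMA, and the residual series is what a NARNN would then be trained on; the series is synthetic and none of this is the paper's actual data or code:

```python
import numpy as np

# Synthetic series with a trend-like random walk plus a seasonal cycle
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200)) + 10 * np.sin(np.arange(200) * 2 * np.pi / 12)

# Linear stage: AR(12) fitted by least squares (a stand-in for SARIMA)
p = 12
X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
target = y[p:]
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
linear_pred = X @ coef

# Non-linear stage: a NARNN would be trained on these residuals
residuals = target - linear_pred
print(residuals.shape)  # (188,)
```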
This research aims to study dimension-reduction methods that overcome the curse of dimensionality, which arises when traditional methods fail to provide good estimates of the parameters, so the problem must be dealt with directly. Two approaches were used to handle high-dimensional data: the non-classical sliced inverse regression (SIR) method together with the proposed weighted standard SIR (WSIR) method, and principal component analysis (PCA), the general method used for dimension reduction. SIR and PCA are based on linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear
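The PCA stage can be sketched generically via the SVD of the centered data matrix (an illustrative projection on synthetic data, not the paper's implementation; SIR and WSIR would additionally use the response variable to choose the projection directions):

```python
import numpy as np

def pca_reduce(X, k):
    """Project centered data onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    # Columns of Vt.T are the principal directions, ordered by variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))   # 50 observations, 10 explanatory variables
Z = pca_reduce(X, 3)            # reduced to 3 dimensions
print(Z.shape)  # (50, 3)
```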
A newly developed analytical method was devised for the determination of ketotifen fumarate (KTF) in pharmaceutical preparations via quenching of the continuous fluorescence of 9(10H)-acridone (ACD). The method was applied using a flow-injection system on a new homemade ISNAG fluorimeter, with fluorescence measured at ±90° via a 2×4 solar cell. The calibration graph was linear in the range 1-45 mmol/L, with correlation coefficient r = 0.9762 and a limit of detection of 29.785 µg/sample, obtained by stepwise dilution of the minimum concentration in the linear dynamic range of the calibration graph. The method was successfully applied to the determination of ketotifen fumarate in two different pharma
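Fitting a linear calibration graph and computing its correlation coefficient can be sketched generically (synthetic quenching data, not the paper's measurements; in a quenching method the signal decreases with concentration, so r is negative):

```python
import numpy as np

rng = np.random.default_rng(0)
conc = np.linspace(1, 45, 12)                       # mmol/L, linear dynamic range
signal = 100.0 - 1.8 * conc + rng.normal(0, 2, 12)  # quenched fluorescence, arbitrary units

slope, intercept = np.polyfit(conc, signal, 1)      # least-squares calibration line
r = np.corrcoef(conc, signal)[0, 1]
print(round(slope, 2), round(r, 3))                 # negative slope and r: quenching
```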
Codes of red, green, and blue data (RGB) extracted from a lab-fabricated colorimeter device were used to build a proposed classifier that assigns the colors of objects to defined categories of fundamental colors. Eleven primary, secondary, and tertiary colors (red, green, orange, yellow, pink, purple, blue, brown, grey, white, and black) were employed in machine learning (ML) by applying an artificial neural network (ANN) algorithm in Python. The ANN-based classifier required a definition of the eleven colors in the form of RGB codes in order to acquire the capability of classification. The software's capacity to forecast the color of the code that belongs to an object under de
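A one-hidden-layer ANN for this kind of task can be sketched in plain numpy. This is a toy stand-in trained on synthetic RGB samples around three of the eleven colors; the paper's actual network architecture, training data, and procedure are not specified here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training data: noisy RGB codes around three color centres
centers = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=float)
X = np.vstack([c + rng.normal(0, 20, (40, 3)) for c in centers]) / 255.0
y = np.repeat(np.arange(3), 40)
T = np.eye(3)[y]  # one-hot targets

# Network: 3 inputs -> 8 tanh hidden units -> 3 softmax outputs
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)
lr = 0.5
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)          # softmax probabilities
    G = (P - T) / len(X)                       # softmax cross-entropy gradient
    GH = (G @ W2.T) * (1.0 - H**2)             # back-propagate through tanh
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(axis=0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)

def predict(rgb):
    """Classify an RGB code into one of the trained color classes."""
    h = np.tanh(np.asarray(rgb, dtype=float) / 255.0 @ W1 + b1)
    return int(np.argmax(h @ W2 + b2))

print(predict([250, 10, 10]))  # a code near pure red -> class 0
```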
In automatic speech recognition (ASR), the non-linear data projection provided by a one-hidden-layer multilayer perceptron (MLP) trained to recognize phonemes has been shown in previous experiments to provide feature enhancement that substantially increases ASR performance, especially in noise. Previous attempts to apply an analogous approach to speaker identification have not succeeded in improving performance, except by combining MLP-processed features with other features. We present test results for the TIMIT database which show that the advantage of MLP preprocessing for open-set speaker identification increases with the number of speakers used to train the MLP, and that improved identification is obtained as this number increases beyond sixty.
In this paper, a subspace identification method for bilinear systems is used, wherein "three-block" and "four-block" subspace algorithms are employed. In these algorithms the input signal to the system does not have to be white. Simulation of these algorithms shows that the "four-block" algorithm gives faster convergence, and the dimensions of the matrices involved are significantly smaller, so its computational complexity is lower compared with the "three-block" algorithm.
Digital images are widely used in computer applications. This paper introduces a proposed method of image zooming based on the inverse slantlet transform and image scaling. The slantlet transform (SLT) is based on the principle of designing different filters for different scales.
First, the SLT is applied to the color image, transforming it into the slantlet domain, where large coefficients mainly represent the signal and smaller ones represent the noise. These coefficients are suitably modified by scaling the image up with box and Bartlett filters, so that each pixel is scaled up to 2×2, and the inverse slantlet transform is then applied to the modified coefficients to reconstruct the image.
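The 2×2 box-filter scaling step can be sketched generically (simple pixel replication into 2×2 blocks; the combination with the slantlet coefficients and the Bartlett filter is not reproduced here):

```python
import numpy as np

def box_upscale2(img):
    """2x box upscaling: replicate each pixel into a 2x2 block."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

img = np.array([[10, 20],
                [30, 40]])
print(box_upscale2(img))
# [[10 10 20 20]
#  [10 10 20 20]
#  [30 30 40 40]
#  [30 30 40 40]]
```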