A method for the approximate evaluation of linear functional differential equations is described, in which a function is approximated as a linear combination of a set of orthogonal basis functions, namely Chebyshev polynomials. The coefficients of the approximation are determined by the least-squares and Galerkin methods. The properties of Chebyshev polynomials lead to good results, which are demonstrated with examples.
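As a minimal sketch of the kind of least-squares Chebyshev fit described above (the target function, grid, and degree are illustrative choices, not taken from the paper):

```python
# A minimal sketch (not the paper's implementation): least-squares
# approximation of a function by a Chebyshev series using NumPy.
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample the target function on [-1, 1]; f(x) = exp(x) is an
# illustrative choice, not a function from the paper.
x = np.linspace(-1.0, 1.0, 200)
f = np.exp(x)

# chebfit solves the least-squares problem for the coefficients c_k
# in f(x) ~ sum_k c_k * T_k(x), where T_k are Chebyshev polynomials.
coeffs = C.chebfit(x, f, deg=8)

# Evaluate the approximation and report the maximum error on the grid.
approx = C.chebval(x, coeffs)
print("max abs error:", np.max(np.abs(f - approx)))
```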
Numeral recognition is considered an essential preliminary step for optical character recognition, document understanding, and other tasks. Although several handwritten numeral recognition algorithms have been proposed so far, achieving adequate recognition accuracy and execution time remains challenging to date. In particular, recognition accuracy depends on the feature extraction mechanism. As such, a fast and robust numeral recognition method is essential, one that meets the desired accuracy by extracting the features efficiently while maintaining fast implementation time. Furthermore, to date most of the existing studies have focused on evaluating their methods in clean environments, thus limiting understanding of their potential …
Computer vision seeks to mimic the human visual system and plays an essential role in artificial intelligence. It is based on different signal preprocessing techniques; therefore, developing efficient techniques becomes essential to achieving fast and reliable processing. Various signal preprocessing operations have been used for computer vision, including smoothing techniques, signal analysis, resizing, sharpening, and enhancement, to reduce unwanted distortions, support segmentation, and improve image features. For example, to reduce the noise in a disturbed signal, smoothing kernels can be effectively used. This is achieved by convolving the disturbed signal with smoothing kernels. In addition, orthogonal moments (OMs) are a crucial …
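For illustration, a small sketch of the smoothing-by-convolution step mentioned above; the Gaussian kernel width and the toy signal are arbitrary assumptions, not values from the text:

```python
# Illustrative sketch: convolving a noisy 1-D signal with a small
# Gaussian kernel to suppress noise.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 500)
noisy = np.sin(t) + 0.3 * rng.standard_normal(t.size)

# Build a normalized Gaussian smoothing kernel (width is arbitrary).
k = np.arange(-10, 11)
kernel = np.exp(-(k ** 2) / (2 * 3.0 ** 2))
kernel /= kernel.sum()

# Convolution of the disturbed signal with the kernel reduces the noise.
smoothed = np.convolve(noisy, kernel, mode="same")
print("noise std before/after:",
      np.std(noisy - np.sin(t)), np.std(smoothed - np.sin(t)))
```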
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, along with using polynomial approximation to decompose the image signal, followed by applying run-length coding to the residue part of the image, which represents the error caused by applying the polynomial approximation. Then, Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can lead to promising performance.
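A hedged sketch of the polynomial-residue and run-length stages of such a pipeline; the block size, polynomial order, and toy data are illustrative, and the Huffman stage is omitted for brevity:

```python
# Sketch of the residual + run-length stages of a polynomial-based
# lossless coder (illustrative, not the paper's exact scheme).
import numpy as np

def rle(values):
    """Run-length encode a 1-D integer sequence as (value, count) pairs."""
    runs, count = [], 1
    for prev, cur in zip(values, values[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((int(prev), count))
            count = 1
    runs.append((int(values[-1]), count))
    return runs

block = np.array([10, 12, 14, 16, 18, 20, 22, 24], dtype=float)

# Fit a first-order polynomial to the block and round the prediction.
xs = np.arange(block.size)
coeffs = np.polyfit(xs, block, deg=1)
residue = (block - np.round(np.polyval(coeffs, xs))).astype(int)

# The residue is mostly zeros, so run-length coding compacts it well.
print("residue:", residue.tolist())
print("RLE:", rle(residue))
```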
In this paper, a computational method for solving optimal control problems is presented, using an indirect method (a spectral technique) based on Boubaker polynomials. In this method the state and the adjoint variables are approximated by Boubaker polynomials with unknown coefficients; thus an optimal control problem is transformed into algebraic equations, which can be solved easily, and then the numerical value of the performance index is obtained. The operational matrices of differentiation and integration have also been deduced for the same polynomials to make solving the problems easier. A numerical example is given to show the applicability and efficiency of the method. Some characteristics of these polynomials can be used for solving …
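As a small illustration, the Boubaker polynomials themselves can be generated from their standard three-term recurrence (B0 = 1, B1 = x, B2 = x² + 2, Bm = x·Bm−1 − Bm−2 for m > 2); the operational matrices and the control solver are beyond this sketch:

```python
# Minimal sketch: evaluating Boubaker polynomials via their recurrence.
import numpy as np

def boubaker(m, x):
    """Evaluate the Boubaker polynomial B_m at the points x."""
    x = np.asarray(x, dtype=float)
    b_prev, b = np.ones_like(x), x.copy()          # B0, B1
    if m == 0:
        return b_prev
    if m == 1:
        return b
    b_prev, b = b, x ** 2 + 2.0                    # B1, B2
    for _ in range(3, m + 1):
        b_prev, b = b, x * b - b_prev              # Bm = x*B(m-1) - B(m-2)
    return b

xs = np.linspace(-1, 1, 5)
for m in range(4):
    print(f"B_{m}:", boubaker(m, xs))
```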
This study aims to model a flank-wear prediction equation for metal cutting, depending on the workpiece material properties and most cutting conditions. A new method of solving the energy transferred between the cutting tool and the workpiece is introduced through the flow stress of chip formation, using the Johnson-Cook model. To investigate this model, an orthogonal cutting test coupled with finite element analysis was carried out to solve the model and find the wear coefficient for cutting 6061-T6 aluminum with the given carbide tool.
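For reference, a sketch of the Johnson-Cook flow-stress relation mentioned above, σ = (A + Bεⁿ)(1 + C ln(ε̇/ε̇₀))(1 − T*ᵐ); the constants below are commonly quoted literature values for Al 6061-T6, not the calibrated coefficients from this study:

```python
# Illustrative Johnson-Cook flow-stress evaluation; the material
# constants are typical literature values for Al 6061-T6, used here
# only as an assumption for demonstration.
import math

def johnson_cook(strain, strain_rate, temp,
                 A=324e6, B=114e6, n=0.42, C=0.002, m=1.34,
                 ref_rate=1.0, t_room=293.0, t_melt=925.0):
    """sigma = (A + B*eps^n) * (1 + C*ln(rate/ref)) * (1 - T*^m)."""
    t_star = (temp - t_room) / (t_melt - t_room)   # homologous temperature
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(strain_rate / ref_rate))
            * (1.0 - t_star ** m))

# Flow stress at 30% strain, a fast strain rate, and elevated temperature.
print(johnson_cook(0.3, 1e3, 500.0) / 1e6, "MPa")
```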
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments into two classes. The proposed method's performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods, with best accuracies of 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it …
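A hedged sketch of the overall pipeline shape (moments → feature reduction → SVM): synthetic data stands in for EEG, a Chebyshev projection stands in for the paper's orthogonal-polynomial transform, and a simple variance filter stands in for the sparse filter:

```python
# Sketch of a moments -> reduction -> SVM pipeline under the stated
# stand-in assumptions; not the paper's implementation.
import numpy as np
from numpy.polynomial import chebyshev as C
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
signals = rng.standard_normal((200, 128))          # 200 fake EEG segments
labels = rng.integers(0, 2, 200)                   # two classes

# Project each segment onto a Chebyshev basis to obtain moment features.
xs = np.linspace(-1, 1, 128)
moments = np.array([C.chebfit(xs, s, deg=15) for s in signals])

# Reduce the moment set, then train/evaluate an SVM (80/20 split, 5-fold CV).
reduced = VarianceThreshold(threshold=0.0).fit_transform(moments)
X_tr, X_te, y_tr, y_te = train_test_split(reduced, labels, test_size=0.2,
                                          random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("CV accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
print("test accuracy:", clf.score(X_te, y_te))
```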
In this paper we define and study new generalizations of continuous functions, namely w-weakly (resp., w-closure, w-strongly) continuous functions, and their main properties are studied: (a) If f : X → Y is w-weakly (resp., w-closure, w-strongly) continuous, then for any A ⊂ X and any B ⊂ Y the restrictions f|A : A → Y and f_B : f⁻¹(B) → B are w-weakly (resp., w-closure, w-strongly) continuous. (b) Comparison between different forms of generalizations of continuous functions. (c) Relationships between compositions of different forms of generalizations of continuous functions. Moreover, we extend the above generalizations to almost w-weakly (resp., w-closure, w-strongly) continuous functions, and we state and prove several results concerning them.
Continuous functions are a central concept in topology, and many topologists have contributed to their theory. The present authors continue the study of continuous functions by utilizing the concept of gpα-closed sets in topology and introduce the concepts of weakly, subweakly, and almost continuous functions. Further, the properties of these functions are established.
In this paper, a simple fast lossless image compression method is introduced for compressing medical images. It is based on integrating multiresolution coding with linear-based polynomial approximation to decompose the image signal, followed by efficient coding. The test results indicate that the suggested method can lead to promising performance due to its flexibility in overcoming the limitations or restrictions on the model order length and the extra overhead information required by traditional predictive coding techniques.
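As an illustrative sketch of one lossless multiresolution step in this spirit (not the paper's exact scheme): keep the even samples as the coarse band and store integer residuals of a linear prediction for the odd samples, so the original signal is exactly recoverable:

```python
# Sketch of one lossless multiresolution step with linear prediction;
# the toy signal is an assumption, and the boundary wraps for brevity.
import numpy as np

signal = np.array([52, 55, 59, 60, 58, 54, 50, 49], dtype=int)

even, odd = signal[0::2], signal[1::2]
# Predict each odd sample as the rounded average of its even neighbours.
pred = (even + np.roll(even, -1)) // 2
residual = odd - pred                              # small integers to encode

# Exact reconstruction from the coarse band and the residuals.
restored = np.empty_like(signal)
restored[0::2] = even
restored[1::2] = pred + residual
assert np.array_equal(restored, signal)
print("coarse:", even.tolist(), "residual:", residual.tolist())
```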