This work implements an Electroencephalogram (EEG) signal classifier. The method uses Orthogonal Polynomials (OP) to convert the EEG signal samples into moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy, and a Support Vector Machine (SVM) classifies the reduced moments into two classes. The proposed method's performance is tested and compared with two existing methods on two datasets. Each dataset is divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that the proposed method outperforms the other methods in accuracy, achieving best accuracies of 95.6% and 99.5% on the two datasets, respectively. Finally, the results make clear that the number of moments selected by the SF should exceed 30% of the overall EEG samples for the accuracy to exceed 90%.
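The OP-moments → sparse selection → classification pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a Legendre basis (the abstract does not name the OP family), uses a variance-based selector as a hypothetical stand-in for the Sparse Filter, and a nearest-centroid classifier as a lightweight stand-in for the SVM stage, all on synthetic two-class signals.

```python
import numpy as np
from numpy.polynomial import legendre

def op_moments(signal, order):
    # Project the signal onto the first `order` Legendre polynomials
    # (one possible orthogonal-polynomial basis; the paper's exact
    # OP family is not specified here).
    t = np.linspace(-1.0, 1.0, signal.size)
    basis = np.stack([legendre.Legendre.basis(k)(t) for k in range(order)], axis=1)
    return basis.T @ signal / signal.size

def sparse_select(X, keep):
    # Hypothetical stand-in for the Sparse Filter stage: keep the
    # `keep` moments with the largest variance over the training set.
    return np.sort(np.argsort(X.var(axis=0))[::-1][:keep])

rng = np.random.default_rng(0)

def make_signal(label, n=256):
    # Synthetic two-class "EEG": class 0 is low-frequency, class 1 high-frequency.
    t = np.linspace(0.0, 1.0, n)
    freq = 3.0 if label == 0 else 12.0
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

labels = np.arange(40) % 2
X = np.array([op_moments(make_signal(y), 40) for y in labels])

idx = sparse_select(X[:32], keep=16)    # fit the selection on the training split only
Xtr, Xte, ytr, yte = X[:32][:, idx], X[32:][:, idx], labels[:32], labels[32:]

# Nearest-centroid classifier as a lightweight stand-in for the SVM stage.
centroids = np.array([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
pred = ((Xte[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(axis=1)
accuracy = (pred == yte).mean()
```

In a faithful reproduction the selector would be replaced by the paper's SF and the final stage by an SVM; the pipeline shape (moments, selection fitted on the training split, classifier) is the part this sketch illustrates.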
Steganography is defined as hiding confidential information in some chosen carrier medium without leaving any clear evidence of changing the medium's features. Most traditional hiding methods embed the message directly in the cover media, such as text, image, audio, or video. Some hiding techniques leave a negative effect on the cover image, so the change in the carrier medium can sometimes be detected by humans or machines. The purpose of the proposed hiding method is to make this change undetectable. The current research focuses on preventing the detection of hidden information by humans and machines using a spiral search method; the Structural Similarity Index (SSIM) measure is used to assess accuracy and quality.
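To make the two ingredients concrete, the following sketch embeds message bits into pixel least-significant bits along one plausible spiral traversal and scores the result with a global SSIM. Both the spiral order and the LSB embedding are illustrative assumptions, not the paper's method, and the single-window SSIM here is a simplification of the windowed SSIM used by image-quality libraries.

```python
import numpy as np

def spiral_coords(n):
    # Visit the cells of an n x n image in clockwise spiral order, outer ring
    # first (one plausible traversal; the paper's spiral search is not specified).
    top, bottom, left, right = 0, n - 1, 0, n - 1
    while top <= bottom and left <= right:
        for c in range(left, right + 1):
            yield top, c
        for r in range(top + 1, bottom + 1):
            yield r, right
        if top < bottom:
            for c in range(right - 1, left - 1, -1):
                yield bottom, c
        if left < right:
            for r in range(bottom - 1, top, -1):
                yield r, left
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1

def embed(cover, bits):
    # Write one message bit into the LSB of each pixel visited along the spiral.
    stego = cover.copy()
    for bit, (r, c) in zip(bits, spiral_coords(cover.shape[0])):
        stego[r, c] = (stego[r, c] & 0xFE) | bit
    return stego

def extract(stego, n_bits):
    # Read the LSBs back in the same spiral order.
    return [int(stego[r, c] & 1)
            for _, (r, c) in zip(range(n_bits), spiral_coords(stego.shape[0]))]

def ssim(a, b, L=255):
    # Global (single-window) SSIM between two grayscale images.
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    a, b = a.astype(float), b.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2))

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, bits)
quality = ssim(cover, stego)
recovered = extract(stego, len(bits))
```

Because only a handful of LSBs change, the SSIM between cover and stego image stays very close to 1, which is the sense in which the hiding is "undetectable" by a similarity metric.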
This paper proposes feedback linearization control (FBLC) based on a function approximation technique (FAT) to regulate the vibrational motion of a smart thin plate, considering the effect of axial stretching. The FBLC involves designing a nonlinear control law that stabilizes the target dynamic system while the closed-loop dynamics remain linear with guaranteed stability. The objective of the FAT is to estimate the cubic nonlinear restoring force vector using a linear parameterization of weighting and orthogonal basis function matrices. Orthogonal Chebyshev polynomials are used as strong approximators for the adaptive schemes. The proposed control architecture is applied to a thin plate with a large deflection that stimulates the axial loading…
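The core FAT idea, writing a cubic nonlinearity as a linear combination of Chebyshev basis functions, can be checked in a few lines. The stiffness values below are hypothetical placeholders (the plate model's actual coefficients come from its modal equations); the sketch only shows that a degree-3 Chebyshev expansion represents a cubic restoring force exactly, which is what makes the linear parameterization well-posed.

```python
import numpy as np
from numpy.polynomial import chebyshev

# Hypothetical cubic restoring force f(q) = k1*q + k3*q^3 (illustrative values).
k1, k3 = 2.0, 0.5
q = np.linspace(-1.0, 1.0, 200)
force = k1 * q + k3 * q ** 3

# Linear parameterization: force ≈ W^T phi(q), with phi the Chebyshev basis
# and W the weight vector that a FAT-based adaptive law would estimate online.
weights = chebyshev.chebfit(q, force, deg=3)
approx = chebyshev.chebval(q, weights)
max_error = np.abs(force - approx).max()
```

In the adaptive scheme the weights are updated by a control law rather than fitted by least squares, but the representation, and hence the achievable approximation error, is the same.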
Multi-walled carbon nanotubes (MWCNTs) were functionalized with hexylamine (HA) in a promising, cost-effective, rapid, microwave-assisted approach. To reduce defects and eliminate the acid-treatment stage, the functionalization of MWCNTs with HA was carried out via a diazonium reaction. The surface functional groups and morphology of the chemically functionalized MWCNTs were characterized by FTIR, Raman spectroscopy, thermogravimetric analysis (DTG), and transmission electron microscopy (TEM). To achieve good dispersibility in oil media, the MWCNTs were functionalized with HA. While the cylindrical structures of the MWCNTs remained reasonably intact, the characterization results consistently confirmed the sidewall functionalization of…
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, which places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that divides unlabeled data into clusters; it is a common approach for partitioning an input space into several homogeneous zones and can be achieved using a variety of algorithms, such as k-means and fuzzy c-means (FCM). This study used three models to cluster a brain tumor dataset. The first model uses FCM, which…
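A minimal FCM implementation makes the difference from k-means concrete: instead of hard assignments, FCM maintains a fuzzy membership matrix and weights each point's contribution to every center. The sketch below runs on synthetic two-blob data standing in for expression profiles; it is an illustration of the algorithm, not the study's model.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    # Minimal FCM: alternate between updating the cluster centers and the
    # fuzzy membership matrix U (shape c x n), with fuzzifier m > 1.
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                      # columns sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1.0))      # closer centers get higher membership
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated synthetic blobs standing in for expression profiles.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
               rng.normal(5.0, 0.3, size=(50, 2))])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=0)                   # hard labels from the fuzzy memberships
```

Setting m close to 1 makes the memberships nearly hard (approaching k-means), while larger m spreads membership across clusters, which is what lets FCM express genes that belong partially to several expression groups.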
The present study aims to investigate the request constructions used in Classical Arabic and Modern Arabic by identifying the differences in their usage across these two genres. The study also attempts to trace cases of felicitous and infelicitous requests in Arabic. Methodologically, it employs a web-based corpus tool (Sketch Engine) to analyze two corpora: the first is Classical Arabic, represented by the King Saud University Corpus of Classical Arabic, while the second is the Arabic Web Corpus "arTenTen", representing Modern Arabic. To do so, the study relies on felicity conditions to qualitatively interpret the quantitative data, i.e., following a mixed-mode method…