Red, green, and blue (RGB) codes extracted from a lab-fabricated colorimeter device were used to build a proposed classifier for classifying object colors into defined categories of fundamental colors. Primary, secondary, and tertiary colors, namely red, green, orange, yellow, pink, purple, blue, brown, grey, white, and black, were employed in machine learning (ML) by applying an artificial neural network (ANN) algorithm using Python. The classifier, which was based on the ANN algorithm, required a definition of the eleven mentioned colors in the form of RGB codes in order to acquire the capability of classification. One outcome of the proposed classifier is the software's ability to predict the color category of an object under detection from its RGB code. The work required the collection of about 5000 color codes, which in turn were used to train and test the algorithm. The open-source ML platform TensorFlow and the open-source neural network library Keras were used to construct the algorithm for the study. The results showed an acceptable efficiency of the built classifier, with an accuracy of 90%, which can be considered applicable, especially after future improvements to make it more effective as a trusted colorimeter.
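To make the classification pipeline concrete, the following is a minimal sketch of an RGB-to-category ANN built with Keras; the dataset file, network size, and training settings are illustrative assumptions, not the exact configuration reported above.

```python
# Minimal sketch of an RGB-to-color-category ANN classifier with Keras,
# assuming a CSV of labeled RGB codes (columns r, g, b, label); the network
# architecture and training settings here are assumptions, not the paper's.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow import keras

COLORS = ["red", "green", "orange", "yellow", "pink", "purple",
          "blue", "brown", "grey", "white", "black"]

df = pd.read_csv("rgb_codes.csv")              # hypothetical dataset of ~5000 codes
X = df[["r", "g", "b"]].to_numpy() / 255.0     # scale RGB codes to [0, 1]
y = LabelEncoder().fit(COLORS).transform(df["label"])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = keras.Sequential([
    keras.Input(shape=(3,)),                   # one input per RGB channel
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(len(COLORS), activation="softmax"),  # one output per color class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)

_, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"test accuracy: {acc:.2f}")
```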
Alice Walker (b. 1944) ranks among the greatest American writers of the twentieth century. Through her novels Meridian and The Color Purple, Walker outlines many issues concerning the effect of authoritarian thinking upon its often innocent victims, and the possibility of meaningful, productive resistance. Walker's texts aim at giving a voice to those who have no voice, especially poor, rural black women who are robbed of power and the right to make decisions about their own lives by a range of forces standing against them. These texts also aim at clarifying how Walker's female protagonists, Meridian and Celie, try to free themselves from oppression, misery, fear, and underestimation.
The current theoretical research aimed to construct a model of the terrorist personality and to differentiate it from the psychopathic personality. Several assumptions and theoretical perspectives on the psychopathic personality were compared with the relevant studies of the terrorist personality. The suggested theoretical model interprets the terrorist personality. The conclusions and discussion are presented, and finally a recommendation is suggested.
The efficiency of Nd:YAG laser radiation in removing debris and smear layer from prepared root canal walls was studied. Fifty-seven extracted human single-rooted anterior teeth were divided into three groups. A group that was not lased was considered the control group. The remaining teeth were exposed to different laser parameters regarding laser energy, repetition rate, and exposure time. For the parameter set at 7 mJ laser energy, cleaning was greatest at a repetition rate of 3 p.p.s. with a 3-second exposure time in the coronal, middle, and apical thirds. Above and below this energy level, there was an overdose (melting) or an underdose (no effect). Nevertheless, for the 10 mJ laser energy case, the cleaning effi
In this paper, two main stages for image classification are presented. The training stage consists of collecting images of interest and applying BOVW to these images (feature extraction and description using SIFT, and vocabulary generation), while the testing stage classifies a new unlabeled image by applying the nearest neighbor classification method to its feature descriptor. The supervised bag of visual words gives good results, presented clearly in the experimental part, where unlabeled images are classified correctly even though only a small number of images are used in the training process.
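The sketch below illustrates this two-stage BOVW pipeline using OpenCV and scikit-learn; the folder layout, vocabulary size, and file names are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of a supervised BOVW pipeline: SIFT descriptors, k-means
# vocabulary, histogram encoding, and nearest-neighbor classification.
# Paths, vocabulary size, and folder layout are assumptions for illustration.
import glob, os
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()
VOCAB_SIZE = 100  # assumed number of visual words

def sift_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bovw_histogram(desc, kmeans):
    # Quantize each descriptor to its nearest visual word and count occurrences.
    words = kmeans.predict(desc) if len(desc) else []
    hist, _ = np.histogram(words, bins=np.arange(VOCAB_SIZE + 1))
    return hist.astype(np.float32) / max(hist.sum(), 1)

# Training stage: collect images, build the vocabulary, describe each image.
train_paths, train_labels = [], []
for label in os.listdir("train"):                       # hypothetical class-named folders
    for p in glob.glob(os.path.join("train", label, "*.jpg")):
        train_paths.append(p)
        train_labels.append(label)

all_desc = np.vstack([sift_descriptors(p) for p in train_paths])
kmeans = KMeans(n_clusters=VOCAB_SIZE, random_state=0).fit(all_desc)
X_train = np.array([bovw_histogram(sift_descriptors(p), kmeans) for p in train_paths])

# Testing stage: nearest-neighbor classification of a new, unlabeled image.
knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, train_labels)
print(knn.predict([bovw_histogram(sift_descriptors("query.jpg"), kmeans)]))
```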
This study discussed a biased estimator of the negative binomial regression model known as the Liu estimator. This estimator was used to reduce variance and overcome the problem of multicollinearity between explanatory variables. Other estimators, such as the ridge regression and maximum likelihood estimators, were also used. This research aims at the theoretical comparisons between the new estimator (Liu estimator) and the estimators
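For reference, the Liu estimator for generalized linear models such as negative binomial regression is usually written as a shrinkage of the maximum likelihood (ML) estimator; the form below is the standard one from the literature and is given as an assumed sketch, not necessarily the exact specification adopted in this study.

```latex
% Liu-type shrinkage of the ML estimator (standard form, assumed here):
\hat{\beta}_{\mathrm{Liu}}
  = \bigl( X^{\top}\hat{W}X + I_{p} \bigr)^{-1}
    \bigl( X^{\top}\hat{W}X + d\,I_{p} \bigr)\,\hat{\beta}_{\mathrm{ML}},
\qquad 0 \le d \le 1,
```

where X is the design matrix, Ŵ is the estimated weight matrix from the iteratively reweighted ML fit, and d is the Liu shrinkage parameter; d = 1 recovers the ML estimator, while smaller values of d shrink it, trading a small bias for a reduction in variance under multicollinearity.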
Statistical learning theory serves as the foundational bedrock of machine learning (ML), which in turn represents the backbone of artificial intelligence, ushering in innovative solutions for real-world challenges. Its origins can be traced to the point where statistics and computing meet, evolving into a distinct scientific discipline. Machine learning can be distinguished by its fundamental branches, encompassing supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Within this tapestry, supervised learning takes center stage, divided into two fundamental forms: classification and regression. Regression is tailored for continuous outcomes, while classification specializes in categorical outcomes.
Medicine is one of the fields where the advancement of computer science is making significant progress. Some diseases require an immediate diagnosis in order to improve patient outcomes. The use of computers in medicine improves precision and accelerates data processing and diagnosis. In order to categorize biological images, hybrid machine learning, a combination of various deep learning approaches, was utilized, and a meta-heuristic algorithm was provided in this research. In addition, two different medical datasets were introduced, one covering magnetic resonance imaging (MRI) of brain tumors and the other dealing with chest X-rays (CXRs) of COVID-19. These datasets were introduced to the combination network that contained deep lea
Image classification is the process of finding common features in images from various classes and applying them to categorize and label the images. The main problems of the image classification process are the abundance of images, the high complexity of the data, and the shortage of labeled data, which present the key obstacles in image classification. The cornerstone of image classification is evaluating the convolutional features retrieved from deep learning models and training them with machine learning classifiers. This study proposes a new approach of “hybrid learning” by combining deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class
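As a rough illustration of this hybrid scheme, the sketch below extracts convolutional features with a pretrained VGG-16 and trains a classical classifier on them; the dataset path, preprocessing, and the use of an SVM are assumptions for illustration, not the study's exact classifiers.

```python
# Minimal sketch of "hybrid learning": convolutional features from a pretrained
# VGG-16 feed a classical machine learning classifier.  The folder path and the
# choice of an SVM classifier are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Pretrained VGG-16 without its dense head, used purely as a feature extractor.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def vgg_features(image_dir):
    # Hypothetical labeled image folder, resized to VGG-16's 224x224 input.
    ds = keras.utils.image_dataset_from_directory(image_dir, image_size=(224, 224), batch_size=32)
    feats, labels = [], []
    for images, y in ds:
        feats.append(extractor.predict(preprocess_input(images), verbose=0))
        labels.append(y.numpy())
    return np.vstack(feats), np.concatenate(labels)

X, y = vgg_features("images/")                      # assumed dataset path
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)       # one of many possible ML classifiers
print("hybrid accuracy:", clf.score(X_test, y_test))
```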