During COVID-19, wearing a mask was mandated globally in various workplaces, departments, and offices. New deep learning convolutional neural network (CNN) based classifiers were proposed to increase the validation accuracy of face mask detection. This work introduces a face mask model that is able to recognize whether a person is wearing a mask or not. The proposed model has two stages to detect and recognize the face mask: at the first stage, a Haar cascade detector is used to detect the face, while at the second stage, the proposed CNN model, built from scratch, is used as the classifier. The experiment was applied to the MAsked FAces (MAFA) dataset with images of 160x160 pixels in RGB color. The model achieves lower computational complexity and fewer layers while being more reliable than other algorithms applied to recognize face masks. The findings reveal that the model's validation accuracy reaches 97.55% to 98.43% at different learning rates and different sizes of the feature vector in the dense (fully connected) layer of the proposed CNN. Finally, the suggested model improves recognition performance measures such as precision, recall, and area under the curve (AUC).
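As an illustration of the two-stage pipeline described above (Haar cascade face detection followed by CNN classification of 160x160 RGB crops), the following is a minimal sketch only; it is not the authors' exact architecture, and the layer sizes, dense-layer width, and cascade file are assumptions.

```python
# Hypothetical sketch of a two-stage mask-recognition pipeline:
# stage 1 detects faces with a Haar cascade, stage 2 classifies each
# 160x160 RGB crop with a small CNN trained from scratch.
import cv2
import numpy as np
import tensorflow as tf

# Stage 1: Haar cascade face detector shipped with OpenCV.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Stage 2: a small CNN classifier (illustrative layer sizes only).
def build_classifier(dense_units=128):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(160, 160, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(dense_units, activation="relu"),  # feature-vector size
        tf.keras.layers.Dense(1, activation="sigmoid"),         # mask / no mask
    ])

def classify_faces(image_bgr, model):
    """Detect faces, then classify each crop as masked or unmasked."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    results = []
    for (x, y, w, h) in faces:
        crop = cv2.resize(image_bgr[y:y + h, x:x + w], (160, 160))
        crop = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB) / 255.0
        prob = float(model.predict(crop[np.newaxis], verbose=0)[0, 0])
        results.append(((x, y, w, h), prob))
    return results
```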
Objective: To establish a standardized method for cavity preparation on the palatal surface of rat maxillary molars and to introduce a standardized method for correct tooth alignment within the specimen during the wax embedding procedure, so that the cavity position can be better detected within the examined slides. Materials and methods: Six male Wistar rats, aged 4-6 weeks, were used. The maxillary molars of three animals were sectioned in the frontal plane to determine the thickness of hard tissue on the palatal surface of the first molar, which was 250-300 µm. An end-cutting bur with a cutting-head diameter of 0.2 mm was suitable for preparing a dentinal cavity 70-80 µm deep. Cavity preparation was then performed using the same bur on the tooth
A multivariate multisite hydrological data forecasting model was derived and checked using a case study. The approach is to use simultaneously the cross-variable correlations, the cross-site correlations, and the time-lag correlations. The case study involves two variables and three sites: the variables are monthly rainfall and evaporation, and the sites are Sulaimania, Dokan, and Darbandikhan. The model form is similar to the first-order autoregressive model, but in matrix form. A matrix for the different relative correlations mentioned above and another for their relative residuals were derived and used as the model parameters. A mathematical filter was applied to both matrices to obtain their elements. The application of this model indicates i
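A minimal NumPy sketch of the general matrix-form first-order autoregressive structure described above is given below; the matrices A and B are placeholders, not the correlation and residual matrices derived in the paper.

```python
# Sketch of a matrix-form first-order autoregressive model,
# X_t = A @ X_{t-1} + B @ e_t, where X_t stacks 2 variables
# (monthly rainfall, evaporation) at 3 sites (Sulaimania, Dokan, Darbandikhan).
# A and B below are illustrative placeholders only.
import numpy as np

n_series = 6                       # 2 variables x 3 sites
A = 0.5 * np.eye(n_series)         # placeholder lag-one correlation matrix
B = 0.1 * np.eye(n_series)         # placeholder residual matrix
rng = np.random.default_rng(0)

def one_step(x_prev):
    """Next month's stacked vector from the previous month's vector."""
    e = rng.standard_normal(n_series)      # independent standardized residuals
    return A @ x_prev + B @ e

x = np.zeros(n_series)
simulated = []
for _ in range(120):               # simulate ten years of monthly values
    x = one_step(x)
    simulated.append(x)
simulated = np.array(simulated)
```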
The purchase of a home and access to housing are among the most important requirements for an individual's life and stability of living. House prices in general, and in Baghdad in particular, are affected by several factors, including the basic floor area of the house, the age of the house, the neighborhood in which the housing is located, and the basic services available. A state space model (SSM) was used to model house prices over the period from 2000 to 2018 and to forecast until 2025. The research is concerned with highlighting the importance of this model and presenting it as a standard and important alternative compared with the models used in the analysis of time series, after obtaining the
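The abstract does not spell out the SSM specification; the sketch below assumes a local linear trend structural model and a synthetic annual price index purely for illustration, not the paper's actual data or formulation.

```python
# Hedged sketch: a simple state space (structural) model for an annual
# house-price series, fitted over 2000-2018 and forecast through 2025.
# The "local linear trend" specification and the synthetic data are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

years = pd.period_range("2000", "2018", freq="Y")
rng = np.random.default_rng(1)
prices = pd.Series(np.linspace(50, 200, len(years)) + rng.normal(0, 5, len(years)),
                   index=years, name="price_index")

model = sm.tsa.UnobservedComponents(prices, level="local linear trend")
result = model.fit(disp=False)
print(result.summary())
print(result.forecast(steps=7))    # 2019 through 2025
```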
A modified time series model is derived, and the methodology is given in detail. The model is constructed based on measurement criteria, the Akaike and Bayesian information criteria. For the new time series model, a new algorithm has been generated. The forecasting process, one and two steps ahead, is discussed in detail. Some exploratory data analysis is given at the beginning. The best model is selected based on these criteria and compared with some naïve models. The modified model is applied to a monthly chemical sales dataset (January 1992 to December 2019) downloaded from the United States census website (www.census.gov). Ultimately, the forecasted sales
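Since the modified model itself is not reproduced here, the sketch below only illustrates the selection-and-forecast workflow the abstract describes: fitting several candidate seasonal ARIMA specifications to a monthly series, comparing them by AIC/BIC, and producing one- and two-step-ahead forecasts. The candidate orders and the synthetic data are assumptions.

```python
# Hedged sketch of the workflow: compare candidate monthly models by AIC/BIC,
# keep the best, then forecast one and two steps ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

idx = pd.period_range("1992-01", "2019-12", freq="M")
t = np.arange(len(idx))
rng = np.random.default_rng(2)
sales = pd.Series(100 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 12)
                  + rng.normal(0, 3, len(idx)),
                  index=idx, name="sales")

candidates = [((1, 1, 1), (0, 0, 0, 0)),
              ((1, 1, 1), (1, 0, 1, 12)),
              ((2, 1, 2), (1, 0, 1, 12))]

results = []
for order, seasonal in candidates:
    fit = SARIMAX(sales, order=order, seasonal_order=seasonal).fit(disp=False)
    results.append((fit.aic, fit.bic, order, seasonal, fit))

best = min(results, key=lambda r: r[0])        # smallest AIC; BIC could be used instead
print("selected order:", best[2], "seasonal:", best[3])
print(best[4].forecast(steps=2))               # one- and two-step-ahead forecasts
```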
This study aims at shedding light on the linguistic significance of collocation networks in the academic writing context. Following Firth's principle, "You shall know a word by the company it keeps," the study examines the shared collocations of three selected nodes (i.e., research, study, and paper) in an academic context. This is achieved by using the corpus linguistic tool GraphColl in #LancsBox software version 5, announced in June 2020, to analyze the selected nodes. The study focuses on the academic writing of two corpora that were designed and collected especially to serve the purpose of the study. The corpora consist of a collection of abstracts extracted from two different academic journals that publish for writ
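GraphColl is a point-and-click tool inside #LancsBox, so its internals are not reproduced here; the sketch below only illustrates the kind of computation such a tool performs: counting collocates of a node word within a fixed window and ranking them by a mutual information (MI) score. The window size and scoring choices are assumptions.

```python
# Illustrative only: window-based collocates of a node word ranked by MI.
# This is not the #LancsBox implementation.
import math
import re
from collections import Counter

def collocates(text, node, window=5, top=10):
    tokens = re.findall(r"[a-z]+", text.lower())
    total = len(tokens)
    freq = Counter(tokens)
    co = Counter()
    for i, tok in enumerate(tokens):
        if tok != node:
            continue
        for j in range(max(0, i - window), min(total, i + window + 1)):
            if j != i:
                co[tokens[j]] += 1
    scored = {}
    for word, observed in co.items():
        expected = freq[node] * freq[word] * 2 * window / total
        scored[word] = math.log2(observed / expected)   # mutual information
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Example call on a corpus of abstracts loaded as one string:
# print(collocates(corpus_text, "research"))
```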