Facial recognition has been an active field of imaging science. With recent progress in computer vision, it is now extensively applied in various areas, especially in law enforcement and security. The human face is a viable biometric that can be used effectively for both identification and verification. Thus far, regardless of the facial model and metrics employed, the main shortcoming of such systems is that they require a reference facial image against which the comparison is made. Closed-circuit television and a facial database are therefore always needed in an operational system. Over the last few decades, unfortunately, we have witnessed the emergence of asymmetric warfare, in which acts of terrorism are often committed in secluded areas with no cameras installed, and possibly by persons whose photographs were never kept in any official database prior to the event. During subsequent investigations, the authorities must then rely on traumatized and frustrated witnesses, whose testimony regarding a suspect's appearance is dubious and often misleading. To address this issue, this paper presents an application of a statistical appearance model of the human face for assisting suspect identification based on a witness's visual recollection. An online prototype system was implemented to demonstrate its core functionality. Both the visual and the numerical assessments reported herein indicate the potential benefit of the system for the intended purpose.
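The abstract does not detail how the appearance model is built, but statistical appearance models are commonly constructed by PCA over aligned face images (the eigenface decomposition). The sketch below is a minimal illustration of that idea under assumed placeholder data and an assumed 20-component truncation; it shows how a face can be synthesized from adjustable weights, the mechanism a witness-driven composite system can iterate on. It is not the paper's implementation.

```python
import numpy as np

# Minimal PCA appearance-model sketch (illustrative; not the paper's system).
# `faces` is assumed to be an (n_samples, n_pixels) array of aligned,
# flattened grayscale face images; random data stands in for a real set.
rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))           # placeholder training data

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Principal components of the training faces ("eigenfaces").
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:20]                         # keep the 20 strongest modes

# A new face is synthesized from a weight vector; in a witness-driven
# system, these weights would be adjusted interactively until the
# reconstruction matches the witness's recollection.
weights = rng.normal(scale=S[:20] / np.sqrt(len(faces)))
reconstruction = mean_face + weights @ components
print(reconstruction.shape)                  # (4096,) -> a 64x64 image
```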
In this research, we study the inverse Gompertz (IG) distribution and estimate its survival function. The survival function was estimated using three methods (maximum likelihood, least squares, and percentile estimators), and the best estimation method was selected. The least-squares method was found to be the best for estimating the survival function, as it has the lowest integrated mean squared error (IMSE) for all sample sizes.
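A minimal sketch is given below, assuming one common parameterization of the IG distribution: if X follows a Gompertz distribution with parameters alpha and beta, then Y = 1/X has CDF F(y) = exp(-(alpha/beta)(e^(beta/y) - 1)) and survival function S(y) = 1 - F(y). The sampling routine, the least-squares (probability-plot) estimator, and the parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed parameterization: if X ~ Gompertz(alpha, beta), then Y = 1/X is
# inverse Gompertz with CDF F(y) = exp(-(alpha/beta)(exp(beta/y) - 1)).
def ig_cdf(y, alpha, beta):
    return np.exp(-(alpha / beta) * (np.exp(beta / y) - 1.0))

def ig_survival(y, alpha, beta):
    return 1.0 - ig_cdf(y, alpha, beta)

# Inverse-transform sampling from the assumed CDF.
def ig_sample(n, alpha, beta, rng):
    u = rng.random(n)
    return beta / np.log(1.0 - (beta / alpha) * np.log(u))

# Least-squares (probability-plot) estimator: fit the model CDF to the
# plotting positions i/(n+1) at the order statistics; parameters are
# optimized on the log scale to keep them positive.
def ls_estimate(y):
    y = np.sort(y)
    pos = np.arange(1, len(y) + 1) / (len(y) + 1)
    loss = lambda p: np.sum((ig_cdf(y, *np.exp(p)) - pos) ** 2)
    res = minimize(loss, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(1)
y = ig_sample(100, alpha=2.0, beta=0.5, rng=rng)
print(ls_estimate(y))  # should land near the true (2.0, 0.5)
```

The IMSE criterion used for the comparison is typically the Monte Carlo average of the integrated squared error between the estimated and true survival curves.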
For many applications, it is very important to have an edge detection technique that matches human visual contour perception and is less sensitive to noise. The edge detection algorithm described in this paper is based on the results obtained by maximum a posteriori (MAP) and maximum entropy (ME) deblurring algorithms. The technique makes a trade-off between sharpening and smoothing the noisy image. One advantage of the described algorithm is that it is less sensitive to noise than the Marr and Geuen techniques, which are considered among the best edge detection algorithms in terms of matching human visual contour perception.
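The MAP/ME formulation itself is not given in the abstract. For context only, the sketch below implements the classic smooth-then-differentiate detector (Gaussian smoothing followed by Sobel gradient thresholding, in the spirit of the Marr approach) whose sharpening-versus-smoothing trade-off the proposed algorithm is designed to balance; the sigma and threshold values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

# Generic smooth-then-differentiate edge detector (reference baseline;
# not the MAP/ME-based method proposed in the paper).
def edges(image, sigma=1.5, threshold=0.2):
    # More smoothing (larger sigma) suppresses noise but blurs contours;
    # less smoothing sharpens contours but amplifies noise -- the
    # trade-off the paper's algorithm aims to balance.
    smoothed = gaussian_filter(image.astype(float), sigma=sigma)
    gx, gy = sobel(smoothed, axis=1), sobel(smoothed, axis=0)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()
```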
Advertising technology is a component of visual attraction in the urban scape. It mediates the transmission of messages between the source of information (the sender) and the final recipient of the message (the receiver). It serves as a social marker and a means of cultural expression; it is an inalienable part of creating identity and determining spatial relationships, and it is also a reflection of the community's urban culture. This technology has become an increasingly prominent feature of the present era, characterized as the era of three revolutions (the information revolution, the technology revolution, and the media revolution), where it has become an integral part of the visual
In this work, functionally graded materials (FGMs) were synthesized by a centrifugal technique at different volume fractions (0.5, 1, 1.5, and 2% Vf), with a rotation speed of 1200 rpm and a constant rotation time of T = 6 min. The mechanical properties were characterized to compare the graded and non-graded nanocomposites with the pure epoxy material. The mechanical tests showed that the added alumina (Al2O3) nanoparticles enhanced the properties of both the graded and non-graded composites beyond those of pure epoxy. The maximum difference in impact strength occurred in the FGM loaded from the nano-alumina-rich side, where the maximum value, at 1% Vf, was 133.33% of that of the epoxy side of the sample. The flexural strength and Young's modulus of the fu
With its rapid spread, the coronavirus infection shocked the world and had a huge effect on billions of people's lives. The problem is to find a safe diagnostic method that produces fewer casualties. X-ray imaging has been shown to be an important method for the identification, quantification, and monitoring of disease, and deep learning algorithms can be utilized to help analyze potentially huge numbers of X-ray examinations. This research developed a retrospective multi-test analysis system to detect suspected COVID-19 cases and used chest X-ray features to assess the progression of the illness in each patient, producing a "corona score"; the results were satisfactory compared with the benchmark techniques.
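The abstract does not describe the network itself. Below is a minimal sketch of a binary chest X-ray classifier of the kind such systems typically build on; the architecture, input size, and the reading of the "corona score" as a mean per-image probability are assumptions for illustration, not the paper's method.

```python
import tensorflow as tf

# Minimal illustrative CNN for binary chest X-ray classification
# (assumed architecture; not the system described in the paper).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 1)),            # grayscale X-ray
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"), # P(COVID-19)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# One plausible reading of a patient-level "corona score": the mean
# predicted probability over all of a patient's X-ray examinations, e.g.
# corona_score = model.predict(patient_images).mean()
```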
Chaotic features of the nuclear energy spectrum of the 68Ge nucleus are investigated within the nuclear shell model. The energies are calculated through shell-model calculations performed with the OXBASH computer code using the F5PVH effective interaction. The 68Ge nucleus is treated as an inert 56Ni core with 12 nucleons (4 protons and 8 neutrons) moving in the f5p model space (the p3/2, f5/2, and p1/2 orbitals). The nuclear level density of the considered classes of states is found to have a Gaussian form, in accord with the predictions of other theoretical studies. The statistical fluctuations of the energy spectrum (the nearest-neighbor level spacing distribution P(s) and the Dyson-Mehta Delta_3 statistic) are well described by the Gaussian orthogonal ensemble (GOE).
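For reference, the sketch below computes the nearest-neighbor spacing histogram of a spectrum and the GOE (Wigner surmise) and Poisson curves it is compared against. The rescale-to-unit-mean unfolding is the simplest possible choice, and the synthetic `levels` array is a placeholder for the shell-model energies of one symmetry class.

```python
import numpy as np

# Nearest-neighbor level-spacing analysis (illustrative sketch).
# `levels` stands in for shell-model energies of one symmetry class.
rng = np.random.default_rng(0)
levels = np.sort(rng.normal(size=500))      # placeholder spectrum

# Simplest unfolding: rescale spacings to unit mean. (A full analysis
# would unfold with the smooth level density, Gaussian per the abstract.)
s = np.diff(levels)
s = s / s.mean()

# Reference distributions the histogram of `s` is compared against:
wigner = lambda s: (np.pi / 2) * s * np.exp(-np.pi * s**2 / 4)  # GOE
poisson = lambda s: np.exp(-s)                                  # regular

hist, edges = np.histogram(s, bins=25, range=(0, 3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.c_[centers, hist, wigner(centers), poisson(centers)][:5])
```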
A set of economic factors affects the rationalization of decisions on unexploited resources within the economic unit. The research problem is therefore to determine which economic factors cause the emergence of asymmetric costs. The study aims to identify these factors, namely the costs of adjusting resources, the change in the activity level of the economic unit, the general trend of sales change in the previous period, and the economic level of the country, to measure their impact on the economic unit, and to take that impact into consideration when formulating decisions.
Researchers' interest has increased in recent years in determining the optimum sample size that yields sufficient estimation accuracy and high-precision parameters when evaluating a large number of diagnostic tests at the same time. In this research, two methods were used to determine the optimum sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is then estimated, at the sample size given by each method, on high-dimensional data using an artificial intelligence technique, the artificial neural network (ANN), as it gives a high-precision estimate commensurate with the data.
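A minimal sketch of the Bennett-inequality approach to sample size is given below, under the assumption of independent observations bounded by b with variance sigma^2; the bound solved for n and the Bernoulli-style example values are illustrative, not the paper's exact formulation.

```python
import math

# Sample size from Bennett's inequality (sketch, assuming observations
# bounded by b with variance sigma2): for the sample mean to lie within
# eps of the true mean with probability at least 1 - delta,
#   n >= ln(2/delta) / ((sigma2 / b**2) * h(b*eps / sigma2)),
# where h(u) = (1 + u) * ln(1 + u) - u.
def bennett_sample_size(eps, delta, b, sigma2):
    h = lambda u: (1.0 + u) * math.log(1.0 + u) - u
    return math.ceil(math.log(2.0 / delta)
                     / ((sigma2 / b**2) * h(b * eps / sigma2)))

# Example: Bernoulli-like data (b = 1, variance at most 0.25),
# tolerance 0.05 at 95% confidence -> n on the order of 800.
print(bennett_sample_size(eps=0.05, delta=0.05, b=1.0, sigma2=0.25))
```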
Cancer stem cells (CSCs) are defined as a population of cells present in tumours that can undergo self-renewal and differentiation. The identification and isolation of these CSCs using putative surface markers have been a research priority in cancer. With this background, we selected normal and tumour pancreatic cells for this study and passaged them in animal tissue culture medium. Staining was then performed with alkaline phosphatase and haematoxylin. Blue to purple coloured zones in undifferentiated pluripotent stem cells and clear colouration of the chromatin material indicated pancreatic cells. Further studies of the cell surface marker CD44 were performed using ELISA. For this, the protein was extracted from the cultivated normal and tumour cells.