Shadow removal is crucial for robot and machine vision, as the accuracy of object detection is strongly affected by uncertainty and ambiguity in the visual scene. In this paper, we introduce a new algorithm for shadow detection and removal based on Gaussian functions of different shapes, orientations, and spatial extents. The contrast information of the visual scene is exploited for shadow detection and removal through five consecutive processing stages. In the first stage, contrast filtering is performed to obtain the contrast information of the image. The second stage applies a normalization process that suppresses noise and balances the intensity at each position against its neighboring intensities. In the third stage, the boundary of the target object is extracted, and in the fourth and fifth stages, respectively, the region of interest (ROI) is highlighted and reconstructed. Our model was tested and evaluated on realistic scenarios, including outdoor and indoor scenes. The results demonstrate the ability of our approach to detect and remove shadows and to reconstruct a shadow-free image with a small error of approximately 6%.
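The abstract does not specify the exact filters, so the sketch below is a hypothetical illustration of the first two stages only, assuming a center-surround (difference-of-Gaussians) contrast filter and divisive normalization; all sigma values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_filter(image, sigma_center=1.0, sigma_surround=4.0):
    """Stage 1 (sketch): local contrast via a center-surround
    difference of two Gaussian blurs of different spatial extent."""
    img = image.astype(float)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

def normalize(contrast, sigma_norm=8.0, eps=1e-6):
    """Stage 2 (sketch): divisive normalization that balances each
    response against the energy of its neighborhood, suppressing noise."""
    local_energy = np.sqrt(gaussian_filter(contrast ** 2, sigma_norm))
    return contrast / (local_energy + eps)

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[20:40, 20:40] = 1.0          # bright square on a dark background
    response = normalize(contrast_filter(img))
    print(response.shape)
```

The boundary-extraction, ROI-highlighting, and reconstruction stages would build on the normalized contrast map produced here.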
Convolutional Neural Networks (CNNs) achieve high performance in object recognition and classification. Their strength comes from the ability to extract information from raw pixel content and learn features automatically. Feature extraction and classification algorithms can be either hand-crafted or Deep Learning (DL) based. DL detection approaches are either two-stage (region proposal) detectors or single-stage (non-region proposal) detectors. Region proposal-based techniques include R-CNN, Fast R-CNN, and Faster R-CNN. Non-region proposal-based techniques include the Single Shot Detector (SSD) and You Only Look Once (YOLO). We are going to compare the speed and accuracy of Faster R-CNN,
In the present work, strong-lensing observations of several gravitational lenses are used to study the geometry of the universe and to constrain the physics and size of the lensed quasars. The first step was to study the geometry of the lensing system to determine the relation between the redshift of the observed systems and their distances. The second step was to compare the angular diameter distances (DA) calculated for the Euclidean case with those from the Friedmann models, and then to estimate the diameter of the lensing system. The results indicate that the phenomenon is governed by the ratio of the distance between lens and source to the diameter of the lens.
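The geometric relations underlying such strong-lensing analyses are standard; the following is a hedged sketch of them, where the notation (D_L, D_S, D_LS for the angular diameter distances to the lens, to the source, and from lens to source) is an assumption about the paper's conventions.

```latex
% A source of physical size d subtending angle \theta at angular
% diameter distance D_A satisfies
d = \theta \, D_A .

% Einstein radius of a lens of mass M:
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}} .

% Euclidean (low-redshift) approximation versus a matter-dominated
% (Einstein--de Sitter) Friedmann model:
D_A \approx \frac{cz}{H_0},
\qquad
D_A = \frac{2c}{H_0}\,\frac{1}{1+z}\left(1 - \frac{1}{\sqrt{1+z}}\right).
```

Comparing the two expressions for D_A at the measured redshifts, and then applying d = θD_A, is one standard route to the lens-diameter estimate described above.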
Copper telluride thin films of 700 nm and 900 nm thickness were prepared by thermal evaporation onto cleaned Si substrates held at 300 K under a vacuum of about 4×10⁻⁵ mbar. XRD analysis and atomic force microscopy (AFM) measurements were used to study the structural properties. The sensitivity (S) of the fabricated sensors to NO2 and H2 was measured at room temperature. The experimental relationship between S and the thickness of the sensitive film was investigated, and higher S values were recorded for the thicker sensors. The results showed that the best sensitivity was obtained for the Cu2Te film of 900 nm thickness exposed to H2 gas.
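The abstract does not state the formula used for S; a minimal sketch assuming the common convention for resistive gas sensors is shown below, with illustrative (not measured) resistance values.

```python
def sensitivity(r_air, r_gas):
    """Assumed convention for resistive gas sensors:
    S (%) = |R_air - R_gas| / R_air * 100.
    The paper's actual definition may differ."""
    return abs(r_air - r_gas) / r_air * 100.0

if __name__ == "__main__":
    # Hypothetical baseline and on-gas resistances for a thicker film:
    print(sensitivity(1.2e6, 0.8e6))   # about 33.3 %
```

Under this definition, the reported trend corresponds to the 900 nm film producing a larger relative resistance change under H2 than the 700 nm film.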
A Botnet is one of many attacks that can execute malicious tasks and that develops continuously. This research therefore introduces a comparison framework, called BotDetectorFW, with classification and complexity improvements for the detection of Botnet attacks using the CICIDS2017 dataset, a free online dataset consisting of several attacks with high-dimensional features. Feature selection is a significant step for obtaining the fewest features by eliminating irrelevant ones, and consequently reducing the detection time. This process is implemented inside BotDetectorFW using two steps: data clustering and five distance measure formulas (cosine, dice, Driver & Kroeber, overlap, and Pearson correlation).
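The exact formulas used inside BotDetectorFW are not given in the abstract; the sketch below implements the five named measures under common textbook definitions (a, b, c are the usual 2×2 contingency counts for binary feature vectors), so each formula should be read as an assumption rather than the framework's implementation.

```python
import math

def _counts(x, y):
    """Contingency counts for binary vectors: a = 1/1, b = 1/0, c = 0/1."""
    a = sum(1 for xi, yi in zip(x, y) if xi and yi)
    b = sum(1 for xi, yi in zip(x, y) if xi and not yi)
    c = sum(1 for xi, yi in zip(x, y) if not xi and yi)
    return a, b, c

def cosine(x, y):
    dot = sum(xi * yi for xi, yi in zip(x, y))
    nx = math.sqrt(sum(xi * xi for xi in x))
    ny = math.sqrt(sum(yi * yi for yi in y))
    return dot / (nx * ny)

def dice(x, y):
    a, b, c = _counts(x, y)
    return 2 * a / (2 * a + b + c)

def driver_kroeber(x, y):
    # Commonly given as a/sqrt((a+b)(a+c)), which coincides with the
    # cosine (Ochiai) coefficient on binary data.
    a, b, c = _counts(x, y)
    return a / math.sqrt((a + b) * (a + c))

def overlap(x, y):
    a, b, c = _counts(x, y)
    return a / min(a + b, a + c)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
    return cov / (sx * sy)
```

In a feature-selection setting, measures like these score how redundant two binary feature columns are, so that near-duplicate features within a cluster can be dropped to reduce detection time.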