Segmentation of urban features is considered a major research challenge in the fields of photogrammetry and remote sensing. However, the dense datasets now readily available through airborne laser scanning (ALS) offer increased potential for 3D object segmentation. Such potential is further augmented by the availability of full-waveform (FWF) ALS data. FWF ALS has demonstrated enhanced performance in segmentation and classification through the additional physical observables which can be provided alongside standard geometric information. However, use of FWF information is not recommended without prior radiometric calibration, taking into account all parameters affecting the backscatter energy. This paper reports the implementation of a radiometric calibration workflow for FWF ALS data, and demonstrates how the resultant FWF information can be used to improve segmentation of an urban area. The developed segmentation algorithm presents a novel approach which uses the calibrated backscatter cross-section as a weighting function to estimate the segmentation similarity measure. The normal vector and the local Euclidean distance are used as criteria to segment the point clouds through a region growing approach. The paper demonstrates the potential to enhance 3D object segmentation in urban areas by integrating the FWF physical backscattered energy alongside geometric information. The method is demonstrated through application to an area of interest sampled from a relatively dense FWF ALS dataset. The results are assessed through comparison to those delivered from utilising only geometric information. Validation against a manual segmentation demonstrates a successful automatic implementation, achieving a segmentation accuracy of 82% and outperforming a purely geometric approach.
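The weighting scheme described above can be illustrated with a short, hedged sketch: a region-growing loop in which neighbouring points are merged when their normals agree and their calibrated backscatter cross-sections are similar, with the cross-section acting as a weight on the similarity measure. The array layout, thresholds and exact weighting form are illustrative assumptions, not the paper's precise formulation.

```python
# Hedged sketch: region growing over an ALS point cloud in which the similarity
# measure combines normal-vector agreement, local Euclidean distance and a
# weight derived from the calibrated backscatter cross-section.
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def grow_regions(points, normals, cross_section,
                 k=12, dist_thresh=0.5, angle_thresh_deg=10.0, sigma_w=0.3):
    """Label each point with a segment id (-1 = unassigned)."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    seg = 0
    for seed in np.argsort(cross_section)[::-1]:   # strong echoes first (assumption)
        if labels[seed] != -1:
            continue
        labels[seed] = seg
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            dists, idx = tree.query(points[i], k=k)
            for d, j in zip(dists, idx):
                if labels[j] != -1 or d > dist_thresh:
                    continue
                # geometric criterion: agreement of local normals
                cos_angle = abs(np.dot(normals[i], normals[j]))
                # radiometric weight: similar cross-section boosts similarity
                w = np.exp(-abs(cross_section[i] - cross_section[j]) /
                           (sigma_w * max(cross_section[i], 1e-6)))
                if cos_angle * w >= cos_thresh:
                    labels[j] = seg
                    queue.append(j)
        seg += 1
    return labels
```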
The Dagum Regression Model, introduced to address limitations in traditional econometric models, provides enhanced flexibility for analyzing data characterized by heavy tails and asymmetry, which is common in income and wealth distributions. This paper develops and applies the Dagum model, demonstrating its advantages over other distributions such as the Log-Normal and Gamma distributions. The model's parameters are estimated using Maximum Likelihood Estimation (MLE) and the Method of Moments (MoM). A simulation study evaluates both methods' performance across various sample sizes, showing that MoM tends to offer more robust and precise estimates, particularly in small samples. These findings provide valuable insights into the ana
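As a hedged illustration of the MLE route mentioned above, the sketch below fits the three-parameter Dagum Type I density by numerical maximisation of the log-likelihood; the synthetic income-like sample and the starting values are assumptions for demonstration only.

```python
# Hedged sketch: maximum-likelihood fit of the Dagum (Burr III) distribution.
# The pdf is the standard three-parameter Dagum Type I form; data are synthetic.
import numpy as np
from scipy.optimize import minimize

def dagum_logpdf(x, a, b, p):
    # f(x) = (a*p/x) * (x/b)**(a*p) / (1 + (x/b)**a)**(p+1),  x > 0
    z = x / b
    return (np.log(a) + np.log(p) - np.log(x)
            + a * p * np.log(z) - (p + 1) * np.log1p(z**a))

def fit_dagum_mle(x):
    # optimise over log-parameters so that a, b, p stay positive
    nll = lambda th: -np.sum(dagum_logpdf(x, *np.exp(th)))
    res = minimize(nll, x0=np.log([1.0, np.median(x), 1.0]), method="Nelder-Mead")
    return np.exp(res.x)   # (a_hat, b_hat, p_hat)

# usage with a placeholder income-like sample
rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10, sigma=0.8, size=500)   # stand-in data
print(fit_dagum_mle(incomes))
```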
Current studies on peroxidase (POD) indicate its importance as a tool in clinical biochemistry and in a range of industrial fields. Most of these studies used fruits and vegetables as the source of this enzyme. In order to couple the growing requirements for POD with the recent demand for reducing disposal volumes by recycling plant waste, the aim of the present study was to extract POD through the management of municipal bio-waste of Iraqi maize species. A simple, green and economical method was used to extract this enzyme. Our results revealed that maize cobs are a rich source of POD, where the activity of this enzyme was found to be 7035.54 U/g of cobs. In pilot experiments thi
Cognitive radios have the potential to greatly improve spectral efficiency in wireless networks. Cognitive radios are considered lower-priority or secondary users of spectrum allocated to a primary user. Their fundamental requirement is to avoid interference to potential primary users in their vicinity. Spectrum sensing has been identified as a key enabling functionality to ensure that cognitive radios would not interfere with primary users, by reliably detecting primary user signals. In addition, reliable sensing creates spectrum opportunities for capacity increase of cognitive networks. One of the key challenges in spectrum sensing is the robust detection of primary signals in highly negative signal-to-noise ratio (SNR) regimes. In this paper,
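For context, the sketch below implements a basic energy detector, the textbook spectrum-sensing baseline; the paper's own detector for highly negative SNR regimes may well differ, and the noise variance, false-alarm target and toy signal are assumptions.

```python
# Hedged sketch: energy detection spectrum sensing with a threshold set from
# the noise variance for a target false-alarm probability (Gaussian
# approximation of the test statistic under the noise-only hypothesis).
import numpy as np
from scipy.stats import norm

def energy_detect(samples, noise_var, pfa=0.01):
    """Return True if a primary-user signal is declared present."""
    n = len(samples)
    energy = np.sum(np.abs(samples) ** 2)
    threshold = noise_var * (n + np.sqrt(2 * n) * norm.ppf(1 - pfa))
    return energy > threshold

# usage: weak antipodal signal at roughly -10 dB SNR buried in unit-variance noise
rng = np.random.default_rng(1)
noise = rng.normal(0, 1, 4096)
signal = 0.3 * np.sign(rng.standard_normal(4096))
print(energy_detect(noise + signal, noise_var=1.0))
```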
Globally, sustainability is quickly becoming a fundamental requirement of the construction industry as it delivers its projects, whether buildings or infrastructure. Throughout more than two decades, many modeling schemes, evaluation tools, and rating systems have been introduced en route to realizing sustainable construction. Many of these, however, lack consensus on evaluation criteria and a robust scientific model that captures the logic behind their sustainability performance evaluation, and therefore experience discrepancies between rated results and actual performance. Moreover, very few of the available evaluation tools satisfactorily address infrastructure projects. The res
In this research, X-ray diffraction results were used to determine the uniform stress deformation and microstructural parameters of CuO nanoparticles, namely the crystallite size and the lattice strain. Two models, the Halder-Wagner model and the Size-Strain Plot (SSP) method, were applied to the same powder and their results compared. The Halder-Wagner model yielded a crystallite size of 19.81 nm and a lattice strain of 0.004065, while the SSP method gave a crystallite size of 17.20 nm and a lattice strain of 0.000305. The sa
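As a hedged illustration of how such a model is applied in practice, the sketch below performs a Halder-Wagner-style linear fit, extracting crystallite size from the slope and lattice strain from the intercept. The shape constant, wavelength and peak data are assumptions, and conventions for the exact transform vary between references, so the numbers are illustrative only.

```python
# Hedged sketch: Halder-Wagner-style fit using the commonly quoted form
# (beta*/d*)^2 = (K/D)*(beta*/d*^2) + (eps/2)^2, with beta* = beta*cos(theta)/lam
# and d* = 2*sin(theta)/lam. K and unit conventions differ between references.
import numpy as np

def halder_wagner_fit(two_theta_deg, beta_rad, lam_nm=0.15406, K=1.0):
    theta = np.radians(two_theta_deg) / 2.0
    beta_star = beta_rad * np.cos(theta) / lam_nm      # 1/nm
    d_star = 2.0 * np.sin(theta) / lam_nm              # 1/nm
    x = beta_star / d_star**2
    y = (beta_star / d_star)**2
    slope, intercept = np.polyfit(x, y, 1)             # straight-line fit
    size_nm = K / slope                                # crystallite size from slope
    strain = 2.0 * np.sqrt(max(intercept, 0.0))        # lattice strain from intercept
    return size_nm, strain

# usage with made-up CuO-like peak positions (deg 2theta) and FWHM values (rad)
two_theta = np.array([32.5, 35.5, 38.7, 48.8, 61.5])
fwhm = np.radians(np.array([0.45, 0.42, 0.44, 0.50, 0.55]))
print(halder_wagner_fit(two_theta, fwhm))
```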
Recently, complementary perfect corona domination in graphs was introduced. A dominating set S of a graph G is said to be a complementary perfect corona dominating set (CPCD-set) if each vertex in ⟨S⟩ is either a pendant vertex or a support vertex and ⟨V − S⟩ has a perfect matching. The minimum cardinality of a complementary perfect corona dominating set is called the complementary perfect corona domination number. In this paper, this parameter is discussed for power graphs of paths and cycles.
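As a hedged illustration, the sketch below checks the CPCD property for a candidate set using networkx, under the reading of the definition given above (every vertex of ⟨S⟩ is a pendant or support vertex in ⟨S⟩ and ⟨V − S⟩ admits a perfect matching); that reading is an assumption based on the abstract.

```python
# Hedged sketch: brute-force check of the CPCD property as reconstructed above.
import networkx as nx

def is_cpcd_set(G, S):
    S = set(S)
    if not nx.is_dominating_set(G, S):
        return False
    H = G.subgraph(S)                       # induced subgraph <S>
    pendants = {v for v in H if H.degree(v) == 1}
    supports = {v for v in H if any(u in pendants for u in H.neighbors(v))}
    if any(v not in pendants and v not in supports for v in H):
        return False
    R = G.subgraph(set(G) - S)              # induced subgraph <V - S>
    matching = nx.max_weight_matching(R, maxcardinality=True)
    return nx.is_perfect_matching(R, matching)

# usage: the path P6 with candidate set {0, 1, 4, 5}
P6 = nx.path_graph(6)                       # vertices 0..5
print(is_cpcd_set(P6, {0, 1, 4, 5}))        # True under the stated reading
```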
Advances in digital technology and the World Wide Web have led to an increase in digital documents that are used for various purposes such as publishing and digital libraries. This phenomenon highlights the need for effective techniques that can help during the search and retrieval of text. One of the most needed tasks is clustering, which categorizes documents automatically into meaningful groups. Clustering is an important task in data mining and machine learning. The accuracy of clustering depends tightly on the selection of the text representation method. Traditional methods of text representation model documents as bags of words using term frequency-inverse document frequency (TFIDF). This method ignores the relationship an
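The baseline representation the abstract refers to can be sketched as follows: TFIDF document vectors clustered with k-means in scikit-learn. The toy documents and the number of clusters are illustrative assumptions.

```python
# Hedged sketch: the bag-of-words baseline described above, i.e. TFIDF vectors
# clustered with k-means; real document collections and k would differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "the library digitised its historical archives",
    "digital archives support online publishing",
    "the striker scored twice in the final match",
    "the match ended after extra time",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                 # sparse TFIDF matrix

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                             # cluster id per document
```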