In this work, an optical technique (the laser speckle technique) was applied to measure surface roughness by using the statistical properties of the speckle pattern from the point of view of computer image texture analysis. Four calibration relationships were used to cover a wide measurement range with the same laser speckle technique: the first is based on the intensity contrast of the speckle, the second on analysis of the speckle binary image, the third on the size of the speckle pattern spot, and the fourth on the energy feature of the gray-level co-occurrence matrices (GLCM) of the speckle pattern. With these calibration relationships, the surface roughness of an object can be evaluated, within the ranges of the relations, from a single speckle pattern image taken from the surface.
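As a rough illustration of two of these features, the sketch below computes the speckle intensity contrast and the GLCM energy for a grayscale speckle image held in a NumPy array. The function names, gray-level count, and the horizontal pixel offset are illustrative assumptions; the empirical calibration curves themselves are not reproduced here.

```python
import numpy as np

def speckle_contrast(img):
    """Intensity contrast of a speckle pattern: C = std(I) / mean(I)."""
    img = img.astype(np.float64)
    return img.std() / img.mean()

def glcm_energy(img, levels=32):
    """Energy (angular second moment) of the gray-level co-occurrence
    matrix, computed for a horizontal pixel offset of 1."""
    img = img.astype(np.float64)
    # Quantize to a small number of gray levels so the GLCM stays dense.
    q = np.floor(img / (img.max() + 1e-12) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()                # normalize to joint probabilities
    return float(np.sum(glcm ** 2))  # energy = sum of squared entries
```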
In this paper we present the first measured experimental electron momentum density of Cu2Sb, obtained at intermediate resolution (0.6 a.u.) using a 59.54 keV 241Am Compton spectrometer. The measurements are compared with theoretical Compton profiles computed using density functional theory (DFT) within a linear combination of atomic orbitals (LCAO) method. In the DFT calculation, the Perdew-Burke-Ernzerhof (PBE) scheme is employed to treat correlation, whereas exchange is included following the Becke scheme. The various approximations within LCAO-DFT show relatively better agreement with the experimental Compton data. Ionic model calculations for a number of configurations (Cu+x/2)2(Sb-x) (0.0 ≤ x ≤ 2.0) are also performed utilizing free-atom Compton profiles.
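For reference, the quantity measured in such an experiment is the directional Compton profile, the twice-integrated electron momentum density n(p) over the momentum components perpendicular to the scattering vector:

```latex
J(p_z) = \iint n(\mathbf{p})\,\mathrm{d}p_x\,\mathrm{d}p_y
```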
Image segmentation can be defined as the process of cutting or partitioning a digital image into many useful parts, called segments, whose image elements share certain attributes that differ from the pixels constituting other parts. The researcher followed two phases of image processing in this paper. In the first phase, the images were pre-processed before segmentation using the statistical confidence intervals for estimating unknown observations proposed by Acho & Buenestado in 2018. In the second phase, the images were segmented using Bernsen's thresholding technique. The researcher drew the conclusion that, in the case of utilizing
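A minimal sketch of Bernsen's local thresholding follows, assuming an 8-bit grayscale image in a NumPy array. The window size, contrast limit, and global fallback threshold are assumed parameters, and the confidence-interval pre-processing step of Acho & Buenestado is not reproduced here.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def bernsen_threshold(img, window=15, contrast_min=15, global_t=128):
    """Bernsen thresholding: T(x, y) = (local max + local min) / 2.
    Windows with too little contrast fall back to a global threshold."""
    img = img.astype(np.float64)
    z_hi = maximum_filter(img, size=window)
    z_lo = minimum_filter(img, size=window)
    local_t = (z_hi + z_lo) / 2.0
    low_contrast = (z_hi - z_lo) < contrast_min
    t = np.where(low_contrast, global_t, local_t)
    return (img > t).astype(np.uint8) * 255
```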
Information security is a crucial factor when communicating sensitive information between two parties, and steganography is one of the techniques most widely used for this purpose. This paper aims to enhance the capacity and robustness of information hiding by compressing image data to a small size while maintaining high quality, so that the secret information remains invisible and only the sender and recipient can recognize the transmission. Three techniques are employed to conceal color and gray images: the Wavelet Color Process Technique (WCPT), the Wavelet Gray Process Technique (WGPT), and the Hybrid Gray Process Technique (HGPT). A comparison between the first and second techniques is made according to the quality metrics Root-Mean-Square Error (RMSE) and Compression-
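The WCPT/WGPT/HGPT pipelines are not specified in this excerpt; as a generic stand-in, the sketch below shows wavelet-domain compression by coefficient thresholding (using the PyWavelets package) together with the RMSE metric mentioned above. The wavelet, decomposition level, and kept coefficient fraction are assumed values.

```python
import numpy as np
import pywt

def rmse(original, reconstructed):
    """Root-mean-square error between cover and reconstructed image."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def wavelet_compress(img, wavelet="haar", level=2, keep=0.05):
    """Crude wavelet compression: zero all but the largest `keep`
    fraction of detail coefficients, then reconstruct."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(kept, wavelet)
```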
Copper and its alloys and composites (with copper as the matrix) are broadly used in electronic as well as bearing materials due to their excellent thermal and electrical conductivities.
In this study, the powder metallurgy technique was used to produce copper-graphite composites at three volume percentages of graphite. The selected processing parameters were a sintering temperature of 900 °C and a holding time of 90 minutes, with the samples heated in an inert atmosphere (argon gas). Wear test results showed a pronounced improvement in wear resistance as the content of graphite, which acts as a solid lubricant, increased (the wear rate decreased by about 88% compared with pure Cu). Microhardness and
Three-dimensional (3D) image and medical image processing, which are considered forms of big data analysis, have attracted significant attention during the last few years. To this end, efficient 3D object recognition techniques could be beneficial to such image and medical image processing. To date, however, most of the proposed methods for 3D object recognition face major challenges of high computational complexity, because computational cost and execution time grow as the dimensions of the object increase, which is the case in 3D object recognition. Therefore, finding an efficient method for obtaining high recognition accuracy with low computational complexity is essential.
Fractal image compression depends on representing an image using affine transformations. The main concern for researchers in the discipline of fractal image compression (FIC) is to decrease the encoding time needed to compress image data. The basic premise is that each portion of the image is similar to other portions of the same image. Many models have been developed for this process. Fractals were initially observed and handled using Iterated Function Systems (IFS), which are used for encoding images. In this paper, a review of fractal image compression and its variants is presented along with other techniques. A summarized review of contributions is given to determine the fulfillment of fractal image compression.
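In the standard block-based formulation (a general statement, not specific to this review), each range block R is approximated by a contractively transformed domain block D, with contrast scaling s and brightness offset o fitted by least squares over the N pixels of the block:

```latex
R \approx s\,\tau(D) + o\,\mathbf{1}, \qquad |s| < 1,
\quad\text{with}\quad
s = \frac{N\sum_i d_i r_i - \sum_i d_i \sum_i r_i}
         {N\sum_i d_i^2 - \left(\sum_i d_i\right)^2},
\qquad
o = \frac{1}{N}\left(\sum_i r_i - s\sum_i d_i\right)
```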
The aim of this research is to study the surface alteration characteristics and surface morphology of superhydrophobic/hydrophobic nanocomposite coatings prepared by an electrospinning method to coat various materials such as glass and metal. Electrospinning is considered a low-cost fabrication method for polymer solutions of polystyrene (PS), polymethylmethacrylate (PMMA), and silicone rubber (RTV, denoted Si). Solutions of each polymer were prepared at various wt% compositions. Contact angle, surface tension, viscosity, and roughness were measured for all specimens, and SEM showed the morphology of the surfaces after coating. PS and PMMA showed superhydrophobic properties on the metal substrate, while Si showed hydrophobic properties.
Fractal image compression offers desirable properties such as fast decoding and very good rate-distortion curves, but suffers from a long encoding time. In fractal image compression, the image must be partitioned into range blocks. In this work, we introduce an improved partitioning process by means of a merge approach, since some ranges are connected to others. This paper presents a method to reduce the encoding time of this technique by reducing the number of range blocks based on computing statistical measures between them. Experimental results on standard images show that the proposed method decreases the encoding time while the quality of the results remains visually acceptable.
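The abstract does not give the exact merge criterion, so the sketch below only illustrates the general idea under assumed tolerances: compute per-block statistics, then flag horizontally adjacent range blocks that are statistically similar enough to be merged, so a merged pair shares one domain search instead of two.

```python
import numpy as np

def block_stats(img, size=8):
    """Mean and variance of each non-overlapping size x size range block."""
    h, w = img.shape
    blocks = (img[:h - h % size, :w - w % size]
              .reshape(h // size, size, w // size, size)
              .swapaxes(1, 2))                    # (rows, cols, size, size)
    return blocks.mean(axis=(2, 3)), blocks.var(axis=(2, 3))

def merge_mask(means, variances, tol_mean=2.0, tol_var=4.0):
    """Flag range blocks whose right-hand neighbour is similar enough
    (in mean and variance) to be merged into one larger range."""
    dm = np.abs(np.diff(means, axis=1))
    dv = np.abs(np.diff(variances, axis=1))
    return (dm < tol_mean) & (dv < tol_var)
```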
This study discussed a biased estimator of the negative binomial regression model known as the Liu estimator. This estimator was used to reduce variance and overcome the problem of multicollinearity between explanatory variables. Estimators such as the ridge regression and maximum likelihood estimators were also used. This research aims at theoretical comparisons between the new estimator (the Liu estimator) and these estimators.
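For context, the classical Liu estimator shrinks the OLS solution with a parameter 0 < d < 1; a commonly used negative binomial analogue (an assumed form here, built on the ML estimator and the weighted information matrix from the final scoring step, and not necessarily the exact form studied in this paper) reads:

```latex
\hat{\beta}_d = (X'X + I)^{-1}(X'X + dI)\,\hat{\beta}_{\mathrm{OLS}},
\qquad 0 < d < 1,
\qquad
\hat{\beta}_{\mathrm{Liu}} = (X'\hat{W}X + I)^{-1}(X'\hat{W}X + dI)\,\hat{\beta}_{\mathrm{ML}}
```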