The demand for single-photon sources in quantum key distribution (QKD) systems has necessitated the use of weak coherent pulses (WCPs), which are characterized by a Poissonian photon-number distribution. Ensuring security against eavesdropping attacks requires keeping the mean photon number (µ) small and known to the legitimate parties. However, accurately determining µ poses challenges due to discrepancies between theoretical calculations and practical implementation. This paper introduces two experiments. The first involves theoretical calculation of µ using several filters to generate the WCPs. The second uses a variable attenuator to generate the WCPs, with µ estimated from the photons detected by the BB84 detection setup. The second experiment provides an accurate estimate of µ because it uses single-photon detectors with high timing resolution and low dark counts, together with a time-to-digital converter with a bin size of 81 ps.
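The security concern above follows from the Poissonian statistics of WCPs: for small µ most non-empty pulses carry exactly one photon, but a residual multi-photon fraction remains and is what an eavesdropper can exploit. A minimal sketch of these probabilities (the value of µ below is an illustrative assumption, not taken from the paper):

```python
from math import exp, factorial

def poisson_p(n, mu):
    """Probability that a weak coherent pulse contains exactly n photons."""
    return mu**n * exp(-mu) / factorial(n)

mu = 0.1                      # assumed mean photon number, typical for WCP BB84
p0 = poisson_p(0, mu)         # empty pulse
p1 = poisson_p(1, mu)         # single-photon pulse
p_multi = 1 - p0 - p1         # two or more photons: the eavesdropping risk
print(f"P(0)={p0:.4f}, P(1)={p1:.4f}, P(n>=2)={p_multi:.4f}")
```

At µ = 0.1 fewer than 0.5% of pulses are multi-photon, which is why QKD implementations keep µ well below 1 at the cost of many empty pulses.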
The effect of some environmental factors on the rate of weight loss in full-grown virgin females of the Old World screwworm fly was studied at temperatures of 15, 20, 25, 30, 35, and 40 °C. The study showed that the rate of weight loss in full-grown virgins increased at a temperature of 15 °C, while development did not proceed.
Achieving reliable operation under the influence of deep-submicrometer noise sources, including crosstalk noise at low-voltage operation, is a major challenge for network-on-chip links. In this paper, we propose a coding scheme that simultaneously addresses crosstalk effects on signal delay and detects up to seven random errors, through wire duplication and simple parity checks calculated over the rows and columns of the two-dimensional data. This high error-detection capability enables the operating voltage on the wire to be reduced, leading to energy savings. The results show that the proposed scheme reduces energy consumption by up to 53% compared to other schemes at iso-reliability performance, despite the increase in the overhead number o
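The row/column parity idea can be illustrated with a generic two-dimensional parity check. This is a simplified sketch only: the paper's actual scheme also duplicates wires and targets crosstalk-induced delay, which are not modeled here, and the data layout is assumed.

```python
def parity_bits(data):
    """Compute row and column parities over a 2-D bit matrix.

    Generic 2-D parity sketch; NOT the exact duplication-plus-parity
    code proposed in the paper.
    """
    rows = [sum(r) % 2 for r in data]
    cols = [sum(c) % 2 for c in zip(*data)]
    return rows, cols

def has_error(received, rows, cols):
    """Return True if any row or column parity check fails."""
    r2, c2 = parity_bits(received)
    return r2 != rows or c2 != cols

data = [[1, 0, 1, 1],
        [0, 1, 1, 0],
        [1, 1, 0, 0]]
rows, cols = parity_bits(data)        # sent alongside the data

corrupted = [row[:] for row in data]
corrupted[1][2] ^= 1                  # flip one bit "in transit"
print(has_error(data, rows, cols))       # False: clean transfer
print(has_error(corrupted, rows, cols))  # True: error detected
```

A single flipped bit violates exactly one row parity and one column parity, so it is always detected; crossing row and column checks is what lets 2-D parity catch error patterns that a single parity bit would miss.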
The prevalence of Internet of Things (IoT) applications in many fields of human life, such as the economy, social life, and healthcare, has made IoT devices targets for many cyber-attacks. Moreover, the resource limitations of IoT devices, such as small battery power, limited storage capacity, and low computation speed, make their security a big challenge for researchers. Therefore, this study proposes a new technique called an intrusion detection system based on a spiking neural network and decision tree (IDS-SNNDT). In this method, the DT is used to select the optimal samples that are then fed as input to the SNN, while the SNN utilizes the non-leaky integrate neurons fire (NLIF) model in order to reduce latency and minimize devices
In geotechnical and terramechanical engineering applications, a precise understanding of off-road structures interacting with complex soil profiles has yet to be established. Several theoretical and experimental approaches have been used to measure the ultimate bearing capacity of layered soil, but with significant differences depending on the assumed failure mechanisms. Furthermore, local displacement fields in layered soils have not yet been studied well. Here, the bearing capacity of a dense sand layer overlying loose sand beneath a rigid beam is studied under the plane-strain condition. The study employs digital particle image velocimetry (DPIV) and finite element method (FEM) simulations. In the FEM, an experiment
Natural gas and oil are among the mainstays of the global economy. However, many issues surround the pipelines that transport these resources, including aging infrastructure, environmental impacts, and vulnerability to sabotage operations. Such issues can result in leakages in these pipelines, requiring significant effort to detect and pinpoint their locations. The objective of this project is to develop and implement a method for detecting oil spills caused by leaking oil pipelines using aerial images captured by a drone equipped with a Raspberry Pi 4. Using the message queuing telemetry transport Internet of Things (MQTT IoT) protocol, the acquired images and the global positioning system (GPS) coordinates of the images' acquisition are
Experimental measurements of the viscosity and thermal conductivity of a single-layer graphene-based DI-water nanofluid are performed as a function of concentration (0.1-1 wt%) and temperature (5 to 35 °C). The results reveal that the thermal conductivity of the GNP nanofluid increased with increasing nanoparticle weight-fraction concentration and temperature, with a maximum enhancement of about 22% for a concentration of 1 wt% at 35 °C. These experimental results were compared with some theoretical models, and good agreement between Nan's model and the experimental results was observed. The viscosity of the graphene nanofluid displays Newtonian and non-Newtonian behaviors with respect to nanoparticle concen
In this article, we develop a new loss function as a simplification of the linear exponential (LINEX) loss function, obtained by weighting the LINEX function. We derive estimators of the scale parameter, reliability, and hazard functions based on upper record values of the Lomax distribution (LD). To study the small-sample performance of the proposed loss function, we use a Monte Carlo simulation to compare the maximum likelihood estimator, the Bayesian estimator under the LINEX loss function, and the Bayesian estimator under the squared error (SE) loss function. The results show that the modified method is the best for estimating the scale parameter, reliability, and hazard functions.
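For reference, the standard LINEX loss that the proposed function modifies is commonly written as follows (this is the textbook parameterization; the paper's weighted variant is not reproduced here):

$$
L(\Delta) = b\left(e^{a\Delta} - a\Delta - 1\right), \qquad b > 0,\ a \neq 0,
$$

where $\Delta = \hat{\theta} - \theta$ is the estimation error. The sign of $a$ controls whether over-estimation or under-estimation is penalized more heavily, and for small $|a\Delta|$ the expansion $e^{a\Delta} \approx 1 + a\Delta + \tfrac{(a\Delta)^2}{2}$ gives $L(\Delta) \approx \tfrac{b a^2}{2}\Delta^2$, so LINEX reduces to a squared error loss in that regime.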
Regression discontinuity (RD) refers to a study design that exposes a definite group to the effect of a treatment. The uniqueness of this design lies in classifying the study population into two groups based on a specific threshold, or regression point, determined in advance according to the terms and requirements of the study. Thus, attention was focused on finding a solution to the issue of workers' retirement, and on proposing a scenario for granting an end-of-service reward to fill the gap (discontinuity point) if it had not been granted. The regression discontinuity method has been used to study and estimate the effect of the end-of-service reward on the cutoff of insured workers, as well as t
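The cutoff-based comparison described above can be sketched with a simple local linear RD estimate: fit a line on each side of the threshold and take the difference of the fits at the cutoff. The data, variable names, bandwidth, and jump size below are hypothetical illustrations, not figures from the study:

```python
import numpy as np

def rd_effect(x, y, cutoff, bandwidth):
    """Sharp RD estimate: difference between linear fits on either
    side of the cutoff, evaluated at the cutoff. Illustrative sketch."""
    left = (x >= cutoff - bandwidth) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    b_left = np.polyfit(x[left], y[left], 1)
    b_right = np.polyfit(x[right], y[right], 1)
    return np.polyval(b_right, cutoff) - np.polyval(b_left, cutoff)

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 2000)                   # running variable (hypothetical)
y = 2 + 0.5 * x + 3.0 * (x >= 5) + rng.normal(0, 1, 2000)  # true jump of 3 at x=5
print(round(rd_effect(x, y, cutoff=5, bandwidth=2), 2))    # close to 3
```

Restricting the fit to a bandwidth around the cutoff is what makes the comparison local: units just below and just above the threshold are assumed comparable, so the jump at the cutoff estimates the treatment effect.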