Energy saving is a common concern in IoT sensor networks because sensor nodes operate on their own limited batteries. Data transmission is the dominant cost, consuming much of a node's energy, while the energy used for data processing is considerably lower. Several energy-saving strategies and principles are therefore dedicated to reducing the transmission of data: minimizing data transfers in IoT sensor networks can conserve a considerable amount of energy. In this research, a Compression-Based Data Reduction (CBDR) technique is proposed which operates at the level of the IoT sensor nodes. CBDR comprises two compression stages: a lossy SAX quantization stage, which reduces the dynamic range of the sensor data readings, followed by a lossless LZW stage, which compresses the quantized output. Quantizing the sensor readings down to the SAX alphabet size shrinks the symbol range, which in turn yields greater compression from the LZW stage. A further improvement to the CBDR technique, Dynamic Transmission CBDR (DT-CBDR), is also proposed to decrease both the total amount of data sent to the gateway and the processing required. The OMNeT++ simulator, together with real sensory data gathered at Intel Lab, is used to evaluate the proposed technique. The simulation experiments illustrate that the proposed CBDR technique outperforms the other techniques in the literature.
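The two CBDR stages can be illustrated with a minimal Python sketch. This is not the paper's implementation: the alphabet size of 4, the standard Gaussian SAX breakpoints, the helper names, and the sample readings are all illustrative assumptions.

```python
import statistics

def sax_quantize(readings, alphabet="abcd"):
    """Lossy stage: z-normalise readings, then map each value to a SAX
    symbol using standard Gaussian breakpoints (alphabet size 4 here)."""
    mu = statistics.fmean(readings)
    sd = statistics.pstdev(readings) or 1.0
    breakpoints = [-0.6745, 0.0, 0.6745]  # N(0,1) quartiles for |A| = 4
    symbols = []
    for r in readings:
        z = (r - mu) / sd
        idx = sum(z > b for b in breakpoints)  # index of the stratum z falls in
        symbols.append(alphabet[idx])
    return "".join(symbols)

def lzw_compress(text):
    """Lossless stage: classic LZW over the quantized symbol string."""
    dictionary = {ch: i for i, ch in enumerate(sorted(set(text)))}
    w, codes = "", []
    for ch in text:
        wc = w + ch
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = len(dictionary)  # learn the new phrase
            w = ch
    if w:
        codes.append(dictionary[w])
    return codes

# hypothetical temperature readings from one sensor node
readings = [21.0, 21.2, 21.1, 25.3, 25.4, 25.2, 18.9, 19.0]
symbols = sax_quantize(readings)   # "bbbdddaa"
codes = lzw_compress(symbols)      # [1, 3, 2, 5, 0, 0]
print(symbols, codes)
```

The quantization collapses nearby readings onto the same symbol, which is exactly what lets LZW find long repeated phrases: eight readings become six dictionary codes here, and the gain grows with run length.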
In this study, an efficient compression system is introduced, based on the wavelet transform and two types of three-dimensional (3D) surface representation: Cubic Bezier Interpolation (CBI) and first-order polynomial approximation. Each is applied at a different scale of the image: CBI is applied over wide areas of the image to prune components that show large-scale variation, while the first-order polynomial is applied over small areas of the residue component (i.e., after subtracting the cubic Bezier surface from the image) to prune locally smooth components and obtain better compression gain. The produced cubic Bezier surface is then subtracted from the image signal to get the residue component. Then, t
So I present to the honorable reader what God Almighty has made easy for me concerning the rule (the principle that things are originally permissible), the matters that branch from it, and what relates to it.
This research is divided into an introduction, a preamble, three requirements, and a conclusion.
The preamble explains the meaning of the rule linguistically and terminologically, and defines the legal maxim and what relates to it. The first requirement explains the rule at hand and the wording related to its text, and whether it is a rule of usul (legal theory) or of fiqh (jurisprudence).
As for the second requirement, it addresses the scholars' differing opinions on whether the origin of things is permi
Images are important digital information used in many Internet of Things (IoT) applications such as transport, healthcare, agriculture, military, vehicles, wildlife, etc. An image also has important characteristics such as large size, strong correlation, and huge redundancy; therefore, encrypting it with a single-key Advanced Encryption Standard (AES) over IoT communication technologies leaves it vulnerable to many threats, since pixels with the same values are encrypted to identical ciphertext pixels whenever the same key is used. The contribution of this work is to increase the security of the transferred image. This paper proposes a multiple-key AES algorithm (MECCAES) to improve the security of the tran
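The leakage the abstract describes, and why multiple keys help, can be shown with a toy stand-in cipher. This sketch uses XOR in place of AES purely for illustration (AES itself is not reproduced here); the pixel values and keys are made up.

```python
def enc(pixel: int, key: int) -> int:
    """Toy XOR cipher standing in for AES on one pixel; illustration only."""
    return pixel ^ key

pixels = [200, 200, 55, 200]  # an image region with repeated pixel values

# Single key: identical pixels encrypt to identical ciphertexts,
# so the repetition pattern of the image leaks to an observer.
single = [enc(p, 17) for p in pixels]

# Multiple keys cycled across pixel positions (the idea behind a
# multiple-key scheme): repeated pixels fall under different keys.
keys = [17, 91, 140, 33]
multi = [enc(p, keys[i % len(keys)]) for i, p in enumerate(pixels)]

print(single)  # [217, 217, 38, 217] -> pattern visible
print(multi)   # [217, 147, 187, 233] -> pattern hidden
```

With one key, positions 0, 1, and 3 produce the same ciphertext, mirroring the plaintext structure; with a key per position, the repeats disappear from the ciphertext.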
This paper introduces a non-conventional approach with multi-dimensional random sampling to solve a cocaine abuse model with statistical probability. The mean Latin hypercube finite difference (MLHFD) method is proposed for the first time via hybrid integration of the classical numerical finite difference (FD) formula with the Latin hypercube sampling (LHS) technique, creating a random distribution for the model parameters, which are dependent on time t. The LHS technique gives the MLHFD method the advantage of producing fast variation of the parameters' values over a number of multidimensional simulations (100, 1000 and 5000). The generated Latin hypercube sample, which is random or non-deterministic in nature, is further integrated with the FD method t
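Latin hypercube sampling itself is standard and can be sketched briefly. This is a generic stdlib implementation, not the paper's code; the parameter bounds and sample count below are illustrative.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sampling: each parameter's range is split into
    n_samples equal strata, one point is drawn inside each stratum, and
    the strata are shuffled independently per dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        # one random point inside each of the n strata of [lo, hi]
        points = [lo + (hi - lo) * (k + rng.random()) / n_samples
                  for k in range(n_samples)]
        rng.shuffle(points)  # decorrelate strata across dimensions
        for i in range(n_samples):
            samples[i][d] = points[i]
    return samples

# e.g. two uncertain model parameters sampled 100 times
draws = latin_hypercube(100, [(0.1, 0.9), (0.01, 0.05)])
print(len(draws), draws[0])
```

Because every stratum of every parameter is hit exactly once, far fewer simulations are needed to cover the parameter space than with plain Monte Carlo draws, which is the advantage the abstract attributes to LHS.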
Data transmission in an orthogonal frequency division multiplexing (OFDM) system needs source and channel coding, and the transmitted data suffer from the adverse effect of a large peak-to-average power ratio (PAPR). Source and channel codes can be combined using different joint codes; the variable length error correcting code (VLEC) is one of these. VLEC is used in a MATLAB simulation of image transmission in an OFDM system; different VLEC code lengths are used and compared, showing that the PAPR decreases as the code length increases. Several techniques for PAPR reduction are applied and compared. The PAPR of the OFDM signal is measured for image coding with VLEC and compared with images coded by Huffman source coding and Bose-
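The PAPR metric the abstract measures has a simple definition: peak instantaneous power of the time-domain OFDM symbol over its mean power. A minimal Python sketch (a direct IDFT, not the paper's MATLAB simulation; the 8-subcarrier example is illustrative):

```python
import cmath
import math

def papr_db(symbols):
    """PAPR of one OFDM symbol: transform the subcarrier symbols to the
    time domain with an IDFT, then take peak power over mean power (dB)."""
    N = len(symbols)
    time = [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N
            for n in range(N)]
    powers = [abs(x) ** 2 for x in time]
    return 10 * math.log10(max(powers) / (sum(powers) / N))

# Worst case: identical symbols on all subcarriers add coherently,
# giving PAPR = N, i.e. 10*log10(8) ~ 9.03 dB for 8 subcarriers.
worst = papr_db([1] * 8)
print(round(worst, 2))
```

Coding schemes such as VLEC reduce PAPR by avoiding subcarrier symbol patterns that add coherently like the worst case above.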
Photonic Crystal Fiber Interferometers (PCFIs) are widely used for sensing applications. This work presents the fabrication and characterization of a relative humidity sensor based on a polymer-infiltrated photonic crystal fiber operating in Mach-Zehnder Interferometer (MZI) reflection mode. Fabricating the sensor only involves splicing and cleaving Photonic Crystal Fiber (PCF) with Single Mode Fiber (SMF): a stub of (LMA-10) PCF is spliced to SMF (Corning-28) at the splice regions. The PCFI sensor operation is based on the adsorption and desorption of water vapour at the silica-air interface within the PCF. The sensor shows a high sensitivity to RH variations from (27% RH - 95% RH), with a change in its reflected powe
Different MLP ANN architectures have been trained by BP and used to analyze Landsat TM images. Two training approaches have been applied: an ordinary approach (one hidden layer M-H1-L and two hidden layers M-H1-H2-L) and a one-against-all strategy (one hidden layer (M-H1-1)xL and two hidden layers (M-H1-H2-1)xL). Classification accuracy of up to 90% has been achieved using the one-against-all strategy with the two-hidden-layer architecture. The performance of the one-against-all approach is slightly better than that of the ordinary approach.
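The one-against-all decision rule in the (M-H1-1)xL notation means one single-output network per class, with the most strongly responding network winning. A minimal sketch of that combination step, with toy scoring functions standing in for trained networks (the class names and score functions are made up for illustration):

```python
def one_against_all_predict(x, binary_models):
    """One-against-all decision: each class label has its own binary
    network (M-H1-1 architecture); the class whose network responds
    most strongly to the input wins."""
    scores = {label: model(x) for label, model in binary_models.items()}
    return max(scores, key=scores.get)

# toy stand-ins for L trained single-output networks
models = {
    "water":  lambda x: 1.0 - abs(x[0] - 0.1),
    "forest": lambda x: 1.0 - abs(x[0] - 0.5),
    "urban":  lambda x: 1.0 - abs(x[0] - 0.9),
}
print(one_against_all_predict([0.48], models))  # forest
```

Each binary network only has to separate one class from the rest, a simpler problem than the ordinary M-H1-L network's joint L-way separation, which is one plausible reason for the slight accuracy edge reported.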
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional data mining and machine learning algorithms do not scale well with data size. Mining and learning from big data need time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple data sources. Data are aggregated at multiple resolutions, and the resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining an
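The idea of incremental aggregation at multiple resolutions can be sketched as follows. This is a minimal illustration of the concept, not the authors' structure: the bucket scheme, the (count, sum) summaries, and the class name are all assumptions.

```python
class MultiResolutionAggregate:
    """Summarise a stream at several resolutions: level r groups the
    stream into buckets of base**r consecutive readings, each bucket
    holding (count, sum). Built once, updated incrementally per value."""

    def __init__(self, levels=3, base=10):
        self.base = base
        self.levels = [dict() for _ in range(levels)]  # bucket -> [count, sum]
        self.n = 0  # number of readings seen so far

    def add(self, value):
        for r, buckets in enumerate(self.levels):
            b = self.n // (self.base ** r)  # bucket index at resolution r
            c = buckets.setdefault(b, [0, 0.0])
            c[0] += 1
            c[1] += value
        self.n += 1

    def means(self, level):
        """Coarser level: fewer buckets (cheaper queries), less detail."""
        return {b: s / c for b, (c, s) in sorted(self.levels[level].items())}

agg = MultiResolutionAggregate(levels=2, base=4)
for v in [1, 1, 3, 3, 5, 5, 7, 7]:
    agg.add(v)
print(agg.means(0))  # per-reading buckets: full detail
print(agg.means(1))  # buckets of 4 readings: {0: 2.0, 1: 6.0}
```

A mining algorithm reading level 1 touches two summaries instead of eight raw values: the efficiency/accuracy trade-off the abstract describes, with each incoming value folded in incrementally rather than requiring a rebuild.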