Wireless Multimedia Sensor Networks (WMSNs) are networks of wirelessly interconnected sensor nodes equipped with multimedia devices, such as cameras and microphones. A WMSN can therefore transmit multimedia data, such as video and audio streams, still images, and scalar data from the environment. Most WMSN applications require the delivery of multimedia information with a certain level of Quality of Service (QoS). This is a challenging task because multimedia applications typically produce huge volumes of data requiring high transmission rates and extensive processing; the resulting high data rates usually lead to congestion, which in turn degrades QoS. To address this challenge, this paper proposes the Neural Control Exponential Weight of Priority Based Rate Control (NEWPBRC) algorithm for adjusting node transmission rates and alleviating congestion in WMSNs. The proposed algorithm combines a Neural Network Controller (NC) with the Exponential Weight of Priority Based Rate Control (EWPBRC) algorithm. The NC controller computes an appropriate weight parameter λ for the Exponential Weight (EW) algorithm to estimate the output transmission rate of the sink node; an appropriate transmission rate is then assigned to each child node on the basis of its priority. The proposed algorithm supports four traffic classes: Real-Time traffic (RT class), high-priority Non-Real-Time traffic (NRT1 class), medium-priority Non-Real-Time traffic (NRT2 class), and low-priority Non-Real-Time traffic (NRT3 class). Simulation results show that the proposed algorithm effectively reduces congestion and improves the transmission rate. Furthermore, it enhances QoS by achieving better throughput and by reducing transmission delay and loss probability.
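For illustration, the sketch below shows how an exponentially weighted estimate of the sink's output rate might be split among child nodes by traffic-class priority. All function names, the fixed λ value, and the per-class weights are assumptions of this sketch; in NEWPBRC itself the neural controller supplies λ adaptively.

```python
# Illustrative sketch of exponentially weighted rate estimation and
# priority-based rate allocation, in the spirit of EWPBRC. The fixed
# lambda and the class weights below are assumptions for this example.

def estimate_output_rate(prev_estimate, measured_rate, lam=0.3):
    """Exponentially weighted moving average of the sink's service rate."""
    return lam * measured_rate + (1.0 - lam) * prev_estimate

# Hypothetical priority weights for the four traffic classes.
CLASS_WEIGHTS = {"RT": 4.0, "NRT1": 3.0, "NRT2": 2.0, "NRT3": 1.0}

def allocate_rates(parent_rate, children):
    """Split the parent's estimated output rate among child nodes
    in proportion to the priority weight of their traffic class."""
    total = sum(CLASS_WEIGHTS[cls] for cls in children.values())
    return {node: parent_rate * CLASS_WEIGHTS[cls] / total
            for node, cls in children.items()}

rate = estimate_output_rate(prev_estimate=500.0, measured_rate=620.0)
print(allocate_rates(rate, {"n1": "RT", "n2": "NRT1", "n3": "NRT3"}))
```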
Image enhancement is one of the most significant topics in digital image processing. The basic problem in enhancement is how to remove noise and improve the details of a digital image. This research proposes a method for digital image de-noising and detail sharpening. The proposed approach uses fuzzy logic to process every pixel in the image and then decides whether the pixel is noisy or needs further processing for highlighting. This is done by examining, with a fuzzy algorithm, the pixel's degree of association with its neighboring elements. The proposed de-noising approach was evaluated on standard images after corrupting them with impulse noise.
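As a rough illustration of the pixel-wise fuzzy decision described above, the following sketch scores each pixel's disagreement with its 3x3 neighborhood and filters only the pixels judged noisy. The triangular membership, the spread parameter, and the threshold are assumptions of this sketch, not the paper's actual membership functions.

```python
import numpy as np

def fuzzy_noise_degree(window, spread=40.0):
    """Return a fuzzy membership in [0, 1] expressing how strongly the
    centre pixel disagrees with its 3x3 neighbourhood (large deviation
    from the neighbourhood median -> 'noisy'). The triangular membership
    and the spread value are assumptions for this sketch."""
    centre = window[1, 1]
    neighbours = np.delete(window.flatten(), 4)
    deviation = abs(float(centre) - float(np.median(neighbours)))
    return min(deviation / spread, 1.0)

def denoise(image, threshold=0.5):
    """Replace a pixel by the neighbourhood median only when the fuzzy
    noise degree exceeds the threshold; otherwise keep it untouched."""
    out = image.copy()
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            win = image[y - 1:y + 2, x - 1:x + 2]
            if fuzzy_noise_degree(win) > threshold:
                out[y, x] = np.median(np.delete(win.flatten(), 4))
    return out
```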
This work aims to develop a secure lightweight cipher algorithm for constrained devices. Secure communication among constrained devices is a critical issue during data transmission from client to server devices. Lightweight cipher algorithms are a security solution for constrained devices because they require only low-cost computation and small memory. Most lightweight algorithms, however, face a trade-off between complexity and speed when trying to produce a robust cipher. The PRESENT cipher has been successfully adopted as a lightweight cryptographic algorithm, outperforming other ciphers by requiring only low-complexity computational operations. The mathematical model of …
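For context, here is a minimal sketch of one PRESENT encryption round (round-key addition, the 4-bit S-box layer, and the bit permutation), following the commonly published specification; the 31-round loop and the key schedule are omitted.

```python
# PRESENT's 4-bit S-box, as given in the published specification.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    """Apply the 4-bit S-box to each of the 16 nibbles of the 64-bit state."""
    out = 0
    for i in range(16):
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    return out

def p_layer(state):
    """PRESENT bit permutation: bit i moves to (16 * i) mod 63, bit 63 fixed."""
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out

def present_round(state, round_key):
    """One encryption round: add round key, substitute, permute."""
    return p_layer(sbox_layer(state ^ round_key))
```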
Tremendous efforts have been exerted to understand first language acquisition in order to facilitate second language learning. The problem lies in the difficulty of mastering English and of adopting a theory that helps students overcome the difficulties they face. This study aims to apply Tomasello's theory of language mastery through usage. It assumes that adults can learn faster than children and can learn the language independently, away from academic instruction. Tomasello (2003) studied the stages of children's language acquisition and developed his theory accordingly. Several studies (Ghalebi and Sadighi, 2015; Arvidsson, 2019; Munoz, 2019; Verspoor and Hong, 2013) have used this theory when examining language acquisition. Thus, …
Semantic segmentation and scene understanding are demanding tasks not only in computer vision but also in earth-science research. Semantic segmentation decomposes compound architectures into individual elements: the most common objects in outdoor or indoor civil scenes must be classified and then enriched with semantic information about each object. It is a method for automatically labeling and clustering point clouds. Classifying three-dimensional natural scenes requires a point-cloud dataset as the input data representation, and working with 3-D data raises several challenges, such as the small number, limited resolution, and limited accuracy of available three-dimensional datasets. Deep learning is now the po…
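Purely to illustrate the one-label-per-point output that point-cloud segmentation produces, the toy sketch below clusters a synthetic cloud with k-means; a real pipeline would use a deep model trained on an annotated 3-D dataset, and every name here is a placeholder.

```python
import numpy as np

def label_point_cloud(points, k=3, iters=20, seed=0):
    """Toy stand-in for semantic segmentation: cluster an (N, 3) point
    cloud into k groups with k-means and return one label per point.
    A learned model (e.g. a PointNet-style network) would replace this
    in practice; the sketch only shows the output format."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = points[labels == c].mean(axis=0)
    return labels

pts = np.vstack([np.random.randn(100, 3) + off for off in (0, 5, 10)])
print(np.bincount(label_point_cloud(pts)))  # points per cluster
```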
In this paper, a fast lossless compression method for medical images is introduced. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error left by the polynomial approximation. Finally, Huffman coding is applied as a last stage to encode both the polynomial coefficients and the run-length codes. The test results indicate that the suggested method achieves promising performance.
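A minimal sketch of the approximation-plus-residue idea follows: fit a low-order polynomial to a block, take the integer residue, and run-length encode it, leaving the coefficients and runs for a Huffman stage. The 1-D blocks, the polynomial order, and the scan order are assumptions of this sketch, not the paper's exact pipeline.

```python
import numpy as np

def compress_block(block, order=1):
    """Fit a low-order polynomial to a 1-D block of pixel values and
    run-length encode the integer residue. The coefficients plus the
    (value, run) pairs are what a later Huffman stage would entropy-code;
    reconstruction is lossless because block = residue + rounded fit."""
    x = np.arange(len(block))
    coeffs = np.polyfit(x, block, order)
    residue = block - np.rint(np.polyval(coeffs, x)).astype(int)
    rle, run = [], 1
    for prev, cur in zip(residue, residue[1:]):
        if cur == prev:
            run += 1
        else:
            rle.append((int(prev), run))
            run = 1
    rle.append((int(residue[-1]), run))
    return coeffs, rle

block = np.array([10, 12, 14, 16, 18, 20, 20, 20])
print(compress_block(block))
```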
Image pattern classification is a significant step in image and video processing. Although various image pattern algorithms proposed so far achieve adequate classification, attaining higher accuracy while reducing computation time remains challenging. A robust image pattern classification method is essential to obtain the desired accuracy; such a method should accurately classify image blocks into plain, edge, and texture (PET) classes using an efficient feature extraction mechanism. Moreover, most existing studies evaluate their methods on specific orthogonal moments, which limits the understanding of their potential application to other Discrete Orthogonal Moments (DOMs). The …
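To make the PET idea concrete, here is a hedged sketch that labels a block as plain, edge, or texture from simple variance and gradient-direction cues; the paper's own features come from discrete orthogonal moments, so these stand-in features and both thresholds are illustrative assumptions only.

```python
import numpy as np

def classify_block(block, plain_thr=25.0, edge_thr=0.6):
    """Label an image block as plain, edge, or texture using simple
    stand-in features (variance and gradient-direction dominance);
    the thresholds are illustrative, not taken from the paper."""
    if block.var() < plain_thr:  # little intensity activity -> plain
        return "plain"
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    # An edge block concentrates its gradient energy along one dominant
    # direction, whereas texture spreads it across many directions.
    angles = np.arctan2(gy, gx)[mag > mag.mean()]
    hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
    dominance = hist.max() / max(hist.sum(), 1)
    return "edge" if dominance > edge_thr else "texture"
```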
The TSFS (Transposition, Substitution, Folding, and Shifting) algorithm, used as an encryption algorithm for database security, has limitations in its character set and in the number of keys used. The proposed cryptosystem enhances the phases of the TSFS encryption algorithm by computing the determinant of the key matrices, which affects how the algorithm's phases are carried out. These changes give the database high security against different types of attacks by achieving both confusion and diffusion.
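A plausible reading of the determinant enhancement is an invertibility check on the key matrices, so that each key-driven step can be undone at decryption; the sketch below tests this condition modulo a 256-symbol alphabet, which is an assumption of this example.

```python
import math
import numpy as np

def key_matrix_valid(key, modulus=256):
    """Check that a key matrix is usable: its determinant (mod `modulus`)
    must be coprime with the modulus so the matrix is invertible and the
    transformation can be reversed during decryption. The 256-symbol
    alphabet is an assumption of this sketch."""
    det = int(round(np.linalg.det(np.array(key, dtype=float))))
    return math.gcd(det % modulus, modulus) == 1

# A 3x3 key whose determinant is odd is invertible mod 256.
print(key_matrix_valid([[2, 4, 5], [9, 2, 1], [3, 17, 7]]))  # True
```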
This research includes the synthesis, characterization, and investigation of the liquid crystalline properties of new rod-shaped liquid crystal compounds, 1,4-phenylene bis(2-(5-(4-alkoxybenzylidene)-2,4-dioxothiazolidin-3-yl)acetate). Thiazolidine-2,4-dione (I) was prepared by reacting thiourea with chloroacetic acid and water in the presence of concentrated hydrochloric acid. The n-alkoxybenzaldehydes (II)n were synthesized by reacting 4-hydroxybenzaldehyde with n-alkyl bromides and potassium hydroxide, and compound (I) was then reacted with (II)n in the presence of piperidine to produce compounds (III)n. Also, hydroquinone was converted into the corresponding compound (IV) by refluxing with two moles of chloroacetyl chloride in pyridine…