Load shedding schemes have been extensively implemented as a fast remedy for supply-demand imbalance. It is therefore crucial to investigate supply-demand balancing in order to protect the network from collapse and to sustain stability as far as possible, even though shedding itself is mostly undesirable. One way to minimize the amount of load shedding is to integrate renewable energy resources, such as wind power, into electric power generation; this can contribute significantly to minimizing power cuts thanks to its ability to improve the stability of the electric grid. In this paper, a method is proposed for shedding load based on demand priority while incorporating the generated wind power. Higher-priority demands are fed from a reliable wind energy resource to protect them from shedding under contingency conditions, such as heavy overloading, through real-time monitoring of the network accompanied by power reduction for lower-priority demands. The simulation results demonstrate the effectiveness and practicality of the proposed method, paving the way for possible applications in power systems.
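As a rough illustration of the priority idea described above, the following is a minimal Python sketch, not the paper's actual scheme: the least critical loads are shed first until total demand fits the available conventional-plus-wind supply. All load names, priorities, and megawatt figures are hypothetical.

```python
# Hypothetical sketch of priority-based load shedding; illustrative only.

def shed_loads(loads, available_power):
    """Shed lowest-priority loads until demand fits supply.

    loads: list of (name, demand_MW, priority); higher priority = more critical.
    available_power: total generation (conventional + wind), in MW.
    Returns the list of load names to shed.
    """
    total_demand = sum(d for _, d, _ in loads)
    deficit = total_demand - available_power
    shed = []
    # Consider the least critical loads first.
    for name, demand, _ in sorted(loads, key=lambda l: l[2]):
        if deficit <= 0:
            break
        shed.append(name)
        deficit -= demand
    return shed

# Example: 62 MW of supply (including wind) against 75 MW of demand.
loads = [("hospital", 20, 3), ("industry", 30, 2), ("lighting", 25, 1)]
print(shed_loads(loads, 62))  # sheds the priority-1 load "lighting" only
```

In this toy form the wind contribution simply enters `available_power`; the paper's method additionally monitors the network in real time and reserves the wind resource for the higher-priority demands.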
In this paper, an algorithm is introduced through which more data can be embedded than with the regular spatial-domain methods. The secret data are compressed using Huffman coding, and the compressed data are then embedded using a Laplacian sharpening method.
Laplace filters are used to determine effective hiding places; based on a threshold value, the positions with the highest responses from these filters are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding the data in the places with the strongest edge values, where changes are less noticeable.
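A minimal sketch of the location-selection step, assuming a standard 3x3 Laplacian kernel and a simple magnitude threshold (the abstract does not specify the exact kernel or threshold used):

```python
import numpy as np

# Illustrative sketch (not the authors' exact pipeline): find high-response
# Laplacian positions and keep those above a threshold as candidate
# embedding sites for the Huffman-compressed payload.

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_response(img):
    """'Valid' 2-D convolution of img with the 3x3 Laplacian kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * LAPLACIAN)
    return np.abs(out)

def embedding_sites(img, threshold):
    """Return (row, col) positions whose Laplacian magnitude exceeds threshold."""
    resp = laplacian_response(img.astype(float))
    ys, xs = np.nonzero(resp > threshold)
    # Offset by 1 to map back to coordinates in the full image.
    return list(zip(ys + 1, xs + 1))

# A flat image with one bright pixel: only its neighbourhood responds.
img = np.zeros((5, 5)); img[2, 2] = 100
print(embedding_sites(img, 50))  # the pixel (2, 2) and its four neighbours
```

The compressed bitstream would then be written into these high-edge positions, where distortion is least noticeable.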
Image pattern classification is considered a significant step in image and video processing. Although various image pattern algorithms proposed so far achieve adequate classification, attaining higher accuracy while reducing computation time remains challenging to date. A robust image pattern classification method is essential to obtain the desired accuracy; such a method should accurately classify image blocks into plain, edge, and texture (PET) classes using an efficient feature extraction mechanism. Moreover, most existing studies to date evaluate their methods on specific orthogonal moments, which limits the understanding of their potential application to various Discrete Orthogonal Moments (DOMs).
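The abstract's classifier is built on orthogonal moments; purely as an illustration of the plain/edge/texture (PET) idea, here is a naive variance-and-gradient heuristic (thresholds are arbitrary, not from the paper):

```python
import numpy as np

# Heuristic sketch only: low variance -> plain; a single dominant gradient
# direction -> edge; otherwise -> texture.

def classify_block(block, var_thresh=25.0, dir_thresh=0.7):
    b = block.astype(float)
    if b.var() < var_thresh:
        return "plain"
    gy, gx = np.gradient(b)               # gradients along rows, columns
    ey, ex = np.sum(gy**2), np.sum(gx**2)
    # A dominant gradient direction suggests an edge; otherwise texture.
    if max(ex, ey) / (ex + ey + 1e-9) > dir_thresh:
        return "edge"
    return "texture"

flat = np.full((8, 8), 128)
edge = np.hstack([np.zeros((8, 4)), np.full((8, 4), 255)])
texture = (np.indices((8, 8)).sum(0) % 2) * 255   # checkerboard
print(classify_block(flat), classify_block(edge), classify_block(texture))
```

A moment-based method replaces these ad-hoc statistics with DOM coefficients, which is what gives the approach its accuracy and generality.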
A series of liquid crystals comprising heterocyclic dihydropyrrole and 1,2,3-triazole rings [VII]-[X] was synthesized in several steps, starting from the reaction of 3,3'-dimethyl-[1,1'-biphenyl]-4,4'-diamine with chloroacetyl chloride in a mixture of DMF and TEA to give compound [I]. Compound [I] was then reacted with malononitrile in 1,4-dioxane and TEA to produce compound [II]. The first step was repeated with compound [II], which reacted with chloroacetyl chloride in a DMF/TEA mixture to give compound [III]. This compound reacted with sodium azide, in the presence of sodium chloride with DMF as solvent, to produce compound [IV], which reacted with acrylic acid by a 1,3-dipolar reaction in sol…
Wireless channels are typically much noisier than wired links and are subject to fading due to multipath propagation, which results in intersymbol interference (ISI) and hence a high error rate. Adaptive modulation is a powerful technique for improving the tradeoff between spectral efficiency and Bit Error Rate (BER). In order to adjust the transmission rate, channel state information (CSI) is required at the transmitter side.
In this paper, the performance enhancement obtained by using linear prediction along with channel estimation to track channel variations, combined with adaptive modulation, is examined. The simulation results show that channel estimation is sufficient for low Doppler frequency shifts (<30 Hz), while channel prediction is much better suited at…
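To illustrate the prediction-plus-adaptation idea (not the paper's exact scheme), the sketch below fits a least-squares linear predictor to past channel gains and maps the predicted SNR to a constellation. The AR order, the SNR thresholds, and the nominal 20 dB operating point are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: a one-step linear predictor of the channel gain
# (least-squares AR fit over past samples) feeding SNR-threshold based
# modulation selection.

def ar_predict(history, order=3):
    """Predict the next channel sample from `order` past samples (LS fit)."""
    h = np.asarray(history, dtype=float)
    X = np.array([h[i:i + order] for i in range(len(h) - order)])
    y = h[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(h[-order:] @ coeffs)

def pick_modulation(snr_db):
    """Map predicted SNR to a constellation (example thresholds)."""
    if snr_db >= 20: return "64-QAM"
    if snr_db >= 14: return "16-QAM"
    if snr_db >= 7:  return "QPSK"
    return "BPSK"

# A slowly fading gain: the linear predictor tracks the downward trend.
gain = list(np.linspace(1.0, 0.7, 20))
g_hat = ar_predict(gain)
snr_db = 10 * np.log10(g_hat**2 * 100)   # assumed 20 dB nominal SNR at g = 1
print(pick_modulation(snr_db))
```

In a real link the thresholds would be derived from the target BER of each constellation, and the predictor horizon from the Doppler spread.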
Botnet detection poses a challenging problem in numerous fields such as law and order, cybersecurity, finance, healthcare, and so on. The term botnet refers to a group of compromised Internet-connected devices controlled by cybercriminals to launch coordinated attacks and carry out various malicious activities. While botnets evolve continually against the countermeasures proposed by both network-based and host-based detection techniques, conventional techniques fail to attain sufficient protection against botnet threats. Thus, machine learning approaches have been established for detecting and classifying botnets for cybersecurity. This article presents a novel dragonfly algorithm with multi-class support vector machine enabled botnet…
In this paper, a method is proposed to increase the compression ratio of color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions containing important information, the compression ratio is reduced to prevent loss of information, while in smooth regions with no important information a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
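A toy version of the block-importance idea, with block variance standing in for the importance measure and a variable quantization step standing in for the actual variable-ratio codec (all thresholds and steps are illustrative, not the paper's):

```python
import numpy as np

# Simplified sketch: each 8x8 block is quantized with a step chosen by
# its "importance" (variance), so smooth regions tolerate coarse steps
# while detailed regions keep a fine step.

def quant_step(block, var_thresh=100.0):
    return 4 if block.var() > var_thresh else 32   # fine vs coarse

def compress_decompress(img, bs=8):
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            blk = img[i:i+bs, j:j+bs].astype(float)
            q = quant_step(blk)
            out[i:i+bs, j:j+bs] = np.round(blk / q) * q
    return out

rng = np.random.default_rng(0)
smooth = np.full((8, 8), 120.0)
detail = rng.integers(0, 256, (8, 8)).astype(float)
img = np.hstack([smooth, detail])
rec = compress_decompress(img)
# The detailed (important) block is reconstructed with much smaller error.
err_smooth = np.abs(rec[:, :8] - smooth).max()
err_detail = np.abs(rec[:, 8:] - detail).max()
print(err_smooth, err_detail)
```

The coarse step in the smooth block costs little perceptually, which is exactly the tradeoff the variable-ratio scheme exploits.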
Features are descriptions of image content, such as corners, blobs, or edges. Corners are among the most important features for describing an image, so many algorithms exist to detect them, such as Harris, FAST, and SUSAN. Harris is an efficient and accurate corner detection method; it is rotation invariant but not scale invariant. This paper presents an efficient Harris corner detector that is invariant to scale, an improvement achieved by using a Gaussian function with different scales. The experimental results illustrate that using the Gaussian function at multiple scales is very useful for dealing with this weakness of Harris.
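A sketch of Harris responses evaluated at several Gaussian scales, taking the maximum across scales; the parameter k and the scale list are illustrative choices rather than the authors' settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

# Sketch: Harris corner response computed at several Gaussian scales.

def harris_response(img, sigma, k=0.04):
    smoothed = gaussian_filter(img.astype(float), sigma)
    ix = sobel(smoothed, axis=1)          # horizontal gradient
    iy = sobel(smoothed, axis=0)          # vertical gradient
    # Structure-tensor entries, smoothed at the same scale.
    sxx = gaussian_filter(ix * ix, sigma)
    syy = gaussian_filter(iy * iy, sigma)
    sxy = gaussian_filter(ix * iy, sigma)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2

def multiscale_corner_strength(img, sigmas=(1.0, 2.0, 4.0)):
    """Max Harris response over scales, giving some scale robustness."""
    return np.max([harris_response(img, s) for s in sigmas], axis=0)

# A white square on black: corners respond more than edges or flat areas.
img = np.zeros((32, 32)); img[8:24, 8:24] = 255
R = multiscale_corner_strength(img)
corner, edge_mid, flat = R[8, 8], R[8, 16], R[2, 2]
print(corner > edge_mid, corner > flat)
```

Taking the per-pixel maximum over scales is one simple way to make the response less sensitive to object size; scale-space approaches such as Harris-Laplace refine this by also selecting a characteristic scale.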
The main reason for the emergence of the term deepfake (deep learning and fake) is the evolution of artificial intelligence techniques, especially deep learning. Deep learning algorithms, which solve problems automatically when given large sets of data, are used to swap faces in digital media and create fake media with a realistic appearance. To increase the accuracy of distinguishing a real video from a fake one, a new model has been developed based on deep learning and noise residuals. Using Steganalysis Rich Model (SRM) filters, a low-level noise map is gathered and used as input to a light Convolutional Neural Network (CNN) that classifies a real face versus a fake one. The results of our work show that the training accuracy of the CNN model…
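The noise-residual step can be illustrated with one well-known SRM high-pass kernel (the 5x5 "KV" filter); whether this is the exact filter bank used in the paper is an assumption, and the CNN itself is omitted:

```python
import numpy as np

# Sketch of the residual step: a high-pass SRM kernel suppresses image
# content so that low-level noise remains; the residual map would then
# be the CNN's input.

KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

def srm_residual(img):
    """'Valid' convolution of img with the KV kernel."""
    h, w = img.shape
    out = np.zeros((h - 4, w - 4))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+5, j:j+5] * KV)
    return out

# Smooth content (a linear ramp) is annihilated; a noise spike is not.
ramp = np.tile(np.arange(10, dtype=float), (10, 1))
print(np.allclose(srm_residual(ramp), 0))          # ramps leave no residual
noisy = ramp.copy(); noisy[5, 5] += 8
print(np.abs(srm_residual(noisy)).max() > 0)       # the spike survives
```

Because face-swapping pipelines disturb these low-level noise statistics, the residual map is a more discriminative CNN input than raw pixels.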