Krawtchouk polynomials (KPs) and their moments are promising tools for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification. The main challenge in existing KP recurrence algorithms is the numerical error that occurs when computing the coefficients for large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of high-order KPs. In particular, the paper develops a new algorithm and presents a new mathematical model for computing the initial value of the KP parameter. In addition, a new diagonal recurrence relation, derived from the existing n-direction and x-direction recurrence algorithms, is introduced and used in the proposed algorithm. The diagonal and existing recurrence relations are then exploited together to compute the KP coefficients: the KP plane is divided into four partitions, the coefficients are computed for one partition, and the symmetry relations are exploited to obtain the coefficients in the remaining partitions. The performance of the proposed recurrence algorithm was evaluated through comparisons with state-of-the-art works in terms of reconstruction error, polynomial size, and computation cost. The results indicate that the proposed algorithm is reliable and computes fewer coefficients than the existing algorithms across wide ranges of the parameter p and polynomial size N. They also show that the improvement ratio in the number of computed coefficients ranges from 18.64% to 81.55% compared with the existing algorithms. Moreover, the proposed algorithm can generate polynomials of an order ∼8.5 times larger than those generated by state-of-the-art algorithms.
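For orientation, the sketch below shows the classical n-direction three-term recurrence for the (unweighted) Krawtchouk polynomials K_n(x; p, N) = 2F1(-n, -x; -N; 1/p), which the paper's diagonal relation is derived from; it is a minimal illustration, not the paper's proposed algorithm, and omits the weighting, partitioning, and symmetry steps described above. The function name is illustrative.

```python
import numpy as np

def krawtchouk_recurrence(N, p, n_max):
    # Minimal sketch: n-direction three-term recurrence for the unweighted
    # Krawtchouk polynomials K_n(x; p, N), evaluated at x = 0..N.
    x = np.arange(N + 1, dtype=float)
    K = np.zeros((n_max + 1, N + 1))
    K[0] = 1.0                          # K_0(x) = 1
    if n_max >= 1:
        K[1] = 1.0 - x / (p * N)        # K_1(x) = 1 - x/(pN)
    for n in range(1, n_max):
        # p(N-n) K_{n+1}(x) = (p(N-n) + n(1-p) - x) K_n(x) - n(1-p) K_{n-1}(x)
        K[n + 1] = ((p * (N - n) + n * (1 - p) - x) * K[n]
                    - n * (1 - p) * K[n - 1]) / (p * (N - n))
    return K
```

In double precision, the unweighted values in this naive form grow rapidly with N and degrade for p far from 0.5, which illustrates the numerical-error problem the weighted, partitioned scheme proposed in the paper is designed to address.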
JPEG is the most popular image compression and encoding technique and is widely used in many applications (images, videos, and 3D animations). Researchers are therefore keen to develop this widely deployed technique so that it compresses images at higher compression ratios while preserving image quality as much as possible. To this end, this paper introduces an improved JPEG scheme based on a fast DCT that removes most of the zero coefficients while keeping their positions within the transformed block. Additionally, arithmetic coding is applied instead of Huffman coding. The results show that the proposed improved JPEG algorithm yields better image quality than the traditional JPEG technique.
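A minimal sketch of the zero-removal idea described above: after the DCT and quantization, the block is zig-zag scanned and only the nonzero coefficients are kept together with their positions. The helper names and the (position, value) pairing are illustrative assumptions, not the paper's exact stream format.

```python
import numpy as np

def zigzag_order(n=8):
    # Visit order of the classic JPEG zig-zag scan over an n x n block:
    # anti-diagonals, alternating direction.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def keep_nonzeros(q_block):
    # Store only the nonzero quantized DCT coefficients with their zig-zag
    # positions; runs of zeros are dropped entirely. The resulting pairs
    # would then be fed to the arithmetic coder instead of a Huffman coder.
    order = zigzag_order(len(q_block))
    return [(i, int(q_block[r][c]))
            for i, (r, c) in enumerate(order) if q_block[r][c] != 0]

# Example: a sparse quantized block keeps only 3 of its 64 entries.
block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0] = 120, -7, 3
print(keep_nonzeros(block))   # [(0, 120), (1, -7), (2, 3)]
```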
The Internet provides vital communication for millions of individuals and is increasingly used as a commerce tool; thus, security is of high importance for securing communications and protecting vital information. Cryptography algorithms are essential in the field of security. Brute-force attacks are the major attacks on the Data Encryption Standard, which is the main reason an improved structure of the Data Encryption Standard algorithm is needed. This paper proposes a new, improved structure for the Data Encryption Standard to make it secure and immune to attacks. The improved structure was accomplished using the standard Data Encryption Standard with a new method of two-key generation.
Visible-light photodetectors based on Fe2O3 were successfully fabricated by a chemical precipitation technique. The films were deposited on glass substrates and Si wafers with different Cl dopant concentrations of (0, 2, 4, 6)%. X-ray diffraction analysis showed an enhancement in intensity, with a preferred orientation along the (110) plane. The optical measurements indicated direct allowed transitions, with band gap energies decreasing as the doping ratio varied. The current–voltage characteristics of the Fe2O3/p-Si heterojunction revealed good rectifying performance in the dark, which was amplified with the intensity of the incident light. Good photodetector properties were also obtained, with an enhancement in responsivity at wavelengths between 400 nm and 470 nm.
The esterification of oleic acid with 2-ethylhexanol in the presence of sulfuric acid as a homogeneous catalyst was investigated in this work to produce 2-ethylhexyl oleate (biodiesel) using semi-batch reactive distillation. The effects of reaction temperature (100 to 130°C), 2-ethylhexanol:oleic acid molar ratio (1:1 to 1:3), and catalyst concentration (0.2 to 1 wt%) were studied. The highest conversion, 97%, was achieved at a reaction temperature of 130°C, a free fatty acid to alcohol molar ratio of 1:2, and a catalyst concentration of 1 wt%. A simulation built from first principles of reactive distillation was implemented in MATLAB to describe the process, and good agreement with the experimental results was achieved.
A simple, precise, rapid, and accurate reversed-phase high-performance liquid chromatographic method has been developed for the determination of guaifenesin in pure form, pharmaceutical formulations, and industrial effluent. Chromatography was carried out on a Supelco L7 reversed-phase column (25 cm × 4.6 mm, 5 µm) using a mixture of methanol–acetonitrile–water (80:10:10 v/v/v) as the mobile phase at a flow rate of 1.0 ml/min. Detection was performed at 254 nm at ambient temperature. The retention time of guaifenesin was found to be 2.4 minutes. The calibration curve was linear (r = 0.9998) over the concentration range 0.08 to 0.8 mg/ml. The limit of detection (LOD) and limit of quantification (LOQ) were found to be 6 µg/ml and 18 µg/ml, respectively.
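For reference, limits of this kind are commonly obtained from the calibration data via the ICH-style formulas (this convention is an assumption; the abstract does not state which definition was used), where σ is the standard deviation of the response and S is the slope of the calibration curve:

LOD = 3.3σ/S,  LOQ = 10σ/S.

The reported values are consistent with these definitions, since LOQ/LOD = 18/6 = 3 ≈ 10/3.3.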
To obtain a mixed model with high significance and accurate estimation, it is necessary to find a method that selects the most important variables to include in the model, especially when the data under study suffer from multicollinearity as well as high dimensionality. This research compares, via simulation, some methods for choosing the explanatory variables and estimating the parameters of the regression model, namely Bayesian ridge regression (unbiased) and the adaptive Lasso regression model. The mean squared error (MSE) was used to compare the methods.
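A minimal sketch of such a simulation comparison follows. The data-generating settings are hypothetical; scikit-learn's BayesianRidge stands in for the Bayesian ridge estimator, and the adaptive lasso is approximated by a standard two-stage weighted Lasso, which may differ from the exact estimators the study used.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, Lasso

rng = np.random.default_rng(0)
n, p_dim = 50, 100                        # high-dimensional: p > n
Z = rng.normal(size=(n, p_dim))
X = Z + 0.9 * rng.normal(size=(n, 1))     # shared factor -> multicollinearity
beta = np.zeros(p_dim); beta[:5] = 2.0    # sparse true coefficients
y = X @ beta + rng.normal(size=n)

# Bayesian ridge regression
br = BayesianRidge().fit(X, y)

# Adaptive lasso via two stages: pilot weights from the ridge fit,
# then a Lasso on the rescaled design, mapped back to the original scale.
pilot = np.abs(br.coef_) + 1e-6
lasso = Lasso(alpha=0.1).fit(X * pilot, y)
ada_coef = lasso.coef_ * pilot

def coef_mse(b):
    # MSE of the estimated coefficients against the true beta.
    return np.mean((b - beta) ** 2)

print(f"Bayesian ridge MSE: {coef_mse(br.coef_):.4f}")
print(f"Adaptive lasso MSE: {coef_mse(ada_coef):.4f}")
```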
In the presence of deep submicron noise, providing reliable and energy-efficient network-on-chip operation is becoming a challenging objective. In this study, the authors propose a hybrid automatic repeat request (HARQ)-based coding scheme that simultaneously reduces the crosstalk-induced bus delay and provides multi-bit error protection while achieving high energy savings. This is achieved by calculating two-dimensional parities and duplicating all the bits, which together provide single-error correction and detection of up to six errors. The error correction reduces the performance degradation caused by retransmissions; combined with the voltage swing reduction permitted by the scheme's high error detection capability, this achieves high energy savings. The res
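The scheme's exact flit format and HARQ control flow are not given in this excerpt; the following is a minimal sketch, under an assumed 4×4 payload layout, of how two-dimensional parities plus full bit duplication can be formed and used for single-error correction. (Here the duplicate copies serve only as extra redundancy; in the real scheme they also contribute to the multi-error detection.)

```python
import numpy as np

def encode_2d_parity(bits, rows=4, cols=4):
    # Arrange the payload as a rows x cols grid, append one parity bit per
    # row and per column (the two-dimensional parities), then duplicate
    # every bit of the resulting codeword.
    grid = np.array(bits, dtype=np.uint8).reshape(rows, cols)
    row_par = grid.sum(axis=1) % 2
    col_par = grid.sum(axis=0) % 2
    codeword = np.concatenate([grid.ravel(), row_par, col_par])
    return np.repeat(codeword, 2)            # duplicate all bits

def correct_single_error(received, rows=4, cols=4):
    # Locate a single flipped data bit as the intersection of the failing
    # row parity and failing column parity, and flip it back.
    word = received[::2].copy()               # take one copy of each bit
    grid = word[:rows * cols].reshape(rows, cols)
    nrc = rows * cols
    r_bad = np.flatnonzero(grid.sum(axis=1) % 2 != word[nrc:nrc + rows])
    c_bad = np.flatnonzero(grid.sum(axis=0) % 2 != word[nrc + rows:])
    if len(r_bad) == 1 and len(c_bad) == 1:
        grid[r_bad[0], c_bad[0]] ^= 1         # flip the erroneous bit
    return grid.ravel()

# Example: corrupt one copy of data bit 6 and recover the payload.
data = np.random.default_rng(1).integers(0, 2, 16)
tx = encode_2d_parity(data)
tx[2 * 6] ^= 1
assert (correct_single_error(tx) == data).all()
```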
An oscillation criterion is investigated for all solutions of first-order linear neutral differential equations with positive and negative coefficients. Some sufficient conditions are established so that every solution of eq. (1.1) oscillates. Generalizations of some results in [4] and [5] are given, and examples are provided to illustrate the main results.
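Eq. (1.1) itself is not reproduced in this excerpt. For orientation only, first-order linear neutral equations with positive and negative coefficients are typically written in this literature in a form such as the following (an assumed representative form, not necessarily the exact eq. (1.1)):

\[
\frac{d}{dt}\bigl[x(t) - c\,x(t-\tau)\bigr] + P(t)\,x(t-\sigma) - Q(t)\,x(t-\delta) = 0, \qquad t \ge t_0,
\]

where P and Q are the positive and negative coefficients and τ, σ, δ ≥ 0 are the delays.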