Krawtchouk polynomials (KPs) and their moments are promising tools for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification. The main challenge in existing KP recurrence algorithms is the numerical error that accumulates when computing the coefficients of large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of high-order KPs. In particular, the paper develops a new algorithm and presents a new mathematical model for computing the initial value of the KP parameter. In addition, a new diagonal recurrence relation, derived from the existing n-direction and x-direction recurrences, is introduced and used in the proposed algorithm. The diagonal and existing recurrence relations are then exploited together to compute the KP coefficients: the coefficients are first computed for one partition after dividing the KP plane into four, and the symmetry relations are exploited to obtain the coefficients in the other partitions. The performance of the proposed recurrence algorithm was evaluated through comparisons with state-of-the-art works in terms of reconstruction error, polynomial size, and computation cost. The obtained results indicate that the proposed algorithm is reliable and computes fewer coefficients than the existing algorithms across wide ranges of the parameter p and the polynomial size N. The results also show that the improvement ratio of the computed coefficients ranges from 18.64% to 81.55% in comparison with the existing algorithms. In addition, the proposed algorithm can generate polynomials of an order ∼8.5 times larger than those generated using state-of-the-art algorithms.
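For reference, the following is a minimal sketch of the classical n-direction three-term recurrence for the unnormalized Krawtchouk polynomials; the paper's diagonal recurrence, its initial-value model, and the four-partition symmetry scheme are not reproduced here, and the function name and interface are illustrative only.

```python
import numpy as np

def krawtchouk_coeffs(N, p, n_max=None):
    """Classical n-direction three-term recurrence for the (unnormalized)
    Krawtchouk polynomials K_n(x; p, N) -- not the paper's diagonal scheme.
    Returns K with K[n, x] = K_n(x; p, N) for x = 0..N."""
    n_max = N if n_max is None else n_max
    x = np.arange(N + 1, dtype=float)
    K = np.zeros((n_max + 1, N + 1))
    K[0] = 1.0                              # K_0(x) = 1
    if n_max >= 1:
        K[1] = 1.0 - x / (p * N)            # K_1(x) = 1 - x/(pN)
    for n in range(1, n_max):
        # p(N-n) K_{n+1} = [p(N-n) + n(1-p) - x] K_n - n(1-p) K_{n-1}
        K[n + 1] = ((p * (N - n) + n * (1 - p) - x) * K[n]
                    - n * (1 - p) * K[n - 1]) / (p * (N - n))
    return K
```

Evaluating this recurrence in floating point loses accuracy as N grows and as p moves away from 0.5 toward 0 or 1, which is exactly the instability the proposed algorithm is designed to avoid.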
Data-Driven Requirements Engineering (DDRE) represents a vision for a shift from static, traditional methods of requirements engineering to dynamic, data-driven, user-centered methods. Given the data now available and the increasingly complex requirements of software systems, whose functions must adapt to changing needs in order to gain the trust of their users, an approach embedded in a continuous software engineering process is needed. This need drives the emergence of new challenges in the requirements engineering discipline. The problem addressed in this study was that discrepancies in the data hampered the requirements elicitation process, so that the developed software ultimately could not meet the needs of its users.
Most medical datasets suffer from missing data, owing to the expense of some tests or to human error while recording them. This issue degrades the performance of machine learning models because the values of some features are missing. There is therefore a need for dedicated methods for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used to generate and impute the missing values in the Pima Indians Diabetes Dataset (PIDD); the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, namely support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayes (NB), improved when the missing values were imputed with the proposed ISSA.
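As a point of reference, the following is a minimal sketch of the basic SSA of Mirjalili et al., not the authors' improved ISSA; the function names, the bound arrays lb/ub, and the idea of treating the candidate vector as imputed feature values scored by a classifier-error objective are all illustrative assumptions.

```python
import numpy as np

def ssa_minimize(objective, lb, ub, n_salps=30, n_iter=200, seed=0):
    """Basic salp swarm algorithm (SSA) sketch: minimizes `objective` over a
    box [lb, ub]; lb and ub are NumPy arrays of the same length. For
    imputation, each position could encode candidate missing values."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, size=(n_salps, dim))    # salp positions
    fitness = np.array([objective(x) for x in X])
    best = X[fitness.argmin()].copy()               # food source F
    best_fit = fitness.min()
    for l in range(1, n_iter + 1):
        c1 = 2.0 * np.exp(-(4.0 * l / n_iter) ** 2)  # exploration/exploitation balance
        for i in range(n_salps):
            if i == 0:                               # leader moves around the food source
                c2 = rng.uniform(size=dim)
                c3 = rng.uniform(size=dim)
                step = c1 * ((ub - lb) * c2 + lb)
                X[i] = np.where(c3 >= 0.5, best + step, best - step)
            else:                                    # followers track the salp ahead
                X[i] = 0.5 * (X[i] + X[i - 1])
            X[i] = np.clip(X[i], lb, ub)
            f = objective(X[i])
            if f < best_fit:
                best_fit, best = f, X[i].copy()
    return best, best_fit
```

In an imputation setting, the returned best position would be written back into the missing cells of the dataset before training the SVM, KNN, and NB classifiers.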
In this paper, phase-fitted and amplification-fitted versions of the Runge-Kutta-Fehlberg method are derived on the basis of the existing 4(5)-order method to solve ordinary differential equations with oscillatory solutions. The new method has zero phase-lag and zero dissipation. The phase-lag, or dispersion error, is the angle between the true solution and the approximate solution, while the dissipation is the distance of the numerical solution from the basic periodic solution. Many problems are tested over a long interval, and the numerical results show that the present method is more accurate than the 4(5) Runge-Kutta-Fehlberg method.
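For a method with stability function R(z) applied to the standard oscillatory test equation y' = iωy (so that y_{k+1} = R(iνh) y_k with ν = ωh), the two quantities are conventionally defined as follows; this is the standard textbook form, which the paper's exact notation may vary:

```latex
% Phase-lag and dissipation for the test equation y' = i\omega y,
% with \nu = \omega h and stability function R(z):
\begin{align*}
  \phi(\nu) &= \nu - \arg R(i\nu),        &&\text{(phase-lag / dispersion error)} \\
  d(\nu)    &= 1 - \lvert R(i\nu) \rvert. &&\text{(dissipation / amplification error)}
\end{align*}
```

A phase-fitted, amplification-fitted method chooses its free coefficients so that both quantities vanish at the fitting frequency, which is the sense in which the proposed method has zero phase-lag and zero dissipation.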
There are many methods for searching a large amount of data to find one particular piece of information, such as finding a person's name in a mobile phone's contact records. Certain methods of organizing data make the search process more efficient; the objective of these methods is to find the element with the least cost (least time). The binary search algorithm is faster than sequential search and other commonly used search algorithms. This research develops the binary search algorithm using a new structure called Triple, in which each data item is represented as a triple consisting of three locations (1-Top, 2-Left, and 3-Right). The binary search algorithm divides the search interval in half at each step, which keeps the maximum number of comparisons (and hence the average case) logarithmic in the input size.
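A minimal sketch of the idea is shown below; the (Top, Left, Right) triple layout follows the paper's description, but the construction and search routines here are hypothetical illustrations rather than the authors' exact algorithm.

```python
from typing import Any, List, Optional, Tuple

Triple = Optional[Tuple[Any, "Triple", "Triple"]]  # (Top, Left, Right)

def build_triple(data: List[Any], lo: int, hi: int) -> Triple:
    """Recursively group a sorted list into (Top, Left, Right) triples."""
    if lo > hi:
        return None
    mid = (lo + hi) // 2
    return (data[mid],                        # Top: the comparison key
            build_triple(data, lo, mid - 1),  # Left: keys smaller than Top
            build_triple(data, mid + 1, hi))  # Right: keys larger than Top

def search(node: Triple, key: Any) -> Tuple[bool, int]:
    """Binary search over the triples, counting comparisons made."""
    comparisons = 0
    while node is not None:
        comparisons += 1
        top, left, right = node
        if key == top:
            return True, comparisons          # found in O(log n) comparisons
        node = left if key < top else right   # halve the remaining interval
    return False, comparisons
```

For a sorted list `data`, calling `search(build_triple(data, 0, len(data) - 1), key)` returns whether the key was found along with a comparison count that stays near log2(len(data)).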
The penalized regression model has received considerable attention for variable selection, and it plays an essential role in dealing with high-dimensional data. The arctangent penalty, denoted Atan, has recently been used as an efficient method for both estimation and variable selection. However, the Atan penalty is very sensitive to outliers in the response variable or to heavy-tailed error distributions, whereas least absolute deviation (LAD) regression is a good way to obtain robust estimates. The specific objective of this research is to propose a robust Atan estimator that combines these two ideas. Simulation experiments and real-data applications show that the proposed LAD-Atan estimator outperforms the existing estimators in the presence of outliers or heavy-tailed errors.
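Assuming the commonly cited form of the Atan penalty, the combined estimator can be sketched as the minimizer of an LAD loss plus Atan penalties; the paper's exact tuning-parameter conventions may differ.

```latex
% Sketch of the LAD-Atan objective, assuming the common Atan penalty form
% p_{\lambda,\gamma}(t) = \lambda (\gamma + 2/\pi) \arctan(t/\gamma):
\hat{\beta}_{\text{LAD-Atan}}
  = \arg\min_{\beta}\;
    \sum_{i=1}^{n} \bigl| y_i - x_i^{\top}\beta \bigr|
    + \sum_{j=1}^{p} \lambda \Bigl(\gamma + \tfrac{2}{\pi}\Bigr)
      \arctan\!\Bigl(\tfrac{|\beta_j|}{\gamma}\Bigr)
```

The absolute-value loss bounds the influence of outlying responses, while the arctangent penalty shrinks small coefficients toward zero to perform variable selection.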
Transmitting the highest-capacity throughput over the longest possible distance without any regeneration stage is an important goal of any long-haul optical network system. Accordingly, polarization-multiplexed quadrature phase-shift keying (PM-QPSK) was introduced recently to achieve a high bit rate with relatively high spectral efficiency. Unfortunately, the broad bandwidth required by PM-QPSK increases the linear and nonlinear impairments in the physical layer of the optical fiber network, and increasing attention has been devoted to compensating for these impairments in recent years. In this paper, a single-mode fiber (SMF), single-channel PM-QPSK transceiver was simulated with a mix of optical and electrical (digital signal processing, DSP) compensation techniques.
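As a toy illustration of the modulation format only (the fiber link, its impairments, and the DSP compensation stages of the simulated transceiver are not modeled), a hedged sketch of PM-QPSK symbol mapping might look like this:

```python
import numpy as np

def pm_qpsk_modulate(bits: np.ndarray) -> np.ndarray:
    """Map a 0/1 bit stream to PM-QPSK symbols: 4 bits per symbol, carried as
    two Gray-coded QPSK symbols on orthogonal X and Y polarizations.
    A baseband toy sketch, not the paper's transceiver model."""
    assert bits.size % 4 == 0, "PM-QPSK carries 4 bits per symbol"
    b = bits.reshape(-1, 4)
    # Gray-coded QPSK: bit pair (b0, b1) -> ((2b0-1) + j(2b1-1)) / sqrt(2)
    qpsk = lambda pair: ((2 * pair[:, 0] - 1) + 1j * (2 * pair[:, 1] - 1)) / np.sqrt(2)
    x_pol = qpsk(b[:, :2])   # first two bits on the X polarization
    y_pol = qpsk(b[:, 2:])   # last two bits on the Y polarization
    return np.stack([x_pol, y_pol])  # shape (2, n_symbols)
```

Carrying two independent Gray-coded QPSK streams on orthogonal polarizations is what gives PM-QPSK its four bits per symbol and hence its relatively high spectral efficiency.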
Pyrolysis of high-density polyethylene (HDPE) was carried out in a 750 cm³ stainless steel autoclave reactor, with temperatures ranging from 470 to 495 °C and reaction times of up to 90 minutes. The influence of the operating conditions on the component yields was studied. It was found that the optimum cracking condition for HDPE, maximizing the oil yield at 70 wt.%, was 480 °C and 20 minutes. The results show that higher cracking temperatures and longer reaction times gave higher production of gas and coke. Furthermore, higher temperatures increase the aromatic content and produce a lighter oil with lower viscosity.