Reliable Recurrence Algorithm for High-Order Krawtchouk Polynomials

Krawtchouk polynomials (KPs) and their moments are promising tools for information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification. The main challenge in existing KP recurrence algorithms is numerical error, which arises when computing the coefficients at large polynomial sizes, particularly when the KP parameter p deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of high-order KPs. In particular, the paper develops a new algorithm and presents a new mathematical model for computing the initial value of the KP parameter. In addition, a new diagonal recurrence relation, derived from the existing n-direction and x-direction recurrences, is introduced and used in the proposed algorithm. The diagonal and existing recurrence relations are then exploited together to compute the KP coefficients: the coefficients are first computed for one partition after dividing the KP plane into four, and the symmetry relations are exploited to obtain the coefficients in the remaining partitions. The performance of the proposed recurrence algorithm was evaluated against state-of-the-art works in terms of reconstruction error, polynomial size, and computation cost. The results indicate that the proposed algorithm is reliable and computes fewer coefficients than the existing algorithms across wide ranges of the parameter p and polynomial size N, with an improvement ratio in the number of computed coefficients ranging from 18.64% to 81.55%. Besides this, the proposed algorithm can generate polynomials of an order ~8.5 times larger than those generated using state-of-the-art algorithms.
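
For context, here is a minimal sketch of the classical n-direction three-term recurrence that algorithms of this kind build on. This is the textbook recurrence for unnormalised Krawtchouk polynomials, not the paper's diagonal, partitioned algorithm; the function name and the unnormalised form are illustrative assumptions:

```python
import numpy as np

def krawtchouk_matrix(N, p):
    """Unnormalised Krawtchouk polynomials K_n(x; p, N) for n, x = 0..N,
    built with the classical three-term recurrence in the n direction.
    Illustrative sketch only; not the paper's diagonal algorithm."""
    x = np.arange(N + 1, dtype=float)
    K = np.zeros((N + 1, N + 1))
    K[0] = 1.0                        # K_0(x) = 1
    K[1] = 1.0 - x / (p * N)          # K_1(x) = 1 - x/(pN)
    for n in range(1, N):
        # p(N-n) K_{n+1} = (p(N-n) + n(1-p) - x) K_n - n(1-p) K_{n-1}
        K[n + 1] = ((p * (N - n) + n * (1 - p) - x) * K[n]
                    - n * (1 - p) * K[n - 1]) / (p * (N - n))
    return K

K = krawtchouk_matrix(20, 0.3)
```

For large N, or for p far from 0.5, these unnormalised values grow rapidly and the recurrence loses precision, which is exactly the instability the paper attacks with its initial-value model, diagonal recurrence, and four-way symmetry partitioning.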

Publication Date
February 25, 2024
Journal Name
Baghdad Science Journal
Natural Language Processing for Requirement Elicitation in University Using K-Means and MeanShift Algorithms

Data-Driven Requirement Engineering (DDRE) represents a vision for a shift from the static, traditional methods of requirements engineering to dynamic, data-driven, user-centered methods. Given the data now available, and the increasingly complex requirements of software systems whose functions must adapt to changing needs in order to earn users' trust, such an approach is needed as part of a continuous software engineering process. This need drives the emergence of new challenges in the requirements engineering discipline. The problem addressed in this study was that discrepancies in the data hampered the needs-elicitation process, so that the developed software ultimately contained discrepancies and could not meet the need …
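
As a rough illustration of the clustering step the title names, the sketch below groups hypothetical requirement statements with scikit-learn's KMeans and MeanShift. The sample sentences and cluster count are invented; the paper's actual pipeline is not reproduced:

```python
from sklearn.cluster import KMeans, MeanShift
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical requirement statements gathered from user feedback.
requirements = [
    "Students need to reset their portal password online",
    "Lecturers want to upload grades from a spreadsheet",
    "Students need mobile access to the course schedule",
    "Administrators want automated enrolment reports",
]

# Represent each requirement as a TF-IDF vector.
X = TfidfVectorizer(stop_words="english").fit_transform(requirements)

# K-Means needs the number of clusters up front ...
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# ... while MeanShift infers it from the data (it needs a dense array).
ms_labels = MeanShift().fit_predict(X.toarray())

print("K-Means groups:", km_labels)
print("MeanShift groups:", ms_labels)
```

K-Means requires the number of clusters in advance, whereas MeanShift estimates it from the density of the data, which is the usual trade-off when grouping elicited requirements.
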
Publication Date
January 1, 2024
Journal Name
AIP Conference Proceedings
Modeling and analysis of thermal contrast based on LST algorithm for Baghdad city

Publication Date
January 1, 2020
Journal Name
Journal of Building Engineering
Development of gravitational search algorithm model for predicting packing density of cementitious pastes

Publication Date
June 1, 2023
Journal Name
Bulletin of Electrical Engineering and Informatics
A missing data imputation method based on salp swarm algorithm for diabetes disease

Most medical datasets suffer from missing data, due to the expense of some tests or to human error while recording them. This issue degrades the performance of machine learning models, because the values of some features are missing. Therefore, a specific class of methods is needed for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used to generate and impute the missing values in the Pima Indian Diabetes Disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, namely support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayes …
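
For intuition, here is a minimal sketch of the basic salp swarm optimizer (Mirjalili et al., 2017) wrapped around a toy imputation score. The fitness function, bounds, and feature values are illustrative assumptions, and the paper's ISSA modifications are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa_minimize(fitness, dim, lb, ub, n_salps=30, iters=200):
    """Minimal salp swarm algorithm sketch. Here `fitness` scores a
    candidate vector of imputed values; lower is better."""
    pop = rng.uniform(lb, ub, size=(n_salps, dim))
    food = pop[np.argmin([fitness(s) for s in pop])].copy()  # best so far
    for t in range(iters):
        c1 = 2 * np.exp(-(4 * (t + 1) / iters) ** 2)  # exploration decay
        for i in range(n_salps):
            if i == 0:  # leader moves around the food source
                c2 = rng.uniform(size=dim)
                c3 = rng.uniform(size=dim)
                step = c1 * ((ub - lb) * c2 + lb)
                pop[i] = np.where(c3 < 0.5, food + step, food - step)
            else:       # followers average with the salp ahead of them
                pop[i] = (pop[i] + pop[i - 1]) / 2
            pop[i] = np.clip(pop[i], lb, ub)
            if fitness(pop[i]) < fitness(food):
                food = pop[i].copy()
    return food

# Toy objective standing in for an imputation score: how far the candidate
# imputed feature values sit from the mean of the observed (complete) rows.
observed_mean = np.array([120.0, 70.0, 30.0])   # hypothetical feature means
score = lambda v: float(np.sum((v - observed_mean) ** 2))
print(ssa_minimize(score, dim=3, lb=0.0, ub=200.0))
```

In an imputation setting, each salp encodes candidate values for the missing cells, and the fitness would measure how well the completed rows agree with the observed data or with downstream classifier performance.
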
Publication Date
November 1, 2020
Journal Name
2020 2nd Annual International Conference on Information and Sciences (AICIS)
An Enhanced Multi-Objective Evolutionary Algorithm with Decomposition for Signed Community Detection Problem

Publication Date
June 22, 2020
Journal Name
Baghdad Science Journal
Phase-Fitted and Amplification-Fitted Runge-Kutta-Fehlberg Method of Order 4(5) for Solving Oscillatory Problems

In this paper, phase-fitted and amplification-fitted variants of the Runge-Kutta-Fehlberg method were derived on the basis of the existing 4(5)-order method, to solve ordinary differential equations with oscillatory solutions. The new method has zero phase-lag and zero dissipation. The phase-lag, or dispersion error, is the angle between the true solution and the approximate solution, while the dissipation is the distance of the numerical solution from the basic periodic solution. Several problems were tested over a long interval, and the numerical results show that the present method is more precise than the standard 4(5) Runge-Kutta-Fehlberg method.
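
For reference, the standard definitions these properties come from: applying the method to the test equation y' = iωy with step size h and ν = ωh gives a stability function R(iν) = A(ν) + iB(ν), and the fitting conditions below fix free coefficients of the 4(5) pair. This is a generic sketch of the usual framework, not the paper's derivation:

```latex
\phi(\nu) = \nu - \arctan\!\frac{B(\nu)}{A(\nu)} = 0
\quad\text{(phase fitting)}, \qquad
d(\nu) = 1 - \sqrt{A(\nu)^2 + B(\nu)^2} = 0
\quad\text{(amplification fitting)}.
```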

Publication Date
June 12, 2011
Journal Name
Baghdad Science Journal
Development of a Binary Search Algorithm

There are many methods of searching a large amount of data to find one particular piece of information, such as finding a person's name among a mobile phone's contacts. Certain ways of organizing the data make the search process more efficient; the objective of these methods is to find the element at the least cost (least time). The binary search algorithm is faster than sequential search and other commonly used search algorithms. This research develops the binary search algorithm using a new structure called Triple, in which the data are represented as triples, each consisting of three locations (1-Top, 2-Left, and 3-Right). The binary search algorithm divides the search interval in half at each step, and this process makes the maximum number of comparisons (average-case com…
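
For reference, the halving step the abstract describes, in a classic binary search. The contact-list data is invented, and the paper's Triple structure, which is only partially specified in this excerpt, is not reproduced:

```python
def binary_search(arr, target):
    """Classic binary search on a sorted list; returns the index of
    `target` or -1. Worst case uses about log2(n) + 1 comparisons."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # halve the search interval
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1

names = ["Ali", "Huda", "Noor", "Omar", "Sara", "Zaid"]
print(binary_search(names, "Omar"))   # -> 3
```

Halving the interval bounds the worst case at roughly ⌊log₂ n⌋ + 1 comparisons, the baseline the Triple structure aims to improve on.
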
Publication Date
May 11, 2020
Journal Name
Baghdad Science Journal
Proposing Robust LAD-Atan Penalty of Regression Model Estimation for High Dimensional Data

The issue of penalized regression models has received considerable attention for variable selection, as it plays an essential role in dealing with high-dimensional data. The arctangent (Atan) penalty has recently been used as an efficient method for both estimation and variable selection. However, the Atan penalty is very sensitive to outliers in the response variable and to heavy-tailed error distributions, whereas least absolute deviation (LAD) is a good way to obtain robustness in regression estimation. The specific objective of this research is to propose a robust Atan estimator that combines these two ideas. Simulation experiments and real-data applications show that the proposed LAD-Atan estimator …
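
A plausible form of the combined objective, assuming the commonly used arctangent penalty p_{λ,γ}; the exact parametrisation in the paper may differ:

```latex
\hat{\beta} = \arg\min_{\beta}\;
\sum_{i=1}^{n}\bigl|\,y_i - \mathbf{x}_i^{\top}\beta\,\bigr|
+ \sum_{j=1}^{p} p_{\lambda,\gamma}\!\left(|\beta_j|\right),
\qquad
p_{\lambda,\gamma}(t) = \lambda\left(\gamma + \tfrac{2}{\pi}\right)\arctan\!\left(\tfrac{t}{\gamma}\right).
```

The LAD loss replaces the squared error so that heavy-tailed errors and response outliers do not dominate the fit, while the Atan penalty retains the variable-selection behaviour.
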
Publication Date
December 1, 2020
Journal Name
Indonesian Journal of Electrical Engineering and Computer Science
Hybrid compensation of polarization-multiplexed QPSK optical format for high bit rate networks

Transmitting the highest-capacity throughput over the longest possible distance without any regeneration stage is an important goal of any long-haul optical network system. Accordingly, polarization-multiplexed quadrature phase-shift keying (PM-QPSK) was introduced recently to achieve a high bit rate with relatively high spectral efficiency. Unfortunately, the broad bandwidth required by PM-QPSK increases the linear and nonlinear impairments in the physical layer of the optical fiber network, and much attention has been devoted to compensating for these impairments in recent years. In this paper, a single-mode fiber (SMF), single-channel PM-QPSK transceiver was simulated with a mix of optical and electrical (Digi…
Publication Date
March 30, 2008
Journal Name
Iraqi Journal of Chemical and Petroleum Engineering
Pyrolysis of High-density Polyethylene for the Production of Fuel-like Liquid Hydrocarbon

Pyrolysis of high-density polyethylene (HDPE) was carried out in a 750 cm³ stainless steel autoclave reactor, at temperatures ranging from 470 to 495 °C and reaction times of up to 90 minutes. The influence of the operating conditions on the component yields was studied. The optimum cracking condition for HDPE, which maximized the oil yield at 70 wt.%, was found to be 480 °C and 20 minutes. The results show that higher cracking temperatures and longer reaction times gave higher yields of gas and coke. Furthermore, higher temperatures increased the aromatics content and produced a lighter oil with lower viscosity.
