Data-Driven Requirements Engineering (DDRE) represents a vision for a shift from static, traditional requirements engineering methods to dynamic, data-driven, user-centered ones. Given the volume of available data and the increasingly complex requirements of software systems whose functions must adapt to changing needs to retain user trust, an approach embedded in a continuous software engineering process is needed. This need drives the emergence of new challenges in the requirements engineering discipline. The problem addressed in this study was data discrepancies that hampered the needs elicitation process, so that the developed software ultimately failed to meet stakeholder needs and organizational goals. The objective of this research is to collect and integrate data from multiple sources while ensuring interoperability. The study concludes that the clustering algorithm supporting data collection and elicitation has a somewhat greater impact on the ratings professionals give to pairs belonging to the same cluster, whereas the influence of POS tagging on those ratings is relatively consistent for pairs within the same cluster and pairs in different clusters.
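The clustering algorithm used in the study is not named in the abstract; as a purely hypothetical illustration of grouping elicited requirement statements into clusters, a minimal token-overlap (Jaccard) sketch (all names and thresholds are made up for illustration):

```python
def jaccard(a, b):
    # Similarity between two token sets.
    return len(a & b) / len(a | b)

def cluster_requirements(statements, threshold=0.3):
    # Greedy single-pass clustering: assign each statement to the first
    # cluster whose accumulated vocabulary is similar enough, else open
    # a new cluster.  Illustrative only, not the paper's method.
    clusters = []
    for text in statements:
        tokens = set(text.lower().split())
        for cluster in clusters:
            if jaccard(tokens, cluster["tokens"]) >= threshold:
                cluster["members"].append(text)
                cluster["tokens"] |= tokens
                break
        else:
            clusters.append({"tokens": set(tokens), "members": [text]})
    return [c["members"] for c in clusters]

feedback = [
    "the app crashes when uploading a photo",
    "app crashes after photo upload",
    "please add a dark mode theme",
]
groups = cluster_requirements(feedback)
print(len(groups))  # → 2 (two crash reports grouped, dark mode separate)
```

Pairs landing in the same cluster would then be the ones rated by professionals, as in the study's evaluation.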
This paper proposes a novel meta-heuristic optimization algorithm called the fine-tuning meta-heuristic algorithm (FTMA) for solving global optimization problems. In this algorithm, solutions are fine-tuned using the fundamental steps of meta-heuristic optimization, namely exploration, exploitation, and randomization, in such a way that if one step improves the solution, it is unnecessary to execute the remaining steps. The performance of the proposed FTMA has been compared with that of five other optimization algorithms over ten benchmark test functions. Nine of them are well known and already exist in the literature, while the tenth is proposed by the authors and introduced in this article. One test trial was shown …
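The step-skipping idea described above can be sketched as follows; the specific exploration, exploitation, and randomization operators and their parameters here are hypothetical stand-ins, not the authors' exact formulation:

```python
import random

def ftma_step(x, best, f, lower, upper):
    # Try exploration, exploitation, then randomization in order;
    # accept the first candidate that improves on x and skip the rest.
    moves = (
        lambda: [xi + random.uniform(-1.0, 1.0) for xi in x],       # exploration: random move
        lambda: [xi + 0.1 * (bi - xi) for xi, bi in zip(x, best)],  # exploitation: drift toward best
        lambda: [random.uniform(lower, upper) for _ in x],          # randomization: fresh point
    )
    fx = f(x)
    for move in moves:
        cand = [min(max(ci, lower), upper) for ci in move()]        # keep within bounds
        if f(cand) < fx:
            return cand                                             # improvement: skip remaining steps
    return x

def ftma(f, dim=2, lower=-5.0, upper=5.0, iters=200, seed=1):
    random.seed(seed)
    x = [random.uniform(lower, upper) for _ in range(dim)]
    best = list(x)
    for _ in range(iters):
        x = ftma_step(x, best, f, lower, upper)
        if f(x) < f(best):
            best = list(x)
    return best, f(best)

sphere = lambda v: sum(vi * vi for vi in v)   # a standard benchmark, minimum 0 at the origin
sol, val = ftma(sphere)
print(val)
```

Because each step is attempted only until one succeeds, cheap improving moves avoid the cost of the later, more disruptive operators.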
An Optimal Algorithm for HTML Page Building Process
Krawtchouk polynomials (KPs) and their moments are promising techniques for applications in information theory, coding theory, and signal processing. This is due to the special capabilities of KPs in feature extraction and classification processes. The main challenge in existing KP recurrence algorithms is numerical error, which occurs during the computation of the coefficients at large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation to compute the coefficients of KPs at high orders. In particular, the paper discusses the development of a new algorithm and presents a new mathematical model for computing the …
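The paper's new recurrence relation is not reproduced in the abstract. For reference, a sketch of the classical three-term recurrence for Krawtchouk polynomials K_n(x; p, N), i.e. the kind of relation whose instability at high orders motivates this work, not the proposed algorithm itself:

```python
def krawtchouk(n, x, p, N):
    # Classical three-term recurrence (hypergeometric normalization):
    # p(N-k) K_{k+1}(x) = [p(N-k) + k(1-p) - x] K_k(x) - k(1-p) K_{k-1}(x),
    # with K_0(x) = 1 and K_1(x) = 1 - x/(pN).
    k_prev, k_curr = 1.0, 1.0 - x / (p * N)
    if n == 0:
        return k_prev
    for k in range(1, n):
        k_next = ((p * (N - k) + k * (1 - p) - x) * k_curr
                  - k * (1 - p) * k_prev) / (p * (N - k))
        k_prev, k_curr = k_curr, k_next
    return k_curr

# Sanity check: K_1(x) = 1 - x/(pN); with p = 0.5, N = 8: K_1(2) = 1 - 2/4
print(krawtchouk(1, 2, 0.5, 8))  # → 0.5
```

In floating point, the division by p(N-k) amplifies rounding error as k grows and as p moves away from 0.5, which is exactly the regime the proposed relation targets.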
Data security is an important component of data communication and transmission systems. Its main role is to keep sensitive information safe and intact from sender to receiver. The proposed system aims to secure text messages through two security principles: encryption and steganography. The system introduces a novel encryption method based on graph theory properties: it forms a graph from a password, generates an encryption key as the weight matrix of that graph, and uses the Least Significant Bit (LSB) method to hide the encrypted message in the green component of a colored image. Practical experiments on perceptibility, capacity, and robustness were evaluated using similarity measures such as PSNR, MSE, and …
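A minimal sketch of the LSB hiding step described above, embedding message bits into the least significant bit of the green channel; the pixel representation and helper names are illustrative, not the paper's implementation:

```python
def embed_lsb_green(pixels, message):
    # pixels: list of [R, G, B] values; message: bytes to hide.
    # Each message bit (MSB first) replaces the LSB of one green value.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for this image")
    stego = [list(px) for px in pixels]
    for bit, px in zip(bits, stego):
        px[1] = (px[1] & ~1) | bit   # overwrite green LSB only
    return stego

def extract_lsb_green(pixels, n_bytes):
    # Read the green LSBs back and reassemble bytes, MSB first.
    bits = [px[1] & 1 for px in pixels[:n_bytes * 8]]
    return bytes(
        sum(b << (7 - i) for i, b in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )

image = [[10, 20, 30] for _ in range(40)]   # 40 RGB pixels of cover image
stego = embed_lsb_green(image, b"hi")
print(extract_lsb_green(stego, 2))  # → b'hi'
```

Changing only the green LSB alters each affected pixel value by at most 1, which is why LSB embedding scores well on perceptibility measures like PSNR.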
Longitudinal data is becoming increasingly common, especially in the medical and economic fields, and various methods have been developed to analyze this type of data.
In this research, the focus was on grouping and analyzing such data: cluster analysis plays an important role in identifying and grouping co-expressed sub-profiles over time, which are then fitted with the nonparametric smoothing cubic B-spline model. This model provides continuous first and second derivatives, resulting in a smoother curve with fewer abrupt changes in slope; it is also more flexible and can capture more complex patterns and fluctuations in the data.
The balanced longitudinal data profiles were compiled into subgroups …
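The B-spline smoother itself is not specified in the abstract. As a self-contained illustration of the cubic B-spline basis underlying such models, the functions can be evaluated with the Cox-de Boor recursion (the knot vector and names here are illustrative):

```python
def bspline_basis(i, k, t, knots):
    # Cox-de Boor recursion for the degree-k B-spline basis B_{i,k}(t).
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] > knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

knots = list(range(12))   # uniform knots 0..11
t = 5.3                   # a point in the fully supported region [3, 8)
# Cubic (k = 3) basis functions form a partition of unity there.
total = sum(bspline_basis(i, 3, t, knots) for i in range(len(knots) - 4))
print(round(total, 10))  # → 1.0
```

The degree-3 (cubic) choice is what gives the continuous first and second derivatives mentioned above: each basis function is C² across interior knots.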
The aim of the study was to identify the correlation and impact between ethical leadership behavior and university performance at Sumer University. The descriptive analytical method was used, adopting a questionnaire to collect data. The questionnaire was distributed electronically to 113 teachers at Sumer University, of whom 105 responded. The results showed a correlation and effect relationship between the research variables. In addition, the surveyed university does not have ethically defined standards for the performance of the work of its cadres. Finally, the research presented a set of recommendations aimed at tackling problems in ethical leadership …
The research aimed to: 1) identify perceived self-efficacy among university students; and 2) determine the statistical significance of differences in perceived self-efficacy according to gender and specialty. The research sample consisted of 300 students chosen randomly from the original research population: 150 males and 150 females, drawn from both scientific and humanities specializations. The research tool was a measure of perceived self-efficacy prepared on the basis of previous scales and literature on the subject. The researcher used a number of statistical means, including the t-test and two-way analysis of variance, and the results showed that the research sample enjoys …
The research aims to measure and evaluate the efficiency of the directorates of Anbar Municipalities using the Data Envelopment Analysis (DEA) method. The municipality sector is an important one: it is in direct contact with citizens' lives and provides them with essential services. The researcher used a case study method, with monthly reports as the source of data. The research population is represented by the Directorate of Anbar Municipalities, and the research sample consists of 7 municipalities that differ in category and size. The most important conclusion reached by the research …
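DEA efficiency scores are typically obtained by solving one linear program per decision-making unit. A minimal input-oriented CCR sketch using `scipy.optimize.linprog`; the municipality data here is made up for illustration and is not the study's:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    # Input-oriented CCR envelopment model for one decision-making unit:
    #   min theta  s.t.  sum_j lambda_j x_j <= theta * x_unit  (each input)
    #                    sum_j lambda_j y_j >= y_unit          (each output)
    #                    lambda >= 0
    # inputs: (m, n) array, one column per unit; outputs: (s, n) array.
    m, n = inputs.shape
    s = outputs.shape[0]
    c = np.zeros(1 + n)                 # decision vector: [theta, lambdas]
    c[0] = 1.0                          # minimize theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -inputs[:, unit]      # input rows: lambda.x - theta*x_unit <= 0
    A_ub[:m, 1:] = inputs
    A_ub[m:, 1:] = -outputs             # output rows: -lambda.y <= -y_unit
    b_ub[m:] = -outputs[:, unit]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]

# Hypothetical data: 1 input (staff), 1 output (services) for 3 municipalities.
x = np.array([[2.0, 4.0, 8.0]])
y = np.array([[4.0, 8.0, 8.0]])
for j in range(3):
    print(f"unit {j}: efficiency = {ccr_efficiency(x, y, j):.3f}")
```

Units 0 and 1 achieve the best output-to-input ratio and score 1.0; unit 2 produces the same output as unit 1 from twice the input, so it scores 0.5.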
RNA Sequencing (RNA-Seq) is the sequencing and analysis of transcriptomes. The main purpose of RNA-Seq analysis is to determine the presence and quantity of RNA in an experimental sample under a specific condition. Raw RNA sequence data is massive; it can be as large as hundreds of gigabytes (GB). This volume makes processing times long, often several days. A multicore processor can speed up a program by dividing the work into tasks and running them concurrently, so it is a suitable choice for overcoming this problem. Therefore, this study aims to use an Intel multicore processor to improve RNA-Seq speed and to analyze the performance of RNA-Seq analysis with a multiprocessor …
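A minimal sketch of the idea above, splitting per-read work across cores with a process pool; the GC-content task is a made-up stand-in for a real RNA-Seq processing step:

```python
from multiprocessing import Pool

def gc_fraction(read):
    # Stand-in per-read task: fraction of G/C bases in one read.
    return sum(base in "GC" for base in read) / len(read)

def analyze(reads, workers=4):
    # Distribute reads across worker processes; a larger chunksize
    # reduces inter-process communication overhead on big inputs.
    with Pool(processes=workers) as pool:
        return pool.map(gc_fraction, reads, chunksize=256)

if __name__ == "__main__":
    reads = ["ACGTACGT", "GGGGCCCC", "ATATATAT"] * 1000
    fractions = analyze(reads)
    print(round(sum(fractions) / len(fractions), 3))  # → 0.5
```

Because each read is processed independently, the workload is embarrassingly parallel and speedup scales with the number of cores until I/O becomes the bottleneck.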