Orthogonal Frequency Division Multiplexing (OFDM) is an efficient multi-carrier technique. The core operation in OFDM systems is the FFT/IFFT unit, which requires a large amount of hardware resources and incurs significant processing delay. Developments in implementation technologies such as Field Programmable Gate Arrays (FPGAs) have made OFDM a feasible option. The goal of this paper is to design and implement an OFDM transmitter on an Altera FPGA using the Quartus software. The proposed transmitter simplifies the Fourier transform calculation by using a decoder instead of multipliers. After programming the Altera DE2 FPGA kit with the implemented design, several practical tests were performed, starting with monitoring the outputs of all implemented blocks (VHDL code) and comparing them with the corresponding results from a simulation system implemented in MATLAB 2010a. The results of these practical tests show that the suggested approach gives a significant improvement in reducing complexity and processing delay (45 ns) compared with conventional implementations of an OFDM transmitter.
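The abstract does not detail the decoder-based Fourier transform, but the baseline transmitter operation it replaces (mapping bit pairs onto subcarriers and applying an IFFT) can be sketched in software, much like the MATLAB reference model mentioned above. This is a minimal illustrative sketch: the QPSK mapping and the 8-subcarrier size are assumptions for demonstration, not the paper's actual configuration.

```python
import cmath

def idft(X):
    """Inverse DFT: build the time-domain OFDM symbol from the
    frequency-domain subcarrier values X[0..N-1]."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Illustrative QPSK constellation: each pair of bits selects one complex point.
QPSK = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}

def ofdm_symbol(bits, n_sub=8):
    """Map 2*n_sub bits onto n_sub QPSK subcarriers, then IFFT them
    into one time-domain OFDM symbol."""
    assert len(bits) == 2 * n_sub
    subcarriers = [QPSK[(bits[2 * i], bits[2 * i + 1])] for i in range(n_sub)]
    return idft(subcarriers)
```

Applying a forward DFT to the generated symbol recovers the original subcarrier values, which mirrors the block-by-block output checks against the MATLAB simulation described in the abstract.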
The study discussed three areas of strategic thinking, namely patterns, elements, and outcomes. It aimed to measure the extent to which strategic leaders exhibit the various patterns of strategic thinking, the extent of their use of the elements of strategic thinking, and the strategic thinking outcomes of managers at various levels, and to identify the relationship between the patterns, elements, and outcomes of strategic thinking in organizations. The study included five banks, four hospitals, and four colleges and universities; the research sample consisted of 168 individuals distributed across the positions of Director General, Director of Directorate, and Director of
Mental disorders (MDs) are a common problem in Primary Health Care Centers (PHCCs). Many people with serious MDs are challenged by the symptoms and disabilities that result from the disease, and by stereotypes and prejudice due to misconceptions about mental illness. This study aims to evaluate knowledge of and attitudes toward mental health concepts and services, and the causes of reluctance to seek those services, among people attending PHCCs. A descriptive cross-sectional study was conducted. Random sampling was used to select 10 Directorates of Health (DoHs) covering the north, middle, and south of Iraq. The study was executed in 50 selected PHCCs, 5 in each DoH, each involving a randomly selected 30 people attending th
Aim: The aim of the study is to compare the Er,Cr:YSGG 2780 nm laser with a carbide fissure bur in root-end resection with regard to morphological variations, temperature changes, and the duration of the resection process.
Settings and Design: 5 W, 25 Hz, 50% water, 80% air, 25.47 J/cm².
Material and method: Twenty-one extracted single-rooted teeth were endodontically treated; twenty teeth were obturated and divided into two groups according to the method of resection. Group 1 was root-end resected using a cross-cut carbide bur, while Group 2 was root-end resected using the laser with an MGG6 sapphire tip of 600 μm diameter. The temperature on the external root surface and the duration of resection were recorded
This article showcases the development and use of a side-polished fiber optic sensor that can identify altered refractive index levels within a glucose solution by exploiting the surface plasmon resonance (SPR) effect. Efficiency was enhanced by depositing a 50 nm-thick layer of gold on the D-shaped sensing area of the fiber. The sensor was fabricated from a silica optical fiber (SOF) that underwent a cladding-stripping process, producing three distinct lengths, followed by a polishing step that removed a portion of the fiber diameter to produce a D-shaped cross-section. During experimentation with the glucose solution, the side-polished fiber optic sensor revealed an adept detection
Reservoir characterization is an important component of hydrocarbon exploration and production, requiring the integration of different disciplines for accurate subsurface modeling. This comprehensive research paper delves into the complex interplay of rock materials, rock formation techniques, and geological modeling techniques for improving reservoir quality. The study emphasizes the dominant role of petrophysical factors such as porosity, shale volume, water content, and permeability as important indicators of reservoir properties, fluid behavior, and hydrocarbon potential. It examines various rock cataloging techniques, focusing on rock aggregation methods and self-organizing maps (SOMs) to identify specific and
Feature selection (FS) comprises a series of processes used to decide which relevant features/attributes to include and which irrelevant features to exclude from predictive modeling. It is a crucial task that helps machine learning classifiers reduce error rates, computation time, and overfitting, and improve classification accuracy. It has demonstrated its efficacy in a myriad of domains, ranging from text classification (TC) and text mining to image recognition. While there are many traditional FS methods, recent research efforts have been devoted to applying metaheuristic algorithms as FS techniques for the TC task. However, there are few literature reviews concerning TC. Therefore, a comprehensive overview was systematicall
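The include/exclude decision at the heart of FS can be illustrated with a simple filter-style selector that scores each feature against the label and keeps the top k. This is a generic sketch for illustration only; the Pearson-correlation scoring below is an assumed baseline, not one of the metaheuristic methods the review surveys.

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences
    (0.0 when either sequence is constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def filter_top_k(X, y, k):
    """Score each feature column of X by |correlation| with the label y
    and return the indices of the k highest-scoring features."""
    n_feat = len(X[0])
    cols = [[row[j] for row in X] for j in range(n_feat)]
    scores = [abs(pearson(col, y)) for col in cols]
    return sorted(sorted(range(n_feat), key=lambda j: -scores[j])[:k])
```

Wrapper and metaheuristic FS methods differ mainly in replacing this one-shot, per-feature scoring with a search over whole feature subsets evaluated by a classifier.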
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance; unfortunately, many applications have too little data to train DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically; ultimately, more data generally yields a better DL model, although performance is also application-dependent. This issue is the main barrier for