The undetected error probability is an important measure for assessing the communication reliability provided by any error coding scheme. Two error coding schemes, namely Joint crosstalk avoidance and Triple Error Correction (JTEC) and JTEC with Simultaneous Quadruple Error Detection (JTEC-SQED), provide both crosstalk reduction and multi-bit error correction/detection. The available undetected error probability model yields an upper-bound value that does not accurately estimate the reliability provided. This paper presents an improved mathematical model to estimate the undetected error probability of these two joint coding schemes. Following the decoding algorithm, errors are classified into patterns and their decoding results are checked for failures. The probabilities of the failing patterns are used to build the new models. The improved models have less than 1% error.
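As a sketch of how failing-pattern probabilities combine into such a model, the following minimal Python snippet assumes a binary symmetric channel with bit-error probability p and a known per-weight count of failing patterns; the function name and example counts are hypothetical, not taken from the paper.

```python
def undetected_error_probability(failing_patterns_by_weight, n, p):
    """Sum the occurrence probabilities of the error patterns the decoder
    fails on, over a binary symmetric channel.

    failing_patterns_by_weight: {weight w: number of weight-w patterns
    that lead to decoding failure}, assumed enumerated from the decoding
    algorithm as described above.
    n: codeword length in bits; p: bit-error probability.
    """
    return sum(count * p**w * (1 - p)**(n - w)
               for w, count in failing_patterns_by_weight.items())

# Hypothetical example: a 21-bit codeword for which 7 weight-4 patterns
# and 30 weight-5 patterns cause decoding failure.
print(undetected_error_probability({4: 7, 5: 30}, n=21, p=1e-3))
```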
In this paper, the density of states (DOS) at the contact of Fe metal to the titanium dioxide (TiO2) semiconductor has been studied using quantum-mechanical approaches. The calculation of the DOS depended on the orientation and driving energies, which were in turn functions of the refractive index and dielectric constant of the TiO2 and Fe materials. Attention has focused on the behaviour of the DOS, which increased with increasing refractive index and dielectric constant of the Fe metal and vice versa. The results for the DOS and its relation to these material parameters of the system have been discussed. As the relevant parameter of the contact system is increased, the DOS values increase at first, but the relation is later disturbed and transforms into an inverse one.
In this paper, an algorithm for binary codebook design is used in the vector quantization (VQ) technique to improve the acceptability of the absolute moment block truncation coding (AMBTC) method. VQ is used to compress the bitmap produced as output by the first method (AMBTC). The binary codebook can be generated for many images by randomly choosing code vectors from a set of binary image vectors, and this codebook is then used to compress the bitmaps of all these images. The choice of which image bitmap to compress with this codebook is based on the criterion of the average bitmap replacement error (ABPRE). This approach is suitable for reducing bit rates.
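To illustrate the block-matching step, here is a minimal sketch that picks the nearest binary code vector by Hamming distance; the exact ABPRE definition is assumed rather than reproduced, and all names and sizes are illustrative.

```python
import numpy as np

def nearest_code_vector(block, codebook):
    """Return the index of the binary code vector closest to `block`.

    Distance is the Hamming distance (number of mismatched bits), so the
    per-block replacement error is the fraction of flipped bits; averaging
    it over all blocks gives a quantity in the spirit of the ABPRE
    criterion described above (exact definition assumed here).
    """
    distances = np.count_nonzero(codebook != block, axis=1)
    return int(np.argmin(distances))

# Hypothetical 4x4 bitmap block flattened to 16 bits and a random codebook.
rng = np.random.default_rng(0)
codebook = rng.integers(0, 2, size=(32, 16))   # 32 binary code vectors
block = rng.integers(0, 2, size=16)
print(nearest_code_vector(block, codebook))
```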
High peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) is an important problem, as it increases the cost and complexity of high-power amplifiers. One of the techniques used to reduce the PAPR in OFDM systems is the tone reservation (TR) method. In our work we propose a modified tone reservation method that decreases the PAPR with low complexity compared with the conventional TR method, by processing the high and low amplitudes at the same time. An image of size 128×128 is used as the source of data transmitted over the OFDM system. The proposed method decreases the PAPR by 2 dB compared with the conventional method while keeping the performance unchanged. The performance of the proposed method is tested with …
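As context for the 2 dB figure, the snippet below computes the PAPR of a single OFDM symbol, which is the quantity a tone-reservation method tries to reduce; the subcarrier count, oversampling factor, and QPSK loading are illustrative assumptions, and the proposed TR modification itself is not reproduced.

```python
import numpy as np

def papr_db(symbols, n_fft=64, oversample=4):
    """Compute the PAPR (in dB) of one OFDM symbol.

    `symbols` holds the frequency-domain subcarrier values; zero-stuffing
    the middle of the spectrum before the IDFT oversamples the waveform
    for a better estimate of the analog peak.
    """
    spectrum = np.zeros(n_fft * oversample, dtype=complex)
    spectrum[:n_fft // 2] = symbols[:n_fft // 2]
    spectrum[-(n_fft // 2):] = symbols[n_fft // 2:]
    x = np.fft.ifft(spectrum) * np.sqrt(len(spectrum))
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Hypothetical QPSK-loaded OFDM symbol with 64 subcarriers.
rng = np.random.default_rng(1)
qpsk = (2 * rng.integers(0, 2, 64) - 1
        + 1j * (2 * rng.integers(0, 2, 64) - 1)) / np.sqrt(2)
print(f"PAPR = {papr_db(qpsk):.2f} dB")
```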
Among metaheuristic algorithms, population-based algorithms are explorative search algorithms superior to local search algorithms in terms of exploring the search space to find globally optimal solutions. However, the primary downside of such algorithms is their low exploitative capability, which prevents the search from refining the neighborhood of a solution toward more optimal solutions. The firefly algorithm (FA) is a population-based algorithm that has been widely used in clustering problems. However, FA suffers from premature convergence when no neighborhood search strategies are employed to improve the quality of clustering solutions in the neighborhood region and to explore the global regions of the search space. On the …
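For readers unfamiliar with FA, here is a minimal, generic firefly-algorithm sketch for continuous minimization; the attraction and random-walk parameters are illustrative defaults, not the tuned settings of the clustering variant discussed above.

```python
import numpy as np

def firefly_minimize(f, dim, n_fireflies=20, iters=100,
                     alpha=0.2, beta0=1.0, gamma=1.0,
                     bounds=(-5.0, 5.0), seed=0):
    """Minimal firefly algorithm: brighter (lower-cost) fireflies attract
    dimmer ones with strength beta0*exp(-gamma*r^2); alpha scales a
    random-walk term that provides the exploratory moves."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_fireflies, dim))
    cost = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:          # firefly j is brighter
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                    x[i] = np.clip(x[i], lo, hi)
                    cost[i] = f(x[i])
    best = np.argmin(cost)
    return x[best], cost[best]

# Example: minimize the sphere function in 3 dimensions.
sol, val = firefly_minimize(lambda v: float(np.sum(v ** 2)), dim=3)
print(sol, val)
```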
Ensuring reliable data transmission in a Network on Chip (NoC) is one of the most challenging tasks, especially in noisy environments, since crosstalk, interference, and radiation have increased with manufacturers' growing tendency to reduce the area, increase the frequencies, and reduce the voltages. Many Error Control Codes (ECCs) have therefore been proposed, with different error detection and correction capacities and various degrees of complexity. The Code with Crosstalk Avoidance and Error Correction (CCAEC) for network-on-chip interconnects uses simple parity check bits as the main technique to achieve high error correction capacity. According to this work, this coding scheme corrects up to 12 random errors, representing a high correction capacity.
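Since the paper's exact CCAEC construction is not reproduced here, the following sketch only shows the generic idea of group parity bits, the building block the scheme is said to rely on; the group size and layout are assumptions.

```python
def add_parity_bits(bits, group=4):
    """Append one even-parity bit per group of `group` data bits.

    Generic parity illustration only; the actual CCAEC code arranges its
    parity bits to also avoid crosstalk patterns, which is not shown.
    """
    out = []
    for i in range(0, len(bits), group):
        chunk = bits[i:i + group]
        out.extend(chunk)
        out.append(sum(chunk) % 2)          # even parity over the chunk
    return out

def check_parity(coded, group=4):
    """Return the indices of groups whose parity check fails."""
    bad = []
    step = group + 1
    for g, i in enumerate(range(0, len(coded), step)):
        if sum(coded[i:i + step]) % 2 != 0:
            bad.append(g)
    return bad

word = [1, 0, 1, 1, 0, 0, 1, 0]
coded = add_parity_bits(word)
coded[2] ^= 1                                # inject a single-bit error
print(check_parity(coded))                   # -> [0]: first group flagged
```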
In this paper, the effect of changes in bank deposits on the money supply in Iraq was studied by estimating an error correction model (ECM) for monthly time-series data over the period 2010-2015. The Phillips-Perron test was used to check stationarity, and the Engle-Granger procedure was used to test for cointegration. Cubic spline and local polynomial estimators were used to estimate the regression function. The results show that the local polynomial estimator performed better than the cubic spline at the first level of cointegration.
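A minimal sketch of the Engle-Granger cointegration step, run on synthetic stand-in series (the study itself used the 2010-2015 Iraqi deposit and money-supply data); statsmodels' coint implements the Engle-Granger test, and a Phillips-Perron unit-root test is available in, e.g., the arch package.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

# Hypothetical monthly series sharing a common stochastic trend, standing
# in for bank deposits and money supply over 72 months.
rng = np.random.default_rng(2)
common_trend = np.cumsum(rng.normal(size=72))
deposits = common_trend + rng.normal(scale=0.5, size=72)
money_supply = 1.3 * common_trend + rng.normal(scale=0.5, size=72)

# Engle-Granger two-step test: regress one series on the other, then
# test the residuals for a unit root.
t_stat, p_value, _ = coint(money_supply, deposits)
print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.3f}")
```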
This paper presents a study of a syndrome coding scheme for different binary linear error-correcting codes from families such as BCH, BKLC, Golay, and Hamming. The study is implemented on Wyner's wiretap channel model, where the main channel is error-free and the eavesdropper channel is a binary symmetric channel with crossover probability 0 < Pe ≤ 0.5, to show the security performance, in terms of equivocation rate, of error-correcting codes used in the single-stage syndrome coding scheme. Generally, these codes are not designed for secure information transmission, and they have low equivocation rates when used in the syndrome coding scheme. Therefore, to improve the transmission …
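To make the syndrome coding idea concrete, the sketch below embeds a 3-bit message in the syndrome of the Hamming(7,4) code; the brute-force encoder is purely illustrative (practical schemes select coset leaders instead).

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code (columns are the binary
# representations of 1..7).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(word):
    """Syndrome s = H @ word mod 2; in syndrome coding the secret
    message is carried by s rather than by the codeword itself."""
    return H @ np.asarray(word) % 2

def encode(message):
    """Find any 7-bit word whose syndrome equals the 3-bit message
    (brute force over all 128 words, for illustration only)."""
    for i in range(2 ** 7):
        word = [(i >> b) & 1 for b in range(7)]
        if np.array_equal(syndrome(word), message):
            return word

msg = np.array([1, 0, 1])
x = encode(msg)
print(x, syndrome(x))   # the recovered syndrome equals the message
```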
Breast cancer has received much attention in recent years, as it is one of the complex diseases that can threaten people's lives. It can be detected from the levels of secreted proteins in the blood. In this project, we developed a method for finding a threshold to classify the probability of being affected by the disease in a population, based on the levels of the related proteins in relatively small case-control samples. We applied our method to simulated and real data. The results showed that the method was accurate in estimating the probability of being diseased in both the simulated and real data. Moreover, we were able to calculate the sensitivity and specificity under the null hypothesis of our research question of being diseased or not.
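A minimal sketch of computing sensitivity and specificity at a given threshold on a simulated case-control sample; the score distributions, sample sizes, and threshold here are hypothetical.

```python
import numpy as np

def sensitivity_specificity(scores, labels, threshold):
    """Classify `scores` (e.g. protein levels) against `threshold` and
    compare with the true case/control `labels` (1 = diseased, 0 = healthy)."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))       # true positives
    tn = np.sum(~pred & (labels == 0))      # true negatives
    sensitivity = tp / np.sum(labels == 1)
    specificity = tn / np.sum(labels == 0)
    return sensitivity, specificity

# Hypothetical sample: cases have slightly elevated protein levels.
rng = np.random.default_rng(3)
labels = np.array([1] * 30 + [0] * 30)
scores = np.where(labels == 1,
                  rng.normal(1.0, 1.0, 60),
                  rng.normal(0.0, 1.0, 60))
print(sensitivity_specificity(scores, labels, threshold=0.5))
```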
In this work, the effect of choosing a tri-circular tube section was addressed in order to minimize the end effector's error. A comparison was made between the tri-tube section and the traditional square cross-section for a robot arm; the study shows that for the same weight of square section and tri-tube section, the error may be reduced by about 33%.
A program was built using MathCAD software to calculate the minimum weight of a square-section robot arm that can withstand a given payload with minimum deflection. The second part of the program optimizes the cross-section dimensions and gives the dimensions of a tri-circular tube cross-section that has the same weight as the square section.
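As a rough companion to the comparison above, this sketch evaluates cantilever tip deflection for a square section and for a three-tube section combined via the parallel-axis theorem; the material, dimensions, loading, and tube layout are assumptions for illustration, not the paper's MathCAD results.

```python
import math

E = 200e9          # Young's modulus of steel [Pa] (assumed material)
P, L = 50.0, 1.0   # payload [N] and arm length [m] (illustrative values)

def deflection(I):
    """Tip deflection of an end-loaded cantilever: P*L^3 / (3*E*I)."""
    return P * L**3 / (3 * E * I)

# Square section of side a.
a = 0.02
I_square = a**4 / 12

# Three thin tubes (outer D, inner d) centred at radius R from the arm
# axis; for three equally spaced tubes the parallel-axis contributions
# sum to 1.5*A*R^2 regardless of orientation. This is only a plausible
# reading of the tri-circular tube geometry.
D, d, R = 0.012, 0.010, 0.02
A_tube = math.pi * (D**2 - d**2) / 4
I_tube = math.pi * (D**4 - d**4) / 64
I_tri = 3 * I_tube + 1.5 * A_tube * R**2

print(f"square  : {deflection(I_square):.3e} m")
print(f"tri-tube: {deflection(I_tri):.3e} m")
```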