In data transmission, a change in a single bit of the received data may lead to misunderstanding or even disaster. Every bit in the transmitted information matters, especially in fields such as the receiver's address. Detecting every single-bit change is therefore a key issue in the data transmission field.
The ordinary single-parity method detects an odd number of bit errors efficiently but fails when the number of errors is even. Other detection methods, such as two-dimensional parity and checksum, give better results but still fail to cope with increasing numbers of errors.
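The failure mode described above can be shown in a few lines. This is a minimal illustrative sketch of even parity over a 7-bit word, not code from any of the papers listed here:

```python
# Minimal sketch: single even parity over a 7-bit word.
def parity_bit(bits):
    """Return the even-parity bit for a sequence of 0/1 values."""
    return sum(bits) % 2

def check(bits, p):
    """True if the received word passes the parity check."""
    return parity_bit(bits) == p

word = [1, 0, 1, 1, 0, 0, 1]
p = parity_bit(word)

one_err = word.copy(); one_err[2] ^= 1                    # flip one bit
two_err = word.copy(); two_err[0] ^= 1; two_err[5] ^= 1   # flip two bits

print(check(one_err, p))  # False: an odd number of errors is caught
print(check(two_err, p))  # True: an even number of errors slips through
```

The second check passing despite two flipped bits is exactly the weakness the 2D-checksum methods below aim to reduce.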
Two novel methods are proposed to detect bit-flip errors when transmitting data over a noisy medium: the 2D-Checksum method and the Modified 2D-Checksum method. In the 2D-Checksum method, a summing process is applied to 7×7 patterns first in the row direction and then in the column direction, yielding 8×8 patterns. In the modified method, an additional diagonal parity vector is appended, extending the pattern to 8×9. By combining the benefit of single parity (detecting an odd number of error bits) with the benefit of the checksum (reducing the effect of 4-bit errors) in a 2D arrangement, the detection process is improved. When any data sample is contaminated with up to 33% noise (bits flipped from 0 to 1 and vice versa), the first method improves detection by approximately 50% compared with the ordinary traditional two-dimensional parity method, and the second novel method gives the best detection results.
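The row-then-column summing step can be sketched as follows. This is a hedged illustration of the 2D-checksum idea only: the exact summing rule (modulus, bit width, diagonal vector layout) is an assumption, not the paper's implementation.

```python
# Hedged sketch of the 2D-checksum idea: append a checksum to each row,
# then a checksum row over the columns, turning 7x7 into 8x8.
def two_d_checksum(block):
    """Extend a 7x7 block to 8x8 with modulo-256 row and column checksums."""
    rows = [r + [sum(r) % 256] for r in block]                 # 7x7 -> 7x8
    col_sums = [sum(r[c] for r in rows) % 256 for c in range(8)]
    return rows + [col_sums]                                   # 7x8 -> 8x8

def verify(pattern):
    """Recompute the checksums from the 7x7 data part and compare."""
    data = [r[:7] for r in pattern[:7]]
    return two_d_checksum(data) == pattern

block = [[(i * 7 + j) % 2 for j in range(7)] for i in range(7)]
pat = two_d_checksum(block)
print(verify(pat))     # True for an uncorrupted pattern
pat[3][4] ^= 1         # flip one data bit
print(verify(pat))     # False: the row and column sums no longer match
```

A single data-bit flip disturbs both its row checksum and its column checksum, which is what gives the 2D arrangement its extra detection power over a single parity bit.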
It is well known that the presence of outliers in the data adversely affects the efficiency of estimation and the results of a study. In this paper, four methods for detecting outliers in the multiple linear regression model are studied in two cases: first on real data, and second after adding outliers to the data and attempting to detect them. The study is conducted for samples of different sizes and uses three measures to compare the methods: masking, swamping, and the standard error of the estimate.
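As an illustration of residual-based outlier detection in regression, here is a minimal sketch using standardized residuals on a simple (one-predictor) linear model. This is a generic textbook detector, not necessarily one of the four methods the paper compares, and the 2.5 threshold is a common convention, not the paper's choice:

```python
# Illustrative sketch: flag regression outliers by standardized residual.
import math

def fit_line(xs, ys):
    """Ordinary least squares for a simple linear regression y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def outliers(xs, ys, threshold=2.5):
    """Return indices of points whose standardized residual exceeds threshold."""
    a, b = fit_line(xs, ys)
    res = [y - (a + b * x) for x, y in zip(xs, ys)]
    s = math.sqrt(sum(r * r for r in res) / (len(res) - 2))  # residual std error
    return [i for i, r in enumerate(res) if abs(r) / s > threshold]

xs = list(range(10))
ys = [2 * x + 1 for x in xs]
ys[4] += 30                     # inject one gross outlier
print(outliers(xs, ys))         # -> [4]
```

Note that a single detector like this is exactly where masking (an outlier hiding another) and swamping (a clean point flagged as an outlier) can arise, which motivates comparing several methods.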
Binary relations or interactions among bio-entities, such as proteins, make up the essential part of any living biological system. Protein-protein interactions are usually structured in a graph data structure called a protein-protein interaction network (PPIN). Decomposing PPINs into complexes aims to provide the significant knowledge needed to answer many unresolved questions, including how cells are organized and how proteins work. However, complex detection falls under the category of non-deterministic polynomial-time hard (NP-hard) problems because of its computational complexity. To accommodate such combinatorial explosion, evolutionary algorithms (EAs) have proven to be effective alternatives to heuristics in solving
This paper proposes a new encryption method. It combines two cipher algorithms, DES and AES, to generate hybrid keys. This combination strengthens the proposed W-method by generating highly randomized keys. The reliability of any encryption technique can be represented by two points. The first is key generation; our approach merges 64 bits of DES with 64 bits of AES to produce 128 bits as a root key for the 15 remaining keys. This complexity raises the level of the ciphering process. Moreover, the operation shifts only one bit to the right. The second is the nature of the encryption process: it uses two keys and mixes one round of DES with one round of AES to reduce the processing time. The W-method deals with
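The key-construction idea in the abstract can be sketched as below. This is a heavily hedged illustration of the structure only: the real DES and AES key material, and the exact per-key shift schedule, are not reproduced; `des_half` and `aes_half` simply stand in for 64-bit values taken from each cipher:

```python
# Hedged sketch: build a 128-bit root key from two 64-bit halves, then
# derive subsequent keys with a one-bit right rotation per key.
def make_root_key(des_half, aes_half):
    """Concatenate two 64-bit halves into a single 128-bit root key."""
    return ((des_half & (2**64 - 1)) << 64) | (aes_half & (2**64 - 1))

def derive_keys(root, count=15):
    """Derive keys by rotating the 128-bit root one bit to the right each time."""
    keys, k = [], root
    for _ in range(count):
        k = ((k >> 1) | ((k & 1) << 127)) & (2**128 - 1)  # 1-bit right rotation
        keys.append(k)
    return keys

root = make_root_key(0x0123456789ABCDEF, 0xFEDCBA9876543210)
subkeys = derive_keys(root)
print(len(subkeys))   # 15 derived keys from one root
```

A rotation (rather than a plain shift) is assumed here so that no key bits are discarded; the abstract's wording "shifts the operation one bit only to the right" is compatible with either reading.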
The aim of this paper is to derive a posteriori error estimates for semilinear parabolic interface problems. More specifically, an optimal-order a posteriori error analysis in the - norm for semidiscrete semilinear parabolic interface problems is derived using the elliptic reconstruction technique introduced by Makridakis and Nochetto in (2003). The key idea of this technique is to use error estimators derived for elliptic interface problems to obtain parabolic estimators that are of optimal order in space and time.
Automatic speaker recognition can achieve remarkable performance under matched training and test conditions; conversely, results drop significantly under mismatched noisy conditions. Furthermore, feature extraction significantly affects performance. Mel-frequency cepstral coefficients (MFCCs) are the most commonly used features in this field of study. The literature has reported that training and testing conditions are highly correlated. Taken together, these facts support a strong recommendation to use MFCC features under similar environmental conditions (train/test) for speaker recognition. However, when noise and reverberation are present, MFCC performance is not reliable. To address this, we propose a new feature, 'entrocy', for accurate and robust
... model is derived, and the methodology is given in detail. The model is constructed using measurement criteria, the Akaike and Bayesian information criteria. For the new time series model, a new algorithm has been generated. The forecasting process, one and two steps ahead, is discussed in detail. Some exploratory data analysis is given at the beginning. The best model is selected based on these criteria and compared with some naïve models. The modified model is applied to a monthly chemical sales dataset (January 1992 to December 2019); the dataset used in this work was downloaded from the United States of America census (www.census.gov). Ultimately, the forecasted sales
A simulation study of using 2D tomography to reconstruct a 3D object is presented. The 2D Radon transform is used to create a 2D projection of each slice of the 3D object at different heights. The 2D back-projection and Fourier slice theorem methods are used to reconstruct each 2D projection slice of the 3D object. The results show the ability of the Fourier slice theorem method to reconstruct the general shape of the body with its internal structure, unlike the 2D Radon (back-projection) method, which could reconstruct only the general shape of the body because of the blurring artefact. Since the Fourier slice theorem method also could not remove all blurring artefacts, this research suggests a threshold technique to eliminate the
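The blurring of plain (unfiltered) back-projection mentioned above can be shown on a toy grid. This sketch uses only two projection angles (0° and 90°) for brevity; real Radon-based reconstruction uses many angles and, as the abstract implies, a filtering or thresholding step:

```python
# Toy sketch: project a tiny 2D grid along rows and columns, then smear the
# projections back (unfiltered back-projection) and observe the blur.
def project(img):
    """Row-sum (0 deg) and column-sum (90 deg) projections of a 2D grid."""
    rows = [sum(r) for r in img]
    cols = [sum(r[c] for r in img) for c in range(len(img[0]))]
    return rows, cols

def back_project(rows, cols):
    """Smear each projection back across the grid and average the two."""
    n, m = len(rows), len(cols)
    return [[(rows[i] / m + cols[j] / n) / 2 for j in range(m)]
            for i in range(n)]

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
rows, cols = project(img)
rec = back_project(rows, cols)
print(rec[1][1])   # 0.5  -> the object's interior is recovered
print(rec[0][1])   # 0.25 -> nonzero outside the object: the blurring artefact
```

The nonzero values outside the original square are exactly the streak/blur artefact that motivates filtering or the threshold technique the paper proposes.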
Education specialists have differed over how best to identify the talented. Since the emergence of the mental and psychological measurement movement, some scholars have adopted intelligence quotients as the criterion for identifying the talented, while others have relied on the degree of academic achievement. Each of these two methods has its own flaws and mistakes, and a large number of talented children have been victims of them. Therefore, the need arose to use other scales for detecting talented children, because they provide valuable information that may not be obtained easily through objective tests; these scales are derived from successive studies of gifted and talented children.
An impressed-current cathodic protection (ICCP) system requires measurement of extremely low-level electrical quantities. The current experimental work utilized the Adafruit INA219 sensor module to acquire the voltage, current, and power of a default load that consumes quite low power and simulates an ICCP system. The main problem is adapting the INA219 sensor to the LabVIEW environment, owing to the absence of a library for this sensor. This work is devoted to adapting the Adafruit INA219 sensor module to the LabVIEW environment by creating, developing, and successfully testing a SubVI ready for employment in an ICCP system. The sensor output was monitored with an Arduino