Survival analysis is widely applied to data describing the lifetime of an item until the occurrence of an event of interest, such as death or another event under study. The purpose of this paper is to use a dynamic approach within the deep learning neural network framework, in which a dynamic neural network is built to suit the nature of discrete survival data and time-varying effects. This neural network is trained with the Levenberg-Marquardt (L-M) algorithm, and the method is called the Proposed Dynamic Artificial Neural Network (PDANN). A comparison was then made with another method that relies entirely on the Bayesian methodology, called the Maximum A Posteriori (MAP) method. This method was carried out using numerical algorithms, namely the Iteratively Weighted Kalman Filter Smoothing (IWKFS) algorithm in combination with the Expectation Maximization (EM) algorithm. The Average Mean Square Error (AMSE) and Cross Entropy Error (CEE) were used as comparison criteria. The methods and procedures were applied to data generated by simulation using different combinations of sample sizes and numbers of intervals.
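The discrete-time survival setting this abstract refers to rests on the identity that the probability of surviving past interval j is the product of one minus the per-interval hazards. A minimal sketch of that identity (the hazard values below are arbitrary illustrations, not from the paper):

```python
def discrete_survival(hazards):
    """Survival probabilities S(t_j) = prod_{k<=j} (1 - h_k)
    for a sequence of per-interval hazards h_k."""
    s, out = 1.0, []
    for h in hazards:
        s *= (1.0 - h)   # surviving interval k requires not failing in it
        out.append(s)
    return out

# hypothetical hazards for three follow-up intervals
surv = discrete_survival([0.1, 0.2, 0.5])
```

A neural approach such as the one described would model the hazards as functions of covariates and time; the identity above is what turns those hazards back into survival probabilities.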
A database is an arrangement of data organized and distributed in a way that allows the client to access the stored data easily and conveniently. However, in the era of big data, traditional data analytics methods may not be able to manage and process such large amounts of data. In order to develop an efficient way of handling big data, this work studies the use of the Map-Reduce technique to handle big data distributed on the cloud. The approach was evaluated using a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear enhancement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r
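The Map-Reduce pattern the abstract relies on can be sketched in a few lines: a mapper emits key-value pairs per record, and a reducer merges the partial results per key. The toy "EEG" records and channel names below are invented for illustration; a real Hadoop job would distribute the same two functions over the cluster.

```python
from functools import reduce
from collections import defaultdict

# toy records: (channel, reading) -- hypothetical stand-ins for EEG samples
records = [("Fp1", 0.4), ("Fp1", -0.2), ("Cz", 0.1), ("Cz", 0.3), ("Cz", -0.1)]

def mapper(record):
    channel, reading = record
    return (channel, (reading, 1))        # emit partial (sum, count) per channel

def reducer(acc, pair):
    key, (s, c) = pair
    total, count = acc[key]
    acc[key] = (total + s, count + c)     # merge partials for the same key
    return acc

partials = map(mapper, records)
sums = reduce(reducer, partials, defaultdict(lambda: (0.0, 0)))
means = {k: s / c for k, (s, c) in sums.items()}
```

The same mapper/reducer pair, unchanged, scales from this in-memory toy to a distributed Hadoop run, which is the appeal of the technique for big data.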
In this paper, we derive an estimator of the reliability function for the two-parameter Laplace distribution using the Bayes method with a squared error loss function, Jeffreys' formula, and the conditional probability of the random variable of observation. The main objective of this study is to assess the efficiency of the derived Bayesian estimator compared with the maximum likelihood and moment estimators of this function, using Monte Carlo simulation under different Laplace distribution parameters and sample sizes. The results show that the Bayes estimator was more efficient than the maximum likelihood and moment estimators for all sample sizes.
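A small Monte Carlo sketch of the kind of comparison described, showing only the maximum likelihood side (the Bayes and moment estimators are the paper's contribution and are not reproduced here). It assumes the Laplace(μ, b) parameterization with density (1/2b)·exp(−|x−μ|/b), under which the reliability function is R(t) = P(T > t); the parameter values, sample size, and replication count are illustrative.

```python
import math, random, statistics

def laplace_sample(mu, b, n, rng):
    # inverse-CDF sampling for Laplace(mu, b)
    out = []
    for _ in range(n):
        u = rng.random() - 0.5
        out.append(mu - b * math.copysign(math.log(1 - 2 * abs(u)), u))
    return out

def reliability(t, mu, b):
    # R(t) = P(T > t) for Laplace(mu, b)
    if t < mu:
        return 1 - 0.5 * math.exp((t - mu) / b)
    return 0.5 * math.exp(-(t - mu) / b)

def mle(sample):
    # ML estimates: mu_hat = sample median, b_hat = mean absolute deviation
    mu_hat = statistics.median(sample)
    b_hat = sum(abs(x - mu_hat) for x in sample) / len(sample)
    return mu_hat, b_hat

rng = random.Random(1)
mu, b, t, n, reps = 0.0, 1.0, 1.0, 50, 2000
true_R = reliability(t, mu, b)
mse = 0.0
for _ in range(reps):
    mh, bh = mle(laplace_sample(mu, b, n, rng))
    mse += (reliability(t, mh, bh) - true_R) ** 2
mse /= reps   # Monte Carlo MSE of the plug-in ML reliability estimator
```

A Bayes estimator would be dropped into the same loop in place of `mle`, and the two MSE figures compared, which is the shape of the study the abstract describes.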
In this paper, the methods of weighted residuals, namely the Collocation Method (CM), the Least Squares Method (LSM) and the Galerkin Method (GM), are used to solve the thin film flow (TFF) equation. The weighted residual methods were implemented to obtain an approximate solution to the TFF equation. The accuracy of the obtained results is checked by calculating the maximum error remainder functions (MER). Moreover, the outcomes were compared with the 4th-order Runge-Kutta method (RK4), and good agreement was achieved. All evaluations were successfully carried out using the computer system Mathematica® 10.
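The collocation idea is easiest to see on a much simpler problem than the TFF equation. The sketch below, which is not the paper's computation, applies it to y′ = y, y(0) = 1: a polynomial trial solution is substituted into the equation, and its residual is forced to vanish at two chosen collocation points, yielding a small linear system for the coefficients.

```python
import math

# Trial solution y(x) = 1 + a1*x + a2*x**2 satisfies y(0) = 1 by construction.
# Residual of y' = y:  R(x) = a1 + 2*a2*x - (1 + a1*x + a2*x**2).
# Forcing R(xi) = 0 at two points gives  a1*(1 - xi) + a2*(2*xi - xi**2) = 1.
x1, x2 = 1 / 3, 2 / 3                      # illustrative collocation points
A = [[1 - x1, 2 * x1 - x1 ** 2],
     [1 - x2, 2 * x2 - x2 ** 2]]
rhs = [1.0, 1.0]

# solve the 2x2 system by Cramer's rule
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
a1 = (rhs[0] * A[1][1] - A[0][1] * rhs[1]) / det
a2 = (A[0][0] * rhs[1] - rhs[0] * A[1][0]) / det

def y(x):
    return 1 + a1 * x + a2 * x ** 2

# exact solution is e**x; the quadratic trial gets y(1) within about 0.08
err_at_1 = abs(y(1.0) - math.e)
```

LSM and GM differ only in the weighting: instead of point evaluation, the residual is weighted by itself (least squares) or by the trial functions (Galerkin) and integrated.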
Data encryption is the translation of data into another form or symbol, so that only people with access to the secret key or password can read it. Encrypted data are generally referred to as ciphertext, while unencrypted data are known as plaintext. Entropy can be used as a measure of the number of bits needed to code the data of an image: as the pixel values within an image are dispersed over more gray levels, the entropy increases. The aim of this research is to compare the CAST-128 method with a proposed adaptive key and the RSA encryption method for video frames, to determine the more accurate method with the highest entropy. The first method is achieved by applying CAST-128 and
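The entropy measure used as the comparison criterion is the standard Shannon entropy of the gray-level histogram. A minimal sketch (the sample pixel lists are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    # H = -sum(p_i * log2(p_i)) over the gray-level frequencies p_i;
    # the result is the average number of bits needed per pixel
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# a flat image (one gray level) carries no information
assert shannon_entropy([128] * 64) == 0.0
# pixels spread uniformly over all 256 gray levels reach the 8-bit maximum
assert shannon_entropy(list(range(256))) == 8.0
```

For an 8-bit image the entropy therefore ranges from 0 to 8 bits, and a ciphertext frame whose entropy is closer to 8 better hides the structure of the plaintext frame, which is why the comparison uses this criterion.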
In this paper, wavelets were used to study multivariate fractional Brownian motion through the deviations of the random process, in order to find an efficient estimator of the Hurst exponent. Simulation experiments showed that the performance of the proposed estimator was efficient. The estimation takes advantage of the stationarity of the detail coefficients from the wavelet transform, as the variance of these coefficients exhibits power-law behavior. Two wavelet filters (Haar and db5) were used to minimize the mean square error of the model.
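The power-law behavior mentioned is that the detail-coefficient variance of fractional Brownian motion scales as 2^(j(2H+1)) across levels j, so the slope of log2-variance against level recovers H. The sketch below, which is not the paper's estimator, illustrates this with the Haar filter on ordinary Brownian motion (H = 0.5); the signal length, number of levels, and seed are arbitrary choices.

```python
import math, random

def haar_step(signal):
    # one Haar DWT level: details and approximations from sample pairs
    d = [(signal[2 * k] - signal[2 * k + 1]) / math.sqrt(2)
         for k in range(len(signal) // 2)]
    a = [(signal[2 * k] + signal[2 * k + 1]) / math.sqrt(2)
         for k in range(len(signal) // 2)]
    return d, a

def hurst_wavelet(x, levels=6):
    js, logs = [], []
    cur = x
    for j in range(1, levels + 1):
        d, cur = haar_step(cur)
        var = sum(v * v for v in d) / len(d)   # detail variance at level j
        js.append(j)
        logs.append(math.log2(var))
    # least-squares slope of log2-variance vs level; slope = 2H + 1
    n, mj, ml = len(js), sum(js) / len(js), sum(logs) / len(logs)
    slope = (sum((j - mj) * (l - ml) for j, l in zip(js, logs))
             / sum((j - mj) ** 2 for j in js))
    return (slope - 1) / 2

# Brownian motion (H = 0.5) as cumulative sum of Gaussian increments
rng = random.Random(7)
x, s = [], 0.0
for _ in range(2 ** 14):
    s += rng.gauss(0.0, 1.0)
    x.append(s)
h = hurst_wavelet(x)   # should land near 0.5
```

The stationarity of the detail coefficients is what makes the per-level variance a meaningful quantity to regress on, which is the property the abstract highlights.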
This study aimed to determine the possibility of culturing the genus Artemia under laboratory conditions for local culture and production. Different salinity concentrations were used, ranging from 5 to 40 g/l. The results showed that the 30 g/l concentration was the best for hatching, recording a hatching efficiency of 68,800 nauplii/g of cysts and a hatching percentage of 45.86%, while the 5 g/l concentration recorded the lowest hatching efficiency and hatching percentage, at 20,266 nauplii/g and 13.5% respectively. Investigating the effect of salinity on the survival and growth of individuals, using saline concentrations ranging from 30 to 100 g/l, revealed that the best survival percentage was 75.00% in the first week at 70 g/l, whilst the best rates of
The growing use of tele
This paper presents a new secret diffusion scheme called Round Key Permutation (RKP), based on a nonlinear, dynamic and pseudorandom permutation, for encrypting images by block. Images are considered particular data because of their size and their information, which is two-dimensional in nature and characterized by high redundancy and strong correlation. Firstly, the permutation table is calculated according to the master key and sub-keys. Secondly, the pixels of each block to be encrypted are scrambled according to the permutation table. Thereafter, the AES encryption algorithm is used in the proposed cryptosystem, replacing the linear permutation of the ShiftRows step with the nonlinear and secret pe
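The abstract does not give the exact derivation of the RKP permutation table, so the key-to-permutation step below (a SHA-256 digest of the key seeding a Fisher-Yates shuffle) is an illustrative stand-in, not the paper's construction. It shows the two properties any such table needs: the same key always yields the same permutation, and the permutation is invertible for decryption.

```python
import hashlib, random

def key_permutation(key: bytes, n: int = 16):
    # derive a deterministic pseudorandom permutation of n block
    # positions from the key (hypothetical derivation, see lead-in)
    seed = int.from_bytes(hashlib.sha256(key).digest(), "big")
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)          # Fisher-Yates shuffle
    return perm

def apply_perm(block, perm):
    # position i of the output takes element perm[i] of the input
    return [block[p] for p in perm]

def invert_perm(perm):
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return inv

perm = key_permutation(b"master-key")       # hypothetical key
block = list(range(16))
scrambled = apply_perm(block, perm)
assert apply_perm(scrambled, invert_perm(perm)) == block
```

In the described cryptosystem such a secret, key-dependent table would replace the fixed, publicly known ShiftRows permutation inside AES, which is what makes the diffusion step nonlinear and secret.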
