Background: This study was conducted to evaluate hard palate bone density and thickness during the third and fourth decades of life and their relationships with body mass index (BMI) and body composition, to allow more accurate mini-implant placement. Materials and methods: Computed tomography (CT) images were obtained for 60 patients (30 males and 30 females) aged 20-39 years. Hard palate bone density and thickness were measured at 20 sites, at the intersections of five anteroposterior and four mediolateral reference lines spaced at 6 mm and 3 mm intervals from the incisive foramen and the mid-palatal suture, respectively. A diagnostic scale operating on the bioelectrical impedance analysis principle was used to measure body weight; percentages of body fat, water, and muscle; bone mass; and basal and active metabolic rates. Results: No significant difference was found in overall bone density and thickness of the hard palate between the third and fourth decades. Gender should be considered with regard to bone thickness. Cortical bone density and thickness tended to decrease posteriorly, while cancellous bone density tended to increase posteriorly. In the mediolateral direction, no specific patterns were observed. Cortical bone density increased with increasing BMI. The relationships of bone density and thickness with most scale measurements were not significant. Conclusion: With regard to bone density, mini-implants for orthodontic anchorage can be effectively placed in most areas of the hard palate. With regard to bone thickness, however, care should be taken when planning their placement. A new classification for hard palate bone thickness has been developed.
The goal of this research is to develop a numerical model that can simulate the sedimentation process under two scenarios: first, with the flocculation unit in service, and second, with the flocculation unit out of service. The general equations of flow and sediment transport were solved using the finite difference method, and the solution was coded in Matlab. The removal efficiencies predicted by the coded model were very close to those of the operational model for each particle-size dataset, with a difference of +3.01%, indicating that the model can be used to predict the removal efficiency of a rectangular sedimentation basin. The study also revealed
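The abstract above does not give the paper's discretization, but the quantity it validates, removal efficiency of a settling basin, can be illustrated with a minimal ideal-basin sketch. Everything below (function names, Stokes settling, default fluid and particle properties) is an illustrative assumption, not the paper's model:

```python
def stokes_settling_velocity(d, rho_p=2650.0, rho_w=998.0, mu=1.0e-3, g=9.81):
    """Stokes settling velocity (m/s) for a particle of diameter d (m).

    Assumed defaults: sand density 2650 kg/m^3, water at ~20 C.
    """
    return g * (rho_p - rho_w) * d ** 2 / (18.0 * mu)

def removal_efficiency(d, Q, A):
    """Ideal-basin removal fraction: settling velocity divided by the
    surface overflow rate Q/A (m/s), capped at 1 (complete removal)."""
    vs = stokes_settling_velocity(d)
    return min(1.0, vs / (Q / A))
```

A full finite-difference model would march the sediment concentration field through the basin; this closed-form ratio is only the benchmark such models are usually checked against.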
The biosorption of Pb(II), Cd(II), and Hg(II) from simulated aqueous solutions using baker’s yeast biomass was investigated. Batch experiments were carried out to obtain the equilibrium isotherm data for each component system (single, binary, and ternary) and the adsorption rate constants. Pseudo-first-order and pseudo-second-order kinetic models were applied to the adsorption data to estimate the rate constant for each solute; the results showed that the Cd(II), Pb(II), and Hg(II) uptake processes followed the pseudo-second-order rate model, with R² values of 0.963, 0.979, and 0.960, respectively. The equilibrium isotherm data were fitted with five theoretical models. The Langmuir model provided the best fit to the experimental results, with R² values of 0.992, 0
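The pseudo-second-order fit reported above is conventionally done on the linearized form t/q = 1/(k2·qe²) + t/qe, where a straight-line fit of t/q against t yields qe from the slope and k2 from the intercept. A minimal sketch of that standard procedure (the function name and synthetic data are illustrative, not the paper's):

```python
import numpy as np

def fit_pseudo_second_order(t, q):
    """Fit the linearized pseudo-second-order model t/q = 1/(k2*qe^2) + t/qe.

    t: contact times, q: uptake at each time. Returns (qe, k2, r_squared).
    """
    y = t / q
    slope, intercept = np.polyfit(t, y, 1)
    qe = 1.0 / slope                 # slope = 1/qe
    k2 = slope ** 2 / intercept      # intercept = 1/(k2*qe^2)
    pred = intercept + slope * t
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return qe, k2, 1.0 - ss_res / ss_tot
```

The returned R² corresponds to the goodness-of-fit values (0.963, 0.979, 0.960) quoted in the abstract.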
Digital audio requires transmitting large amounts of audio information through the most common communication systems, which in turn poses challenges for both storage and archiving. In this paper, an efficient audio compression scheme is proposed. It depends on a combined transform coding scheme consisting of: i) a bi-orthogonal (tap 9/7) wavelet transform to decompose the audio signal into low and multiple high sub-bands; ii) a DCT applied to the produced sub-bands to de-correlate the signal; iii) progressive hierarchical quantization of the combined transform output, followed by traditional run-length encoding (RLE); and iv) finally, LZW coding to generate the output bitstream.
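The transform-then-entropy-code pipeline can be sketched in miniature. Note the substitutions: a one-level Haar split stands in for the paper's bi-orthogonal 9/7 filter pair, uniform quantization stands in for progressive hierarchical quantization, and the LZW stage is omitted; all names and parameter values are illustrative:

```python
import numpy as np

def haar_split(x):
    """One analysis level: low band (scaled sums) and high band (scaled
    differences). Haar stands in here for the paper's 9/7 wavelet."""
    pairs = x.reshape(-1, 2)
    low = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return low, high

def dct_ii(x):
    """Orthonormal DCT-II, used to de-correlate each sub-band."""
    n = len(x)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    return scale * (basis @ x)

def run_length_encode(values):
    """Collapse consecutive repeats into [value, count] runs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs
```

For a sub-band `b`, the quantize-then-RLE step is `run_length_encode(list(np.round(dct_ii(b) / step).astype(int)))`; coarser `step` values lengthen the zero runs and shrink the bitstream.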
The aim of this research is to compare traditional and modern methods for obtaining the optimal solution, using dynamic programming and intelligent algorithms to solve project management problems.
It shows possible ways in which these problems can be addressed, drawing on a schedule of interrelated and sequential activities. It clarifies the relationships between activities so as to determine the beginning and end of each activity, the total duration and cost of the project, and the time consumed by each activity, and it identifies the objectives the project seeks through planning, implementation, and monitoring to stay within the assessed budget.
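Determining each activity's start and finish from its predecessors is the forward pass of standard critical-path scheduling. A minimal sketch, not the paper's algorithm (activity names, the dict layout, and the `schedule` helper are all illustrative assumptions):

```python
def schedule(activities):
    """Forward pass over an activity network.

    activities: name -> (duration, [predecessor names]).
    Returns name -> (earliest_start, earliest_finish); the project
    duration is the maximum earliest finish.
    """
    memo = {}

    def earliest_finish(name):
        if name not in memo:
            duration, preds = activities[name]
            # an activity starts once all its predecessors have finished
            start = max((earliest_finish(p) for p in preds), default=0)
            memo[name] = start + duration
        return memo[name]

    return {n: (earliest_finish(n) - activities[n][0], earliest_finish(n))
            for n in activities}
```

For example, with A(3) preceding B(2) and C(4), both preceding D(1), D starts at time 7 and the project takes 8 time units.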
In this paper we present a method for analyzing five types of wavelets, across fifteen wavelet families, for eighteen different EMG signals. A comparative study is also given to show the performance of the various families after refining the results with a back-propagation neural network. This will help researchers with the first step of EMG analysis. A large collection of results (more than 100 sets) is presented and then classified and discussed to reach the final conclusions.
The problem of the high peak-to-average power ratio (PAPR) in OFDM signals is investigated, with a brief presentation of the various methods used to reduce the PAPR and special attention to the clipping method. An alternative clipping approach is presented in which clipping is performed right after the IFFT stage, unlike conventional clipping, which is performed at the power amplifier stage and causes undesirable out-of-band spectral growth. In the proposed method, individual samples are clipped rather than the continuous waveform, so spectral distortion is avoided. Coding is required to correct the errors introduced by the clipping, and the overall system is tested with two types of modulation: QPSK as a constant amplitude modul
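Post-IFFT sample clipping can be sketched as follows: each time-domain sample whose magnitude exceeds a threshold (set by a clipping ratio relative to the RMS level) is scaled back to the threshold while keeping its phase. This is a generic illustration of the technique, not the paper's system; the clipping-ratio value and function names are assumptions:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def clip_samples(x, cr_db=4.0):
    """Clip IFFT output samples to cr_db above the RMS amplitude,
    preserving each sample's phase."""
    threshold = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (cr_db / 20.0)
    mag = np.maximum(np.abs(x), 1e-12)   # guard against divide-by-zero
    return np.where(mag > threshold, x * (threshold / mag), x)
```

Because only discrete samples are altered (no continuous-wave limiter in the analog path), the operation stays inside the sampled signal band, which is the distortion argument the abstract makes; the clipping itself still introduces in-band errors that the coding stage must correct.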
In this study, a fast block matching search algorithm based on block descriptors and multilevel block filtering is introduced. The descriptors used are the mean and a set of centralized low-order moments. Hierarchical filtering and the MAE similarity measure were adopted to nominate the best similar blocks lying within the pool of neighboring blocks. Following block nomination, the similarity of the mean and moments is used to classify the nominated blocks into one of three sub-pools, each representing a certain nomination priority level (i.e., most, less, and least likely). The main reason for introducing the nomination and classification steps is a significant reduction in the number of matching instances for the pixels belonging to the c
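The two-stage idea, cheap descriptor comparisons first, expensive pixel-wise MAE only on the survivors, can be shown in a simplified single-pool sketch (the paper uses three priority sub-pools and multilevel filtering; the descriptor set, tolerance, and function names below are illustrative assumptions):

```python
import numpy as np

def block_descriptors(block):
    """Mean plus two centralized low-order moments of a block."""
    m = block.mean()
    c = block - m
    return np.array([m, np.mean(c ** 2), np.mean(np.abs(c) ** 3)])

def mae(a, b):
    """Mean absolute error between two equally sized blocks."""
    return np.mean(np.abs(a - b))

def best_match(target, candidates, desc_tol=5.0):
    """Nominate candidates whose descriptors are within desc_tol of the
    target's, then pick the best by MAE; fall back to a full search if
    the descriptor filter rejects everything."""
    td = block_descriptors(target)
    pool = [c for c in candidates
            if np.max(np.abs(block_descriptors(c) - td)) < desc_tol]
    if not pool:
        pool = candidates
    return min(pool, key=lambda c: mae(target, c))
```

The saving comes from the filter: descriptors cost a few operations per block, so most candidates are rejected before any per-pixel MAE is computed.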