Image compression is a type of data compression applied to digital images to reduce the cost of their storage and/or transmission. Image compression algorithms can exploit visual sensitivity and the statistical properties of image data to deliver better results than the generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 × 16, 32 × 32, or 64 × 64 pixels. The blocks are first converted into a string and then encoded using a lossless entropy-coding algorithm known as arithmetic coding. Frequently occurring pixel values are coded in fewer bits than rarely occurring ones by mapping them to sub-intervals of the interval [0, 1). Finally, the stream of compressed blocks is reassembled for decompression (image restoration). The results showed a compression gain of 10–12% and lower time consumption when applying this coding to each block rather than to the entire image. The second approach, based on the YCbCr colour model, was used to improve the compression ratio. Images were decomposed into four sub-bands (low-low, high-low, low-high, and high-high) using the discrete wavelet transform. Then, the low-low sub-band was transformed into frequency components (low and high) via a further discrete wavelet transform. Next, these components were quantized using scalar quantization and scanned in zigzag order. The compression ratios ranged from 15.1 to 27.5 for magnetic resonance images, with varying peak signal-to-noise ratio (PSNR) and mean square error (MSE); 25 to 43 for X-ray images; 32 to 46 for computed tomography scans; and 19 to 36 for magnetic resonance brain images. The second approach outperformed the first in terms of compression ratio, PSNR, and MSE.
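A minimal sketch of the second approach's transform stage, assuming the PyWavelets and NumPy libraries and an illustrative Haar wavelet and quantization step (the abstract does not specify the exact wavelet or step size); decompression would reverse the scan, multiply by the step, and apply the inverse transform (pywt.idwt2):

```python
import numpy as np
import pywt

def dwt_quantize_zigzag(image, wavelet="haar", q_step=16):
    """Decompose an image into sub-bands, quantize, and zigzag-scan the LL band."""
    # First-level DWT: split into the low-low (LL) and detail sub-bands.
    ll, (lh, hl, hh) = pywt.dwt2(image.astype(float), wavelet)
    # Second-level DWT on the LL band, as described in the abstract.
    ll2, (lh2, hl2, hh2) = pywt.dwt2(ll, wavelet)
    # Scalar quantization: divide by a fixed step and round.
    quantized = np.round(ll2 / q_step).astype(int)
    # Zigzag scan: traverse anti-diagonals, alternating direction.
    h, w = quantized.shape
    zigzag = []
    for s in range(h + w - 1):
        diag = [quantized[i, s - i] for i in range(max(0, s - w + 1), min(s + 1, h))]
        zigzag.extend(diag if s % 2 else diag[::-1])
    return np.array(zigzag)
```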
Ischemic stroke is a significant cause of morbidity and mortality worldwide. Autophagy, a process of intracellular degradation, has been shown to play a crucial role in the pathogenesis of ischemic stroke. Long non-coding RNAs (lncRNAs) have emerged as essential regulators of autophagy in various diseases, including ischemic stroke. Recent studies have identified several lncRNAs that modulate autophagy in ischemic stroke, including MALAT1, MIAT, SNHG12, H19, AC136007.2, C2dat2, MEG3, KCNQ1OT1, SNHG3, and RMRP. These lncRNAs regulate autophagy by interacting with key proteins involved in the autophagic process, such as Beclin-1, ATG7, and LC3. Understanding the role of lncRNAs in regulating autophagy may provide new insights into the pathogenesis and treatment of ischemic stroke.
In this paper, we focus on one of the recent applications of PU-algebras in coding theory, namely the construction of codes by soft-set PU-valued functions. First, we introduce the notion of a soft-set PU-valued function on a PU-algebra and investigate some of its related properties. Moreover, the codes generated by a soft-set PU-valued function are constructed, and several examples are given. Furthermore, an example, with graphs, of a binary block code constructed from a soft-set PU-valued function is presented.
Most of the Weibull models studied in the literature are appropriate for modelling a continuous random variable, that is, one that takes real values over the interval [0, ∞). A newer line of study in statistics concerns variables that take on discrete values. The idea was first introduced by Nakagawa and Osaki, who proposed a discrete Weibull distribution with two parameters q and β, where 0 < q < 1 and β > 0. Weibull models for discrete random variables assume only non-negative integer values. Such models are useful for modelling, for example, the number of cycles to failure when components are subjected to cyclical loading. Discrete Weibull models can be obtained by discretizing the continuous Weibull distribution.
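For the Nakagawa–Osaki distribution, the survival function is S(x) = q^(x^β), which gives the probability mass function P(X = x) = q^(x^β) − q^((x+1)^β) for x = 0, 1, 2, …. A minimal NumPy sketch, with illustrative parameter values not taken from any particular study:

```python
import numpy as np

def discrete_weibull_pmf(x, q, beta):
    """Nakagawa-Osaki pmf: P(X = x) = q**(x**beta) - q**((x+1)**beta)."""
    x = np.asarray(x, dtype=float)
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

# Illustrative parameters: q = 0.9, beta = 1.5.
x = np.arange(0, 20)
pmf = discrete_weibull_pmf(x, q=0.9, beta=1.5)
print(pmf.sum())        # telescopes toward 1 as the support grows
print((x * pmf).sum())  # truncated estimate of the mean cycles to failure
```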
This article aims to estimate the partially linear model using two methods, the wavelet and kernel smoothers. Simulation experiments are used to study small-sample behaviour under different functions, sample sizes, and variances. The results show that the wavelet smoother is the best according to the mean average squared error criterion in all cases considered.
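As an illustration of the kernel-smoothing component, here is a minimal Nadaraya–Watson estimator of the nonparametric part g(t) in the partially linear model y = Xβ + g(t) + ε, assuming a Gaussian kernel and an illustrative bandwidth (the article's exact estimator and tuning are not specified):

```python
import numpy as np

def nadaraya_watson(t_train, y_train, t_eval, bandwidth=0.1):
    """Nadaraya-Watson kernel smoother with a Gaussian kernel."""
    # Pairwise scaled distances between evaluation and training points.
    u = (t_eval[:, None] - t_train[None, :]) / bandwidth
    weights = np.exp(-0.5 * u ** 2)              # Gaussian kernel weights
    weights /= weights.sum(axis=1, keepdims=True)  # normalize rows
    return weights @ y_train

# Toy data: y = g(t) + noise, with g(t) = sin(2*pi*t).
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, 200)
grid = np.linspace(0, 1, 50)
g_hat = nadaraya_watson(t, y, grid)
```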
DNA methylation is one of the main epigenetic mechanisms in cancer development and progression. Aberrant DNA methylation of CpG islands within promoter regions contributes to the dysregulation of various tumor suppressors and oncogenes; this leads to the appearance of malignant features, including rapid proliferation, metastasis, stemness, and drug resistance. The discovery of two important protein families, the DNA methyltransferases (DNMTs) and the ten-eleven translocation (TET) dioxygenases, which respectively add and remove DNA methylation and are responsible for the deregulated transcription of genes that play pivotal roles in tumorigenesis, has led to a further understanding of DNA methylation-related pathways. How these enzymes target specific genes in different malignancies, however, remains an open question.
Image classification is the process of finding common features in images from various classes and applying them to categorize and label the images. The key obstacles in image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data. A cornerstone of the field is extracting convolutional features from deep learning models and training machine learning classifiers on them. This study proposes a new "hybrid learning" approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven machine learning classifiers.
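A minimal sketch of this hybrid pipeline, assuming TensorFlow/Keras for the VGG-16 feature extractor and scikit-learn for the downstream classifier (an SVM here, as an illustrative stand-in for the study's seven classifiers; the data placeholders are hypothetical):

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.svm import SVC

# VGG-16 without its classification head serves as a fixed feature extractor.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(images.copy()), verbose=0)

# Hypothetical placeholders; substitute a real labeled dataset.
train_images = np.random.rand(16, 224, 224, 3) * 255
train_labels = np.random.randint(0, 2, 16)

features = extract_features(train_images)  # shape (n, 512) with average pooling
clf = SVC(kernel="rbf").fit(features, train_labels)
```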
The research offers an analytical comparison between new and traditional media in light of the changes imposed by technology, which has altered a number of common concepts in the field of communication and media. The researcher seeks an analytical explanation of the relationship between technology, as an influential factor in building the information society that underpins new media, and the technical output that has influenced the forms of social relations and linguistic construction as a tool of human communication.
The concept of employee voice has received a great deal of attention from researchers in organizational behaviour and human resource management, especially in the last three decades of the twentieth century. This importance has deep roots in the history of its discussion, and employee voice has become a behavioural variable given great attention in managerial and organizational studies and a basic pillar of organizations' success and excellence in retaining their human resources. The research explains the concept and benefits of attending to the voice of employees in business organizations, the theories that interpret employee voice, and the motivations behind it.
This paper critically examines the studies that investigated social network sites in the Arab region, asking whether they made a practical addition to the field of information and communication sciences. The study tried to clear up the ambiguity created by the variety of names for these sites, as well as to identify the most important theoretical and methodological approaches used by these studies, highlighting their scientific limitations. The research discussed the most important concepts used by these studies, such as Interactivity, Citizen Journalism, Public Sphere, and Social Capital, and showed the problems of using them, because each concept arises from a specific view of these websites. The importation of these concepts from one cultural and social context into an Arab one was also discussed.