A resume is the first impression a candidate makes on a potential employer, so its importance should never be underestimated. Selecting the right candidates for a position within a company can be a daunting task for recruiters who must review hundreds of resumes. To reduce this time and effort, NLTK and Natural Language Processing (NLP) techniques can be used to extract the essential data from a resume. NLTK is a free, open-source, community-driven project and the leading platform for building Python programs that work with human language data. To select the best resume according to the company's requirements, an algorithm such as K-Nearest Neighbors (KNN) is used. Because a resume must stand out among hundreds to be selected, our work also focuses on creating an automated system that recommends suitable skills and courses to candidates, using NLP to analyze writing style (linguistic fingerprints) and to measure style and word frequency in the submitted resume. Through semantic search over individual resumes, forensic experts can query the large semantic datasets held by companies and institutions, which facilitates government forensic work on official institutional databases. With the growth of global cybercrime and the increase in applicants seeking work with multilingual data, NLP makes such analysis easier. Through the close relationship between NLP and digital forensics, NLP techniques are increasingly used to enhance investigations involving digital evidence and to support work on open-source data by analyzing massive amounts of public information.
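The screening pipeline described above can be illustrated with a minimal sketch: resume text is tokenized and cleaned with NLTK, converted to TF-IDF vectors, and a KNN classifier assigns each resume to a job category. The sample resumes, category labels, and scikit-learn usage here are illustrative assumptions, not the authors' dataset or implementation.

```python
# Minimal sketch (assumed toy data and scikit-learn TF-IDF + KNN; not the authors' actual pipeline).
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import wordpunct_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

nltk.download("stopwords", quiet=True)

# Hypothetical labeled resumes: raw text -> job category
resumes = [
    "Experienced Python developer with NLP and machine learning background",
    "Registered nurse with five years of clinical and patient care experience",
    "Data scientist skilled in statistics, pandas, and deep learning",
]
labels = ["IT", "Healthcare", "IT"]

def preprocess(text):
    """Tokenize, lowercase, and drop English stopwords with NLTK."""
    stops = set(stopwords.words("english"))
    tokens = wordpunct_tokenize(text.lower())
    return " ".join(t for t in tokens if t.isalpha() and t not in stops)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform([preprocess(r) for r in resumes])

knn = KNeighborsClassifier(n_neighbors=1)   # k = 1 because the toy corpus is tiny
knn.fit(X, labels)

new_resume = "Software engineer familiar with Python, NLTK and text mining"
print(knn.predict(vectorizer.transform([preprocess(new_resume)])))  # expected: ['IT']
```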
Geophysical data interpretation is crucial for characterizing subsurface structure. The Bouguer gravity map of the W-NW region of Iraq serves as the basis for the current geophysical study. The Bouguer gravity data were processed using the Power Spectrum Analysis (PSA) method. Four depth slices were obtained from the PSA process, at depths of 390 m, 1300 m, 3040 m, and 12600 m. The gravity anomaly depth maps show that shallow-depth anomalies are mainly related to the sedimentary cover layers and structures, while the gravity anomaly of the deeper 12600 m slice is more representative of the basement rocks and mantle uplift. The 2D modeling technique was used for
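As a rough illustration of how a source depth can be estimated from a radially averaged power spectrum, the following hedged sketch fits a straight line to a segment of log power versus wavenumber and converts the slope to an apparent depth, assuming the common convention depth = -slope / (4π) for wavenumber in cycles per unit length. The synthetic spectrum and segment limits are illustrative assumptions, not the authors' data or workflow.

```python
# Hedged sketch: depth estimate from the slope of a radially averaged power spectrum.
# Synthetic spectrum and segment limits are illustrative assumptions.
import numpy as np

# Wavenumber k in cycles/km and a synthetic log-power decaying as -4*pi*h*k (true depth h = 3 km)
k = np.linspace(0.01, 0.5, 100)
true_depth_km = 3.0
ln_power = -4.0 * np.pi * true_depth_km * k + np.random.normal(0, 0.05, k.size)

# Fit a straight line over a chosen low-wavenumber segment (deeper sources)
segment = (k >= 0.02) & (k <= 0.2)
slope, intercept = np.polyfit(k[segment], ln_power[segment], 1)

# With k in cycles/km, apparent source depth h = -slope / (4*pi)
depth_km = -slope / (4.0 * np.pi)
print(f"Estimated depth ~ {depth_km:.2f} km")  # close to 3 km for this synthetic case
```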
This paper presents a fast and robust approach to English text encryption and decryption based on the Pascal matrix. The technique encrypts Arabic or English text (or both) and shows the result of applying the method to plain text (the original message): the intelligible plain text is transformed into unintelligible ciphertext in order to secure the information from unauthorized access and theft. An encryption scheme usually uses a pseudo-random encryption key generated by an algorithm; here, all of this is done using the Pascal matrix. Encryption and decryption are implemented in MATLAB, with Notepad++ used to write the input text.
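A small sketch of the general idea is given below: character codes are grouped into blocks and multiplied by a lower-triangular Pascal matrix, whose exact integer inverse allows decryption. The block size, padding, and character encoding are illustrative assumptions, and the paper's precise MATLAB construction may differ.

```python
# Hedged sketch of Pascal-matrix text encryption (illustrative scheme, not the paper's exact method).
import numpy as np
from scipy.linalg import pascal

def encrypt(message, n=4):
    """Encode characters as numbers, pad to blocks of n, multiply each block by the Pascal matrix."""
    P = pascal(n, kind="lower").astype(np.int64)      # lower-triangular Pascal matrix (integer inverse)
    codes = [ord(c) for c in message]
    codes += [0] * (-len(codes) % n)                  # zero-pad to a multiple of n
    blocks = np.array(codes, dtype=np.int64).reshape(-1, n)
    return blocks @ P.T                               # cipher blocks (P @ block for each block)

def decrypt(cipher_blocks, n=4):
    P_inv = np.linalg.inv(pascal(n, kind="lower").astype(np.int64))
    blocks = np.rint(cipher_blocks @ P_inv.T).astype(int)
    return "".join(chr(v) for v in blocks.ravel() if v != 0)   # drop zero padding

cipher = encrypt("Hello world")
print(decrypt(cipher))  # "Hello world"
```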
Recently, the internet has made it easy for users to transmit digital media. In spite of this convenience, it may lead to several threats concerning the confidentiality of transferred media contents, such as media authentication and integrity verification. For these reasons, data-hiding methods and cryptography are used to protect the contents of digital media. In this paper, an enhanced method of image steganography combined with visual cryptography is proposed. A secret logo (binary image) of size (128x128) is encrypted by applying (2 out of 2 share) visual cryptography on it to generate two secret shares. During the embedding process, a cover red, green, and blue (RGB) image of size (512
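A minimal sketch of (2, 2) visual cryptography for a binary logo is shown below, using a standard 1x2 pixel-expansion construction. The toy 8x8 secret, the specific subpixel patterns, and the omission of the steganographic embedding step are all illustrative assumptions rather than the paper's exact scheme.

```python
# Hedged sketch of (2, 2) visual cryptography for a binary logo (illustrative pixel-expansion scheme;
# the paper's exact share construction and embedding step are not reproduced here).
import numpy as np

rng = np.random.default_rng()

def make_shares(secret):
    """Split a binary image (0 = black, 1 = white) into two shares with 1x2 pixel expansion."""
    h, w = secret.shape
    share1 = np.zeros((h, 2 * w), dtype=np.uint8)
    share2 = np.zeros((h, 2 * w), dtype=np.uint8)
    patterns = np.array([[1, 0], [0, 1]], dtype=np.uint8)   # two complementary subpixel patterns
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]
            share1[i, 2 * j:2 * j + 2] = p
            # white pixel: identical patterns (stacking keeps one white subpixel);
            # black pixel: complementary patterns (stacking is fully black)
            share2[i, 2 * j:2 * j + 2] = p if secret[i, j] == 1 else 1 - p
    return share1, share2

def stack(share1, share2):
    """Simulate overlaying transparencies: a subpixel is white only if white in both shares."""
    return share1 & share2

secret = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)    # toy 8x8 "logo" instead of 128x128
s1, s2 = make_shares(secret)
recovered = stack(s1, s2)
# Black secret pixels reconstruct as fully black subpixel pairs.
assert all(recovered[i, 2 * j:2 * j + 2].sum() == 0
           for i in range(8) for j in range(8) if secret[i, j] == 0)
```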
In this paper, a design for a new DES block cipher is described, namely DES64X and DES128X. The goals of this design, apart from its security level, are large implementation flexibility on various operating systems as well as high performance. The high-level structure is based on the principles of DES and the Feistel scheme, and an efficient key-schedule algorithm is proposed that outputs pseudo-random sequences of subkeys. The main goal is to reach the highest possible flexibility in terms of round number, key size, and block size. A comparison of the proposed systems on 32-bit and 64-bit operating systems, using 32-bit and 64-bit Java Virtual Machines (JVM), showed that the latter has much better performance than the former.
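To illustrate the Feistel principle underlying such designs, the sketch below shows a generic Feistel network with a parameterizable number of rounds, where decryption simply reuses the encryption routine with the subkeys reversed. The toy round function and subkeys are hypothetical placeholders; the actual round function and key schedule of DES64X/DES128X are not reproduced here.

```python
# Hedged sketch of a generic Feistel structure with a parameterizable round count
# (illustrative only; DES64X/DES128X's real round function and key schedule are not given here).
def feistel_encrypt(block, subkeys, round_fn):
    """block: (left, right) integer halves; subkeys: one subkey per round."""
    left, right = block
    for k in subkeys:
        left, right = right, left ^ round_fn(right, k)
    return right, left                       # final swap, as in DES

def feistel_decrypt(block, subkeys, round_fn):
    # Same structure with reversed subkeys undoes encryption.
    return feistel_encrypt(block, list(reversed(subkeys)), round_fn)

# Toy 32-bit round function and subkeys (hypothetical placeholders)
def toy_round(half, key):
    return ((half * 0x9E3779B1) ^ key) & 0xFFFFFFFF

subkeys = [0xA5A5A5A5 ^ i for i in range(16)]      # flexible round number, e.g. 16
cipher = feistel_encrypt((0x01234567, 0x89ABCDEF), subkeys, toy_round)
plain = feistel_decrypt(cipher, subkeys, toy_round)
assert plain == (0x01234567, 0x89ABCDEF)
```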
Sara and Other Kids' Agony: Back to Innocence to Save Iraq
Humans learned to write and record information out of the need for registration and documentation, and from the very beginning searched for the most suitable material for this purpose, using many materials that differed in form, nature, and composition. The ancient Sumerians wrote on clay tablets of different shapes, and when a text was long, the tablets were numbered like the pages of a book today. This research deals with the damage that affects manuscripts and then finds ways to address it.
Background: Osteoporosis is a metabolic bone disease that affects women more than men. It is characterized by a generalized reduction in bone mineral density (BMD), leaving a fragile, weak bone that is liable to fracture. The gonial angle index (GAI) is one of the radiomorphometric indices, and it has been controversial whether it is related to bone mineral density, ageing, or neither. The aim of this study is to evaluate the role of cone beam computed tomography (CBCT) as a screening tool for the diagnosis of osteoporosis and the effect of age in females using the gonial angle index. Materials and methods: 60 females were divided into 3 groups according to age and BMD status: Group 1 (non-osteoporosis, 20-30 years), Group 2 (non-osteoporosis, 50 years and above),
Abstract:
This research aims to compare the Bayesian method and full maximum likelihood for estimating the hierarchical Poisson regression model.
The comparison was carried out by simulation using different sample sizes (n = 30, 60, 120) and different numbers of replications (r = 1000, 5000) for the experiments, with the mean square error (MSE) adopted to compare the estimation methods and then choose the best method for estimating the model. It was concluded that the hierarchical Poisson regression model estimated by full maximum likelihood with sample size (n = 30) is the best representation of the maternal mortality data, after relying on the value of the param
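The simulation design can be illustrated with a simplified sketch: data are generated repeatedly, a parameter is estimated by maximum likelihood and by a Bayesian posterior mean, and the two are compared by MSE across sample sizes and replication counts. The sketch uses a plain Poisson rate with a conjugate Gamma prior as a stand-in, not the paper's hierarchical Poisson regression model or its maternal mortality data; the true rate and prior are illustrative assumptions.

```python
# Hedged sketch of the comparison design: simulate data, estimate a Poisson parameter by
# maximum likelihood and by a Bayesian (Gamma-Poisson conjugate) posterior mean, compare MSE.
import numpy as np

rng = np.random.default_rng(0)
true_lambda = 4.0
alpha, beta = 2.0, 1.0            # assumed Gamma(alpha, beta) prior (illustrative choice)

def run(n, r):
    mle_err, bayes_err = [], []
    for _ in range(r):
        y = rng.poisson(true_lambda, size=n)
        mle = y.mean()                               # maximum likelihood estimate
        bayes = (alpha + y.sum()) / (beta + n)       # conjugate posterior mean
        mle_err.append((mle - true_lambda) ** 2)
        bayes_err.append((bayes - true_lambda) ** 2)
    return np.mean(mle_err), np.mean(bayes_err)

for n in (30, 60, 120):
    for r in (1000, 5000):
        mse_mle, mse_bayes = run(n, r)
        print(f"n={n:4d} r={r:5d}  MSE(ML)={mse_mle:.4f}  MSE(Bayes)={mse_bayes:.4f}")
```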