
    Forensics Writer Identification using Text Mining and Machine Learning

    Constant technological growth has increased the danger and severity of cyber-attacks, which have recently become unmistakable in institutions with complex Information Technology (IT) infrastructure. For instance, over the last three years, some of the most serious instances of cybercrime were observed globally, including massive data breaches, the spread of fake news, cyberbullying, crypto-jacking, and attacks on cloud computing services. Various agencies have therefore devised techniques to curb this vice and bring perpetrators, both real and perceived, to account for such serious cybersecurity issues. Consequently, Forensic Writer Identification, applied through stylometry, was introduced as one of the most effective remedies to the problem. Forensic Writer Identification is a complex forensic science technology that uses Artificial Intelligence (AI) to preserve, identify, extract, and document computer or digital evidence that can be used in a court of law, especially by investigating officers in a criminal matter, or simply for data analytics. This research's fundamental objective was to scrutinize Forensic Writer Identification technology in the authorship analysis of Twitter posts from various users globally, and to apply it to reduce the time needed to find criminals by providing the police with the most accurate methodology, as well as to compare the accuracy of different techniques. The report follows a logical literature review that surveys the vital text analysis techniques. Additionally, the research applied an agile text mining methodology to extract and analyze texts from various Twitter users. Digital searches for appropriate academic and scholarly artifacts were conducted in various online and offline databases to expedite this research.
Forensic Writer Identification for text extraction and analytics has recently attracted renewed attention, with extremely encouraging outcomes. This research presents an overall foundation and rationale for text and author identification techniques. The scope of current techniques and applications is outlined, also addressing the issue of performance evaluation. Results for various strategies are summarized, and a more in-depth illustration of two combined methodologies is presented. By combining textural and allographic features with appropriate algorithms, emerging technologies are beginning to show useful levels of performance. Nevertheless, user acceptance will play a vital role in the future of this technology. To this end, the goal of the project proposal was to develop an analytical system that automates authorship identification across various Web 2.0 technologies globally, hence addressing contemporary cybercrime issues.
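The stylometric pipeline sketched in this abstract — extracting writing-style features from short texts and matching an unknown sample against candidate authors — can be illustrated in a few lines. The snippet below is an illustrative toy, not the system built in this research: it profiles each author by character trigram frequencies and attributes an unknown tweet-length text by cosine similarity. The author names and texts are invented for the example.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Frequency profile of overlapping character n-grams (a common stylometric feature)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two frequency profiles."""
    dot = sum(p[k] * q[k] for k in set(p) & set(q))
    na = math.sqrt(sum(v * v for v in p.values()))
    nb = math.sqrt(sum(v * v for v in q.values()))
    return dot / (na * nb) if na and nb else 0.0

def attribute(unknown, candidates):
    """Return the candidate author whose profile is closest to the unknown text."""
    profile = char_ngrams(unknown)
    return max(candidates, key=lambda a: cosine(profile, char_ngrams(candidates[a])))

# Invented toy corpus: one known sample per candidate author.
authors = {
    "alice": "I really love the sunny weather, it makes me so happy every day!!!",
    "bob": "Meteorological conditions remain favourable; precipitation is unlikely.",
}
unknown = "so happy about the sunny weather every single day!!!"
print(attribute(unknown, authors))
```

Real systems add many more feature types (function words, punctuation habits, syntax) and a proper classifier, but the profile-and-compare structure is the same.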

    The Stylometric Processing of Sensory Open Source Data

    This research project’s end goal is the Lone Wolf Terrorist. The project uses an exploratory approach to the self-radicalisation problem by creating a stylistic fingerprint of a person's personality, or self, from subtle characteristics hidden in a person's writing style. It separates the identity of one person from another based on their writing style. It also separates the writings of suicide attackers from 'normal' bloggers using critical slowing down, a dynamical property used to develop early warning signs of tipping points. It identifies changes in a person's moods, or shifts from one state to another, that might indicate a tipping point for self-radicalisation. Research into authorship identity using personality is a relatively new area in the field of neurolinguistics. There are very few methods that model how an individual's cognitive functions present themselves in writing. Here, we develop a novel algorithm, RPAS, which draws on cognitive functions such as aging, sensory processing, abstract or concrete thinking through referential activity and emotional experience, and a person's internal gender for identity. We use well-known techniques such as Principal Component Analysis, Linear Discriminant Analysis, and the Vector Space Method to cluster multiple anonymously authored works. We also use a new approach, seriation with noise, to separate subtle features in individuals. We conduct time series analysis using modified variants of 1-lag autocorrelation and the coefficient of skewness, two statistical metrics that change near a tipping point, to track serious life events in an individual through cognitive linguistic markers. In our journey of discovery, we uncover secrets about the Elizabethan playwrights hidden for over 400 years. We uncover markers for depression and anxiety in modern-day writers and identify linguistic cues for Alzheimer's disease much earlier than other studies using sensory processing.
Using these techniques on the Lone Wolf, we can separate the writing style used before an attack from the individual's other writing.
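The early-warning metrics named in the abstract, 1-lag autocorrelation and the coefficient of skewness, are standard statistics computed over a sliding window of a time series; a rise in lag-1 autocorrelation is read as "critical slowing down" near a tipping point. The following is a minimal sketch of those rolling metrics on a synthetic series, not the modified variants developed in the project:

```python
import math

def lag1_autocorr(x):
    """Lag-1 autocorrelation: how strongly each value resembles the previous one."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    if var == 0:
        return 0.0
    return sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / var

def skewness(x):
    """Population coefficient of skewness (third standardized moment)."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    if sd == 0:
        return 0.0
    return sum((v - mean) ** 3 for v in x) / (n * sd ** 3)

def rolling(metric, series, window):
    """Apply a metric over every length-`window` slice of the series."""
    return [metric(series[i:i + window]) for i in range(len(series) - window + 1)]

# Synthetic series: rapid oscillation (negative autocorrelation) giving way to a
# slow monotone drift (positive autocorrelation), mimicking "slowing down".
series = [1, -1, 1, -1, 1, -1, 1, -1, 1, 2, 3, 4, 5, 6, 7, 8]
ac = rolling(lag1_autocorr, series, 8)
```

In this toy series the rolling autocorrelation climbs from negative to positive as the dynamics slow, which is exactly the drift a tipping-point early-warning analysis watches for.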

    Something borrowed: sequence alignment and the identification of similar passages in large text collections

    The following article describes a simple technique to identify lexically similar passages in large collections of text using sequence alignment algorithms. Primarily used in the field of bioinformatics to identify similar segments of DNA in genome research, sequence alignment has also been employed in many other domains, from plagiarism detection to image processing. While we have applied this approach to a wide variety of diverse text collections, we will focus our discussion here on the identification of similar passages in the famous 18th-century Encyclopédie of Denis Diderot and Jean d'Alembert. Reference works, such as encyclopedias and dictionaries, are generally expected to "reuse" or "borrow" passages from many sources, and Diderot and d'Alembert's Encyclopédie was no exception. Drawn from an immense variety of source material, both French and non-French, many, if not most, of the borrowings that occur in the Encyclopédie are not sufficiently identified (according to our standards of modern citation), or are only partially acknowledged in passing. The systematic identification of recycled passages can thus offer us a clear indication of the sources the philosophes were exploiting, as well as the extent to which the intertextual relations that accompanied its composition and subsequent reception can be explored. In the end, we hope this approach to "Encyclopedic intertextuality" using sequence alignment can broaden the discussion concerning the relationship of Enlightenment thought to previous intellectual traditions as well as its reuse in the centuries that followed.
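Sequence alignment of the kind the abstract describes can be illustrated with a basic Smith-Waterman-style local alignment over word sequences rather than DNA bases. This is a sketch under invented scoring parameters and example sentences, not the article's actual implementation:

```python
def local_align(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment over word sequences.
    Returns the best score and the (end_i, end_j) positions of the
    highest-scoring shared segment in a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # dynamic-programming score matrix
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    return best, best_pos

# Invented example: a "source" passage and a text that borrows from it.
src = "the general will is always right and tends to the public advantage".split()
enc = "however the general will is always right yet not infallible".split()
score, (i, j) = local_align(src, enc)
```

Tracing back from the best-scoring cell recovers the shared run of words ("the general will is always right" here); for large collections like the Encyclopédie, practical systems first shingle the texts into n-grams to find candidate pairs before running the alignment.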

    Language and Linguistics in a Complex World: Data, Interdisciplinarity, Transfer, and the Next Generation. ICAME41 Extended Book of Abstracts

    This is a collection of papers, work-in-progress reports, and other contributions that were part of the ICAME41 digital conference.

    The statistics of topic modelling.

    This research project aims to provide a clear and concise guide to latent Dirichlet allocation, a form of topic modelling. The aim is to help researchers who do not have a strong background in mathematics or statistics feel comfortable using topic modelling in their work. To achieve this, the thesis provides a step-by-step explanation of how topic modelling works, and a range of tools that can be used to perform a topic model analysis are also described. The first chapter explains how topic modelling, and more specifically latent Dirichlet allocation, works; it offers a very basic explanation and then provides an easy-to-follow mathematical one. The second chapter explains how to perform a topic model analysis, walking through each step from the type of dataset through to the software packages available. The third chapter provides an example topic model analysis based on the PhilPapers dataset. The final chapter provides a discussion of the highlights of each chapter and areas for further research.
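In the spirit of the step-by-step explanation this thesis aims for, the core inference procedure behind most topic-modelling software, a collapsed Gibbs sampler for latent Dirichlet allocation, can be written compactly in plain Python. This is an illustrative sketch with an invented toy corpus, not any of the software packages the thesis surveys:

```python
import random

def lda_gibbs(docs, k, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA.
    docs: list of token lists; k: number of topics.
    Returns per-document topic counts and per-word topic counts."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    nd = [[0] * k for _ in docs]        # document-topic counts
    nw = {w: [0] * k for w in vocab}    # word-topic counts
    nt = [0] * k                        # total tokens per topic
    z = []                              # current topic assignment per token
    # Initialise every token with a random topic.
    for di, doc in enumerate(docs):
        zd = []
        for w in doc:
            t = rng.randrange(k)
            zd.append(t)
            nd[di][t] += 1; nw[w][t] += 1; nt[t] += 1
        z.append(zd)
    # Resample each token's topic from its full conditional distribution.
    for _ in range(iters):
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                t = z[di][wi]
                nd[di][t] -= 1; nw[w][t] -= 1; nt[t] -= 1
                weights = [(nd[di][j] + alpha) * (nw[w][j] + beta) / (nt[j] + V * beta)
                           for j in range(k)]
                t = rng.choices(range(k), weights=weights)[0]
                z[di][wi] = t
                nd[di][t] += 1; nw[w][t] += 1; nt[t] += 1
    return nd, nw

# Invented toy corpus with two obvious themes.
docs = [
    ["cat", "meow", "cat", "purr"],
    ["meow", "purr", "cat"],
    ["code", "bug", "compile"],
    ["bug", "code", "compile", "code"],
]
nd, nw = lda_gibbs(docs, k=2)
```

With a clearly separable corpus like this, the sampler typically concentrates each document's tokens in one topic; production tools add hyperparameter tuning, convergence diagnostics, and estimates of the topic-word distributions from these counts.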