
    Artificial Intelligence Adoption in Criminal Investigations: Challenges and Opportunities for Research

    Artificial Intelligence (AI) offers the potential to transform organisational decision-making and knowledge-sharing processes that support criminal investigations. Yet there is still limited evidence-based knowledge in the literature concerning the successful use of AI for criminal investigations. This paper identifies the main areas and current dynamics of AI adoption in criminal investigations using bibliometric analysis. We synthesise existing research by identifying the key themes researchers have explored concerning AI in criminal investigations, including crime prediction and human-centred issues relating to AI use. Finally, the paper elaborates on the challenges that may influence the adoption of AI in criminal investigations by police professionals: possible laggard effects in AI adoption, implementation challenges, a lack of government oversight, and a skills gap.

    Don’t Do Evil: Implementing Artificial Intelligence in Universities

    Artificial Intelligence (AI) is changing the ways in which we experience everyday tasks, and its reach is extending into education. Promises of AI-driven personalised learning, learner agency, adaptive teaching and changes to teacher roles are increasingly becoming realistic, but the ethical considerations surrounding these, and even simpler innovations, are far from clear. Various ethical standards have been proposed for AI, though these tend to be high-level and generic and do not serve to guide educational practice. The multiple agencies concerned with AI analytics have also yet to provide a strong sense of direction. The Open University UK has established an AI working group to explore the contribution AI might make to improving student retention, success and satisfaction. With a specific emphasis on Artificial Intelligence in Education (AIEd), this paper proposes eight principles constituting an open ethical framework for implementing AI in educational settings in ways that empower students and provide transparency.

    Digitalization and New Public Management

    The emergence of well-being and quality-of-life concepts in the workplace is driving significant changes in the public sector. This shift is characterized by the adoption of new labor standards that prioritize the holistic well-being of employees. This transformative approach is often referred to as "transformational management," and it aims to enhance the involvement of various strategic actors within the organization, as noted by Jacobsen in 2017. This transformation in the public sector reflects a broader evolution in the way work is structured, leadership is exercised, and organizational processes are managed. One of the key frameworks underpinning this transformation is "New Public Management" (NPM), a set of principles and practices that seek to make public organizations more efficient, accountable, and responsive to the needs of citizens; it emphasizes results-oriented management, decentralization of authority, and a focus on customer satisfaction. Moreover, digitalization plays a pivotal role in this ongoing transformation. The integration of digital technologies and data-driven approaches into public administration processes is reshaping the way services are delivered and decisions are made, enhancing efficiency, transparency, and accessibility for both employees and citizens. Numerous studies have examined the conceptual foundations and justifications for the changes introduced by NPM and digitalization in the public sector; they provide a deeper understanding of how these frameworks can lead to improved public services and more engaged employees, and ultimately contribute to the well-being and quality of life of both the workforce and the citizens they serve. As the public sector continues to adapt to evolving demands and expectations, this research becomes increasingly valuable in shaping the future of public administration. In our article, we draw on the wealth of conceptual literature: through in-depth essays, we explore the complex nuances of the abstract ideas that shape our thinking. Keywords: Digitalization – New public management – Public administration. Classification JEL: H111. Paper type: Theoretical Research.

    Reversed Solutionism. The Two-Sided Control of Crowdwork

    Platform labour is a new phenomenon that brings new forms of labour control with it. The article looks at crowdwork and analyses the various types of control inherent in this phenomenon. It is argued that crowdwork is not controlled by technological instruments alone, such as algorithms; the method of organising labour is a complementary control element. Analytically, a technological and an organisational fix can be distinguished. Their specific characteristics are empirically investigated on the basis of crowdwork platforms covering the entire spectrum of qualifications. The result is that existing asymmetries between capital and labour are intensified by the interaction between the technological and the organisational fix.

    Entity Synonym Discovery via Multipiece Bilateral Context Matching

    Being able to automatically discover synonymous entities in an open-world setting benefits various tasks such as entity disambiguation or knowledge graph canonicalization. Existing works either utilize only entity features, or rely on structured annotations from a single piece of context in which the entity is mentioned. To leverage the diverse contexts in which entities are mentioned, in this paper we generalize the distributional hypothesis to a multi-context setting and propose a synonym discovery framework that detects entity synonyms from free-text corpora with considerations on effectiveness and robustness. As one of the key components in synonym discovery, we introduce a neural network model, SYNONYMNET, to determine whether two given entities are synonymous with each other. Instead of using entity features, SYNONYMNET makes use of multiple pieces of context in which each entity is mentioned, and compares the context-level similarity via a bilateral matching schema. Experimental results demonstrate that the proposed model is able to detect synonym sets that are not observed during training on both generic and domain-specific datasets (Wiki+Freebase, PubMed+UMLS, and MedBook+MKG), with up to a 4.16% improvement in Area Under the Curve and a 3.19% improvement in Mean Average Precision compared to the best baseline method. Comment: In IJCAI 2020 as a long paper. Code and data are available at https://github.com/czhang99/SynonymNe
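    To make the bilateral matching schema more concrete, the following is a minimal sketch of the idea assuming pre-computed context vectors; it is an illustration, not the paper's SYNONYMNET architecture, and the function names and cosine scoring are assumptions of this sketch.

    # Hedged sketch: bilateral context matching over pre-computed context
    # vectors (toy stand-ins for SYNONYMNET's learned context encodings).
    import numpy as np

    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        # Cosine similarity between two context vectors.
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

    def bilateral_match_score(ctx_a, ctx_b) -> float:
        # Match each context of entity A with its best-matching context of
        # entity B and vice versa, then average the two directions.
        sim = np.array([[cosine(a, b) for b in ctx_b] for a in ctx_a])
        a_to_b = sim.max(axis=1).mean()  # A's contexts matched into B
        b_to_a = sim.max(axis=0).mean()  # B's contexts matched into A
        return (a_to_b + b_to_a) / 2

    # Toy usage: random vectors stand in for learned context encodings.
    rng = np.random.default_rng(0)
    ctx_a = [rng.normal(size=64) for _ in range(3)]
    ctx_b = [rng.normal(size=64) for _ in range(5)]
    print(bilateral_match_score(ctx_a, ctx_b))

    A high score would indicate that the two entities tend to appear in mutually similar contexts, which is the distributional signal the framework exploits.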

    A systematic survey of online data mining technology intended for law enforcement

    As an increasing amount of crime takes on a digital aspect, law enforcement bodies must tackle an online environment generating huge volumes of data. With manual inspection becoming increasingly infeasible, law enforcement bodies are optimising online investigations through data-mining technologies. Such technologies must be well designed and rigorously grounded, yet no survey of the online data-mining literature exists that examines their techniques, applications and rigour. This article remedies this gap through a systematic mapping study describing online data-mining literature that visibly targets law enforcement applications, using evidence-based practices in survey-making to produce a replicable analysis that can be methodologically examined for deficiencies.

    A Comprehensive Evaluation of Large Language Models on Benchmark Biomedical Text Processing Tasks

    Recently, Large Language Models (LLMs) have demonstrated impressive capability to solve a wide range of tasks. However, despite their success across various tasks, no prior work has investigated their capability in the biomedical domain. To this end, this paper aims to evaluate the performance of LLMs on benchmark biomedical tasks. For this purpose, we conduct a comprehensive evaluation of 4 popular LLMs on 6 diverse biomedical tasks across 26 datasets. To the best of our knowledge, this is the first work that conducts an extensive evaluation and comparison of various LLMs in the biomedical domain. Interestingly, based on our evaluation we find that on biomedical datasets with smaller training sets, zero-shot LLMs even outperform the current state-of-the-art fine-tuned biomedical models. This suggests that pretraining on large text corpora makes LLMs quite specialized even in the biomedical domain. We also find that no single LLM outperforms the others on all tasks; the performance of different LLMs varies depending on the task. While their performance is still quite poor in comparison to the biomedical models that were fine-tuned on large training sets, our findings demonstrate that LLMs have the potential to be a valuable tool for various biomedical tasks that lack large annotated data. Comment: Extended version of the following BioNLP paper: https://aclanthology.org/2023.bionlp-1.30/ (arXiv:2306.04504). arXiv admin note: substantial text overlap with arXiv:2306.0450
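    As a concrete illustration of the zero-shot setting used in such comparisons, here is a minimal evaluation loop that scores exact label matches without any fine-tuning or in-context examples; `query_llm`, the prompt wording, and the toy data are assumptions of this sketch, not the paper's protocol.

    # Hedged sketch of a zero-shot evaluation loop. `query_llm` is a
    # hypothetical stand-in for whichever model API is actually used.
    from typing import Callable, List, Tuple

    def zero_shot_accuracy(examples: List[Tuple[str, str]],
                           labels: List[str],
                           query_llm: Callable[[str], str]) -> float:
        # One prompt per example, no fine-tuning, no demonstrations;
        # score exact (case-insensitive) label matches.
        correct = 0
        for text, gold in examples:
            prompt = ("Classify the following biomedical text.\n"
                      f"Possible labels: {', '.join(labels)}\n"
                      f"Text: {text}\nLabel:")
            pred = query_llm(prompt).strip().lower()
            correct += int(pred == gold.lower())
        return correct / len(examples)

    # Toy usage with a stub "model" that always answers "positive".
    examples = [("The drug reduced tumour size.", "positive"),
                ("No effect was observed.", "negative")]
    print(zero_shot_accuracy(examples, ["positive", "negative"],
                             lambda p: "positive"))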

    Credibility analysis of textual claims with explainable evidence

    Despite being a vast resource of valuable information, the Web has been polluted by the spread of false claims. Increasing hoaxes, fake news, and misleading information on the Web have given rise to many fact-checking websites that manually assess these doubtful claims. However, the rapid speed and large scale of misinformation spread have become the bottleneck for manual verification. This calls for credibility assessment tools that can automate the verification process. Prior works in this domain make strong assumptions about the structure of the claims and the communities where they are made. Most importantly, the black-box techniques proposed in prior works lack the ability to explain why a certain statement is deemed credible or not. To address these limitations, this dissertation proposes a general framework for automated credibility assessment that makes no assumptions about the structure or origin of the claims. Specifically, we propose a feature-based model, which automatically retrieves relevant articles about a given claim and assesses its credibility by capturing the mutual interaction between the language style of the relevant articles, their stance towards the claim, and the trustworthiness of the underlying web sources. We further enhance our credibility assessment approach with a neural-network-based model that, unlike the feature-based model, does not rely on feature engineering and external lexicons. Both models make their assessments interpretable by extracting explainable evidence from judiciously selected web sources. We use these models to develop a Web interface, CredEye, which enables users to automatically assess the credibility of a textual claim and to inspect the assessment by browsing through judiciously and automatically selected evidence snippets. In addition, we study the problem of stance classification and propose a neural-network-based model for predicting the stance of diverse user perspectives regarding controversial claims: given a controversial claim and a user comment, our stance classification model predicts whether the comment supports or opposes the claim.
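    As a concrete, deliberately simple illustration of the stance-classification task just described, the sketch below trains a TF-IDF plus logistic-regression baseline on toy claim/comment pairs; it stands in for, and is far weaker than, the neural model the dissertation proposes, and all data here is made up.

    # Hedged baseline sketch: TF-IDF features over concatenated claim/comment
    # pairs, classified with logistic regression (1 = supports, 0 = opposes).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    pairs = [
        ("Vaccines cause autism.", "Large studies have found no such link.", 0),
        ("Vaccines cause autism.", "Exactly, the link is well documented.", 1),
        ("The earth is warming.", "Temperature records clearly show this.", 1),
        ("The earth is warming.", "The data is cherry-picked nonsense.", 0),
    ]
    texts = [f"{claim} [SEP] {comment}" for claim, comment, _ in pairs]
    labels = [label for _, _, label in pairs]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    print(model.predict(["Vaccines cause autism. [SEP] That has been thoroughly debunked."]))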

    Dashboard Framework. A Tool for Threat Monitoring on the Example of Covid-19

    The aim of the study is to create a dashboard framework to monitor the spread of the Covid-19 pandemic based on quantitative and qualitative data processing. The theoretical part sets out the basic assumptions underlying the concept of the dashboard framework. The paper presents the most important functions of the dashboard framework and examples of its adoption; the limitations related to developing such a framework are also indicated. As part of the empirical research, an original model of the Dash-Cov framework was designed, enabling the acquisition and processing of quantitative and qualitative data on the spread of the SARS-CoV-2 virus. The developed model was pre-validated: over 25,000 records and around 100,000 tweets were analyzed. The adopted research methods included statistical analysis and text analysis methods, in particular sentiment analysis and topic modeling.
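    Below is a minimal sketch of the two text-analysis steps the abstract names, lexicon-based sentiment scoring and topic modeling, run on toy tweets; the lexicon, the data, and the LDA parameters are assumptions of this sketch, not the Dash-Cov implementation.

    # Hedged sketch: crude lexicon sentiment plus LDA topic modeling on toy
    # tweets (stand-ins for the framework's real Twitter data).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    tweets = [
        "hospitals overwhelmed again terrible week",
        "vaccine rollout going well feeling hopeful",
        "new lockdown rules announced for the region",
        "case numbers dropping great news for everyone",
    ]

    # 1) Lexicon sentiment: (#positive words - #negative words) per tweet.
    POS = {"well", "hopeful", "great", "dropping"}
    NEG = {"overwhelmed", "terrible", "lockdown"}
    for t in tweets:
        words = set(t.split())
        print(t, "->", len(words & POS) - len(words & NEG))

    # 2) Topic modeling: LDA over a bag-of-words matrix, top terms per topic.
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        print(f"topic {k}:", [terms[i] for i in comp.argsort()[-3:]])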