8,071 research outputs found

    Should We Collaborate with AI to Conduct Literature Reviews? Changing Epistemic Values in a Flattening World

    In this paper, we revisit the issue of collaboration with artificial intelligence (AI) to conduct literature reviews and discuss whether this should be done and how it could be done. We also call for further reflection on the epistemic values at risk when using certain types of AI tools based on machine learning or generative AI at different stages of the review process, which often requires the scope to be redefined and fundamentally follows an iterative process. Although AI tools accelerate search and screening tasks, particularly when vast amounts of literature are involved, they may compromise quality, especially with respect to transparency and explainability. Expert systems are less likely to have a negative impact in this regard. In a broader context, any AI method should preserve researchers' ability to critically select, analyze, and interpret the literature.
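    As a hedged illustration of the transparency point, the minimal sketch below (not from the paper; the sample records, the term lists, and the screen function are invented for illustration) shows a rule-based title/abstract screen of the kind an expert system might encode, in which every inclusion or exclusion decision carries explicit, auditable reasons.

```python
# Hypothetical sketch: a transparent, rule-based screen for title/abstract
# records. Every decision is returned together with the rules that produced
# it, so the screening rationale remains auditable.

records = [
    {"id": 1, "title": "Formal verification of automated driving decisions",
     "abstract": "We apply model checking to decision-making software."},
    {"id": 2, "title": "A survey of birdsong acoustics",
     "abstract": "We analyse seasonal variation in song frequency."},
]

include_terms = {"verification", "model checking", "automated driving"}
exclude_terms = {"birdsong"}

def screen(record):
    """Return (decision, reasons) for a single bibliographic record."""
    text = (record["title"] + " " + record["abstract"]).lower()
    blocks = [t for t in exclude_terms if t in text]
    if blocks:
        return "exclude", [f"matched exclusion term '{t}'" for t in blocks]
    hits = [t for t in include_terms if t in text]
    if hits:
        return "include", [f"matched inclusion term '{t}'" for t in hits]
    return "manual review", ["no rule matched"]

for rec in records:
    decision, reasons = screen(rec)
    print(rec["id"], decision, reasons)
```

    A machine-learning screener could be swapped in behind the same interface, but its decisions would no longer trace back to human-readable rules, which is the trade-off highlighted in the abstract.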

    On Provably Correct Decision-Making for Automated Driving

    The introduction of driving automation in road vehicles can potentially reduce road traffic crashes and significantly improve road safety. Automation in road vehicles also brings several other benefits, such as the possibility of providing independent mobility for people who cannot and/or should not drive. Many different hardware and software components (e.g. sensing, decision-making, actuation, and control) interact to solve the autonomous driving task. Correctness of such automated driving systems is crucial, as incorrect behaviour may have catastrophic consequences. Autonomous vehicles operate in complex and dynamic environments, which requires decision-making and planning at different levels. The aim of the decision-making components in these systems is to make safe decisions at all times. Safety verification of these systems is therefore a crucial challenge for the commercial deployment of full autonomy in vehicles. Testing for safety is expensive, impractical, and can never guarantee the absence of errors. In contrast, formal methods, which are techniques that use rigorous mathematical models to build hardware and software systems, can provide a mathematical proof of the correctness of the system. The focus of this thesis is to address some of the challenges in the safety verification of decision-making in automated driving systems. A central question here is how to establish formal verification as an efficient tool for automated driving software development. A key finding is the need for an integrated formal approach to prove correctness and to provide a complete safety argument. This thesis provides insights into how three different formal verification approaches, namely supervisory control theory, model checking, and deductive verification, differ in their application to automated driving and identifies the challenges associated with each method. It identifies the need to introduce more rigour into the requirement refinement process and presents one possible solution using a formal model-based safety analysis approach. To address challenges in the manual modelling process, a possible solution is proposed: automatically learning formal models directly from code.
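    To make the model-checking approach mentioned above concrete, here is a minimal, hypothetical sketch (not the thesis's models or tooling; the road length, obstacle position, and braking rule are invented): it exhaustively explores the reachable states of a toy one-dimensional driving scenario and returns a counterexample trace if the safety invariant "the ego vehicle never occupies the obstacle's cell" can be violated.

```python
from collections import deque

# Hypothetical illustration of explicit-state model checking: a toy
# decision automaton on a 1-D road of N cells with a static obstacle.
N = 10
OBSTACLE = 6

def successors(state):
    """Possible next states: the ego may stay or advance one cell,
    but the decision rule forces braking when directly behind the obstacle."""
    ego, obstacle = state
    if ego + 1 == obstacle:
        moves = [ego]                       # forced braking decision
    else:
        moves = [ego, min(ego + 1, N - 1)]  # stay or advance
    return [(e, obstacle) for e in moves]

def is_safe(state):
    ego, obstacle = state
    return ego != obstacle

def model_check(initial):
    """Breadth-first exploration of all reachable states; returns a
    counterexample trace if the safety invariant can be violated."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if not is_safe(state):
            return trace
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None  # invariant holds in every reachable state

counterexample = model_check((0, OBSTACLE))
print("safe" if counterexample is None else f"unsafe trace: {counterexample}")
```

    Real model checkers operate on far richer formalisms, but the core idea is the same: unlike testing, the exploration covers every reachable state, so the absence of a counterexample is a proof rather than an absence of evidence.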

    A procedure for mediation of queries to sources in disparate contexts

    Includes bibliographical references (p. 17-19). S. Bressan ... [et al.]

    A conceptual framework and exploratory model for health and social intervention acceptability among African adolescents and youth

    Intervention acceptability has become an increasingly key consideration in the development, evaluation and implementation of health and social interventions. However, to date this area of investigation has been constrained by the absence of a consistent definition of acceptability, of comprehensive conceptual frameworks disaggregating its components, and of reliable assessment measures. This paper aims to address this gap by proposing a conceptual framework and exploratory model of acceptability for a specific priority population for health and developmental interventions: adolescents and youth in Africa. We document our multi-staged approach to model development, comprising both inductive and deductive components, and both systematic and interpretative review methods. This included thematic analyses of acceptability definitions and findings from 55 studies assessing the acceptability of 60 interventions conducted with young people aged 10–24 in (mainly Southern and Eastern) Africa over a decade; a consideration of these findings in relation to Sekhon et al.'s Theoretical Framework of Acceptability (TFA); a cross-disciplinary review of acceptability definitions and models; a review of key health behavioural change models; and expert consultation with interdisciplinary researchers. Our proposed framework incorporates nine component constructs: affective attitude, intervention understanding, perceived positive effects, relevance, perceived social acceptability, burden, ethicality, perceived negative effects and self-efficacy. We discuss the rationale for the inclusion and definition of each component, highlighting key behavioural models that adopt similar constructs. We then extend this framework to develop an exploratory model of acceptability for young people that links the framework components to each other and to intervention engagement. Acceptability is represented as an emergent property of a complex, adaptive system of interacting components, which can influence user engagement directly and indirectly, and in turn be influenced by user engagement. We discuss opportunities for applying and further refining or developing these models, and their value as a point of reference for the development of acceptability assessment tools.

    Transitions: An Institutionalist Perspective

    A transition to a new technological regime is complete (and stable) when accompanied by a co-stabilization between the mode of regulation and the regime of accumulation. Key to understanding the dynamics of transitions are the factors, including institutions, that “regulate” and stabilize the regime of accumulation over time. However, the available frameworks for institutional analysis employ arbitrary and narrow definitions of institutions, focus mainly on the policy domain, and do not pay sufficient attention to the evolutionary characteristics of change as manifested in the emergence of the numerous institutions that underlie transitions. This paper consists of three parts. The first part critically reviews and synthesizes some of the main approaches to conducting institutional analysis. The second part rearticulates the concept of “transitions”, or technological regime shifts, from a systems perspective to make a case for investigating transitions as multi-level, multi-scale, and multi-system phenomena best understood in their institutional contexts. The third part proposes a framework for examining institutional change and demonstrates how this framework may be used to identify the key factors and conditions whose convergence might result in transitions in a given subsystem. Examples are drawn from the Dutch waste management subsystem to demonstrate how this framework can be operationalized.