    Structural Equation Modeling as a Route to Inform Sustainable Policies: The Case of Private Transportation

    The availability of big data allows a wide range of predictive analyses that could inform policies for promoting sustainable behaviors. While such models provide great predictive power, they fall short of explaining the underlying mechanisms of behavior. However, predictive analyses can be enhanced by complementary theory-based inferential analyses, guiding tailored policy design to focus on relevant response mechanisms. This paper illustrates the complementary value of multidisciplinary inferential models in informing large predictive models. We focus on Structural Equation Modeling, an approach suitable for a holistic examination of different pathways and hypotheses from multiple disciplines. Drawing on an interdisciplinary theoretical framework, we develop an empirically tractable model and apply it to a sample of household data from Switzerland. The model focuses on the relationships that delineate the underlying mechanisms for energy consumption behaviors in the case of private transportation. The results are discussed in light of possible contributions to policies aiming at the promotion of sustainable travel behavior, as well as data requirements for analyses relying on big data.
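The pathway logic that Structural Equation Modeling formalizes can be sketched through its simplest special case: a single-mediator path model estimated with two least-squares regressions. The variable names, coefficients, and data below are hypothetical illustrations, not taken from the paper; a full SEM with latent variables would use a dedicated package rather than this minimal sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical household data: environmental attitude influences
# car-use intention, which influences weekly car kilometres;
# attitude also has a direct effect on kilometres.
attitude = rng.normal(size=n)
intention = 0.6 * attitude + rng.normal(size=n)                  # a-path = 0.6
car_km = 0.5 * intention + 0.2 * attitude + rng.normal(size=n)   # b = 0.5, direct c' = 0.2

def ols(y, X):
    """Least-squares slope coefficients of y on the columns of X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(intention, attitude)[0]                                  # attitude -> intention
b, c_direct = ols(car_km, np.column_stack([intention, attitude]))

indirect = a * b            # effect mediated through intention
total = indirect + c_direct # total effect of attitude on car km

print(f"indirect effect: {indirect:.2f}, direct: {c_direct:.2f}, total: {total:.2f}")
```

Decomposing the total effect into direct and indirect pathways is exactly the kind of mechanism-level evidence the abstract argues purely predictive models cannot provide: a policy targeting intentions acts only on the mediated share of the effect.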

    A European research roadmap for optimizing societal impact of big data on environment and energy efficiency

    We present a roadmap to guide European research efforts towards a socially responsible big data economy that maximizes the positive impact of big data on the environment and energy efficiency. The goal of the roadmap is to allow stakeholders and the big data community to identify and meet big data challenges, and to proceed with a shared understanding of the societal impact, positive and negative externalities, and concrete problems worth investigating. It builds upon a case study focused on the impact of big data practices in the context of Earth Observation, which reveals both positive and negative effects in the areas of economy, society and ethics, legal frameworks and political issues. The roadmap identifies European technical and non-technical priorities in research and innovation to be addressed in the upcoming five years in order to deliver societal impact, develop skills and contribute to standardization.

    The Craft of Incentive Prize Design: Lessons from the Public Sector

    In the last five years, incentive prizes have transformed from an exotic open innovation tool into a proven innovation strategy for the public, private and philanthropic sectors. This report offers practical lessons for public sector leaders and their counterparts in the philanthropic and private sectors, to help them understand what types of outcomes incentive prizes can achieve, what design elements prize designers use to create these challenges, and how to make smart design choices to achieve a particular outcome. It synthesizes insights from expert interviews and analysis of more than 400 prizes.

    CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report assesses the Ethics Guidelines for Trustworthy AI presented on April 8, 2019 by the High-Level Expert Group on AI (HLEG). In particular, this report analyses and makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed at helping the public and the private sector operationalise Trustworthy AI. The list is composed of 131 items intended to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance to ensure compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report aims to contribute to that revision by addressing in particular the interplay between AI and cybersecurity.
    This evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough to allow clear, easy measurement and implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).