475 research outputs found

    Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns

    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that the human-computer interaction and human-robot interaction literature does not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies due to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation may in itself be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking. Horizon 2020 (H2020) 707404. Article / Letter to editor. Instituut voor Metajuridic

    Photonic Crystal-Based Compact High-Power Vacuum Electronic Devices

    This paper considers how the finite dimensions of a photonic crystal placed inside a resonator or waveguide affect the law of electron-beam instability. The dispersion equations describing e-beam instability in a finite photonic crystal placed inside a resonator or waveguide (a bounded photonic crystal) are obtained. Two cases are considered: the conventional case, in which diffraction is suppressed, and the case in which the direct and diffracted waves have almost equal amplitudes. The instability law is shown to yield an increased instability increment and a shorter length over which the instability develops when the amplitude of the diffracted wave is comparable with that of the direct wave, which happens in the vicinity of the π-point of the dispersion curve. The application of photonic crystals to the development of THz sources at electron-beam current densities available at modern accelerators is discussed. Comment: 14 pages, 2 eps figures, presented at Channeling 2018, submitted to Physical Review Accelerators and Beams

    The challenges of the expanded availability of genomic information: an agenda-setting paper

    Rapid advances in microarray and sequencing technologies are making genotyping and genome sequencing more affordable and readily available. There is an expectation that genomic sequencing technologies will improve personalized diagnosis and personalized drug therapy. Concurrently, the provision of direct-to-consumer genetic testing by commercial providers has enabled individuals’ direct access to their genomic data. The expanded availability of genomic data is perceived as influencing the relationship between the various parties involved, including healthcare professionals, researchers, patients, individuals, families, industry, and government. This results in a need to revisit their roles and responsibilities. In a 1-day agenda-setting meeting organized by the COST Action IS1303 “Citizen’s Health through public-private Initiatives: Public health, Market and Ethical perspectives,” participants discussed the main challenges associated with the expanded availability of genomic information, with a specific focus on public-private partnerships, and provided an outline from which to discuss the identified challenges in detail. This paper summarizes the points raised at this meeting in five main parts and highlights the key cross-cutting themes. In light of the increasing availability of genomic information, it is expected that this paper will provide timely direction for future research and policy making in this area. Funding: Deborah Mascalzoni is supported under Grant Agreement number 305444. Álvaro Mendes is supported by the FCT, the Portuguese Foundation for Science and Technology, under postdoctoral grant SFRH/BPD/88647/2012. Isabelle Budin-Ljøsne receives support from the National Research and Innovation Platform for Personalized Cancer Medicine funded by The Research Council of Norway (NFR BIOTEK2021/ES495029) and Biobank Norway funded by The Research Council of Norway (grant number 245464). Heidi Carmen Howard is partly supported by the Swedish Foundation for Humanities and Social Science (grant M13-0260:1), the Biobanking and Molecular Resource Infrastructure of Sweden (BBMRI.se) and the BBMRI-ERIC. Brígida Riso is supported by the Portuguese Foundation for Science and Technology (FCT) under PhD grant SFRH/BD/100779/2014. Heidi Beate Bentzen receives support from the project Legal Regulation of Information Processing relating to Personalized Cancer Medicine, funded by The Research Council of Norway BIOTEK2021/238999

    Ethical sharing of health data in online platforms – which values should be considered?

    Intensified and extensive data production and data storage are characteristics of contemporary western societies. Health data sharing is increasing with the growth of Information and Communication Technology (ICT) platforms devoted to the collection of personal health and genomic data. However, the sensitive and personal nature of health data poses ethical challenges when data is disclosed and shared, even for scientific research purposes. With this in mind, the Science and Values Working Group of the COST Action CHIP ME ‘Citizen's Health through public-private Initiatives: Public health, Market and Ethical perspectives’ (IS 1303) identified six core values it considered essential for the ethical sharing of health data using ICT platforms. We believe that using this ethical framework will promote respectful scientific practices in order to maintain individuals’ trust in research. We use these values to analyse five ICT platforms and explore how emerging data sharing platforms are reconfiguring the data sharing experience from a range of perspectives. We discuss which types of values, rights and responsibilities they entail and enshrine within their philosophy or outlook on what it means to share personal health information. Through this discussion we address issues in the design and development process of personal health data and patient-oriented infrastructures, as well as new forms of technologically-mediated empowerment.

    Measurements of Higgs boson production and couplings in diboson final states with the ATLAS detector at the LHC

    Measurements are presented of production properties and couplings of the recently discovered Higgs boson using the decays into boson pairs, H → γγ, H → ZZ∗ → 4l and H → WW∗ → lνlν. The results are based on the complete pp collision data sample recorded by the ATLAS experiment at the CERN Large Hadron Collider at centre-of-mass energies of √s = 7 TeV and √s = 8 TeV, corresponding to an integrated luminosity of about 25 fb⁻¹. Evidence for Higgs boson production through vector-boson fusion is reported. Results of combined fits probing Higgs boson couplings to fermions and bosons, as well as anomalous contributions to loop-induced production and decay modes, are presented. All measurements are consistent with expectations for the Standard Model Higgs boson.
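    As general background (not this paper's specific parameterization), such coupling fits are commonly phrased in terms of signal strengths and coupling modifiers, roughly μ = (σ · BR) / (σ_SM · BR_SM), with μ = 1 corresponding to the Standard Model expectation, and, for production mode i and decay channel f, σ · BR ∝ κ_i² κ_f² / κ_H², where κ_i and κ_f rescale the corresponding Standard Model couplings and κ_H² rescales the total Higgs width.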

    Standalone vertex finding in the ATLAS muon spectrometer

    A dedicated reconstruction algorithm to find decay vertices in the ATLAS muon spectrometer is presented. The algorithm searches the region just upstream of or inside the muon spectrometer volume for multi-particle vertices that originate from the decay of particles with long decay paths. The performance of the algorithm is evaluated using both a sample of simulated Higgs boson events, in which the Higgs boson decays to long-lived neutral particles that in turn decay to bb̄ final states, and pp collision data at √s = 7 TeV collected with the ATLAS detector at the LHC during 2011.
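    As a generic illustration of why such long-lived particles can produce vertices far from the interaction point (this is the standard exponential decay law, not the selection used in this analysis), the minimal sketch below evaluates the probability of a decay inside a given radial region for a purely hypothetical proper decay length and boost:

        import math

        def decay_probability(r1_m, r2_m, beta_gamma, ctau_m):
            # Probability that a particle decays at a distance between r1 and r2 (metres)
            # from its production point, given its boost beta*gamma and proper decay
            # length c*tau, assuming the exponential decay law with mean lab-frame
            # decay length lambda = beta*gamma * c*tau.
            lam = beta_gamma * ctau_m
            return math.exp(-r1_m / lam) - math.exp(-r2_m / lam)

        # Hypothetical numbers for illustration only: c*tau = 2 m, beta*gamma = 2,
        # and a detector region spanning roughly 4-10 m from the interaction point.
        print(decay_probability(4.0, 10.0, 2.0, 2.0))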

    Measurement of the top quark-pair production cross section with ATLAS in pp collisions at √s = 7 TeV

    A measurement of the production cross-section for top quark pairs (tt̄) in pp collisions at √s = 7 TeV is presented using data recorded with the ATLAS detector at the Large Hadron Collider. Events are selected in two different topologies: single lepton (electron e or muon μ) with large missing transverse energy and at least four jets, and dilepton (ee, μμ or eμ) with large missing transverse energy and at least two jets. In a data sample of 2.9 pb⁻¹, 37 candidate events are observed in the single-lepton topology and 9 events in the dilepton topology. The corresponding expected backgrounds from non-tt̄ Standard Model processes are estimated using data-driven methods and determined to be 12.2 ± 3.9 events and 2.5 ± 0.6 events, respectively. The kinematic properties of the selected events are consistent with SM tt̄ production. The inclusive top quark pair production cross-section is measured to be σ(tt̄) = 145 ± 31 +42/−27 pb, where the first uncertainty is statistical and the second systematic. The measurement agrees with perturbative QCD calculations. Comment: 30 pages plus author list (50 pages total), 9 figures, 11 tables, CERN-PH number and final journal added
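    As a back-of-the-envelope illustration of how such a counting measurement ties together observed events, background and luminosity (the paper's quoted result comes from a proper statistical combination of both channels with systematic uncertainties, not this naive relation), the sketch below applies σ ≈ (N_obs − N_bkg) / (ε · L) to the numbers quoted above; the factor ε, lumping together acceptance, selection efficiency and branching fraction, is not given in the abstract and is only inferred here for illustration:

        # Naive counting-experiment check using the numbers quoted in the abstract;
        # this is not the paper's combined fit.
        lumi_pb = 2.9                 # integrated luminosity in pb^-1
        sigma_pb = 145.0              # quoted tt-bar cross-section in pb

        channels = {
            "single lepton": {"n_obs": 37, "n_bkg": 12.2},
            "dilepton":      {"n_obs": 9,  "n_bkg": 2.5},
        }

        for name, c in channels.items():
            n_sig = c["n_obs"] - c["n_bkg"]
            # Effective efficiency x acceptance x branching fraction implied by the
            # quoted cross-section under sigma = n_sig / (eff * lumi); illustrative only.
            eff_implied = n_sig / (sigma_pb * lumi_pb)
            print(f"{name}: ~{n_sig:.1f} signal events, implied eff ~ {eff_implied:.3f}")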