1,161 research outputs found

    Exploring ethics and human rights in artificial intelligence – a Delphi study

    Ethical and human rights issues of artificial intelligence (AI) are a prominent topic of research and innovation policy as well as societal and scientific debate. It is broadly recognised that AI-related technologies have properties that can give rise to ethical and human rights concerns, such as privacy, bias and discrimination, safety and security, economic distribution, political participation, or the changing nature of warfare. Numerous ways of addressing these issues have been suggested. In light of the complexity of this discussion, we undertook a Delphi study with experts in the field to determine the most pressing issues and prioritise appropriate mitigation strategies. The results of the study demonstrate the difficulty of defining clear priorities. Our findings suggest that the debate around ethics and human rights of AI would benefit from being reframed and from a stronger emphasis on the systemic nature of AI ecosystems.
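    The abstract does not say how expert agreement was quantified across the Delphi rounds; a statistic commonly reported in Delphi studies is Kendall's coefficient of concordance W (W = 1 for identical rankings, W near 0 for no agreement). A minimal Python sketch with hypothetical expert rankings:

    ```python
    import numpy as np

    def kendalls_w(ranks):
        """Kendall's W for an (m, n) array: m experts each ranking n items 1..n, no ties."""
        m, n = ranks.shape
        rank_sums = ranks.sum(axis=0)
        s = np.sum((rank_sums - rank_sums.mean()) ** 2)  # spread of the per-item rank sums
        return 12.0 * s / (m ** 2 * (n ** 3 - n))

    # Hypothetical data: three experts ranking five AI ethics issues.
    experts = np.array([[1, 2, 3, 4, 5],
                        [2, 1, 3, 5, 4],
                        [1, 3, 2, 4, 5]])
    print(f"W = {kendalls_w(experts):.2f}")  # ~0.84: fairly strong consensus
    ```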

    The Recursive Record Semantics of Objects Revisited

    In a call-by-value language, representing objects as recursive records requires an unsafe fixpoint. We design, for a core language including extensible records, a type system that rules out unsafe recursion and still supports the reconstruction of a principal type. We illustrate the expressive power of this language with respect to object-oriented programming by introducing a sub-language for «mixin-based» programming.
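    To make the encoding concrete, here is an untyped Python sketch of objects as recursive records tied together by an explicit fixpoint, with a mixin that extends the record; the paper's actual contribution, a type system that statically rejects unsafe uses of this fixpoint, has no counterpart in Python.

    ```python
    def fix(pre_object):
        """Tie the recursive knot: give the pre-object access to the finished object."""
        self = {}
        self.update(pre_object(self))  # safe only because methods delay their use of `self`
        return self

    def point(x):
        # A "pre-object": a function from the future object to a record of methods.
        def pre(self):
            return {"x": lambda: x,
                    "show": lambda: f"point at {self['x']()}"}
        return pre

    def color_mixin(c):
        # A mixin transforms one pre-object into another by extending/overriding fields.
        def wrap(pre):
            def pre2(self):
                rec = pre(self)
                rec["color"] = lambda: c
                rec["show"] = lambda: f"{c} point at {self['x']()}"
                return rec
            return pre2
        return wrap

    p = fix(color_mixin("red")(point(3)))
    print(p["show"]())  # -> red point at 3
    ```

    The fixpoint here is exactly the "unsafe" one the abstract mentions: a method that dereferenced `self` eagerly during construction would read an incomplete record, which is the class of errors the paper's type system rules out.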

    Measurement of the residual energy of muons in the Gran Sasso underground Laboratories

    The MACRO detector was located in Hall B of the Gran Sasso underground Laboratories under an average rock overburden of 3700 hg/cm^2. A transition radiation detector composed of three identical modules, covering a total horizontal area of 36 m^2, was installed inside the empty upper part of the detector in order to measure the residual energy of muons. This paper presents the measurement of the residual energy of single and double muons crossing the apparatus. Our data show that double muons are more energetic than single ones. This measurement is performed over a standard rock depth range from 3000 to 6500 hg/cm^2.
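    For orientation on the quantity being measured: in the continuous-loss approximation ⟨dE/dX⟩ = a + bE, the residual energy after a slant depth X and the minimum surface energy needed to reach X have closed forms. The sketch below uses assumed textbook standard-rock values for a and b, not values from the paper (1 hg/cm^2 = 100 g/cm^2).

    ```python
    import math

    # Assumed standard-rock energy-loss parameters (illustrative, not from the paper):
    a = 0.2      # GeV per hg/cm^2, ionisation term
    b = 4.0e-4   # 1/(hg/cm^2), radiative term
    eps = a / b  # muon critical energy, ~500 GeV with these values

    def residual_energy(E0, X):
        """Mean residual energy after X hg/cm^2 of standard rock, for <dE/dX> = a + b*E."""
        return (E0 + eps) * math.exp(-b * X) - eps

    def min_surface_energy(X):
        """Surface energy below which a muon ranges out before reaching depth X."""
        return eps * math.expm1(b * X)

    for X in (3000.0, 3700.0, 6500.0):  # the depth range quoted in the abstract
        print(f"X = {X:6.0f} hg/cm^2: E_min ~ {min_surface_energy(X) / 1000:.1f} TeV")
    ```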

    A systematic review of artificial intelligence impact assessments

    Artificial intelligence (AI) is producing highly beneficial impacts in many domains, from transport to healthcare, from energy distribution to marketing, but it also raises concerns about undesirable ethical and social consequences. AI impact assessments (AI-IAs) are a way of identifying positive and negative impacts early on to safeguard AI's benefits and avoid its downsides. This article describes the first systematic review of these AI-IAs. Working with a population of 181 documents, the authors identified 38 actual AI-IAs and subjected them to a rigorous qualitative analysis with regard to their purpose, scope, organisational context, expected issues, timeframe, process and methods, transparency, and challenges. The review demonstrates some convergence between AI-IAs. It also shows that the field is not yet at the point of full agreement on content, structure, and implementation. The article suggests that AI-IAs are best understood as means to stimulate reflection and discussion concerning the social and ethical consequences of AI ecosystems. Based on the analysis of existing AI-IAs, the authors describe a baseline process for implementing AI-IAs that can be adopted by AI developers and vendors and used as a critical yardstick by regulators and external observers to evaluate organisations' approaches to AI.

    An imperative object calculus


    Time-integrated luminosity recorded by the BABAR detector at the PEP-II e+e- collider

    We describe a measurement of the time-integrated luminosity of the data collected by the BABAR experiment at the PEP-II asymmetric-energy e+e- collider at the ϒ(4S), ϒ(3S), and ϒ(2S) resonances and in a continuum region below each resonance. We measure the time-integrated luminosity by counting e+e-→e+e- and (for the ϒ(4S) only) e+e-→μ+μ- candidate events, allowing additional photons in the final state. We use data-corrected simulation to determine the cross-sections and reconstruction efficiencies for these processes, as well as the major backgrounds. Due to the large cross-sections of e+e-→e+e- and e+e-→μ+μ-, the statistical uncertainties of the measurement are substantially smaller than the systematic uncertainties. The dominant systematic uncertainties are due to observed differences between data and simulation, as well as uncertainties on the cross-sections. For data collected on the ϒ(3S) and ϒ(2S) resonances, an additional uncertainty arises due to ϒ→e+e-X background. For data collected off the ϒ resonances, we estimate an additional uncertainty due to time-dependent efficiency variations, which can affect the short off-resonance runs. The relative uncertainties on the luminosities of the on-resonance (off-resonance) samples are 0.43% (0.43%) for the ϒ(4S), 0.58% (0.72%) for the ϒ(3S), and 0.68% (0.88%) for the ϒ(2S). This work is supported by the US Department of Energy and National Science Foundation, the Natural Sciences and Engineering Research Council (Canada), the Commissariat à l'Energie Atomique and Institut National de Physique Nucléaire et de Physique des Particules (France), the Bundesministerium für Bildung und Forschung and Deutsche Forschungsgemeinschaft (Germany), the Istituto Nazionale di Fisica Nucleare (Italy), the Foundation for Fundamental Research on Matter (The Netherlands), the Research Council of Norway, the Ministry of Education and Science of the Russian Federation, the Ministerio de Ciencia e Innovación (Spain), and the Science and Technology Facilities Council (United Kingdom). Individuals have received support from the Marie Curie IEF program (European Union) and the A.P. Sloan Foundation (USA).
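    The counting method described here reduces to the standard relation L = (N_cand − N_bkg) / (ε · σ_vis) for each counted process. A minimal sketch with made-up inputs (the counts, efficiency, and cross-section below are illustrative, not BABAR values):

    ```python
    def integrated_luminosity(n_candidates, n_background, efficiency, sigma_visible_nb):
        """Time-integrated luminosity in nb^-1 from a counted reference process."""
        return (n_candidates - n_background) / (efficiency * sigma_visible_nb)

    # Hypothetical inputs for an e+e- -> e+e- (Bhabha) selection:
    n_cand = 1_250_000  # selected candidate events
    n_bkg = 3_400       # background estimate, from data-corrected simulation
    eff = 0.52          # trigger x reconstruction x selection efficiency
    sigma = 55.0        # visible cross-section within the acceptance, in nb

    lumi_nb = integrated_luminosity(n_cand, n_bkg, eff, sigma)
    print(f"L ~ {lumi_nb / 1000:.1f} pb^-1")  # 1 pb^-1 = 1000 nb^-1
    ```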

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ^2-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
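    As a concrete instance of the forward-backward splitting scheme mentioned above, the sketch below applies ISTA (proximal gradient descent) to the ℓ^1-regularized least-squares (Lasso) problem min_x ½‖Ax − y‖² + λ‖x‖₁; the proximal operator of the ℓ^1 norm is componentwise soft-thresholding. Problem sizes and data are illustrative.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, y, lam, n_iter=500):
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)                         # forward (gradient) step
            x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[3, 17, 60]] = [2.0, -1.5, 1.0]  # a 3-sparse ground truth
    y = A @ x_true + 0.01 * rng.standard_normal(40)
    x_hat = ista(A, y, lam=0.1)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
    ```

    With enough measurements relative to the sparsity level, the recovered support should closely match the true one, an instance of the model identification guarantees the chapter reviews.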

    Testing population genetic structure using parametric bootstrapping and MIGRATE-N

    We present a method for investigating genetic population structure using sequence data. Our hypothesis states that the parameters most responsible for the formation of genetic structure among different populations are the relative rates of mutation (μ) and migration (M). The evolution of genetic structure among different populations requires rates of M ≪ μ, because this allows population-specific mutations to accumulate. Rates of μ ≪ M will result in populations that are effectively panmictic, because genetic differentiation will not develop among demes. Our test is implemented by using a parametric bootstrap to create the null distribution of the likelihood of the data having been produced under an appropriate model of sequence evolution and a migration rate sufficient to approximate panmixia. We describe this test, then apply it to mtDNA data from 243 plethodontid salamanders. We are able to reject the null hypothesis of no population structure on all but the smallest geographic scales, a result consistent with the apparent lack of migration in Plethodon idahoensis. This approach represents a new method of investigating population structure with haploid DNA, and as such may be particularly useful for preliminary investigation of non-model organisms for which multi-locus nuclear data are not available.
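    Schematically, the test compares an observed log-likelihood ratio to a null distribution built by simulation under near-panmixia. In the Python sketch below, the simulator and likelihood fits are hypothetical callables supplied by the user, standing in for a coalescent simulator and MIGRATE-N-style fits, not real APIs.

    ```python
    import numpy as np

    def parametric_bootstrap_pvalue(observed_lr, simulate_under_null,
                                    fit_structured, fit_panmixia, n_boot=200):
        """One-sided p-value for the observed log-likelihood ratio (structured vs panmictic)."""
        null_lrs = np.empty(n_boot)
        for i in range(n_boot):
            data = simulate_under_null()  # simulate under migration high enough for panmixia
            null_lrs[i] = fit_structured(data) - fit_panmixia(data)
        # Add-one correction keeps the p-value away from exactly zero.
        return (1 + np.sum(null_lrs >= observed_lr)) / (1 + n_boot)
    ```

    A small p-value indicates that the observed data show more structure than datasets generated under migration rates high enough to approximate panmixia.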