
    Challenges for an Ontology of Artificial Intelligence

    Questions of ontology are of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors that hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be assimilated and regarded as “normal,” and (3) the tendency of human beings to anthropomorphize. This list is not intended to be exhaustive, nor is it seen to preclude a clear ontology entirely; however, these challenges are a necessary set of topics for consideration. Each of these factors presents a 'moving target' for discussion, making it challenging for both technical specialists and non-practitioners of AI systems development (e.g., philosophers and theologians) to speak meaningfully, given that the corpus of AI structures and capabilities evolves at a rapid pace. Finally, we present avenues for moving forward, including opportunities for collaborative synthesis for scholars in philosophy and science.

    Big Universe, Big Data: Machine Learning and Image Analysis for Astronomy

    Astrophysics and cosmology are rich with data. The advent of wide-area digital cameras on large-aperture telescopes has led to ever more ambitious surveys of the sky. Data volumes that constituted an entire survey a decade ago can now be acquired in a single night, and real-time analysis is often desired. Modern astronomy therefore requires big-data know-how; in particular, it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: astronomy applications touch on several current machine learning research questions, such as learning from biased data and dealing with label and measurement noise. We argue that this makes astronomy a great domain for computer science research, as it pushes the boundaries of data analysis. In the following, we present this exciting application area for data scientists. We focus on exemplary results, discuss the main challenges, and highlight some recent methodological advances in machine learning and image analysis triggered by astronomical applications.
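    As a minimal sketch of the label-noise challenge named above (not code from the paper; the synthetic catalogue, the 20% noise rate, and the model choice are all illustrative assumptions), one can flip a fraction of training labels and watch a standard classifier degrade:

```python
# Illustrative only: synthetic "survey" data stands in for real astronomical
# catalogues; the noise rate and classifier are assumptions, not the paper's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class problem, e.g. star vs. galaxy from photometric features.
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Flip 20% of the training labels to mimic noisy catalogue cross-matching.
flip = rng.random(len(y_train)) < 0.20
y_noisy = np.where(flip, 1 - y_train, y_train)

for labels, name in [(y_train, "clean labels"), (y_noisy, "20% label noise")]:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, labels)
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.3f}")
```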

    Discerning the Form of the Dense Core Mass Function

    We investigate the ability to discern between lognormal and power-law forms for the observed mass function of dense cores in star-forming regions. After testing our fitting, goodness-of-fit, and model selection procedures on simulated data, we apply our analysis to 14 datasets from the literature. Whether the core mass function has a power-law tail or follows a pure lognormal form cannot be distinguished from current data. Our simulations suggest that datasets from uniform surveys containing more than approximately 500 cores, with a completeness limit below the peak of the mass distribution, are needed to definitively discern between these two functional forms. We also conclude that the width of the core mass function may be more reliably estimated than the power-law index of the high-mass tail, and that the width may be a more useful parameter for comparison with the stellar initial mass function when deducing the statistical evolution of dense cores into stars.
    Comment: 6 pages, 2 figures, accepted for publication in PAS
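    A minimal sketch of this kind of model comparison, under stated assumptions (synthetic core masses, maximum-likelihood fits, and AIC as the selection criterion; the paper's actual fitting, goodness-of-fit, and model selection procedures are more involved):

```python
# Fit a lognormal and a pure power law to a fake core mass sample and
# compare via AIC (lower wins). All numbers here are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
masses = rng.lognormal(mean=0.0, sigma=0.8, size=300)  # fake core masses

# Lognormal: MLE via scipy (floc=0 pins the location parameter).
shape, _, scale = stats.lognorm.fit(masses, floc=0)
ll_lognorm = np.sum(stats.lognorm.logpdf(masses, shape, 0, scale))

# Pure power law above a completeness limit m_min: p(m) ~ m^-alpha,
# using the standard closed-form MLE for alpha.
m_min = masses.min()
alpha = 1 + len(masses) / np.sum(np.log(masses / m_min))
ll_pl = np.sum(np.log((alpha - 1) / m_min) - alpha * np.log(masses / m_min))

# AIC = 2k - 2 ln L: lognormal has 2 free parameters, the power law 1.
aic_lognorm = 2 * 2 - 2 * ll_lognorm
aic_pl = 2 * 1 - 2 * ll_pl
print(f"AIC lognormal = {aic_lognorm:.1f}, AIC power law = {aic_pl:.1f}")
```

    With small samples the two AIC values often land close together, which is exactly the indistinguishability the abstract reports for current datasets.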

    A New Kind of Finance

    Finance has benefited from Wolfram's NKS approach, but it can and will benefit even more in the future, and the gains from this influence may actually be concentrated among practitioners who, as a group, unintentionally employ its principles.
    Comment: 13 pages; Forthcoming in "Irreducibility and Computational Equivalence: 10 Years After Wolfram's A New Kind of Science," Hector Zenil, ed., Springer Verlag, 201

    Trying to break new ground in aerial archaeology

    Aerial reconnaissance continues to be a vital tool for landscape-oriented archaeological research. Although a variety of remote sensing platforms operate within the earth’s atmosphere, the majority of aerial archaeological information is still derived from oblique photographs collected during observer-directed reconnaissance flights, a prospection approach that has dominated archaeological aerial survey for the past century. The resulting highly biased imagery is generally catalogued in sub-optimal (spatial) databases, if at all, after which a small selection of images is orthorectified and interpreted. For decades, this has been the standard approach. Although many innovations, including digital cameras, inertial units, photogrammetry and computer vision algorithms, geographic(al) information systems and computing power have emerged, their potential has not yet been fully exploited to re-invent and substantially optimise this crucial branch of landscape archaeology. The authors argue that a fundamental change is needed in the way aerial archaeologists approach data acquisition and image processing. By addressing the core concepts of geographically biased aerial archaeological photographs and proposing new imaging technologies, data handling methods and processing procedures, this paper gives a personal opinion on how the methodological components of aerial archaeology, and specifically aerial archaeological photography, should evolve during the next decade if developing a more reliable record of our past is to be our central aim. A possible practical solution is illustrated by outlining a turnkey aerial prospection system for total coverage survey together with a semi-automated back-end pipeline that takes care of photograph correction and image enhancement as well as the management and interpretative mapping of the resulting data products. In this way, the proposed system addresses one of many bias issues in archaeological research: the bias we impart to the visual record as a result of selective coverage. While the total coverage approach outlined here may not altogether eliminate survey bias, it can vastly increase the amount of useful information captured during a single reconnaissance flight while mitigating the discriminating effects of observer-based, on-the-fly target selection. Furthermore, this paper should make it clear that such an approach is feasible with current technology. This can radically alter the basis for aerial prospection and move landscape archaeology beyond the inherently biased patterns currently created by airborne archaeological prospection.

    Discrete-Time Chaotic-Map Truly Random Number Generators: Design, Implementation, and Variability Analysis of the Zigzag Map

    In this paper, we introduce a novel discrete chaotic map, named the zigzag map, that demonstrates excellent chaotic behavior and can be utilized in Truly Random Number Generators (TRNGs). We comprehensively investigate the map and explore its critical chaotic characteristics and parameters. We further present two circuit implementations for the zigzag map, based on the switched-current technique and on current-mode affine interpolation of the breakpoints. In practice, implementation variations can deteriorate the quality of the output sequence by perturbing the chaotic map's parameters. In order to quantify the impact of variations on the map's performance, we model them using a combination of theoretical analysis and Monte Carlo simulations on the circuits. We demonstrate that even in the presence of these variations, a TRNG based on the zigzag map passes all of the NIST 800-22 statistical randomness tests using simple post-processing of the output data.
    Comment: To appear in Analog Integrated Circuits and Signal Processing (ALOG
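    For illustration only, here is a software sketch of bit extraction from a zigzag-shaped piecewise-linear chaotic map. The three-segment map below (slopes of ±2 on [-1, 1]) is a stand-in: the paper's exact map, circuit implementations, and post-processing are not reproduced, and a hardware TRNG draws its entropy from analog noise, which the small Gaussian jitter here only loosely models:

```python
# Illustrative zigzag-style map, not the paper's parameterization. The jitter
# models circuit noise and also avoids the collapse that a purely
# deterministic floating-point iteration of a slope-2 map would suffer.
import numpy as np

def zigzag(x: float) -> float:
    """One iteration of a three-segment piecewise-linear map on [-1, 1]."""
    if x < -0.5:
        return 2.0 * x + 2.0
    if x <= 0.5:
        return -2.0 * x
    return 2.0 * x - 2.0

def generate_bits(seed: float, n: int, noise: float = 1e-9) -> np.ndarray:
    """Iterate the noisy map and threshold at 0 to emit one bit per step."""
    rng = np.random.default_rng(42)
    x, bits = seed, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = zigzag(x) + rng.normal(0.0, noise)
        x = min(1.0, max(-1.0, x))  # keep the state in [-1, 1]
        bits[i] = 1 if x > 0.0 else 0
    return bits

bits = generate_bits(seed=0.137, n=10_000)
print(f"ones fraction: {bits.mean():.3f}")  # should hover near 0.5
```

    Because this map is odd-symmetric, its invariant density is symmetric about zero, so thresholding at zero yields roughly balanced bits; a real design would still apply post-processing, as the abstract notes.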

    Maximum approximate entropy and r threshold: A new approach for regularity changes detection

    Approximate entropy (ApEn) has been widely used as an estimator of regularity in many scientific fields. It has proved to be a useful tool because of its ability to distinguish between different systems' dynamics when only short, noisy data records are available. Incorrect parameter selection (embedding dimension m, threshold r, and data length N) and the presence of noise in the signal can undermine the discrimination capacity of ApEn. In this work we show that r_max, the threshold at which ApEn(m, r_max, N) = ApEn_max, can also be used as a feature to discern between dynamics. Moreover, the combined use of ApEn_max and r_max achieves a better discrimination capacity, even in the presence of noise. We conducted our studies using real physiological time series and simulated signals corresponding to both low- and high-dimensional systems. When ApEn_max is incapable of discerning between different dynamics because of the presence of noise, our results suggest that r_max provides additional information that can be useful for classification purposes. Based on cross-validation tests, we conclude that, for short noisy signals, the joint use of ApEn_max and r_max can significantly decrease the misclassification rate of a linear classifier in comparison with their isolated use.
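    A minimal sketch of ApEn and the proposed r_max feature (the implementation follows the standard definition with self-matches included; the test signal and threshold grid are illustrative assumptions, not the authors' data or code):

```python
# ApEn(m, r, N) = Phi^m(r) - Phi^(m+1)(r), with Phi^m(r) the mean log
# fraction of embedding vectors within Chebyshev distance r of each vector.
import numpy as np

def cheb_dists(x: np.ndarray, m: int) -> np.ndarray:
    """Pairwise Chebyshev distances between all length-m embedding vectors."""
    emb = np.lib.stride_tricks.sliding_window_view(x, m)
    return np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)

def apen(d_m: np.ndarray, d_m1: np.ndarray, r: float) -> float:
    """ApEn from precomputed distance matrices for m and m + 1."""
    phi = lambda d: np.mean(np.log(np.mean(d <= r, axis=1)))
    return phi(d_m) - phi(d_m1)

# Illustrative signal: a short, noisy sine wave.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.3 * rng.standard_normal(500)

m = 2
d_m, d_m1 = cheb_dists(x, m), cheb_dists(x, m + 1)

# Scan r to locate r_max, the threshold that maximizes ApEn.
rs = np.linspace(0.05, 1.0, 40) * x.std()
vals = [apen(d_m, d_m1, r) for r in rs]
i = int(np.argmax(vals))
print(f"ApEn_max = {vals[i]:.3f} at r_max = {rs[i]:.3f}")
```

    The pair (ApEn_max, r_max) printed at the end is the two-dimensional feature the abstract proposes feeding to a linear classifier.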

    How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness

    What is the best way to define algorithmic fairness? While many definitions of fairness have been proposed in the computer science literature, there is no clear agreement on any particular definition. In this work, we investigate ordinary people's perceptions of three of these fairness definitions. Across two online experiments, we test which definitions people perceive to be the fairest in the context of loan decisions, and whether fairness perceptions change with the addition of sensitive information (i.e., the race of the loan applicants). Overall, one definition (calibrated fairness) tends to be preferred over the others, and the results also provide support for the principle of affirmative action.
    Comment: To appear at AI Ethics and Society (AIES) 201
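    For concreteness, here is a sketch of what algorithmic fairness definitions look like in code, using two standard notions from the literature (demographic parity and calibration by group) on synthetic loan data; these are illustrative and not necessarily the three definitions tested in the paper:

```python
# Synthetic loan data: `approved` are model decisions, `repaid` the outcomes,
# `group` a binary sensitive attribute. All distributions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=2000)              # sensitive attribute
score = np.clip(rng.normal(0.5 + 0.05 * group, 0.2, 2000), 0, 1)
approved = score > 0.5
repaid = rng.random(2000) < score                  # outcome tracks the score

# Demographic parity: approval rates should match across groups.
rates = [approved[group == g].mean() for g in (0, 1)]
print(f"approval rates: {rates[0]:.3f} vs {rates[1]:.3f}")

# Calibration by group: among applicants with similar scores, repayment
# rates should match the score regardless of group membership.
for g in (0, 1):
    sel = (group == g) & (score > 0.4) & (score <= 0.6)
    print(f"group {g}: mean score {score[sel].mean():.3f}, "
          f"repayment rate {repaid[sel].mean():.3f}")
```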