    The Future of Work In Cities

    The latest report in our City of the Future series examines the societal shifts and technological advancements reshaping the rapidly changing American workforce. The report outlines solutions to help city leaders plan for the fast-approaching future, while forecasting the economic viability of two distinct sectors – retail and office administration – in which a quarter of Americans are currently employed.

    From Biological to Synthetic Neurorobotics Approaches to Understanding the Structure Essential to Consciousness (Part 3)

    This third paper situates the synthetic neurorobotics research reviewed in the second paper within the themes introduced in the first. It begins with biological non-reductionism as understood by Searle. It emphasizes the role of synthetic neurorobotics studies in accessing the dynamic structure essential to consciousness, with a focus on system criticality and self; develops a distinction between simulated and formal consciousness based on this emphasis; reviews Tani and colleagues' work in light of this distinction; and ends by forecasting the increasing importance of synthetic neurorobotics studies for cognitive science and philosophy of mind, finally in regard to most- and myth-consciousness.

    Spectrum of AI futures imaginaries by AI practitioners in Finland and Singapore: The unimagined speed of AI progress

    AI teases our imagination: people have created various dystopian and utopian imaginaries of the era of AI. Although the key group creating and realizing AI-related futures imaginaries are the technology practitioners who research, develop, and apply AI, empirical research has focused on collective rather than individually interpreted imaginaries. Empirical research on individual practitioners' futures imaginaries is necessary, as is fine-tuning the related vocabulary to support individual and non-linear perspectives. We present an empirical study in which we interviewed 35 AI and robotics practitioners based in Finland and Singapore. We asked (1) what kinds of best and worst futures imaginaries the practitioners of AI and robots envision and (2) how the practitioners imagine likely futures will emerge. As a result, we present three continuums. Along these continuums, the practitioners consider variations of likely futures: (1) the best, (2) in between, and (3) the worst. Our analysis reveals decisive questions behind the continuums regarding the agent in control, relations in practitioner communities and society, and justified concentration of power. © 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

    Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making

    The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep "humans in the loop" ("HITL"). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independently of whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making and whether humans merely seem, from the outside, to have such authority? Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating "behind the curtain." Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation.