3,167 research outputs found

    Man and Machine: Questions of Risk, Trust and Accountability in Today's AI Technology

    Artificial Intelligence began as a field probing some of the most fundamental questions of science - the nature of intelligence and the design of intelligent artifacts. But it has grown into a discipline that is deeply entwined with commerce and society. Today's AI technology, such as expert systems and intelligent assistants, poses some difficult questions of risk, trust and accountability. In this paper, we present these concerns, examining them in the context of historical developments that have shaped the nature and direction of AI research. We also suggest the exploration and further development of two paradigms, human intelligence-machine cooperation and a sociological view of intelligence, which might help address some of these concerns. Comment: Preprint

    What can AI do for you?

    Simply put, most organizations do not know how to approach the incorporation of AI into their businesses, and few are knowledgeable enough to understand which concepts are applicable to their business models. Doing nothing and waiting is not an option: Mahidar and Davenport (2018) argue that companies that try to play catch-up will ultimately lose to those who invested and began learning early. But how do we bridge the gap between skepticism and adoption? We propose a toolkit, inclusive of people, processes, and technologies, to help companies with discovery and readiness to start their AI journey. Our toolkit will deliver specific and actionable answers to the operative question: What can AI do for you?

    Algorithms & Fiduciaries: Existing and Proposed Regulatory Approaches to Artificially Intelligent Financial Planners

    Artificial intelligence is no longer solely in the realm of science fiction. Today, basic forms of machine learning algorithms are commonly used by a variety of companies, and advanced forms of machine learning are increasingly making their way into the consumer sphere, promising to optimize existing markets. For financial advising, machine learning algorithms promise to make advice available 24/7 and to significantly reduce costs, thereby opening the market for financial advice to lower-income individuals. However, the use of machine learning algorithms also raises concerns, among them whether these algorithms can meet the existing fiduciary standard imposed on human financial advisers, and how responsibility and liability should be apportioned when an autonomous algorithm falls short of the fiduciary standard and harms a client. After summarizing the applicable law regulating investment advisers and the current state of robo-advising, this Note evaluates whether robo-advisers can meet the fiduciary standard and proposes alternate liability schemes for dealing with increasingly sophisticated machine learning algorithms.

    Artificial Intelligence: Up to here ... and on again


    Machine Learning: When and Where the Horses Went Astray?

    Machine Learning is usually defined as a subfield of AI concerned with extracting information from raw data sets. Despite its common acceptance and widespread recognition, this definition is wrong and groundless. Meaningful information does not belong to the data that bear it; it belongs to the observers of the data and is a shared agreement and convention among them. Therefore, this private information cannot be extracted from the data by any means, and all further attempts of Machine Learning apologists to justify their funny business are inappropriate. Comment: The paper has been accepted for publication in the Machine Learning series of InTech.

    Artificial Intelligence and eLearning 4.0: A New Paradigm in Higher Education

    John Markoff (2006, para. 2) was the first to coin the phrase Web 3.0, in The New York Times in 2006, with the notion that the next evolution of the web would contain a layer “that can reason in human fashion.” With the emergence of Web 3.0 technology and its promised impact on higher education, Web 3.0 will usher in a new age of artificial intelligence by increasing access to a global database of intelligence. Bill Mark, former VP of Siri, notes, “We’re moving to a world where the technology does a better job of understanding higher level intent and completes the entire task for us” (Temple, 2010, para. 14). This poster provides a quick overview of the developments from Web 1.0 to Web 3.0, the progression of artificial intelligence, and possible advances as we move into the era of eLearning 4.0.

    Power in AI: Inequality Within and Without the Algorithm

    The birth of Artificial Intelligence in the 1950s was a birth that did not involve women. Despite female labour being used to solve problems that machines would one day automate, women remained unseen, their voices limited. Today, inequality is still rife. Bias, discrimination, and harm are algorithmically perpetuated and amplified. AI is technocratic, with multinationals in the US and China leading development, and the global South being left behind. Even within Silicon Valley, the environment is hostile towards women: from the products being created to the people calling for ethical and responsible development, women are losing out. This chapter explores the layers of imbalance in gender, race, and power in AI, from the beginnings of the discipline to the present day, from the development environment to the algorithms themselves, and highlights the challenges that need to be faced to move towards a more equitable landscape.

    Alan Turing: father of the modern computer


    Future progress in artificial intelligence: A poll among experts

    [This is the short version of: MĂŒller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. MĂŒller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts put the probability at about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.