Man and Machine: Questions of Risk, Trust and Accountability in Today's AI Technology
Artificial Intelligence began as a field probing some of the most fundamental
questions of science - the nature of intelligence and the design of intelligent
artifacts. But it has grown into a discipline that is deeply entwined with
commerce and society. Today's AI technology, such as expert systems and
intelligent assistants, poses difficult questions of risk, trust, and
accountability. In this paper, we present these concerns, examining them in the
context of historical developments that have shaped the nature and direction of
AI research. We also suggest the exploration and further development of two
paradigms, human intelligence-machine cooperation, and a sociological view of
intelligence, which might help address some of these concerns.
Inventing Intelligence: On the History of Complex Information Processing and Artificial Intelligence in the United States in the Mid-Twentieth Century
In the mid-1950s, researchers in the United States melded formal theories of problem solving and intelligence with another powerful new tool for control: the electronic digital computer. Several branches of western mathematical science emerged from this nexus, including computer science (1960s onward), data science (1990s onward) and artificial intelligence (AI). This thesis offers an account of the origins and politics of AI in the mid-twentieth century United States, which focuses on its imbrications in systems of societal control. In an effort to denaturalize the power relations upon which the field came into being, I situate AI's canonical origin story in relation to the structural and intellectual priorities of the U.S. military and American industry during the Cold War, circa 1952 to 1961.
This thesis offers a detailed and comparative account of the early careers, research interests, and key outputs of four researchers often credited with laying the foundations for AI and machine learning: Herbert A. Simon, Frank Rosenblatt, John McCarthy and Marvin Minsky. It chronicles the distinct ways in which each sought to formalise and simulate human mental behaviour using digital electronic computers. Rather than assess their contributions as discontinuous with what came before, as in mythologies of AI's genesis, I establish continuities with, and borrowings from, management science and operations research (Simon), Hayekian economics and instrumentalist statistics (Rosenblatt), automatic coding techniques and pedagogy (McCarthy), and cybernetics (Minsky), along with the broadscale mobilization of Cold War-era civilian-led military science generally.
I assess how Minsky's 1961 paper 'Steps Toward Artificial Intelligence' simultaneously consolidated and obscured these entanglements as it set in motion an initial research agenda for AI in the following two decades. I argue that mind-computer metaphors, and research in complex information processing generally, played an important role in normalizing the small- and large-scale structuring of social behaviour using mathematics in the United States from the second half of the twentieth century onward.
What can AI do for you?
Simply put, most organizations do not know how to approach the incorporation of AI into their businesses, and few are knowledgeable enough to understand which concepts are applicable to their business models. Doing nothing and waiting is not an option: Mahidar and Davenport (2018) argue that companies that try to play catch-up will ultimately lose to those who invested and began learning early. But how do we bridge the gap between skepticism and adoption? We propose a toolkit, inclusive of people, processes, and technologies, to help companies with discovery and readiness to start their AI journey. Our toolkit will deliver specific and actionable answers to the operative question: What can AI do for you?
Algorithms & Fiduciaries: Existing and Proposed Regulatory Approaches to Artificially Intelligent Financial Planners
Artificial intelligence is no longer solely in the realm of science fiction. Today, basic forms of machine learning algorithms are commonly used by a variety of companies, and advanced forms of machine learning are increasingly making their way into the consumer sphere, promising to optimize existing markets. For financial advising, machine learning algorithms promise to make advice available 24/7 and significantly reduce costs, thereby opening the market for financial advice to lower-income individuals. However, the use of machine learning algorithms also raises concerns. Among them are whether these machine learning algorithms can meet the existing fiduciary standard imposed on human financial advisers, and how responsibility and liability should be partitioned when an autonomous algorithm falls short of the fiduciary standard and harms a client. After summarizing the applicable law regulating investment advisers and the current state of robo-advising, this Note evaluates whether robo-advisers can meet the fiduciary standard and proposes alternate liability schemes for dealing with increasingly sophisticated machine learning algorithms.
Machine Learning: When and Where the Horses Went Astray?
Machine Learning is usually defined as a subfield of AI concerned with
extracting information from raw data sets. Despite its common acceptance and
widespread recognition, this definition is wrong and groundless. Meaningful
information does not belong to the data that bear it. It belongs to the
observers of the data, and it is a shared agreement and a convention among them.
This private information therefore cannot be extracted from the data by any
means, and all further attempts of Machine Learning apologists to justify their
funny business are inappropriate.
Artificial Intelligence and eLearning 4.0: A New Paradigm in Higher Education
John Markoff (2006, para. 2) was the first to coin the phrase Web 3.0 in The New York Times in 2006, with the notion that the next evolution of the web would contain a layer "that can reason in human fashion." With the emergence of Web 3.0 technology and the promise of impact on higher education, Web 3.0 will usher in a new age of artificial intelligence by increasing access to a global database of intelligence. Bill Mark, former VP of Siri, notes, "We're moving to a world where the technology does a better job of understanding higher level intent and completes the entire task for us" (Temple, 2010, para. 14). This poster provides a quick overview of the developments from Web 1.0 to Web 3.0, the progression of artificial intelligence, as well as possible advances as we move into the era of eLearning 4.0.
Power in AI: Inequality Within and Without the Algorithm
The birth of Artificial Intelligence in the 1950s was a birth that did not involve women. Despite female labour being used to solve problems that machines would one day automate, women remained unseen, their voices limited. Today, inequality is still rife. Bias, discrimination, and harm are algorithmically perpetuated and amplified. AI is technocratic, with multinationals in the US and China leading development, and the global South being left behind. Even within Silicon Valley, the environment is hostile towards women: from the products being created to the people calling for ethical and responsible development, women are losing out. This chapter explores the layers of imbalance in gender, race, and power in AI, from the beginnings of the discipline to the present day, from the development environment to the algorithms themselves, and highlights the challenges that need to be faced to move towards a more equitable landscape.
Future progress in artificial intelligence: A poll among experts
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), "Future progress in artificial intelligence: A survey of expert opinion", in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be "bad" or "extremely bad" for humanity.