373 research outputs found

    Big Business, Big Government and Big Legal Questions


    Competing fantasies of humans and machines: Symbolic convergences in artificial intelligence events coverage

    This research analyzes coverage of major artificial intelligence events representing the thematic concept of "man versus machine." Rooted in grounded theory and rhetorical criticism, this research applies symbolic convergence theory and fantasy theme analysis to reporting from The New York Times, The Wall Street Journal and The Washington Post immediately surrounding three cultural and scientific milestones in the development of artificial intelligence technology: IBM Deep Blue's 1997 defeat of chess grandmaster Garry Kasparov; IBM Watson's 2011 defeat of Jeopardy! champions Ken Jennings and Brad Rutter; and Google DeepMind AlphaGo's 2016 defeat of Lee Sedol. This research analyzes how symbolic realities are dramatized in the context of these events such that the competitions themselves represent ideological battles between humanism and technological superiority. It also demonstrates subtle variations in how fantasy themes and rhetorical visions manifest in coverage from each outlet, amounting to what is effectively a competition for shared consciousness between these two competing ideological constructs.

    Tech Dominance and the Policeman at the Elbow

    One school of thought takes much of law and the legal system as essentially irrelevant to the process of technological evolution. This view takes as axiomatic that the rate of technological change is always accelerating, and that any firm or institution dependent on a given technology is therefore doomed to rapid obsolescence. Law, at best, risks interfering with a natural progression toward a better technological future, hindering "the march of civilization." This paper discusses the historical role of antitrust investigation in changing the course of technological development by focusing on the example of the IBM litigation (1969-1984). While the lawsuit was widely derided and seen as a failure, this essay challenges the conventional wisdom and suggests, with the benefit of decades of hindsight, that the IBM lawsuit and trial, despite never reaching a verdict, actually catalyzed numerous transformational developments key to the growth and innovation of the computing industries.

    Passion and process in environmental management at IBM

    Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2009. Includes bibliographical references (p. 111-118).
    Sustainability is one of the greatest challenges we face, and addressing it successfully requires the involvement of a variety of stakeholders, including business. With this in mind, this thesis seeks to further our understanding of how a firm's response to sustainability can, in addition to making business sense, be effective and sustainable. This inevitably entails dealing with the classic tension between "passion" and "process." The thesis therefore explores how a balance between the two may be found by examining IBM's extensive and long-sustained environmental management experience. IBM has a recognized record of environmental responsibility that has matured over almost 40 years, surviving periods of great difficulty for the company. Its environmental sustainability program and its commitment to corporate responsibility, a continuum from legal and compliance activities to engagements that help the company develop value-creation opportunities, are clearly strategic. Its efforts, a combination of activities that address immediate and future business pressures, are in tune with what the literature considers "best practice" in environmental corporate sustainability. IBM's experience confirms the importance both of nourishing an emotional commitment to sustainability and of establishing a process, in its case an environmental management system, that enables the company to systematically identify and manage the environmental impacts of its operations. On the one hand, its long-sustained record of environmental commitment, combined with its dedication to being a recognized environmental leader, has instilled a strong passion for sustainability across the company's organizations and employees. On the other hand, IBM's pursuit of a demonstrable record of performance, combined with a commitment to continuous improvement, has led to the development of a carefully designed, effective environmental management system. IBM seems to have optimized the balance between passion and process through a commitment to scientific, fact-based decision-making, which has allowed the company to design and implement goals and procedures that will have the most impact given its resources and footprint.
    by Paulina Ponce de León Baridó. S.M. in Technology and Policy

    Big data analytics: Computational intelligence techniques and application areas

    Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and the economy, and discuss challenges arising from Big Data utilization. Different computational intelligence techniques have been considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications for real-world smart city problems can be developed by utilizing these powerful tools and techniques. We present a case study for intelligent transportation in the context of a smart city, and a novel data modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation and commercialization related to Big Data, its applications and deployment.

    Spartan Daily, November 17, 1967

    Volume 55, Issue 43

    Graphic design + biomimicry

    GRAPHIC DESIGN + BIOMIMICRY: Integrating Nature into Modern Design Practices is a thesis that explores how to effectively integrate the methodologies and principles of graphic design and biomimicry. The objective is to create an innovative design process resulting in successful, sustainable and timeless design solutions. This process is meant to remind designers of the benefits nature has to offer in helping us solve many of the problems society is grappling with today. Over 3.8 billion years, nature has used its imaginative prowess to find what works, what is appropriate, and most importantly, what lasts here on Earth. The final print application acts as a resource guidebook cataloging all of the research, processes, and findings documented throughout this thesis. This includes the indirect method, which pairs nature's fourteen design principles with the fourteen universal design principles and elements, as well as the direct method of the biomimetic design process, which applies six stages: (1) Defining, (2) Analyzing, (3) Observing, (4) Selecting, (5) Implementing, and (6) Evaluating. Each chapter within the resource guidebook corresponds to a stage in the graphic design + biomimicry process. Informational charts, diagrams, text and photographs are included throughout to enhance comprehension of the subject matter. Overall, this thesis is meant to encourage designers to think differently, pushing themselves to innovate, experiment, and adapt their designs further than ever before. The objective is to create good design that also has the potential to do good for the world and everything that encompasses it. We are on the cusp of great change: will designers curl up at the thought, or embrace this new mode of thinking and biomimetic mindset to help shape a positive future for design, people, and most importantly, our planet?

    Adaptive-Aggressive Traders Don't Dominate

    For more than a decade, Vytelingum's Adaptive-Aggressive (AA) algorithm has been recognized as the best-performing automated auction-market trading-agent strategy currently known in the AI/Agents literature; in this paper, we demonstrate that it is in fact routinely outperformed by another algorithm when exhaustively tested across a sufficiently wide range of market scenarios. The novel step taken here is to use large-scale compute facilities to exhaustively brute-force evaluate AA in a variety of market environments based on those used for testing it in the original publications. Our results show that even in these simple environments AA is consistently outperformed by IBM's GDX algorithm, first published in 2002. We summarize here results from more than one million market simulation experiments, orders of magnitude more testing than was reported in the publications that first introduced AA. A 2019 ICAART paper by Cliff claimed that AA's failings were revealed by testing it in more realistic experiments, with conditions closer to those found in real financial markets, but here we demonstrate that even under the simple experimental conditions used in the original AA papers, exhaustive testing shows AA to be outperformed by GDX. We close this paper with a discussion of the methodological implications of our work: any results from previous papers in which one trading algorithm is claimed to be superior to others on the basis of only a few thousand trials are probably best treated with some suspicion now. The rise of cloud computing means that the compute power necessary to subject trading algorithms to millions of trials over a wide range of conditions is readily available at reasonable cost; we should make use of this, and exhaustive testing such as that shown here should be the norm in future evaluations and comparisons of new trading algorithms.
    Comment: To be published as a chapter in "Agents and Artificial Intelligence" edited by Jaap van den Herik, Ana Paula Rocha, and Luc Steels; forthcoming 2019/2020. 24 pages, 1 figure, 7 tables.
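    The brute-force methodology the abstract describes, running two strategies over very many independent simulated sessions and drawing conclusions only from large trial counts, can be sketched as a simple harness. This is a minimal illustrative sketch, not the paper's actual market simulator: the per-session profit model, the assumed performance edges attached to the "AA" and "GDX" labels, and all numbers below are hypothetical stand-ins for a full order-book simulation.

    ```python
    import random
    import statistics

    def run_session(strategy, rng):
        # Hypothetical stand-in for one full market session: profit is a
        # noisy draw whose mean reflects an *assumed* edge per strategy.
        # A real evaluation would run an order-book simulation here.
        edge = {"AA": 1.00, "GDX": 1.05}[strategy]
        return rng.gauss(edge, 0.5)

    def compare_strategies(n_trials, seed=0):
        # Brute-force evaluation: run both strategies over many independent
        # sessions, tallying per-session wins and mean profits, so that any
        # superiority claim rests on a large number of trials.
        rng = random.Random(seed)
        wins = {"AA": 0, "GDX": 0}
        profits = {"AA": [], "GDX": []}
        for _ in range(n_trials):
            pa = run_session("AA", rng)
            pg = run_session("GDX", rng)
            profits["AA"].append(pa)
            profits["GDX"].append(pg)
            wins["GDX" if pg > pa else "AA"] += 1
        means = {s: statistics.mean(p) for s, p in profits.items()}
        return wins, means
    ```

    With a small assumed edge and high per-session noise, a few thousand trials can easily rank the strategies the wrong way; only at large trial counts does the win tally stabilize, which is the methodological point the paper makes.
    
    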

    Cognitive Computing: Collected Papers

    'Cognitive Computing' has initiated a new era in computer science. Cognitive computers are no longer rigidly programmed; they learn from their interactions with humans, from the environment and from information. They are thus able to perform amazing tasks on their own, such as driving a car in dense traffic, piloting an aircraft in difficult conditions, taking complex financial investment decisions, analysing medical-imaging data, and assisting medical doctors in diagnosis and therapy. Cognitive computing is based on artificial intelligence, image processing, pattern recognition, robotics, adaptive software, networks and other modern computer science areas, but also includes sensors and actuators to interact with the physical world. Cognitive computers, also called 'intelligent machines', emulate human cognitive, mental and intellectual capabilities. They aim to do for human mental power (the ability to use our brain in understanding and influencing our physical and information environment) what the steam engine and combustion motor did for muscle power. We can expect a massive impact of cognitive computing on life and work. Many modern complex infrastructures, such as the electricity distribution grid, railway networks, the road traffic structure, information analysis (big data), the health care system, and many more, will rely on intelligent decisions taken by cognitive computers. A drawback of cognitive computers will be a shift in employment opportunities: a rising number of tasks will be taken over by intelligent machines, erasing entire job categories (such as cashiers, mail clerks, call and customer assistance centres, taxi and bus drivers, pilots, grid operators, air traffic controllers, …). A possibly dangerous risk of cognitive computing is the threat posed to mankind by "super-intelligent machines": as soon as they are sufficiently intelligent, deeply networked and have access to the physical world, they may endanger many areas of human supremacy and possibly even eliminate humans. Cognitive computing technology is based on new software architectures, the "cognitive computing architectures", which enable the development of systems that exhibit intelligent behaviour.
    Contents:
    Introduction 5
    1. Applying the Subsumption Architecture to the Genesis Story Understanding System – A Notion and Nexus of Cognition Hypotheses (Felix Mai) 9
    2. Benefits and Drawbacks of Hardware Architectures Developed Specifically for Cognitive Computing (Philipp Schröppel) 19
    3. Language Workbench Technology For Cognitive Systems (Tobias Nett) 29
    4. Networked Brain-based Architectures for more Efficient Learning (Tyler Butler) 41
    5. Developing Better Pharmaceuticals – Using the Virtual Physiological Human (Ben Blau) 51
    6. Management of existential Risks of Applications leveraged through Cognitive Computing (Robert Richter) 6