
    The Limits of Control, or How I Learned to Stop Worrying and Love Regulation (Discussion)

    When we want to solve a problem, we talk about how we might manage or regulate—control—it. Control is a central concept in systems science, along with system, environment, utility, and information. With his information-theoretic Law of Requisite Variety, Ashby proved that to control a system we need as much variability in our regulator as we have in our system (“only variety can destroy variety”)—something like a method of control for everything we want to control. For engineered systems, this appears to be the case (at least sometimes). But what about social systems? Does a group of humans behave with the same level of variability as a machine? Not usually. And when control is applied to a human system, in the form of a new law or regulation, individuals within it may deliberately change their behavior. A machine’s behavior may also change when a control is applied to it—think of how emissions equipment affects the performance of an automobile (less pollution, but less power too)—but the machine doesn’t (typically) adapt. People do. Does this pose a difficulty if we want to employ Ashby’s law to solve a control problem in a human system? Or could our ability to adapt provide an advantage?

    Ashby acknowledged that regulation is more difficult for very large systems, and many social systems are very large. With limited resources we may not be able to control for all the variety and possible disturbances in a very large system, and therefore we must make choices. We can leave a system unregulated; we can reduce the amount of the system we want to control; we can increase control over certain forms of variety and disturbances; or we could find constraint or structure in the system’s variety and disturbances—in other words, create better, more accurate models of our system and its environment.

    Creating better models has always been a driving force in the development of systems science. Conant and Ashby proved that “every good regulator of a system must be a model of that system” in a paper of the same name. Intuitively this makes sense: if we have a better understanding of the system—a better model—we should be better able to control it. But how well are we able to model human systems? For example, how well do we model intersections? Think about your experience in a car or on a bike at a downtown intersection during rush hour. Now think about that same intersection from the perspective of a pedestrian late in the evening. Did the traffic signals control the intersection efficiently under both conditions? What if we consider all the downtown intersections, or the entire Portland-area traffic system? What about even larger systems? How well can we model the U.S. health care system? What is the chance that, in a few thousand pages of new controls, a few of them will cause some unforeseen consequence? How well do we understand the economy? Enough to create a law limiting CEO compensation? Might just one seemingly straightforward control lead to something unforeseen?

    So what level of understanding must we have of a system, i.e., how well must we be able to model it, before we regulate it? We must still react to and manage, as best we can, a man-made or natural disaster, even when we may know very little about it at the start. Our ability to adapt is critical in these situations. But at the same time, with our ability to adapt we can also (given the proper resources) circumvent the intent of regulations or use regulations to protect or increase our influence: consider “loopholes” in the tax code, or legislation with which large corporations can easily comply but which causes great difficulties for smaller businesses.

    No matter what problem we have, it’s important to understand what limits our ability to control and how controls may cause new and different problems; this will be the general focus of this seminar. A brief overview of Ashby’s Law of Requisite Variety, along with a conceptual example, will be presented.
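
    For readers who want the quantitative claim behind “only variety can destroy variety”: the law is commonly stated in entropy (log-variety) form. The sketch below uses standard textbook notation and is not drawn from the seminar abstract itself.

        H(E) \ge H(D) - H(R)

    Here D stands for the disturbances impinging on the system, R for the regulator’s repertoire of responses, and E for the essential outcome variables to be held within acceptable bounds. The residual outcome uncertainty H(E) can be driven down only by supplying at least as much regulator variety H(R) as there is disturbance variety H(D); a regulator with too little variety necessarily leaves some disturbances uncompensated.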

    Minds, Brains and Programs

    This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains; it says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences. (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.

    The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence

    This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.

    Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems

    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they are presumptuous. After elaborating this moral concern, I explore the possibility that carefully procuring the training data for image recognition systems could ensure that the systems avoid the problem. The lesson of this paper extends beyond just the particular case of image recognition systems and the challenge of responsibly identifying a person’s intentions. Reflection on this particular case demonstrates the importance (as well as the difficulty) of evaluating machine learning systems and their training data from the standpoint of moral considerations that are not encompassed by ordinary assessments of predictive accuracy.

    The imperfect observer: Mind, machines, and materialism in the 21st century

    The dualist/materialist debates about the nature of consciousness are based on the assumption that an entirely physical universe must ultimately be observable by humans (with infinitely advanced tools). Thus the dualists claim that anything unobservable must be non-physical, while the materialists argue that in theory nothing is unobservable. However, there may be fundamental limitations in the power of human observation, no matter how well aided, that greatly curtail our ability to know and observe even a fully physical universe. This paper presents arguments to support the model of an inherently limited observer and explores the consequences of this view.

    Special Libraries, October 1957

    Volume 48, Issue 8