    Animals, Machines, and Moral Responsibility in a Built Environment

    Nature has ended. Acid rain and global warming leave no place untouched by human hands. We can no longer think of 'the environment' as synonymous with 'nature'. Instead, Steven Vogel argues that the environment is more like a mall: it is built. And because we build the environment, we are responsible for it. Yet other things build, too. Animals build and use tools. Machines and algorithms build everything from skyscrapers to cell phones. Are they responsible for what they build? While animals and robots are normally considered in distinct philosophical fields, Vogel's rejection of the natural-artificial split prompts us to question the distinction between natural and artificial agents. I argue, on consistent grounds, that neither animals nor robots are morally responsible for what they do. When machines act in morally consequential ways, then, we cannot blame the robot. However, we usually think to blame those who built the robot. I present a theory of how a builder may be responsible for what they build. Then, I argue that there are cases where neither the robot nor the engineer can be blamed for the robot's actions. Drawing on Vogel, Karl Marx, and Martin Heidegger, I explore moral and environmental responsibility through meditations on animals and machines.

    Machines do not decide hate speech: Machine learning, power, and the intersectional approach

    The advent of social media has increased digital content and, with it, hate speech. Advances in machine learning help detect online hate speech at scale, but scale is only one part of the problem of moderating it. Machines do not decide what comprises hate speech; that is a matter of societal norms. Power relations establish such norms and thus determine who can say what comprises hate speech. Without considering this data-generation process, a fair automated hate speech detection system cannot be built. This chapter first examines the relationship between power, hate speech, and machine learning. It then examines how an intersectional lens, focusing on power dynamics between and within social groups, helps identify bias in the data sets used to build automated hate speech detection systems.

    Interactive semantics

    Much research pursues machine intelligence through better representation of semantics. What is semantics? People in different areas view semantics from different facets, although it has accompanied interaction throughout civilization. Some researchers believe that humans have some innate structure in mind for processing semantics. If so, what is that structure like? Others argue that humans evolve a structure for processing semantics through constant learning. If so, what is that process like? Humans have invented various symbol systems to represent semantics. Can semantics be accurately represented? Turing machines are good at processing symbols according to algorithms designed by humans, but they are limited in their ability to process semantics and to interact actively. Supercomputers and high-speed networks do not help solve this issue, as they do not have any semantic worldview and cannot reflect on themselves. Can a future cyber-society have semantic images that enable machines and individuals (humans and agents) to reflect on themselves and interact with each other, aware of the social situation, through time? This paper takes up these issues in the context of studying an interactive semantics for the future cyber-society. It first distinguishes social semantics from natural semantics, and then explores interactive semantics within the category of social semantics. Interactive semantics consists of an interactive system and its semantic image, which co-evolve and influence each other. The semantic worldview and the interactive semantic base are proposed as the semantic basis of interaction. The process of building and explaining a semantic image can be based on an evolving structure incorporating an adaptive multi-dimensional classification space and a self-organized semantic link network. A semantic lens is proposed to enhance the potential of this structure and to help individuals build and retrieve semantic images from different facets, abstraction levels and scales through time.

    On the computational complexity of ethics: moral tractability for minds and machines

    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative ethics through the lens of computational complexity. First, we introduce computational complexity for the uninitiated reader and discuss how the complexity of ethical problems can be framed within Marr's three levels of analysis. We then study a range of ethical problems based on consequentialism, deontology, and virtue ethics, with the aim of elucidating the complexity associated with the problems themselves (e.g., due to combinatorics, uncertainty, strategic dynamics), the computational methods employed (e.g., probability, logic, learning), and the available resources (e.g., time, knowledge, learning). The results indicate that most problems the normative frameworks pose lead to tractability issues in every category analyzed. Our investigation also provides several insights into the computational nature of normative ethics, including the differences between rule- and outcome-based moral strategies, and the implementation-variance with regard to moral resources. We then discuss the consequences that complexity results have for the prospect of moral machines, in virtue of the trade-off between optimality and efficiency. Finally, we elucidate how computational complexity can be used to inform both philosophical and cognitive-psychological research on human morality by advancing the moral tractability thesis.
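The combinatorial source of intractability that the abstract points to can be made concrete: a naive consequentialist planner that scores every possible action sequence takes time exponential in the planning horizon. A minimal sketch in Python (the function names and the toy utility are illustrative assumptions, not the authors' formalism):

```python
import itertools

def best_plan(actions, horizon, utility):
    """Brute-force consequentialist search: evaluate every action
    sequence of length `horizon` and keep the one with the highest
    utility. The loop visits |actions|**horizon sequences, so the
    runtime is exponential in the horizon."""
    best_seq, best_u = None, float("-inf")
    for seq in itertools.product(actions, repeat=horizon):
        u = utility(seq)
        if u > best_u:
            best_seq, best_u = seq, u
    return best_seq, best_u

# Toy outcome-based utility: 'help' is good, 'harm' is bad.
payoff = {"help": 1, "wait": 0, "harm": -1}
plan, value = best_plan(["help", "wait", "harm"], horizon=4,
                        utility=lambda seq: sum(payoff[a] for a in seq))
# 3**4 = 81 sequences were scored for this 4-step plan; a 20-step plan
# would already require 3**20 (about 3.5 billion) evaluations.
```

Even this toy example shows why outcome-based strategies run into the tractability issues the paper analyzes as soon as horizons or action sets grow.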

    Computers will not acquire general intelligence, but may still rule the world

    Jobst Landgrebe and Barry Smith's book Why Machines Will Never Rule the World argues that artificial general intelligence (AGI) will never be realized. Drawing on theories of complexity, they argue that it is not only technically but mathematically impossible to realize AGI. The book is the result of cooperation between a philosopher and a mathematician. In addition to a thorough treatment of the mathematical modelling of complex systems, the book addresses many fundamental philosophical questions. The authors show that philosophy is still relevant to questions of information technology in general and artificial intelligence in particular. This paper endorses Landgrebe and Smith's argument that artificial general intelligence cannot be realized, but not their conclusion that machines will never rule the world. It is not only a question of what technology can do. An equally important question is what technology does to us. Machines may not take over the world in a literal sense, but they may have many negative effects. Some of the most serious can be placed under the category of the "degeneration effect".

    Inductive reasoning and Kolmogorov complexity

    Reasoning to obtain the "truth" about reality from external data is an important, controversial, and complicated issue in man's effort to understand nature. (Yet, today, we try to make machines do this.) There have been old useful principles, new exciting models, and intricate theories scattered across vastly different areas, including philosophy of science, statistics, computer science, and psychology. We focus on inductive reasoning in correspondence with the ideas of R. J. Solomonoff. While his proposals result in perfect procedures, they involve the noncomputable notion of Kolmogorov complexity. In this paper we develop the thesis that Solomonoff's method is fundamental in the sense that many other induction principles can be viewed as particular ways to obtain computable approximations to it. We demonstrate this explicitly in the cases of Gold's paradigm for inductive inference, Rissanen's minimum description length (MDL) principle, Fisher's maximum likelihood principle, and Jaynes' maximum entropy principle. We present several new theorems and derivations to this effect. We also delimit what can and cannot be learned in terms of Kolmogorov complexity, and we describe an experiment in machine learning of handwritten characters. Finally, we give an application of Kolmogorov complexity in Valiant-style learning, where we want to learn a concept probably approximately correctly in feasible time from feasibly many examples.
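The thesis that practical induction methods are computable approximations to Solomonoff's ideal can be illustrated with an off-the-shelf compressor: the compressed length of a string is a computable upper bound on its (noncomputable) Kolmogorov complexity, which is exactly the kind of bound MDL-style methods exploit. A minimal sketch, using zlib purely for illustration:

```python
import os
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Compressed length of s: a crude but computable upper bound,
    up to an additive constant for the decompressor, on the
    noncomputable Kolmogorov complexity K(s)."""
    return len(zlib.compress(s, 9))

regular = b"ab" * 500     # generated by a very short program
noise = os.urandom(1000)  # almost surely incompressible

# The regular string compresses to a tiny fraction of its length,
# while the random bytes do not compress at all: the computable
# analogue of K(regular) being far smaller than K(noise).
```

No compressor can certify a lower bound this way, which mirrors why Kolmogorov complexity itself remains out of computational reach.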

    The locals casino as a social network – can an interconnected community of players detect differences in hold?

    It is difficult for individual players to detect differences in theoretical hold between slot machines without playing an unrealistically large number of games. This difficulty occurs because the fractional loss incurred by a player converges only slowly to the theoretical hold in the presence of volatility designed into slot pay tables. Nevertheless, many operators believe that players can detect changes in hold or differences compared to competition, especially in a locals casino market, and therefore resist increasing holds. Instead of investigating whether individual players can detect differences in hold, we ask whether a population of casino regulars who share information via a network of social connections can detect differences. We present a simulation study, varying factors such as the distribution of holds and volatilities, the density and topology of the social network (i.e. the typical number of social connections, and whether connections are random or form closed groups), and the degree to which an individual's belief about hold is influenced by their peers. We differentiate between conditions where players are kept guessing about the looseness or tightness of the slots and conditions where the belief of the entire locals casino community crystallizes to a correct conclusion about hold.

    Implication statement: Academic studies showing that players cannot detect differences in hold due to volatile pay tables are over-simplified because they do not take into account communication and collective experience in a locals casino community. Network-based simulations can resolve this controversy by determining how effectively a community can learn what individuals cannot.
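The mechanism the abstract describes can be sketched as a DeGroot-style belief-averaging simulation; this toy model and all of its parameter names are our own assumptions, not the authors' study design. Each player keeps a running estimate of per-game loss and repeatedly blends it with the estimates of randomly chosen peers:

```python
import random

def community_estimate(n_players=50, n_rounds=200, hold=0.08,
                       volatility=5.0, n_peers=4, peer_weight=0.5,
                       seed=1):
    """Toy locals-casino community. Each round every player observes
    a noisy loss (mean = true hold, std = pay-table volatility),
    folds it into a running personal estimate, then averages that
    estimate with a few randomly sampled peers. Returns the final
    community-wide mean belief about the hold."""
    rng = random.Random(seed)
    belief = [0.0] * n_players
    for t in range(1, n_rounds + 1):
        # Individual experience: running mean of own noisy observations.
        for i in range(n_players):
            loss = rng.gauss(hold, volatility)
            belief[i] += (loss - belief[i]) / t
        # Social step: blend each belief with a random peer sample.
        snapshot = belief[:]
        for i in range(n_players):
            peers = rng.sample(range(n_players), n_peers)
            peer_avg = sum(snapshot[j] for j in peers) / n_peers
            belief[i] = (1 - peer_weight) * belief[i] + peer_weight * peer_avg
    return sum(belief) / n_players

# An individual estimate after 200 games is still noisy (std roughly
# volatility / sqrt(200), about 0.35 here), but pooling experience
# across the network pulls the community belief toward the true hold.
```

Varying n_peers, peer_weight, or the sampling rule (e.g. fixed cliques instead of random peers) corresponds loosely to the density and topology factors the abstract varies.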

    The challenge of fluid flow 1. The diversity of flow phenomena

    You look up at the sky and see a lovely cloud; you look down and may see lovely ripples on a rivulet (or river). On a hot summer afternoon you see dancing dust devils; on a cold winter evening you can see smoke rising lazily from a chulah, hanging up there as if it has given up. You peer through a telescope and see intense supersonic jets or vast whirling galaxies; you measure in a wind tunnel and sense powerful tornadoes behind an aircraft wing. The universe is full of fluid that flows in crazy, beautiful or fearsome ways. In our machines and in the lab, as in terrestrial nature, one sees this amazing diversity in the flow of as simple a liquid as water or as simple a gas as air. What is it that makes fluid flows so rich, so complex, sometimes so highly ordered that their patterns can adorn a saree border, sometimes so chaotic as to defy analysis? Do the same laws govern all that extraordinary variety? We begin with a picture gallery of a number of visible or visualized flows and consider which ones we understand and which ones we do not, which ones we can compute and which ones we cannot; it will be argued that behind those all-too-common but lovely flows lie deep problems in physics and mathematics that still remain mysteries.

    The role of knowledge in determining identity of long-tail entities

    NIL entities have no accessible representation, which means that their identity cannot be established through traditional disambiguation. Consequently, they have received little attention in entity linking systems and tasks so far. Given the non-redundancy of knowledge about NIL entities, the lack of frequency priors, their potentially extreme ambiguity, and their sheer number, they form an extreme class of long-tail entities and pose a great challenge for state-of-the-art systems. In this paper, we investigate the role of knowledge in establishing the identity of NIL entities mentioned in text. What kind of knowledge can be applied to establish the identity of NILs? Can we potentially link to them at a later point? How can we capture implicit knowledge and fill knowledge gaps in communication? We formulate and test hypotheses to provide insights into these questions. Given the unavailability of instance-level knowledge, we propose to enrich the locally extracted information with profiling models that rely on background knowledge in Wikidata. We describe and implement two profiling machines based on state-of-the-art neural models, and we evaluate both their intrinsic behavior and their impact on the task of determining the identity of NIL entities.

    What is consciousness? Artificial intelligence, real intelligence, quantum mind and qualia

    We approach the question 'What is consciousness?' in a new way: not through Descartes' 'systematic doubt', but by asking how organisms find their way in their world. Finding one's way involves finding possible uses of features of the world that might be beneficial, or avoiding those that might be harmful. 'Possible uses of X to accomplish Y' are 'affordances'. The number of uses of X is indefinite (or unknown); the different uses are unordered, not listable, and not deducible from one another. All biological adaptations are affordances seized either by heritable variation and selection or, far faster, by the organism acting in its world and finding uses of X to accomplish Y. On this basis, we reach rather astonishing conclusions: 1. Artificial general intelligence based on universal Turing machines (UTMs) is not possible, since UTMs cannot 'find' novel affordances. 2. Brain-mind is not purely classical physics, for no classical-physics system can be an analogue computer whose dynamical behaviour is isomorphic to 'possible uses'. 3. Brain-mind must be partly quantum, supported by increasing evidence at 6.0 to 7.3 sigma. 4. Based on Heisenberg's interpretation of the quantum state as 'potentia' converted to 'actuals' by measurement (an interpretation that is not a substance dualism), a natural hypothesis is that mind actualizes potentia. This is supported at 5.2 sigma. Mind's actualizations of entangled brain-mind-world states are then experienced as qualia and allow 'seeing' or 'perceiving' of uses of X to accomplish Y. We can and do jury-rig; computers cannot. 5. Beyond familiar quantum computers, we discuss the potentialities of trans-Turing systems.