Analysis, Clarification and Extension of the Theory of Strongly Semantic Information
This paper analyzes certain technical details of Floridi's Theory of Strongly Semantic Information. It provides a clarification regarding desirable properties of degrees-of-informativeness functions by rejecting three of Floridi's original constraints and proposing a replacement constraint. Finally, the paper briefly explores the notion of quantities of inaccuracy and presents an analysis that mimics Floridi's analysis of quantities of vacuity.
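The quantities mentioned here can be made concrete with a short sketch. The notation below (degree of discrepancy ϑ(σ) and degree of informativeness ι(σ)) follows what is commonly taken to be Floridi's formulation, and the inaccuracy integral is only an illustrative guess at the kind of parallel analysis described, not a formula taken from this paper.

```latex
% Hedged sketch, assuming the standard notation: \vartheta(\sigma) \in [-1,1]
% is the degree of discrepancy (positive for vacuity, negative for inaccuracy)
% and \iota(\sigma) the degree of informativeness.
\[
  \iota(\sigma) = 1 - \vartheta(\sigma)^{2}
\]
% Quantity of vacuity for a vacuous \sigma with \vartheta(\sigma) = v > 0:
\[
  \beta_{\mathrm{vac}}(\sigma) = \int_{0}^{v} \iota(x)\,dx = v - \frac{v^{3}}{3}
\]
% An analogous quantity of inaccuracy for \vartheta(\sigma) = -u < 0 could be
% set up the same way (illustrative only):
\[
  \beta_{\mathrm{inacc}}(\sigma) = \int_{-u}^{0} \iota(x)\,dx = u - \frac{u^{3}}{3}
\]
```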
The instructional information processing account of digital computation
What is nontrivial digital computation? It is the processing of discrete data through discrete state transitions in accordance with finite instructional information. The motivation for our account is that many previous attempts to answer this question are inadequate, and that this account accords with the common intuition that digital computation is a type of information processing. We use the notion of reachability in a graph to defend this characterization in memory-based systems and to underscore the importance of instructional information for digital computation. We argue that our account evaluates positively against adequacy criteria for accounts of computation.
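As a rough illustration of the kind of picture this account describes (a toy construction, not the authors' formalism), the sketch below treats a finite instruction table as the "instructional information", processes discrete data through discrete state transitions, and checks reachability over the induced state graph. The parity example and all names are assumptions made for illustration.

```python
# Toy sketch: discrete data processed by discrete state transitions that are
# governed by a finite instruction table, plus reachability over the state graph.
from collections import deque

# Finite "instructional information": (state, input symbol) -> next state
INSTRUCTIONS = {
    ("even", "1"): "odd",
    ("even", "0"): "even",
    ("odd", "1"): "even",
    ("odd", "0"): "odd",
}

def run(state, data):
    """Process discrete data through discrete state transitions."""
    for symbol in data:
        state = INSTRUCTIONS[(state, symbol)]
    return state

def reachable(start):
    """States reachable from `start` in the induced transition graph (BFS)."""
    seen, frontier = {start}, deque([start])
    while frontier:
        current = frontier.popleft()
        for (src, _), dst in INSTRUCTIONS.items():
            if src == current and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

print(run("even", "1101"))   # -> 'odd'  (parity of the 1s in the input)
print(reachable("even"))     # -> {'even', 'odd'}
```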
Augmented Reality All Around Us: Power and Perception at a Crossroads
In this paper we continue to explore the ethics and social impact of augmented visual field devices (AVFDs). Recently, Microsoft announced the pending release of HoloLens, and Magic Leap filed a patent application for technology that will project light directly onto the wearer's retina. Here we explore the notion of deception in relation to the impact these devices have on developers, users, and non-users as they interact via these devices. These sorts of interactions raise questions regarding autonomy and suggest a strong need for informed consent protocols. We identify issues of ownership that arise due to the blending of physical and virtual space and important ways that these devices impact trust. Finally, we explore how these devices impact individual identity and thus raise the question of ownership of the space between an object and someone's eyes. We conclude that developers ought to take time to design and implement a natural and easy-to-use informed consent system with these devices.
Moral Responsibility for Computing Artifacts: The Rules and Issues of Trust
"The Rules" are found in a collaborative document (started in March 2010) that states principles for responsibility when a computer artifact is designed, developed, and deployed into a sociotechnical system. At this writing, over 50 people from nine countries have signed onto The Rules (Ad Hoc Committee, 2010). Unlike codes of ethics, The Rules are not tied to any organization, and computer users as well as computing professionals are invited to sign onto The Rules. The emphasis in The Rules is that both users and professionals have responsibilities in the production and use of computing artifacts. In this paper, we use The Rules to examine issues of trust.
Why We Should Have Seen That Coming: Comments on Microsoft's Tay "Experiment," and Wider Implications
In this paper we examine the case of Tay, the Microsoft AI chatbot that was launched in March 2016. After less than 24 hours, Microsoft shut down the experiment because the chatbot was generating tweets that were judged to be inappropriate, including racist, sexist, and anti-Semitic language. We contend that the case of Tay illustrates a problem with the very nature of learning software (LS, a term that describes any software that changes its program in response to its interactions) that interacts directly with the public, and with the developer's role and responsibility associated with it. We make the case that when LS interacts directly with people or indirectly via social media, the developer has additional ethical responsibilities beyond those of standard software. There is an additional burden of care.
On the Responsibility for Uses of Downstream Software
In this paper we explore an issue that is different from whether developers are responsible for the direct impact of the software they write. We examine, instead, in what ways, and to what degree, developers are responsible for the way their software is used "downstream." We review some key scholarship analyzing responsibility in computing ethics, including some recent work by Floridi. We use an adaptation of a mechanism developed by Floridi to argue that there are features of software that can be used as guides to better distinguish situations in which a software developer might share in responsibility for the software's downstream use from those in which the developer likely does not share in that responsibility. We identify five such features and show how they are useful in the model of responsibility that we develop. The features are: closeness to the hardware, risk, sensitivity of data, degree of control over or knowledge of the future population of users, and the nature of the software (general vs. special purpose).
On Using a Model for Downstream Responsibility
The authors identify features of software and the software development process that may contribute to differences in the level of responsibility assigned to software developers when they make their software available for others to use as a tool in building a second piece of software. They call this second use of the software "downstream use."
When AI Moves Downstream
After computing professionals design, develop, and deploy software, what is their responsibility for subsequent uses of that software "downstream" by others? Furthermore, does it matter ethically if the software in question is considered to be artificially intelligent (AI)? The authors have previously developed a model to explore downstream accountability, called the Software Responsibility Attribution System (SRAS). In this paper, we explore three recent publications relevant to downstream accountability, focusing particularly on examples of AI software. Based on our understanding of the three papers, we suggest refinements of SRAS.
The Indeterminacy of Computation
Do the dynamics of a physical system determine what function the system computes? Except in special cases, the answer is no: it is often indeterminate what function a given physical system computes. Accordingly, care should be taken when the question "What does a particular neural system do?" is answered by hypothesising that the system computes a particular function. The phenomenon of the indeterminacy of computation has important implications for the development of computational explanations of biological systems. Additionally, the phenomenon lends some support to the idea that a single neural structure may perform multiple cognitive functions, each subserved by a different computation. We provide an overarching conceptual framework in order to further the philosophical debate on the nature of computational indeterminacy and computational explanation.
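A common way to see how a single physical system can be computationally indeterminate (an illustration from the broader literature on this topic, not necessarily the example used in this paper) is the logic-gate case: the same voltage-level behaviour implements different Boolean functions under different labellings of voltages as bits. A minimal sketch:

```python
# One and the same physical input/output behaviour, described over voltages,
# implements different Boolean functions depending on how voltages are
# labelled as bits (illustrative example, not drawn from the paper).

# Physical behaviour: output is HIGH only when both inputs are HIGH.
def device(v1, v2):
    return "HIGH" if v1 == "HIGH" and v2 == "HIGH" else "LOW"

# Labelling scheme A: HIGH = 1, LOW = 0  ->  the device computes AND.
to_bit_A = {"HIGH": 1, "LOW": 0}
# Labelling scheme B: HIGH = 0, LOW = 1  ->  the same device computes OR.
to_bit_B = {"HIGH": 0, "LOW": 1}

def as_boolean(labelling):
    """Truth table of the device under a given voltage-to-bit labelling."""
    table = {}
    for v1 in ("HIGH", "LOW"):
        for v2 in ("HIGH", "LOW"):
            table[(labelling[v1], labelling[v2])] = labelling[device(v1, v2)]
    return table

print(as_boolean(to_bit_A))  # truth table of AND
print(as_boolean(to_bit_B))  # truth table of OR
```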
On Two-Path Convexity in Multipartite Tournaments
In the context of two-path convexity, we study the rank, Helly number, Radon number, Carathéodory number, and hull number for multipartite tournaments. We show that the maximum Carathéodory number of a multipartite tournament is 3. We then derive tight upper bounds for rank in both general multipartite tournaments and clone-free multipartite tournaments. We show that these same tight upper bounds hold for the Helly number, Radon number, and hull number. We classify all clone-free multipartite tournaments of maximum Helly number, Radon number, hull number, and rank. Finally, we determine all convexly independent sets of clone-free multipartite tournaments of maximum rank.
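For orientation, the convexity notions named here can be sketched as follows; these are the standard abstract-convexity definitions assumed for illustration, and the paper may state them with refinements specific to clone-free multipartite tournaments.

```latex
% Orientation only: standard definitions assumed, not quoted from the paper.
% Two-path convexity: a set $S \subseteq V(T)$ is convex if, whenever
% $u, v \in S$ and $w$ lies on a directed path of length two between them
% (i.e.\ $u \to w \to v$ or $v \to w \to u$), then $w \in S$.
\[
  \operatorname{hull}(A) \;=\; \bigcap \{\, S \supseteq A : S \text{ is convex} \,\}
\]
% Carath\'eodory number: the least $c$ such that every
% $p \in \operatorname{hull}(A)$ lies in $\operatorname{hull}(B)$ for some
% $B \subseteq A$ with $|B| \le c$.
% Rank: the maximum size of a convexly independent set, i.e.\ a set $A$ with
% $a \notin \operatorname{hull}(A \setminus \{a\})$ for every $a \in A$.
```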
- …