Editorial
This special issue seeks papers that provide a convergent research perspective on business futures, i.e., research that draws on multiple disciplinary views and strives to establish fresh integrative frameworks and vocabularies. Addressing the challenge of work culture and intelligent machines in a broad sense necessitates grappling with complicated issues such as motivation, cognition, machine learning, human learning, and system design, among others.
Desire Lines: Open Educational Collections, Memory and the Social Machine
This paper delineates the initial ideas around the development of the Co-Curate North East project. The idea of computerised machines which have a social use and impact was central to the development of the project. The project was designed with and for schools and communities as a digital platform which would collect and aggregate “memory” resources and collections around local area studies and social identity. It was a co-curation process supported by museums and curators which was about the “meshwork” between “official” and “unofficial” archives and collections and the ways in which materials generated from within the schools and community groups could themselves be re-narrated and exhibited online as part of self-organised learning experiences. This paper looks at initial ideas of social machines and the ways in which machines can be used in identity and memory studies. It examines ideas of navigation and visualisation of data and concludes with some initial findings from the early stages of the project about the potential for machines and educational work.
Artificial morality: Making of the artificial moral agents
Abstract:
Artificial Morality is a new, emerging interdisciplinary field centred on the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. The call for moral machines arises from changes in everyday practice, where artificial systems are frequently used in situations ranging from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines grounded in the same ethical principles that society demands of people. Creating such agents raises new challenges. There are philosophical questions about a machine's potential to be an agent, or moral agent, in the first place. Then comes the problem of social acceptance of such machines, regardless of their theoretical agency status. Efforts to resolve this problem have suggested that additional psychological (emotional and cognitive) competence is needed in otherwise cold moral machines. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as the top-down, bottom-up, and hybrid approaches aim to find the best way of developing fully moral agents, but each encounters its own problems along the way.
Universal Intelligence: A Definition of Machine Intelligence
A fundamental problem in artificial intelligence is that nobody really knows
what intelligence is. The problem is especially acute when we need to consider
artificial systems which are significantly different to humans. In this paper
we approach this problem in the following way: We take a number of well known
informal definitions of human intelligence that have been given by experts, and
extract their essential features. These are then mathematically formalised to
produce a general measure of intelligence for arbitrary machines. We believe
that this equation formally captures the concept of machine intelligence in the
broadest reasonable sense. We then show how this formal definition is related
to the theory of universal optimal learning agents. Finally, we survey the many
other tests and definitions of intelligence that have been proposed for
machines.
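For orientation, the measure this abstract refers to is Legg and Hutter's universal intelligence. A sketch of its published form (notation reproduced from memory, so treat the details as indicative rather than authoritative):

```latex
% Universal intelligence \Upsilon of an agent \pi: the agent's expected
% cumulative reward V^{\pi}_{\mu} in each computable environment \mu,
% summed over the space of environments E and weighted by simplicity,
% where K(\mu) is the Kolmogorov complexity of \mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Intuitively, an agent scores highly only if it performs well across many environments, with simple environments counting for more than complex ones.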
Automation of play: theorizing self-playing games and post-human ludic agents
This article offers a critical reflection on the automation of play and its significance for theoretical inquiries into digital games and play. Automation has become an ever more noticeable phenomenon in the domain of video games, expressed by self-playing game worlds, self-acting characters, and non-human agents traversing multiplayer spaces. On the following pages, the author explores various instances of automated non-human play and proposes a post-human theoretical lens, which may help to create a new framework for the understanding of video games, renegotiate the current theories of interaction prevalent in game studies, and rethink the relationship between human players and digital games.
Tuning Hyperparameters in Supervised Learning Models and Applications of Statistical Learning in Genome-Wide Association Studies with Emphasis on Heritability
Machine learning is a buzzword that has inundated popular culture in the last few years. It is a term for a computer method that can automatically learn and improve from data instead of being explicitly programmed at every step. Investigations regarding the best way to create and use these methods are prevalent in research. Machine learning models can be difficult to create because they must be tuned. This dissertation explores the characteristics of tuning three popular machine learning models and finds a way to automatically select a set of tuning parameters. This information was used to create an R software package called EZtune that can be used to automatically tune three widely used machine learning algorithms: support vector machines, gradient boosting machines, and adaboost.
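As a rough illustration of the tuning problem the dissertation addresses: EZtune itself is an R package, so the following Python sketch using scikit-learn's GridSearchCV is an assumed stand-in for the general idea of automatic parameter selection, not EZtune's actual method.

```python
# Sketch: automatic hyperparameter tuning for an SVM via grid search
# with cross-validation (illustrative only; EZtune is an R package).
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate values for the two main RBF-SVM tuning parameters:
# C (misclassification cost) and gamma (kernel width).
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

# Evaluate every (C, gamma) pair with 5-fold cross-validation
# and keep the pair with the best mean accuracy.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)   # selected (C, gamma) pair
print(search.best_score_)    # cross-validated accuracy of that pair
```

The grid values here are arbitrary placeholders; in practice the search space and resampling scheme are themselves design choices, which is exactly the difficulty the dissertation studies.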
The second portion of this dissertation investigates the implementation of machine learning methods in finding locations along a genome that are associated with a trait. The performance of methods that have been commonly used for these types of studies, and some that have not been commonly used, is assessed using simulated data. The effect of the strength of the relationship between the genetic code and the trait is of particular interest. It was found that the strength of this relationship was the most important characteristic in the efficacy of each method.
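A minimal sketch of this kind of simulation study. All parameters here (SNP count, allele frequency, effect size, and a simple per-SNP correlation test) are assumptions for illustration, not the dissertation's actual design:

```python
# Sketch: simulate genotypes, give one SNP a true effect on a
# quantitative trait, and scan each SNP for association.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_snps, causal, effect = 500, 20, 7, 0.8

# Genotypes coded 0/1/2 (minor-allele counts), allele frequency 0.3.
G = rng.binomial(2, 0.3, size=(n, n_snps))

# Trait = effect * causal SNP + noise; `effect` sets the strength of
# the genotype-trait relationship, the key quantity in the study.
y = effect * G[:, causal] + rng.normal(size=n)

# Per-SNP association scan: Pearson correlation p-value at each locus.
pvals = np.array([stats.pearsonr(G[:, j], y)[1] for j in range(n_snps)])
print(int(pvals.argmin()))  # index of the most strongly associated SNP
```

Rerunning this with smaller values of `effect` shows why relationship strength dominates: as the signal shrinks, the causal SNP's p-value blends into the noise floor of the null SNPs.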
The first business computer: a case study in user-driven innovation
In 1949, the world's first business computer application was rolled out. The host for the application was a British catering and food-manufacturing company, which had developed and built its own computer, designed for business data processing. The author traces the endeavour's history and presents an analysis of how and why the company, J. Lyons & Co., was in a natural position to take on the challenge, the precursor of the information revolution we see today.