
    Can Computers Create Art?

    This essay discusses whether computers, using Artificial Intelligence (AI), could create art. First, the history of technologies that automated aspects of art is surveyed, including photography and animation. In each case, there were initial fears and denial of the technology, followed by a blossoming of new creative and professional opportunities for artists. The current hype and reality of AI tools for art making are then discussed, together with predictions about how AI tools will be used. The essay then speculates on whether AI systems could ever be credited with authorship of artwork. It is theorized that art is something created by social agents, and so computers cannot be credited with authorship of art in our current understanding. A few ways that this could change are also hypothesized.
    Comment: to appear in Arts, special issue on Machine as Artist (21st Century).

    Animating the evolution of software

    The use and development of open source software has increased significantly in the last decade. The high frequency of changes and releases across a distributed environment requires good project management tools in order to control the process adequately. However, even with these tools in place, the nature of the development and the fact that developers will often work on many other projects simultaneously mean that the developers are unlikely to have a clear picture of the current state of the project at any time. Furthermore, the poor documentation associated with many projects has a detrimental effect on encouraging new developers to contribute to the software. A typical version control repository contains a mine of information that is not always obvious and not easy to comprehend in its raw form. However, presenting this historical data in a suitable format by using software visualisation techniques allows the evolution of the software over a number of releases to be shown. This allows the changes that have been made to the software to be identified clearly, thus ensuring that the effect of those changes will also be emphasised. This then enables both managers and developers to gain a more detailed view of the current state of the project. The visualisation of evolving software introduces a number of new issues. This thesis investigates some of these issues in detail, and recommends a number of solutions in order to alleviate the problems that may otherwise arise. The solutions are then demonstrated in the definition of two new visualisations. These use historical data contained within version control repositories to show the evolution of the software at a number of levels of granularity. Additionally, animation is used as an integral part of both visualisations - not only to show the evolution by representing the progression of time, but also to highlight the changes that have occurred. Previously, the use of animation within software visualisation has been primarily restricted to small-scale, hand-generated visualisations. However, this thesis shows the viability of using animation within software visualisation with automated visualisations on a large scale. In addition, evaluation of the visualisations has shown that they are suitable for showing the changes that have occurred in the software over a period of time, and subsequently how the software has evolved. These visualisations are therefore suitable for use by developers and managers involved with open source software. In addition, they also provide a basis for future research in evolutionary visualisations, software evolution and open source development.
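    As an illustration of the kind of historical data such visualisations consume, the sketch below (assuming a local Git repository and the standard git log command, neither of which is prescribed by the thesis) extracts per-file change counts between consecutive release tags; the resulting per-release maps could drive an animated view of the software's evolution.

import subprocess
from collections import Counter

def changes_between(repo, older_tag, newer_tag):
    """Count how many commits touched each file between two release tags."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--name-only", "--pretty=format:",
         f"{older_tag}..{newer_tag}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line.strip())

def evolution_history(repo, release_tags):
    """Map each release to a file -> change-count table, one possible
    input for an animated evolution visualisation."""
    history = {}
    for older, newer in zip(release_tags, release_tags[1:]):
        history[newer] = changes_between(repo, older, newer)
    return history

# Hypothetical usage; the repository path and tag names are placeholders.
# history = evolution_history("/path/to/project", ["v1.0", "v1.1", "v1.2"])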

    LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning

    We present a novel procedural framework to generate an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to design accurate algorithms or training models for crowded scene understanding. Our overall approach is composed of two components: a procedural simulation framework for generating crowd movements and behaviors, and a procedural rendering framework to generate different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior, flow, lighting conditions, viewpoint, noise, etc. Furthermore, we can increase the realism by combining synthetically-generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets by improving the accuracy of pedestrian detection and crowd behavior classification algorithms. LCrowdV will be released on the WWW.
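    The procedural idea can be illustrated with a minimal sketch: sample the scene parameters that drive one synthetic clip, and keep those parameters as the clip's ground-truth labels, since the simulator knows exactly what it rendered. The parameter names and ranges below are illustrative assumptions, not the actual LCrowdV configuration, and the rendering step is stubbed out.

import random

# Illustrative label vocabulary; the real LCrowdV parameter space is richer.
BEHAVIORS = ["calm", "congested", "panic"]
LIGHTING = ["day", "dusk", "night"]

def sample_scene_params(rng):
    """Randomly sample the parameters that drive one synthetic crowd clip."""
    return {
        "num_pedestrians": rng.randint(10, 500),
        "density": round(rng.uniform(0.1, 4.0), 2),  # pedestrians per square metre
        "behavior": rng.choice(BEHAVIORS),
        "lighting": rng.choice(LIGHTING),
        "viewpoint_deg": round(rng.uniform(0.0, 360.0), 1),
        "noise_sigma": round(rng.uniform(0.0, 0.05), 3),
    }

def generate_dataset(num_clips, seed=0):
    """Yield (clip_id, labels) pairs; the sampled parameters double as
    ground-truth labels because the simulator knows what it rendered."""
    rng = random.Random(seed)
    for clip_id in range(num_clips):
        params = sample_scene_params(rng)
        # render_clip(params) would invoke the crowd simulator and renderer here.
        yield clip_id, params

for clip_id, labels in generate_dataset(3):
    print(clip_id, labels)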

    Development and preliminary evaluation of a novel low cost VR-based upper limb stroke rehabilitation platform using Wii technology.

    Purpose: This paper proposes a novel system (using the Nintendo Wii remote) that offers customised, non-immersive, virtual reality-based, upper-limb stroke rehabilitation and reports on promising preliminary findings with stroke survivors.
    Method: The system's novelty lies in the high accuracy of the full kinematic tracking of the upper limb movement in real time, offering a strong personal connection between the stroke survivor and a virtual character when executing therapist-prescribed adjustable exercises/games. It allows the therapist to monitor patient performance and to individually calibrate the system in terms of range of movement, speed and duration.
    Results: The system was tested for acceptability with three stroke survivors with differing levels of disability. Participants reported an overwhelming connection with the system and avatar. A two-week single case study with a long-term stroke survivor showed positive changes in all four outcome measures employed, with the participant reporting better wrist control and greater functional use. Activities that were deemed too challenging or too easy were associated with lower scores of enjoyment/motivation, highlighting the need for activities to be individually calibrated.
    Conclusions: Given the preliminary findings, it would be beneficial to extend the case study in terms of duration and participants and to conduct an acceptability and feasibility study with community-dwelling survivors.
    Implications for Rehabilitation: Low-cost, off-the-shelf game sensors, such as the Nintendo Wii remote, are acceptable to stroke survivors as an add-on to upper limb stroke rehabilitation, but have to be customised to provide high-fidelity, real-time kinematic tracking of the arm movement. Providing therapists with real-time and remote monitoring of the quality of the movement, and not just the amount of practice, is imperative for getting a better understanding of each patient and administering the right amount and type of exercise. The ability to translate therapeutic arm movement into individually calibrated exercises and games allows accommodation of the wide range of movement difficulties seen after stroke, and the ability to adjust these activities (in terms of speed, range of movement and duration) will aid motivation and adherence - key issues in rehabilitation. With increasing pressures on resources and the move to more community-based rehabilitation, the proposed system has the potential to promote the intensity of practice necessary for recovery in both community and acute settings.
    Funding: The National Health Service (NHS) London Regional Innovation Fund.
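    The calibration step described above can be pictured as a simple mapping from a patient's measured range of motion onto the full control range of a game, so that a survivor's limited movement still spans the whole exercise. The following is a minimal sketch; the angle values and data structures are hypothetical, as the paper does not publish its calibration code.

from dataclasses import dataclass

@dataclass
class Calibration:
    """Per-patient limits recorded by the therapist during setup (degrees)."""
    min_angle: float  # smallest reachable wrist/arm angle
    max_angle: float  # largest reachable angle

def to_game_input(angle_deg, cal):
    """Map a measured joint angle onto the 0..1 range a game expects,
    clamping readings that fall outside the calibrated range."""
    span = cal.max_angle - cal.min_angle
    if span <= 0:
        raise ValueError("calibrated range must be positive")
    value = (angle_deg - cal.min_angle) / span
    return min(1.0, max(0.0, value))

# Hypothetical example: a survivor with only 10-40 degrees of wrist movement
# still drives the avatar across its full on-screen range.
cal = Calibration(min_angle=10.0, max_angle=40.0)
print(to_game_input(25.0, cal))  # -> 0.5, mid-range game input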

    Teaching programming at a distance: the Internet software visualization laboratory

    This paper describes recent developments in our approach to teaching computer programming in the context of a part-time Masters course taught at a distance. Within our course, students are sent a pack which contains integrated text, software and video course material, using a uniform graphical representation to tell a consistent story of how the programming language works. The students communicate with their tutors over the phone and through surface mail. Through our empirical studies and experience teaching the course we have identified four current problems: (i) students' difficulty mapping between the graphical representations used in the course and the programs to which they relate, (ii) the lack of a conversational context for tutor help provided over the telephone, (iii) helping students who, due to their other commitments, tend to study at 'unsociable' hours, and (iv) providing software for the constantly changing and expanding range of platforms and operating systems used by students. We hope to alleviate these problems through our Internet Software Visualization Laboratory (ISVL), which supports individual exploration, and both synchronous and asynchronous communication. When working alone, students are aided by the extra mappings provided between the graphical representations used in the course and their computer programs, overcoming the problems of the original notation. ISVL can also be used as a synchronous communication medium whereby one of the users (generally the tutor) can provide an annotated demonstration of a program and its execution, a far richer alternative to technical discussions over the telephone. Finally, ISVL can be used to support asynchronous communication, helping students who work at unsociable hours by allowing the tutor to prepare short educational movies for them to view when convenient. The ISVL environment runs on a conventional web browser and is therefore platform independent, has modest hardware and bandwidth requirements, and is easy to distribute and maintain. Our planned experiments with ISVL will allow us to investigate ways in which new technology can be most appropriately applied in the service of distance education.
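    One way to picture the asynchronous 'educational movies' is as a sequence of timestamped, annotated visualization events that the tutor records and the student replays later. The sketch below is a minimal, hypothetical recorder; the event names and file format are assumptions, not ISVL's actual internals.

import json
import time

class DemoRecorder:
    """Record a tutor's demonstration as timestamped events for later replay."""

    def __init__(self):
        self.start = time.time()
        self.events = []

    def log(self, kind, **payload):
        """Append one annotated event, stamped relative to the session start."""
        self.events.append({"t": round(time.time() - self.start, 3),
                            "kind": kind, **payload})

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.events, f, indent=2)

# Hypothetical recording session: the tutor steps through a program,
# highlighting lines and adding commentary for asynchronous viewing.
rec = DemoRecorder()
rec.log("highlight", line=12, note="execution enters the recursive call here")
rec.log("step", line=13)
rec.log("annotation", text="note how the accumulator grows on each call")
rec.save("demo_movie.json")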

    Visual Expectations in Infants: Evaluating the Gaze-Direction Model

    Schlesinger (in press) recently proposed a model of eye movements as a tool for investigating infants’ visual expectations. In the present study, this gaze-direction model was evaluated by (a) generating a set of predictions concerning how infants distribute their attention during possible and impossible events, and (b) testing these predictions in a replication of Baillargeon’s "car study" (1986; Baillargeon & DeVos, 1991). We find that the model successfully predicts general features of infants’ gaze direction, but not specific differences obtained during the possible and impossible events. The implications of these results for infant cognition research and theory are discussed.