
    First Steps towards Underdominant Genetic Transformation of Insect Populations

    The idea of introducing genetic modifications into wild populations of insects to stop them from spreading diseases is more than 40 years old. Synthetic disease-refractory genes have been successfully generated for mosquito vectors of dengue fever and human malaria. Equally important is the development of population transformation systems to drive and maintain disease-refractory genes at high frequency in populations. We demonstrate an underdominant population transformation system in Drosophila melanogaster that is both spatially self-limiting and reversible to the original genetic state. Both population transformation and its reversal can be largely achieved within as few as five generations. The described genetic construct {Ud} is composed of two genes: (1) a UAS-RpL14.dsRNA targeting RNAi to a haploinsufficient gene, RpL14, and (2) an RNAi-insensitive RpL14 rescue. In this proof-of-principle system the UAS-RpL14.dsRNA knock-down gene is placed under the control of an Actin5c-GAL4 driver located on a different chromosome to the {Ud} insert. This configuration would not be effective in wild populations without incorporating the Actin5c-GAL4 driver as part of the {Ud} construct (or replacing the UAS promoter with an appropriate direct promoter). It is anticipated, however, that the approach underlying this underdominant system could be applied to a number of species.

    Labanotation for design of movement-based interaction

    This paper reports findings from a study of Labanotation, an established movement notation, as a design tool for movement-based interaction, where movements of the human body are direct input to technology. Using Labanotation, we transcribed movements performed by players of two different EyeToy games. Our analysis identified a range of advantages and disadvantages of the potential use of Labanotation in design. Its major disadvantage is the effort required to learn how to use it, but it supports a representation of movement that can be easily linked to the context and point of interaction. This provides a valuable foundation for the design of movement-based interaction.

    How it feels, not just how it looks: when bodies interact with technology

    This paper presents thoughts to extend our understanding of the bodily aspects of technology interactions. The aim of the paper is to offer a way of looking at the role our kinaesthetic sense plays in human-computer interaction. We approach this issue by framing it around how our bodies establish relationships with things when interacting with technology. Five aspects of a conceptual tool are introduced: body-thing dialogue, potential for action, within-reach, out-of-reach, and movement expression. We discuss the role this tool can play in our thinking about, further exploration of, and eventually our design for movement-enabled technology interactions. The idea is that it can help us consider not just how a design or a technology might look, but also how it might feel to use.

    Age-friendly, how are ye: Welcoming the grace and wisdom-making of older learners

    Movement-based interfaces assume that their users move. Users have to perform exercises, they have to dance, they have to play golf or football, or they want to train particular bodily skills. Many examples of such interfaces exist, sometimes asking for subtle interaction between user and interface and sometimes asking for ‘brute force’ interaction. Often these interfaces mediate between players of a game; obviously, one of the players may be a virtual human. We embed this interface research in ambient intelligence and entertainment computing research, and the interfaces we consider are not only mediating but also ‘add’ intelligence to the interaction. Intelligent movement-based interfaces, being able to know and learn about their users, should also be able to keep their users engaged in the interaction. This chapter discusses ‘flow’ and ‘immersion’ for movement-based interfaces and looks at the possible role of interaction synchrony in measuring and supporting engagement.

    Movement-based co-creation of Adaptive Architecture

    Research in Ubiquitous Computing, Human-Computer Interaction and Adaptive Architecture converges in the study of movement-based interaction with our environments. Although movement capture technologies have become commonplace, the design of such interactions, and their consequences for architecture, require further research. This paper combines previous research in this space with the development and evaluation of the MOVE research platform, which allows the investigation of movement-based interactions in Adaptive Architecture. Using a Kinect motion sensor, MOVE tracks selected body movements of a person and allows the flexible mapping of those movements to the movement of prototype components. In this way, a person inside MOVE can immediately explore architectural form as it is created around them through the body. A sensitizing study with martial arts practitioners highlighted the potential use of MOVE as a training device and provided further insights into the approach and the specific implementation of the prototype. We discuss how the feedback loop between person and environment shapes and limits interaction, and how the selectiveness of this ‘mirror’ becomes useful in practice and training. Drawing on previous work, we describe movement-based, architectural co-creation enabled by MOVE: (1) designers of movement-based interaction embedded in Adaptive Architecture need to draw on and design around the correspondences between person and environment; (2) inhabiting the created feedback loops results in an ongoing form-creation process that is egocentric as well as performative, and embodied as well as without contact.
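The "flexible mapping" from tracked body movements to prototype components described in this abstract can be illustrated with a minimal sketch. This is not the MOVE platform's actual code: the joint names, ranges, and `ComponentMapping` class are hypothetical, and a real system would read skeleton frames from a Kinect SDK rather than a hard-coded dictionary.

```python
# Illustrative sketch of a joint-to-component mapping; all names and ranges
# are assumptions for this example, not the MOVE platform's API.

def clamp(value, lo, hi):
    """Keep a value inside [lo, hi]."""
    return max(lo, min(hi, value))

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap a value from one range to another, clamped to the output range."""
    if in_hi == in_lo:
        return out_lo
    t = (value - in_lo) / (in_hi - in_lo)
    return clamp(out_lo + t * (out_hi - out_lo),
                 min(out_lo, out_hi), max(out_lo, out_hi))

class ComponentMapping:
    """Binds one coordinate of one tracked joint to one prototype component's actuator."""
    def __init__(self, joint, axis, joint_range, actuator_range):
        self.joint = joint                  # skeleton joint name
        self.axis = axis                    # index into an (x, y, z) tuple
        self.joint_range = joint_range      # expected joint coordinate range (metres)
        self.actuator_range = actuator_range  # component output range (e.g. degrees)

    def actuate(self, skeleton):
        """Compute the actuator target for the current skeleton frame."""
        coord = skeleton[self.joint][self.axis]
        return map_range(coord, *self.joint_range, *self.actuator_range)

# One skeleton frame as a Kinect-style dict of joint -> (x, y, z) in metres.
frame = {"hand_right": (0.4, 1.6, 2.0)}

# Hypothetical mapping: raising the right hand (y from 0.8 m to 2.0 m)
# opens a wall panel (0 to 90 degrees).
panel = ComponentMapping("hand_right", axis=1,
                         joint_range=(0.8, 2.0), actuator_range=(0.0, 90.0))
print(panel.actuate(frame))  # hand at 1.6 m maps to roughly 60 degrees
```

Because each `ComponentMapping` is an independent binding, different joints (or the same joint) can be remapped to different components at runtime, which is the kind of selective, flexible "mirror" the paper describes.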