
    Toward homochiral protocells in noncatalytic peptide systems

    The activation-polymerization-epimerization-depolymerization (APED) model of Plasson et al. has recently been proposed as a mechanism for the evolution of homochirality on prebiotic Earth. The dynamics of the APED model in two-dimensional spatially extended systems is investigated for various realistic reaction parameters. It is found that the APED system allows for the formation of isolated homochiral proto-domains surrounded by a racemate. A diffusive slowdown of the APED network, such as that induced by tidal motion or evaporating pools and lagoons, leads to the stabilization of homochiral bounded structures, as expected in the first self-assembled protocells. Comment: 10 pages, 5 figures.
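    The spatial mechanism described above can be pictured with a toy reaction-diffusion sketch. The Python snippet below is a minimal illustration only, assuming a drastically simplified reaction term: it is not Plasson et al.'s APED rate network, and the rate constants, grid size, and diffusion coefficient are arbitrary. It merely shows the generic ingredients (local chiral feedback plus diffusion on a 2D grid) under which locally homochiral domains can persist when diffusion is slow.

```python
# Toy 2D reaction-diffusion sketch loosely inspired by chiral symmetry breaking.
# NOTE: the reaction term below is illustrative only; it is NOT the APED rate
# network of Plasson et al. Parameters (k, mu, D_diff, dt) are arbitrary choices.
import numpy as np

def laplacian(c):
    """Discrete 5-point Laplacian with periodic boundaries."""
    return (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
            np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c)

def step(L, D, k=1.0, mu=0.5, D_diff=0.05, dt=0.01):
    """One explicit Euler step: logistic growth of each enantiomer plus
    mutual antagonism (a stand-in for the epimerization feedback)."""
    total = L + D + 1e-12
    reac_L = k * L * (1.0 - total) - mu * L * D
    reac_D = k * D * (1.0 - total) - mu * L * D
    L = L + dt * (reac_L + D_diff * laplacian(L))
    D = D + dt * (reac_D + D_diff * laplacian(D))
    return np.clip(L, 0, None), np.clip(D, 0, None)

rng = np.random.default_rng(0)
N = 128
L = 0.1 + 0.01 * rng.random((N, N))   # near-racemic initial condition
D = 0.1 + 0.01 * rng.random((N, N))
for _ in range(5000):
    L, D = step(L, D)
ee = (L - D) / (L + D + 1e-12)        # local enantiomeric excess
print("mean |ee|:", float(np.abs(ee).mean()))
```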

    Segregating Event Streams and Noise with a Markov Renewal Process Model

    DS and MP are supported by EPSRC Leadership Fellowship EP/G007144/1

    3D Composer: A Software for Micro-composition

    The aim of this compositional research project is to find new paradigms of expression and representation of musical information, supported by technology. This may further our understanding of how artistic intention materialises during the production of a musical work. A further aim is to create a software device which allows the user to generate, analyse and manipulate abstract musical information within a multi-dimensional environment. The main intent of this software and composition portfolio is to examine the process involved in developing a compositional tool, to verify how transformations applied to the conceptualisation of musical abstraction affect the musical outcome, and to demonstrate how this transformational process can be useful in a creative context. This thesis offers a reflection upon various technological and conceptual aspects within a dynamic multimedia framework. The discussion situates the artistic work of a composer within the technological sphere, and investigates the role of technology and its influences during the creative process. Notions of space are resituated within a personal compositional direction in order to develop a new framework for musical creation. The author establishes theoretical ramifications and suggests a definition for micro-composition. The main aspect is the ability to establish a direct conceptual link between visual elements and their correlated musical output, ultimately leading to the design of 3D-Composer, a software tool for the visualisation of musical information intended to assist composers in creating works within a new methodological and conceptual realm. Of particular importance is the ability to transform musical structures in three-dimensional space, based on the geometric properties of micro-composition. The compositions Six Electroacoustic Studies and Dada 2009 demonstrate the use of the software. The formalisation process was derived from a transposition of influences of the early twentieth century avant-garde period to a contemporary digital studio environment utilising new media and computer technologies for musical expression.
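    The link between geometric operations and musical outcomes that motivates micro-composition can be illustrated independently of the 3D-Composer software itself. The sketch below is a hypothetical example, not code from this portfolio: note events are treated as points (onset, pitch, velocity) in three-dimensional space, and a rotation of that point cloud becomes a transformation of the musical material.

```python
# Hypothetical illustration of geometric micro-composition: note events as
# 3D points (onset time, MIDI pitch, velocity) transformed by a rotation.
# This is NOT the 3D-Composer code base, just a sketch of the underlying idea.
import numpy as np

def rotate_about_velocity_axis(points, degrees):
    """Rotate (time, pitch, velocity) points about the velocity axis,
    mixing the time and pitch dimensions of the material."""
    theta = np.radians(degrees)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    return points @ rot.T

notes = np.array([[0.0, 60, 80],   # (onset in beats, MIDI pitch, velocity)
                  [1.0, 64, 70],
                  [2.0, 67, 90]], dtype=float)
print(rotate_about_velocity_axis(notes, 15.0))
```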

    Dynamic mapping strategies for interactive art installations: an embodied combined HCI HRI HHI approach

    This paper proposes a theoretical framework for dealing with the paradigm of interactivity in new media art, and shows how the broad use of the term across different research fields can lead to misunderstandings. The paper presents a conceptual view of how interaction in new media art can be implemented from an embodied approach that unites views from HCI, HRI and HHI. The focus is on intuitive mapping of a multitude of sensor data, extended through the paradigms of (1) finite state machines (FSMs) to address dynamic mapping strategies, (2) mediality to address aisthesis, and (3) embodiment to address valid mapping strategies originating from natural body movements. The theory put forward is illustrated by a case study.
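    A minimal sketch of point (1), a finite state machine that switches between mapping strategies, is given below. The state names, the proximity sensor, the accelerometer input, and the thresholds are all invented for illustration and are not taken from the paper's case study.

```python
# Hypothetical finite state machine that switches sensor-to-output mapping
# strategies in an interactive installation. States, the proximity sensor,
# and the thresholds are illustrative assumptions, not the paper's design.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class MappingFSM:
    state: str = "idle"

    def transition(self, proximity: float) -> None:
        """Change mapping state based on how close a visitor is (metres)."""
        if proximity < 1.0:
            self.state = "intimate"
        elif proximity < 3.0:
            self.state = "engaged"
        else:
            self.state = "idle"

    def map_sensors(self, accel: float) -> float:
        """Apply the mapping strategy associated with the current state."""
        strategies: Dict[str, Callable[[float], float]] = {
            "idle":     lambda a: 0.1 * a,          # subdued response
            "engaged":  lambda a: a,                # direct one-to-one mapping
            "intimate": lambda a: min(1.0, a ** 2), # exaggerated, nonlinear
        }
        return strategies[self.state](accel)

fsm = MappingFSM()
fsm.transition(proximity=0.5)
print(fsm.state, fsm.map_sensors(accel=0.7))
```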

    Affective Medicine: a review of Affective Computing efforts in Medical Informatics

    Background: Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as “computing that relates to, arises from, or deliberately influences emotions”. AC enables the investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. Objectives: 1) To review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Methods: Aspects related to the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and related literature. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. Results: The conferences, European research projects and research publications presented illustrate the recent increase of interest in AC within the medical community. Tele-home healthcare, ambient intelligence (AmI), ubiquitous monitoring, e-learning and virtual communities with emotionally expressive characters for elderly or impaired people are a few of the areas where the potential of AC has been recognized and applications have emerged. Conclusions: A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The body of work and projects reviewed in this paper points to an ambitious and promising future for the field of affective medicine.

    Synthesis of Attributed Feature Models From Product Descriptions: Foundations

    Feature modeling is a widely used formalism for characterizing a set of products (also called configurations). As manual elaboration is a long and arduous task, numerous techniques have been proposed to reverse engineer feature models from various kinds of artefacts. However, none of them synthesizes feature attributes (or constraints over attributes), despite the practical relevance of attributes for documenting the values that differ across a range of products. In this report, we develop an algorithm for synthesizing attributed feature models from a set of product descriptions. We present sound, complete, and parametrizable techniques for computing all possible hierarchies, feature groups, placements of feature attributes, domain values, and constraints. We perform a complexity analysis with respect to the number of features, attributes, configurations, and the domain size. We also evaluate the scalability of our synthesis procedure using randomized configuration matrices. This report is a first step that aims to lay the foundations for synthesizing attributed feature models.
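    Two elementary building blocks of such a synthesis procedure, recovering an attribute's domain and mining simple binary "requires" constraints from a configuration matrix, can be sketched as follows. The matrix format, the example features, and the pairwise implication check are simplifying assumptions made here; they do not reproduce the report's full algorithm.

```python
# Toy sketch of two sub-steps in attributed feature model synthesis:
# (1) collect the domain of each attribute and (2) mine binary "requires"
# constraints between Boolean features from a configuration matrix.
# This is a simplification, not the report's complete procedure.
from itertools import permutations

# Each row is one product description: Boolean features plus one attribute.
configs = [
    {"wifi": True,  "camera": True,  "storage_gb": 32},
    {"wifi": True,  "camera": False, "storage_gb": 16},
    {"wifi": False, "camera": False, "storage_gb": 16},
]

def attribute_domain(configs, attr):
    """Domain of an attribute = the set of values seen across products."""
    return sorted({c[attr] for c in configs})

def mine_requires(configs, features):
    """f1 requires f2 if every product having f1 also has f2."""
    constraints = []
    for f1, f2 in permutations(features, 2):
        rows = [c for c in configs if c[f1]]
        if rows and all(c[f2] for c in rows):
            constraints.append((f1, f2))
    return constraints

print(attribute_domain(configs, "storage_gb"))     # [16, 32]
print(mine_requires(configs, ["wifi", "camera"]))  # [('camera', 'wifi')]
```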

    Support for Learning Synthesiser Programming

    When learning an instrument, students often like to emulate the sound and style of their favourite performers. The learning process takes many years of study and practice. In the case of synthesisers, the vast parameter space involved can be daunting and unintuitive to the novice, making it hard to define their desired sound and difficult to understand how it was achieved. Previous research has produced methods for automatically determining an appropriate parameter set to produce a desired sound, but this can still require many parameters and does not explain or demonstrate the effect of particular parameters on the resulting sound. As a first step towards solving this problem, this paper presents a new approach to searching the synthesiser parameter space to find a sound, reformulating it as a multi-objective optimisation problem (MOOP) in which two competing objectives (closeness of perceived sonic match and number of parameters) are considered. As a proof of concept, a Pareto-optimal search algorithm (NSGA-II) is applied to CSound patches of varying complexity to generate a Pareto front of non-dominated (i.e. “equally good”) solutions. The results offer insight into the extent to which the size and nature of parameter sets can be reduced whilst still retaining an acceptable degree of perceived sonic match between target and candidate sound.
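    The two-objective formulation can be made concrete with a small non-dominated filter. The sketch below is not the paper's NSGA-II setup and does not touch CSound; the candidate patches and their objective values are made up. It only shows how candidates would be ranked by the two competing objectives, perceived sonic distance and number of active parameters.

```python
# Minimal two-objective Pareto filter for synthesiser patch candidates.
# Candidates and their objective values are invented for illustration; the
# paper itself applies NSGA-II to CSound patches rather than this brute force.
from typing import List, Tuple

Candidate = Tuple[str, float, int]  # (name, sonic distance, active parameters)

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is no worse on both objectives and better on one."""
    return (a[1] <= b[1] and a[2] <= b[2]) and (a[1] < b[1] or a[2] < b[2])

def pareto_front(pop: List[Candidate]) -> List[Candidate]:
    """Keep every candidate that no other candidate dominates."""
    return [c for c in pop if not any(dominates(o, c) for o in pop if o is not c)]

population = [
    ("patch_a", 0.10, 12),  # close sonic match, but many parameters
    ("patch_b", 0.25, 4),   # rougher match, very few parameters
    ("patch_c", 0.30, 9),   # dominated by patch_b on both objectives
    ("patch_d", 0.08, 15),  # closest match of all, most parameters
]
print(pareto_front(population))
```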

    Developing a flexible and expressive realtime polyphonic wave terrain synthesis instrument based on a visual and multidimensional methodology

    The Jitter extended library for Max/MSP is distributed with a gamut of tools for the generation, processing, storage, and visual display of multidimensional data structures. With additional support for a wide range of media types, and for interaction between these media, the environment presents an ideal working ground for Wave Terrain Synthesis. This research details the practical development of a realtime Wave Terrain Synthesis instrument within the Max/MSP programming environment utilizing the Jitter extended library. Various graphical processing routines are explored in relation to their potential use for Wave Terrain Synthesis.
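    Outside Max/MSP and Jitter, the underlying technique can be stated compactly: a trajectory is traced across a two-dimensional terrain function, and the terrain height along that path becomes the output signal. The NumPy sketch below is a generic illustration of Wave Terrain Synthesis, not a port of the instrument developed in this research; the terrain surface and the elliptical orbit are arbitrary choices.

```python
# Generic Wave Terrain Synthesis sketch: a 2D terrain function sampled along
# an elliptical orbit yields the audio signal. Terrain and orbit choices are
# arbitrary; the instrument in this research is built in Max/MSP/Jitter instead.
import numpy as np

SR = 44100          # sample rate in Hz
DUR = 1.0           # seconds of audio to render
F_ORBIT = 220.0     # orbit (fundamental) frequency in Hz

def terrain(x, y):
    """An example smooth terrain surface z = f(x, y)."""
    return np.sin(2 * np.pi * x) * np.cos(3 * np.pi * y)

t = np.arange(int(SR * DUR)) / SR
phase = 2 * np.pi * F_ORBIT * t
x = 0.5 * np.cos(phase)           # elliptical trajectory over the terrain
y = 0.3 * np.sin(phase)
signal = terrain(x, y)            # one output sample per trajectory point
signal /= np.max(np.abs(signal))  # normalise to [-1, 1]
print(signal[:8])
```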