28 research outputs found

    DirectIVE-- choreographing media for interactive virtual environments

    Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, June 1997. Includes bibliographical references (leaves 62-65). By Flavia Sparacino. M.S.

    Supporting Group Coherence in a Museum Visit


    ADHERE: randomized controlled trial comparing renal function in de novo kidney transplant recipients receiving prolonged-release tacrolimus plus mycophenolate mofetil or sirolimus

    ADHERE was a randomized, open-label, Phase IV study comparing renal function at Week 52 post-kidney transplant in patients who received prolonged-release tacrolimus-based immunosuppressive regimens. On Days 0–27, patients received prolonged-release tacrolimus (initially 0.2 mg/kg/day), corticosteroids, and mycophenolate mofetil (MMF). Patients were randomized on Day 28 to receive either prolonged-release tacrolimus plus MMF (Arm 1) or prolonged-release tacrolimus (~25% dose reduction on Day 42) plus sirolimus (Arm 2). The primary endpoint was glomerular filtration rate by iohexol clearance (mGFR) at Week 52. Secondary endpoints included eGFR, creatinine clearance (CrCl), efficacy failure (patient withdrawal or graft loss), and patient/graft survival. Tolerability was analyzed. The full-analysis set comprised 569 patients (Arm 1: 287; Arm 2: 282). Week 52 mean mGFR was similar in Arm 1 versus Arm 2 (40.73 vs. 41.75 ml/min/1.73 m²; P = 0.405), as were the secondary endpoints, except composite efficacy failure, which was higher in Arm 2 versus Arm 1 (18.2% vs. 11.5%; P = 0.002) owing to a higher postrandomization withdrawal rate due to adverse events (AEs) (14.4% vs. 5.2%). Results from this study show comparable renal function between arms at Week 52, with fewer AEs leading to study discontinuation with prolonged-release tacrolimus plus MMF (Arm 1) versus lower-dose prolonged-release tacrolimus plus sirolimus (Arm 2).

    The Museum Wearable: real-time sensor-driven understanding of visitors' interests for personalized visually-augmented museum experiences

    This paper describes the museum wearable: a wearable computer which orchestrates an audiovisual narration as a function of the visitor's interests, gathered from his/her physical path in the museum and the length of stops. The wearable consists of a small, lightweight computer that people carry inside a shoulder pack. It offers an audiovisual augmentation of the surrounding environment using a small, lightweight eye-piece display (often called a private-eye) attached to conventional headphones. Using custom-built infrared location sensors distributed in the museum space, together with statistical modeling, the museum wearable builds a progressively refined user model and uses it to deliver a personalized audiovisual narration to the visitor. This device enriches and personalizes the museum visit, acting as a visual and auditory storyteller that adapts its story to the audience's interests and guides the public along the path of the exhibit.
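    The pipeline sketched in this abstract (location sensors report where and how long a visitor stops, and a user model is refined from those observations) can be illustrated with a minimal, hypothetical sketch. The visitor-type names ("busy", "greedy", "selective") follow the taxonomy commonly used in museum-visitor studies; the thresholds and function names below are invented for illustration and are not taken from the paper.

    ```python
    # Illustrative sketch (not the paper's code): derive a coarse visitor
    # profile from per-exhibit stop durations reported by location sensors.
    # Type names follow common museum-visitor taxonomies; thresholds are assumed.

    def classify_visitor(stop_seconds):
        """Classify a visitor from a list of stop durations, one per exhibit.

        A zero entry means the visitor skipped that exhibit.
        """
        visited = [s for s in stop_seconds if s > 0]
        if not visited:
            return "busy"                              # walked straight through
        coverage = len(visited) / len(stop_seconds)    # fraction of exhibits visited
        mean_stop = sum(visited) / len(visited)        # average dwell time (seconds)
        if coverage >= 0.8 and mean_stop > 30:
            return "greedy"      # stops almost everywhere, for a long time
        if coverage <= 0.5 and mean_stop > 30:
            return "selective"   # long stops at a few chosen exhibits
        return "busy"            # short stops, quick walkthrough

    print(classify_visitor([120, 0, 0, 0, 90]))  # long stops at 2 of 5 exhibits
    ```

    A real system would refine this estimate incrementally as each new sensor reading arrives, rather than classifying once at the end of the visit.
    
    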

    (Some) computer vision based interfaces for interactive art and entertainment installations

    This paper presents a brief summary of body-tracking tools and interfaces the author developed, and explains how they have been applied to a variety of interactive art and entertainment projects. The purpose of grouping these techniques and related applications is to provide the reader with information on some of the tools available today for computer vision based body tracking, how they can be selected and applied to achieve the desired artistic goal, and their limitations. What these computer vision interfaces have in common is low cost and ease of implementation, as they require only equipment that is commonly available today to most individuals and institutions, such as computers and small cameras. They have the additional advantage that they do not require special calibration procedures, they do not limit body movements with cables or tethers, nor do they require wearing special suits with markers for tracking.

    Sto(ry)chastics: a Bayesian network architecture for combined user modeling, sensor fusion, and computational storytelling for interactive spaces

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February 2002. Includes bibliographical references (p. 205-211). This thesis presents a mathematical framework for real-time sensor-driven stochastic modeling of story and user-story interaction, which I call sto(ry)chastics. Almost all sensor-driven interactive entertainment, art, and architecture installations today rely on one-to-one mappings between content and participants' actions to tell a story. These mappings chain small subsets of scripted content, and do not attempt to understand the public's intentions or desires during interaction; they are therefore rigid, ad hoc, prone to error, and lack depth in communication of meaning and expressive power. Sto(ry)chastics uses graphical probabilistic modeling of story fragments and participant input, gathered from sensors, to tell a story to the user as a function of people's estimated intentions and desires during interaction. Using a Bayesian network approach for combined modeling of users, sensors, and story, sto(ry)chastics, as opposed to traditional systems based on one-to-one mappings, is flexible, reconfigurable, adaptive, context-sensitive, robust, accessible, and able to explain its choices. To illustrate sto(ry)chastics, this thesis describes the museum wearable, which orchestrates an audiovisual narration as a function of the visitor's interests and physical path in the museum. The museum wearable is a lightweight and small computer that people carry inside a shoulder pack. It offers an audiovisual augmentation of the surrounding environment using a small eye-piece display attached to conventional headphones. The wearable prototype described in this document relies on a custom-designed long-range infrared location-identification sensor to gather information on where and how long the visitor stops in the museum galleries. It uses this information as input to, or observations of, a (dynamic) Bayesian network, selected from a variety of possible models designed for this research. It then delivers an audiovisual narration to the visitor as a function of the estimated visitor type, interactively in time and space. The network has been tested and validated on observed visitor-tracking data by parameter learning using the Expectation Maximization (EM) algorithm, and by performance analysis of the model with the learned parameters. Estimation of the visitor's preferences, in addition to the type, using additional sensors, and examples of sensor fusion, are provided in a simulated environment. The main contribution of this research is to show that (dynamic) Bayesian networks are a powerful modeling technique to couple inputs to outputs for real-time sensor-driven multimedia audiovisual stories, such as those that are triggered by the body in motion in a sensor-instrumented interactive narrative space. The coarse and noisy sensor inputs are coupled to digital media outputs via a user model, and estimated probabilistically by a Bayesian network ... By Flavia Sparacino. Ph.D.
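    The core inference step described here (observations from coarse, noisy sensors updating a probabilistic estimate of the visitor type) can be sketched as a simple Bayes filter over a hidden visitor type. This is a minimal illustration of the kind of computation a (dynamic) Bayesian network performs, not the thesis's actual model: the type names, observation categories, and all likelihood values below are assumed for the example.

    ```python
    # Minimal sketch: recursive Bayesian update of a hidden visitor type from
    # discretized stop observations ("skip" / "short" / "long"). All numbers
    # are invented for illustration; the thesis learns such parameters via EM.

    TYPES = ["busy", "greedy", "selective"]

    # Assumed P(long stop | type) and P(skip exhibit | type) for each type.
    P_LONG = {"busy": 0.2, "greedy": 0.8, "selective": 0.6}
    P_SKIP = {"busy": 0.5, "greedy": 0.1, "selective": 0.5}

    def update(posterior, obs):
        """One Bayes-rule step: posterior(t) ∝ prior(t) * P(obs | t)."""
        new = {}
        for t in TYPES:
            if obs == "skip":
                like = P_SKIP[t]
            elif obs == "long":
                like = (1 - P_SKIP[t]) * P_LONG[t]
            else:  # short stop
                like = (1 - P_SKIP[t]) * (1 - P_LONG[t])
            new[t] = posterior[t] * like
        z = sum(new.values())                      # normalizing constant
        return {t: p / z for t, p in new.items()}

    posterior = {t: 1 / len(TYPES) for t in TYPES}  # uniform prior
    for obs in ["long", "long", "skip", "long"]:    # a visitor who lingers
        posterior = update(posterior, obs)

    best = max(posterior, key=posterior.get)        # most probable visitor type
    print(best, round(posterior[best], 3))
    ```

    A dynamic Bayesian network generalizes this by also letting the hidden state evolve between observations and by fusing several sensor streams; the recursive structure of the update is the same.
    
    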