
    Sensing and mapping for interactive performance

    This paper describes a trans-domain mapping (TDM) framework for translating meaningful activities from one creative domain onto another. The multi-disciplinary framework is designed to facilitate an intuitive and non-intrusive interactive multimedia performance interface that offers users or performers real-time control of multimedia events through their physical movements. It is intended as a highly dynamic real-time performance tool that senses and tracks activities and changes in order to provide interactive multimedia performances. Starting from a straightforward definition of the TDM framework, this paper reports several implementations and multi-disciplinary collaborative projects using the proposed framework, including a motion- and colour-sensitive system, a sensor-based system for triggering musical events, and a distributed multimedia server for audio mapping of a real-time face tracker, and discusses different aspects of mapping strategies in their context. Plausible future directions, developments and explorations with the proposed framework, including stage augmentation and virtual and augmented reality, which involve sensing and mapping physical and non-physical changes onto multimedia control events, are also discussed.
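    As an illustrative sketch of the kind of mapping such a framework supports (the function name, threshold, and scaling are hypothetical, not taken from the paper), a sensed motion intensity might be mapped onto a MIDI-style note velocity:

```python
# Hypothetical trans-domain mapping sketch: normalised motion intensity
# from a sensor is mapped onto a MIDI-style note velocity (0-127).
def map_motion_to_velocity(intensity, threshold=0.1, max_velocity=127):
    """Map a motion intensity in [0, 1] to a MIDI velocity.

    Values below `threshold` are treated as sensor noise and trigger
    no musical event at all.
    """
    if intensity < threshold:
        return None  # no musical event triggered
    # Linearly rescale the remaining range onto 1..max_velocity.
    scaled = (intensity - threshold) / (1.0 - threshold)
    return max(1, round(scaled * max_velocity))
```

    A real implementation would sit between the sensing layer and the multimedia event scheduler, with the threshold tuned per sensor.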

    Challenges in video based object detection in maritime scenario using computer vision

    This paper discusses the technical challenges in maritime image processing and machine vision for video streams generated by cameras. Even well-documented problems such as horizon detection and registration of frames in a video are very challenging in maritime scenarios. More advanced problems, such as background subtraction and object detection in video streams, are harder still. The dynamic nature of the background, the unavailability of static cues, the presence of small objects against distant backgrounds, and illumination effects all contribute to the challenges discussed here.
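    As a toy illustration of why dynamic water backgrounds defeat simple approaches, consider a minimal background-subtraction sketch based on an exponential running average (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def running_average_background(frames, alpha=0.05, diff_thresh=25):
    """Toy background subtraction via an exponential running average.

    Each frame is compared against a slowly adapting background model;
    pixels differing by more than `diff_thresh` are marked foreground.
    In maritime scenes, wave motion changes faster than the model
    adapts, so it leaks into the foreground mask - exactly the
    difficulty the paper describes.
    """
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(np.float64) - background)
        masks.append(diff > diff_thresh)  # boolean foreground mask
        # Blend the new frame into the background model.
        background = (1 - alpha) * background + alpha * frame
    return masks
```

    Production systems typically use adaptive mixture models rather than a single running average, but the failure mode on dynamic backgrounds is the same in kind.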

    Autonomous real-time surveillance system with distributed IP cameras

    An autonomous Internet Protocol (IP) camera-based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time for monitoring the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected, and their trajectories are extracted and fed into an unsupervised neural network for autonomous classification. The novel intelligent video system presented is also capable of performing simple analytic functions, such as tracking and generating alerts when objects enter or leave regions or cross tripwires superimposed on the live video by the operator.
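    A trajectory-to-classifier pipeline of this kind is typically fed per-trajectory summary features. The following sketch shows one plausible feature set (the abstract does not specify ABORAT's actual features or network, so the choices here are illustrative):

```python
import numpy as np

def trajectory_features(traj):
    """Summarise a trajectory (N x 2 array of image positions) into a
    small feature vector suitable for an unsupervised classifier.

    Features: mean speed, speed variability, net displacement, and
    straightness (net displacement / total path length).
    """
    traj = np.asarray(traj, dtype=float)
    steps = np.diff(traj, axis=0)              # per-frame displacements
    speeds = np.linalg.norm(steps, axis=1)     # per-frame speeds
    net = np.linalg.norm(traj[-1] - traj[0])   # net displacement
    path = speeds.sum()                        # total path length
    straightness = net / path if path > 0 else 0.0
    return np.array([speeds.mean(), speeds.std(), net, straightness])
```

    Fixed-length feature vectors like this let variable-length trajectories be clustered or classified without alignment.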

    Synopsis of an engineering solution for a painful problem: Phantom Limb Pain

    This paper is a synopsis of a recently proposed solution for treating patients who suffer from Phantom Limb Pain (PLP). The underpinning approach of this research and development project is based on an extension of “mirror box” therapy, which has had some promising results in pain reduction. An outline of an immersive, individually tailored environment giving the patient a virtually realised limb presence as a means of pain reduction is provided. The virtual 3D holographic environment is meant to produce immersive, engaging and creative environments and tasks to encourage and maintain patients’ interest, an important aspect in two of the more challenging populations under consideration (over-60s and war veterans). The system is hoped to reduce PLP by more than 3 points on an 11-point Visual Analog Scale (VAS), since a reduction of less than 3 points could be attributed to distraction alone.

    leave a trace - A People Tracking System Meets Anomaly Detection

    Video surveillance has always had a negative connotation, among other reasons because of the loss of privacy and because it may not automatically increase public safety. If it were able to detect atypical (i.e. dangerous) situations in real time, autonomously and anonymously, this could change. A prerequisite is the reliable automatic detection of possibly dangerous situations from video data. This is done classically by object extraction and tracking. From the derived trajectories, we then want to identify dangerous situations by detecting atypical trajectories. However, for ethical reasons it is better to develop such a system on data in which no people are threatened or harmed, and in which they know that a tracking system is installed. Another important point is that such situations do not occur very often in real, public CCTV areas and may be captured properly even less often. In the artistic project leave a trace, the tracked objects, people in an atrium of an institutional building, become actors and thus part of the installation. Real-time visualisation allows interaction by these actors, which in turn creates many atypical interaction situations on which we can develop our situation detection. The data set has evolved over three years and is hence large. In this article we describe the tracking system and several approaches for the detection of atypical trajectories.
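    One simple, generic way to score trajectories as atypical, sketched here purely for illustration (the abstract does not specify the article's actual detection approaches), is the mean distance of each trajectory's feature vector to its k nearest neighbours:

```python
import numpy as np

def anomaly_scores(features, k=3):
    """Score each feature vector by its mean Euclidean distance to its
    k nearest neighbours; large scores mark atypical trajectories.

    `features` is an (n, d) array of per-trajectory feature vectors.
    """
    X = np.asarray(features, dtype=float)
    # Full pairwise Euclidean distance matrix (n x n).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # exclude self-distance
    knn = np.sort(d, axis=1)[:, :k]    # k smallest distances per row
    return knn.mean(axis=1)
```

    Thresholding these scores yields an unsupervised atypicality detector; the O(n²) distance matrix would need an index structure for a data set collected over three years.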

    Calibration and Validation of a Shared Space Model: A Case Study

    Shared space is an innovative streetscape design that seeks minimum separation between vehicle traffic and pedestrians. Urban design is moving toward space sharing as a means of increasing the community texture of street surroundings. Its unique features aim to balance priorities and allow cars and pedestrians to coexist harmoniously without the need to dictate behavior. There is, however, a need for a simulation tool to model future shared space schemes and to help judge whether they might represent suitable alternatives to traditional street layouts. This paper builds on the authors’ previously published work in which a shared space microscopic mixed traffic model based on the social force model (SFM) was presented, calibrated, and evaluated with data from the shared space link typology of New Road in Brighton, United Kingdom. Here, the goal is to explore the transferability of the authors’ model to a similar shared space typology and to investigate the effect of flow and the ratio of traffic modes. Data recorded from the shared space scheme of Exhibition Road, London, were collected and analyzed. The flow and speed of cars and the segregation between pedestrians and cars are greater on Exhibition Road than on New Road. The rule-based SFM for shared space modeling is calibrated and validated with the real data. On the basis of the results, it can be concluded that shared space schemes are context dependent and that factors such as the infrastructural design of the environment and the flow and speed of pedestrians and vehicles affect the willingness to share space.
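    For reference, the classic social force model that the authors' rule-based variant builds on combines a goal-directed driving force with exponential repulsion from other agents. A simplified sketch follows (parameters are illustrative, not the paper's calibrated values, and the agent's current velocity is taken as zero):

```python
import numpy as np

def social_force(pos, goal, others, desired_speed=1.3, tau=0.5,
                 A=2.0, B=0.3):
    """One step of a simplified Helbing-style social force model.

    The driving force relaxes the agent toward its desired velocity
    (magnitude `desired_speed`, pointed at `goal`) over time `tau`;
    each other agent contributes an exponentially decaying repulsion.
    All parameter values here are illustrative placeholders.
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    direction = goal - pos
    direction /= np.linalg.norm(direction)
    drive = desired_speed * direction / tau    # relaxation toward goal
    repulsion = np.zeros(2)
    for o in others:
        diff = pos - np.asarray(o, float)
        dist = np.linalg.norm(diff)
        # Exponential repulsion, directed away from the other agent.
        repulsion += A * np.exp(-dist / B) * diff / dist
    return drive + repulsion
```

    Calibration, as in the paper, amounts to fitting parameters of this kind (and additional rule-based terms for vehicles) against observed trajectories.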

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, face expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose a robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show significant improvements in enabling much greater flexibility in creating realistic reenacted output videos.
    Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'1