
    TV-Centric technologies to provide remote areas with two-way satellite broadband access

    October 1-2, 2007, Rome, Italy

    High-level feature detection from video in TRECVid: a 5-year retrospective of achievements

    Successful and effective content-based access to digital video requires fast, accurate and scalable methods to determine the video content automatically. A variety of contemporary approaches to this rely on text taken from speech within the video, on matching one video frame against others using low-level characteristics like colour, texture or shape, or on detecting and matching objects appearing within the video. Possibly the most important technique, however, is one which determines the presence or absence of a high-level or semantic feature within a video clip or shot. By utilizing dozens, hundreds or even thousands of such semantic features we can support many kinds of content-based video navigation. Critically, however, this depends on being able to determine whether each feature is or is not present in a video clip. The last 5 years have seen much progress in the development of techniques to determine the presence of semantic features within video. This progress can be tracked in the annual TRECVid benchmarking activity, where dozens of research groups measure the effectiveness of their techniques on common data using an open, metrics-based approach. In this chapter we summarise the work done on the TRECVid high-level feature task, showing the progress made year-on-year. This provides a fairly comprehensive statement on where the state of the art stands regarding this important task, not just for one research group or one approach, but across the spectrum. We then use this past and ongoing work as a basis for highlighting the trends that are emerging in this area, and the questions which remain to be addressed before we can achieve large-scale, fast and reliable high-level feature detection on video.
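
    As a deliberately simple illustration of the low-level frame-matching approach this abstract mentions (not of the TRECVid systems themselves), the Python sketch below compares two frames by colour histogram using OpenCV; the file names and the similarity threshold are assumptions for illustration.

```python
# Minimal sketch of low-level frame matching by colour histogram.
# Assumes OpenCV is installed; file names and the threshold are
# illustrative only, not taken from any TRECVid system.
import cv2

def colour_histogram(frame, bins=(8, 8, 8)):
    """3-D histogram over the HSV channels, normalised."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, bins,
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def frames_match(frame_a, frame_b, threshold=0.9):
    """True if the histograms correlate above the (hypothetical) threshold."""
    h_a = colour_histogram(frame_a)
    h_b = colour_histogram(frame_b)
    return cv2.compareHist(h_a, h_b, cv2.HISTCMP_CORREL) >= threshold

if __name__ == "__main__":
    a = cv2.imread("shot_001.png")  # hypothetical keyframe files
    b = cv2.imread("shot_002.png")
    print("match" if frames_match(a, b) else "no match")
```

    Semantic (high-level) feature detection replaces the fixed threshold with a classifier trained per feature, which is precisely the harder problem the TRECVid task benchmarks.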

    Wireless multimedia sensor network technology: a survey

    Wireless Multimedia Sensor Networks (WMSNs) are composed of small embedded video motes capable of extracting information from the surrounding environment, processing it locally and then transmitting it wirelessly to a parent node or sink. Each mote comprises a video sensor, a digital signal processing unit and a digital radio interface. In this paper we survey existing WMSN hardware and communication protocol layer technologies for fulfilling the objectives of WMSNs. We also list the various technical challenges posed by this technology while discussing the communication protocol layer technologies. Sensor networking capabilities are urgently required for some of our most important scientific and societal problems, such as understanding the international carbon budget, monitoring water resources, monitoring vehicle emissions and safeguarding public health. This is a daunting research challenge, requiring distributed sensor systems that operate in complex environments while providing assurance of reliable and accurate sensing.
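
    To make the mote architecture concrete, here is a minimal Python sketch of the capture, local-processing and transmit loop such a video mote runs. All names (read_frame, detect_event, transmit, SINK_ADDR) are hypothetical stand-ins for the camera, DSP and radio drivers a real platform would provide.

```python
# Hedged sketch of a WMSN mote's capture -> process -> transmit loop.
# Every name here is a hypothetical stand-in for real sensor, DSP and
# radio drivers; only the overall pipeline shape follows the survey.
import zlib

SINK_ADDR = "node-0"  # hypothetical parent/sink identifier

def read_frame(i):
    """Stand-in for the video sensor driver: returns dummy frame bytes."""
    return bytes([i % 2]) * 1024  # alternate two dummy frames

def detect_event(frame, previous):
    """Toy local processing stage: flag any frame that differs from the
    previous one. A real DSP unit would run motion detection here."""
    return previous is not None and frame != previous

def transmit(payload, dest):
    """Stand-in for the digital radio interface."""
    print(f"sending {len(payload)} bytes to {dest}")

def mote_loop(n_frames=5):
    previous = None
    for i in range(n_frames):
        frame = read_frame(i)
        if detect_event(frame, previous):
            # Send only compressed event frames: local processing trades
            # cheap computation for expensive radio transmissions.
            transmit(zlib.compress(frame), SINK_ADDR)
        previous = frame

mote_loop()
```

    Processing locally and transmitting only compressed events reflects the energy trade-off the survey highlights: computation on the mote is far cheaper than radio transmission.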

    Adaptive Live Video Streaming by Priority Drop

    In this paper we explore the use of Priority-Progress Streaming (PPS) for video surveillance applications. PPS is an adaptive streaming technique for the delivery of continuous media over variable bit-rate channels. It is based on the simple idea of reordering media components within a time window into priority order before transmission. The main concern when using PPS for live video streaming is the time delay introduced by reordering. In this paper we describe how PPS can be extended to support live streaming, and we show that the delay inherent in the approach can be tuned to satisfy a wide range of latency constraints while supporting fine-grained adaptation.
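
    A minimal sketch of the priority-drop idea described above (an interpretation, not the authors' implementation): media units within a time window are reordered by priority, transmitted until the window's byte budget is spent, and the remainder is dropped. The MediaUnit fields and the budget are assumptions.

```python
# Minimal sketch of priority-progress style streaming: within each time
# window, send media units in priority order until the byte budget is
# spent, then drop the rest. Unit fields and the budget are assumptions,
# not the paper's actual data structures.
from dataclasses import dataclass

@dataclass
class MediaUnit:
    timestamp: float  # presentation time within the window
    priority: int     # higher value = more important (e.g. base layer)
    size: int         # bytes

def priority_drop(window, byte_budget):
    """Return the units to transmit for one window, in playback order."""
    sent, used = [], 0
    # Reorder the window into priority order before transmission.
    for unit in sorted(window, key=lambda u: u.priority, reverse=True):
        if used + unit.size <= byte_budget:
            sent.append(unit)
            used += unit.size
    # Restore timestamp order for playback; everything else is dropped.
    return sorted(sent, key=lambda u: u.timestamp)

window = [MediaUnit(0.0, 2, 800), MediaUnit(0.1, 1, 600), MediaUnit(0.2, 3, 700)]
print(priority_drop(window, byte_budget=1500))  # keeps the two highest-priority units
```

    The window length sets the reordering delay, which is the latency knob the abstract refers to when extending PPS to live streaming.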

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Owing to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction