    User-interface to a CCTV video search system

    The proliferation of CCTV surveillance systems creates the problem of how to effectively navigate and search the resulting video archive in a variety of security scenarios. We are concerned here with a situation where a searcher must locate all occurrences of a given person or object within a specified timeframe, with constraints on which cameras' footage is valid to search. Conventional approaches based on browsing time/camera combinations are inadequate. We advocate using automatically detected video objects as a basis for search, linking and browsing. In this paper we present a system under development based on users interacting with detected video objects. We outline the suite of technologies needed to achieve such a system, and for each we describe our progress towards realizing it. We also present the interface to this system, designed with user needs and user tasks in mind.
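    The object-based search the abstract advocates can be sketched as a query over an index of detected-object appearances, constrained by timeframe and by which cameras are valid to search. The data layout and names below are illustrative assumptions, not the authors' implementation:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Appearance:
        object_id: str
        camera: str
        t: float  # seconds since the start of the archive

    # Hypothetical in-memory index of automatically detected video objects.
    index = [
        Appearance("person_17", "cam_A", 120.0),
        Appearance("person_17", "cam_B", 305.5),
        Appearance("car_03", "cam_A", 130.0),
    ]

    def find_occurrences(object_id, t_start, t_end, valid_cameras):
        """All occurrences of an object within a timeframe, restricted
        to the cameras whose footage is valid to search."""
        return [a for a in index
                if a.object_id == object_id
                and t_start <= a.t <= t_end
                and a.camera in valid_cameras]

    hits = find_occurrences("person_17", 0, 600, {"cam_A", "cam_B"})
    ```

    A real system would populate such an index from object detection and tracking over the archive; the query shape, though, stays the same.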

    Fast and Robust Detection of Fallen People from a Mobile Robot

    This paper deals with the problem of detecting fallen people lying on the floor by means of a mobile robot equipped with a 3D depth sensor. In the proposed algorithm, inspired by semantic segmentation techniques, the 3D scene is over-segmented into small patches. Fallen people are then detected by means of two SVM classifiers: the first labels each patch, while the second captures the spatial relations between them. This novel approach proved to be robust and fast: thanks to the use of small patches, fallen people are correctly detected even in real cluttered scenes with objects side by side. Moreover, the algorithm can be executed on a mobile robot fitted with a standard laptop, making it possible to exploit the 2D environmental map built by the robot and the multiple points of view obtained during navigation. The algorithm is also robust to illumination changes, since it relies on depth rather than RGB data. All the methods have been thoroughly validated on the IASLAB-RGBD Fallen Person Dataset, which is published online as a further contribution; it consists of several static and dynamic sequences with 15 different people and 2 different environments.
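    The two-stage SVM pipeline described above can be sketched with scikit-learn on synthetic data. The per-patch and per-group features named in the comments are assumptions for illustration, not the paper's actual descriptors:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Stage 1: label each over-segmented depth patch from per-patch
    # features (illustrative: height above floor, normal, size, curvature).
    patch_features = rng.normal(size=(200, 4))
    patch_labels = (patch_features[:, 0] < 0).astype(int)  # 1 = "person part"
    stage1 = SVC(kernel="rbf").fit(patch_features, patch_labels)

    # Stage 2: classify a group of neighbouring patches from the spatial
    # relations between them (illustrative: centroid offsets, adjacency).
    pair_features = rng.normal(size=(100, 3))
    pair_labels = (pair_features.sum(axis=1) > 0).astype(int)
    stage2 = SVC(kernel="rbf").fit(pair_features, pair_labels)

    def detect_fallen(patches, relations):
        """Flag a patch group as a fallen person when stage 2 accepts its
        spatial layout and most patches were labelled 'person part'."""
        part_votes = stage1.predict(patches)
        layout_ok = bool(stage2.predict(relations.reshape(1, -1))[0])
        return layout_ok and part_votes.mean() > 0.5
    ```

    Splitting the decision this way keeps each classifier simple: the first only needs local appearance, the second only needs geometry between already-labelled patches.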

    iBILL: Using iBeacon and Inertial Sensors for Accurate Indoor Localization in Large Open Areas

    As a key technology widely adopted in location-based services (LBS), indoor localization has received considerable attention in both research and industry. Despite the huge efforts made toward localization with smartphone inertial sensors, performance remains unsatisfactory in large open areas such as halls, supermarkets, and museums, due to accumulated errors arising from the uncertainty of users’ mobility and fluctuations of the magnetic field. To address this, the paper presents iBILL, an indoor localization approach that jointly uses iBeacon and inertial sensors in large open areas. With users’ real-time locations estimated by inertial sensors through an improved particle filter, we revise the augmented particle filter algorithm to cope with magnetic field fluctuations. When users enter the vicinity of iBeacon clusters, their locations are accurately determined from the received signal strength of the iBeacon devices, so that accumulated errors can be corrected. iBeacon, proposed by Apple Inc. to develop the LBS market, is a protocol built on Bluetooth Low Energy, and we characterize both the advantages and limitations of localization based on it. Moreover, with the help of iBeacon devices, we also provide solutions to two localization problems that have long remained difficult because of large computational overhead and arbitrarily placed smartphones. Through extensive experiments in our campus library, we demonstrate that iBILL keeps 90% of localization errors within 3.5 m in large open areas.
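    The fusion idea can be sketched as a toy particle filter: inertial steps propagate the particles, and when an iBeacon is heard, its RSSI is converted to a range via a standard log-distance path-loss model and used to reweight and resample. The beacon position and path-loss parameters below are assumed values, not from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical beacon at a known position; path-loss parameters assumed.
    BEACON_POS = np.array([5.0, 5.0])
    TX_POWER, PATH_LOSS_N = -59.0, 2.0  # RSSI at 1 m, path-loss exponent

    def rssi_to_distance(rssi):
        # Invert the log-distance path-loss model: rssi = TX - 10 n log10(d).
        return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

    def pf_step(particles, step_vec, rssi=None, noise=0.3):
        """Predict with the inertial step; when an iBeacon RSSI is heard,
        reweight particles by the implied range and resample."""
        particles = particles + step_vec + rng.normal(0, noise, particles.shape)
        if rssi is not None:
            d = rssi_to_distance(rssi)
            ranges = np.linalg.norm(particles - BEACON_POS, axis=1)
            w = np.exp(-0.5 * ((ranges - d) / 1.0) ** 2)  # Gaussian range likelihood
            w /= w.sum()
            idx = rng.choice(len(particles), size=len(particles), p=w)
            particles = particles[idx]
        return particles

    particles = rng.normal([0.0, 0.0], 1.0, (500, 2))
    particles = pf_step(particles, np.array([0.5, 0.0]))            # dead reckoning only
    particles = pf_step(particles, np.array([0.5, 0.0]), rssi=-70)  # with a beacon fix
    estimate = particles.mean(axis=0)
    ```

    Between beacon encounters the filter drifts with the inertial errors the abstract describes; each beacon fix pulls the particle cloud back toward the measured range, which is the error-correction role iBeacon plays in iBILL.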