5 research outputs found

    Who is where? Matching people in video to wearable acceleration during crowded mingling events

    We address the challenging problem of associating acceleration data from a wearable sensor with the corresponding spatio-temporal region of a person in video during crowded mingling scenarios. This is an important first step for multi-sensor behavior analysis using these two modalities. Clearly, as the number of people in a scene increases, there is also a need to robustly and automatically associate a region of the video with each person's device. We propose a hierarchical association approach which exploits the spatial context of the scene, outperforming the state-of-the-art approaches significantly. Moreover, we present experiments on matching from 3 to more than 130 acceleration and video streams which, to our knowledge, is significantly larger than prior works, where only up to 5 device streams are associated.
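
    For illustration only, the sketch below shows one simplified way to pose this association problem: derive an acceleration magnitude from each tracked video position, correlate it with each wearable's acceleration magnitude, and solve a one-to-one assignment. This is a hedged baseline written from the abstract alone; it is not the paper's hierarchical, spatial-context method, and all function names and parameters are hypothetical.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def video_acceleration(track_xy, fps):
            # Second temporal derivative of a tracked (T, 2) position, returned as a magnitude (T,).
            vel = np.gradient(track_xy, 1.0 / fps, axis=0)
            acc = np.gradient(vel, 1.0 / fps, axis=0)
            return np.linalg.norm(acc, axis=1)

        def match_devices_to_tracks(device_acc_mags, video_tracks, fps):
            # device_acc_mags: list of 1-D acceleration-magnitude arrays, one per wearable device.
            # video_tracks:    list of (T, 2) image-plane position arrays, one per tracked person.
            cost = np.zeros((len(device_acc_mags), len(video_tracks)))
            for i, dev in enumerate(device_acc_mags):
                for j, trk in enumerate(video_tracks):
                    vid = video_acceleration(trk, fps)
                    n = min(len(dev), len(vid))
                    a = dev[:n] - dev[:n].mean()
                    b = vid[:n] - vid[:n].mean()
                    corr = np.mean(a * b) / (a.std() * b.std() + 1e-9)  # Pearson correlation
                    cost[i, j] = -corr  # higher correlation -> lower assignment cost
            rows, cols = linear_sum_assignment(cost)  # one-to-one (Hungarian) assignment
            return list(zip(rows, cols))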

    Penggunaan Accelerometer MMA7361 sebagai Alternatif Pengukuran Lendutan pada Jembatan Secara Nirkabel Berbasis ATmega32

    A bridge is planned and built with a certain capacity for the vehicles passing over it. The movement of vehicles causes vibration and vertical deflection in certain parts of the bridge. If large vibrations occur continuously, the bridge will be damaged sooner than planned. This research reports the design of a prototype system for measuring vertical deflection on a bridge, employing an MMA7361 accelerometer sensor controlled by an ATmega32. The system was tested by manually loading a trial bridge 1 m in length. The loading deflects the bridge downward by up to 15 cm from the reference point. Sensor readout data were sent wirelessly over ZigBee to a computer in real time and shown in a graphical display for easy analysis. The research provides an alternative method for measuring vertical deflection on bridges that stakeholders can use in policy decisions.
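
    As a rough illustration of the host side of such a system, the sketch below reads raw ADC counts arriving over the ZigBee serial link and converts them to acceleration in g. The packet format (one integer count per line), port name, baud rate, and the MMA7361 conversion constants (1.5 g range on a 3.3 V supply, 10-bit ATmega32 ADC) are assumptions for illustration, not details taken from the paper.

        import serial  # pyserial; the ZigBee module is assumed to appear as a plain serial port

        # Assumed constants: MMA7361 in its 1.5 g range on 3.3 V (about 1.65 V zero-g offset,
        # about 0.8 V/g sensitivity), sampled by the ATmega32 10-bit ADC.
        VREF, ADC_MAX = 3.3, 1023
        ZERO_G_V, SENS_V_PER_G = 1.65, 0.8

        def counts_to_g(counts):
            # Convert a raw ADC reading to acceleration in g.
            volts = counts * VREF / ADC_MAX
            return (volts - ZERO_G_V) / SENS_V_PER_G

        def stream_readings(port="/dev/ttyUSB0", baud=9600):
            # Yield acceleration samples from assumed "counts\n" lines sent over the ZigBee link.
            with serial.Serial(port, baud, timeout=1) as link:
                while True:
                    line = link.readline().strip()
                    if not line:
                        continue
                    try:
                        yield counts_to_g(int(line))
                    except ValueError:
                        continue  # skip malformed packets

        if __name__ == "__main__":
            for g in stream_readings():
                print(f"vertical acceleration: {g:+.3f} g")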

    Recognising Complex Activities with Histograms of Relative Tracklets

    One approach to the recognition of complex human activities is to use feature descriptors that encode visual interactions by describing properties of local visual features with respect to trajectories of tracked objects. We explore an example of such an approach in which dense tracklets are described relative to multiple reference trajectories, providing a rich representation of complex interactions between objects of which only a subset can be tracked. Specifically, we report experiments in which reference trajectories are provided by tracking inertial sensors in a food preparation scenario. Additionally, we provide baseline results for HOG, HOF and MBH, and combine these features with others for multi-modal recognition. The proposed histograms of relative tracklets (RETLETS) showed better activity recognition performance than dense tracklets, HOG, HOF, MBH, or their combination. Our comparative evaluation of features from accelerometers and video highlighted a performance gap between visual and accelerometer-based motion features and showed a substantial performance gain when combining features from these sensor modalities. A considerable further performance gain was observed in combination with RETLETS and reference tracklet features.
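
    To make the idea of describing tracklets relative to a reference trajectory concrete, the sketch below bins each tracklet point's displacement from a single reference track into a small (angle, distance) histogram. It is a deliberately simplified, hypothetical version written from the abstract alone; the actual RETLETS descriptor (dense tracklets, multiple references, motion encoding) is defined in the paper.

        import numpy as np

        def relative_tracklet_histogram(tracklets, reference,
                                        n_angle_bins=8, n_dist_bins=4, max_dist=200.0):
            # tracklets: list of (start_frame, points) pairs, points being an (L, 2) array
            #            of image-plane positions.
            # reference: (T, 2) array, e.g. the trajectory of a tracked inertial sensor.
            # Returns a flattened, L1-normalised histogram over (angle, distance) bins.
            hist = np.zeros((n_angle_bins, n_dist_bins))
            for start, pts in tracklets:
                for k, p in enumerate(pts):
                    t = start + k
                    if t >= len(reference):
                        break
                    rel = p - reference[t]             # displacement relative to the reference
                    dist = np.linalg.norm(rel)
                    angle = np.arctan2(rel[1], rel[0])
                    a_bin = int((angle + np.pi) / (2 * np.pi) * n_angle_bins) % n_angle_bins
                    d_bin = min(int(dist / max_dist * n_dist_bins), n_dist_bins - 1)
                    hist[a_bin, d_bin] += 1.0
            total = hist.sum()
            return (hist / total).ravel() if total > 0 else hist.ravel()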