
    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    New Challenges in HCI: Ambient Intelligence for Human Performance Improvement

    Ambient Intelligence is a new multidisciplinary paradigm that is going to change the relation between humans, technology, and the environment they live in. This paradigm has its roots in the ideas of Ubiquitous and Pervasive Computing. In this vision, which nowadays is almost reality, technology becomes pervasive in everyday life but, despite its increasing importance, it (should) become "invisible", so deeply intertwined in our day-to-day activities as to disappear into the fabric of our lives. The new environment should become "intelligent" and "smart", able to actively and adaptively react to the presence, actions, and needs of humans (not only users but complex human beings), in order to support daily activities and improve the quality of life. Ambient Intelligence represents a trend able to profoundly affect every aspect of our lives. It is not a problem regarding only technology: it is about a new way to be "human", to inhabit our environment, and to dialogue with technology. What makes an environment smart and intelligent is the way it understands and reacts to changing conditions. Just as a well-designed tool can help us carry out our activities more quickly and easily, a poorly designed one can be an obstacle. The Ambient Intelligence paradigm tends to change some human activities by automating certain tasks. However, it is not always simple to decide what to automate, and when and how much control the user needs to retain. In this thesis we analyse the different levels composing the Ambient Intelligence paradigm, from its theoretical roots, through technology, to the issues related to Human Factors and Human-Computer Interaction, to better understand how this paradigm can change the performance and the behaviour of the user.
After this general analysis, we focus on the problem of smart surveillance, analysing how certain tasks can be automated through a context-capture system based on the fusion of different sources and inspired by the Ambient Intelligence paradigm. In particular, we investigate, from a Human Factors point of view, how different levels of automation (LOAs) may result in changes in the user's behaviour and performance. This investigation also aimed to identify criteria that may help in designing a smart surveillance system. After the design of a general framework for the fusion of different sensors in a real-time locating system, a hybrid people-tracking system, based on the combined use of RFID UWB and computer vision techniques, was developed and tested to explore the possibilities of a smart context-capture system. Taking this system as an example, we developed three simulators of a smart surveillance system implementing three different LOAs: manual, low system assistance, and high system assistance. We performed tests (using quali-quantitative measures) to observe changes in performance, Situation Awareness, and workload in relation to the different LOAs. Based on the results obtained, a new interaction paradigm for control rooms is proposed, grounded in the HCI concepts related to the Ambient Intelligence paradigm and especially in the concept of Ambient Displays, highlighting its usability advantages in a control-room scenario. The assessments made through testing showed that although very high levels of automation are achievable from a technological perspective, from a Human Factors point of view this does not necessarily translate into an improvement in human performance. The latter is instead related to a particular balance that is not fixed but changes according to the specific context. Thus every Ambient Intelligence system should be designed from a human-centric perspective, considering that sometimes less can be more, and vice versa.

    Intelligent strategies for sheep monitoring and management

    With the growth in world population, there is an increasing demand for food resources and better land utilisation, e.g., domesticated animals and land management, which in turn has brought about developments in intelligent farming. Modern farms rely upon intelligent sensors and advanced software solutions to optimally manage pasture and support animal welfare. A very significant aspect of domesticated animal farms is the monitoring and understanding of animal activity, which provides vital insight into animal well-being and the environment the animals live in. Moreover, "virtual" fencing systems provide an alternative way to manage farmland by replacing traditional boundaries. This thesis proposes novel solutions to animal activity recognition from accelerometer data using machine learning strategies, and supports the development of virtual fencing systems via animal behaviour management using audio stimuli. The first contribution of this work is four datasets comprising accelerometer gait signals. The first dataset consists of accelerometer and gyroscope measurements obtained using a Samsung smartphone on seven animals. Next, a dataset of accelerometer measurements was collected using the MetamotionR device on eight Hebridean ewes. Finally, two datasets of nine Hebridean ewes were collected from two sensors (MetamotionR and Raspberry Pi), comprising accelerometer signals describing active, inactive, and grazing activity of the animals. These datasets will be made publicly available, as there is limited availability of such datasets. With respect to activity recognition, a systematic study of the experimental setup, associated signal features, and machine learning methods was performed. It was found that a Random Forest using accelerometer measurements at a sample rate of 12.5 Hz with a sliding window of 5 seconds provides an accuracy above 96% when discriminating animal activity.
The problem of sensor heterogeneity was addressed with transfer learning of Convolutional Neural Networks, used for the first time on this problem, which resulted in accuracies of 98.55% and 96.59%, respectively, on the two experimental datasets. Next, the feasibility of using only audio stimuli in the context of a virtual fencing system was explored. Specifically, a systematic evaluation of the parameters of the audio stimuli, e.g., frequency and duration, was performed on two sheep breeds, Hebridean and Greyface Dartmoor ewes, in the context of controlling animal position and keeping the animals away from a designated area. It is worth noting that the use of sounds differs from existing approaches, which utilize electric shocks to train animals to stay within the boundaries of a virtual fence. It was found that audio signals in the frequency ranges 125 Hz-440 Hz and 10 kHz-17 kHz, as well as white noise, are able to control animal activity with accuracies of 89.88% and 95.93% for Hebridean and Greyface Dartmoor ewes, respectively. Last but not least, the thesis proposes a multifunctional system that identifies whether the animal is active or inactive, using transfer learning, and manipulates its position using the optimized sound settings, achieving a classification accuracy of over 99.95%.
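A minimal sketch of the windowing step described in the abstract above: accelerometer samples taken at 12.5 Hz are split into 5-second sliding windows and summarized by simple statistics before being passed to a classifier such as a Random Forest. The function names, the 50% window overlap, the chosen features, and the toy signal are all illustrative assumptions, not details taken from the thesis.

```python
# Hypothetical windowing + feature extraction for accelerometer activity
# recognition. Only the 12.5 Hz rate and 5 s window come from the abstract;
# everything else (overlap, features, data) is invented for illustration.

def sliding_windows(samples, rate_hz=12.5, window_s=5.0, step_s=2.5):
    """Yield overlapping windows of raw accelerometer samples."""
    size = int(rate_hz * window_s)   # 62 samples per 5 s window
    step = int(rate_hz * step_s)     # 50% overlap between windows
    for start in range(0, len(samples) - size + 1, step):
        yield samples[start:start + size]

def window_features(window):
    """Mean and standard deviation: typical hand-crafted features
    that a Random Forest could be trained on."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, var ** 0.5)

# ~20 seconds of fake one-axis accelerometer data.
signal = [0.1 * (i % 7) for i in range(250)]
feats = [window_features(w) for w in sliding_windows(signal)]
print(len(feats))  # number of feature vectors produced
```

In a real pipeline each feature vector would carry a ground-truth label (active, inactive, grazing) and be fed to a classifier; the window and step sizes trade off latency against the amount of context per prediction.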

    Activity recognition using visual tracking and RFID

    Computer vision-based articulated human motion tracking is attractive for many applications since it allows unobtrusive and passive estimation of people's activities. Although much progress has been made on human-only tracking, the visual tracking of people that interact with objects such as tools, products, packages, and devices is considerably more challenging. The wide variety of objects, their varying visual appearance, and their varying (and often small) size make a vision-based understanding of person-object interactions very difficult. To alleviate this problem for at least some application domains, we propose a framework that combines visual human motion tracking with RFID-based object tracking. We customized commonly available RFID technology to obtain orientation estimates of objects in the field of RFID emitter coils. The resulting fusion of visual human motion tracking and RFID-based object tracking enables the accurate estimation of high-level interactions between people and objects for application domains such as retail, home care, workplace safety, manufacturing, and others.
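The fusion idea in the abstract above can be sketched as a simple decision rule: a vision tracker supplies a hand position, the RFID subsystem supplies an object orientation estimate, and an interaction is flagged when the two agree. This is an illustrative sketch only; the function names, thresholds, and coordinates are invented and not the paper's actual fusion method.

```python
# Hypothetical fusion of a vision-based hand position with an RFID-based
# object orientation estimate. All values and thresholds are invented.
import math

def interaction(hand_xy, obj_xy, obj_tilt_deg,
                max_dist_m=0.3, pickup_tilt_deg=30.0):
    """Flag a person-object interaction when the tracked hand is near
    the tagged object AND the RFID orientation estimate shows the
    object tilted, e.g. because it has been picked up."""
    near = math.dist(hand_xy, obj_xy) < max_dist_m
    tilted = abs(obj_tilt_deg) > pickup_tilt_deg
    return near and tilted

print(interaction((0.9, 1.0), (1.0, 1.1), 45.0))  # hand near a tilted object
print(interaction((0.0, 0.0), (1.0, 1.1), 45.0))  # hand too far away
```

A real system would replace the two conjuncts with probabilistic estimates from the tracker and the RFID coil readings, but the same "both cues must agree" structure is what makes the fusion robust to failures of either sensor alone.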
