Smart Computing and Sensing Technologies for Animal Welfare: A Systematic Review
Animals play a profoundly important and intricate role in our lives today.
Dogs have been human companions for thousands of years, but they now work
closely with us to assist the disabled, and in combat and search and rescue
situations. Farm animals are a critical part of the global food supply chain,
and there is increasing consumer interest in organically fed and humanely
raised livestock, and how it impacts our health and environmental footprint.
Wild animals are threatened with extinction by human-induced factors and by
shrinking, compromised habitats. This review sets out to systematically
survey the existing literature on smart computing and sensing technologies for
domestic, farm and wild animal welfare. We use the notion of \emph{animal
welfare} in broad terms, to review technologies for assessing whether
animals are healthy, free of pain and suffering, and positively stimulated
in their environment. Similarly, the notion of \emph{smart computing and sensing} is
used in broad terms, to refer to computing and sensing systems that are not
isolated but interconnected with communication networks, and capable of remote
data collection, processing, exchange and analysis. We review smart
technologies for domestic animals, indoor and outdoor animal farming, as well
as animals in the wild and zoos. The findings of this review are expected to
motivate future research and contribute to data, information and communication
management as well as policy for animal welfare.
360 Quantified Self
Wearable devices with a wide range of sensors have contributed to the rise of
the Quantified Self movement, where individuals log everything ranging from the
number of steps they have taken, to their heart rate, to their sleeping
patterns. Sensors do not, however, typically sense the social and ambient
environment of the users, such as general lifestyle attributes or information
about their social network. This means that the users themselves, and the
medical practitioners privy to the wearable sensor data, have only a narrow
view of the individual, limited mainly to certain aspects of their physical
condition.
In this paper we describe a number of use cases for how social media can be
used to complement check-up and wearable sensor data to gain a more
holistic view of individuals' health, a perspective we call the 360 Quantified
Self. Health-related information can be obtained from sources as diverse as
food photo sharing, location check-ins, or profile pictures. Additionally,
information from a person's ego network can shed light on the social dimension
of wellbeing, which is widely acknowledged to be of utmost importance, even
though such information is currently rarely used for medical diagnosis. We articulate a
long-term vision describing the desirable list of technical advances and
variety of data needed to achieve an integrated system encompassing Electronic Health
Records (EHR), data from wearable devices, and information derived from
social media.
Comment: QCRI Technical Report
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real time, is expected. Current
approaches present particular combinations of different image features and
quantitative methods to accomplish specific objectives such as object detection,
activity recognition, user-machine interaction and so on. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, most commonly used features,
methods, challenges and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart
Glasses, Computer Vision, Video Analytics, Human-machine Interaction
Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos
Wearable cameras stand out as one of the most promising devices for the
upcoming years, and as a consequence, the demand for computer algorithms to
automatically understand the videos recorded with them is increasing quickly.
Automatic understanding of these videos is not an easy task, and their mobile
nature implies important challenges, such as changing light
conditions and unrestricted recording locations. This paper proposes an
unsupervised strategy based on global features and manifold learning to endow
wearable cameras with contextual information regarding the light conditions and
the location captured. Results show that non-linear manifold methods can
capture contextual patterns from global features without requiring large
computational resources. The proposed strategy is used, as an application case,
as a switching mechanism to improve hand detection in egocentric
videos.
Comment: Submitted for publication
Wearable performance
This is the post-print version of the article. The official published version can be accessed from the link below. Copyright @ 2009 Taylor & Francis.

Wearable computing devices worn on the body provide the potential for digital interaction in the world. A new stage of computing technology at the beginning of the 21st Century links the personal and the pervasive through mobile wearables. The convergence between the miniaturisation of microchips (nanotechnology), intelligent textile or interfacial materials production, advances in biotechnology and the growth of wireless, ubiquitous computing emphasises not only mobility but integration into clothing or the human body. In artistic contexts one expects such integrated wearable devices to have the two-way function of interface instruments (e.g. sensor data acquisition and exchange) worn for particular purposes, either for communication with the environment or various aesthetic and compositional expressions.

'Wearable performance' briefly surveys the context for wearables in the performance arts and distinguishes display and performative/interfacial garments. It then focuses on the authors' experiments with 'design in motion' and digital performance, examining prototyping at the DAP-Lab which involves transdisciplinary convergences between fashion and dance, interactive system architecture, electronic textiles, wearable technologies and digital animation. The concept of an 'evolving' garment design that is materialised (mobilised) in live performance between partners originates from DAP-Lab's work with telepresence and distributed media addressing the 'connective tissues' and 'wearabilities' of projected bodies through a study of shared embodiment and perception/proprioception in the wearer (tactile sensory processing). Such notions of wearability are applied both to the immediate sensory processing on the performer's body and to the processing of the responsive, animate environment.
A Model for Using Physiological Conditions for Proactive Tourist Recommendations
Mobile proactive tourist recommender systems can support tourists by
recommending the best choice depending on different contexts related to the
tourist and the environment. In this paper, we propose to utilize wearable sensors to
gather health information about a tourist and use it for recommending tourist
activities. We discuss a range of wearable devices, sensors to infer
physiological conditions of the users, and exemplify the feasibility using a
popular self-quantification mobile app. Our main contribution then comprises a
data model to derive relations between the parameters measured by the wearable
sensors, such as heart rate, body temperature and blood pressure, and use them to
infer the physiological condition of a user. This model can then be used to
derive classes of tourist activities that determine which items should be
recommended.
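The data model sketched in this abstract — sensor readings mapped to an inferred physiological condition, which in turn selects recommendable activity classes — could be illustrated as follows. This is a minimal sketch, not the paper's actual model: the `Vitals` fields, condition labels, thresholds and activity lists are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate_bpm: float
    body_temp_c: float
    systolic_bp: float

def infer_condition(v: Vitals) -> str:
    """Classify a tourist's physiological condition from wearable readings.

    Thresholds are illustrative placeholders, not values from the paper.
    """
    if v.body_temp_c >= 38.0:
        return "feverish"      # suggest rest-oriented items only
    if v.heart_rate_bpm >= 120 or v.systolic_bp >= 150:
        return "strained"      # avoid physically demanding items
    if v.heart_rate_bpm <= 90 and v.systolic_bp < 130:
        return "rested"        # active tours are fine
    return "normal"

# The recommender would then filter activity classes by condition
# (hypothetical mapping for illustration):
ACTIVITIES = {
    "rested": ["hiking tour", "bike rental", "city walk"],
    "normal": ["city walk", "museum visit"],
    "strained": ["museum visit", "scenic cable car"],
    "feverish": ["spa", "hotel rest"],
}

condition = infer_condition(Vitals(70, 36.6, 118))
print(condition, ACTIVITIES[condition])
```

The point of the sketch is the two-step structure the abstract describes: raw wearable parameters are first reduced to a discrete condition, and recommendation then operates on condition-specific activity classes rather than on raw sensor values.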