Public entities driven robotic innovation in urban areas
Cities face new challenges and must satisfy and improve their citizens' lifestyles under the concept of the "Smart City". Achieving this goal in a comprehensive manner requires new technologies such as robotics, but public entities are often unaware of the solutions this technology can offer for their needs. This paper explains the development of Innovative Public Procurement instruments, specifically the PDTI process (Public end-users Driven Technological Innovation), as a driving force for robotic research and development, and presents a list of urban robotic challenges proposed by the European cities that participated in this process. In the next phases of the procedure, this will yield novel robotic solutions addressed to public demand, providing an example for other Smart Cities to follow.
Peer Reviewed. Postprint (author's final draft)
Pedestrian Detection with Wearable Cameras for the Blind: A Two-way Perspective
Blind people have limited access to information about their surroundings,
which is important for ensuring one's safety, managing social interactions, and
identifying approaching pedestrians. With advances in computer vision, wearable
cameras can provide equitable access to such information. However, the
always-on nature of these assistive technologies poses privacy concerns for
parties that may get recorded. We explore this tension from both perspectives,
those of sighted passersby and blind users, taking into account camera
visibility, in-person versus remote experience, and extracted visual
information. We conduct two studies: an online survey with MTurkers (N=206) and
an in-person experience study between pairs of blind (N=10) and sighted (N=40)
participants, where blind participants wear a working prototype for pedestrian
detection and pass by sighted participants. Our results suggest that the
perspectives of both users and bystanders, together with the factors listed
above, need to be carefully considered to mitigate potential social tensions.
Comment: The 2020 ACM CHI Conference on Human Factors in Computing Systems
(CHI 2020)
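The study above involves a working prototype that detects approaching pedestrians for a blind user. As an illustration only (the paper does not publish its implementation), the sketch below shows one common way such a prototype could turn detector output into proximity alerts, using a pinhole-camera distance estimate. The focal length, average pedestrian height, and alert threshold are all assumed values, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical camera parameters -- illustrative assumptions, not from the paper.
FOCAL_LENGTH_PX = 800.0      # assumed focal length, in pixels
AVG_PERSON_HEIGHT_M = 1.7    # assumed average pedestrian height, in metres
ALERT_DISTANCE_M = 3.0       # assumed distance threshold for alerting the user

@dataclass
class Detection:
    """A pedestrian bounding box from a detector (x, y, width, height in pixels)."""
    x: int
    y: int
    w: int
    h: int

def estimate_distance(det: Detection) -> float:
    """Pinhole-camera estimate: distance = focal_length * real_height / pixel_height."""
    return FOCAL_LENGTH_PX * AVG_PERSON_HEIGHT_M / det.h

def should_alert(detections: list[Detection]) -> bool:
    """Alert the user if any detected pedestrian is within the threshold distance."""
    return any(estimate_distance(d) <= ALERT_DISTANCE_M for d in detections)
```

For example, a 500-pixel-tall detection maps to roughly 2.7 m under these assumptions and triggers an alert, while a 200-pixel-tall one maps to about 6.8 m and does not.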
VANET Applications: Hot Use Cases
Current challenges for car manufacturers are to make roads safe, to achieve
free-flowing traffic with little congestion, and to reduce pollution through
effective fuel use. To reach these goals, many improvements are made in-car,
but more and more approaches rely on connected cars with communication
capabilities between cars, with an infrastructure, or with IoT devices.
Monitoring and coordinating vehicles then makes it possible to compute
intelligent modes of transportation. Connected cars have introduced a new way
of thinking about cars: not only as a means for a driver to go from A to B,
but as smart cars, an extension of the user like the smartphone today. In this
report, we introduce concepts and specific vocabulary in order to classify
current innovations and ideas on the emerging topic of smart cars. We present
a graphical categorization showing this evolution as a function of societal
evolution. Different perspectives are adopted: a vehicle-centric view, a
vehicle-network view, and a user-centric view, described by simple and complex
use cases and illustrated by a list of emerging and current projects from the
academic and industrial worlds. We identify a gap in innovation between users
and their cars: paradoxically, although the two constantly interact, they are
separated by different application uses. The future challenge is to interlace
the social concerns of the user with intelligent and efficient driving.
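The vehicle-to-vehicle communication the report describes typically rests on periodic awareness beacons exchanged between cars. As a rough, hedged sketch of the idea (the field names and JSON encoding are illustrative assumptions, not the ETSI CAM wire format), a minimal beacon could look like this:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AwarenessMessage:
    """A minimal CAM-style V2V beacon. The field set is illustrative only."""
    vehicle_id: str
    timestamp: float   # seconds since some shared epoch
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float

def encode(msg: AwarenessMessage) -> bytes:
    """Serialize the beacon for broadcast, e.g. over a UDP socket."""
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(payload: bytes) -> AwarenessMessage:
    """Reconstruct a beacon received from a neighbouring vehicle."""
    return AwarenessMessage(**json.loads(payload.decode("utf-8")))
```

Each car would broadcast such a message several times per second; receivers aggregate them to build the local traffic picture used for coordination.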
Non-Invasive Ambient Intelligence in Real Life: Dealing with Noisy Patterns to Help Older People
This paper aims to contribute to the field of ambient intelligence from the perspective of real environments, where noise levels in datasets are significant, by showing how machine learning techniques can support knowledge creation through software sensors. The created knowledge can be made actionable to develop features that help deal with problems related to minimally labelled datasets. A case study is presented and analysed with the aim of inferring high-level rules that can help anticipate abnormal activities, and the potential benefits of integrating these technologies are discussed in this context. The contribution also analyses the use of the models for knowledge transfer when different sensors with different settings contribute to the noise levels. Finally, based on the authors' experience, a framework proposal for creating valuable and aggregated knowledge is depicted.
This research was partially funded by Fundación Tecnalia Research & Innovation, and J.O.-M. also wishes to acknowledge the support obtained from the EU RFCS programme through project number 793505 '4.0 Lean system integrating workers and processes (WISEST)' and from grant PRX18/00036 awarded by the Spanish Secretaría de Estado de Universidades, Investigación, Desarrollo e Innovación del Ministerio de Ciencia, Innovación y Universidades.
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real time, is expected. Current
approaches present particular combinations of different image features and
quantitative methods to accomplish specific objectives such as object
detection, activity recognition, user-machine interaction, and so on. This
paper summarizes the evolution of the state of the art in First Person Vision
video analysis between 1997 and 2014, highlighting, among others, the most
commonly used features, methods, challenges and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart
Glasses, Computer Vision, Video Analytics, Human-machine Interaction
Context Aware Computing for The Internet of Things: A Survey
As we are moving towards the Internet of Things (IoT), the number of sensors
deployed around the world is growing at a rapid pace. Market research has shown
a significant growth of sensor deployments over the past decade and has
predicted a significant increment of the growth rate in the future. These
sensors continuously generate enormous amounts of data. However, in order to
add value to raw sensor data we need to understand it. Collection, modelling,
reasoning, and distribution of context in relation to sensor data play a
critical role in this challenge. Context-aware computing has proven to be
successful in understanding sensor data. In this paper, we survey context
awareness from an IoT perspective. We present the necessary background by
introducing the IoT paradigm and context-aware fundamentals at the beginning.
Then we provide an in-depth analysis of context life cycle. We evaluate a
subset of projects (50) which represent the majority of research and commercial
solutions proposed in the field of context-aware computing conducted over the
last decade (2001-2011) based on our own taxonomy. Finally, based on our
evaluation, we highlight the lessons to be learnt from the past and some
possible directions for future research. The survey addresses a broad range of
techniques, methods, models, functionalities, systems, applications, and
middleware solutions related to context awareness and IoT. Our goal is not only
to analyse, compare and consolidate past research work but also to appreciate
their findings and discuss their applicability towards the IoT.
Comment: IEEE Communications Surveys & Tutorials Journal, 201
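The context life cycle this survey analyses is commonly described as four phases: acquisition, modelling, reasoning, and distribution. As a toy illustration of how the phases chain together (all sensor names, thresholds, and subscribers below are assumptions for the sketch, not from the survey), consider:

```python
def acquire(raw_readings):
    """Acquisition: collect raw sensor samples (here, a static list stands in
    for reading from real devices)."""
    return raw_readings

def model(samples):
    """Modelling: attach meaning (sensor type, units) to the raw numbers."""
    return [{"sensor": "temperature", "celsius": v} for v in samples]

def reason(context):
    """Reasoning: derive higher-level context from the modelled data.
    The 30 degree threshold is an arbitrary illustrative choice."""
    avg = sum(c["celsius"] for c in context) / len(context)
    return "hot" if avg > 30 else "comfortable"

def distribute(situation, subscribers):
    """Distribution: deliver the derived context to interested consumers."""
    return {name: situation for name in subscribers}

readings = acquire([28.5, 31.0, 33.5])
situation = reason(model(readings))
notified = distribute(situation, ["hvac", "dashboard"])
```

Real systems replace each stub with substantial machinery (sensor drivers, ontologies, probabilistic reasoners, publish/subscribe middleware), but the flow of data through the four phases is the same.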