Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and suggests some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.
Real-time computation of distance to dynamic obstacles with multiple depth sensors
We present an efficient method to evaluate distances between dynamic obstacles and a number of points of interest (e.g., placed on the links of a robot) when using multiple depth cameras. A depth-space oriented discretization of the Cartesian space is introduced that best represents the workspace monitored by a depth camera, including occluded points. A depth grid map can be initialized offline from the arrangement of the multiple depth cameras, and its peculiar search characteristics allow fusing online the information given by the multiple sensors in a very simple and fast way. The real-time performance of the proposed approach is shown by means of collision avoidance experiments where two Kinect sensors monitor a human-robot coexistence task.
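The core operation the abstract describes, reduced to its simplest form, is: fuse obstacle points seen by several depth sensors into one common frame, then find the closest obstacle to each point of interest on the robot. The following is a minimal brute-force sketch of that idea in plain Python; the function names and sample points are illustrative assumptions, not the paper's actual depth-grid data structure (which precomputes the search offline for speed).

```python
import math

# Hypothetical sketch: obstacle points from multiple depth sensors, already
# transformed into a common Cartesian frame, are merged, and the distance
# from each robot point of interest to the nearest obstacle is computed.
# The real method uses a precomputed depth grid map instead of brute force.

def fuse_obstacles(*point_clouds):
    """Merge obstacle points reported by each depth sensor."""
    fused = []
    for cloud in point_clouds:
        fused.extend(cloud)
    return fused

def min_distances(points_of_interest, obstacles):
    """Brute-force nearest-obstacle distance for each point of interest."""
    return [min(math.dist(p, o) for o in obstacles)
            for p in points_of_interest]

# Two sensors each see a different part of the scene.
cam1 = [(1.0, 0.0, 0.5)]
cam2 = [(0.0, 1.0, 0.5)]
robot_links = [(0.0, 0.0, 0.5)]
print(min_distances(robot_links, fuse_obstacles(cam1, cam2)))  # [1.0]
```

The paper's contribution is precisely avoiding this O(points x obstacles) scan at run time by exploiting the offline-built grid, but the input/output contract is the same.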
Towards the simulation of cooperative perception applications by leveraging distributed sensing infrastructures
With the rapid development of Automated Vehicles (AVs), the boundaries of their functionalities are being pushed and new challenges are being imposed. In increasingly complex and dynamic environments, it is fundamental to rely on more powerful onboard sensors and, usually, AI. However, there are limitations to this approach. As AVs are increasingly integrated into several industries, expectations regarding their cooperation ability are growing, and vehicle-centric approaches to sensing and reasoning become hard to integrate. The proposed approach is to extend perception to the environment, i.e., outside of the vehicle, by making it smarter via the deployment of wireless sensors and actuators. This will vastly improve perception capabilities in dynamic and unpredictable scenarios, often more cheaply, relying mostly on lower-cost sensors and embedded devices whose strength lies in large-scale deployment rather than in centralized sensing abilities. Consequently, to support the development and deployment of such cooperative actions in a seamless way, we require co-simulation frameworks that can encompass multiple perspectives of control and communications for the AVs, the wireless sensors and actuators, and other actors in the environment. In this work, we rely on ROS2 and micro-ROS as the underlying technologies for integrating several simulation tools to construct a framework capable of supporting the development, test, and validation of such smart, cooperative environments. This endeavor was undertaken by building upon an existing simulation framework known as AuNa. We extended its capabilities to facilitate the simulation of cooperative scenarios by incorporating external sensors placed within the environment rather than relying only on vehicle-based sensors. Moreover, we devised a cooperative perception approach within this framework, showcasing its substantial potential and effectiveness. This will enable the demonstration of multiple cooperation scenarios and also ease the deployment phase by relying on the same
software architecture.
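The cooperative-perception idea described above can be reduced to a simple fusion step: detections from external (infrastructure) sensors, expressed in a shared map frame, are transformed into the vehicle's frame and merged with what the vehicle already sees. The sketch below illustrates that step in plain Python; the frame convention, pose format, and deduplication rule are assumptions for illustration, not the thesis' actual ROS2/micro-ROS interfaces.

```python
import math

# Illustrative cooperative-perception fusion: roadside detections (map
# frame) are transformed into the vehicle frame and added only when the
# vehicle's onboard sensors have not already detected the same object.

def to_vehicle_frame(detection, vehicle_pose):
    """Transform an (x, y) map-frame detection into the vehicle frame."""
    vx, vy, yaw = vehicle_pose
    dx, dy = detection[0] - vx, detection[1] - vy
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * dx - s * dy, s * dx + c * dy)

def merge(onboard, infrastructure, vehicle_pose, dedup_radius=0.5):
    """Add infrastructure detections the vehicle does not already see."""
    merged = list(onboard)
    for det in infrastructure:
        local = to_vehicle_frame(det, vehicle_pose)
        if all(math.dist(local, o) > dedup_radius for o in merged):
            merged.append(local)
    return merged

pose = (0.0, 0.0, 0.0)               # vehicle at map origin, facing +x
onboard = [(2.0, 0.0)]               # obstacle ahead, seen onboard
roadside = [(2.0, 0.0), (5.0, 3.0)]  # roadside sensor sees one more
print(merge(onboard, roadside, pose))  # [(2.0, 0.0), (5.0, 3.0)]
```

The second roadside detection, occluded or out of range for the vehicle, is exactly the kind of information the external sensing infrastructure contributes.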
Towards an intelligent and supportive environment for people with physical or cognitive restrictions
The AmbienNet environment has been developed with the aim of demonstrating the feasibility of accessible intelligent environments designed to support people with disabilities and older persons living independently. Its main purpose is to examine in depth the advantages and disadvantages of pervasive supporting systems based on the paradigm of Ambient Intelligence for people with sensory, physical, or cognitive limitations. Hence, diverse supporting technologies and applications have been designed in order to test their accessibility, ease of use, and validity. This paper presents the architecture of the AmbienNet intelligent environment and an intelligent application supporting indoor navigation for smart wheelchairs, designed for validation purposes.
Ministerio de Educación y Ciencia TIN2006-15617-C[01,02,03
A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles
This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is a key driver of future mobility. Obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and IR, and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, when looking at technological solutions for obstacle detection in extreme weather conditions (rain, snow, fog), and in some specific situations in urban areas (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, appear not to be sophisticated enough to guarantee 100% precision and accuracy, hence further substantial effort is needed.
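The review's central suggestion, that combining detection technologies gives a more robust picture than any single sensor, can be sketched as a confidence fusion in which each sensor's vote is weighted by its reliability under the current conditions. The reliability numbers and condition names below are invented purely for illustration; they are not figures from the paper.

```python
# Illustrative weighted fusion of per-sensor detection confidences.
# Reliability weights per condition are assumptions for the sketch only.

RELIABILITY = {
    # condition: (lidar, radar, camera)
    "clear_day": (0.9, 0.8, 0.9),
    "heavy_fog": (0.4, 0.8, 0.2),
    "night":     (0.9, 0.8, 0.3),
}

def fused_confidence(readings, condition):
    """Reliability-weighted average of per-sensor confidences."""
    weights = dict(zip(("lidar", "radar", "camera"), RELIABILITY[condition]))
    total = sum(weights[s] for s in readings)
    return sum(weights[s] * c for s, c in readings.items()) / total

def obstacle_detected(readings, condition, threshold=0.5):
    return fused_confidence(readings, condition) >= threshold

# In fog the camera barely sees the obstacle, but radar is confident,
# so the fused decision still flags it.
readings = {"lidar": 0.5, "radar": 0.9, "camera": 0.1}
print(obstacle_detected(readings, "heavy_fog"))  # True
```

A camera-only system with the same 0.1 confidence would miss this obstacle, which is the failure mode the paper highlights for extreme weather.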
AMTV headway sensor and safety design
A headway sensing system for an automated mixed traffic vehicle (AMTV) employing an array of optical proximity sensor elements is described, and its performance is presented in terms of object detection profiles. The problem of sensing in turns is explored experimentally, and requirements for future turn sensors are discussed. A recommended headway sensor configuration, employing multiple source elements in the focal plane of one lens operating together with a similar detector unit, is described. Alternative concepts including laser radar, ultrasonic sensing, imaging techniques, and radar are compared to the present proximity sensor approach. Design concepts for an AMTV body which will minimize the probability of injury to pedestrians or passengers in the event of a collision are presented.
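A headway sensor array of this kind ultimately feeds a simple decision: given the range reported by each array element (or no return at all), slow or stop when the nearest detection is inside the safety envelope. The following sketch shows that decision logic under assumed thresholds; the distances, element count, and action names are illustrative, not values from the AMTV design.

```python
# Hypothetical headway decision from an array of optical proximity sensor
# elements. Each element reports a detected range in metres, or None when
# it sees nothing. Thresholds are assumptions for illustration.

def headway_action(element_ranges, stop_dist=2.0, slow_dist=5.0):
    """Map the array's detection profile to a vehicle speed command."""
    detections = [r for r in element_ranges if r is not None]
    if not detections:
        return "cruise"
    nearest = min(detections)
    if nearest <= stop_dist:
        return "stop"
    if nearest <= slow_dist:
        return "slow"
    return "cruise"

print(headway_action([None, 4.2, None, 7.0]))   # slow
print(headway_action([1.5, None, None, None]))  # stop
print(headway_action([None, None, None, None])) # cruise
```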
Robot-assisted smart firefighting and interdisciplinary perspectives
Urbanization and changes in modern infrastructure have introduced new challenges to current firefighting practices. Current manual operations and training, including fire investigation, hazardous chemicals detection, fire and rescue, are insufficient to protect firefighters' safety and lives. The firefighting and rescue functions of existing equipment and apparatus, and their dexterity, are limited, particularly in harsh firefighting environments. It is well established that data and informatics are key factors for efficient and smart firefighting operation. This paper provides a review of robot-assisted firefighting systems with interdisciplinary perspectives to identify the needs, requirements, challenges, and future trends that will facilitate smart and efficient operations. The needs and challenges of robot-assisted firefighting systems are first investigated and identified. Subsequently, prevailing firefighting robotic platforms in the literature as well as in practice are scrutinized and discussed, followed by an investigation of localization and navigation support methods. Finally, conclusions and an outlook on future trends are provided.
A Real-Time Wireless Sensor Network for Wheelchair Navigation
Today, the availability of inexpensive, low-power hardware, including CMOS cameras and wireless devices, makes it possible to deploy a wireless sensor network (WSN) with camera-equipped nodes for a variety of applications. In this paper, we discuss the use of one such WSN as a navigation aid for wheelchairs. Instead of having complicated wheelchairs with many on-board sensors, we argue that a viable alternative is to have simpler wheelchairs that are able to interact with an intelligent environment, so that the wheelchair bases its navigation on its software intelligence, supported by the information sent by external sensors. Many questions have to be investigated, for instance how sensors should be deployed or whether the wireless links can meet our temporal requirements. We describe some of the solutions we adopted, particularly how to implement with Zigbee devices a polling mechanism that allows us to guarantee real-time, secure navigation.
Ministerio de Educación y Ciencia TIN2006-15617-C03-03
Junta de Andalucía P06-TIC-229
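The polling mechanism mentioned in the abstract amounts to a coordinator querying each sensor node in turn and treating the cycle as real-time safe only if every node answers within its slot. The sketch below captures that contract; the node names, slot length, and simulated round-trip times are assumptions for illustration, not the paper's measured Zigbee figures.

```python
# Sketch of a round-robin polling cycle with per-node deadline checking,
# in the spirit of the Zigbee polling mechanism the paper describes.
# Slot length and reply times are invented for the example.

def poll_cycle(nodes, reply_time, slot_ms=20):
    """Poll each node once; return (responses, deadline_met)."""
    responses = {}
    deadline_met = True
    for node in nodes:
        t = reply_time(node)      # measured round-trip time, in ms
        if t <= slot_ms:
            responses[node] = t
        else:
            deadline_met = False  # missed slot: treat this reading as stale
    return responses, deadline_met

# Simulated round-trip times for three camera nodes; cam3 misses its slot.
times = {"cam1": 8, "cam2": 15, "cam3": 27}
resp, ok = poll_cycle(times, times.get)
print(ok, sorted(resp))  # False ['cam1', 'cam2']
```

Polling (rather than letting nodes transmit freely) is what makes the worst-case cycle time bounded, which is the property a real-time navigation guarantee needs.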