Enhancing Graph Representation of the Environment through Local and Cloud Computation
Enriching the robot representation of the operational environment is a
challenging task that aims at bridging the gap between low-level sensor
readings and high-level semantic understanding. Having a rich representation
often requires computationally demanding architectures and pure point cloud
based detection systems that struggle when dealing with everyday objects that
have to be handled by the robot. To overcome these issues, we propose a
graph-based representation that addresses this gap by providing a semantic
representation of robot environments from multiple sources. To acquire
information from the environment, the framework combines classical computer
vision tools with modern computer vision cloud services, ensuring computational
feasibility on onboard hardware. By incorporating an ontology hierarchy with
over 800 object classes, the framework achieves cross-domain adaptability,
eliminating the need for environment-specific tools. The proposed approach
allows us to also handle small objects and integrate them into the semantic
representation of the environment. The approach is implemented in the Robot
Operating System (ROS) using the RViz visualizer for environment
representation. This work is a first step towards the development of a
general-purpose framework to facilitate intuitive interaction and navigation
across different domains.

Comment: 5 pages, 4 figures
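The graph-based representation described above can be sketched as a simple node-and-edge structure whose object nodes are merged from two detection sources. This is a minimal illustrative sketch: the class names, attributes, and the `local`/`cloud` source tags are assumptions for exposition, not the paper's actual API.

```python
# Hypothetical sketch of a graph-based semantic environment representation,
# merging detections from a local pipeline and a cloud vision service.
class SemanticGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> attributes
        self.edges = []   # (parent_id, child_id, relation)

    def add_node(self, node_id, label, source, position=None):
        self.nodes[node_id] = {
            "label": label,        # object class, e.g. "cup"
            "source": source,      # "local" or "cloud" detection
            "position": position,  # estimated 3D position, if known
        }

    def add_edge(self, parent, child, relation="contains"):
        self.edges.append((parent, child, relation))

    def objects_in(self, parent):
        return [c for p, c, r in self.edges if p == parent and r == "contains"]


g = SemanticGraph()
g.add_node("kitchen", "room", source="local")
# Large furniture found by the onboard point-cloud pipeline ...
g.add_node("table_1", "table", source="local", position=(1.2, 0.4, 0.0))
# ... while small everyday objects come from the cloud vision service.
g.add_node("cup_1", "cup", source="cloud", position=(1.2, 0.5, 0.8))
g.add_edge("kitchen", "table_1")
g.add_edge("table_1", "cup_1")

print(g.objects_in("table_1"))  # ['cup_1']
```

In a real system the node labels would be resolved against the ontology hierarchy of 800+ object classes mentioned in the abstract, so that a "cup" node can also be queried as "container" or "tableware".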
LLM Based Multi-Agent Generation of Semi-structured Documents from Semantic Templates in the Public Administration Domain
Amid the digitalization process of recent years, the creation and management of
documents in various domains, particularly in Public Administration (PA), have
become increasingly complex and diverse. This complexity arises from the need
to handle a wide range of document types, often characterized by
semi-structured forms. Semi-structured documents present a fixed set of data
without a fixed format. As a consequence, a template-based solution cannot be
used, as understanding a document requires the extraction of the data
structure. The recent introduction of Large Language Models (LLMs) has enabled
the creation of customized text output satisfying user requests. In this work,
we propose a novel approach that combines the LLMs with prompt engineering and
multi-agent systems for generating new documents compliant with a desired
structure. The main contribution of this work concerns replacing the commonly
used manual prompting with a task description generated by semantic retrieval
from an LLM. The potential of this approach is demonstrated through a series of
experiments and case studies, showcasing its effectiveness in real-world PA
scenarios.

Comment: Accepted at HCI INTERNATIONAL 2024 - 26th International Conference on
Human-Computer Interaction, Washington Hilton Hotel, Washington DC, USA, 29
June - 4 July 2024
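The core idea above, replacing a manually written prompt with a task description selected by semantic retrieval, can be illustrated with a toy retrieval step. The template store, the bag-of-words similarity, and the document types are illustrative assumptions, not the paper's implementation (which uses an LLM-based retrieval).

```python
# Toy sketch: retrieve the task description whose template key is most
# semantically similar to the user's request, instead of hand-writing a prompt.
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Semantic templates: each maps a PA document type to a task description
# that would be fed to the LLM in place of a manual prompt.
TEMPLATES = {
    "residence certificate": "Generate a residence certificate with the "
        "citizen's full name, address, and issue date.",
    "building permit": "Generate a building permit listing the applicant, "
        "the property, and the authorized works.",
}

def retrieve_task_description(user_request):
    """Pick the template whose key best matches the request."""
    best = max(TEMPLATES, key=lambda k: cosine(k, user_request))
    return TEMPLATES[best]

task = retrieve_task_description("I need a certificate of residence for a citizen")
print(task)
```

A production system would use embedding-based similarity rather than token overlap, but the control flow, retrieve a task description and hand it to the generator, is the same.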
Multi-Agent Coordination for a Partially Observable and Dynamic Robot Soccer Environment with Limited Communication
RoboCup is an international testbed for advancing research in AI and
robotics, focused on a concrete goal: developing a robot team that can win
against the human world soccer champion team by the year 2050. Achieving this
goal makes the coordination of autonomous humanoid robots crucial. This paper explores
novel solutions within the RoboCup Standard Platform League (SPL), where a
reduction in WiFi communication is imperative, leading to the development of
new coordination paradigms. The SPL has experienced a substantial decrease in
network packet rate, compelling the need for advanced coordination
architectures to maintain optimal team functionality in dynamic environments.
Inspired by market-based task assignment, we introduce a novel distributed
coordination system to orchestrate autonomous robots' actions efficiently in
low communication scenarios. This approach has been tested with NAO robots
during official RoboCup competitions and in the SimRobot simulator,
demonstrating a notable reduction in task overlaps in limited communication
settings.

Comment: International Conference of the Italian Association for Artificial
Intelligence (AIxIA 2023) - Italian Workshop on Artificial Intelligence and
Robotics (AIRO), Rome, 6 - 9 November 2023
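Market-based task assignment, which the abstract cites as the inspiration for the coordination system, can be sketched as a greedy auction: every robot bids its estimated cost for every task, and each task goes to the cheapest still-unassigned bidder. The robot names, task names, and distance-as-cost bid are illustrative assumptions; the real system runs this per-robot with only sparse packet exchange.

```python
# Sketch of a market-based (auction) task assignment for a robot soccer team.
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def auction(robots, tasks):
    """Greedy auction: settle bids cheapest-first, one task per robot."""
    assignment = {}
    free = set(robots)
    # Every robot bids its travel cost for every task.
    bids = sorted(
        (distance(robots[r], tasks[t]), r, t)
        for r in robots for t in tasks
    )
    for bid, r, t in bids:
        if r in free and t not in assignment:
            assignment[t] = r
            free.discard(r)
    return assignment

robots = {"nao_1": (0.0, 0.0), "nao_2": (5.0, 0.0)}   # robot positions
tasks = {"defend": (1.0, 0.0), "strike": (6.0, 0.0)}  # task positions
print(auction(robots, tasks))  # {'defend': 'nao_1', 'strike': 'nao_2'}
```

The appeal in a low-bandwidth setting is that bids are cheap to compute locally and deterministic: if every robot knows the same positions, each can run the same auction and reach the same assignment without exchanging further messages, which is how overlapping task selections are avoided.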
Skin Lesion Area Segmentation Using Attention Squeeze U-Net for Embedded Devices
Melanoma is the deadliest form of skin cancer. Early diagnosis of malignant lesions is crucial for reducing mortality. The use of deep learning techniques on dermoscopic images can help track changes in the lesion's appearance over time, which is an important factor for detecting malignant lesions. In this paper, we present a deep learning architecture called Attention Squeeze U-Net for skin lesion area segmentation, specifically designed for embedded devices. The main goal is to increase patient empowerment through the adoption of deep learning algorithms that can run locally on smartphones or low-cost embedded devices. This can be the basis to (1) create a history of the lesion, (2) reduce patient visits to the hospital, and (3) protect the privacy of the users. Quantitative results on publicly available data demonstrate that it is possible to achieve good segmentation results even with a compact model.
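The kind of lightweight attention that keeps such a model compact can be illustrated with a squeeze-and-excitation-style channel gate: global-average-pool the feature map ("squeeze"), pass the result through a tiny two-layer gate ("excitation"), and rescale each channel. This is a generic sketch of the mechanism family, not the paper's actual Attention Squeeze U-Net block; all shapes and weights are toy values.

```python
# Illustrative squeeze-and-excitation-style channel attention in NumPy.
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Reweight channels: squeeze (global average pool) -> excitation
    (small ReLU + sigmoid gate) -> channel-wise rescaling."""
    squeezed = feature_map.mean(axis=(1, 2))       # (C,) per-channel averages
    hidden = np.maximum(0.0, w1 @ squeezed)        # ReLU bottleneck, (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid in (0, 1), (C,)
    return feature_map * gate[:, None, None]       # rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))    # C=8 channels, 16x16 feature map
w1 = rng.standard_normal((2, 8)) * 0.1  # bottleneck with reduction ratio r=4
w2 = rng.standard_normal((8, 2)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

The parameter count of the gate is only 2·C²/r, which is why this style of attention is attractive when the whole network must fit on a smartphone or embedded board.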
Perspective Chapter: European Robotics League – Benchmarking through Smart City Robot Competitions
The SciRoc project, started in 2018, is an EU H2020-funded project supporting the European Robotics League (ERL) and builds on the success of the EU FP7/H2020 projects RoCKIn, euRathlon, EuRoC and ROCKEU2. The ERL is a framework for robot competitions currently consisting of three challenges: ERL Consumer, ERL Professional and ERL Emergency. These three challenge scenarios are set up in urban environments and converge every two years under one major tournament: the ERL Smart Cities Challenge. Smart cities are a new urban innovation paradigm promoting the use of advanced technologies to improve citizens’ quality of life. A key novelty of the SciRoc project is the ERL Smart Cities Challenge, which aims to show how robots will integrate into the cities of the future as physical agents. The SciRoc project ran two such ERL Smart Cities Challenges, the first in Milton Keynes, UK (2019) and the second in Bologna, Italy (2021). In this chapter we evaluate the three challenges of the ERL, explain why the SciRoc project introduced a fourth challenge to bring robot benchmarking to smart cities, and outline the process of conducting a Smart City event under the ERL umbrella. These innovations may pave the way for easier robotic benchmarking in the future.
Preserving HRI Capabilities: Physical, Remote and Simulated Modalities in the SciRoc 2021 Competition
In recent years, robots have been moving out of research laboratories and into
everyday life. Competitions that benchmark the capabilities of a robot in
everyday scenarios help to take a step forward along this path: they foster the
development of robust architectures capable of solving issues that might occur
during human-robot coexistence in human-shaped scenarios. One of those
competitions is SciRoc which, in its second edition, proposed new benchmarking
environments. In particular, Episode 1 of SciRoc 2 proposed three different
modalities of participation while preserving Human-Robot Interaction (HRI),
a fundamental benchmarking functionality. The Coffee Shop environment, used
to challenge the participating teams, represented an excellent testbed for
benchmarking different robotics functionalities, as well as an exceptional
opportunity to propose novel solutions guaranteeing real human-robot
interaction procedures despite the Covid-19 pandemic restrictions. The
developed software is publicly released.
Game Strategies for Physical Robot Soccer Players: A Survey
Effective team strategies and joint decision-making processes are fundamental in modern robotic applications, where multiple units have to cooperate to achieve a common goal. The research community in artificial intelligence and robotics has launched robotic competitions to promote research and validate new approaches, by providing robust benchmarks to evaluate all the components of a multiagent system, ranging from hardware to high-level strategy learning. Among these competitions, RoboCup has a prominent role, running one of the first worldwide multirobot competitions (in the late 1990s) and challenging researchers to develop robotic systems able to compete in the game of soccer. Robotic soccer teams are complex multirobot systems, where each unit shows individual skills and solid teamwork, exchanging information about local perceptions and intentions. In this survey, we dive into the techniques developed within the RoboCup framework, analyzing and commenting on them in detail. We highlight significant trends in the research conducted in the field and provide commentaries and insights about challenges and achievements in generating decision-making processes for multirobot adversarial scenarios. As an outcome, we provide an overview of a body of work that lies at the intersection of three disciplines: artificial intelligence, robotics, and games.
Questioning Items’ Link in Users’ Perception of a Training Robot for Elders
Socially assistive robots are becoming more common in modern society. These robots can accomplish a variety of tasks for people who are exposed to isolation and difficulties. Among them, elderly people represent the largest group, and with them robotics can play new roles. Elderly people are the ones who usually suffer the largest technological gap, so it is worth evaluating their perception when dealing with robots. To this end, the present work addresses the interaction of elderly people during a training session with a humanoid robot. The analysis has been carried out by means of a questionnaire built around four key factors: Motivation, Usability, Likability, and Sociability. The results can contribute to the design and development of social interaction between robots and humans in training contexts, enhancing the effectiveness of human-robot interaction.