3,901 research outputs found
Collaborative signal and information processing for target detection with heterogeneous sensor networks
In this paper, we present an approach to target detection and acquisition with heterogeneous sensor networks through strategic resource allocation and coordination. Based on sensor management and collaborative signal and information processing, low-cost, low-capacity sensors are strategically deployed to guide and cue the network's scarce high-performance sensors, improving data quality so that the mission is completed more efficiently and at lower cost. We focus on designing such a network system, in which resource selection and allocation, system behaviour and capacity, target behaviour and patterns, the environment, and multiple constraints such as cost must all be addressed simultaneously. Simulation results offer significant insight into sensor selection and network operation, and demonstrate the substantial benefits of guided search in an application: hunting down and capturing hostile vehicles on the battlefield.
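The cueing strategy the abstract describes can be illustrated with a minimal sketch: cheap sensors sweep a search grid, and the scarce high-performance sensor is tasked only when a cheap sensor raises an alarm. All names, probabilities, and the grid model here are hypothetical, not taken from the paper.

```python
import random

random.seed(0)

def cheap_detect(cell, target_cell, p_false_alarm=0.05):
    """Low-cost, low-capacity sensor: flags its own cell, plus occasional false alarms."""
    if cell == target_cell:
        return True                      # assumed perfect hit at the target, for simplicity
    return random.random() < p_false_alarm

def precise_confirm(cell, target_cell):
    """Scarce high-performance sensor: reliable confirmation, but costly to task."""
    return cell == target_cell

def guided_search(grid_size, target_cell):
    """Cheap sensors sweep the grid; the high-performance sensor is cued only on alarms."""
    cues = 0
    for cell in range(grid_size):
        if cheap_detect(cell, target_cell):
            cues += 1                    # tasking the expensive sensor
            if precise_confirm(cell, target_cell):
                return cell, cues
    return None, cues

found, cues = guided_search(100, target_cell=42)
# The expensive sensor is tasked far fewer than 100 times.
```

The point of the sketch is the cost asymmetry: the expensive sensor is used only on the small set of cells the cheap sensors flag, rather than over the whole grid.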
Aesthetic choices: Defining the range of aesthetic views in interactive digital media including games and 3D virtual environments (3D VEs)
Defining aesthetic choices for interactive digital media such as games is a challenging task. Objective and subjective factors such as colour, symmetry, order and complexity, and statistical features, among others, play an important role in defining the aesthetic properties of interactive digital artifacts. Computational approaches developed in this regard also consider objective factors such as statistical image features for the assessment of aesthetic qualities. However, aesthetics for interactive digital media, such as games, requires more nuanced consideration than simple objective and subjective factors when choosing a range of aesthetic features.
The study found that there is no single optimum position or viewpoint with a corresponding relationship to the aesthetic considerations that influence interactive digital media. Instead, incorporating aesthetic features demonstrates the need to consider each component within interactive digital media as part of a range of possible features, and therefore within a range of possible camera positions. A framework, named PCAWF, emphasized that the combination of features and factors demonstrates the need to define a range of aesthetic viewpoints, which is important for improved user experience. The framework shows that factors including the storyline, user state, gameplay, and application type are critical to defining the reasons behind aesthetic choices. The selection of a range of aesthetic features and characteristics is influenced by four main factors and their associated sub-factors.
This study informs the future of interactive digital media interaction by providing clarity and reasoning behind the aesthetic decisions integrated into automatically generated vision, offering a framework for choosing a range of aesthetic viewpoints in the 3D virtual environment of a game. The study identifies critical juxtapositions between photographic and cinema-based media aesthetics by incorporating qualitative rationales from experts in the interactive digital media field. This research will change the way Artificial Intelligence (AI)-generated interactive digital media chooses visual outputs in terms of camera position, field of view, orientation, contextual considerations, and user experience. It will have impact across automated systems, ensuring that human values, rich variation, and extensive complexity are integrated into the AI-dominated development and design of future interactive digital media production.
Interactive Imitation Learning in Robotics: A Survey
Interactive Imitation Learning (IIL) is a branch of Imitation Learning (IL) in which human feedback is provided intermittently during robot execution, allowing online improvement of the robot's behavior. In recent years, IIL has increasingly carved out its own space as a promising data-driven alternative for solving complex robotic tasks. The advantages of IIL are its data efficiency, as the human feedback guides the robot directly towards an improved behavior, and its robustness, as the distribution mismatch between teacher and learner trajectories is minimized by providing feedback directly over the learner's trajectories. Nevertheless, despite the opportunities that IIL presents, its terminology, structure, and applicability are neither clear nor unified in the literature, slowing down its development and, therefore, the research of innovative formulations and discoveries. In this article, we attempt to facilitate research in IIL and lower entry barriers for new practitioners by providing a survey of the field that unifies and structures it. In addition, we aim to raise awareness of its potential, of what has been accomplished, and of what research questions remain open. We organize the most relevant works in IIL in terms of human-robot interaction (i.e., types of feedback), interfaces (i.e., means of providing feedback), learning (i.e., models learned from feedback and function approximators), user experience (i.e., human perception of the learning process), applications, and benchmarks. Furthermore, we analyze similarities and differences between IIL and RL, discussing how the concepts of offline, online, off-policy, and on-policy learning should be transferred to IIL from the RL literature. We particularly focus on robotic applications in the real world and discuss their implications, limitations, and promising future areas of research.
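The core loop the abstract highlights, where feedback is given directly over the learner's own trajectories to reduce distribution mismatch, can be sketched with a toy DAgger-style example. The environment, the simulated teacher, and the voting policy below are all hypothetical simplifications; the survey covers many richer feedback modalities than corrective labels.

```python
# Minimal DAgger-style interactive imitation loop: the learner acts, a
# (simulated) teacher labels the states the learner actually visits, and the
# policy is retrained on the aggregated dataset.
import random

random.seed(1)

def expert(state):
    """Simulated teacher: always push the 1-D state toward zero."""
    return -1 if state > 0 else 1

class Policy:
    """Toy policy: majority vote over labels seen for each sign of the state."""
    def __init__(self):
        self.data = []                          # aggregated (state, action) pairs
    def act(self, state):
        votes = [a for s, a in self.data if (s > 0) == (state > 0)]
        if not votes:
            return random.choice([-1, 1])       # untrained: act randomly
        return max(set(votes), key=votes.count)
    def train(self, pairs):
        self.data.extend(pairs)                 # dataset aggregation

policy = Policy()
for episode in range(5):
    state, corrections = 5, []
    for _ in range(10):
        action = policy.act(state)              # learner acts on its own states
        corrections.append((state, expert(state)))  # teacher labels those states
        state += action
    policy.train(corrections)                   # retrain after each episode
```

Because the teacher labels the states the learner itself reaches (rather than only demonstrating its own trajectories), the training distribution matches the deployment distribution, which is the robustness argument made in the abstract.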
Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning
Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
Perception architecture exploration for automotive cyber-physical systems
2022 Spring. Includes bibliographical references. In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving performance goals. An efficient perception solution capable of high-fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents caused by human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex problems related to the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy-versus-latency trade-off between one-stage and two-stage object detectors, which makes selecting a suitable object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector performing high-accuracy, low-latency inference, the relative position and orientation of the selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets fall inside the field of view of each sensor, or in the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false positive detections will be high in real time, and appropriate sensor fusion algorithms are required for filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or there are many obstacles on the road.
Position and velocity estimation using sensor fusion algorithms has a lower margin for error when the trajectories of other vehicles in traffic are in the vicinity of the ego vehicle, as incorrect measurements can cause accidents. Due to the various complex inter-dependencies between design decisions, constraints, and optimization goals, building a framework capable of synthesizing perception solutions for automotive cyber-physical platforms is not trivial. We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of global co-optimization of deep learning and sensing infrastructure. The framework can explore the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose a novel optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework can obtain optimal configurations for heterogeneous sensors deployed across two contemporary real vehicles. We then utilize VESPA to create a comprehensive perception architecture synthesis framework called PASTA. This framework enables robust perception for vehicles with ADAS by jointly addressing not only the selection and placement of sensors but also object detection and sensor fusion. Experimental results with the Audi TT and BMW Mini Cooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
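The combined field-of-view reasoning described above, where a configuration can leave blind spots or produce redundant overlap, can be sketched with a simple geometric check. The two-sensor configuration and all parameter values below are illustrative assumptions, not taken from VESPA or PASTA.

```python
import math

def in_fov(sensor, point):
    """Check whether a 2-D point lies inside a sensor's angular field of view and range."""
    dx, dy = point[0] - sensor["x"], point[1] - sensor["y"]
    if math.hypot(dx, dy) > sensor["range"]:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    diff = (bearing - sensor["yaw"] + 180) % 360 - 180   # wrapped angular offset
    return abs(diff) <= sensor["fov_deg"] / 2

# A hypothetical configuration: a wide short-range camera and a narrow
# long-range radar, both forward-facing.
sensors = [
    {"x": 0, "y": 0, "yaw": 0, "fov_deg": 120, "range": 50},   # camera
    {"x": 0, "y": 0, "yaw": 0, "fov_deg": 20,  "range": 150},  # radar
]

def coverage(point):
    """How many sensors see the point: 0 = blind spot, >1 = redundant overlap."""
    return sum(in_fov(s, point) for s in sensors)
```

Evaluating `coverage` over a grid of target positions is one simple way to score a candidate placement: a placement optimizer in the spirit of the paper would search over sensor positions and yaws to trade off blind spots against redundant overlap.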
Modular MRI Guided Device Development System: Development, Validation and Applications
Since the first robotic surgical intervention was performed in 1985 using a PUMA industrial manipulator, development in the field of surgical robotics has been relatively fast paced, despite the tremendous costs involved in developing new robotic interventional devices. This is due to the clear advantages of augmenting a clinician's skill and dexterity with the precision and reliability of computer-controlled motion. A natural extension of robotic surgical intervention is the integration of image-guided interventions, which promise reduced trauma, procedure time, and inaccuracies. Although magnetic resonance imaging (MRI) is one of the most effective imaging modalities for visualizing soft tissue structures within the body, MRI-guided surgical robotics has been frustrated by the high magnetic field in the MRI image space and the extreme sensitivity to electromagnetic interference. The primary contributions of this dissertation relate to enabling the use of direct, live MR imaging to guide and assist interventional procedures, with two focus areas: the creation of an integrated MRI-guided development platform and of a stereotactic neural intervention system. The integrated series of modules of the development platform represents a significant advancement in the practice of creating MRI-guided mechatronic devices, as well as an understanding of the design requirements for creating actuated devices that operate within a diagnostic MRI. This knowledge was gained through a systematic approach to understanding, isolating, characterizing, and circumventing the difficulties associated with developing MRI-guided interventional systems. These contributions have been validated at the levels of the individual modules, the total development system, and several deployed interventional devices. An overview of this work is presented with a summary of contributions and lessons learned along the way.
Physical Diagnosis and Rehabilitation Technologies
The book focuses on the diagnosis, evaluation, and assistance of gait disorders; all the papers have been contributed by research groups working on assistive robotics, instrumentation, and augmentative devices.
A Low Complexity 6DoF Magnetic Tracking System For Biomedical Applications
The abstract is in the attachment.