
    Distributed Robotic Systems in the Edge-Cloud Continuum with ROS 2: a Review on Novel Architectures and Technology Readiness

    Robotic systems are more connected, networked, and distributed than ever. New architectures that comply with the de facto robotics middleware standard, ROS 2, have recently emerged to fill the gap in terms of hybrid systems deployed from edge to cloud. This paper reviews new architectures and technologies that enable containerized robotic applications to run seamlessly at the edge or in the cloud. We also give an overview of systems whose solutions range from extensions to ROS 2 tooling to the integration of Kubernetes with ROS 2. Another important trend is robot learning, where new simulators and cloud simulations enable, for example, large-scale reinforcement learning or distributed federated learning solutions. This has also enabled deeper integration of continuous integration and continuous deployment (CI/CD) pipelines into robotic systems development, going beyond standard software unit tests by using simulated tests to build and validate code automatically. We discuss the current technology readiness and list the potential new application scenarios that are becoming available. Finally, we discuss the current challenges in distributed robotic systems and list open research questions in the field.
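
    As a hedged illustration of the kind of containerized workload such edge-to-cloud architectures target, the sketch below shows a minimal ROS 2 node written with rclpy; the node and topic names are invented for the example, and the same container image could be scheduled at the edge or in the cloud by an orchestrator such as Kubernetes.

# Minimal sketch of a containerized ROS 2 workload (assumes rclpy is available
# inside the container image); node and topic names are illustrative only.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class EdgeTelemetryNode(Node):
    """Publishes a heartbeat that an edge- or cloud-hosted peer can consume."""

    def __init__(self):
        super().__init__('edge_telemetry')
        self.publisher = self.create_publisher(String, 'telemetry', 10)
        self.timer = self.create_timer(1.0, self.publish_heartbeat)

    def publish_heartbeat(self):
        msg = String()
        msg.data = 'edge node alive'
        self.publisher.publish(msg)


def main():
    rclpy.init()
    node = EdgeTelemetryNode()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()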

    Social shaping of digital publishing: exploring the interplay between culture and technology

    The processes and forms of electronic publishing have been changing since the advent of the Web. In recent years, the open access movement has been a major driver of scholarly communication, and change is also evident in other fields such as e-government and e-learning. Whilst many changes are driven by technological advances, an altered social reality is also pushing the boundaries of digital publishing. With 23 articles and 10 posters, Elpub 2012 focuses on the social shaping of digital publishing and explores the interplay between culture and technology. This book contains the proceedings of the conference, consisting of 11 accepted full articles and 12 articles accepted as extended abstracts. The articles are presented in groups and cover the following topics: digital scholarship and publishing; special archives; libraries and repositories; digital texts and readings; and future solutions and innovations. Offering an overview of the current situation and exploring future trends, this book will be of interest to all those whose work involves digital publishing.

    Methods, Algorithms and Software for Bionic Grasping with Obstacle Avoidance Using Octrees and Deep Learning

    The aim of the study is to improve the efficiency of object recognition, grasp pose detection, obstacle detection, motion to the grasping pose with obstacle avoidance, and reliable grasping of an object. The object of the study is the grasping process of a robotic arm in complex conditions, using an octree for path planning with obstacle avoidance. The subject of the study is the models and methods for grasp pose detection, obstacle perception, planning of the grasping path, and successful grasping of an object. Research methods: to solve this problem, methods such as path finding, thresholding, wavelet transforms, feature extraction, artificial neural networks and machine learning are used. The scientific novelty lies in the improvement of the process of object recognition, grasp pose detection, obstacle detection, motion to the grasping pose with obstacle avoidance, and reliable grasping of an object. The practical significance of the results is that the proposed system for object recognition, grasp pose detection, obstacle detection, motion to the grasping pose with obstacle avoidance, and reliable grasping of an object performs its tasks effectively.
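
    To illustrate the octree idea used above for representing obstacles, the following Python sketch builds a simple occupancy octree over 3D obstacle points and answers point-in-obstacle queries; the class, parameters and sample points are invented for the example and are not taken from the thesis.

# Illustrative occupancy octree for obstacle queries (not the thesis code);
# each node covers a cube and subdivides until a minimum edge length is reached.
class OctreeNode:
    def __init__(self, center, half_size, min_half_size=0.05):
        self.center = center            # (x, y, z) cube centre
        self.half_size = half_size      # half of the cube edge length
        self.min_half_size = min_half_size
        self.children = None            # 8 children once subdivided
        self.occupied = False           # leaf marked occupied by a point

    def _child_index(self, p):
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def _subdivide(self):
        cx, cy, cz = self.center
        h = self.half_size / 2.0
        self.children = [
            OctreeNode((cx + (h if i & 1 else -h),
                        cy + (h if i & 2 else -h),
                        cz + (h if i & 4 else -h)), h, self.min_half_size)
            for i in range(8)
        ]

    def insert(self, p):
        """Mark the leaf containing point p (an obstacle sample) as occupied."""
        if self.half_size <= self.min_half_size:
            self.occupied = True
            return
        if self.children is None:
            self._subdivide()
        self.children[self._child_index(p)].insert(p)

    def is_occupied(self, p):
        """True if point p falls in an occupied leaf, i.e. inside an obstacle cell."""
        if self.children is None:
            return self.occupied
        return self.children[self._child_index(p)].is_occupied(p)


# Usage: build the tree from depth-camera points, then let a path planner
# reject candidate poses whose sampled positions hit occupied cells.
tree = OctreeNode(center=(0.0, 0.0, 0.0), half_size=1.0)
tree.insert((0.10, 0.22, -0.05))
print(tree.is_occupied((0.10, 0.22, -0.05)))   # True
print(tree.is_occupied((-0.40, -0.40, 0.40)))  # False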

    A Semantics-based User Interface Model for Content Annotation, Authoring and Exploration

    The Semantic Web and Linked Data movements, which aim to create, publish and interconnect machine-readable information, have gained traction in recent years. However, the majority of information is still contained in and exchanged using unstructured documents, such as Web pages, text documents, images and videos. Nor can this be expected to change, since text, images and videos are the natural way in which humans interact with information. Semantic structuring of content, on the other hand, provides a wide range of advantages compared to unstructured information. Semantically enriched documents facilitate information search and retrieval, presentation, integration, reusability, interoperability and personalization. Looking at the life-cycle of semantic content on the Web of Data, we see considerable progress on the backend side in storing structured content and in linking data and schemata. Nevertheless, from our point of view the least developed aspect of the semantic content life-cycle is the user-friendly manual and semi-automatic creation of rich semantic content. In this thesis, we propose a semantics-based user interface model which aims to reduce the complexity of the underlying technologies for the semantic enrichment of content by Web users. By surveying existing tools and approaches for semantic content authoring, we extracted a set of guidelines for designing efficient and effective semantic authoring user interfaces. We applied these guidelines to devise a semantics-based user interface model called WYSIWYM (What You See Is What You Mean), which enables integrated authoring, visualization and exploration of unstructured and (semi-)structured content. To assess the applicability of our proposed WYSIWYM model, we incorporated the model into four real-world use cases comprising two general and two domain-specific applications. These use cases address four aspects of the WYSIWYM implementation: 1) its integration into existing user interfaces, 2) utilizing it for lightweight text analytics to incentivize users, 3) dealing with crowdsourcing of semi-structured e-learning content, and 4) incorporating it for the authoring of semantic medical prescriptions.

    Integrating institutional repositories into the Semantic Web

    The Web has changed the face of scientific communication, and the Semantic Web promises new ways of adding value to research material by making it more accessible to automatic discovery, linking, and analysis. Institutional repositories contain a wealth of information which could benefit from the application of this technology. In this thesis I describe the problems inherent in the informality of traditional repository metadata and propose a data model based on the Semantic Web which will support more efficient use of this data, with the aim of streamlining scientific communication and promoting efficient use of institutional research output.
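
    As a minimal sketch of the general approach, assuming rdflib and Dublin Core terms, the snippet below maps an invented repository record to RDF triples; the URIs and fields are placeholders, and the data model proposed in the thesis is richer than this.

# Illustrative mapping of a repository record to RDF with rdflib and Dublin Core;
# the record, URIs and field values are invented for the example.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, FOAF, RDF

g = Graph()
item = URIRef("http://repository.example.org/item/1234")
author = URIRef("http://repository.example.org/person/jsmith")

# Descriptive metadata for the repository item.
g.add((item, RDF.type, DCTERMS.BibliographicResource))
g.add((item, DCTERMS.title, Literal("An Example Thesis")))
g.add((item, DCTERMS.creator, author))
g.add((item, DCTERMS.issued, Literal("2012")))

# The author as a first-class, linkable resource rather than a free-text string.
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("J. Smith")))

print(g.serialize(format="turtle"))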

    Establishing a Framework for the development of Multimodal Virtual Reality Interfaces with Applicability in Education and Clinical Practice

    The development of Virtual Reality (VR) and Augmented Reality (AR) content with multiple sources of both input and output has led to countless contributions in a great number of fields, among them medicine and education. Nevertheless, the actual process of integrating the existing VR/AR media and subsequently setting it to purpose is still a highly scattered and esoteric undertaking. Moreover, seldom do the architectures that derive from such ventures comprise haptic feedback in their implementation, which in turn deprives users of one of the paramount aspects of human interaction, their sense of touch. Determined to circumvent these issues, the present dissertation proposes a centralized albeit modularized framework that enables the conception of multimodal VR/AR applications in a novel and straightforward manner. To accomplish this, the framework makes use of a stereoscopic VR Head Mounted Display (HMD) from Oculus Rift©, a hand tracking controller from Leap Motion©, a custom-made VR mount that allows for the assemblage of the two preceding peripherals, and a wearable device of our own design. The latter is a glove that encompasses two core modules: one that conveys haptic feedback to its wearer and another that deals with the non-intrusive acquisition, processing and registering of his/her Electrocardiogram (ECG), Electromyogram (EMG) and Electrodermal Activity (EDA). The software elements of the aforementioned features were all interfaced through Unity3D©, a powerful game engine whose popularity in academic and scientific endeavors is ever increasing. Upon completion of our system, it was time to substantiate our initial claim with thoroughly developed experiences that would attest to its worth. With this premise in mind, we devised a comprehensive repository of interfaces, amid which three merit special consideration: Brain Connectivity Leap (BCL), Ode to Passive Haptic Learning (PHL) and a Surgical Simulator.

    An inertial motion capture framework for constructing body sensor networks

    Motion capture is the process of measuring and subsequently reconstructing the movement of an animated object or being in virtual space. Virtual reconstructions of human motion play an important role in numerous application areas such as animation, medical science, ergonomics, etc. While optical motion capture systems are the industry standard, inertial body sensor networks are becoming viable alternatives due to portability, practicality and cost. This thesis presents an innovative inertial motion capture framework for constructing body sensor networks through software environments, smartphones and web technologies. The first component of the framework is a unique inertial motion capture software environment aimed at providing an improved experimentation environment, accompanied by programming scaffolding and a driver development kit, for users interested in studying or engineering body sensor networks. The software environment provides a bespoke 3D engine for kinematic motion visualisations and a set of tools for hardware integration. The software environment is used to develop the hardware behind a prototype motion capture suit focused on low-power consumption and hardware-centricity. Additional inertial measurement units, which are available commercially, are also integrated to demonstrate the functionality of the software environment while providing the framework with additional sources of motion data. The smartphone is the most ubiquitous computing technology, and its worldwide uptake has prompted many advances in wearable inertial sensing technologies. Smartphones contain gyroscopes, accelerometers and magnetometers, a combination of sensors that is commonly found in inertial measurement units. This thesis presents a mobile application that investigates whether the smartphone is capable of inertial motion capture by constructing a novel omnidirectional body sensor network. This thesis also proposes a novel use for web technologies through the development of the Motion Cloud, a repository and gateway for inertial data. Web technologies have the potential to replace motion capture file formats with online repositories and to set a new standard for how motion data is stored. From a single inertial measurement unit to a more complex body sensor network, the proposed architecture is extendable and facilitates the integration of any inertial hardware configuration. The Motion Cloud's data can be accessed through an application programming interface or through a web portal that provides users with the functionality for visualising and exporting the motion data.
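
    As a hedged illustration of the kind of orientation estimation such inertial body sensor networks rely on, the sketch below implements a standard complementary filter that blends gyroscope and accelerometer readings into a tilt estimate; it is generic textbook code, not the framework or suit firmware described in the thesis.

# Illustrative complementary filter for tilt (pitch/roll) estimation from a
# gyroscope and accelerometer; generic textbook code, not the thesis framework.
import math


def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """Blend integrated gyro rates with accelerometer gravity angles.

    pitch, roll : previous estimates in radians
    gyro        : (gx, gy, gz) angular rates in rad/s
    accel       : (ax, ay, az) specific force in m/s^2
    dt          : sample period in seconds
    alpha       : trust placed in the gyro (close to 1.0)
    """
    ax, ay, az = accel
    gx, gy, _ = gyro

    # Angles implied by gravity alone (noisy but drift-free).
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    accel_roll = math.atan2(ay, az)

    # Gyro integration (smooth but drifting) corrected by the accelerometer.
    pitch = alpha * (pitch + gy * dt) + (1.0 - alpha) * accel_pitch
    roll = alpha * (roll + gx * dt) + (1.0 - alpha) * accel_roll
    return pitch, roll


# Usage with a 100 Hz stream of (gyro, accel) samples:
pitch = roll = 0.0
for gyro, accel in [((0.0, 0.01, 0.0), (0.0, 0.0, 9.81))] * 3:
    pitch, roll = complementary_filter(pitch, roll, gyro, accel, dt=0.01)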

    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine the possible areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Some development content and installations were completed to prove, evaluate and convincingly demonstrate the described concepts. [...] To summarize, the resulting work involves not only artistic creativity, but also solving technological hurdles in motion tracking, pattern recognition, force feedback control, etc., combining them with the available documentary footage on film, video, or images, and text via a variety of devices [....] and programming and installing all the needed interfaces such that it all works in real time. Thus, the contribution to the advancement of knowledge lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research work connects seemingly disjoint fields of research, such as computer graphics, documentary film, interactive media, and theatre performance. Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms.

    Developing a User-Friendly and Modular Framework for Deep Learning Methods in 3D Bioimage Segmentation

    The emergence of deep learning has breathed new life into image analysis, especially for segmentation, a challenging step required to quantify bidimensional (2D) and tridimensional (3D) objects. Despite the promise of deep learning, these methods are only slowly spreading in the biological field. In this PhD project, the 3D nucleus of the cell is used as the object of interest to understand how its shape variations contribute to the organisation of the genetic material. First, a literature survey showed that very few publicly available methods for 3D nucleus segmentation provide the minimum requirements for their reproducibility. These methods were subsequently benchmarked, and only one of them, called nnU-Net, surpassed the best specialized computer vision tool. Based on these observations, a new development philosophy was designed and, from it, Biom3d, a novel deep learning framework, emerged. Biom3d is a user-friendly tool successfully used by biologists involved in 3D nucleus segmentation and provides a new alternative for automatically and accurately computing nuclear shape parameters. Being well optimized, Biom3d also surpasses the performance of cutting-edge methods on a wide variety of biological and medical segmentation problems. Being modular, Biom3d is a sustainable framework compatible with the latest deep learning innovations, such as self-supervised methods. Self-supervision aims to reduce the heavy reliance of deep learning methods on manual annotations by pretraining models on large unannotated datasets to extract information before retraining them on annotated datasets. In this work, a self-supervised approach based on pretraining an entire U-Net model with the Triplet and ArcFace losses was developed and demonstrates significant improvements over supervised methods for 3D segmentation. The performance, modularity and interdisciplinary nature of the tools developed during this project will serve as an innovation platform for a wide panel of users, ranging from biologists to future deep learning developers.
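
    As a hedged sketch of the self-supervised pretraining idea mentioned above, the snippet below runs a single triplet-loss step over embeddings produced by a placeholder 3D encoder in PyTorch; the encoder, data and hyperparameters are illustrative and do not reproduce Biom3d's implementation (the ArcFace part is omitted).

# Illustrative triplet-loss pretraining step in PyTorch; the encoder and the
# random volumes are placeholders, not the Biom3d implementation.
import torch
import torch.nn as nn

# Placeholder 3D encoder standing in for a U-Net encoder.
encoder = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 32),
)

criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# Anchor and positive are two views of the same unannotated volume (here a
# lightly perturbed copy); the negative comes from a different volume.
anchor = torch.randn(4, 1, 16, 16, 16)
positive = anchor + 0.05 * torch.randn_like(anchor)
negative = torch.randn(4, 1, 16, 16, 16)

loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()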