
    MBAT: A scalable informatics system for unifying digital atlasing workflows

    Abstract
    Background: Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continue to grow, searching, referencing, and comparing these data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment that accelerates the workflow of gathering, aligning, and analyzing the data.
    Results: The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free, open-source application that unifies and accelerates the digital atlas workflow. A tiered plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend, and allow future extensions of, the basic workspace functionality. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as support for multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data.
    Conclusions: MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context. Through its extensible tiered plug-in architecture, MBAT allows researchers to customize all platform components to quickly achieve personalized workflows.
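
    The tiered plug-in architecture described above can be pictured with a small sketch. Everything below (the Workspace, ToolPlugin, and DataSource classes and the example sources) is a hypothetical illustration of the general pattern, namely plug-ins registering against a workspace and a single query fanned out to several data sources; it is not MBAT's actual API.

```python
# Minimal sketch of a tiered plug-in workspace with federated search.
# All names (Workspace, ToolPlugin, DataSource, ...) are hypothetical
# illustrations of the architecture described in the abstract, not MBAT's API.
from abc import ABC, abstractmethod

class DataSource(ABC):
    """A queryable remote or local resource."""
    @abstractmethod
    def search(self, query: str) -> list[str]: ...

class ToolPlugin(ABC):
    """A tool that extends a workspace's basic functionality."""
    @abstractmethod
    def activate(self, workspace: "Workspace") -> None: ...

class Workspace:
    """Tier that hosts plug-ins and fans a single query out to all sources."""
    def __init__(self) -> None:
        self.sources: list[DataSource] = []
        self.plugins: list[ToolPlugin] = []

    def register_source(self, source: DataSource) -> None:
        self.sources.append(source)

    def register_plugin(self, plugin: ToolPlugin) -> None:
        self.plugins.append(plugin)
        plugin.activate(self)

    def federated_search(self, query: str) -> dict[str, list[str]]:
        # One query; results are gathered per source into a common view.
        return {type(s).__name__: s.search(query) for s in self.sources}

class GeneExpressionSource(DataSource):      # hypothetical example source
    def search(self, query: str) -> list[str]:
        return [f"expression record matching '{query}'"]

class AtlasLabelSource(DataSource):          # hypothetical example source
    def search(self, query: str) -> list[str]:
        return [f"atlas label matching '{query}'"]

class OntologyViewerPlugin(ToolPlugin):      # hypothetical tool plug-in
    def activate(self, workspace: Workspace) -> None:
        print("ontology viewer attached to workspace")

if __name__ == "__main__":
    ws = Workspace()
    ws.register_source(GeneExpressionSource())
    ws.register_source(AtlasLabelSource())
    ws.register_plugin(OntologyViewerPlugin())
    print(ws.federated_search("hippocampus"))
```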

    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction

    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods to disambiguate natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions -- mixed reality, augmented reality, and a monitor as the baseline -- using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall showed a preference for the augmented reality condition over the monitor and mixed reality conditions.
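
    As a rough illustration of the kind of disambiguation step such a system performs, the sketch below filters candidate objects by a verbal request and, when more than one object matches, labels the candidates in the visualisation and asks a single clarification question. The scene objects, the keyword matcher, and the text prompt are invented for illustration; the paper's perception pipeline, AR/MR rendering, and robot control are not represented.

```python
# Rough sketch of visual reference disambiguation for a pick-up request.
# The scene objects, attribute matching, and clarification prompt below are
# invented for illustration; the paper's perception, AR/MR rendering, and
# robot control layers are not shown.
from dataclasses import dataclass

@dataclass
class SceneObject:
    obj_id: int
    colour: str
    category: str

COLOURS = {"red", "green", "blue", "yellow"}

def matches(obj: SceneObject, request: str) -> bool:
    """Naive keyword match: the category must be mentioned, and the colour
    must either be mentioned or left unspecified in the request."""
    words = set(request.lower().split())
    colour_mentioned = bool(words & COLOURS)
    return obj.category in words and (not colour_mentioned or obj.colour in words)

def disambiguate(scene: list[SceneObject], request: str) -> SceneObject:
    candidates = [o for o in scene if matches(o, request)]
    if not candidates:
        raise ValueError("no object matches the request")
    if len(candidates) == 1:
        return candidates[0]
    # Ambiguous reference: highlight each candidate in the chosen display
    # (monitor, AR, or MR) and ask the user to pick one.
    for i, obj in enumerate(candidates, start=1):
        print(f"[{i}] highlighted: {obj.colour} {obj.category}")
    choice = int(input("Which one do you mean? "))
    return candidates[choice - 1]

if __name__ == "__main__":
    scene = [SceneObject(1, "red", "cup"),
             SceneObject(2, "blue", "cup"),
             SceneObject(3, "red", "block")]
    target = disambiguate(scene, "pick up the cup")
    print(f"picking up object {target.obj_id}")
```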

    A Distributed Software Architecture for Collaborative Teleoperation based on a VR Platform and Web Application Interoperability

    Augmented Reality and Virtual Reality can provide a Human Operator (HO) with real help in completing complex tasks, such as robot teleoperation and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, safer, and more easily with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, and it was impossible to control a distant robot from both simultaneously. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is its support for different mobile platforms to control one or many robots.
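
    As an assumption on our part (the abstract does not spell out the protocol or middleware), the interoperability layer can be imagined as a shared command relay that accepts commands from both the VR client and the Web client and serialises them onto a single robot, letting the two platforms collaborate without issuing conflicting motions at the same instant. The class and command names below are purely illustrative.

```python
# Hypothetical sketch of a relay that lets heterogeneous clients (a VR
# application and a Web application) share control of one robot. The command
# format, queueing policy, and class names are assumptions made for
# illustration; the paper's actual architecture is not reproduced here.
import queue
import threading
import time

class CommandRelay:
    """Serialises commands from several clients onto a single robot."""
    def __init__(self) -> None:
        self.commands: "queue.Queue[tuple[str, str]]" = queue.Queue()

    def submit(self, client: str, command: str) -> None:
        # Any client (VR or Web) can enqueue a command at any time.
        self.commands.put((client, command))

    def run_robot(self, stop: threading.Event) -> None:
        # The robot consumes commands one at a time, so simultaneous
        # operators never execute conflicting motions at the same instant.
        while not stop.is_set():
            try:
                client, command = self.commands.get(timeout=0.1)
            except queue.Empty:
                continue
            print(f"robot executing '{command}' requested by {client}")

if __name__ == "__main__":
    relay = CommandRelay()
    stop = threading.Event()
    robot = threading.Thread(target=relay.run_robot, args=(stop,))
    robot.start()
    relay.submit("VR client", "move arm to waypoint A")
    relay.submit("Web client", "open gripper")
    time.sleep(0.5)
    stop.set()
    robot.join()
```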

    Collaborative visualization and virtual reality in construction projects

    In the Colombian construction industry it is recognized as a general practice that different designers deliver 2D drawings to the project construction team. Some 3D modeling applications are used, but only with commercial intentions, thus wasting visualization tools that facilitate the understanding of the project, that allow the coordination of plans between different specialists, and that can prevent errors with high impact on costs during the construction phase of the project. As a continuation of the project "immersive virtual reality for construction" developed by EAFIT University, the present work intends to demonstrate how a collaborative virtual environment can help improve the visualization of construction projects and achieve interaction between different specialties, evaluating the impact of collaborative work on the design process of such projects. The end result of this research is an application created using freely available tools, together with a use-case scenario showing how the application can be used to hold review meetings between different specialists in real time. Initial tests of the system have been carried out with civil engineering students, showing that this virtual reality tool eases the burden of performing reviews that traditionally required printed plans and sharing the same geographical space.

    EPOS: evolving personal to organizational knowledge spaces

    EPOS will evolve the user's personal workspace, with its manifold native information structures, into a personal knowledge space and, in cooperation with other personal workspaces, contribute to the organizational knowledge space represented in the organizational memory. This first milestone presents results from the project's first year in the areas of the personal information model, user observation for context elicitation, collaborative information retrieval, and information visualization.