MBAT: A scalable informatics system for unifying digital atlasing workflows
Background: Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continue to advance and grow, searching, referencing, and comparing these data with a researcher's own data becomes essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment that accelerates the workflow to gather, align, and analyze the data.
Results: The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free, open-source application that unifies and accelerates the digital atlas workflow. A tiered plug-in architecture was designed around the neuroinformatics and genomics goals of the project to keep the design modular and extensible. MBAT provides the ability to search and retrieve data from multiple data sources with a single query, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend the basic workspace functionality and to allow future extensions of it. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as support for multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data.
Conclusions: MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context. Through its extensible tiered plug-in architecture, MBAT allows researchers to customize all platform components to quickly achieve personalized workflows.
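The tiered plug-in design described above can be pictured as a registry that fans a single query out to every registered data source. The following Python sketch is purely illustrative and all names in it are invented; MBAT itself is a Java application, and its actual plug-in API is not shown in the abstract.

```python
# Hypothetical sketch of a plug-in registry that fans one query out to
# several data sources, in the spirit of MBAT's unified search. All class
# and method names are invented for illustration; MBAT's real API differs.
from abc import ABC, abstractmethod

class DataSourcePlugin(ABC):
    """One pluggable data source (e.g., a gene-expression or image database)."""

    name: str = "unnamed"

    @abstractmethod
    def search(self, query: str) -> list[dict]:
        """Return records matching the query, as source-agnostic dicts."""

class PluginRegistry:
    """Tier that hides individual sources behind a single search call."""

    def __init__(self) -> None:
        self._plugins: list[DataSourcePlugin] = []

    def register(self, plugin: DataSourcePlugin) -> None:
        self._plugins.append(plugin)

    def search_all(self, query: str) -> dict[str, list[dict]]:
        # One query is dispatched to every registered source; results stay
        # grouped by source so the caller can composite them later.
        return {p.name: p.search(query) for p in self._plugins}

class InMemorySource(DataSourcePlugin):
    """Toy source used to exercise the registry."""

    def __init__(self, name: str, records: list[dict]) -> None:
        self.name = name
        self._records = records

    def search(self, query: str) -> list[dict]:
        return [r for r in self._records if query.lower() in r["label"].lower()]

if __name__ == "__main__":
    registry = PluginRegistry()
    registry.register(InMemorySource("atlas-a", [{"label": "hippocampus"}]))
    registry.register(InMemorySource("atlas-b", [{"label": "Hippocampal CA1"}]))
    print(registry.search_all("hippocamp"))
```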
The Cardiac Atlas Project—an imaging database for computational modeling and statistical atlases of the heart
MOTIVATION: Integrative mathematical and statistical models of cardiac anatomy and physiology can play a vital role in understanding cardiac disease phenotype and planning therapeutic strategies. However, the accuracy and predictive power of such models is dependent upon the breadth and depth of noninvasive imaging datasets. The Cardiac Atlas Project (CAP) has established a large-scale database of cardiac imaging examinations and associated clinical data in order to develop a shareable, web-accessible, structural and functional atlas of the normal and pathological heart for clinical, research and educational purposes. A goal of CAP is to facilitate collaborative statistical analysis of regional heart shape and wall motion and characterize cardiac function among and within population groups.
RESULTS: Three main open-source software components were developed: (i) a database with a web interface; (ii) a modeling client for 3D + time visualization and parametric description of shape and motion; and (iii) open data formats for semantic characterization of models and annotations. The database was implemented using a three-tier architecture utilizing MySQL, JBoss and Dcm4chee, in compliance with the DICOM standard to provide compatibility with existing clinical networks and devices. Parts of Dcm4chee were extended to expose image-specific attributes as search parameters. To date, approximately 3000 de-identified cardiac imaging examinations are available in the database. All software components developed by the CAP are open source and freely available under the Mozilla Public License Version 1.1 (http://www.mozilla.org/MPL/MPL-1.1.txt).
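As a hedged illustration of the DICOM compatibility mentioned above, a standard C-FIND study query against a dcm4chee-style archive can be issued with the pynetdicom library. This is not the CAP modeling client; the host, port, and AE titles below are placeholder values.

```python
# Illustrative DICOM C-FIND study query against a dcm4chee-style archive
# using pynetdicom. This is NOT the CAP client; host, port, and AE titles
# are placeholders.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="TEST_SCU")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

# Query keys: all cardiac MR studies, returning study descriptions.
query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.ModalitiesInStudy = "MR"
query.StudyDescription = ""  # empty value means "return this attribute"

assoc = ae.associate("archive.example.org", 11112, ae_title="DCM4CHEE")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
        query, StudyRootQueryRetrieveInformationModelFind
    ):
        if status and status.Status in (0xFF00, 0xFF01):  # pending = match
            print(identifier.get("StudyDescription", ""))
    assoc.release()
```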
Concepts and methods to support the development and evaluation of remote collaboration using augmented reality
Remote collaboration using Augmented Reality (AR) shows great potential to establish common ground in physically distributed scenarios where team members need to achieve a shared goal. However, most research efforts in this field have been devoted to experimenting with the enabling technology and proposing methods to support its development. As the field evolves, evaluating and characterizing the collaborative process becomes an essential, but difficult, endeavor for better understanding the contributions of AR. In this thesis, we conducted a critical analysis to identify the main limitations and opportunities of the field, while situating its maturity and proposing a roadmap of important research actions. Next, a human-centered design methodology was adopted, involving industrial partners to probe how AR could support their needs during remote maintenance. These outcomes were combined with methods from the literature into an AR prototype, which was evaluated through a user study. From this, the need became clear for a deeper reflection to better understand the dimensions that influence, and must be considered in, collaborative AR. Hence, a conceptual model and a human-centered taxonomy were proposed to foster the systematization of perspectives. Based on the proposed model, an evaluation framework for contextualized data gathering and analysis was developed, supporting the design and performance of distributed evaluations in a more informed and complete manner. To instantiate this vision, the CAPTURE toolkit was created, providing an additional perspective based on selected dimensions of collaboration and pre-defined measurements to obtain in situ data about them, which can be analyzed using an integrated visualization dashboard. The toolkit successfully supported evaluations of several team members during tasks of remote maintenance mediated by AR, showing its versatility and potential in eliciting a comprehensive characterization of the added value of AR in real-life situations, and establishing itself as a general-purpose solution, potentially applicable to a wider range of collaborative scenarios.
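The abstract does not expose CAPTURE's API, but the underlying idea of recording pre-defined measurements against selected collaboration dimensions for later dashboard analysis can be sketched as follows. Every name in this Python snippet is hypothetical.

```python
# Hypothetical sketch of in-situ logging of collaboration measurements,
# in the spirit of the CAPTURE toolkit described above. The dimension
# names and the API are invented for illustration.
import json
import time
from dataclasses import asdict, dataclass, field

@dataclass
class Measurement:
    dimension: str      # e.g., "communication", "awareness"
    metric: str         # e.g., "utterance_count", "pointing_gesture"
    value: float
    member: str         # which team member produced the sample
    timestamp: float = field(default_factory=time.time)

class SessionLog:
    """Collects timestamped measurements during a remote AR session."""

    def __init__(self, session_id: str) -> None:
        self.session_id = session_id
        self.samples: list[Measurement] = []

    def record(self, m: Measurement) -> None:
        self.samples.append(m)

    def export(self, path: str) -> None:
        # One JSON file per session, ready for a visualization dashboard.
        with open(path, "w") as f:
            json.dump([asdict(s) for s in self.samples], f, indent=2)

log = SessionLog("maintenance-trial-01")
log.record(Measurement("communication", "utterance_count", 1, "remote-expert"))
log.record(Measurement("awareness", "pointing_gesture", 1, "on-site-technician"))
log.export("maintenance-trial-01.json")
```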
Affective Computing and Augmented Reality for Car Driving Simulators
Car driving simulators are essential for training and for analyzing the behavior, responses, and performance of the driver. Augmented Reality (AR) is the technology that enables virtual images to be overlaid on views of the real world. Affective Computing (AC) is the technology that helps computer systems read emotions by analyzing body gestures, facial expressions, speech, and physiological signals. The key aspect of this research lies in investigating novel interfaces that help build situational awareness and emotional awareness, to enable affect-driven remote collaboration in AR for car driving simulators. The problem addressed is how to build situational awareness (using AR technology) and emotional awareness (using AC technology), and how to integrate these two distinct technologies [4] into a unique affective framework for training in a car driving simulator.
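The abstract stays at the conceptual level; as a minimal hypothetical sketch of affect-driven adaptation, an emotion estimate derived from physiological signals could gate which AR overlays the driver sees. The classifier, thresholds, and overlay names below are invented for illustration and are not from the source.

```python
# Minimal hypothetical sketch of an affect-driven AR loop for a driving
# simulator: an emotion estimate gates which overlays are shown. All
# classes, mappings, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class AffectEstimate:
    valence: float   # -1 (negative) .. +1 (positive)
    arousal: float   # 0 (calm) .. 1 (highly aroused)

def classify_affect(heart_rate_bpm: float, skin_conductance: float) -> AffectEstimate:
    # Stand-in for a real classifier over physiological signals.
    arousal = min(1.0, max(0.0, (heart_rate_bpm - 60.0) / 60.0))
    valence = -skin_conductance  # toy mapping only
    return AffectEstimate(valence=valence, arousal=arousal)

def select_overlays(affect: AffectEstimate) -> list[str]:
    # High arousal: reduce clutter to basic hazard cues; calm: full guidance.
    if affect.arousal > 0.7:
        return ["hazard_highlight"]
    return ["hazard_highlight", "navigation_arrows", "instructor_annotations"]

print(select_overlays(classify_affect(heart_rate_bpm=130, skin_conductance=0.4)))
```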
Spatial Interaction for Immersive Mixed-Reality Visualizations
Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics.
Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis.
Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis.
Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research.
One of the resulting challenges, however, is the design of user interaction for these often complex systems.
In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions:
1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them?
2) How does spatial interaction benefit these visualizations and how should such interactions be designed?
3) How can spatial interaction in these immersive environments be analyzed and evaluated?
To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts.
For the second question, I study how spatial interaction in particular can help to explore data in mixed reality.
There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels.
Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights.
Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.
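The dissertation's analysis tooling is not described in the abstract; as one hedged example of analyzing spatial interaction in place, simple metrics such as path length and dwell time near a visualization anchor can be computed from logged head poses. The log format assumed here (timestamp plus x, y, z position) is an assumption, not the thesis's actual format.

```python
# Hypothetical analysis of logged head positions from a mixed-reality
# session: total path length, and dwell time within 1 m of a visualization
# anchor. The log format (timestamp, x, y, z) is an assumption.
import math

Pose = tuple[float, float, float, float]  # (timestamp, x, y, z)

def path_length(samples: list[Pose]) -> float:
    """Total distance traveled, summed over consecutive samples."""
    total = 0.0
    for (_, x0, y0, z0), (_, x1, y1, z1) in zip(samples, samples[1:]):
        total += math.dist((x0, y0, z0), (x1, y1, z1))
    return total

def dwell_time(samples: list[Pose], anchor: tuple[float, float, float],
               radius: float = 1.0) -> float:
    """Seconds spent within `radius` meters of `anchor`."""
    dwell = 0.0
    for (t0, x0, y0, z0), (t1, *_rest) in zip(samples, samples[1:]):
        if math.dist((x0, y0, z0), anchor) <= radius:
            dwell += t1 - t0
    return dwell

log = [(0.0, 0.0, 1.7, 0.0), (1.0, 0.5, 1.7, 0.0), (2.0, 0.9, 1.7, 0.1)]
print(path_length(log), dwell_time(log, anchor=(1.0, 1.7, 0.0)))
```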
BioIMAX: a Web2.0 approach to visual data mining in bioimage data
Loyek C. BioIMAX: a Web2.0 approach to visual data mining in bioimage data. Bielefeld: Universität Bielefeld; 2012.
An IoT-Based Framework of WebVR Visualization for Medical Big Data in Connected Health
Recently, telemedicine has been widely applied in remote diagnosis, treatment, and counseling, where Internet of Things (IoT) technology plays an important role. In telemedicine, data are collected from remote medical equipment, such as CT and MRI machines, and then transmitted and reconstructed locally in three dimensions. Because the reconstructed model involves a large amount of data to be transmitted and storage capacity is limited, the data need to be compressed progressively before transmission. On this basis, we propose a lightweight progressive transmission algorithm for big-data visualization in telemedicine that improves transmission efficiency while achieving lossless transmission of the original data. Moreover, a novel four-layer IoT-based system architecture is introduced, comprising the sensing layer, analysis layer, network layer, and application layer. In this way, the three-dimensional reconstructed data at the local end are compressed, transmitted to the remote end, and visualized there to show the reconstructed 3D models. This supports doctors in remote, real-time diagnosis and treatment, and realizes data processing and transmission among doctors, patients, and medical equipment.
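The abstract does not detail the algorithm; as a generic sketch of progressive, lossless transmission, a model can be sent as a coarse base layer followed by refinement layers that together restore the original data exactly. The layering scheme and compression choice below are illustrative assumptions, not the authors' method.

```python
# Sketch of progressive, lossless transmission of a vertex list: the base
# layer carries every 4th vertex (a coarse preview), two refinement layers
# carry the rest, and each layer is zlib-compressed. Generic illustration
# only, not the algorithm from the paper.
import json
import zlib

def encode_progressive(vertices: list[list[float]]) -> list[bytes]:
    # Layer 0: indices 0, 4, 8, ...  (coarse preview)
    # Layer 1: indices 2, 6, 10, ... (doubles the resolution)
    # Layer 2: odd indices           (restores the full model losslessly)
    layers = [vertices[0::4], vertices[2::4], vertices[1::2]]
    return [zlib.compress(json.dumps(layer).encode()) for layer in layers]

def decode_progressive(chunks: list[bytes]) -> list[list[float]]:
    layers = [json.loads(zlib.decompress(c)) for c in chunks]
    n = sum(len(layer) for layer in layers)
    out: list = [None] * n
    # Reinsert each layer at the index pattern it was drawn from.
    out[0::4], out[2::4], out[1::2] = layers
    return out

verts = [[float(i), 0.0, 0.0] for i in range(16)]
chunks = encode_progressive(verts)
assert decode_progressive(chunks) == verts  # lossless after all layers
print("layer sizes (bytes):", [len(c) for c in chunks])
```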