108 research outputs found
Medical Data Visual Synchronization and Information Interaction Using Internet-based Graphics Rendering and Message-oriented Streaming
The rapid technological advances in medical devices make it possible to generate vast amounts of data containing massive quantities of diagnostic information. Interactively accessing and sharing the acquired data on the Internet is critically important in telemedicine. However, due to the lack of efficient algorithms and high computational cost, collaborative medical data exploration on the Internet remains a challenging task in clinical settings. We therefore develop a web-based medical image rendering and visual synchronization software platform, in which novel algorithms are created for parallel data computing and image feature enhancement, and the Node.js and Socket.IO libraries are used to establish bidirectional connections between server and clients in real time. In addition, we design a new methodology to stream medical information among all connected users, whose identities and input messages can be automatically stored in a database and extracted in web browsers. The presented software framework will provide multiple medical practitioners with immediate visual feedback and interactive information in applications such as collaborative therapy planning, distributed treatment, and remote clinical health care.
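The streaming design described above (register each connected user, persist every message, broadcast to all clients) can be sketched in miniature. This is an illustrative in-memory model of the pattern, not the platform's actual code: the real system uses Node.js with Socket.IO, and the `MessageHub` name and callbacks here are hypothetical.

```python
# Minimal sketch of the message-streaming pattern: each connected user is
# registered, every message is persisted, and all connected clients receive
# the update immediately (mirroring Socket.IO's broadcast semantics).

class MessageHub:
    def __init__(self):
        self.clients = {}   # client_id -> callback receiving broadcasts
        self.log = []       # persisted (user, message) records

    def connect(self, client_id, on_message):
        """Register a client; analogous to a Socket.IO connection handler."""
        self.clients[client_id] = on_message

    def send(self, client_id, message):
        """Store the message, then fan it out to every connected client."""
        record = {"user": client_id, "message": message}
        self.log.append(record)           # "stored in a database"
        for cb in self.clients.values():  # real-time broadcast
            cb(record)

received = []
hub = MessageHub()
hub.connect("dr_a", received.append)
hub.connect("dr_b", received.append)
hub.send("dr_a", "rotate view to axial slice")
# Both connected users receive the stored record.
```

In the real platform the fan-out happens over WebSocket connections managed by Socket.IO, which also handles reconnection and transport fallback.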
BCI2000Web and WebFM: Browser-Based Tools for Brain Computer Interfaces and Functional Brain Mapping
BCI2000 has been a popular platform for the development of real-time brain-computer interfaces (BCIs). Since BCI2000's initial release, web browsers have evolved considerably, enabling rapid development of internet-enabled applications and interactive visualizations. Linking the amplifier abstraction and signal processing native to BCI2000 with the host of technologies and ease of development afforded by modern web browsers could enable a new generation of browser-based BCIs and visualizations. We developed a server and filter module called BCI2000Web providing an HTTP connection capable of escalation into an RFC 6455 WebSocket, which enables direct communication between a browser and a BCI2000 distribution in real time, facilitating a number of novel applications. We also present a JavaScript module, bci2k.js, that allows web developers to create paradigms and visualizations using this interface in an easy-to-use and intuitive manner. To illustrate the utility of BCI2000Web, we demonstrate a browser-based implementation of a real-time electrocorticographic (ECoG) functional mapping suite called WebFM. We also explore how the unique characteristics of our browser-based framework make BCI2000Web an attractive tool for future BCI applications. BCI2000Web leverages the advances of BCI2000 to provide real-time browser-based interactions with human neurophysiological recordings, allowing for web-based BCIs and other applications, including real-time functional brain mapping. Both BCI2000Web and WebFM are provided under open source licenses. Enabling a powerful BCI suite to communicate with today's most technologically progressive software empowers a new cohort of developers to engage with BCI technology, and could serve as a platform for internet-enabled BCIs.
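The HTTP-to-WebSocket escalation mentioned above follows the RFC 6455 handshake: the server proves it understood the upgrade request by hashing the client's `Sec-WebSocket-Key` together with a fixed GUID and echoing the result back. A small sketch of that handshake math (BCI2000Web itself is a C++ module; this only illustrates the protocol step):

```python
import base64
import hashlib

# RFC 6455 handshake: the server concatenates the client's
# Sec-WebSocket-Key with a fixed GUID defined by the spec, SHA-1 hashes
# the result, and returns it base64-encoded as Sec-WebSocket-Accept.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Test vector from RFC 6455, section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Once this response is sent, the TCP connection stays open and both ends switch to framed, bidirectional WebSocket traffic, which is what lets the browser receive neural signal data in real time.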
Architectures for ubiquitous 3D on heterogeneous computing platforms
Today, a wide scope for 3D graphics applications exists, including domains such as scientific visualization, 3D-enabled web pages, and entertainment. At the same time, the devices and platforms that run and display the applications are more heterogeneous than ever. Display environments range from mobile devices to desktop systems and ultimately to distributed displays that facilitate collaborative interaction. While the capability of the client devices may vary considerably, the visualization experiences running on them should be consistent. The field of application should dictate how and on what devices users access the application, not the technical requirements to realize the 3D output.
The goal of this thesis is to examine the diverse challenges involved in providing consistent and scalable visualization experiences to heterogeneous computing platforms and display setups. While we could not address the myriad of possible use cases, we developed a comprehensive set of rendering architectures in the major domains of scientific and medical visualization, web-based 3D applications, and movie virtual production. To provide the required service quality, performance, and scalability for different client devices and displays, our architectures focus on the efficient utilization and combination of the available client, server, and network resources. We present innovative solutions that incorporate methods for hybrid and distributed rendering as well as means to manage data sets and stream rendering results. We establish the browser as a promising platform for accessible and portable visualization services. We collaborated with experts from the medical field and the movie industry to evaluate the usability of our technology in real-world scenarios.
The presented architectures achieve a wide coverage of display and rendering setups and at the same time share major components and concepts. Thus, they build a strong foundation for a unified system that supports a variety of use cases.
Development of an open access system for remote operation of robotic manipulators
Double-degree master's thesis with UTFPR - Universidade Tecnológica Federal do Paraná. Exploring the realms of research, training, and learning in the field of robotic systems poses obstacles for institutions lacking the necessary infrastructure. The significant investment required to acquire physical robotic systems often limits access and hinders progress in these areas. While robotic simulation platforms provide a virtual environment for experimentation, the potential of remote robotic environments surpasses this by enabling users to interact with real robotic systems during training and research activities. This way, users, including students and researchers, can engage in a virtual experience that transcends geographical boundaries, connecting them to real-world robotic systems through the Internet. By bridging the gap between virtual and physical worlds, remote environments offer a more practical and immersive experience, and open up new horizons for collaborative research and training. Democratizing access to these technologies means empowering educational institutions and research centers to engage in practical and hands-on learning experiences. However, the implementation of remote robotic environments comes with its own set of technical challenges: communication, security, stability, and access. In light of these challenges, a ROS-based system has been developed, providing open access with promising results (low delay and run-time visualization). This system enables remote control of robotic manipulators and has been successfully validated through the remote operation of a real UR3 manipulator.
Interactive web-based visualization
The visualization of large amounts of data that cannot easily be copied for processing on a user's local machine is not yet a fully solved problem. Remote visualization represents one possible solution approach and has long been an important research topic. Depending on the device used, modern hardware such as high-performance GPUs is sometimes not available, which is another reason to use remote visualization. In addition, due to growing global networking and collaboration among research groups, collaborative remote visualization solutions are becoming increasingly important.
The attractiveness of web-based remote visualization is greatly increased by the wide availability of web browsers, which today run on almost all devices, from desktop computers to smartphones. To ensure interactivity, network bandwidth and latency are the biggest challenges that web-based visualization algorithms have to overcome. Although available bandwidth improves steadily, it still grows significantly more slowly than, for example, processor performance, so the impact of this bottleneck keeps increasing. Visualizing large dynamic data in low-bandwidth environments, for example, can be challenging because it requires continuous data transfer. Moreover, bandwidth improvements alone cannot reduce latency, because latency is also affected by factors such as the distance between server and client and network utilization.
To overcome these challenges, a combination of techniques is needed to customize the individual processing steps of the visualization pipeline, from efficient data representation to hardware-accelerated rendering on the client side. This thesis first deals with related work in the field of remote visualization with a particular focus on interactive web-based visualization and then presents techniques for interactive visualization in the browser using modern web standards such as WebGL and HTML5. These techniques enable the visualization of dynamic molecular data sets with more than one million atoms at interactive frame rates using GPU-based ray casting. Due to the limitations which exist in a browser-based environment, the concrete implementation of the GPU-based ray casting had to be customized. Evaluation of the resulting performance shows that GPU-based techniques enable the interactive rendering of large data sets and achieve higher image quality compared to polygon-based techniques.
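The core idea behind GPU-based ray casting of atoms is that each atom is rendered as an analytically ray-cast sphere rather than a tessellated polygon mesh, which is what yields the higher image quality at lower geometry cost. A CPU-side sketch of the underlying ray-sphere intersection (the thesis implements this in a WebGL fragment shader; this Python version only illustrates the math):

```python
import math

# Analytic ray-sphere intersection: the heart of sphere ray casting for
# atom rendering. A ray origin + t*direction hits a sphere where the
# quadratic |origin + t*direction - center|^2 = radius^2 has a real root.

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest intersection distance t, or None on a miss.
    `direction` must be normalized (so the quadratic's a-term is 1)."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0      # nearest of the two roots
    return t if t >= 0.0 else None

# A ray down the z-axis hits a unit sphere centered 5 units away at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

On the GPU, the same computation runs per fragment over a screen-aligned quad per atom, producing pixel-exact sphere silhouettes regardless of zoom level.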
In order to reduce data transfer times and network latency, and improve rendering speed, efficient approaches for data representation and transmission are used. Furthermore, this thesis introduces a GPU-based volume-ray marching technique based on WebGL 2.0, which uses progressive brick-wise data transfer, as well as multiple levels of detail in order to achieve interactive volume rendering of datasets stored on a server.
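The progressive brick-wise transfer described above can be sketched as a simple request schedule: the volume is split into bricks, each stored at several levels of detail, and the client streams one full coarse pass before refining. The function and parameter names here are illustrative assumptions, not the thesis's actual API.

```python
# Progressive brick-wise transfer sketch: a volume is divided into bricks,
# each available at several levels of detail (LOD 0 = coarsest). The client
# requests every brick at the coarsest level first, then refines level by
# level, so an approximate rendering is available almost immediately.

def schedule_bricks(num_bricks, num_lods):
    """Yield (brick_id, lod) requests: full coarse pass, then refinement."""
    for lod in range(num_lods):          # LOD 0 before LOD 1, etc.
        for brick in range(num_bricks):
            yield (brick, lod)

requests = list(schedule_bricks(num_bricks=4, num_lods=2))
# The first 4 requests cover the whole volume at the coarsest level,
# so a preview can be rendered before any refinement data arrives.
print(requests[:4])  # → [(0, 0), (1, 0), (2, 0), (3, 0)]
```

A production scheduler would additionally prioritize bricks by visibility and distance to the camera; the fixed order above only shows the coarse-before-fine principle.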
The concepts and results presented in this thesis contribute to the further spread of interactive web-based visualization. The algorithmic and technological advances that have been achieved form a basis for further development of interactive browser-based visualization applications. At the same time, this approach has the potential for enabling future collaborative visualization in the cloud.
Bagadus App: Notational data capture and instant video analysis using mobile devices
Enormous amounts of money and other resources are poured into professional soccer today. Teams will do anything to get a competitive advantage, including investing heavily in new technology for player development and analysis. In this thesis, we investigate and implement an instant analytical system that captures sports notational data and combines it with high-quality virtual view video from the Bagadus system, removing the manual labor of traditional video analysis. We present a multi-platform mobile application and a playback system, which together act as a state-of-the-art analytical tool providing soccer experts with the means of capturing annotations and immediately playing back zoomable and pannable video on stadium big screens, computers, and mobile devices. By controlling remote playback and drawing on video through the app, sports professionals can provide instant, video-backed analysis of interesting situations on the pitch to players, analysts, or even spectators. We investigate how to best design, implement, and combine these components into an Instant Replay Analytical Subsystem for the Bagadus project to create an automated way of viewing and controlling video based on annotations. We describe how the system is optimized in terms of performance, to achieve real-time video control and drawing; scalability, by minimizing network data and memory usage; and usability, through a user-tested interface optimized for accuracy and speed of notational data capture, as well as user customization based on roles and easy filtering of annotations. The system has been tested and adapted through real-life scenarios at Alfheim Stadium for Tromsø Idrettslag (TIL) and at Ullevaal Stadion for the Norway national football team.
The use of extended reality and machine learning to improve healthcare and promote GreenHealth
With the Fourth Industrial Revolution, the spread of the Internet of Things, advances in Artificial Intelligence and Machine Learning, and the migration to Cloud Computing, the term "Intelligent Environments" is increasingly ceasing to be an idealization and becoming reality. Likewise, Extended Reality technologies have increased their presence in the technological world after a "hibernation period", driven by the popularization of the Metaverse concept as well as the entry of large computing companies such as Apple and Google into a market where Virtual Reality, Augmented Reality, and Mixed Reality were dominated by companies with less experience in system development (e.g. Meta), less worldwide recognition (e.g. HTC Vive), or less financial support and trust from the market. This thesis focuses on the potential use of Extended Reality technologies to promote GreenHealth, as well as their use in Smart Hospitals, one of the variants of Smart Environments, incorporating Machine Learning and Computer Vision as tools to support and improve healthcare from the point of view of both the health professional and the patient, through a literature review and analysis of the current state of the field. The result is a conceptual model suggesting technologies, selected for their potential, that could be used to achieve this scenario, followed by the development of prototypes of parts of the conceptual model for Extended Reality headsets as a proof of concept.