
    KOLAM: human computer interfaces for visual analytics in big data imagery

    In the present day, we are faced with a deluge of disparate and dynamic information from multiple heterogeneous sources. Among these are the big data imagery datasets that are rapidly being generated via mature acquisition methods in the geospatial, surveillance (specifically, Wide Area Motion Imagery or WAMI) and biomedical domains. The need to interactively visualize these imagery datasets using multiple types of views into the data, as needed, is common to these domains. Furthermore, researchers in each domain have additional needs: users of WAMI datasets also need to interactively track objects of interest using algorithms of their choice, visualize the resulting object trajectories and interactively edit these results as needed. While software tools that fulfill each of these requirements individually are available and well-used at present, there is still a need for tools that can combine the desired aspects of visualization, human-computer interaction (HCI), data analysis, data management, and (geo-)spatial and temporal data processing into a single flexible and extensible system. KOLAM is an open, cross-platform, interoperable, scalable and extensible framework for visualization and analysis that we have developed to fulfill the above needs.
The novel contributions in this thesis are the following: 1) Spatio-temporal caching for animating both giga-pixel and Full Motion Video (FMV) imagery, 2) Human computer interfaces purposefully designed to accommodate big data visualization, 3) Human-in-the-loop interactive video object tracking - ground-truthing of moving objects in wide area imagery using algorithm assisted human-in-the-loop coupled tracking, 4) Coordinated visualization using stacked layers, side-by-side layers/video sub-windows and embedded imagery, 5) Efficient one-click manual tracking, editing and data management of trajectories, 6) Efficient labeling of image segmentation regions and passing these results to desired modules, 7) Visualization of image processing results generated by non-interactive operators using layers, 8) Extension of interactive imagery and trajectory visualization to multi-monitor wall display environments, 9) Geospatial applications: Providing rapid roam, zoom and hyper-jump spatial operations, interactive blending, colormap and histogram enhancement, spherical projection and terrain maps, 10) Biomedical applications: Visualization and target tracking of cell motility in time-lapse cell imagery, collecting ground-truth from experts on whole-slide imagery (WSI) for developing histopathology analytic algorithms and computer-aided diagnosis for cancer grading, and easy-to-use tissue annotation features.
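The spatio-temporal caching idea in contribution 1 amounts to caching image tiles keyed by both spatial position and frame time, so that roaming and animation reuse recently touched tiles. The following is a minimal least-recently-used sketch of that idea; the class and parameter names are invented for illustration and are not KOLAM's actual implementation.

```python
from collections import OrderedDict

class SpatioTemporalTileCache:
    """LRU cache keyed by (zoom level, tile x, tile y, frame time)."""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self._tiles = OrderedDict()  # (level, x, y, t) -> tile bytes

    def get(self, level, x, y, t, load_tile):
        key = (level, x, y, t)
        if key in self._tiles:
            self._tiles.move_to_end(key)      # hit: mark most recently used
            return self._tiles[key]
        tile = load_tile(level, x, y, t)      # miss: fetch from disk/network
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)   # evict least recently used
        return tile
```

For animation, a prefetcher could call `get` for the next few frame times of the visible tiles so playback never stalls on a cold cache.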

    Interaction in an immersive virtual Beijing courtyard house

    Courtyard housing has been a standard dwelling type in China for more than 3000 years, integrating tightly with local customs, aesthetics, philosophy, and natural conditions. As the representative of Chinese courtyard housing, Beijing's style has unique features in its structure, plan layout, and urban form. Presenting these features effectively is of great importance for understanding Beijing courtyard housing. The current major visualization methods in architecture include physical models, digital imaging, and hand drawing. All of them share two limitations: small dimensions and lack of interaction. As an alternative, VR offers two advantages: immersion and interactivity. In a fully immersive VR environment, such as the C6, users can examine virtual buildings at full scale and operate models interactively in real time. Thus, this project attempts to implement an interactive simulation of a Beijing courtyard house in the C6 and to find out whether architectural knowledge can be presented through this environment. The methodological steps include VR modeling, interaction planning, and C6 implementation. A four-yard house in Beijing was used as the prototype for VR modeling. By generating the model in six versions with different node counts and textures, it was found that the fewer nodes a model has, the faster it runs in the C6. The main interaction mechanism demonstrates the main hall's structure interactively through menu selection. The sequence in which the structure is shown is based on its constructional process. Each menu item uses the name of a structural component, and clicking a menu item shows the corresponding constructional step in the C6. Five viewers were invited to see the simulation and comment on the functionality of full immersion and interactivity in this product. Overall, the results are positive: a fully immersive and interactive VR environment is potentially effective for presenting architectural knowledge.
A major suggestion from the viewers was that more details could be added to the simulation, such as characters and furniture. Through the completion of this project, a method for implementing architectural simulations efficiently in the C6 could be established. In the future, this study could involve more complex interactions, such as virtual inhabitants, as a means to show Chinese culture vividly.

    Software-Enhanced Teaching and Visualization Capabilities of an Ultra-High-Resolution Video Wall

    This paper presents a modular approach to enhance the capabilities and features of a visualization and teaching room using software. This approach was applied to a room with a large, high-resolution (7680×4320 pixels) tiled screen of 13 × 7.5 feet as its main display, and with a variety of audio and video inputs connected over a network. Many of the techniques described are possible because of a software-enhanced setup, utilizing existing hardware and a collection of mostly open-source tools, making it possible to perform collaborative, high-resolution visualizations as well as to broadcast and record workshops and lectures. The software approach is flexible and allows one to add functionality without changing the hardware.
    Comment: PEARC'19: "Practice and Experience in Advanced Research Computing", July 28-August 1, 2019, Chicago, IL, US

    09251 Abstracts Collection -- Scientific Visualization

    From June 14 to June 19, 2009, the Dagstuhl Seminar 09251 "Scientific Visualization" was held at Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, over 50 international participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general.

    IMPROVE: collaborative design review in mobile mixed reality

    In this paper we introduce an innovative application designed to make collaborative design review in the architectural and automotive domains more effective. For this purpose we present a system architecture which combines a variety of visualization displays, such as high-resolution multi-tile displays, TabletPCs and head-mounted displays, with innovative 2D and 3D interaction paradigms to better support collaborative mobile mixed-reality design reviews. Our research and development is motivated by two use scenarios: automotive and architectural design review involving real users from Page\Park architects and FIAT Elasis. Our activities are supported by the EU IST project IMPROVE, aimed at developing advanced display techniques and fostering activities in the areas of optical see-through HMD development using unique OLED technology, marker-less optical tracking, mixed-reality rendering, image calibration for large tiled displays, collaborative tablet-based and projection-wall-oriented interaction, and stereoscopic video streaming for mobile users. The paper gives an overview of the hardware and software developments within IMPROVE and concludes with results from first user tests.

    Interactive Visualization on High-Resolution Tiled Display Walls with Network Accessible Compute- and Display-Resources

    Papers 2-7 and appendices B and C of this thesis are not available in Munin:
    2. Hagen, T-M.S., Johnsen, E.S., Stødle, D., Bjorndalen, J.M. and Anshus, O.: 'Liberating the Desktop', First International Conference on Advances in Computer-Human Interaction (2008), pp. 89-94. Available at http://dx.doi.org/10.1109/ACHI.2008.20
    3. Tor-Magne Stien Hagen, Oleg Jakobsen, Phuong Hoai Ha, and Otto J. Anshus: 'Comparing the Performance of Multiple Single-Cores versus a Single Multi-Core' (manuscript)
    4. Tor-Magne Stien Hagen, Phuong Hoai Ha, and Otto J. Anshus: 'Experimental Fault-Tolerant Synchronization for Reliable Computation on Graphics Processors' (manuscript)
    5. Tor-Magne Stien Hagen, Daniel Stødle and Otto J. Anshus: 'On-Demand High-Performance Visualization of Spatial Data on High-Resolution Tiled Display Walls', Proceedings of the International Conference on Imaging Theory and Applications and International Conference on Information Visualization Theory and Applications (2010), pp. 112-119. Available at http://dx.doi.org/10.5220/0002849601120119
    6. Bård Fjukstad, Tor-Magne Stien Hagen, Daniel Stødle, Phuong Hoai Ha, John Markus Bjørndalen and Otto Anshus: 'Interactive Weather Simulation and Visualization on a Display Wall with Many-Core Compute Nodes', Para 2010 – State of the Art in Scientific and Parallel Computing. Available at http://vefir.hi.is/para10/extab/para10-paper-60
    7. Tor-Magne Stien Hagen, Daniel Stødle, John Markus Bjørndalen, and Otto Anshus: 'A Step towards Making Local and Remote Desktop Applications Interoperable with High-Resolution Tiled Display Walls', Lecture Notes in Computer Science (2011), Volume 6723/2011, pp. 194-207. Available at http://dx.doi.org/10.1007/978-3-642-21387-8_15
    The vast volume of scientific data produced today requires tools that enable scientists to explore large amounts of data to extract meaningful information. One such tool is interactive visualization.
The amount of data that can be simultaneously visualized on a computer display is proportional to the display's resolution. While computer systems in general have seen a remarkable increase in performance over the last decades, display resolution has not evolved at the same rate. Increased resolution can be provided by tiling several displays in a grid. A system comprising multiple displays tiled in such a grid is referred to as a display wall. Display walls provide orders of magnitude more resolution than typical desktop displays, and can provide insight into problems not possible to visualize on desktop displays. However, their distributed and parallel architecture creates several challenges for designing systems that can support interactive visualization. One challenge is compatibility with existing software designed for personal desktop computers. Another set of challenges includes identifying characteristics of visualization systems that can: (i) maintain synchronous state and display output when executed over multiple display nodes; (ii) scale to multiple display nodes without being limited by shared interconnect bottlenecks; (iii) utilize additional computational resources such as desktop computers, clusters and supercomputers for workload distribution; and (iv) use data from local and remote compute and data resources with interactive performance. This dissertation presents Network Accessible Compute (NAC) resources and Network Accessible Display (NAD) resources for interactive visualization of data on displays ranging from laptops to high-resolution tiled display walls. A NAD is a display having functionality that enables usage over a network connection. A NAC is a computational resource that can produce content for network accessible displays. A system consisting of NACs and NADs is either push-based (NACs provide NADs with content) or pull-based (NADs request content from NACs).
To attack the compatibility challenge, a push-based system was developed. The system enables several simultaneous users to mirror multiple regions from the desktops of their computers (NACs) onto nearby NADs (among others, a 22-megapixel display wall) without requiring separate DVI/VGA cables, permanent installation of third-party software or opening of firewall ports. The system has lower performance than a DVI/VGA cable approach, but increases flexibility, such as the possibility of sharing network accessible displays among multiple computers. At a resolution of 800×600 pixels, the system can mirror dynamic content between a NAC and a NAD at 38.6 frames per second (FPS). At 1600×1200 pixels, the refresh rate is 12.85 FPS. The bottleneck of the system is frame buffer capturing and encoding/decoding of pixels. These two functional parts are executed in sequence, limiting the usage of additional CPU cores. By pipelining and executing these parts on separate CPU cores, higher frame rates can be expected, by up to a factor of two in the best case. To attack all presented challenges, a pull-based system, WallScope, was developed. WallScope enables interactive visualization of local and remote data sets on high-resolution tiled display walls. The WallScope architecture comprises a compute-side and a display-side. The compute-side comprises a set of static and dynamic NACs. Static NACs are considered permanent to the system once added. This type of NAC typically has strict underlying security and access policies. Examples of such NACs are clusters, grids and supercomputers. Dynamic NACs are compute resources that can register on-the-fly to become compute nodes in the system. Examples of this type of NAC are laptops and desktop computers. The display-side comprises a set of NADs and a data set containing data customized for the particular application domain of the NADs.
NADs are based on a sort-first rendering approach where a visualization client is executed on each display node. The state of these visualization clients is provided by a separate state server, enabling central control of load and refresh rate. Based on the state received from the state server, the visualization clients request content from the data set. The data set is live in that it translates these requests into compute messages and forwards them to available NACs. Results of the computations are returned to the NADs for the final rendering. The live data set is close to the NADs, both in terms of bandwidth and latency, to enable interactive visualization. WallScope can visualize the Earth, gigapixel images, and other data available through the live data set. When visualizing the Earth on a 28-node display wall by combining the Blue Marble data set with the Landsat data set using a set of static NACs, the bottleneck of WallScope is the computation involved in combining the data sets. However, the time used to combine data sets on the NACs decreases by a factor of 23 when going from 1 to 26 compute nodes. The display-side can decode 414.2 megapixels of images per second (19 frames per second) when visualizing the Earth. The decoding process is multi-threaded, and higher frame rates are expected using multi-core CPUs. WallScope can rasterize a 350-page PDF document into 550 megapixels of image tiles and display them on a 28-node display wall in 74.66 seconds (PNG) and 20.66 seconds (JPG) using a single quad-core desktop computer as a dynamic NAC. This time is reduced to 4.20 seconds (PNG) and 2.40 seconds (JPG) using 28 quad-core NACs. This shows that application output from personal desktop computers can be decoupled from the resolution of the local desktop and display for usage on high-resolution tiled display walls.
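The sort-first approach described above divides work by screen space: each display node's visualization client requests only the tiles that intersect its own rectangle of the global wall image. A minimal sketch of that tile-indexing step follows; the function name and the 256-pixel tile size are assumptions for illustration, not WallScope's actual code.

```python
def tiles_for_node(node_x, node_y, node_w, node_h, tile_size=256):
    """Return the (col, row) indices of the tiles covering one display
    node's rectangle of the global wall image (sort-first division)."""
    first_col = node_x // tile_size
    last_col = (node_x + node_w - 1) // tile_size
    first_row = node_y // tile_size
    last_row = (node_y + node_h - 1) // tile_size
    return [(col, row)
            for row in range(first_row, last_row + 1)
            for col in range(first_col, last_col + 1)]
```

Each client would then turn these indices into content requests against the live data set, which forwards them as compute messages to the NACs.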
It also shows that performance can be increased by adding computational resources, yielding a speedup of 17.77 (PNG) and 8.59 (JPG) using 28 compute nodes. Three principles are formulated based on the concepts and systems researched and developed: (i) establishing the end-to-end principle through customization states that the setup and interaction between a display-side and a compute-side in a visualization context can be performed by customizing one or both sides; (ii) Personal Computer (PC) – Personal Compute Resource (PCR) duality states that a user's computer is both a PC and a PCR, implying that desktop applications can be utilized locally using attached interaction devices and display(s), or remotely by other visualization systems for domain-specific production of data based on a user's personal desktop install; and (iii) domain-specific best-effort synchronization states that for distributed visualization systems running on tiled display walls, state handling can be performed using a best-effort synchronization approach, where visualization clients eventually get the correct state after a given period of time. Compared to state-of-the-art systems presented in the literature, the contributions of this dissertation enable utilization of a broader range of compute resources from a display wall, while at the same time providing better control over where to provide functionality and where to distribute workload between compute nodes and display nodes in a visualization context.
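As a quick arithmetic check, the reported speedups are simply ratios of the serial and parallel rasterization times; the minimal sketch below recomputes them from the rounded times quoted in the abstract (small differences, such as for JPG, come from that rounding).

```python
def speedup(t_serial, t_parallel):
    """Classic speedup: time on one resource / time on N resources."""
    return t_serial / t_parallel

def efficiency(s, n_nodes):
    """Parallel efficiency: speedup divided by the node count."""
    return s / n_nodes

# PDF rasterization: 1 quad-core NAC vs. 28 quad-core NACs.
png_speedup = speedup(74.66, 4.20)   # close to the reported 17.77
jpg_speedup = speedup(20.66, 2.40)   # about 8.61; abstract reports 8.59
png_eff = efficiency(png_speedup, 28)  # roughly 0.63
```

The PNG efficiency of roughly 0.63 suggests the workload scales well but not perfectly to 28 nodes, consistent with the abstract's framing of adding compute resources for more performance.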

    Hydrascope: Creating Multi-Surface Meta-Applications Through View Synchronization and Input Multiplexing

    As computing environments that combine multiple displays and input devices become more common, the need for applications that take advantage of these capabilities becomes more pressing. However, few applications are designed to support such multi-surface environments. We investigate how to adapt existing applications without access to their source code. We introduce HydraScope, a framework for transforming existing web applications into meta-applications that execute and synchronize multiple copies of an application in parallel, with a multi-user input layer for interacting with them. We describe the HydraScope architecture, validated with five meta-applications.
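The input-multiplexing idea described above, fanning one user's input event out to every synchronized application copy, can be sketched as follows. This is a hypothetical illustration of the concept only; the class and method names are invented and do not reflect HydraScope's actual (web-based) API.

```python
class InputMultiplexer:
    """Fan one input event out to every registered application copy,
    keeping the parallel views of the meta-application in sync."""

    def __init__(self):
        self.replicas = []  # one handler per running application copy

    def register(self, handler):
        self.replicas.append(handler)

    def dispatch(self, event):
        # Every copy applies the same event, so their states stay aligned.
        for handler in self.replicas:
            handler(event)
```

A view-synchronization layer would sit alongside this, replaying the same scroll or navigation state to each copy so users see consistent views across surfaces.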

    Capturing improved TLS data of Maulbronn Monastery and integration of the mesh into the existing UNITY visualization

    This Master's thesis consists of improving the existing 3D visualization of the Maulbronn monastery, as there are areas with excess brightness produced by the windows. To achieve this purpose, the old scans that were part of an existing FARO SCENE project have been analysed. After analysing the scans, the areas that had to be rescanned to improve the texture were detected. Additionally, tests have been done to find out which parameters are best suited to improving the quality of the HDR images. Afterwards, different scans have been taken with the best parameters. These data have been processed and registered with the data from the previous scans, resulting in the creation of a mesh for each zone, along with the position file and HDR images. Geomagic Qualify has also been used to improve the mesh geometry. The images have then been edited in Photoshop to provide a better texture for the mesh, and masks have been created to exclude those areas of the images that do not have good quality. To reproject the images onto the mesh, the Agisoft Metashape program has been used, resulting in a tiled model. Once the tiled model is obtained, only the last level has been used to incorporate the new meshes into UNITY. Finally, the texture and some parts related to walkability have been improved through the use of several scripts. This project is divided into three parts. The first is the theoretical part, where the basic concepts of 3D visualization and data processing are explained, along with the different types of software that have been used. The second part explains the practical work, what it consists of and the steps into which it is divided. Finally, the last part of the document contains the results, conclusions, future lines of the project and references.
Arcón Navarro, R. (2020). Capturing improved TLS data of Maulbronn Monastery and integration of the mesh into the existing UNITY visualization. http://hdl.handle.net/10251/139512