3,538 research outputs found

    Exploring the Design Space of Immersive Urban Analytics

    Full text link
    Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices, such as HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on our comprehensive and concise model, we contribute a typology of methods for combining 2D and 3D visualizations that distinguishes between linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting a proper view under certain circumstances by considering the visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, based on existing work, possible future research opportunities are explored and discussed. Comment: 23 pages, 11 figures
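
    To make the typology concrete, here is a minimal Python sketch of the three combination methods named in the abstract; the enum names come from the abstract, while the one-line descriptions are an editorial paraphrase rather than the paper's own definitions.

```python
from enum import Enum, auto

class ViewCombination(Enum):
    """2D/3D combination methods named in the abstract; descriptions are paraphrased."""
    LINKED = auto()    # 2D and 3D views rendered side by side and coordinated through interaction
    EMBEDDED = auto()  # 2D visualizations placed inside the 3D urban scene
    MIXED = auto()     # 2D and 3D content blended into a single hybrid view

if __name__ == "__main__":
    print([c.name for c in ViewCombination])  # ['LINKED', 'EMBEDDED', 'MIXED']
```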

    Visualização de trajectórias humanas em dispositivos móveis (Visualization of human trajectories on mobile devices)

    Get PDF
    Master's thesis, Informatics Engineering (Information Systems), Universidade de Lisboa, Faculdade de Ciências, 2016. With the growing popularity of mobile devices (such as smartphones and tablets) and of applications that capture and store geographic data, more and more people record their movements as trajectory data. This emerging pattern is exemplified by the increasing use of applications such as Endomondo or Runtastic, which, in addition to recording these personal trajectories, also support their visualization and analysis, typically as static 2D maps complemented with diagrams to extract knowledge from the data. Animated maps have emerged as a potential technique for the dynamic visualization of information and are usually regarded as intuitive for detecting relationships between spatial and temporal information. Despite several studies in spatio-temporal data visualization, the use of this kind of technique on mobile devices for the representation of personal trajectories remains little explored. This project studies this problem and explores and evaluates the suitability of animated maps for representing physical-activity trajectories in a mobile-device context. To this end, the PATH prototype was developed, an Android application for the visualization of personal trajectories using animated maps, and a usability study was conducted comparing different static and animated representations of human trajectories. Overall, the results suggest that although animated maps do not significantly improve users' understanding of the data, this kind of visualization is generally preferred and less demanding, in terms of interactivity, than static maps. On the other hand, the type and focus of the animation used should also be taken into account, as it may affect the usability of the application.
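
    A minimal sketch of the kind of playback logic an animated trajectory map needs (timestamped lat/lon fixes, linear interpolation between them); this is an editorial illustration in Python, not code from the PATH prototype, and the sample coordinates are invented.

```python
from bisect import bisect_right
from typing import List, Tuple

Fix = Tuple[float, float, float]  # (timestamp_s, latitude, longitude)

def position_at(track: List[Fix], t: float) -> Tuple[float, float]:
    """Linearly interpolate the position along a recorded trajectory at time t.

    Assumes the track is sorted by timestamp; clamps to the endpoints."""
    if t <= track[0][0]:
        return track[0][1], track[0][2]
    if t >= track[-1][0]:
        return track[-1][1], track[-1][2]
    times = [f[0] for f in track]
    i = bisect_right(times, t)
    (t0, lat0, lon0), (t1, lat1, lon1) = track[i - 1], track[i]
    a = (t - t0) / (t1 - t0)
    return lat0 + a * (lat1 - lat0), lon0 + a * (lon1 - lon0)

# Example: replay a short run at 10x speed, one frame per second of playback.
track = [(0.0, 38.7369, -9.1427), (60.0, 38.7375, -9.1410), (120.0, 38.7381, -9.1395)]
for frame in range(13):
    print(position_at(track, frame * 10.0))
```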

    The possibilities of visual spatial data analysis methods on human migration data.

    Get PDF
    The theme of this Master's thesis is to research possible methods for the visual analysis of human migration. Existing analysis methods are assessed as viable techniques that could achieve rewarding results if applied to Finland's migration database. The human migration phenomenon can be analyzed through the visualization of spatial data. Spatial data visualization components such as spatial data, maps, and methods are discussed with a focus on human migration. Human migration data are a type of spatial data that are georeferenced so as to give context in relation to a location in the real world. Maps are the geographical presentation interfaces used for the visual analysis of migration data. Special-purpose maps, thematic maps, or a combination of both are deemed suitable for this task, as they are able to portray the desired information. Spatial data visualization techniques such as flow lines, Flowstrates, map animation, and the space-time cube are researched as tools for fully analyzing migration data. Flowstrates is a specialized visualization method that analyzes the spatial and temporal dimensions of migration data by combining flow lines, timelines, and origin-destination matrices. With this novel method, detailed information can be visualized for individual migration flows and routes. The results of this thesis are examples of the most fitting visual analysis methods for migration data; these examples serve as possible methods for analyzing actual migration data in Finland.
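
    As a hedged sketch of the Flowstrates-style idea described above (aggregating origin-destination flows per time step so that each flow line can be paired with its temporal profile), in Python; the region names and counts below are invented for illustration.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

# One migration record: (origin, destination, year, count)
Record = Tuple[str, str, int, int]

def od_time_matrix(records: Iterable[Record]) -> Dict[Tuple[str, str], Dict[int, int]]:
    """Aggregate records into an origin-destination matrix whose cells hold
    per-year totals -- the temporal profile a Flowstrates-style view plots
    next to each flow line."""
    matrix: Dict[Tuple[str, str], Dict[int, int]] = defaultdict(lambda: defaultdict(int))
    for origin, destination, year, count in records:
        matrix[(origin, destination)][year] += count
    return matrix

# Invented example data: internal migration between two Finnish regions.
records = [
    ("Lapland", "Uusimaa", 2014, 820),
    ("Lapland", "Uusimaa", 2015, 910),
    ("Uusimaa", "Lapland", 2014, 310),
]
for (o, d), profile in od_time_matrix(records).items():
    print(f"{o} -> {d}: {dict(profile)}")
```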

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Get PDF
    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.

    Interface iOS for control of an unmanned helicopter in ROS

    Get PDF
    This work aims to control an unmanned helicopter with a mobile device running the iOS operating system. The goal is to implement a solution that can command the helicopter in three modes. The first is joystick control, the second sets a trajectory via Google Maps, and the last is indoor navigation, for which a precise map is needed. The work is based on research by the Multi-Robotic Systems group at the Department of Cybernetics, Czech Technical University in Prague.
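
    A generic rospy sketch of the joystick-control mode mentioned above (mapping stick axes to velocity commands); this is an editorial illustration, not the thesis's actual interface, and the topic names, message types, and scaling limits are assumptions.

```python
#!/usr/bin/env python
# Hypothetical joystick-to-velocity bridge; topic names and scaling are assumptions.
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Joy

MAX_HORIZONTAL = 2.0   # m/s, assumed limit
MAX_VERTICAL = 1.0     # m/s, assumed limit

def on_joy(msg, pub):
    """Map joystick axes to a velocity command for the helicopter."""
    cmd = Twist()
    cmd.linear.x = MAX_HORIZONTAL * msg.axes[1]   # forward/backward stick
    cmd.linear.y = MAX_HORIZONTAL * msg.axes[0]   # left/right stick
    cmd.linear.z = MAX_VERTICAL * msg.axes[3]     # throttle stick
    pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("joystick_teleop")
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("joy", Joy, on_joy, callback_args=pub)
    rospy.spin()
```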

    Visibility computation through image generalization

    Get PDF
    This dissertation introduces the image generalization paradigm for computing visibility. The paradigm is based on the observation that an image is a powerful tool for computing visibility. An image can be rendered efficiently with the support of graphics hardware, and each of the millions of pixels in the image reports a visible geometric primitive. However, the visibility solution computed by a conventional image is far from complete. A conventional image has a uniform sampling rate, which can miss visible geometric primitives with a small screen footprint. A conventional image can only find geometric primitives to which there is a direct line of sight from the center of projection (i.e. the eye) of the image; therefore, a conventional image cannot compute the set of geometric primitives that become visible as the viewpoint translates, or as time changes in a dynamic dataset. Finally, like any sample-based representation, a conventional image can only confirm that a geometric primitive is visible, but it cannot confirm that a geometric primitive is hidden, as that would require an infinite number of samples to confirm that the primitive is hidden at all of its points. The image generalization paradigm overcomes the visibility computation limitations of conventional images. The paradigm has three elements. (1) Sampling pattern generalization entails adding sampling locations to the image plane where needed to find visible geometric primitives with a small footprint. (2) Visibility sample generalization entails replacing the conventional scalar visibility sample with a higher-dimensional sample that records all geometric primitives visible at a sampling location as the viewpoint translates or as time changes in a dynamic dataset; the higher-dimensional visibility sample is computed exactly, by solving visibility event equations, and not through sampling. Another form of visibility sample generalization is to enhance a sample with its trajectory as the geometric primitive it samples moves in a dynamic dataset. (3) Ray geometry generalization redefines a camera ray as the set of 3D points that project at a given image location; this generalization supports rays that are not straight lines, and enables designing cameras with non-linear rays that circumvent occluders to gather samples not visible from a reference viewpoint. The image generalization paradigm has been used to develop visibility algorithms for a variety of datasets, visibility parameter domains, and performance-accuracy tradeoff requirements. These include an aggressive from-point visibility algorithm that guarantees finding all geometric primitives with a visible fragment, no matter how small the primitive's image footprint; an efficient and robust exact from-point visibility algorithm that iterates between a sample-based and a continuous visibility analysis of the image plane to quickly converge to the exact solution; a from-rectangle visibility algorithm that uses 2D visibility samples to compute a visible set that is exact under viewpoint translation; a flexible pinhole camera that enables local modulations of the sampling rate over the image plane according to an input importance map; an animated depth image that stores not only color and depth per pixel but also a compact representation of pixel sample trajectories; and a curved ray camera that seamlessly integrates multiple viewpoints into a multiperspective image without the viewpoint transition distortion artifacts of prior methods.
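
    A minimal Python sketch of the "visibility sample generalization" idea: instead of one scalar sample per pixel, each sampling location keeps every primitive that becomes visible over a parameter range. The scene is a toy 2D "flatland" with an orthographic camera translating along x, and brute-force sampling stands in for the exact visibility-event equations the dissertation describes; all names are illustrative.

```python
from typing import List, Optional, Set, Tuple

# A flatland primitive: (id, x_min, x_max, depth) -- an axis-aligned wall
# seen by an orthographic camera looking along +depth.
Primitive = Tuple[int, float, float, float]

def conventional_sample(scene: List[Primitive], pixel_x: float) -> Optional[int]:
    """Nearest primitive covering this pixel from a single viewpoint."""
    hits = [(depth, pid) for pid, x0, x1, depth in scene if x0 <= pixel_x <= x1]
    return min(hits)[1] if hits else None

def generalized_sample(scene: List[Primitive], pixel_x: float,
                       translation_range: Tuple[float, float], steps: int = 100) -> Set[int]:
    """Every primitive visible at this pixel as the camera translates along x."""
    t0, t1 = translation_range
    visible: Set[int] = set()
    for i in range(steps + 1):
        offset = t0 + (t1 - t0) * i / steps
        hit = conventional_sample(scene, pixel_x + offset)
        if hit is not None:
            visible.add(hit)
    return visible

scene = [(1, 0.0, 4.0, 5.0), (2, 3.0, 9.0, 8.0)]   # wall 1 partially occludes wall 2
print(conventional_sample(scene, 3.5))              # -> 1 (only the front wall)
print(generalized_sample(scene, 3.5, (0.0, 3.0)))   # -> {1, 2} as translation reveals wall 2
```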

    How context influences the segmentation of movement trajectories - an experimental approach for environmental and behavioral context

    Full text link
    In the digital information age, where large amounts of movement data are generated daily through technological devices such as mobile phones, GPS, and digital navigation aids, the exploration of moving point datasets for identifying movement patterns has become a research focus in GIScience (Dykes and Mountain 2003). Visual analytics (VA) tools, such as GeoVISTA Studio (Gahegan 2001), have been developed to explore large amounts of movement data, based on the contention that VA combines computational methods with outstanding human capabilities for pattern recognition, imagination, association, and reasoning (Andrienko et al. 2008). However, exploring, extracting, and understanding the meaning encapsulated in movement data from a user perspective has become a major bottleneck, not only in GIScience but in all areas of science where this kind of data is collected (Holyoak et al. 2008). In particular, the inherently complex and multidimensional nature of spatio-temporal data has not been sufficiently integrated into visual analytics tools. To ensure the inclusion of cognitive principles in the integration of space-time data, visual analytics has to consider how users conceptualize and understand movement data (Fabrikant et al. 2008). A review of cognitively motivated work exemplifies the urgent need to identify how humans make inferences and derive knowledge from movement data. In order to enhance visual analytics tools by integrating cognitive principles, we first have to ask to what extent cognitive factors influence our understanding, reasoning, and analysis of movement pattern extraction. It is especially important to comprehend human knowledge construction and reasoning about spatial and temporal phenomena and processes. This paper proposes an experimental approach with human subject testing to evaluate the importance of contextual information in visual displays of movement patterns. This research question is part of a larger research project with two main objectives, namely:
    * getting a better understanding of how humans process spatio-temporal information, and
    * empirically validating guidelines to improve the design of visual analytics tools to enhance visual data exploration.

    Crowd simulation and visualization

    Get PDF
    Large-scale simulation and visualization are essential topics in areas as different as sociology, physics, urbanism, training, and entertainment, among others. These systems require vast computational power and memory resources, commonly available on High Performance Computing (HPC) platforms. Currently, the most powerful clusters have heterogeneous architectures with hundreds of thousands and even millions of cores, and industry trends suggest that exascale clusters will have billions. The technical challenges for the simulation and visualization process in the exascale era are intertwined with difficulties in other areas of research, including storage, communication, programming models, and hardware. For this reason, it is necessary to prototype, test, and deploy a variety of approaches to address the identified technical challenges and evaluate the advantages and disadvantages of each proposed solution. The focus of this research is interactive large-scale crowd simulation and visualization, exploiting the capacity of current HPC infrastructure to the maximum while being prepared to take advantage of the next generation. The project develops a new approach to scale crowd simulation and visualization on heterogeneous computing clusters using a task-based technique. Its main characteristic is that it is hardware agnostic: it abstracts away the difficulties implied by the use of heterogeneous architectures, such as memory management, scheduling, communication, and synchronization, facilitating development, maintenance, and scalability. With the goal of flexibility and of making the best possible use of computing resources, the project explores different configurations for connecting the simulation with the visualization engine. This kind of system has an essential use in emergencies; therefore, urban scenes were implemented as realistically as possible so that users will be ready to face real events. Path planning for large-scale crowds is a challenge, due to the inherent dynamism of the scenes and the vast search space. A new path-finding algorithm was developed. It has a hierarchical approach that offers several advantages: it divides the search space, reducing the problem's complexity; it can obtain a partial path instead of waiting for the complete one, which allows a character to start moving while the rest is computed asynchronously; and it can reprocess only a part, if necessary, at different levels of abstraction. A case study is presented for crowd simulation in urban scenarios. Geolocated data produced by mobile devices are used to predict individual and crowd behavior and to detect abnormal situations in the presence of specific events. The challenge of combining all these individual locations with a 3D rendering of the urban environment is also addressed. The data processing and simulation approach is computationally expensive and time-critical; it therefore relies on a hybrid Cloud-HPC architecture to produce an efficient solution. Within the project, new behavior models based on data analytics were developed, along with the infrastructure to query various data sources such as social networks, government agencies, or transport companies such as Uber. Ever more geolocation data and better computational resources are available, allowing deeper analysis; this lays the foundations for improving current crowd simulation models. The use of simulations and their visualization makes it possible to observe and organize crowds in real time, and analysis before, during, and after mass events can reduce the risks and associated logistics costs.
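
    A compact sketch of the partial-path idea described above: plan coarsely over clustered regions first, refine only the first leg so the character can start moving, and hand the rest back for asynchronous refinement. This is an editorial illustration in Python under assumed graph and region structures, not the project's implementation.

```python
import heapq
from typing import Dict, List, Set, Tuple

Graph = Dict[str, List[Tuple[str, float]]]   # node -> [(neighbor, cost)]

def dijkstra(graph: Graph, start: str, goals: Set[str]) -> List[str]:
    """Shortest path from start to the nearest node in `goals`."""
    frontier = [(0.0, start, [start])]
    seen: Set[str] = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node in goals:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
    return []

def partial_path(graph: Graph, region_of: Dict[str, str],
                 region_graph: Dict[str, List[str]], start: str, goal: str):
    """Hierarchical query: plan over regions, refine only the first leg, and
    return the remaining coarse plan to be refined asynchronously later."""
    # Coarse plan: breadth-first search over the region graph.
    queue = [[region_of[start]]]
    visited = {region_of[start]}
    plan: List[str] = []
    while queue:
        route = queue.pop(0)
        if route[-1] == region_of[goal]:
            plan = route
            break
        for nxt in region_graph.get(route[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(route + [nxt])
    if not plan:
        return [], []
    if len(plan) == 1:                       # start and goal share a region
        return dijkstra(graph, start, {goal}), []
    # Fine plan: walk only until we enter the next region of the coarse plan.
    boundary = {n for n, r in region_of.items() if r == plan[1]}
    return dijkstra(graph, start, boundary), plan[1:]

# Toy example: four waypoints in two regions.
graph = {"a": [("b", 1.0)], "b": [("c", 1.0)], "c": [("d", 1.0)], "d": []}
region_of = {"a": "R1", "b": "R1", "c": "R2", "d": "R2"}
region_graph = {"R1": ["R2"], "R2": []}
leg, remaining = partial_path(graph, region_of, region_graph, "a", "d")
print(leg, remaining)   # ['a', 'b', 'c'] and ['R2'] still to refine
```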