Cabinet Tree: an orthogonal enclosure approach to visualizing and exploring big data
Treemaps are well known for visualizing hierarchical data. Most related approaches have focused on layout algorithms and paid little attention to other display properties and interactions. Furthermore, the structural information in conventional Treemaps is too implicit for viewers to perceive. This paper presents Cabinet Tree, an approach that: i) draws branches explicitly to show relational structures, ii) adapts a space-optimized layout for leaves and maximizes space utilization, and iii) uses coloring and labeling strategies to clearly reveal patterns and contrast different attributes intuitively. We also apply continuous node selection and detail window techniques to support user interaction with different levels of the hierarchies. Our quantitative evaluations demonstrate that Cabinet Tree achieves good scalability for increased resolutions and big datasets.
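The abstract does not specify Cabinet Tree's layout algorithm, but the space partitioning all treemap layouts perform can be illustrated with the classic slice-and-dice baseline. This is a minimal sketch of that baseline, not the Cabinet Tree algorithm itself; the dictionary-based tree format is an assumption made for the example.

```python
# Minimal slice-and-dice treemap layout (a classic baseline, not the
# Cabinet Tree algorithm): each node's rectangle is split among its
# children in proportion to their sizes, alternating the split axis
# between levels of the hierarchy.

def layout(node, x, y, w, h, depth=0, out=None):
    """node: {'name': str, 'size': float, 'children': [...]}.
    Appends (name, x, y, w, h) rectangles to `out` and returns it."""
    if out is None:
        out = []
    out.append((node['name'], x, y, w, h))
    children = node.get('children', [])
    total = sum(c['size'] for c in children)
    offset = 0.0
    for c in children:
        frac = c['size'] / total if total else 0.0
        if depth % 2 == 0:   # split along the x axis at even depths
            layout(c, x + offset * w, y, frac * w, h, depth + 1, out)
        else:                # split along the y axis at odd depths
            layout(c, x, y + offset * h, w, frac * h, depth + 1, out)
        offset += frac
    return out

tree = {'name': 'root', 'size': 10, 'children': [
    {'name': 'a', 'size': 6, 'children': []},
    {'name': 'b', 'size': 4, 'children': []},
]}
rects = layout(tree, 0, 0, 100, 100)
# 'a' gets 60% of the width, 'b' the remaining 40%
```

Space-optimized layouts such as the one Cabinet Tree adapts differ mainly in how they choose the split geometry to improve aspect ratios and space utilization; the recursive proportional subdivision shown here is common to the whole family.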
Data center visualization
Integrated master's dissertation in Informatics Engineering.
As more projects are financed by public institutions, demonstrating the tasks and other work carried out is increasingly required. Traditional monitoring tools such as Prometheus are used in data centers to collect data from the machines and supervise them, since devices can malfunction and make the service unavailable. Currently, applications such as Prometheus that display the machines' performance are limited to a restricted group of people: the data center managers. These applications have limited data visualization methods, since their focus is on retrieving data from the machines, and they rely on specialized frameworks for showing machine performance. These frameworks present the data through several visualization methods, such as line charts and gauges. However, these forms of exposing data are not attractive to the general public, so other ways of presenting the data need to be developed.
Data visualization in three dimensions can present data more attractively and has some advantages over traditional approaches. With a friendlier presentation of the data, it is easier to catch the attention of people who do not manage the data center.
This project aims to build an application that presents a data center's data in a 3D scenario. The data exposed are the machines' tasks, components, and performance. By exposing the data center's tasks and other information to the general public, the application can show viewers the usefulness of the data center. The application's components must be flexible so that any data center can use it; moreover, data centers should be able to expose any visualization they desire through plugins.
To meet these goals, different techniques to explore and view the data are first investigated, and several applications that expose data from a data center are analyzed to establish their current state. Based on this research, different scenarios are constructed. Using a tool capable of handling web requests makes the application available to everyone, and the application is flexible in the parts of the architecture that must adapt to any framework, so any data center can use it. Those parts are the server that holds the machines' performance data and the database management system. The system allows the creation of a plugin to communicate with the machines' performance server; by following a simple interface, a new plugin can be developed with relative ease. The web server is also replicable, making it adaptable to the data center's needs. Moreover, the application allows the creation of arbitrary 3D scenarios: by following a set of steps, a simple 3D scenario can be built, including the visualization and communication-server stages. Such a scenario can be expanded freely, as long as the communication API is observed, and the created scenario functions as a plugin that can be inserted into the application effortlessly. The application's usefulness is validated through an experiment with information from a real data center. Finally, the application's performance is corroborated,
supporting a considerable number of concurrent requests.
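The abstract says a new plugin can be developed "following a simple interface" but does not publish that interface. The sketch below is purely illustrative of the pattern described (an adapter between the 3D front end and a metrics backend such as Prometheus); every class and method name is a hypothetical stand-in, not taken from the actual project.

```python
# Hypothetical plugin interface of the kind the dissertation describes:
# each data center implements one adapter class so the visualization
# front end can query any machines'-performance server.

from abc import ABC, abstractmethod

class MetricsPlugin(ABC):
    """Adapter between the visualization server and a metrics backend."""

    @abstractmethod
    def list_machines(self):
        """Return the identifiers of the monitored machines."""

    @abstractmethod
    def query(self, machine_id, metric):
        """Return the latest value of `metric` for one machine."""

class InMemoryPlugin(MetricsPlugin):
    """Toy backend standing in for, e.g., a Prometheus adapter."""

    def __init__(self, data):
        self.data = data  # {machine_id: {metric: value}}

    def list_machines(self):
        return sorted(self.data)

    def query(self, machine_id, metric):
        return self.data[machine_id][metric]

plugin = InMemoryPlugin({'node-1': {'cpu': 0.42}, 'node-2': {'cpu': 0.13}})
```

The point of such an interface is the one the abstract makes: the front end depends only on the abstract adapter, so swapping the performance server or database requires writing a new plugin rather than changing the application.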
Advances in Data-Driven Analysis and Synthesis of 3D Indoor Scenes
This report surveys advances in deep learning-based modeling techniques that
address four different 3D indoor scene analysis tasks, as well as synthesis of
3D indoor scenes. We describe different kinds of representations for indoor
scenes, various indoor scene datasets available for research in the
aforementioned areas, and discuss notable works employing machine learning
models for such scene modeling tasks based on these representations.
Specifically, we focus on the analysis and synthesis of 3D indoor scenes. With
respect to analysis, we focus on four basic scene understanding tasks -- 3D
object detection, 3D scene segmentation, 3D scene reconstruction and 3D scene
similarity. For synthesis, we mainly discuss neural scene synthesis works,
while also highlighting model-driven methods that allow for human-centric,
progressive scene synthesis. We identify the challenges involved in modeling
scenes for these tasks and the kind of machinery that needs to be developed to
adapt to the data representation, and the task setting in general. For each of
these tasks, we provide a comprehensive summary of the state-of-the-art works
across different axes such as the choice of data representation, backbone,
evaluation metric, input, output, etc., providing an organized review of the
literature. Towards the end, we discuss some interesting research directions
that have the potential to make a direct impact on the way users interact and
engage with these virtual scene models, making them an integral part of the
metaverse.
Comment: Published in Computer Graphics Forum, Aug 202
Active Learning from Examples, Queries and Explanations
Humans are remarkably efficient at learning by interacting with other people and observing their behavior. Children learn by watching their parents' actions and mimicking their behavior. When they are not sure about their parents' demonstrations, they communicate with them, ask questions, and learn from their feedback. Parents and teachers, in turn, ask children to explain their behavior; this explanation helps the parents know whether the children have learned their task correctly. So, why not have intelligent systems that learn from examples and interaction with humans, and explain their decisions to humans? This dissertation makes three contributions toward this goal.
The first contribution is toward designing an intelligent system that incorporates human knowledge in discovering hierarchical structure in sequential decision problems. Given a set of expert demonstrations, we propose a new approach that learns a hierarchical policy by actively selecting demonstrations and using queries to explicate their intentional structure at selected points.
The second contribution is a generalization of the framework of adaptive submodularity. Adaptive submodular optimization, where a sequence of items is selected adaptively to optimize a submodular function, has found many applications, from sensor placement to active learning. We extend this work to the setting of multiple queries at each time step, where the set of available queries is randomly constrained. A primary contribution of this work is to prove the first near-optimal approximation bound for a greedy policy in this setting. A natural application of this framework is the crowd-sourced active learning problem, where the set of available experts and examples might vary randomly. We instantiate the new framework for multi-label learning and evaluate it in multiple benchmark domains with promising results.
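The greedy rule at the heart of submodular optimization can be sketched for the plain, non-adaptive case. This is a minimal illustration of the classical (1 - 1/e)-approximate greedy policy on a coverage function, not the dissertation's randomly constrained multi-query setting; the query names and element sets are invented for the example.

```python
# Classical greedy maximization of a submodular set function (here,
# set coverage) under a cardinality constraint: at each step, pick
# the item with the largest marginal gain over what is already covered.

def greedy_max_coverage(sets, k):
    """sets: {name: set of covered elements}; pick up to k names greedily."""
    chosen, covered = [], set()
    for _ in range(k):
        # marginal gain of each not-yet-chosen set
        gains = {name: len(elems - covered)
                 for name, elems in sets.items() if name not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] == 0:      # diminishing returns exhausted
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# three hypothetical queries, each revealing a set of labeled examples
sets = {'q1': {1, 2, 3}, 'q2': {3, 4}, 'q3': {4, 5, 6, 7}}
picked, covered = greedy_max_coverage(sets, 2)
# greedy picks 'q3' (gain 4) and then 'q1' (gain 3)
```

Adaptive submodularity extends this idea to the setting where each selection reveals a random outcome before the next choice is made; the dissertation's generalization further allows the set of available queries at each step to be randomly constrained.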
The third contribution of this dissertation is the introduction of a framework for explaining the decisions of deep neural networks using human-recognizable visual concepts. Our approach, called interactive naming, enables human annotators to interactively group the excitation patterns of the neurons in the critical layer of the network into groups called "visual concepts". We performed two user studies of visual concepts produced by human annotators. We found that a large fraction of the activation maps have recognizable visual concepts, and that there is significant agreement between the different annotators about their denotations. Many of the visual concepts created by human annotators can be generalized reliably from a modest number of examples.
Visualizing participatory development communication in social change processes: challenging the notion that visual research methods are inherently participatory
Participatory development communication approaches increasingly use visual research methods with little critical reflection. This article challenges the implicit assumption across the community and international development sector that visual research methods are inherently participatory. I analyze a workshop held in Papua New Guinea that explored a visual multimethod approach in a participatory development context. In particular, I review the methods used with respect to the key participatory development communication principles of horizontal dialogue and local ownership. The findings show that visual research methods are not inherently participatory, but require reflection and conscious decision making by the facilitator(s) to ensure high levels of participation.
The robot's vista space: a computational 3D scene analysis
Swadzba A. The robot's vista space: a computational 3D scene analysis. Bielefeld (Germany): Bielefeld University; 2011.
The space that can be explored quickly from a fixed viewpoint without locomotion is known as the vista space. In indoor environments, single rooms and room parts follow this definition. The vista space plays an important role in situations with agent-agent interaction, as it is the directly surrounding environment in which the interaction takes place. A collaborative interaction of the partners in and with the environment requires that both partners know where they are, what spatial structures they are talking about, and what scene elements they are going to manipulate. This thesis focuses on the analysis of a robot's vista space. Mechanisms for extracting relevant spatial information are developed which enable the robot to recognize in which place it is, to detect the scene elements the human partner is talking about, and to segment scene structures the human is changing. These abilities are addressed by the proposed holistic, aligned, and articulated modeling approach. For a smooth human-robot interaction, the computed models should be aligned to the partner's representations. Therefore, the design of the computational models is based on combining psychological results from studies on human scene perception with basic physical properties of the perceived scene and the perception itself. The holistic modeling realizes a categorization of room percepts based on the observed 3D spatial layout. Room layouts have room-type-specific features, and fMRI studies have shown that some of the human brain areas active in scene recognition are sensitive to the 3D geometry of a room. With the aligned modeling, the robot is able to extract the hierarchical scene representation underlying a scene description given by a human tutor. Furthermore, it is able to ground the inferred scene elements in its own visual perception of the scene.
This modeling follows the assumption that cognition and language schematize the world in the same way. This is visible in the fact that a scene depiction mainly consists of relations between an object and its supporting structure or between objects located on the same supporting structure. Last, the articulated modeling equips the robot with a methodology for articulated scene part extraction and fast background learning under the short and disturbed observation conditions typical for human-robot interaction scenarios. Articulated scene parts are detected model-free by observing scene changes caused by their manipulation. Change detection and background learning are closely coupled because change is defined phenomenologically as variation of structure. This means that change detection involves a comparison of currently visible structures with a representation in memory. In range sensing, this comparison can be neatly implemented as a subtraction of these two representations. The three modeling approaches enable the robot to enrich its visual perceptions of the surrounding environment, the vista space, with semantic information about meaningful spatial structures useful for further interaction with the environment and the human partner.
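The "comparison as subtraction" idea for range sensing can be sketched in a few lines. This is a minimal illustration of the general principle, not the thesis's method: the array shapes, threshold, and scene values below are invented for the example.

```python
# Change detection on range (depth) data as a per-pixel subtraction:
# compare the current range image against a background representation
# held in memory; pixels whose range differs by more than a threshold
# are flagged as changed (e.g. a manipulated, articulated scene part).

import numpy as np

def detect_change(background, current, threshold=0.05):
    """background, current: 2D arrays of range values in meters."""
    diff = np.abs(current - background)   # subtraction of the two representations
    return diff > threshold               # boolean change mask

background = np.full((4, 4), 2.0)         # flat wall 2 m away, learned earlier
current = background.copy()
current[1:3, 1:3] = 1.5                   # an object moved into view
mask = detect_change(background, current)
# mask is True exactly on the 2x2 region where the scene changed
```

Because range values are geometric rather than photometric, this subtraction is robust to lighting changes, which is part of why it suits the short, disturbed observation windows of human-robot interaction.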