Semantics-Space-Time Cube. A Conceptual Framework for Systematic Analysis of Texts in Space and Time
We propose an approach to analyzing data in which texts are associated with spatial and temporal references, with the aim of understanding how the text semantics vary over space and time. To represent the semantics, we apply probabilistic topic modeling. After extracting a set of topics and representing the texts by vectors of topic weights, we aggregate the data into a data cube whose dimensions correspond to the set of topics, the set of spatial locations (e.g., regions), and the time divided into suitable intervals according to the scale of the planned analysis. Each cube cell corresponds to a combination (topic, location, time interval) and contains aggregate measures characterizing the subset of the texts concerning this topic and having their spatial and temporal references within this location and interval. Based on this structure, we systematically describe the space of analysis tasks for exploring the interrelationships among the three heterogeneous information facets: semantics, space, and time. We introduce the operations of projecting and slicing the cube, which are used to decompose complex tasks into simpler subtasks. We then present a design of a visual analytics system intended to support these subtasks. To reduce the complexity of the user interface, we apply the principles of structural, visual, and operational uniformity while respecting the specific properties of each facet. The aggregated data are represented in three parallel views corresponding to the three facets and providing different complementary perspectives on the data. The views have a similar look and feel to the extent allowed by the facet specifics. Uniform interactive operations applicable to any view support establishing links between the facets. The uniformity principle is also applied in supporting the projecting and slicing operations on the data cube.
We evaluate the feasibility and utility of the approach by applying it in two analysis scenarios using geolocated social media data to study people's reactions to social and natural events of different spatial and temporal scales.
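The cube structure and its projecting and slicing operations can be sketched minimally as follows; all names, topics, and weights here are illustrative stand-ins, not data or code from the paper:

```python
from collections import defaultdict

# Hypothetical input: texts already reduced to topic-weight vectors,
# each tagged with a region and a time interval (illustrative values).
texts = [
    {"topics": {"flood": 0.7, "traffic": 0.3}, "region": "north", "interval": "2015-Q1"},
    {"topics": {"flood": 0.2, "traffic": 0.8}, "region": "south", "interval": "2015-Q1"},
    {"topics": {"flood": 0.9, "traffic": 0.1}, "region": "north", "interval": "2015-Q2"},
]

# Aggregate topic weights into cells keyed by (topic, region, interval).
cube = defaultdict(float)
for t in texts:
    for topic, weight in t["topics"].items():
        cube[(topic, t["region"], t["interval"])] += weight

def project(cube, drop_axis):
    """Project the cube by summing out one dimension
    (0 = topic, 1 = region, 2 = interval)."""
    out = defaultdict(float)
    for key, value in cube.items():
        out[tuple(k for i, k in enumerate(key) if i != drop_axis)] += value
    return dict(out)

def slice_cube(cube, axis, value):
    """Slice the cube by fixing one dimension to a single value."""
    return {k: v for k, v in cube.items() if k[axis] == value}
```

For example, projecting out the time axis yields a topic-by-region view, while slicing on one region restricts all three facets to that location before further analysis.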
Planning Support Systems: Progress, Predictions, and Speculations on the Shape of Things to Come
In this paper, we review the brief history of planning support systems, sketching the way both the field of planning and the software that supports and informs various planning tasks have fragmented and diversified. This is due to many forces, ranging from changing conceptions of what planning is for and who should be involved, to the rapid dissemination of computers and their software, set against the general quest to build ever more generalized software products applicable to as many activities as possible. We identify two main drivers – the move to visualization, which dominates our very interaction with the computer, and the move to disseminate and share software, data, and ideas across the web. We attempt a brief and somewhat unsatisfactory classification of tools for PSS in terms of the planning process and the software that has evolved, but this does serve to point up the state-of-the-art and to focus our attention on the near- and medium-term future. We illustrate many of these issues with three exemplars: first, a land use-transportation model (LUTM) as part of a concern for climate change; second, a visualization of cities in their third dimension, which is driving an interest in what places look like and, in London, a concern for high buildings; and finally, various web-based services we are developing to share spatial data, which in turn suggest ways in which stakeholders can begin to define urban issues collaboratively. All these are elements in the larger scheme of things – in the development of online collaboratories for planning support. Our review is far from comprehensive and our examples are simply indicative, not definitive. We conclude with some brief suggestions for the future.
Teaching complex theoretical multi-step problems in ICT networking through 3D printing and augmented reality
This paper presents a pilot study rationale and research methodology using a mixed-media visualisation (3D printing and augmented reality simulation) learning intervention to help students in an ICT degree represent theoretical complex multi-step problems that lack a corresponding real-world physical analog model. This is important because these concepts are difficult to visualise without a corresponding mental model. The proposed intervention uses an augmented reality application programmed with free, commercially available tools, tested through an action research methodology, to evaluate the effectiveness of the mixed-media visualisation techniques for teaching networking to ICT students. Specifically, 3D models of network equipment will be placed in a field, and the augmented reality app can then be used to observe packet traversal and routing between the different devices as data travels from the source to the destination. The expected outcome is an overall improvement in final skill level for all students.
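The packet-traversal idea underlying such an AR visualisation can be sketched as a route search over a device graph, where each hop would drive one animation step in the app; the topology and device names below are hypothetical, not taken from the study:

```python
from collections import deque

# Illustrative network topology as an adjacency list (hypothetical names).
network = {
    "pc1":     ["switch1"],
    "switch1": ["pc1", "router1"],
    "router1": ["switch1", "router2"],
    "router2": ["router1", "switch2"],
    "switch2": ["router2", "pc2"],
    "pc2":     ["switch2"],
}

def packet_path(net, src, dst):
    """Breadth-first search returning the hop-by-hop route a packet
    would take from src to dst, or None if unreachable."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # Walk the predecessor chain back to src to recover the path.
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbour in net[node]:
            if neighbour not in prev:
                prev[neighbour] = node
                queue.append(neighbour)
    return None
```

Rendering each element of the returned path in sequence over the 3D-printed models would show the packet moving device by device from source to destination.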
Improving Big Data Visual Analytics with Interactive Virtual Reality
For decades, the growth and volume of digital data collection have made it challenging to digest large volumes of information and extract underlying structure. Coined "Big Data", massive amounts of information have quite often been gathered inconsistently (e.g., from many sources, of various forms, at different rates, etc.). These factors impede the practices of not only processing data, but also analyzing and displaying it in an efficient manner to the user. Many efforts have been made in the data mining and visual analytics community to create effective ways to further improve analysis and achieve the knowledge desired for better understanding. Our approach for improved big data visual analytics is two-fold, focusing on both visualization and interaction. Given geo-tagged information, we are exploring the benefits of visualizing datasets in the original geospatial domain by utilizing a virtual reality platform. After running proven analytics on the data, we intend to represent the information in a more realistic 3D setting, where analysts can achieve an enhanced situational awareness and rely on familiar perceptions to draw in-depth conclusions on the dataset. In addition, developing a human-computer interface that responds to natural user actions and inputs creates a more intuitive environment. Tasks can be performed to manipulate the dataset and allow users to dive deeper upon request, adhering to desired demands and intentions. Due to the volume and popularity of social media, we developed a 3D tool visualizing Twitter on MIT's campus for analysis. Utilizing emerging technologies of today to create a fully immersive tool that promotes visualization and interaction can help ease the process of understanding and representing big data.
Comment: 6 pages, 8 figures, 2015 IEEE High Performance Extreme Computing Conference (HPEC '15); corrected typo
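A core step in placing geo-tagged data "in the original geospatial domain" is mapping latitude/longitude pairs to local scene coordinates. A minimal sketch, using an equirectangular approximation that is adequate at campus scale; the origin and tweet coordinates are illustrative, not the tool's actual data:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def geo_to_scene(lat, lon, origin_lat, origin_lon):
    """Project a lat/lon pair to local (east, north) metres around an
    origin point, via the equirectangular approximation."""
    east = math.radians(lon - origin_lon) * EARTH_RADIUS_M * math.cos(math.radians(origin_lat))
    north = math.radians(lat - origin_lat) * EARTH_RADIUS_M
    return east, north

# Hypothetical geo-tagged points near a campus origin (illustrative values).
origin = (42.3601, -71.0942)
tweets = [(42.3611, -71.0942), (42.3601, -71.0930)]
points = [geo_to_scene(lat, lon, *origin) for lat, lon in tweets]
```

Each resulting (east, north) pair can then be placed directly in a 3D scene, with a vertical axis free for an analytic quantity such as tweet density over time.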
Mixed reality participants in smart meeting rooms and smart home environments
Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment the user displays characteristics that show how the user, not necessarily consciously, verbally and nonverbally provides the smart environment with useful input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture) and human participants in the environment. Therefore it is useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, discuss how remote meeting participants can take part in meeting activities, and offer some observations on translating research results to smart home environments.
Exploring the Use of Virtual Worlds as a Scientific Research Platform: The Meta-Institute for Computational Astrophysics (MICA)
We describe the Meta-Institute for Computational Astrophysics (MICA), the first professional scientific organization based exclusively in virtual worlds (VWs). The goals of MICA are to explore the utility of the emerging VR and VW technologies for scientific and scholarly work in general, and to facilitate and accelerate their adoption by the scientific research community. MICA itself is an experiment in academic and scientific practices enabled by immersive VR technologies. We describe the current and planned activities and research directions of MICA, and offer some thoughts as to what the future developments in this arena may be.
Comment: 15 pages, to appear in the refereed proceedings of "Facets of Virtual Environments" (FaVE 2009), eds. F. Lehmann-Grube, J. Sablating, et al., ICST Lecture Notes Ser., Berlin: Springer Verlag (2009); a version with full-resolution color figures is available at http://www.mica-vw.org/wiki/index.php/Publication
Interpretation at the controller's edge: designing graphical user interfaces for the digital publication of the excavations at Gabii (Italy)
This paper discusses the authors' approach to designing an interface for the Gabii Project's digital volumes that attempts to fuse elements of traditional synthetic publications and site reports with rich digital datasets. Archaeology, and classical archaeology in particular, has long engaged with questions of the formation and lived experience of towns and cities. Such studies might draw on evidence of local topography, the arrangement of the built environment, and the placement of architectural details, monuments and inscriptions (e.g. Johnson and Millett 2012). Fundamental to the continued development of these studies is the growing body of evidence emerging from new excavations. Digital techniques for recording evidence "on the ground," notably SFM (structure from motion, a.k.a. close-range photogrammetry) for the creation of detailed 3D models and for scene-level modeling in 3D, have advanced rapidly in recent years. These parallel developments have opened the door for approaches to the study of the creation and experience of urban space driven by a combination of scene-level reconstruction models (van Roode et al. 2012, Paliou et al. 2011, Paliou 2013) explicitly combined with detailed SFM- or scanning-based 3D models representing stratigraphic evidence. It is essential to understand the subtle but crucial impact of the design of the user interface on the interpretation of these models. In this paper we focus on the impact of design choices for the user interface, and make connections between design choices and the broader discourse in archaeological theory surrounding the practice of the creation and consumption of archaeological knowledge. As a case in point, we take the prototype interface being developed within the Gabii Project for the publication of the Tincu House.
In discussing our own evolving practices in engagement with the archaeological record created at Gabii, we highlight some of the challenges of undertaking theoretically situated user interface design, and their implications for the publication and study of archaeological materials.