    Interactive Content-Aware Zooming

    We propose a novel, interactive content-aware zooming operator that allows effective and efficient visualization of high-resolution images on small screens, which may have aspect ratios different from those of the input images. Our approach applies an image retargeting method to fit the entire image into the limited screen space, which provides a global but approximate view at lower zoom levels. As we zoom further into the image, we continuously unroll the distortion to provide local but more detailed and accurate views at higher zoom levels. In addition, we propose an adaptive view-dependent mesh to achieve high retargeting quality while maintaining interactive performance. We demonstrate the effectiveness of the proposed operator by comparing it against the traditional zooming approach and a method stemming from a direct combination of existing works.
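
    A minimal sketch of the continuous "unrolling" idea, assuming the retargeted and undistorted vertex layouts are both available as arrays; the names and the linear blend are illustrative, not the paper's actual implementation:

        import numpy as np

        def blend_mesh(retargeted, uniform, zoom):
            # zoom is normalized to [0, 1]: 0 = fully zoomed out (retargeted
            # view), 1 = fully zoomed in (undistorted view). Both inputs are
            # (N, 2) arrays of mesh vertex positions; interpolating between
            # them unrolls the retargeting distortion as the user zooms in.
            t = min(max(zoom, 0.0), 1.0)
            return (1.0 - t) * np.asarray(retargeted) + t * np.asarray(uniform)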

    Supporting Focus and Context Awareness in 3D Modelling Tasks Using Multi-Layered Displays

    Most 3D modelling software has been developed for conventional 2D displays and, as such, lacks support for true depth perception. This makes polygonal 3D modelling tasks challenging, particularly when models are complex and consist of a large number of overlapping components (e.g. vertices, edges) and objects (i.e. parts). Research has shown that users of 3D modelling software often encounter a range of difficulties, which can collectively be described as focus and context awareness problems. These include maintaining position and orientation awareness, as well as judging distances between individual components and objects in 3D space. In this paper, we present five visualization and interaction techniques we have developed for multi-layered displays to better support focus and context awareness in 3D modelling tasks. The results of a user study we conducted show that three of these five techniques improve users' 3D modelling task performance.
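
    As a rough illustration of one way a two-layer display could separate focus from context (a hypothetical sketch, not one of the paper's five techniques): render components near the depth of interest on the front layer and everything else on the rear layer.

        def split_layers(components, focus_depth, tolerance=0.1):
            # components: iterable of (name, depth) pairs, depth in scene units.
            # Items within `tolerance` of the focus depth go to the front
            # (focus) layer; the rest go to the rear (context) layer.
            front = [c for c in components if abs(c[1] - focus_depth) <= tolerance]
            back = [c for c in components if abs(c[1] - focus_depth) > tolerance]
            return front, back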

    Advanced Mid-Water Tools for 4D Marine Data Fusion and Visualization

    Mapping and charting of the seafloor underwent a revolution approximately 20 years ago with the introduction of multibeam sonars -- sonars that provided complete, high-resolution coverage of the seafloor rather than sparse measurements. The initial focus of these sonar systems was the charting of depths in support of safety of navigation and offshore exploration; more recently, innovations in processing software have led to approaches for characterizing seafloor type and mapping seafloor habitat in support of fisheries research. In recent years, a new generation of multibeam sonars has been developed that, for the first time, can map the water column along with the seafloor. This ability will potentially allow multibeam sonars to address a number of critical ocean problems, including the direct mapping of fish and marine mammals, the location of mid-water targets and, if water column properties are appropriate, a wide range of physical oceanographic processes. This potential relies on suitable software to make use of all of the newly available data. Currently, the users of these sonars have a limited view of the mid-water data in real time and limited capacity to store it, replay it, or run further analysis. The data also need to be integrated with other sensor data such as bathymetry, backscatter, sub-bottom profiles and seafloor characterizations so that a “complete” picture of the marine environment under analysis can be realized. Software tools developed for this type of data integration should support the wide variety of mid-water sonar types through a unified format. This paper describes the evolution and result of an effort to create a software tool that meets these needs, and details case studies using the new tools in the areas of fisheries research, static target search, wreck surveys and physical oceanographic processes.
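
    As a sketch of what a unified record for heterogeneous mid-water sonar types might contain (field names and units are assumptions for illustration, not the tool's actual format):

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class WaterColumnPing:
            timestamp: float           # seconds since epoch
            latitude: float            # degrees
            longitude: float           # degrees
            beam_angles: np.ndarray    # (n_beams,) degrees from nadir
            sample_ranges: np.ndarray  # (n_samples,) metres along each beam
            amplitude: np.ndarray      # (n_beams, n_samples) intensity in dB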

    WISeREP - An Interactive Supernova Data Repository

    We have entered an era of massive data sets in astronomy. In particular, the number of supernova (SN) discoveries and classifications has increased substantially over the years, from a few tens to thousands per year. It is no longer the case that observations of a few prototypical events encapsulate most spectroscopic information about SNe, motivating the development of modern tools to collect, archive, organize and distribute spectra in general, and SN spectra in particular. For this reason we have developed the Weizmann Interactive Supernova data REPository (WISeREP), an SQL-based database (DB) with an interactive web-based graphical interface. The system serves as an archive of high-quality SN spectra, including both historical (legacy) data and data accumulated by ongoing modern programs. The archive provides information about objects, their spectra, and related metadata. Using interactive plots, we provide a graphical interface to visualize data, perform line identification of the major relevant species, determine object redshifts, classify SNe and measure expansion velocities. Guest users may view and download spectra or other data that have been placed in the public domain. Registered users may also view and download data that are proprietary to specific programs with which they are associated. The DB currently holds >8000 spectra, of which >5000 are public; the latter include published spectra from the Palomar Transient Factory, all of the SUSPECT archive, the Caltech Core-Collapse Program, the CfA SN spectra archive and published spectra from the UC Berkeley SNDB repository. It offers an efficient and convenient way to archive data and share it with colleagues, and we expect that data stored in this way will be easy to access, increasing its visibility, usefulness and scientific impact.

    Comment: To be published in PASP. WISeREP: http://www.weizmann.ac.il/astrophysics/wiserep
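
    A hypothetical, much-simplified sketch of the public/proprietary split such an SQL-based archive might use (table and column names are assumptions, not the actual WISeREP schema):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE objects (
            id       INTEGER PRIMARY KEY,
            name     TEXT NOT NULL,
            redshift REAL,
            sn_type  TEXT
        );
        CREATE TABLE spectra (
            id        INTEGER PRIMARY KEY,
            object_id INTEGER REFERENCES objects(id),
            obs_date  TEXT,
            is_public INTEGER NOT NULL DEFAULT 0  -- guests see only public rows
        );
        """)

        def public_spectra(conn):
            # Guest users are restricted to spectra flagged as public.
            return conn.execute(
                "SELECT s.id, o.name, s.obs_date FROM spectra s "
                "JOIN objects o ON o.id = s.object_id WHERE s.is_public = 1"
            ).fetchall()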

    Integrating 2D Mouse Emulation with 3D Manipulation for Visualizations on a Multi-Touch Table

    We present the Rizzo, a multi-touch virtual mouse designed to provide the fine-grained interaction needed for information visualization on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch environment was difficult because the mouse emulation offered by touch surfaces is often insufficient to provide full information visualization functionality. We present a unified design, combining many Rizzos, that not only provides mouse capabilities but also acts as a set of zoomable lenses that make precise information access feasible. The Rizzos and the information visualizations all exist within a touch-enabled 3D window management system. Our approach permits touch interaction both with the 3D windowing environment and with the contents of the individual windows contained therein. We describe an implementation of our technique that augments the VisLink 3D visualization environment to demonstrate how to enable multi-touch capabilities on all visualizations written with the popular prefuse visualization toolkit.
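
    The core of touch-to-mouse emulation can be sketched as follows (a hypothetical illustration of the general idea only; the Rizzo's actual design, with its multiple lenses and 3D windowing, is considerably richer):

        class VirtualMouse:
            def __init__(self, x=0.0, y=0.0, gain=0.5):
                self.x, self.y = x, y
                self.gain = gain  # gain < 1 trades speed for the precision touch lacks

            def on_touch_drag(self, dx, dy):
                # Scale raw finger motion down so the emulated cursor can target
                # single pixels, then forward the position as a mouse-move event.
                self.x += dx * self.gain
                self.y += dy * self.gain
                return self.x, self.y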

    Animating the evolution of software

    The use and development of open source software have increased significantly in the last decade. The high frequency of changes and releases across a distributed environment requires good project management tools in order to control the process adequately. However, even with these tools in place, the nature of the development, and the fact that developers often work on many other projects simultaneously, means that developers are unlikely to have a clear picture of the current state of a project at any given time. Furthermore, the poor documentation associated with many projects makes it harder to encourage new developers to contribute to the software. A typical version control repository contains a mine of information that is not always obvious and not easy to comprehend in its raw form. However, presenting this historical data in a suitable format, using software visualisation techniques, allows the evolution of the software over a number of releases to be shown. This allows the changes made to the software to be identified clearly and the effects of those changes to be emphasised, enabling both managers and developers to gain a more detailed view of the current state of the project. The visualisation of evolving software introduces a number of new issues. This thesis investigates some of these issues in detail and recommends a number of solutions to alleviate the problems that may otherwise arise. The solutions are then demonstrated in the definition of two new visualisations. These use historical data contained within version control repositories to show the evolution of the software at a number of levels of granularity. Additionally, animation is used as an integral part of both visualisations, not only to show the evolution by representing the progression of time, but also to highlight the changes that have occurred. Previously, the use of animation within software visualisation has been primarily restricted to small-scale, hand-generated visualisations. This thesis, however, shows the viability of using animation within software visualisation with automated visualisations on a large scale. In addition, evaluation of the visualisations has shown that they are suitable for showing the changes that have occurred in the software over a period of time, and subsequently how the software has evolved. These visualisations are therefore suitable for use by developers and managers involved with open source software, and they also provide a basis for future research in evolutionary visualisations, software evolution and open source development.
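
    The first step of such a visualisation, extracting per-file change histories from a repository, might look like this (a sketch using plain git log; the thesis's own extraction pipeline may differ):

        import subprocess
        from collections import Counter

        def changes_per_file(repo_path):
            # List the files touched by every commit, one name per line, then
            # count how often each file has changed over the project's history.
            out = subprocess.run(
                ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:"],
                capture_output=True, text=True, check=True,
            ).stdout
            return Counter(line for line in out.splitlines() if line.strip())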

    Collocating Interface Objects: Zooming into Maps

    May, Dean and Barnard [10] used a theoretically based model to argue that objects in a wide range of interfaces should be collocated following screen changes such as a zoom-in to detail. Many existing online maps do not follow this principle, but instead move a clicked point to the centre of the subsequent display, leaving the user looking at an unrelated location. This paper presents three experiments showing that collocating the clicked point, so that the detailed location appears in the place previously occupied by the overview location, makes the map easier to use, reducing eye movements and interaction duration. We discuss the benefit of basing design principles on theoretical models so that they can be applied to novel situations, and so that designers can infer when to use them and when not to.
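
    The collocation principle reduces to a one-line coordinate calculation: after a zoom, pick the viewport origin so that the clicked point projects to the same screen pixel it occupied before. A minimal sketch, with assumed coordinate conventions:

        def collocated_origin(click_world, click_screen, new_scale):
            # click_world:  (x, y) of the clicked point in map coordinates.
            # click_screen: (x, y) of the click in screen pixels.
            # new_scale:    pixels per map unit after zooming in.
            # Screen position satisfies screen = (world - origin) * scale,
            # so solving for the viewport origin keeps the point collocated:
            wx, wy = click_world
            sx, sy = click_screen
            return (wx - sx / new_scale, wy - sy / new_scale)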

    Understanding user experience of mobile video: Framework, measurement, and optimization

    Since users have become the focus of product/service design in the last decade, the term User eXperience (UX) has been frequently used in the field of Human-Computer Interaction (HCI). Research on UX facilitates a better understanding of the various aspects of the user's interaction with the product or service. Mobile video, as a new and promising service and research field, has attracted great attention. Owing to the significance of UX to the success of mobile video (Jordan, 2002), many researchers have focused on this area, examining users' expectations, motivations, requirements, and usage contexts. As a result, many influencing factors have been explored (Buchinger, Kriglstein, Brandt & Hlavacs, 2011; Buchinger, Kriglstein & Hlavacs, 2009). However, a general framework for structuring this large number of factors is still lacking for the specific case of mobile video. To measure the user experience of multimedia services such as mobile video, quality of experience (QoE) has recently become a prominent concept. In contrast to the traditional concept of quality of service (QoS), QoE not only involves objectively measuring the delivered service but also takes into account the user's needs and desires when using the service, emphasizing the user's overall acceptance of the service. Many QoE metrics can estimate the user-perceived quality or acceptability of mobile video, but they may not be accurate enough for overall UX prediction because of the complexity of UX. Only a few QoE frameworks have addressed the broader aspects of UX for mobile multimedia applications, and these still need to be transformed into practical measures. The challenge of optimizing UX lies in adapting to resource constraints (e.g., network conditions, mobile device capabilities, and heterogeneous usage contexts) while meeting complex user requirements (e.g., usage purposes and personal preferences). In this chapter, we survey the important existing UX frameworks, compare their similarities, and discuss features that fit the mobile video service. Building on previous research, we propose a simple UX framework for mobile video applications by mapping a variety of influencing factors of UX onto a typical mobile video delivery system. Each component and its factors are explored through comprehensive literature reviews. The proposed framework may benefit user-centred design of mobile video by taking full account of UX influences, and may help improve mobile video service quality by adjusting the values of certain factors to produce a positive user experience. It may also facilitate related research by locating important issues to study, clarifying research scopes, and setting up proper study procedures. We then review a substantial body of research on UX measurement, including QoE metrics and QoE frameworks for mobile multimedia. Finally, we discuss how to achieve an optimal quality of user experience by focusing on various aspects of UX of mobile video. In conclusion, we suggest some open issues for future study.
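
    As an illustration of the kind of practical measure a QoE framework must eventually be turned into, consider a toy weighted score over a few influencing factors (the weights and factor choices are hypothetical, not a validated metric from the chapter):

        def qoe_score(video_quality, startup_delay_s, stall_ratio,
                      w_quality=0.6, w_delay=0.2, w_stall=0.2):
            # video_quality and stall_ratio are normalized to [0, 1]; startup
            # delay is in seconds and its penalty saturates at 10 s. Weights
            # sum to 1, so the score lies in [0, 1], higher meaning better.
            delay_ok = 1.0 - min(startup_delay_s / 10.0, 1.0)
            stall_ok = 1.0 - min(max(stall_ratio, 0.0), 1.0)
            return w_quality * video_quality + w_delay * delay_ok + w_stall * stall_ok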