360 research outputs found

    Using digital field notebooks in geoscientific learning in polar environments

    Get PDF
    The emergence of digital tools, including tablets with a multitude of built-in sensors, allows gathering many geological observations digitally and in a geo-referenced context. This is particularly important in polar environments, where (1) limited time is available at each outcrop due to harsh weather conditions, and (2) outcrops are rarely revisited due to the high economic and environmental cost of accessing the localities and the short field season. In an educational development project, we explored the use of digital field notebooks in student groups of 3–4 persons during five geological field campaigns in the Arctic archipelago of Svalbard. The field campaigns formed part of Bachelor and Master/PhD courses at the University Centre in Svalbard in Longyearbyen at 78°N. The digital field notebooks comprise field-proofed tablets with relevant applications, notably FieldMove. Questionnaires and analyses of students’ FieldMove projects provided data on the student experience of using digital field notebooks, and insight into what students used the notebooks for, the notebooks’ functionality and best practices. We found that electronic and geo-referenced note- and photo-taking was by far the dominant function of the digital field notebooks. In addition, some student groups collected significant amounts of structural data using the built-in sensors. Graduate students found the ability to conduct large-scale field mapping and directly display it within the digital field notebook particularly useful. Our study suggests that digital field notebooks add value to field-based education in polar environments.
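    The dominant function the abstract identifies, geo-referenced note- and photo-taking with optional structural measurements, can be illustrated as a minimal data model. This is a hypothetical sketch; the field names are illustrative assumptions and do not reflect FieldMove's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FieldNote:
    """One geo-referenced observation, as a digital field notebook might store it."""
    latitude: float               # decimal degrees, WGS84
    longitude: float
    timestamp: datetime
    text: str
    photo_path: Optional[str] = None
    # structural data from built-in sensors (hypothetical fields)
    strike_deg: Optional[float] = None
    dip_deg: Optional[float] = None

# Example observation at roughly the latitude of Longyearbyen (78°N)
note = FieldNote(latitude=78.22, longitude=15.65,
                 timestamp=datetime.now(timezone.utc),
                 text="Cross-bedded sandstone at outcrop base",
                 strike_deg=112.0, dip_deg=18.5)
print(note.latitude >= 66.5)  # inside the Arctic Circle -> True
```

    Storing coordinates and timestamps with every note is what makes later analysis of rarely revisited outcrops possible without a return trip.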

    Strokes2Surface: Recovering Curve Networks From 4D Architectural Design Sketches

    Full text link
    We present Strokes2Surface, an offline geometry reconstruction pipeline that recovers well-connected curve networks from imprecise 4D sketches to bridge the concept design and digital modeling stages in architectural design. The input to our pipeline consists of 3D strokes' polyline vertices and their timestamps as the 4th dimension, along with additional metadata recorded throughout sketching. Inspired by architectural sketching practices, our pipeline combines a classifier and two clustering models to achieve its goal. First, using a set of hand-engineered features extracted from the sketch, the classifier distinguishes individual strokes between those depicting boundaries (Shape strokes) and those depicting enclosed areas (Scribble strokes). Next, the two clustering models parse strokes of each type into distinct groups, each representing an individual edge or face of the intended architectural object. Curve networks are then formed through topology recovery of consolidated Shape clusters and surfaced using Scribble clusters to guide cycle discovery. Our evaluation is threefold: we confirm the usability of the Strokes2Surface pipeline in architectural design use cases via a user study, we validate our choice of features via statistical analysis and ablation studies on our collected dataset, and we compare our outputs against a range of reconstructions computed using alternative methods.
    Comment: 15 pages, 14 figures
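    The first pipeline stage, classifying strokes from hand-engineered features, can be sketched with one illustrative feature. The straightness rule below is a toy assumption for illustration only; the paper's actual classifier uses a richer feature set.

```python
import math
from dataclasses import dataclass

@dataclass
class Stroke:
    vertices: list   # [(x, y, z), ...] polyline vertices
    times: list      # per-vertex timestamps (the 4th dimension)

def arc_length(s: Stroke) -> float:
    """Total polyline length."""
    return sum(math.dist(a, b) for a, b in zip(s.vertices, s.vertices[1:]))

def straightness(s: Stroke) -> float:
    """Hand-engineered feature: chord length over arc length (1.0 = straight line)."""
    if len(s.vertices) < 2:
        return 1.0
    chord = math.dist(s.vertices[0], s.vertices[-1])
    arc = arc_length(s)
    return chord / arc if arc else 1.0

def classify(s: Stroke, threshold: float = 0.5) -> str:
    """Toy rule: near-straight strokes depict boundaries (Shape);
    dense back-and-forth strokes depict enclosed areas (Scribble)."""
    return "Shape" if straightness(s) >= threshold else "Scribble"

edge = Stroke([(0, 0, 0), (1, 0, 0), (2, 0, 0)], [0.0, 0.1, 0.2])
fill = Stroke([(0, 0, 0), (1, 0, 0), (0, 0.1, 0), (1, 0.2, 0)], [0.0, 0.1, 0.2, 0.3])
print(classify(edge), classify(fill))  # Shape Scribble
```

    The subsequent clustering stages would then group Shape strokes into edges and Scribble strokes into faces before topology recovery.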

    Video interaction using pen-based technology

    Get PDF
    Dissertation for the degree of Doctor of Informatics. Video can be considered one of the most complete and complex media, and manipulating it is still a difficult and tedious task. This research applies pen-based technology to video manipulation, with the goal of improving this interaction. Despite human familiarity with pen-based devices, how they can be used for video interaction, to make it more natural while at the same time fostering the user's creativity, remains an open question. Two types of interaction with video were considered in this work: video annotation and video editing. Each interaction type allows the study of one of the modes of using pen-based technology: indirectly, through digital ink, or directly, through pen gestures or pressure. This research contributes two approaches for pen-based video interaction: pen-based video annotations and video as ink. The first combines pen-based annotations with motion tracking algorithms in order to augment video content with sketches or handwritten notes. It aims to study how pen-based technology can be used to annotate moving objects and how to maintain the association between a pen-based annotation and the annotated moving object. The second concept replaces digital ink with video content, studying how pen gestures and pressure can be used in video editing and what kinds of changes are needed in the interface to provide a more familiar and creative interaction in this usage context. This work was partially funded by the UTAustin-Portugal Digital Media Program (Ph.D. grant SFRH/BD/42662/2007 - FCT/MCTES); by the HP Technology for Teaching Grant Initiative 2006; by the project "TKB - A Transmedia Knowledge Base for contemporary dance" (PTDC/EAT/AVP/098220/2008, funded by FCT/MCTES); and by CITI/DI/FCT/UNL (PEst-OE/EEI/UI0527/2011)

    Using natural user interfaces to support synchronous distributed collaborative work

    Get PDF
    Synchronous Distributed Collaborative Work (SDCW) occurs when group members work together at the same time from different places to achieve a common goal. Effective SDCW requires good communication, continuous coordination and shared information among group members. SDCW is possible because of groupware, a class of computer software systems that supports group work. Shared-workspace groupware systems provide a common workspace that aims to replicate aspects of a physical workspace shared among group members in a co-located environment. Shared-workspace groupware systems have failed to provide the same degree of coordination and awareness among distributed group members that exists in co-located groups, owing to the unintuitive interaction techniques these systems have incorporated. Natural User Interfaces (NUIs) focus on reusing natural human abilities such as touch, speech, gestures and proximity awareness to allow intuitive human-computer interaction. These interaction techniques could provide solutions to the existing issues of groupware systems by breaking down the barrier between people and technology created by the interaction techniques currently utilised. The aim of this research was to investigate how NUI interaction techniques could be used to effectively support SDCW. An architecture for such a shared-workspace groupware system was proposed, and a prototype, called GroupAware, was designed and developed based on this architecture. GroupAware allows multiple users from distributed locations to simultaneously view and annotate text documents, and create graphic designs in a shared workspace. Documents are represented as visual objects that can be manipulated through touch gestures. Group coordination and awareness is maintained through document updates via immediate workspace synchronization, user action tracking via user labels, and user availability identification via basic proxemic interaction.
Members can effectively communicate via audio and video conferencing. A user study was conducted to evaluate GroupAware and determine whether NUI interaction techniques effectively supported SDCW. Ten groups of three members each participated in the study. High levels of performance, user satisfaction and collaboration demonstrated that GroupAware was an effective groupware system that was easy to learn and use, and effectively supported group work in terms of communication, coordination and information sharing. Participants gave highly positive comments about the system that further supported the results. The successful implementation of GroupAware and the positive results obtained from the user evaluation provide evidence that NUI interaction techniques can effectively support SDCW

    The Usability and Learnability of Pen/Tablet Mode Inferencing

    Get PDF
    The inferred mode protocol uses contextual reasoning and local mediators to eliminate the need to access specific modes to perform draw, select, move and delete operations in a sketch interface. This thesis describes an observational experiment to understand the learnability, user preference and frequency of use of mode inferencing in a sketch application. A novel methodology is presented to study both quantitative and long-term qualitative facets of mode inferencing. The experiment demonstrated that participants instructed in the interface features enjoyed fluid transitions between modes. However, the interaction techniques were not self-revealing: participants who were not instructed in them took longer to learn about inferred mode features and were more negative about the interaction techniques. Over multiple sketching sessions, as users develop expertise with the system, they combine inferred mode techniques to speed interaction, and frequently make use of scratch space on the display to retrain themselves and to tune their behaviors. Lastly, post-task interviews outline impediments to discoverability and how performance is affected by negative perceptions around computational intelligence. The results of this work inform the design of sketch interface techniques that incorporate noncommand features
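    The idea of inferring the operation from context rather than an explicit mode switch can be sketched as a small decision rule. This is a toy illustration under assumed rules; the thesis's actual protocol and its local mediators are more nuanced.

```python
def infer_mode(hit_object: bool, pen_moved: bool, over_selection: bool) -> str:
    """Toy contextual inference in the spirit of an inferred mode protocol
    (hypothetical rules, not the thesis's actual ones):
    - pen down on empty canvas            -> draw
    - pen down on a selected object, drag -> move
    - pen down on an object otherwise     -> select
    """
    if not hit_object:
        return "draw"
    if over_selection and pen_moved:
        return "move"
    return "select"

# The same pen-down event yields different operations depending on context.
print(infer_mode(hit_object=False, pen_moved=True, over_selection=False))  # draw
print(infer_mode(hit_object=True, pen_moved=True, over_selection=True))    # move
```

    A real system would pair such rules with local mediators, lightweight in-place widgets that let the user correct a wrong inference without a global mode switch.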

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Get PDF
    Raskin defines a mode as a distinct setting within an interface where the same user input will produce results different to those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode-switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand gesture-based 3D modelling application may have different modes for object creation, selection, and transformation. Depending on the mode, the movement of the hand is interpreted differently. However, one of the crucial factors determining the effectiveness of an interface is user productivity. The mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects user productivity. Moreover, when touch and mid-air interfaces like VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics and its utility, when designing user interfaces more generally
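    Raskin's definition, the same input producing different results in different settings, maps directly onto a dispatch on the current mode. The handler below is a hypothetical illustration, not from the thesis.

```python
from enum import Enum, auto

class Mode(Enum):
    DRAW = auto()
    PAN = auto()
    SELECT = auto()

def handle_touch(mode: Mode, point: tuple, state: dict) -> dict:
    """The identical touch event is interpreted differently per mode
    (Raskin's definition of a mode). Hypothetical handler for illustration."""
    if mode is Mode.DRAW:
        state["strokes"].append(point)            # touch draws
    elif mode is Mode.PAN:
        ox, oy = state["offset"]                  # touch pans the canvas
        state["offset"] = (ox + point[0], oy + point[1])
    elif mode is Mode.SELECT:
        state["selection"] = point                # touch selects
    return state

s = {"strokes": [], "offset": (0, 0), "selection": None}
handle_touch(Mode.DRAW, (3, 4), s)   # same input...
handle_touch(Mode.PAN, (3, 4), s)    # ...different result
print(s["strokes"], s["offset"])     # [(3, 4)] (3, 4)
```

    Mode-switching cost is then the time spent moving between these branches, which is exactly what the thesis measures across touch and mid-air input.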

    The WOZ Recognizer: A Tool For Understanding User Perceptions of Sketch-Based Interfaces

    Get PDF
    Sketch recognition has the potential to be an important input method for computers in the coming years; however, designing and building an accurate and sophisticated sketch recognition system is a time-consuming and daunting task. Since sketch recognition is still at a level where mistakes are common, it is important to understand how users perceive and tolerate recognition errors and other user interface elements with these imperfect systems. A problem in performing this type of research is that we cannot easily control aspects of recognition in order to rigorously study the systems. We performed a study examining user perceptions of three pen-based systems for creating logic gate diagrams: a sketch-based interface, a WIMP-based interface, and a hybrid interface that combined elements of sketching and WIMP. We found that users preferred the sketch-based interface, and we identified important criteria for pen-based application design. This work exposed the issue of studying recognition systems without fine-grained control over accuracy, recognition mode, and other recognizer properties. In order to solve this problem, we developed a Wizard of Oz sketch recognition tool, the WOZ Recognizer, that supports controlled symbol and position accuracy and batch and streaming recognition modes for a variety of sketching domains. We present the design of the WOZ Recognizer, modeling recognition domains using graphs, symbol alphabets, and grammars, and discuss the types of recognition errors we included in its design. Further, we discuss how the WOZ Recognizer simulates sketch recognition, how it is controlled, and how users interact with it. In addition, we present an evaluative user study of the WOZ Recognizer and the lessons we learned. We have used the WOZ Recognizer to perform two user studies examining user perceptions of sketch recognition; both studies focused on mathematical sketching.
In the first study, we examined whether users prefer recognition feedback now (real-time recognition) or later (batch recognition) in relation to different recognition accuracies and sketch complexities. We found that participants displayed a preference for real-time recognition in some situations (multiple expressions, low accuracy), but no statistical preference in others. In our second study, we examined whether users displayed a greater tolerance for recognition errors when they used mathematical sketching applications they found interesting or useful compared to applications they found less interesting. Participants felt they had a greater tolerance for the applications they preferred, although our statistical analysis did not positively support this. In addition to the research already performed, we propose several avenues for future research into user perceptions of sketch recognition that we believe will be of value to sketch recognizer researchers and application designers

    An evaluation of user experience with a sketch-based 3D modeling system

    Get PDF
    With the availability of pen-enabled digital hardware, sketch-based 3D modeling is becoming an increasingly attractive alternative to traditional methods in many design environments. To date, a variety of methodologies and implemented systems have been proposed that all seek to make sketching the primary interaction method for 3D geometric modeling. While many of these methods are promising, a general lack of end-user evaluations makes it difficult to assess and improve upon these methods. Based on our ongoing work, we present the usage and a user evaluation of a sketch-based 3D modeling tool we have been developing for industrial styling design. The study investigates the usability of our techniques in the hands of non-experts by gauging (1) the speed with which users can comprehend and adapt to the constituent modeling steps, and (2) how effectively users can utilize the newly learned skills to design 3D models. Our observations and users' feedback indicate that overall users could learn the investigated techniques relatively easily and put them to use immediately. However, users pointed out several usability and technical issues, such as difficulty in mode selection and the lack of sophisticated surface modeling tools, as some of the key limitations of the current system. We believe the lessons learned from this study can be used in the development of more powerful and satisfying sketch-based modeling tools in the future.

    Sketch-based digital storyboards and floor plans for authoring computer-generated film pre-visuals

    Get PDF
    Pre-visualisation is an important tool for planning films during the pre-production phase of filmmaking. Existing pre-visualisation authoring tools do not effectively support the user in authoring pre-visualisations without impairing software usability. These tools require the user to either have programming skills, be experienced in modelling and animation, or use drag-and-drop style interfaces. These interaction methods do not intuitively fit with pre-production activities such as floor planning and storyboarding, and existing tools that apply a storyboarding metaphor do not automatically interpret user sketches. The goal of this research was to investigate how sketch-based user interfaces and methods from computer vision could be used for supporting pre-visualisation authoring using a storyboarding approach. The requirements for such a sketch-based storyboarding tool were determined from literature and an interview with Triggerfish Animation Studios. A framework was developed to support sketch-based pre-visualisation authoring using a storyboarding approach. Algorithms for describing user sketches, recognising objects and performing pose estimation were designed to automatically interpret user sketches. A proof of concept prototype implementation of this framework was evaluated in order to assess its usability benefit. It was found that the participants could author pre-visualisations effectively, efficiently and easily. The results of the usability evaluation also showed that the participants were satisfied with the overall design and usability of the prototype tool. The positive and negative findings of the evaluation were interpreted and combined with existing heuristics in order to create a set of guidelines for designing similar sketch-based pre-visualisation authoring tools that apply the storyboarding approach. 
The successful implementation of the proof of concept prototype tool provides practical evidence of the feasibility of sketch-based pre-visualisation authoring. The positive results from the usability evaluation established that sketch-based interfacing techniques can be used effectively with a storyboarding approach for authoring pre-visualisations without impairing software usability
