
    Multi-touch RST in 2D and 3D Spaces: Studying the Impact of Directness on User Performance

    The RST multi-touch technique allows one to simultaneously control rotations, scaling, and translations from multi-touch gestures. We conducted a user study to better understand the impact of directness on user performance in an RST docking task, for both 2D and 3D visualization conditions. The study showed that direct touch shortens completion times, but that indirect interaction improves efficiency and precision, particularly for 3D visualizations. It also showed that users' trajectories are comparable across all conditions (2D/3D and direct/indirect). This suggests that indirect RST control may be valuable for interactive visualization of 3D content. To illustrate this finding, we present a demo application that allows novice users to arrange 3D objects on a 2D virtual plane in an easy and efficient way.
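    For readers unfamiliar with the technique, the sketch below shows the usual way a two-finger RST update is derived from successive touch positions. It is an illustrative reconstruction with assumed function names and a tuple-based point representation, not the paper's implementation.

```python
import math

def rst_from_two_touches(p1_prev, p2_prev, p1_cur, p2_cur):
    """Derive rotation (radians), uniform scale factor, and translation (dx, dy)
    from the previous and current positions of two touch points."""
    # Vector between the two fingers before and after the move.
    vx_prev, vy_prev = p2_prev[0] - p1_prev[0], p2_prev[1] - p1_prev[1]
    vx_cur,  vy_cur  = p2_cur[0]  - p1_cur[0],  p2_cur[1]  - p1_cur[1]

    # Rotation: change in the angle of the inter-finger vector, wrapped to (-pi, pi].
    raw = math.atan2(vy_cur, vx_cur) - math.atan2(vy_prev, vx_prev)
    rotation = math.atan2(math.sin(raw), math.cos(raw))

    # Scale: ratio of inter-finger distances (guard against coincident touches).
    dist_prev = math.hypot(vx_prev, vy_prev)
    scale = math.hypot(vx_cur, vy_cur) / dist_prev if dist_prev > 0 else 1.0

    # Translation: displacement of the midpoint between the two fingers.
    mid_prev = ((p1_prev[0] + p2_prev[0]) / 2, (p1_prev[1] + p2_prev[1]) / 2)
    mid_cur  = ((p1_cur[0]  + p2_cur[0])  / 2, (p1_cur[1]  + p2_cur[1])  / 2)
    translation = (mid_cur[0] - mid_prev[0], mid_cur[1] - mid_prev[1])

    return rotation, scale, translation
```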

    Relative and Absolute Mappings for Rotating Remote 3D Objects on Multi-Touch Tabletops

    The use of human fingers as an object selection and manipulation tool has raised significant challenges when interacting with direct-touch tabletop displays. This is particularly an issue when manipulating remote objects in 3D environments, as finger presses can obscure objects at a distance that are rendered very small. Techniques to support remote manipulation either provide absolute mappings between finger presses and object transformation or rely on tools that support relative mappings to selected objects. This paper explores techniques for manipulating remote 3D objects on direct-touch tabletops using absolute and relative mapping modes. A user study was conducted to compare absolute and relative mappings in support of a rotation task. Overall results did not show a statistically significant difference between the two mapping modes in either task completion time or number of touches. However, the absolute mapping mode was found to be less efficient than the relative mapping mode when rotating a small object, and participants preferred relative mapping for small objects. Four mapping techniques were then compared for perceived ease of use and learnability. The touchpad, voodoo doll, and telescope techniques were found to be comparable for manipulating remote objects in a 3D scene, whereas a flying camera technique was considered too complex and required increased effort from participants. Participants preferred an absolute mapping technique augmented to support small-object manipulation, e.g. the voodoo doll technique.
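    To make the distinction between the two mapping modes concrete, the sketch below contrasts them for a single-axis rotation. It is a simplified, generic illustration with assumed function names and 2D touch coordinates, not the study's implementation, which rotates remote 3D objects.

```python
import math

def absolute_rotation(touch, center):
    """Absolute mapping: the object's orientation is set directly to the angle
    of the touch point around the control's centre, regardless of history."""
    return math.atan2(touch[1] - center[1], touch[0] - center[0])

def relative_rotation(current_angle, touch_prev, touch_cur, center):
    """Relative mapping: only the change in touch angle is applied, so the
    object keeps its orientation wherever a new drag happens to start."""
    a_prev = math.atan2(touch_prev[1] - center[1], touch_prev[0] - center[0])
    a_cur = math.atan2(touch_cur[1] - center[1], touch_cur[0] - center[0])
    delta = math.atan2(math.sin(a_cur - a_prev), math.cos(a_cur - a_prev))  # wrap to (-pi, pi]
    return current_angle + delta
```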

    Applications of Multi-Touch Tabletop Displays and Their Challenging Issues: An Overview


    TangiWheel: A widget for manipulating collections on tabletop displays supporting hybrid input modality

    In this paper we present TangiWheel, a collection manipulation widget for tabletop displays. Our implementation is flexible, allowing either multi-touch or tangible interaction, or even a hybrid scheme, to better suit user choice and convenience. Different TangiWheel aspects and features are compared with other existing widgets for collection manipulation. The study reveals that TangiWheel is the first proposal to support a hybrid input modality with a large degree of resemblance between the touch and tangible interaction styles. Several experiments were conducted to evaluate the techniques used in each input scheme, for a better understanding of tangible surface interfaces in complex tasks performed by a single user (e.g., involving a typical master-slave exploration pattern). The results show that tangibles perform significantly better than fingers, despite dealing with a greater number of interactions, in situations that require a large number of acquisitions and basic manipulation tasks such as establishing location and orientation. However, when users have to perform multiple exploration and selection operations that do not require prior basic manipulation, for instance when collections are fixed in the interface layout, touch input is significantly better in terms of required time and number of actions. Finally, when a more elastic collection layout or more complex insertion or displacement operations are needed, the hybrid and tangible approaches clearly outperform finger-based interaction.

    Single-Touch to Multi-Touch System Conversion

    Context: In recent years the education community has seen an acceleration in the adoption of multi-touch surfaces for educational purposes due to a number of features that these surfaces present, including the facilitation of multi-user interaction and collaboration. However, an interesting problem exists with legacy, single-touch educational systems that lend themselves well to the features of multi-touch but were developed with a single-user interface in mind. Objectives: This thesis investigates how to convert an existing single-user, single-touch system into a multi-user, multi-touch system while maintaining the existing educational aims and methods. The end result is a converted application called JLens and a list of goals for converting an educational system. Methods: This study analyses the interaction points and potential conversion factors of an existing educational application and defines a set of four goals for converting a single-touch educational system into a multi-touch one. The final product is a converted educational system that is evaluated by representatives from the local education authorities, the educational software developers TimeMaps, multi-touch hardware developers, and fellow researchers. A combination of questionnaires and observations is used, and the evaluators are asked to explore the converted system freely and provide feedback. Results: The majority of the evaluators responded positively to the converted system. The observations show that users understood how to operate the system very quickly and began collaborating by sharing data without any prompting. The quantitative analysis provides evidence that the conversion was successful and that all of the research goals were met. Conclusion: This thesis demonstrates that JLens provides a viable framework for converting existing single-user, single-touch systems into multi-user, multi-touch systems by allowing many users to navigate and explore educational applications in a collaborative way.

    TangiWheel: design and implementation of a collection-exploration control for interactive surfaces

    This work presents the design and implementation of TangiWheel, a control for exploring collections in the form of a circular pie menu. Its design targets interactive surfaces that allow interactions to be performed with both fingers and tangible objects. In addition, TangiWheel incorporates a new interaction style for this kind of control that combines the use of fingers and tangibles. To demonstrate the suitability of the design and the usability of the interactions this menu affords, a series of user experiments was carried out, which concluded that the hybrid interaction modality is, as a general rule, more advantageous than the touch or tangible modalities on their own. García Sanjuan, F. (2012). TangiWheel: disseny i implementació d'un control d'exploració de col·leccions sobre superfícies interactives. http://hdl.handle.net/10251/16514

    Adapting Multi-touch Systems to Capitalise on Different Display Shapes

    The use of multi-touch interaction has become more widespread. With this increase in use, the change in input technique has prompted developers to reconsider other elements of typical computer design, such as the shape of the display. There is an emerging need for software to be capable of functioning correctly with different display shapes. This research asked: ‘What must be considered when designing multi-touch software for use on different shaped displays?’ The results of two structured literature surveys highlighted the lack of support for multi-touch software to utilise more than one display shape. From a prototype system, observations on the issues of using different display shapes were made. An evaluation framework to judge potential solutions to these issues in multi-touch software was produced and employed. Solutions highlighted as suitable were implemented into existing multi-touch software. A structured evaluation was then used to determine the success of the design and implementation of the solutions. The hypothesis of the evaluation stated that the implemented solutions would allow the applications to be used with a range of different display shapes without leaving visual content items unfit for purpose. The majority of the results conformed to this hypothesis, despite minor deviations from the solution designs being discovered in the implementation. This work highlights how developers, when producing multi-touch software intended for more than one display shape, must consider the issue of visual content items being occluded. Developers must produce, or identify, solutions that resolve this issue and conform to the criteria outlined in this research. This research shows that it is possible for multi-touch software to be made display-shape independent.

    The cockpit for the 21st century

    Interactive surfaces are a growing trend in many domains. As one possible manifestation of Mark Weiser’s vision of ubiquitous and disappearing computers in everyday objects, we see touch-sensitive screens in many kinds of devices, such as smartphones, tablet computers, and interactive tabletops. More advanced concepts of these have been an active research topic for many years. This has also influenced automotive cockpit development: concept cars and recent market releases show integrated touchscreens, growing in size. To meet increasing information and interaction needs, interactive surfaces offer context-dependent functionality in combination with a direct input paradigm. However, interfaces in the car need to be operable while driving. Distraction, especially visual distraction from the driving task, can lead to critical situations if the sum of attentional demand emerging from both primary and secondary tasks overextends the available resources. So far, a touchscreen requires a lot of visual attention, since its flat surface does not provide any haptic feedback. There have been approaches to make direct touch interaction accessible while driving for simple tasks. Outside the automotive domain, for example in office environments, concepts for sophisticated handling of large displays have already been introduced. Moreover, technological advances lead to new characteristics for interactive surfaces by enabling arbitrary surface shapes. In cars, two main characteristics for upcoming interactive surfaces are largeness and shape. On the one hand, spatial extension is increasing not only through larger displays but also by taking objects in the surroundings into account for interaction. On the other hand, the flatness inherent in current screens can be overcome by upcoming technologies, so interactive surfaces can provide haptically distinguishable surfaces. This thesis describes the systematic exploration of large and shaped interactive surfaces and analyzes their potential for interaction while driving. To this end, different prototypes for each characteristic were developed and evaluated in test settings suitable for their maturity level. These prototypes were used to obtain subjective user feedback and objective data, and to investigate effects on driving and glance behavior as well as usability and user experience. As a contribution, this thesis provides an analysis of the development of interactive surfaces in the car. Two characteristics, largeness and shape, are identified that can improve interaction compared to conventional touchscreens. The presented studies show that large interactive surfaces can provide new and improved ways of interaction in both driver-only and driver-passenger situations. Furthermore, the studies indicate a positive effect on visual distraction when additional static haptic feedback is provided by shaped interactive surfaces. Overall, various non-exclusively applicable interaction concepts demonstrate the potential of interactive surfaces for use in automotive cockpits, which is expected to be beneficial also in further environments where visual attention needs to be focused on additional tasks.

    3D Modeling with Interactive Surfaces (3D-Modellierung mit interaktiven Oberflächen)

    3D models are at the core of many important applications in industry, science, and entertainment. The creation of 3D models is a complex and time-consuming process. Current modeling tools are hard to learn and require a deep understanding of the underlying mathematical models. Furthermore, established input devices like the mouse and keyboard do not utilize the full interaction potential of the human hand, especially regarding bimanual control. The growing interest in and commercial breakthrough of multi-touch displays and interactive surfaces raise questions about their potential in the context of 3D modeling, which are thoroughly discussed and evaluated in this work. The presented approach is closely aligned with the whole processing chain for multi-touch applications, starting with hardware and tracking issues, continuing with fundamental design discussions and operations such as selection and 3D manipulation of objects, and finishing with complex modeling techniques and metaphors. In regard to hardware and tracking, a robust illumination setup for the diffuse illumination technique is presented, along with two extensions of this approach, i.e., hover detection and hand distinction. The design space is organized into specific design dimensions characterized by extremal positions, to allow a better overview of design choices and a classification of existing and future systems. Fundamental techniques for selection and integrated 3D manipulation with six degrees of freedom are presented and empirically evaluated. Finally, two established modeling techniques, implicit surfaces and virtual sculpting, are extended and evaluated for multi-touch input.
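    As background on one of the two modeling techniques mentioned above, the sketch below evaluates a simple metaball-style implicit surface, where the surface is an iso-contour of a summed scalar field. This is generic textbook material with assumed function names and parameters, not the specific formulation extended in this thesis.

```python
def metaball_field(point, balls):
    """Sum of inverse-square contributions from (centre, strength) metaballs."""
    x, y, z = point
    total = 0.0
    for (cx, cy, cz), strength in balls:
        d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
        total += strength / d2 if d2 > 1e-9 else float("inf")
    return total

def inside_surface(point, balls, iso=1.0):
    """The implicit surface is the set of points where the field equals `iso`;
    points with a larger field value lie inside the modelled shape."""
    return metaball_field(point, balls) >= iso
```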