
    Grasp-sensitive surfaces

    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting the precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter.

This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and on challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction.

For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than is described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing.
It comprises *capturing* sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementing grasp-sensitive surfaces is still hard, that researchers are often unaware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention. To address the first issue, I developed three novel sensing technologies designed for grasp-sensitive surfaces. Each mitigates one or more limitations of traditional sensing techniques:

**HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors.

**FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that determines the locations of groups of sensors whose arrangement is not known in advance.

**TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for locating cable faults, to touch and grasp sensing. TDRtouch locates touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces.

I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges in making sense of raw grasp information and categorize possible interactions.
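The capture-identify-interpret workflow can be sketched as a minimal pipeline. The four-sensor readings (loosely modeled on HandSense), the grasp templates, and the grasp-to-action mapping below are invented illustrations, not the dissertation's implementation:

```python
from math import dist  # Euclidean distance, Python 3.8+

# Hypothetical templates: mean readings of four capacitive sensors
# recorded for known grasp types.
TEMPLATES = {
    "camera_grip": (0.9, 0.1, 0.8, 0.2),
    "phone_call":  (0.7, 0.7, 0.6, 0.6),
}

# Hypothetical mapping from an identified grasp to a user-facing action.
ACTIONS = {"camera_grip": "launch camera", "phone_call": "open dialer"}

def capture() -> tuple:
    """Stage 1: read raw sensor values (stubbed with fixed data here)."""
    return (0.85, 0.15, 0.75, 0.25)

def identify(reading: tuple) -> str:
    """Stage 2: nearest-template classification of the grasp."""
    return min(TEMPLATES, key=lambda g: dist(reading, TEMPLATES[g]))

def interpret(grasp: str) -> str:
    """Stage 3: map the recognized grasp to its meaning/action."""
    return ACTIONS[grasp]

print(interpret(identify(capture())))  # -> launch camera
```

A real system would replace the nearest-template step with a trained classifier, but the three-stage structure stays the same.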
Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object.

I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
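The five GRASP factors can be captured in a simple record type for discussion or annotation purposes. The field values below are invented examples; the dissertation does not prescribe any particular encoding:

```python
from dataclasses import dataclass

@dataclass
class GraspContext:
    """The five GRASP factors that shape how an object is grasped."""
    goal: str          # what the user wants to do with the object
    relationship: str  # what the user knows and feels about the object
    anatomy: str       # hand shape and learned movement patterns
    setting: str       # surrounding and environmental conditions
    properties: str    # texture, shape, weight, other object intrinsics

# Invented example: grasping a phone to take a photo.
ctx = GraspContext(
    goal="take a photo",
    relationship="own, familiar device",
    anatomy="adult right hand, two-handed landscape grip",
    setting="standing outdoors, both hands free",
    properties="flat, rigid, lightweight slab",
)
print(ctx.goal)  # -> take a photo
```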

    Interaction design for live performance

    PhD thesis. Multimedia item accompanying this thesis to be consulted at Robinson Library.

The role of interactive technology in live performance has increased substantially in recent years. Practices and experiences of existing forms of live performance have been transformed, and new genres of technology-mediated live performance have emerged in response to novel technological opportunities. Consequently, designing for live performance is set to become an increasingly important concern for interaction design researchers and practitioners. However, designing interactive technology for live performance is a challenging activity, as the experiences of both performers and their audiences are shaped and influenced by a number of delicate and interconnected issues, which relate to different forms and individual practices of live performance in varied and often conflicting ways. The research presented in this thesis explores how interaction designers might be better supported in engaging with this intricate and multifaceted design space. This is achieved using a practice-led methodology, which involves the researcher's participation in both the investigation of, and design response to, issues of live performance as they are embodied in the lived and felt experiences of individual live performers' practices during three interaction design case studies. This research contributes to the field of interaction design for live performance in three core areas: understandings of the relationships between key issues of live performance and individual performers' lived and felt experiences are developed, approaches to support interaction designers in engaging individual live performers' lived and felt experiences in design are proposed, and innovative interfaces and interaction techniques for live performance are designed.
It is anticipated that these research outcomes will prove directly applicable or inspiring to the practices of interaction designers wishing to address live performance, and will contribute to the ongoing academic discourse around the experience of, and design for, live performance.

Funding: Engineering and Physical Sciences Research Council.

    HumanTop: a multi-object tracking tabletop

    In this paper, a computer-vision-based interactive multi-touch tabletop system called HumanTop is introduced. HumanTop implements a stereo camera vision subsystem which allows not only an accurate fingertip tracking algorithm but also a precise touch-over-the-working-surface detection method. Based on a pair of visible-spectrum cameras, a novel synchronization circuit makes the camera capture and the image projection independent of each other, providing the minimum basis for the development of computer vision analysis based on visible-spectrum cameras without any interference coming from the projector. The assembly of both cameras and the synchronization circuit is not only capable of performing as an ad-hoc version of a depth camera, but it also introduces the recognition and tracking of textured planar objects, even when contents are projected over them. In addition, HumanTop supports the tracking of sheets of paper and ID-code markers. This set of features makes HumanTop a comprehensive, intuitive, and versatile augmented tabletop that provides multi-touch interaction with projective augmented reality on any flat surface. As an example exploiting all the capabilities of HumanTop, an educational application has been developed using an augmented book as a launcher to different didactic contents. A pilot study in which 28 fifth graders participated is presented. Results about efficiency, usability/satisfaction, and motivation are provided. These results suggest that HumanTop is an interesting platform for the development of educational contents. © 2012 Springer Science+Business Media, LLC.

This study was funded by Ministerio de Educacion y Ciencia, Spain: Project SALTET (TIN2010-21296-C02-01), Project Game Teen (TIN2010-20187), project Consolider-C (SEJ2006-14301/PSIC), "CIBER of Physiopathology of Obesity and Nutrition, an initiative of ISCIII", and the Excellence Research Program PROMETEO (Generalitat Valenciana, Conselleria de Educacio, 2008-157).

Soto Candela, E.; Ortega PĂ©rez, M.; MarĂ­n Romero, C.; PĂ©rez LĂłpez, DC.; Salvador Herranz, GM.; Contero, M.; Alcañiz Raya, ML. (2014). HumanTop: a multi-object tracking tabletop. Multimedia Tools and Applications 70(3):1837-1868. https://doi.org/10.1007/s11042-012-1193-y
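The touch-over-the-surface idea behind such stereo setups can be sketched with basic triangulation: a fingertip counts as touching when its triangulated depth matches the tabletop's known depth. The formula Z = f * b / d (focal length f in pixels, baseline b, disparity d) is the standard rectified-stereo relation; the rig numbers and tolerance below are hypothetical, not the paper's calibration:

```python
def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a point seen by a rectified stereo pair: Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def is_touch(fingertip_disparity_px: float, focal_px: float,
             baseline_m: float, surface_depth_m: float,
             tol_m: float = 0.01) -> bool:
    """Report a touch when the fingertip lies within tol_m of the surface."""
    z = stereo_depth(fingertip_disparity_px, focal_px, baseline_m)
    return abs(z - surface_depth_m) <= tol_m

# Hypothetical rig: f = 700 px, 10 cm baseline, tabletop 1.0 m from cameras.
print(is_touch(70.0, 700.0, 0.10, 1.00))  # -> True  (depth = 1.00 m)
print(is_touch(60.0, 700.0, 0.10, 1.00))  # -> False (~1.17 m, hovering)
```

A production system would add calibration, rectification, and fingertip localization before this depth test, but the touch decision itself reduces to this comparison.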