991 research outputs found

    Know Thy Toucher

    Get PDF
    Most current academic and commercial surface computing systems are capable of multitouch detection and hence allow simultaneous input from multiple users. Although so far only a few applications in this area rely on identifying the user, we believe that associating touches with users will become an essential feature of surface computing as applications mature, new application areas emerge, and the enabling technology is readily available. Because the capacitive technology used in present user-identification-enabled tabletops is limited with respect to the supported number of users and the screen size, we outline a user-identification-enabled tabletop concept based on computer vision and biometric hand-shape information, and introduce the prototype system we built to further investigate this concept. In a preliminary consideration, we derive concepts for identifying users by examining what new possibilities are enabled and by introducing different scopes of identification.
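
    To make the vision-based identification idea concrete, the following is a minimal sketch of how touches might be associated with users via hand-shape biometrics. It assumes an overhead tracker already provides the palm centre and the five fingertip positions of the touching hand; the normalised finger-length descriptor and the enroll/identify routines are illustrative assumptions, not the authors' prototype.

    # Illustrative sketch (not the authors' system): attribute a touch to an enrolled
    # user from hand-shape geometry, given the palm centre and the five fingertip
    # positions of the touching hand as reported by an overhead camera or tracker.
    import numpy as np

    def hand_feature(palm, fingertips):
        """Scale-invariant descriptor: palm-to-fingertip lengths normalised by hand size."""
        palm = np.asarray(palm, dtype=float)
        tips = np.asarray(fingertips, dtype=float)      # shape (5, 2): thumb .. pinky
        lengths = np.linalg.norm(tips - palm, axis=1)
        return lengths / lengths.max()                  # cancels camera scale/height

    class HandIdentifier:
        def __init__(self):
            self.templates = {}                          # user name -> enrolled descriptor

        def enroll(self, name, palm, fingertips):
            self.templates[name] = hand_feature(palm, fingertips)

        def identify(self, palm, fingertips, threshold=0.08):
            """Return the closest enrolled user, or None if no template is close enough."""
            probe = hand_feature(palm, fingertips)
            best, best_dist = None, float("inf")
            for name, template in self.templates.items():
                dist = np.linalg.norm(probe - template)
                if dist < best_dist:
                    best, best_dist = name, dist
            return best if best_dist < threshold else None

    # Usage: enroll two users, then attribute a new touch to one of them.
    ident = HandIdentifier()
    ident.enroll("alice", (0, 0), [(9, 2), (12, 0), (13, -1), (12, -3), (9, -5)])
    ident.enroll("bob",   (0, 0), [(8, 3), (11, 1), (12, 0), (11, -2), (8, -4)])
    print(ident.identify((0, 0), [(9.1, 2), (12, 0.1), (13, -1), (12, -3), (9, -5)]))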

    Evaluation of Physical Finger Input Properties for Precise Target Selection

    Get PDF
    The multitouch tabletop display provides a collaborative workspace for multiple users around a table. Users can perform direct and natural multitouch interaction, selecting target elements with their bare fingers. However, the physical size of the fingertip varies from one person to another, which introduces the fat-finger problem: small target elements are selected imprecisely during direct multitouch input. In this respect, an attempt is made to evaluate the physical finger input properties, i.e. contact area and shape, in the context of imprecise selection.
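
    As a hedged illustration of the two properties evaluated above, the sketch below derives the contact area and an elliptical shape approximation of a fingertip blob from a binarised touch image using second-order image moments; the sensor resolution and blob format are assumptions, not details of the study.

    # Illustrative sketch: estimate contact area, centroid, and shape (equivalent
    # ellipse axes) of a fingertip blob on a capacitive grid via image moments.
    import numpy as np

    def contact_properties(blob, cell_mm=1.0):
        """blob: 2D boolean array, True where the sensor reports contact."""
        ys, xs = np.nonzero(blob)
        area_mm2 = xs.size * cell_mm ** 2                 # contact area
        cx, cy = xs.mean(), ys.mean()                     # centroid = naive selection point
        cov = np.cov(np.vstack([xs - cx, ys - cy]))       # second-order central moments
        evals = np.linalg.eigvalsh(cov)                   # ascending eigenvalues
        minor, major = 4 * np.sqrt(evals) * cell_mm       # full axes of the equivalent ellipse
        return area_mm2, (cx, cy), (major, minor)

    # Usage: a synthetic oblique fingertip contact on a 1 mm sensor grid.
    yy, xx = np.mgrid[0:20, 0:20]
    blob = ((xx - 10) ** 2 / 36 + (yy - 10) ** 2 / 16) <= 1.0
    area, centre, axes = contact_properties(blob)
    print(f"area = {area:.0f} mm^2, centre = {centre}, axes = {axes[0]:.1f} x {axes[1]:.1f} mm")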

    Supporting Collaborative Multi-User Interactions in a Video Surveillance Application Using Microsoft Tabletop Surface

    Get PDF
    This report examines the ongoing exploration of the chosen subject, a multi-touch interface in a video surveillance system. It discusses an early prototype method for interacting with security surveillance footage using natural user interfaces instead of traditional mouse and keyboard interaction. The current project is a proof of concept showing that multi-touch interfaces are helpful in a video surveillance system, specifically for controlling surveillance videos, both live and recorded. In case of an incident, the proposed system of interaction may require the user to spend extra amounts of time obtaining situational and location awareness, which is counter-productive. The framework proposed in this paper shows how a multi-touch screen and natural interaction can enable the users of a surveillance monitoring station to rapidly recognise the area covered by a security camera and respond efficiently to an incident. One of the main objectives of this project is to allow more than one user to move, scale, rotate, highlight, and record a surveillance video at the same time, particularly during emergencies. Furthermore, the scope of this study is to improve collaborative user interactions on the Microsoft tabletop surface. A methodology was developed based on a combination of the available literature and the experiences of the authors, who are actively involved in the development of multi-user interactions. It covers several parts, such as surveys, data gathering from the respective subject-matter experts, focal points, and information analysis. The aim is a surveillance application with user-friendly collaborative touch on the surface and an eye-catching interface that reflects the fast-paced nature of today's communications and better advertises new activities and available resources. Future improvements and plans are discussed in the recommendations section. So far, this research has run for twelve (12) weeks and will continue for another sixteen (16) weeks in order to attain the project's primary objectives.
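
    A hedged sketch of the kind of manipulation described above (not the project's actual code): deriving the pan, zoom, and rotation applied to a video tile from the old and new positions of two touch points, whether they come from one user's two fingers or from two collaborating users.

    # Illustrative sketch: translation, uniform scale, and rotation of a video tile
    # computed from two touch points before and after a drag on the tabletop.
    import math

    def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
        """Return (dx, dy, scale, angle_rad) mapping the old finger pair to the new one."""
        old_cx = (p1_old[0] + p2_old[0]) / 2
        old_cy = (p1_old[1] + p2_old[1]) / 2
        new_cx = (p1_new[0] + p2_new[0]) / 2
        new_cy = (p1_new[1] + p2_new[1]) / 2
        old_vec = (p2_old[0] - p1_old[0], p2_old[1] - p1_old[1])
        new_vec = (p2_new[0] - p1_new[0], p2_new[1] - p1_new[1])
        scale = math.hypot(*new_vec) / math.hypot(*old_vec)
        angle = math.atan2(new_vec[1], new_vec[0]) - math.atan2(old_vec[1], old_vec[0])
        return new_cx - old_cx, new_cy - old_cy, scale, angle

    # Usage: fingers move apart and twist slightly -> the tile pans, zooms, and rotates.
    dx, dy, s, a = two_finger_transform((100, 100), (200, 100), (90, 110), (230, 130))
    print(f"pan = ({dx:.0f}, {dy:.0f}), zoom = {s:.2f}x, rotate = {math.degrees(a):.1f} deg")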

    Real-time person re-identification for interactive environments

    Get PDF
    The work presented in this thesis was motivated by a vision of the future in which intelligent environments in public spaces such as galleries and museums, deliver useful and personalised services to people via natural interaction, that is, without the need for people to provide explicit instructions via tangible interfaces. Delivering the right services to the right people requires a means of biometrically identifying individuals and then re-identifying them as they move freely through the environment. Delivering the service they desire requires sensing their context, for example, sensing their location or proximity to resources. This thesis presents both a context-aware system and a person re-identification method. A tabletop display was designed and prototyped with an infrared person-sensing context function. In experimental evaluation it exhibited tracking performance comparable to other more complex systems. A real-time, viewpoint invariant, person re-identification method is proposed based on a novel set of Viewpoint Invariant Multi-modal (ViMM) feature descriptors collected from depth-sensing cameras. The method uses colour and a combination of anthropometric properties logged as a function of body orientation. A neural network classifier is used to perform re-identification
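
    The following is a minimal sketch of the general idea of logging anthropometric measurements as a function of body orientation and re-identifying people with a neural network classifier; the two features used here (height and shoulder width), the orientation binning, and the scikit-learn MLP are illustrative assumptions rather than the thesis' ViMM descriptor.

    # Illustrative sketch: build an orientation-binned anthropometric descriptor from
    # depth-camera observations and re-identify a person with a small neural network.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    N_BINS = 8  # 45-degree body-orientation bins

    def orientation_descriptor(samples):
        """samples: iterable of (orientation_deg, height_m, shoulder_width_m).
        Returns the per-bin mean of each measurement as a flat feature vector."""
        sums, counts = np.zeros((N_BINS, 2)), np.zeros(N_BINS)
        for orientation, height, shoulder in samples:
            b = int(orientation % 360 // (360 / N_BINS))
            sums[b] += (height, shoulder)
            counts[b] += 1
        counts[counts == 0] = 1                           # unseen bins stay at zero
        return (sums / counts[:, None]).ravel()

    # Usage: simulate two people observed from random orientations, then re-identify one.
    rng = np.random.default_rng(0)
    people = {0: (1.82, 0.46), 1: (1.65, 0.39)}           # true (height, shoulder) per person

    def observe(person_id, n=40):
        height, shoulder = people[person_id]
        return [(rng.uniform(0, 360),
                 height + rng.normal(0, 0.01),
                 shoulder + rng.normal(0, 0.01)) for _ in range(n)]

    X = np.array([orientation_descriptor(observe(pid)) for pid in (0, 1) for _ in range(30)])
    y = np.array([pid for pid in (0, 1) for _ in range(30)])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
    print(clf.predict([orientation_descriptor(observe(1))]))   # expected output: [1]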

    Accessibility and tangible interaction in distributed workspaces based on multi-touch surfaces

    Full text link
    Traditional interaction mechanisms in distributed digital spaces often fail to consider the intrinsic properties of action, perception, and communication among workgroups, which may affect access to the common resources used to mutually organize information. By developing suitable spatial geometries and natural interaction mechanisms, distributed spaces can become blended spaces, where the physical and virtual boundaries of local and remote spaces merge to provide the illusion of a single unified space. In this paper, we discuss the importance of blended interaction in distributed spaces and the particular challenges faced when designing accessible technology. We illustrate this discussion through a new tangible interaction mechanism for collaborative spaces based on tabletop system technology implemented with optical frames. Our tangible elements facilitate the exchange of digital information in distributed collaborative settings by providing a physical manifestation of common digital operations. The tangibles are designed as passive elements that do not require the use of any additional hardware or external power while maintaining a high degree of accuracy. This work was supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund through the ANNOTA Project (Ref. TIN2013-46036-C3-1-R).
    Salvador-Herranz, G., Camba, J., Contero, M., Naya Sanchis, F. (2018). Accessibility and tangible interaction in distributed workspaces based on multi-touch surfaces. Universal Access in the Information Society, 17(2), 247-256. https://doi.org/10.1007/s10209-017-0563-7
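
    One plausible way to recognise such passive tangibles, sketched below under stated assumptions: each tangible rests on three feet that register as simultaneous contacts on the optical frame, and the sorted side lengths of that contact triangle are matched against registered marker geometries. The marker table and tolerance are illustrative, not the paper's implementation.

    # Illustrative sketch: identify a passive tangible from the footprint of its three
    # feet on the optical frame, by matching the sorted side lengths of the contact
    # triangle (in millimetres) against registered marker geometries.
    import math
    from itertools import combinations

    MARKERS = {                                  # assumed foot-triangle side lengths
        "copy_tool":   (40.0, 50.0, 60.0),
        "delete_tool": (35.0, 35.0, 55.0),
    }

    def footprint(points):
        """Sorted pairwise distances between the three contact points."""
        return sorted(math.dist(a, b) for a, b in combinations(points, 2))

    def recognise(points, tol_mm=3.0):
        probe = footprint(points)
        for name, sides in MARKERS.items():
            if all(abs(p - s) <= tol_mm for p, s in zip(probe, sorted(sides))):
                return name
        return None

    # Usage: three simultaneous contacts reported by the frame, in mm coordinates
    # (roughly a 40/50/60 mm triangle, so it should match "copy_tool").
    print(recognise([(100, 100), (140, 100), (106, 150)]))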

    Interacting "Through the Display"

    Get PDF
    The increasing availability of displays at lower costs has led to their proliferation in our everyday lives. Additionally, mobile devices are ready to hand and have been proposed as interaction devices for external screens. However, only their input mechanism has been taken into account, without considering three additional factors in environments hosting several displays: first, a connection needs to be established to the desired target display (modality). Second, screens in the environment may be re-arranged (flexibility). And third, displays may be out of the user's reach (distance). In our research we aim to overcome the problems resulting from these characteristics. The overall goal is a new interaction model that allows for (1) a non-modal connection mechanism for impromptu use of various displays in the environment, (2) interaction on and across displays in highly flexible environments, and (3) interaction at variable distances. In this work we propose a new interaction model called through-the-display interaction, which enables users to interact with remote content on their personal device in an absolute and direct fashion. To gain a better understanding of the effects of the additional characteristics, we implemented two prototypes, each of which investigates a different distance to the target display: LucidDisplay allows users to place their mobile device directly on top of a larger external screen. MobileVue, on the other hand, enables users to interact with an external screen at a distance. In each of these prototypes we analyzed their effects on the remaining two criteria, namely the modality of the connection mechanism and the flexibility of the environment. With the findings gained in this initial phase we designed Shoot & Copy, a system that detects screens purely based on their visual content. Users aim their personal device's camera at the target display, which then appears in the live video shown in the viewfinder. To select an item, users take a picture, which is analyzed to determine the targeted region. We further extended this approach to multiple displays by using a centralized component serving as a gateway to the display environment. In Tap & Drop we refined this prototype to support real-time feedback. Instead of taking pictures, users can now aim their mobile device at the display and start interacting immediately. In doing so, we broke up the rigid sequential interaction of content selection and content manipulation. Both prototypes allow for (1) connections in a non-modal way (i.e., aim at the display and start interacting with it) from the user's point of view and (2) fully flexible environments (i.e., the mobile device tracks itself with respect to the displays in the environment). However, the wide-angle lenses, and thus large fields of view, of current mobile devices still do not allow for variable distances. In Touch Projector, we overcome this limitation by introducing zooming in combination with temporarily freezing the video image. Based on our extensions to the taxonomy of mobile device interaction on external displays, we created a refined model of interacting through the display for mobile use. It enables users to interact impromptu without explicitly establishing a connection to the target display (non-modal). As the mobile device tracks itself with respect to the displays in the environment, the model further allows for full flexibility of the environment (i.e., displays can be re-arranged without affecting the interaction).
And above all, users can interact with external displays at variable distances, regardless of their actual size, without any loss of accuracy.
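
    The core mapping that this kind of through-the-display interaction relies on can be sketched as follows: a touch in the phone's viewfinder is projected onto the remote display's coordinate system via a homography estimated from the display's detected corners. The use of OpenCV and the specific corner values are assumptions for illustration, not the prototypes' implementation.

    # Illustrative sketch: map a touch on the mobile viewfinder to coordinates on the
    # remote display, given the display's four corners as detected in the camera image.
    import numpy as np
    import cv2

    def viewfinder_to_display(touch_xy, corners_in_camera, display_size):
        """touch_xy: (x, y) touch position in viewfinder pixels.
        corners_in_camera: display corners in the camera image, ordered TL, TR, BR, BL.
        display_size: (width, height) of the remote display in its own pixels."""
        w, h = display_size
        src = np.array(corners_in_camera, dtype=np.float32)
        dst = np.array([(0, 0), (w, 0), (w, h), (0, h)], dtype=np.float32)
        H, _ = cv2.findHomography(src, dst)               # camera plane -> display plane
        point = np.array([[touch_xy]], dtype=np.float32)  # shape (1, 1, 2) as OpenCV expects
        return cv2.perspectiveTransform(point, H)[0, 0]   # (x, y) in display space

    # Usage: the 1920x1080 target display appears skewed in the viewfinder.
    corners = [(310, 180), (1010, 220), (990, 640), (330, 600)]
    print(viewfinder_to_display((660, 410), corners, (1920, 1080)))  # near the display centre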

    Fingers of a Hand Oscillate Together: Phase Synchronisation of Tremor in Hover Touch Sensing

    Get PDF
    When using non-contact finger tracking, fingers can be classified as to which hand they belong to by analysing the phase relation of physiological tremor. In this paper, we show how 3D capacitive sensors can pick up muscle tremor in fingers above a device. We develop a signal processing pipeline based on nonlinear phase synchronisation that can reliably group fingers to hands and experimentally validate our technique. This allows significant new gestural capabilities for 3D finger sensing without additional hardware
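
    A hedged sketch of the general approach (not the paper's actual pipeline): band-pass each finger's hover signal around the physiological tremor band, extract instantaneous phase with the Hilbert transform, and group fingers whose pairwise phase-locking value is high. The tremor band, sampling rate, and threshold are assumptions.

    # Illustrative sketch: group hovering fingers into hands by the phase locking of
    # their physiological tremor (~8-12 Hz), using Hilbert-transform instantaneous phase.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    FS = 100.0                                            # assumed sampling rate (Hz)

    def tremor_phase(signal):
        b, a = butter(3, [8 / (FS / 2), 12 / (FS / 2)], btype="band")
        return np.angle(hilbert(filtfilt(b, a, signal)))

    def phase_locking_value(sig_a, sig_b):
        dphi = tremor_phase(sig_a) - tremor_phase(sig_b)
        return abs(np.mean(np.exp(1j * dphi)))            # 1.0 = locked, ~0 = unrelated

    def group_fingers(signals, threshold=0.7):
        """Greedily cluster finger signals whose pairwise PLV exceeds the threshold."""
        groups = []
        for i, sig in enumerate(signals):
            for group in groups:
                if all(phase_locking_value(sig, signals[j]) > threshold for j in group):
                    group.append(i)
                    break
            else:
                groups.append([i])
        return groups

    # Usage: fingers 0 and 1 share one hand's tremor; finger 2 belongs to the other hand.
    t = np.arange(0, 5, 1 / FS)
    rng = np.random.default_rng(1)
    hand_a = np.sin(2 * np.pi * 10.0 * t)                 # common 10 Hz tremor component
    hand_b = np.sin(2 * np.pi * 10.7 * t + 1.3)           # slightly different tremor frequency
    signals = [hand_a + 0.3 * rng.normal(size=t.size),
               0.8 * hand_a + 0.3 * rng.normal(size=t.size),
               hand_b + 0.3 * rng.normal(size=t.size)]
    print(group_fingers(signals))                         # expected output: [[0, 1], [2]]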

    Supporting Collaborative Learning in Computer-Enhanced Environments

    Full text link
    As computers have expanded into almost every aspect of our lives, the ever-present graphical user interface (GUI) has begun to face its limitations. Demanding its own share of attention, the GUI moves some of the user's focus away from the task, particularly when the task is 3D in nature or requires collaboration. Researchers are therefore exploring other means of human-computer interaction. Individually, some of these new techniques show promise, but it is the combination of multiple approaches into larger systems that will allow us to more fully replicate our natural behavior within a computing environment. The more capable computers become of understanding our varied natural behavior (speech, gesture, etc.), the less we need to adjust our behavior to conform to computers' requirements. Such capabilities are particularly useful where children are involved, and they make using computers in education all the more appealing. Described herein are two approaches to, and implementations of, educational computer systems that work not by user manipulation of virtual objects but by user manipulation of physical objects within the environment. These systems demonstrate how new technologies can promote collaborative learning among students, thereby enhancing both the students' knowledge and their ability to work together to achieve even greater learning. With these systems, the horizon of computer-facilitated collaborative learning has been expanded; this expansion includes the identification of issues for general and special education students and suggested applications in a variety of domains.