Supporting ethnographic studies of ubiquitous computing in the wild
Ethnography has become a staple feature of IT research over the last twenty years, shaping our understanding of the social character of computing systems and informing their design in a wide variety of settings. The emergence of ubiquitous computing raises new challenges for ethnography, however, distributing interaction across a burgeoning array of small, mobile devices and online environments which exploit invisible sensing systems. Understanding interaction requires ethnographers to reconcile interactions that are, for example, distributed across devices on the street with online interactions in order to assemble coherent understandings of the social character and purchase of ubiquitous computing systems. We draw upon four recent studies to show how ethnographers are replaying system recordings of interaction alongside existing resources such as video recordings to do this and identify key challenges that need to be met to support ethnographic study of ubiquitous computing in the wild.
Wireless Software Synchronization of Multiple Distributed Cameras
We present a method for precisely time-synchronizing the capture of image
sequences from a collection of smartphone cameras connected over WiFi. Our
method is entirely software-based, has only modest hardware requirements, and
achieves an accuracy of less than 250 microseconds on unmodified commodity
hardware. It does not use image content and synchronizes cameras prior to
capture. The algorithm operates in two stages. In the first stage, we designate
one device as the leader and synchronize each client device's clock to it by
estimating network delay. Once clocks are synchronized, the second stage
initiates continuous image streaming, estimates the relative phase of image
timestamps between each client and the leader, and shifts the streams into
alignment. We quantitatively validate our results on a multi-camera rig imaging
a high-precision LED array and qualitatively demonstrate significant
improvements to multi-view stereo depth estimation and stitching of dynamic
scenes. We release as open source 'libsoftwaresync', an Android implementation
of our system, to inspire new types of collective capture applications.
Comment: Main: 9 pages, 10 figures. Supplemental: 3 pages, 5 figures.
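The first stage described above rests on a standard idea: estimate each client's clock offset from the leader by timing a request/response round trip and assuming symmetric network delay, then prefer the measurement with the smallest round-trip time. The sketch below is illustrative, not the libsoftwaresync implementation; all names are invented for the example.

```python
def estimate_offset(t_send, t_leader, t_recv):
    """Cristian-style clock offset estimate.

    t_send:   client clock when the request was sent
    t_leader: leader clock when it replied
    t_recv:   client clock when the reply arrived
    Assumes the network delay is symmetric in both directions.
    """
    rtt = t_recv - t_send
    # The leader's timestamp corresponds roughly to the midpoint of the RTT.
    offset = t_leader - (t_send + rtt / 2.0)
    return offset, rtt


def best_sample(samples):
    """Keep the round trip with the smallest RTT: it carries the
    least queuing noise and hence the tightest offset bound."""
    return min(samples, key=lambda s: estimate_offset(*s)[1])
```

With the offset in hand, a client can map its local clock onto the leader's timeline and schedule capture triggers in shared time.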
Mobile collaborative video
The emergence of pico projectors as a part of future mobile devices presents unique opportunities for collaborative settings, especially in entertainment applications, such as video playback. By aggregating pico projectors from several users, it is possible to enhance resolution, brightness, or frame rate. In this paper we present a camera-based methodology for the alignment and synchronization of multiple projectors. The approach does not require any complicated ad hoc network setup among the mobile devices. A prototype system has been set up and used to test the proposed techniques.
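Camera-based alignment of this kind ultimately reduces to estimating a geometric mapping between each projector's output and a common target surface. As an illustrative sketch (not the paper's actual pipeline), a homography can be fit from detected point correspondences with the standard direct linear transform (DLT):

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst via the DLT.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In a multi-projector setup, the shared camera detects features (e.g. projected calibration markers) in each projector's image, and each fitted homography pre-warps that projector's frames so the outputs land on the same surface region.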
A Modular Approach for Synchronized Wireless Multimodal Multisensor Data Acquisition in Highly Dynamic Social Settings
Existing data acquisition literature for human behavior research provides
wired solutions, mainly for controlled laboratory setups. In uncontrolled
free-standing conversation settings, where participants are free to walk
around, these solutions are unsuitable. While wireless solutions are employed
in the broadcasting industry, they can be prohibitively expensive. In this
work, we propose a modular and cost-effective wireless approach for
synchronized multisensor data acquisition of social human behavior. Our core
idea involves a cost-accuracy trade-off by using Network Time Protocol (NTP) as
a source reference for all sensors. While commonly used as a reference in
ubiquitous computing, NTP is widely considered to be insufficiently accurate as
a reference for video applications, where Precision Time Protocol (PTP) or
Global Positioning System (GPS) based references are preferred. We argue and
show, however, that the latency introduced by using NTP as a source reference
is adequate for human behavior research, and the subsequent cost and modularity
benefits are a desirable trade-off for applications in this domain. We also
describe one instantiation of the approach deployed in a real-world experiment
to demonstrate the practicality of our setup in-the-wild.
Comment: 9 pages, 8 figures. Proceedings of the 28th ACM International Conference on Multimedia (MM '20), October 12--16, 2020, Seattle, WA, USA. First two authors contributed equally.
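Once every device timestamps its samples against a common NTP reference, post hoc alignment reduces to shifting each device's local timestamps by its measured offset and merging the streams. A minimal sketch of that merge step (helper names are hypothetical; the per-device NTP offsets are assumed to have been measured separately):

```python
def align_streams(streams, offsets):
    """Map per-device event timestamps onto a shared reference timeline.

    streams: {device_id: [(local_ts, sample), ...]}
    offsets: {device_id: reference_time - local_time, in seconds}
    Returns all samples merged and sorted by reference time.
    """
    merged = []
    for dev, events in streams.items():
        off = offsets[dev]
        for ts, sample in events:
            merged.append((ts + off, dev, sample))
    merged.sort()
    return merged
```

The cost-accuracy trade-off argued for above lives in `offsets`: NTP-derived offsets carry a few milliseconds of error, which this sketch simply inherits, but that is within tolerance for typical human-behavior annotations.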
Sensor Drive Mobile application for health awareness: The SSURE (Software System for User Running Evaluation) app for Android, a design that won't let you down
There has been a significant increase in the number of mobile applications concerned with health awareness due to the increase in the number of people who are concerned about their health and the rise in the number of people using smartphone/tablet devices. The development of applications related to health and exercise has become popular both in industry and academia. In this project we focus on the development of a mobile application that can capture a user's running movement and modify an audio file so that there is a synchronisation between the beats of the music and the kinetic data (cadence, or Steps per Minute/SPM) to motivate and guide their exercise. Our approach applies time-frequency analysis to obtain the SPM value by using the Lomb Periodogram technique, which can effectively process unevenly sampled data, a feature of the data captured from the built-in accelerometer sensor on a smartphone/tablet device. To produce the time-stretched audio file that is adjusted to the running information, the Phase Vocoder technique was used to transform the sound to a different speed without changing the pitch. Its sophisticated frequency-domain sound processing suits our project's objective. To guide the implementation of these algorithms, several Software Engineering techniques have been used to manage our project. The Agile software development life cycle (SDLC) technique known as Scrum was used throughout the development process in the design, testing, and implementation phases. This technique allowed us to change the plan when necessary, so it suited our project, which dealt with a new technology to be implemented within a short and limited timespan. Finally, we presented our evaluation to determine the accuracy of the results from our approaches and to assess the quality of our application. The results of the evaluation showed that our approaches to the functional requirements were effective and gave us accurate responses.
However, the non-functional requirements still needed to be improved, and it was found that a new mobile-oriented approach to software metrics is needed if we want to achieve our goals fully.
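The cadence-estimation step can be illustrated with a small self-contained version of the classic Lomb periodogram, which handles unevenly sampled accelerometer data directly. This is an illustrative sketch, not the app's code; the frequency band and function names are assumptions.

```python
import numpy as np

def lomb_periodogram(t, y, freqs):
    """Classic Lomb periodogram for unevenly sampled data.
    t: sample times (s); y: signal values; freqs: frequencies to test (Hz)."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # Time shift tau makes the sine and cosine terms orthogonal.
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

def cadence_spm(t, accel, f_lo=1.0, f_hi=4.0, n=300):
    """Estimate steps per minute from accelerometer magnitude samples,
    scanning a plausible running-cadence band (1-4 Hz assumed here)."""
    freqs = np.linspace(f_lo, f_hi, n)
    p = lomb_periodogram(t, accel, freqs)
    return 60.0 * freqs[np.argmax(p)]
```

The periodogram peak sits at the dominant stride frequency even though the accelerometer timestamps arrive at irregular intervals, which is exactly why the Lomb method is preferred over an FFT here.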
Software-Defined Lighting.
For much of the past century, indoor lighting has been based on incandescent or gas-discharge technology. But, with LED lighting experiencing a 20x/decade increase in flux density, 10x/decade decrease in cost, and linear improvements in luminous efficiency, solid-state lighting is finally cost-competitive with the status quo. As a result, LED lighting is projected to reach over 70% market penetration by 2030. This dissertation claims that solid-state lighting's real potential has been barely explored, that now is the time to explore it, and that new lighting platforms and applications can drive lighting far beyond its roots as an illumination technology. Scaling laws make solid-state lighting competitive with conventional lighting, but two key features make solid-state lighting an enabler for many new applications: the high switching speeds possible using LEDs and the color palettes realizable with Red-Green-Blue-White (RGBW) multi-chip assemblies.
For this dissertation, we have explored the post-illumination potential of LED lighting in applications as diverse as visible light communications, indoor positioning, smart dust time synchronization, and embedded device configuration, with an eventual eye toward supporting all of them using a shared lighting infrastructure under a unified system architecture that provides software-control over lighting. To explore the space of software-defined lighting (SDL), we design a compact, flexible, and networked SDL platform to allow researchers to rapidly test new ideas. Using this platform, we demonstrate the viability of several applications, including multi-luminaire synchronized communication to a photodiode receiver, communication to mobile phone cameras, and indoor positioning using unmodified mobile phones. We show that all these applications and many other potential applications can be simultaneously supported by a single lighting infrastructure under software control.
PhD dissertation, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111482/1/samkuo_1.pd
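The high switching speed is what makes LED luminaires usable as visible-light-communication transmitters. As an illustrative sketch of one common VLC line code (not the dissertation's actual modulation scheme), Manchester coding maps every bit to a two-symbol on/off pair, so the duty cycle stays at 50% and data transmission does not change perceived brightness:

```python
def manchester_encode(data: bytes):
    """Map each bit to an LED state pair: 1 -> (off, on), 0 -> (on, off).
    The guaranteed mid-bit transition keeps average brightness constant
    and gives the receiver a clock edge in every bit period."""
    symbols = []
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            symbols.extend((0, 1) if bit else (1, 0))
    return symbols

def manchester_decode(symbols):
    """Invert manchester_encode: read symbol pairs back into bytes."""
    bits = [1 if (a, b) == (0, 1) else 0
            for a, b in zip(symbols[::2], symbols[1::2])]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

A photodiode receiver sampling at twice the symbol rate can recover the stream; because the code is DC-balanced, the luminaire keeps serving its primary job of illumination while transmitting.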
Inter-Destination Multimedia Synchronization; Schemes, Use Cases and Standardization
Traditionally, the media consumption model
has been a passive and isolated activity. However, the
advent of media streaming technologies, interactive social
applications, and synchronous communications, as well as
the convergence between these three developments, point
to an evolution towards dynamic shared media experiences.
In this new model, geographically distributed groups of
consumers, independently of their location and the nature
of their end-devices, can be immersed in a common virtual
networked environment in which they can share multimedia
services, interact and collaborate in real-time within
the context of simultaneous media content consumption. In
most of these multimedia services and applications, apart
from the well-known intra- and inter-stream synchronization
techniques that operate within consumers' playout devices,
the synchronization of the playout processes between
several distributed receivers, known as multipoint, group,
or inter-destination multimedia synchronization
(IDMS), also becomes essential. Due to the
increasing popularity of social networking, this type of
multimedia synchronization has gained in popularity in
recent years. Although Social TV is perhaps the most
prominent use case in which IDMS is useful, in this paper
we present up to 19 use cases for IDMS, each one having
its own synchronization requirements. Different approaches
used in the (recent) past by researchers to achieve
IDMS are described and compared. As further proof of the
significance of IDMS nowadays, relevant organizations
(such as ETSI TISPAN and IETF AVTCORE Group)
efforts on IDMS standardization (in which authors have
been and are participating actively), defining architectures
and protocols, are summarized.
Montagud, M.; Boronat Segui, F.; Stokking, H.; Van Brandenburg, R. (2012). Inter-Destination Multimedia Synchronization; Schemes, Use Cases and Standardization. Multimedia Systems 18(6), 459-482. https://doi.org/10.1007/s00530-012-0278-9
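At its core, any IDMS control scheme compares the playout points reported by the distributed receivers (for example via RTCP reports) and tells each receiver how far to adjust. A deliberately simplified sketch of one common policy, synchronizing everyone to the most lagged receiver, is shown below; real schemes apply these corrections gradually through adaptive media playout rather than hard skips, and the function names here are illustrative:

```python
def idms_adjustments(playout_points):
    """Compute per-receiver playout corrections for one IDMS policy:
    align everyone to the most lagged receiver.

    playout_points: {receiver_id: current media position in seconds}
    Returns {receiver_id: seconds that receiver is ahead of the
    reference; it should pause or slow playout by that amount}.
    """
    reference = min(playout_points.values())
    return {rid: pos - reference for rid, pos in playout_points.items()}
```

Choosing the most lagged receiver as the reference avoids skipping content anyone has not yet seen; other policies (sync to the fastest receiver, or to a mean) trade that property for lower added latency.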
BatVision: Learning to See 3D Spatial Layout with Two Ears
Many species have evolved advanced non-visual perception while artificial
systems fall behind. Radar and ultrasound complement camera-based vision but
they are often too costly and complex to set up for very limited information
gain. In nature, sound is used effectively by bats, dolphins, whales, and
humans for navigation and communication. However, it is unclear how to best
harness sound for machine perception. Inspired by bats' echolocation mechanism,
we design a low-cost BatVision system that is capable of seeing the 3D spatial
layout of space ahead by just listening with two ears. Our system emits short
chirps from a speaker and records returning echoes through microphones in an
pair of artificial human pinnae. During training, we additionally use a stereo
camera to capture color images for calculating scene depths. We train a model
to predict depth maps and even grayscale images from the sound alone. During
testing, our trained BatVision provides surprisingly good predictions of 2D
visual scenes from two 1D audio signals. Such a sound to vision system would
benefit robot navigation and machine vision, especially in low-light or
no-light conditions. Our code and data are publicly available.
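The emit-chirp/record-echo front end lends itself to a compact illustration: matched-filtering the microphone signal against the transmitted chirp recovers the round-trip delay, and hence range. This is a sketch of the classical sonar step only, under assumed parameters; BatVision itself learns full depth maps from the raw binaural echoes rather than from a single delay estimate.

```python
import numpy as np

def linear_chirp(fs, dur, f0, f1):
    """Generate a linear frequency sweep from f0 to f1 Hz over dur seconds."""
    t = np.arange(int(fs * dur)) / fs
    # Instantaneous phase of a linear sweep: integral of the frequency ramp.
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * dur))
    return np.sin(phase)

def echo_delay(recording, chirp, fs):
    """Matched filter: the cross-correlation peak marks the round-trip
    delay of the strongest echo, in seconds."""
    corr = np.correlate(recording, chirp, mode="valid")
    return np.argmax(corr) / fs

def echo_distance(recording, chirp, fs, c=343.0):
    """Convert round-trip delay to one-way range at speed of sound c."""
    return echo_delay(recording, chirp, fs) * c / 2.0
```

Chirps are preferred over pure tones because their sharp autocorrelation peak makes the matched-filter output easy to localize even with overlapping echoes, which is the same property the learned model exploits implicitly.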