Fine-grained Activities of People Worldwide
Every day, humans perform many closely related activities that involve subtle
discriminative motions, such as putting on a shirt vs. putting on a jacket, or
shaking hands vs. giving a high five. Activity recognition by ethical visual AI
could provide insights into our patterns of daily life; however, existing
activity recognition datasets do not capture the massive diversity of these
human activities around the world. To address this limitation, we introduce
Collector, a free mobile app to record video while simultaneously annotating
objects and activities of consented subjects. This new data collection platform
was used to curate the Consented Activities of People (CAP) dataset, the first
large-scale, fine-grained activity dataset of people worldwide. The CAP dataset
contains 1.45M video clips of 512 fine-grained activity labels of daily life,
collected by 780 subjects in 33 countries. We provide activity classification
and activity detection benchmarks for this dataset, and analyze baseline
results to gain insight into how people around the world perform common
activities. The dataset, benchmarks, evaluation tools, public leaderboards and
mobile apps are available for use at visym.github.io/cap
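For the classification benchmark, a natural headline metric is clip-level
top-1 accuracy. The sketch below is illustrative only; the label strings and
prediction lists are hypothetical, not drawn from the CAP evaluation tools:

```python
# Hypothetical sketch: clip-level top-1 accuracy for a fine-grained
# activity classification benchmark. Labels are illustrative placeholders.

def top1_accuracy(predictions, ground_truth):
    """Fraction of clips whose predicted label matches the annotation."""
    assert len(predictions) == len(ground_truth)
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

preds = ["person_puts_on_jacket", "person_shakes_hand", "person_high_fives"]
truth = ["person_puts_on_jacket", "person_high_fives", "person_high_fives"]
print(top1_accuracy(preds, truth))  # 2 of 3 clips correct
```

Fine-grained labels make this metric unforgiving: confusing "puts on shirt"
with "puts on jacket" counts as a full error.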
Unconventional TV Detection using Mobile Devices
Recent studies show that the TV viewing experience is changing, giving rise
to trends like "multi-screen viewing" and "connected viewers". These
trends describe TV viewers that use mobile devices (e.g. tablets and smart
phones) while watching TV. In this paper, we exploit the context information
available from the ubiquitous mobile devices to detect the presence of TVs and
track the media being viewed. Our approach leverages the array of sensors
available in modern mobile devices, e.g. cameras and microphones, to detect the
location of TV sets, their state (ON or OFF), and the channels they are
currently tuned to. We demonstrate the feasibility of the proposed sensing
technique using our implementation on Android phones in different realistic
scenarios. Our results show that in a controlled environment a detection
accuracy of 0.978 F-measure could be achieved.
Comment: 4 pages, 14 figures
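The F-measure reported above is the harmonic mean of precision and recall. A
minimal sketch of the metric, using illustrative detection counts rather than
the paper's actual data:

```python
# F-measure (F1) from raw detection counts. The counts below are
# illustrative only; they merely produce a score near the one reported.

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 89 correct TV-state detections, 2 false alarms, 2 misses
print(round(f_measure(89, 2, 2), 3))  # 0.978
```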
Sensing motion using spectral and spatial analysis of WLAN RSSI
In this paper we show how motion sensing can be obtained just by observing the
WLAN radio signal strength and its fluctuations. The temporal, spectral and
spatial characteristics of the WLAN signal are analyzed. Our analysis confirms
our claim that "signal strength from access points appears to jump around more
vigorously when the device is moving compared to when it is still, and the
number of detectable access points varies considerably while the user is on
the move". Using this observation, we present a novel motion detection
algorithm, Spectrally Spread Motion Detection (SpecSMD), based on the spectral
analysis of the WLAN signal's RSSI. To benchmark the proposed algorithm, we
used Spatially Spread Motion Detection (SpatSMD), which is inspired by the
recent work of Sohn et al. Both algorithms were evaluated by carrying out
extensive measurements in a diverse set of conditions (indoors in different
buildings, and outdoors in a city center, parking lot, university campus,
etc.) and tested against the same data sets. The 94% average classification
accuracy of the proposed SpecSMD outperforms that of SpatSMD (87%). The motion
detection algorithms presented in this paper provide ubiquitous methods for
deriving the state of the user. The algorithms can be implemented and run on a
commodity device with WLAN capability without the need for any additional
hardware support.
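The core idea, that RSSI fluctuates more vigorously under motion, can be
sketched via the spectral spread of a short RSSI window. This is a hedged
illustration of the concept, not the paper's SpecSMD algorithm; the sampling
rate, window length, and threshold are assumptions:

```python
# Sketch of spectral-spread motion detection on a window of RSSI samples.
# Assumptions (not from the paper): 1 Hz sampling, 64-sample windows,
# and an illustrative decision threshold.
import numpy as np

def spectral_spread(rssi_window):
    """Power-weighted standard deviation of the window's frequency content."""
    x = np.asarray(rssi_window, dtype=float)
    x = x - x.mean()                          # remove the DC component
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0)    # d = sample spacing (1 s assumed)
    if power.sum() == 0:
        return 0.0                            # perfectly flat signal
    centroid = (freqs * power).sum() / power.sum()
    return float(np.sqrt(((freqs - centroid) ** 2 * power).sum() / power.sum()))

def is_moving(rssi_window, threshold=0.05):
    """Wider spectral spread -> more vigorous RSSI fluctuation -> motion."""
    return spectral_spread(rssi_window) > threshold

rng = np.random.default_rng(0)
still = [-60.0] * 64                                   # flat RSSI at rest
moving = list(-60 + 5 * rng.standard_normal(64))       # jumpy RSSI in motion
print(is_moving(still), is_moving(moving))
```

A real detector would of course be tuned on measured traces rather than a
fixed threshold.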
Design and recognition of microgestures for always-available input
Gestural user interfaces for computing devices most commonly require the user to have at least one hand free to interact with the device, for example, moving a mouse, touching a screen, or performing mid-air gestures. Consequently, users find it difficult to operate computing devices while holding or manipulating everyday objects. This prevents users from interacting with the digital world during a significant portion of their everyday activities, such as using tools in the kitchen or workshop, carrying items, or working out with sports equipment. This thesis pushes the boundaries towards the bigger goal of enabling always-available input. Microgestures have been recognized for their potential to facilitate direct and subtle interactions. However, it remains an open question how to interact with computing devices using gestures when both of the user's hands are occupied holding everyday objects. We take a holistic approach and focus on three core contributions: i) To understand end-user preferences, we present an empirical analysis of users' choice of microgestures when holding objects of diverse geometries. Instead of designing a gesture set for a specific object or geometry, and in order to identify gestures that generalize, this thesis leverages the taxonomy of grasp types established in prior research. ii) We tackle the critical problem of avoiding false activation by introducing a novel gestural input concept that leverages a single-finger movement which stands out from everyday finger motions during holding and manipulating objects. Through a data-driven approach, we also systematically validate the concept's robustness against different everyday actions. iii) While full sensor coverage on the user's hand would allow detailed hand-object interaction, minimal instrumentation is desirable for real-world use. This thesis addresses the problem of identifying sparse sensor layouts.
We present the first rapid computational method, along with a GUI-based design tool that enables iterative design based on the designer's high-level requirements. Furthermore, we demonstrate that minimal form-factor devices, like smart rings, can be used to effectively detect microgestures in hands-free and busy scenarios. Overall, the presented findings will serve as both conceptual and technical foundations for enabling interaction with computing devices wherever and whenever users need them.
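One common way to search for a sparse sensor layout, as contribution iii)
calls for, is greedy forward selection: repeatedly add the sensor that most
improves recognition accuracy. This is a hypothetical sketch, not the
thesis's actual rapid computational method; the sensor names and scoring
function are illustrative placeholders:

```python
# Hypothetical greedy forward selection of a sparse sensor layout.
# `score(subset)` stands in for evaluating microgesture recognition
# accuracy with only that subset of sensors instrumented.

def greedy_sensor_layout(all_sensors, score, budget):
    """Pick up to `budget` sensors, adding the best-gain sensor each step."""
    chosen = []
    while len(chosen) < budget:
        base = score(chosen)
        best, best_gain = None, 0.0
        for s in all_sensors:
            if s in chosen:
                continue
            gain = score(chosen + [s]) - base
            if gain > best_gain:
                best, best_gain = s, gain
        if best is None:          # no remaining sensor improves accuracy
            break
        chosen.append(best)
    return chosen

# Toy score: per-sensor informativeness (illustrative values only).
VALUE = {"thumb": 0.4, "index": 0.3, "middle": 0.1, "ring": 0.05, "palm": 0.02}

def toy_score(subset):
    return sum(VALUE[s] for s in subset)

print(greedy_sensor_layout(list(VALUE), toy_score, budget=2))  # ['thumb', 'index']
```

Greedy selection is not guaranteed optimal when sensors interact, which is
presumably why a dedicated computational method and design tool are needed.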
A Storm in an IoT Cup: The Emergence of Cyber-Physical Social Machines
The concept of social machines is increasingly being used to characterise
various socio-cognitive spaces on the Web. Social machines are human
collectives using networked digital technology which initiate real-world
processes and activities including human communication, interactions and
knowledge creation. As such, they continuously emerge and fade on the Web. The
relationship between humans and machines is made more complex by the adoption
of Internet of Things (IoT) sensors and devices. The scale, automation,
continuous sensing, and actuation capabilities of these devices add an extra
dimension to the relationship between humans and machines making it difficult
to understand their evolution at either the systemic or the conceptual level.
This article describes these new socio-technical systems, which we term
Cyber-Physical Social Machines, through different exemplars, and considers the
associated challenges of security and privacy.
Comment: 14 pages, 4 figures