It's Just My History Isn't It? Understanding smart journaling practices
Smart journals are both an emerging class of lifelogging applications and novel digital possessions, used to create and curate a personal record of one's life. Through an in-depth interview study of analogue and digital journaling practices, and by drawing on a wide range of research around 'technologies of memory', we address fundamental questions about how people manage and value digital records of the past. Appreciating journaling as deeply idiographic, we map a broad range of user practices and motivations, and use this understanding to ground four design considerations: recognizing the motivation to account for one's life; supporting the authoring of a unique perspective; and finding a place for passive tracking as a chronicle. Finally, we argue that smart journals signal a maturing orientation to issues of digital archiving.
Image-based Recommendations on Styles and Substitutes
Humans inevitably develop a sense of the relationships between objects, some
of which are based on their appearance. Some pairs of objects might be seen as
being alternatives to each other (such as two pairs of jeans), while others may
be seen as being complementary (such as a pair of jeans and a matching shirt).
This information guides many of the choices that people make, from buying
clothes to their interactions with each other. We seek here to model this human
sense of the relationships between objects based on their appearance. Our
approach is not based on fine-grained modeling of user annotations but rather
on capturing the largest dataset possible and developing a scalable method for
uncovering human notions of the visual relationships within. We cast this as a
network inference problem defined on graphs of related images, and provide a
large-scale dataset for the training and evaluation of the same. The system we
develop is capable of recommending which clothes and accessories will go well
together (and which will not), amongst a host of other applications.

Comment: 11 pages, 10 figures, SIGIR 201
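The notion of modeling visual compatibility as a learned distance between image features can be sketched as below. This is a minimal illustration, not the paper's method: the feature dimensions, the random projection standing in for a trained one, and the example items are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical image feature vectors (e.g. CNN outputs); all sizes and the
# projection below are illustrative assumptions, not the paper's parameters.
feat_dim, embed_dim = 64, 8

# Low-rank projection Y: distances in the embedded space realize a
# Mahalanobis-style metric d(x_i, x_j) = ||Y x_i - Y x_j||^2, which could be
# trained so that related product pairs end up closer than random pairs.
Y = rng.normal(size=(embed_dim, feat_dim))

def compatibility_distance(x_i, x_j):
    """Smaller distance = more likely the two items 'go together'."""
    return float(np.sum((Y @ x_i - Y @ x_j) ** 2))

jeans = rng.normal(size=feat_dim)
shirt = jeans + 0.1 * rng.normal(size=feat_dim)  # a visually related item
lamp = rng.normal(size=feat_dim)                 # an unrelated item
```

In a real system, Y would be learned from pairs of co-purchased or co-viewed products rather than drawn at random.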
AccessLens: Auto-detecting Inaccessibility of Everyday Objects
In our increasingly diverse society, everyday physical interfaces often
present barriers, impacting individuals across various contexts. This
oversight, from small cabinet knobs to identical wall switches that can pose
different contextual challenges, highlights an imperative need for solutions.
Leveraging low-cost 3D-printed augmentations such as knob magnifiers and
tactile labels seems promising, yet the process of discovering unrecognized
barriers remains challenging because disability is context-dependent. We
introduce AccessLens, an end-to-end system designed to identify inaccessible
interfaces in daily objects, and recommend 3D-printable augmentations for
accessibility enhancement. Our approach involves training a detector using the
novel AccessDB dataset designed to automatically recognize 21 distinct
Inaccessibility Classes (e.g., bar-small and round-rotate) within 6 common
object categories (e.g., handle and knob). AccessMeta serves as a robust way to
build a comprehensive dictionary linking these accessibility classes to
open-source 3D augmentation designs. Experiments demonstrate our detector's
performance in detecting inaccessible objects.

Comment: CHI202
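The AccessMeta dictionary described above can be pictured as a simple mapping from detected inaccessibility classes to candidate augmentation designs. The sketch below is a guess at the shape of that lookup; the class names follow the abstract's examples, but the design entries are placeholders, not the actual open-source catalog.

```python
# Hypothetical dictionary linking inaccessibility classes (detector output)
# to open-source 3D-printable augmentation designs. Entries are invented.
ACCESS_META = {
    "bar-small": ["lever-extender"],
    "round-rotate": ["knob-grip", "knob-magnifier"],
}

def recommend_augmentations(detected_classes):
    """Map detected inaccessibility classes to candidate 3D-print designs."""
    return {cls: ACCESS_META.get(cls, []) for cls in detected_classes}
```

An unknown class simply yields an empty recommendation list, leaving room for the dictionary to grow as new designs are contributed.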
Understanding everyday experiences of reminiscence for people living with blindness: Practices, tensions and probing new design possibilities
There is growing attention in the HCI community on how technology could be designed to support experiences of reminiscence on past life experiences. Yet this research has largely overlooked people living with blindness. I present a study that aims to understand everyday experiences of reminiscence for people living with blindness. I conducted a qualitative study with 9 participants living with blindness to understand their personal routines, wishes and desires, and challenges and tensions regarding the experience of reminiscence. The findings are interpreted to discuss new possibilities that offer starting points for future design initiatives, and openings for collaboration aimed at creating technology to better support the practices of capturing, sharing, and reflecting on significant memories of the past.
Expanding Data Imaginaries in Urban Planning: Foregrounding lived experience and community voices in studies of cities with participatory and digital visual methods
“Expanding Data Imaginaries in Urban Planning” synthesizes more than three years of industrial research conducted within Gehl and the Techno–Anthropology Lab at Aalborg University. Through practical experiments with social media images, digital photovoice, and participatory mapmaking, the project explores how visual materials created by citizens can be used within a digital and participatory methodology to reconfigure the empirical ground of data-driven urbanism. Drawing on a data feminist framework, the project uses visual research to elevate community voices and situate urban issues in lived experiences. As a Science and Technology Studies project, the PhD also utilizes its industrial position as an opportunity to study Gehl’s practices up close, unpacking collectively held narratives and visions that form a particular “data imaginary” and contribute to the production and perpetuation of the role of data in urban planning. The dissertation identifies seven epistemological commitments that shape the data imaginary at Gehl and act as discursive closures within their practice. To illustrate how planners might expand on these, the dissertation uses its own data experiments as speculative demonstrations of how to make alternative modes of knowing cities possible through participatory and digital visual methods.
Positioning Commuters And Shoppers Through Sensing And Correlation
Positioning is a basic and important need in many scenarios of daily human activity. With position information, a wide range of services can be enabled to benefit all kinds of users, from individuals to organizations. Through positioning, people can obtain not only geo-location but also time-related information. By aggregating position information from individuals, organizations can derive statistical knowledge about group behaviors, such as traffic flow, business activity, and events.
Although enormous effort has been invested in positioning-related academic and industrial work, many gaps remain. This dissertation proposes solutions to address the need for positioning in people’s daily lives from two aspects: transportation and shopping. All the solutions are smart-device-based (e.g., smartphones and smartwatches), and so could potentially benefit most users given the prevalence of smart devices.
In positioning-relevant activities, the components and their movement information can be sensed by different entities from diverse perspectives. The mechanisms presented in this dissertation treat the information collected from one perspective as a reference and match it against data collected from other perspectives to acquire absolute or relative position, in both the spatial and temporal dimensions.
For transportation, both driver- and passenger-oriented solutions are proposed. To help drivers improve safety and reduce the stress of driving, two correlated systems, OmniView [1] and DriverTalk [2], are provided. These systems infer the relative positions of vehicles moving together by matching the appearance images of the vehicles as seen by each other, helping drivers maintain a safe distance from surrounding vehicles and giving them the opportunity to precisely convey driving-related messages to targeted peer drivers.
To improve the bus-riding experience for passengers of public transit systems, a system named RideSense [3] is developed. This system correlates sensor traces collected by passengers’ smart devices with those from reference devices on buses to position passengers’ bus rides both spatially and temporally. With this system, passengers can be billed without any explicit interaction with conventional ticketing facilities, making the transportation system more efficient.
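The temporal side of this trace correlation can be illustrated with a standard cross-correlation between two sensor streams. This is a generic sketch of the idea, not RideSense itself: the synthetic traces, the sampling setup, and the use of plain cross-correlation are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a reference device's accelerometer trace on the bus.
ref = rng.normal(size=200)

# The passenger's device sees (roughly) the same motion, delayed by `offset`
# samples; here it is an exact delayed copy for clarity.
offset = 30
passenger = np.concatenate([np.zeros(offset), ref])[:200]

# Cross-correlate and convert the argmax index into a lag estimate:
# for mode="full", zero lag sits at index len(ref) - 1.
corr = np.correlate(passenger, ref, mode="full")
lag = int(np.argmax(corr)) - (len(ref) - 1)  # estimated delay in samples
```

The recovered lag aligns the two traces in time; a spatial match would then place the passenger on the reference device's known route.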
For shopping activities, AutoLabel [4, 5] positions customers with respect to stores. AutoLabel constructs a mapping between WiFi vectors and the semantic names of stores by correlating the text displayed inside stores with the text on the stores’ websites. Later, through a WiFi scan and a lookup in the mapping, a customer’s smart device can automatically recognize the semantic name of the store they are in or near. An AutoLabel-enabled smart device thus serves as a bridge for the information flow between business owners and customers, which could benefit both sides.
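The lookup step of such a mapping can be sketched as a nearest-fingerprint search over WiFi signal-strength vectors. Everything below is illustrative: the store names, access-point set, and dBm values are invented, and the abstract's actual mapping construction (via in-store text matching) is out of scope here.

```python
import numpy as np

# Hypothetical mapping from store names to WiFi fingerprints: each vector
# holds signal strengths (dBm) over a fixed set of three access points.
store_fingerprints = {
    "Coffee Corner": np.array([-40.0, -70.0, -90.0]),
    "Book Nook": np.array([-85.0, -45.0, -60.0]),
}

def label_store(scan):
    """Return the store whose fingerprint is nearest to the observed scan."""
    return min(store_fingerprints,
               key=lambda name: np.linalg.norm(store_fingerprints[name] - scan))
```

A fresh scan close to a stored fingerprint resolves directly to a semantic store name, with no explicit check-in by the customer.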
Event Based Media Indexing
Multimedia data, being multidimensional by nature, requires appropriate approaches for organizing and sorting it. The growing number of sensors that capture environmental conditions at the moment of media creation enriches data with context-awareness. This unveils enormous potential for an event-centred multimedia processing paradigm. The essence of this paradigm lies in using events as the primary means for multimedia integration, indexing, and management.
Events have the ability to semantically encode relationships between different informational modalities. These modalities can include, but are not limited to, time, space, and the agents and objects involved. As a consequence, event-based media processing facilitates information perception by humans, which in turn reduces the individual’s effort in annotation and organization. Moreover, events can be used to reconstruct missing data and to enrich information.
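An event model of this kind can be pictured as a record tying the modalities above to the media items it indexes. The sketch below is one plausible shape for such a model; the field names and the agent-based index are illustrative assumptions, not the dissertation's actual representation.

```python
from dataclasses import dataclass, field

# A minimal event record linking time, space, agents, and objects to media,
# so items can be retrieved through any of these modalities.
@dataclass
class Event:
    name: str
    start: str                                   # ISO 8601 timestamp
    location: tuple                              # (latitude, longitude)
    agents: list = field(default_factory=list)   # people involved
    objects: list = field(default_factory=list)  # salient objects
    media: list = field(default_factory=list)    # URIs of photos, clips, ...

def index_by_agent(events):
    """Invert an event list so media can be looked up per agent."""
    index = {}
    for ev in events:
        for agent in ev.agents:
            index.setdefault(agent, []).extend(ev.media)
    return index
```

Analogous inversions over `location` or `start` would give the spatial and temporal views of the same collection.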
The spatio-temporal component of events is key to contextual analysis. A variety of techniques have recently been presented to leverage contextual information for event-based analysis in multimedia. The content-based approach has shown its weakness in the field of event analysis, especially for the event detection task. However, content-based media analysis remains important for object detection and recognition, and can therefore play a role complementary to that of event-driven context recognition.
The main contribution of the thesis lies in the investigation of a new event-based paradigm for multimedia integration, indexing, and management. In this dissertation we propose i) a novel model for event-based multimedia representation, ii) a robust approach for mining events from multimedia, and iii) the exploitation of detected events for data reconstruction and knowledge enrichment.
Seeing the City Digitally
This book explores what's happening to ways of seeing urban spaces in the contemporary moment, when so many of the technologies through which cities are visualised are digital. Cities have always been pictured, in many media and for many different purposes. This edited collection explores how that picturing is changing in an era of digital visual culture. Analogue visual technologies like film cameras were understood as creating some sort of a trace of the real city. Digital visual technologies, in contrast, harvest and process digital data to create images that are constantly refreshed, modified and circulated. Each of the chapters in this volume examines a different example of how this processual visuality is reconfiguring the spatial and temporal organisation of urban life.