
    Towards memory supporting personal information management tools

    In this article we discuss re-retrieving personal information objects and relate the task to recovering from lapses in memory. We propose that it is fundamentally lapses in memory that impede users from successfully re-finding the information they need. Our hypothesis is that by learning more about memory lapses in non-computing contexts, and about how people cope with and recover from these lapses, we can better inform the design of PIM tools and improve users' ability to re-access and re-use objects. We describe a diary study that investigates the everyday memory problems of 25 people from a wide range of backgrounds. Based on the findings, we present a series of principles that we hypothesize will improve the design of personal information management tools. This hypothesis is validated by an evaluation of a tool for managing personal photographs, which was designed with respect to our findings. The evaluation suggests that users' performance when re-finding objects can be improved by building personal information management tools that support characteristics of human memory.

    Developing Accessible Collection and Presentation Methods for Observational Data

    The processes of collecting, cleaning, and presenting data are critical in ensuring the proper analysis of data at a later date. An opportunity exists to enhance the data collection and presentation process for those who are not data scientists, such as healthcare professionals and businesspeople interested in using data to help them make decisions. In this work, creating an observational data collection and presentation tool is investigated, with a focus on developing a tool that prioritizes user-friendliness and preserves the context of the data collected. This aim is achieved via the integration of three approaches to data collection and presentation. In the first approach, the collection of observational data is structured and carried out via a trichotomous, tailored, sub-branching scoring (TTSS) system. The system allows for deep levels of data collection while enabling a user to summarize the data quickly by collapsing details. The system is evaluated against the stated requirements of usability and extensibility, the latter demonstrated by examples of various evaluations created using the TTSS framework. Next, this approach is integrated with automated data collection via mobile device sensors, to facilitate efficient completion of the assessment. Results are presented from a system used to capture complex data about the built environment, and the results of the data collection are compared, including how the system uses quantitative measures specifically. This approach is evaluated against other solutions for obtaining data about the accessibility of a built environment, and several assessments taken in the field are compared to illustrate the system's flexibility. The extension of the system for automated data capture is also discussed. Finally, the use of accessibility information for preserving data context is integrated. This approach is evaluated by investigating how accessible media entries improve the quality of search for an archival website. Human-generated accessibility information is compared to computer-generated accessibility information, as well as to simple reliance on titles and metadata. This is followed by a discussion of how improved accessibility can benefit the understanding of gathered observational data's context.
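The abstract describes the TTSS system only at a high level. As a rough illustration of what a trichotomous, sub-branching score tree with collapsible summaries might look like, here is a minimal Python sketch; the pass/partial/fail scale and all names are assumptions, not the author's actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical three-valued scale: the abstract says "trichotomous"
# but does not name the levels; pass/partial/fail is an assumption.
PASS, PARTIAL, FAIL = 2, 1, 0

@dataclass
class Item:
    """One observation item; sub-branches hold tailored follow-up detail."""
    name: str
    score: int = PASS
    children: list["Item"] = field(default_factory=list)

    def summary(self) -> tuple[int, int]:
        """Collapse the subtree into (points earned, points possible)."""
        earned, possible = self.score, PASS
        for child in self.children:
            e, p = child.summary()
            earned += e
            possible += p
        return earned, possible

# Illustrative accessibility assessment of a building entrance
entrance = Item("entrance", PARTIAL, [
    Item("ramp present", PASS),
    Item("door width >= 32in", FAIL),
])
earned, possible = entrance.summary()
print(f"{earned}/{possible}")  # 3/6
```

Collapsing is just a matter of reporting `summary()` at an inner node instead of walking its children, which matches the abstract's point that deep data can still be summarized quickly.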

    Introduction: Ways of Machine Seeing

    How do machines, and, in particular, computational technologies, change the way we see the world? This special issue brings together researchers from a wide range of disciplines to explore the entanglement of machines and their ways of seeing from new critical perspectives. This 'editorial' is for a special issue of AI & Society, which includes contributions from: María Jesús Schultz Abarca, Peter Bell, Tobias Blanke, Benjamin Bratton, Claudio Celis Bueno, Kate Crawford, Iain Emsley, Abelardo Gil-Fournier, Daniel Chávez Heras, Vladan Joler, Nicolas Malevé, Lev Manovich, Nicholas Mirzoeff, Perle Møhl, Bruno Moreschi, Fabian Offert, Trevor Paglen, Jussi Parikka, Luciana Parisi, Matteo Pasquinelli, Gabriel Pereira, Carloalberto Treccani, Rebecca Uliasz, and Manuel van der Veen.

    Term-driven E-Commerce

    This thesis addresses the textual dimension of e-commerce. Its fundamental hypothesis is that information and transactions in electronic commerce are bound to text: wherever products and services are offered, sought, perceived, and evaluated, natural-language expressions come into play. Two consequences follow. First, it is important to capture the variance of textual descriptions in e-commerce; second, the extensive textual resources that accrue in e-commerce interactions can be drawn upon for a better understanding of natural language.

    Grasp-sensitive surfaces

    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting the precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and on challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction. For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing.
    It comprises *capturing* sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementing grasp-sensitive surfaces is still hard, that researchers are often unaware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention. To address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. Each mitigates one or more limitations of traditional sensing techniques. **HandSense** uses four strategically positioned capacitive sensors to detect and classify grasp patterns on mobile phones. The use of custom-built, high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera to detect touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that determines the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, to touch and grasp sensing. TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges in making sense of raw grasp information and categorize possible interactions.
    Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
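The capture/identify/interpret workflow the abstract formalizes can be illustrated with a minimal sketch. The sensor values, grasp templates, nearest-template matching, and context flags below are all illustrative assumptions, not the dissertation's actual classifiers:

```python
# Hypothetical capacitive readings (four sensors, cf. HandSense)
# associated with two grasp patterns; the labels are made up.
TEMPLATES = {
    "phone_call": (0.9, 0.8, 0.1, 0.2),
    "photo":      (0.2, 0.1, 0.9, 0.8),
}

def capture() -> tuple[float, ...]:
    """Step 1: capture raw sensor values (stubbed with fixed readings)."""
    return (0.85, 0.75, 0.15, 0.25)

def identify(values) -> str:
    """Step 2: identify the grasp as the nearest known template."""
    def dist(template):
        return sum((a - b) ** 2 for a, b in zip(values, template))
    return min(TEMPLATES, key=lambda name: dist(TEMPLATES[name]))

def interpret(grasp: str, context: dict) -> str:
    """Step 3: interpret the grasp, combined with other context
    information as the abstract argues it should be."""
    if grasp == "phone_call" and not context.get("in_meeting", False):
        return "open dialer"
    return "open camera" if grasp == "photo" else "do nothing"

grasp = identify(capture())
print(grasp, "->", interpret(grasp, {"in_meeting": False}))
# phone_call -> open dialer
```

Note how the `context` argument keeps the interpretation step from relying on grasp data alone, mirroring the claim that grasp sensor data should only be used together with other usage context.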

    Listening to Museums: Sounds as objects of culture and curatorial care

    This practice-based project begins with an exploration of the acoustic environments of a variety of contemporary museums via field recording and sound mapping. Through a critical listening practice, this mapping leads to a central question: can sounds act as objects analogous to physical objects within museum practice – and if so, what is at stake in creating a museum that only exhibits sounds? Given the interest in collecting and protecting intangible culture within contemporary museum practice, as well as the evolving anthropological view of sound as an object of human culture, this project suggests that a re-definition of Pierre Schaeffer's oft-debated term 'sound object' within the context of museum practice may be of use in re-imagining how sounds might function within traditionally object-based museum exhibition practices. Furthermore, the longstanding notion of 'soundmarks' – sounds that recur within local communities and help to define their unique cultural identity – is explored as a means by which post-industrial sounds, such as traffic signals for the visually impaired and those made by public transport, may be considered deserving of protection by museum practitioners. These ideas are then tested via creative practice by establishing an experimental curatorial project, The Museum of Portable Sound (MOPS), an institution dedicated to collecting, preserving, and exhibiting sounds as objects of culture and human agency.
    MOPS displays sounds, collected via the author's field recording practice, as museological objects that, like the physical objects described by Stephen Greenblatt, 'resonate' with the outside world – but also with each other, via a careful selection and sequencing that calls back to the mix tape culture of the late twentieth century. The unconventional form of MOPS – digital audio files on a single mobile phone, accompanied by a museum 'map' and Gallery Guide – emphasizes social connections between the virtual and the physical. The project presents a viable format via which sounds may be displayed as culture while also interrogating what a museum can be in the twenty-first century.

    Understanding Novice Users' Help-seeking Behavior in Getting Started with Digital Libraries: Influence of Learning Styles

    Users' information needs have to be fulfilled by providing a well-designed system. However, end users usually encounter various problems when interacting with information retrieval (IR) systems, and this is even more so for novice users. The most common problem reported in previous research is that novice users do not know how to get started, even though most IR systems contain help mechanisms. There is a deep gap between the system's help function and the user's needs. In order to fill this gap and provide a better interaction environment, it is necessary to have a clearer picture of the problem and to understand novice users' behaviors in using IR systems. The purpose of this study is to identify novice users' help-seeking behaviors while they get started with digital libraries, and how their learning styles lead to these behaviors. While a novice user is engaged in the process of interacting with an IR system, he or she may easily encounter problematic situations and require some kind of help in the search process. Novice users need to learn how to use a new IR environment by interacting with help features to fulfill their searching needs. However, many research studies have demonstrated that the existing help systems in IR systems cannot fully satisfy users' needs. In addition to the system-side problems, users' characteristics, such as preference in using help, also play major roles in the decision to use system help. When help-seeking is viewed as a learning activity, learning style is an influential factor that can lead to different help-seeking behaviors. Learning style deeply influences how students process information in learning activities, including learning performance, learning strategy, and learning preferences.
    Existing research does not seem to consider learning style and help-seeking together; therefore, the aim of this study is to explore the effects of learning styles on help-seeking interactions in the information seeking and searching environment. The study took place in an academic setting and recruited 60 participants representing students from different education levels and disciplines. Data were collected by different methods, including a pre-questionnaire, a cognitive preference questionnaire, think-aloud protocols, transaction logs, and interviews. Both qualitative and quantitative approaches were employed to analyze the data. Qualitative methods were first applied to explore novice users' help-seeking approaches and to illustrate how learning styles lead to these approaches. Quantitative methods followed, to test whether or not learning style affects help-seeking behaviors and approaches. The results of this study highlight two findings. First, the study identifies eight types of help features used by novice users with different learning styles; the quantitative evidence also verifies the effect of learning styles on help-seeking interactions with help features. Second, building on the analysis of help features, the study identifies fifteen help-seeking approaches applied by users with different learning styles in digital libraries. The broad triangulation approach assumed in this study not only enables the illustration of novice users' diversified help-seeking approaches but also explores and confirms the relationships between different dimensions of learning styles and help-seeking behaviors. The results also suggest that the design and delivery of IR systems, including digital libraries, need to support different learning styles by offering more engaging processing layouts, diversified input formats, and easy-to-perceive, easy-to-understand modes of help features.

    Content-aware : investigating tools, character & user behavior

    Content—Aware serves as a platform for investigating structure, corruption, and visual interference in the context of present-day technologies. I use fragmentation, movement, repetition, and abstraction to interrogate current methods and tools for engaging with the built environment, here broadly conceived as the material, spatial, and cultural products of human labor. Physical and graphic spaces become grounds for testing visual hypotheses. By testing images and usurping image-making technologies, I challenge the fidelity of vision and representation. Rooted in active curiosity and a willingness to fully engage, I collaborate with digital tools, play with their edges, and build perceptual portholes. Through documentation and curation of visual experience, I expose and challenge a capitalist image infrastructure. I create, collect, and process images using smartphone cameras, screen recordings, and applications such as Shrub and Photoshop. These devices and programs, which have the capacity to produce visual smoothness and polish, also inherently engender repetition and fragmentation. The same set of tools used to perfect images is easily reoriented toward visual destabilization. The projects presented here are not meant to serve as literal translations, but rather as symbols or variables in experimental graphic communication strategies. Employing these strategies, I reveal the frames and tools through which we view the world. By exploring and exploiting the limitations of man-made technologies, I reveal the breadth of our human relationships with them, including those of creators, directors, users, and recipients.

    Letting things speak: a case study in the reconfiguring of a South African institutional object collection

    In this thesis I examine the University of Cape Town (UCT) Manuscripts and Archives Department object collection, providing insights into the origins of the collection and its status within the archive. Central to the project was my application of a set of creative and affective strategies in response to the collection, culminating in a body of artwork entitled Slantways, shown at the Centre for African Studies (CAS) Gallery at UCT in 2014. The collection of about 200 slightly shabby, mismatched artefacts was assembled by R.F.M. Immelman, University Librarian from 1940 until 1970, who welcomed donations of any material he felt would be of value to future scholars. Since subsequent custodians have accorded these things, with their taint of South Africa's colonial past, rather less status, for many years they held an anomalous position within the archive, devalued and marginalised, yet still well cared for. The thesis explores the ways in which an interlinked series of oblique or slantways conceptual and methodological strategies can unsettle conventional understandings of these archival things, the history with which they are associated, and the archive that houses them. I show how such an unsettling facilitates a complex and subtle range of understandings of the artefacts themselves, and reveals the constructed and contingent nature of the archive, as well as its biases, lacunae, and limitations, in ways that conventional approaches focusing on its evidentiary function allow to remain hidden. This set of slantways strategies includes the use of a cross-medial creative approach and a focus on an atypical, marginalised, and taxonomy-free collection. Also important is the incorporation of my visual impairment as a vital influence on my artwork, leading to an emphasis both on unusual forms of seeing and on the senses of smell, touch, and hearing.
    Furthermore, my choice to follow a resolutely thing-centred approach led me to engage very closely with the artefacts' materiality, and subsequently with their actancy as archival things, which in turn influenced my conceptual and creative choices.