9 research outputs found

    Constructing the space of visual attention

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2012. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Page 180 blank. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 168-171). This thesis explores the nature of a human experience in space through a primary inquiry into vision. This inquiry begins by questioning the existing methods and instruments employed to capture and represent a human experience of space. While existing qualitative and quantitative methods and instruments -- from "subjective" interviews to "objective" photographic documentation -- may lead to insight in the study of a human experience in space, we argue that they are inherently limited with respect to physiological realities. As one moves about the world, one believes one sees the world as continuous and fully resolved. However, this is not how human vision is currently understood to function on a physiological level. If we want to understand how humans visually construct a space, then we must examine patterns of visual attention on a physiological level. In order to inquire into patterns of visual attention in three-dimensional space, we need to develop new instruments and new methods of representation. The instruments we require directly address the physiological realities of vision, and the methods of representation seek to situate the human subject within a space of their own construction. In order to achieve this goal we have developed PUPIL, a custom set of hardware and software instruments that capture the subject's eye movements. Using PUPIL, we have conducted a series of trials, from proof of concept -- demonstrating the capabilities of our instruments -- to critical inquiry of the relationship between a human subject and a space. We have developed software to visualize this unique spatial experience and have posed open questions based on the initial findings of our trials. This thesis aims to contribute to spatial design disciplines by providing a new way to capture and represent a human experience of space. by Moritz Philipp Kassner [and] William Rhoades Patera. S.M.

    Adversarial Attacks on Classifiers for Eye-based User Modelling

    Full text link
    An ever-growing body of work has demonstrated the rich information content available in eye movements for user modelling, e.g. for predicting users' activities, cognitive processes, or even personality traits. We show that state-of-the-art classifiers for eye-based user modelling are highly vulnerable to adversarial examples: small artificial perturbations in gaze input that can dramatically change a classifier's predictions. We generate these adversarial examples using the Fast Gradient Sign Method (FGSM), which linearises the gradient to find suitable perturbations. On the sample task of eye-based document type recognition we study the success of different adversarial attack scenarios: with and without knowledge about classifier gradients (white-box vs. black-box), as well as with and without targeting the attack to a specific class. In addition, we demonstrate the feasibility of defending against adversarial attacks by adding adversarial examples to a classifier's training data. Comment: 9 pages, 7 figures
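    To make the attack concrete, the following is a minimal FGSM sketch in PyTorch for a gaze-feature classifier. The model, input layout, and function name are illustrative assumptions, not the paper's actual architecture or evaluation code; only the perturbation rule (a single step along the sign of the loss gradient) follows the abstract.

        # Minimal FGSM sketch (PyTorch). `model` and the gaze feature layout
        # are placeholders; the paper's actual setup may differ.
        import torch
        import torch.nn.functional as F

        def fgsm_attack(model, x, y, epsilon, target_class=None):
            """Return an adversarially perturbed copy of gaze input x.

            x: (batch, features) gaze feature tensor, y: true labels.
            If target_class is given, the attack is targeted.
            """
            x_adv = x.clone().detach().requires_grad_(True)
            logits = model(x_adv)
            if target_class is not None:
                # Targeted: descend the loss of the desired class.
                loss = F.cross_entropy(logits, target_class)
                step = -epsilon
            else:
                # Untargeted: ascend the loss of the true class.
                loss = F.cross_entropy(logits, y)
                step = epsilon
            loss.backward()
            # Single linearised step along the sign of the input gradient.
            return (x_adv + step * x_adv.grad.sign()).detach()

    In the white-box scenario the gradient comes from the attacked classifier itself; in the black-box scenario it would be taken from a substitute model, with the resulting perturbations transferred to the target classifier.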

    BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

    Full text link
    Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
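    Since the model is openly released, a hedged sketch of how it can be queried with the Hugging Face transformers library is shown below. The smaller bigscience/bloom-560m checkpoint is used so the example runs on commodity hardware; the full 176B model requires multi-GPU or offloaded inference, and the prompt is purely illustrative.

        # Illustrative only: generating text with an open BLOOM checkpoint.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_name = "bigscience/bloom-560m"  # swap for "bigscience/bloom" at full scale
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)

        prompt = "Translate to French: The weather is nice today."
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=30)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))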

    Pupil

    No full text
    Commercial head-mounted eye trackers provide useful features to customers in industry and research but are expensive and rely on closed source hardware and software. This limits the application areas and use of mobile eye tracking to expert users and inhibits user-driven development, customisation, and extension. In this paper we present Pupil -- an accessible, affordable, and extensible open source platform for mobile eye tracking and gaze-based interaction. Pupil comprises 1) a light-weight headset with high-resolution cameras, 2) an open source software framework for mobile eye tracking, as well as 3) a graphical user interface (GUI) to play back and visualize video and gaze data. Pupil features high-resolution scene and eye cameras for monocular and binocular gaze estimation. The software and GUI are platform-independent and include state-of-the-art algorithms for real-time pupil detection and tracking, calibration, and accurate gaze estimation. Results of a performance evaluation show that Pupil can provide an average gaze estimation accuracy of 0.6 degrees of visual angle (0.08 degrees precision) with a latency of the processing pipeline of only 0.045 seconds. Comment: 10 pages, 9 figures
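    For readers unfamiliar with how such accuracy and precision figures are obtained, the sketch below shows the conventional eye-tracking definitions: accuracy as the mean angular offset between estimated and target gaze directions, and precision as the RMS of angular differences between successive samples during a fixation. This mirrors standard practice and is not necessarily the exact evaluation code used in the Pupil paper.

        # Hedged sketch of standard gaze accuracy/precision metrics.
        import numpy as np

        def angular_error_deg(a, b):
            """Angle in degrees between two arrays of unit gaze vectors (N, 3)."""
            cos = np.clip(np.sum(a * b, axis=1), -1.0, 1.0)
            return np.degrees(np.arccos(cos))

        def accuracy_deg(estimated, ground_truth):
            # Accuracy: mean angular offset from the true target directions.
            return angular_error_deg(estimated, ground_truth).mean()

        def precision_deg(estimated):
            # Precision: RMS of angular differences between successive samples
            # while the subject fixates a static target.
            diffs = angular_error_deg(estimated[:-1], estimated[1:])
            return np.sqrt(np.mean(diffs ** 2))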

    Bibliography (Literatur-Verzeichnis)

    No full text

    Growth of epitaxial graphene: Theory and experiment

    No full text