
    Representing 3D Space in Working Memory: Spatial Images from Vision, Hearing, Touch, and Language

    The chapter deals with a form of transient spatial representation referred to as a spatial image. Like a percept, it is externalized, scaled to the environment, and can appear in any direction about the observer. It transcends the concept of modality, as it can be based on inputs from the three spatial senses, from language, and from long-term memory. Evidence is presented that supports each of the claimed properties of the spatial image, showing that it is quite different from a visual image. Much of the evidence presented is based on spatial updating. A major concern is whether spatial images from different input modalities are functionally equivalent, that is, whether once instantiated in working memory they have the same functional characteristics with respect to subsequent processing, such as that involved in spatial updating. Going further, the research provides some evidence that spatial images are amodal (i.e., they do not retain modality-specific features).

    Object Action Complexes as an Interface for Planning and Robot Control

    Abstract — Much prior work in integrating high-level artificial intelligence planning technology with low-level robotic control has foundered on the significant representational differences between these two areas of research. We discuss a proposed solution to this representational discontinuity in the form of object-action complexes (OACs). The pairing of actions and objects in a single interface representation captures the needs of both reasoning levels, and will enable machine learning of high-level action representations from low-level control representations. I. Introduction and Background. The difference between the representations that are effective for continuous control of robotic systems and those used in discrete symbolic AI presents a significant challenge for integrating AI planning research and robotics. These areas of research should be able
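    The abstract above describes OACs only at a conceptual level. Below is a minimal, hypothetical sketch (every name is invented here, not the authors' interface) of how pairing an object with an action and its expected effect in a single record might look, so that a symbolic planner and a low-level controller can share the same representation.

```python
# Illustrative sketch only: a simplified object-action complex (OAC) style record.
# All identifiers are hypothetical and not taken from the paper.
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]  # e.g. {"gripper_open": 1.0, "cup_grasped": 0.0}

@dataclass
class ObjectActionComplex:
    """Pairs a symbolic action with the object it applies to, plus the
    predicted state change, so planner and controller share one record."""
    object_id: str                         # symbolic handle for the planner
    action_name: str                       # e.g. "grasp"
    controller: Callable[[State], State]   # low-level routine that executes it
    expected_effect: State                 # what the planner assumes will change

    def execute(self, world: State) -> State:
        return self.controller(world)

# Usage sketch: a "grasp cup" OAC whose effect a planner could reason about
def grasp_cup(world: State) -> State:
    new = dict(world)
    new["cup_grasped"] = 1.0
    return new

oac = ObjectActionComplex("cup_1", "grasp", grasp_cup, {"cup_grasped": 1.0})
world = {"gripper_open": 1.0, "cup_grasped": 0.0}
world = oac.execute(world)
```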

    Stereoscopic viewing, roughness and gloss perception

    This thesis presents a novel investigation into the effect stereoscopic vision has upon the strength of perceived gloss on rough surfaces. We demonstrate that in certain cases disparity is necessary for accurate judgements of gloss strength. We first detail the process we used to create a two-level taxonomy of property terms, which helped to inform the early direction of this work, before presenting the eleven words which we found categorised the property space. This shaped a careful examination of the relevant literature, leading us to conclude that most studies into roughness, gloss, and stereoscopic vision have been performed with unrealistic surfaces and physically inaccurate lighting models. To improve on the stimuli used in these earlier studies, advanced offline rendering techniques were employed to create images of complex, naturalistic, and realistically glossy 1/fβ noise surfaces. These images were rendered using multi-bounce path tracing to account for interreflections and soft shadows, with a reflectance model which captured all common light phenomena. Using these images in a series of psychophysical experiments, we first show that random phase spectra can alter the strength of perceived gloss. These results are presented alongside pairs of the surfaces tested which have similar levels of perceptual gloss. These surface pairs are then used to conclude that naïve observers consistently underestimate how glossy a surface is without the correct surface and highlight disparity, but only on the rougher surfaces presented.
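    As a companion to the description above, here is a minimal sketch of how a 1/fβ noise height map with a random phase spectrum can be generated, under the common assumption that the power spectrum falls off as 1/f^β (conventions vary); the thesis' actual path-traced rendering of such surfaces is not reproduced here.

```python
# Minimal sketch (not the thesis' pipeline): a 1/f^beta noise height map
# built by shaping the amplitude spectrum and assigning random phases.
import numpy as np

def noise_surface(n=256, beta=1.8, seed=0):
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = np.inf                          # avoid divide-by-zero at DC
    amplitude = 1.0 / f**(beta / 2.0)         # power spectrum ~ 1/f^beta
    phase = rng.uniform(0, 2 * np.pi, (n, n))  # the "random phase spectrum"
    spectrum = amplitude * np.exp(1j * phase)
    height = np.fft.ifft2(spectrum).real
    return (height - height.mean()) / height.std()   # normalise roughness scale

heights = noise_surface()
```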

    Tactile mesh saliency

    While the concept of visual saliency has been previously explored in the areas of mesh and image processing, saliency detection also applies to other sensory stimuli. In this paper, we explore the problem of tactile mesh saliency, where we define salient points on a virtual mesh as those that a human is more likely to grasp, press, or touch if the mesh were a real-world object. We solve the problem of taking a 3D mesh as input and computing the relative tactile saliency of every mesh vertex. Since it is difficult to manually define a tactile saliency measure, we introduce a crowdsourcing and learning framework. It is typically easier for humans to provide relative rankings of saliency between vertices than absolute values. We therefore collect crowdsourced data of such relative rankings and take a learning-to-rank approach. We develop a new formulation to combine deep learning and learning-to-rank methods to compute a tactile saliency measure. We demonstrate our framework with a variety of 3D meshes and various applications including material suggestion for rendering and fabrication.
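    The learning-to-rank step described above can be illustrated with a generic pairwise (RankNet-style) loss: crowdsourced judgements say vertex i is more "touchable" than vertex j, and a scoring model is trained so that score(i) exceeds score(j). This is a simplified sketch, not the paper's exact deep formulation.

```python
# Generic pairwise ranking loss over per-vertex saliency scores (illustrative only).
import numpy as np

def pairwise_rank_loss(scores, pairs):
    """scores: per-vertex saliency predictions; pairs: (i, j) with i ranked above j."""
    i = np.array([p[0] for p in pairs])
    j = np.array([p[1] for p in pairs])
    margin = scores[i] - scores[j]
    return np.mean(np.log1p(np.exp(-margin)))   # small when score(i) >> score(j)

# Toy usage: 4 vertices, the crowd says vertex 0 > vertex 1 and vertex 2 > vertex 3
scores = np.array([0.9, 0.1, 0.7, 0.2])
print(pairwise_rank_loss(scores, [(0, 1), (2, 3)]))
```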

    Tactual perception: a review of experimental variables and procedures

    This paper reviews the literature on tactual perception. Throughout this review we highlight some of the most relevant variables in the touch literature: interaction between touch and other senses; type of stimuli, from abstract stimuli such as vibrations to two- and three-dimensional stimuli, also considering concrete stimuli such as the relation between familiar and unfamiliar stimuli or the haptic perception of faces; type of participants, separating studies with blind participants, studies with children and adults, and an analysis of sex differences in performance; and finally, type of tactile exploration, considering conditions of active and passive touch, the relevance of movement in touch, and the relation between exploration and time. This review intends to present an organised overview of the main variables in touch experiments, attending to the main findings described in the literature, to guide the design of future works on tactual perception and memory. This work was funded by the Portuguese “Foundation for Science and Technology” through PhD scholarship SFRH/BD/35918/2007.

    A white paper: NASA virtual environment research, applications, and technology

    Research support for Virtual Environment technology development has been a part of NASA's human factors research program since 1985. Under the auspices of the Office of Aeronautics and Space Technology (OAST), initial funding was provided to the Aerospace Human Factors Research Division, Ames Research Center, which resulted in the origination of this technology. Since 1985, other Centers have begun using and developing this technology. At each research and space flight center, NASA missions have been major drivers of the technology. This White Paper was the joint effort of all the Centers which have been involved in the development of this technology and its applications to their unique missions. Appendix A lists those who worked to prepare the document, directed by Dr. Cynthia H. Null, Ames Research Center, and Dr. James P. Jenkins, NASA Headquarters. This White Paper describes the technology and its applications in NASA Centers (Chapters 1, 2 and 3), the potential roles it can take in NASA (Chapters 4 and 5), and a roadmap of the next 5 years (FY 1994-1998). The audience for this White Paper consists of managers, engineers, scientists and the general public with an interest in Virtual Environment technology. Those who read the paper will determine whether this roadmap, or others, are to be followed.

    Communication of Digital Material Appearance Based on Human Perception

    In daily life, we encounter digital materials and interact with them in numerous situations, for instance when we play computer games, watch a movie, see a billboard in the metro station or buy new clothes online. While some of these virtual materials are given by computational models that describe the appearance of a particular surface based on its material and the illumination conditions, others are presented as simple digital photographs of real materials, as is usually the case for material samples from online retailing stores. The use of computer-generated materials entails significant advantages over plain images, as they allow realistic experiences in virtual scenarios, cooperative product design, advertising in the prototype phase, or the exhibition of furniture and wearables in specific environments. However, even though exceptional material reproduction quality has been achieved in the domain of computer graphics, current technology is still far from highly accurate photo-realistic virtual material reproductions for the wide range of existing categories, and for this reason many material catalogs still use pictures or even physical material samples to illustrate their collections. An important reason for this gap between digital and real material appearance is that the connections between physical material characteristics and the visual quality perceived by humans are far from well understood. Our investigations intend to shed some light in this direction. Concretely, we explore the ability of state-of-the-art digital material models to communicate physical and subjective material qualities, observing that part of the tactile/haptic information (e.g., thickness, hardness) is missing due to the geometric abstractions intrinsic to the models. Consequently, in order to account for the information lost during the digitization process, we investigate the interplay between different sensing modalities (vision and hearing) and discover that particular sound cues, in combination with visual information, facilitate the estimation of such tactile material qualities. One of the shortcomings when studying material appearance is the lack of perceptually derived metrics able to answer questions like "are materials A and B more similar than C and D?", which arise in many computer graphics applications. In the absence of such metrics, our studies compare different appearance models in terms of how capable they are of depicting/conveying a collection of meaningful perceptual qualities. To address this problem, we introduce a methodology to compute the perceived pairwise similarity between textures from material samples that makes use of patch-based texture synthesis algorithms and is inspired by the notion of Just-Noticeable Differences. Our technique overcomes some of the issues posed by previous texture similarity collection methods and produces meaningful distances between samples.
In summary, the contents presented in this thesis delve deeply into how humans perceive digital and real materials through different senses, provide a better understanding of texture similarity through the development of a perceptually based metric, and lay the groundwork for further investigations into the perception of digital materials.
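    The JND-inspired pairwise texture distance mentioned above could, purely for illustration, be operationalized roughly as follows; this is an assumption-laden sketch in the spirit of the description (blend level at a detection threshold read off a psychometric function), not the thesis' actual procedure.

```python
# Illustrative sketch of a JND-style texture distance: patches of texture B are
# assumed to be blended into texture A at increasing proportions, observers try
# to detect the change, and the blend level at a 75% detection threshold is
# interpolated. A small threshold means the pair is perceptually far apart.
import numpy as np

def jnd_distance(blend_levels, detection_rates, criterion=0.75):
    """Interpolate the blend proportion where detection reaches `criterion`;
    the distance is its reciprocal (detectable at a small blend = far apart)."""
    levels = np.asarray(blend_levels, dtype=float)
    rates = np.asarray(detection_rates, dtype=float)
    threshold = np.interp(criterion, rates, levels)   # assumes rates increase
    return 1.0 / threshold

# Toy usage: detection climbs quickly, so this pair is perceptually distant
print(jnd_distance([0.1, 0.2, 0.4, 0.8], [0.5, 0.7, 0.9, 1.0]))
```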

    The design of personal ambient displays

    Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1999. Includes bibliographical references (leaves 58-59). The goal of this thesis is to investigate the design of personal ambient displays. These are small, physical devices worn to display information to a person in a subtle, persistent, and private manner. They can be small enough to be carried in a pocket, worn as a watch, or even adorned like jewelry. In my implementations, information is displayed solely through tactile modalities such as thermal change (heating and cooling), movement (shifting and vibration), and change of shape (expanding, contracting, and deformation). Using a tactile display allows information to be kept private and reduces the chance of overloading primary visual and auditory activities. The display can remain ambient, transmitting information in the background of a person's perception through simple, physical means. The specific focus of this thesis is to create a number of these tactile displays, to identify and implement applications they can serve, and to evaluate aspects of their effectiveness. I have created a group of small, wireless objects that can warm up and cool down or gently move or shift. Users can reconfigure each display so that information sources like stock data or the activity of people on the internet are mapped to these different tactile modalities. Furthermore, in this thesis I consider the implications that human perception has on the design of these displays and examine potential application areas for further implementations. Craig Alexander Wisneski. S.M.

    Spatial auditory display for acoustics and music collections

    PhD thesis. This thesis explores how audio can be better incorporated into how people access information and does so by developing approaches for creating three-dimensional audio environments with low processing demands. This is done by investigating three research questions. Mobile applications have processor and memory requirements that restrict the number of concurrent static or moving sound sources that can be rendered with binaural audio. Is there a more efficient approach that is as perceptually accurate as the traditional method? This thesis concludes that virtual Ambisonics is an efficient and accurate means to render a binaural auditory display consisting of noise signals placed on the horizontal plane without head tracking. Virtual Ambisonics is then more efficient than convolution of HRTFs if more than two sound sources are concurrently rendered or if movement of the sources or head tracking is implemented. Complex acoustics models require significant amounts of memory and processing. If the memory and processor loads for a model are too large for a particular device, that model cannot be interactive in real-time. What steps can be taken to allow a complex room model to be interactive by using less memory and decreasing the computational load? This thesis presents a new reverberation model based on hybrid reverberation which uses a collection of B-format IRs. A new metric for determining the mixing time of a room is developed and interpolation between early reflections is investigated. Though hybrid reverberation typically uses a recursive filter such as an FDN for the late reverberation, an average late reverberation tail is instead synthesised for convolution reverberation. Commercial interfaces for music search and discovery use little aural information even though the information being sought is audio. How can audio be used in interfaces for music search and discovery? This thesis looks at 20 interfaces and determines that several themes emerge from past interfaces. These include using a two- or three-dimensional space to explore a music collection, allowing concurrent playback of multiple sources, and tools such as auras to control how much information is presented. A new interface, the amblr, is developed because virtual two-dimensional spaces populated by music have been a common approach, but not yet a perfected one. The amblr is also interpreted as an art installation which was visited by approximately 1000 people over 5 days. The installation maps the virtual space created by the amblr to a physical space.
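    To illustrate why virtual Ambisonics scales better than per-source HRTF convolution, here is a minimal sketch of first-order B-format encoding of horizontal-plane sources: each added source only adds cheap gains into a shared W/X/Y bed, while the HRTF convolutions happen once per virtual loudspeaker rather than once per source. The decoder and HRTF stage are omitted and would vary by implementation; this is not the thesis' code.

```python
# Minimal first-order (horizontal-only) Ambisonic encoding sketch.
import numpy as np

def encode_first_order(signals, azimuths_deg):
    """signals: (n_sources, n_samples); returns B-format channels (3, n_samples)."""
    s = np.asarray(signals, dtype=float)
    az = np.radians(np.asarray(azimuths_deg, dtype=float))
    w = (s * (1.0 / np.sqrt(2.0))).sum(axis=0)   # omnidirectional channel
    x = (s * np.cos(az)[:, None]).sum(axis=0)    # front-back figure-of-eight
    y = (s * np.sin(az)[:, None]).sum(axis=0)    # left-right figure-of-eight
    return np.vstack([w, x, y])

# Toy usage: two noise sources at +/-45 degrees share one B-format bed
rng = np.random.default_rng(0)
bformat = encode_first_order(rng.standard_normal((2, 48000)), [45, -45])
```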