36 research outputs found

    The glare effect and the perception of luminosity

    Get PDF
    The impression of self-luminosity in the glare effect was studied in two experiments. In experiment 1 the target (CS) was set to the highest luminance of the field, and subjects were asked to adjust the luminance ramp of the inducers (R) against five backgrounds (B) to the point where they began to see CS as self-luminous. A linear relationship was found between background luminance and luminance ramp. Another group of subjects carried out the same task in experiment 2, but this time CS and R were linked so that CS always had the same luminance as the highest luminance level of R as adjustments were performed. It was found that: (i) adjustments were always lower than the highest luminance available; (ii) the linear relationship between background and luminance ramp was confirmed; (iii) observers reported a compelling impression of self-luminous grays. The data are discussed in relation to Bonato and Gilchrist's model for the perception of luminosity. The authors advance the hypothesis that luminance ramps are used at an early stage of encoding for the perception of luminosity.
    Zavagno, D; Caputo, Giovanni Battista

    Spatial Aggregation: Theory and Applications

    Full text link
    Visual thinking plays an important role in scientific reasoning. Based on research in automating diverse reasoning tasks about dynamical systems, nonlinear controllers, kinematic mechanisms, and fluid motion, we have identified a style of visual thinking: imagistic reasoning. Imagistic reasoning organizes computations around image-like, analogue representations so that perceptual and symbolic operations can be brought to bear to infer structure and behavior. Programs incorporating imagistic reasoning have been shown to perform at an expert level in domains that defy current analytic or numerical methods. We have developed a computational paradigm, spatial aggregation, to unify the description of a class of imagistic problem solvers. A program written in this paradigm has the following properties. It takes a continuous field and optional objective functions as input, and produces high-level descriptions of structure, behavior, or control actions. It computes multiple layers of intermediate representations, called spatial aggregates, by forming equivalence classes and adjacency relations. It employs a small set of generic operators, such as aggregation, classification, and localization, to perform bidirectional mapping between the information-rich field and successively more abstract spatial aggregates. It uses a data structure, the neighborhood graph, as a common interface to modularize computations. To illustrate our theory, we describe the computational structure of three implemented problem solvers -- KAM, MAPS, and HIPAIR -- in terms of the spatial aggregation generic operators by mixing and matching a library of commonly used routines.
    Comment: See http://www.jair.org/ for any accompanying file
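    The aggregation step the abstract describes can be sketched in a few lines: build a neighborhood graph over sample points of a field, then form equivalence classes by merging neighbors whose field values are close. This is a minimal illustrative sketch under assumed helpers (`neighborhood_graph`, `aggregate`, the `radius` and `tol` parameters); it is not the API of KAM, MAPS, or HIPAIR.

```python
from itertools import product

def neighborhood_graph(points, radius=1.5):
    """Adjacency relation: points within `radius` of each other are neighbors."""
    graph = {p: [] for p in points}
    for p, q in product(points, points):
        if p != q and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2:
            graph[p].append(q)
    return graph

def aggregate(field, graph, tol=0.1):
    """Classification: flood-fill neighbors whose field values differ by at
    most `tol`, yielding equivalence classes (spatial aggregates)."""
    classes, seen = [], set()
    for p in field:
        if p in seen:
            continue
        cls, stack = [], [p]
        while stack:
            q = stack.pop()
            if q in seen:
                continue
            seen.add(q)
            cls.append(q)
            stack.extend(r for r in graph[q]
                         if abs(field[r] - field[q]) <= tol)
        classes.append(cls)
    return classes

# Toy field: two value plateaus on a strip of four sample points.
field = {(0, 0): 0.0, (1, 0): 0.05, (2, 0): 1.0, (3, 0): 1.02}
graph = neighborhood_graph(list(field))
print(aggregate(field, graph))  # two aggregates, one per plateau
```

    Higher layers of spatial aggregates would be produced by repeating the same two operators over the classes themselves, which is the multi-layer structure the abstract attributes to the paradigm.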

    Visual routines and attention

    Get PDF
    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (leaves 90-93). By Satyajit Rao.

    Perceived internal depth in rotating and translating objects

    Get PDF
    Previous research has indicated that observers use differences between velocities and ratios of velocities to judge the depth within a moving object, although depth cannot in general be determined from these quantities. In four experiments we examined the relative effects of velocity difference and velocity ratio on judged depth within a transparent object that was rotating about a vertical axis while translating horizontally; we also examined the effect of the velocity difference for pure rotations and pure translations, and for objects that varied in simulated internal depth. Both the velocity difference and the velocity ratio affected judged depth, with the difference having the larger effect. The effect of velocity difference was greater for pure rotations than for pure translations. Simulated depth did not affect judged depth unless there was a corresponding change in the projected width of the object. Observers appear to use the velocity difference, the velocity ratio, and the projected width of the object heuristically to judge internal object depth, rather than using image information from which relative depth could potentially be recovered.
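    The two image quantities the abstract discusses can be made concrete with a small computation. This is a hypothetical sketch, not the authors' stimulus code: it assumes orthographic projection, a rotation rate `omega`, a common horizontal translation `t_x`, and near/far surface depths measured from the rotation axis, so that rotation contributes a horizontal image velocity of roughly `omega * depth` at the frontmost moment.

```python
def image_velocities(omega, t_x, depth_near, depth_far):
    """Approximate horizontal image velocities of the near and far surfaces
    of an object rotating about a vertical axis while translating.

    Rotation moves the near and far surfaces in opposite directions;
    translation adds the same t_x to both.
    """
    v_near = t_x + omega * depth_near
    v_far = t_x - omega * depth_far
    return v_near, v_far

v_near, v_far = image_velocities(omega=0.5, t_x=2.0, depth_near=1.0, depth_far=1.0)
difference = v_near - v_far  # depends only on rotation rate and depth
ratio = v_near / v_far       # shifted by the common translation term
print(difference, ratio)
```

    Under these assumptions the velocity difference is invariant to the translation while the ratio is not, which is one way to see why the difference is the more reliable of the two heuristic cues for a rotating object.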

    Matching algorithms for handling three dimensional molecular co-ordinate data.

    Get PDF

    Technology Made Legible: A Cultural Study of Software as a Form of Writing in the Theories and Practices of Software Engineering

    Get PDF
    My dissertation proposes an analytical framework for the cultural understanding of the group of technologies commonly referred to as 'new' or 'digital'. I aim at dispelling what the philosopher Bernard Stiegler calls the 'deep opacity' that still surrounds new technologies, and that constitutes one of the main obstacles to their conceptualization today. I argue that such a critical intervention is essential if we are to take new technologies seriously, and if we are to engage with them on both the cultural and the political level. I understand new technologies as technologies based on software. I therefore suggest that a complex understanding of technologies, and of their role in contemporary culture and society, requires, as a preliminary step, an investigation of how software works. This involves going beyond studying the intertwined processes of its production, reception and consumption - processes that typically constitute the focus of media and cultural studies. Instead, I propose a way of accessing the ever-present but allegedly invisible codes and languages that constitute software. I thus reformulate the problem of understanding software-based technologies as a problem of making software legible. I build my analysis on the concept of software advanced by Software Engineering, a technical discipline born in the late 1960s that defines software development as an advanced writing technique and software as a text. This conception of software enables me to analyse it through a number of reading strategies. I draw on the philosophical framework of deconstruction as formulated by Jacques Derrida in order to identify the conceptual structures underlying software and hence 'demystify' the opacity of new technologies. Ultimately, I argue that a deconstructive reading of software enables us to recognize the constitutive, if unacknowledged, role of technology in the formation of both the human and academic knowledge.
This reading leads to a self-reflexive interrogation of the media and cultural studies approach to technology and enhances our capacity to engage with new technologies without separating our cultural understanding from our political practices.