
    Mind the Gap: Investigating Toddlers’ Sensitivity to Contact Relations in Predictive Events

    Toddlers readily learn predictive relations between events (e.g., that event A predicts event B). However, they intervene on A to try to cause B in only a few contexts: when a dispositional agent initiates the event, or when the event is described with causal language. The current studies ask whether toddlers’ failures are due merely to the difficulty of initiating interventions or to more general constraints on the kinds of events they represent as causal. Toddlers saw a block slide towards a base, but an occluder prevented them from seeing whether the block contacted the base; after the block disappeared behind the occluder, a toy connected to the base did or did not activate. We hypothesized that if toddlers construed the events as causal, they would be sensitive to the contact relations between the participants in the predictive event. In Experiment 1, the block either moved spontaneously (no dispositional agent) or emerged already in motion (a dispositional agent was potentially present). Toddlers were sensitive to the contact relations only when a dispositional agent was potentially present. Experiment 2 confirmed that toddlers inferred a hidden agent was present when the block emerged in motion. In Experiment 3, the block moved spontaneously, but the events were described either with non-causal (“here’s my block”) or causal (“the block can make it go”) language. Toddlers were sensitive to the contact relations only when given causal language. These findings suggest that dispositional agency and causal language facilitate toddlers’ ability to represent causal relationships.
    Funding: John Templeton Foundation (#12667); James S. McDonnell Foundation (Causal Learning Collaborative Initiative); National Science Foundation (U.S.) CAREER Award (#0744213).

    Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is a focus of research in fields as diverse as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system: multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems try to exploit these findings to make interaction more natural and to improve system performance. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made using the full appearance information of the face, and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perceived facial traits is learnable by both holistic and structural approaches; (b) the most reliable predictions of facial trait judgments are obtained by certain types of holistic descriptions of the face appearance; and (c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
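The two representations contrasted in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's pipeline: the landmark coordinates and the tiny "image" below are made-up inputs, and a real system would feed either feature vector to a trained classifier or regressor.

```python
from itertools import combinations
import math

def holistic_features(image):
    """Holistic model: use the full appearance, i.e. every pixel value."""
    return [px for row in image for px in row]

def structural_features(landmarks):
    """Structural model: only the relations (pairwise distances)
    among salient facial points -- a much smaller representation."""
    return [math.dist(p, q) for p, q in combinations(landmarks, 2)]

# Made-up example inputs: a 3x3 grayscale "face" and four salient points
image = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]]
landmarks = [(0, 0), (2, 0), (1, 1), (1, 3)]

print(len(holistic_features(image)))        # 9 values (every pixel)
print(len(structural_features(landmarks)))  # 6 values (4-choose-2 distances)
```

Either vector can then be handed to a standard learner; the abstract's finding is that holistic descriptions tend to predict trait judgments more reliably overall, while specific structural distances matter for traits such as attractiveness and extroversion.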

    Local Dimensionality Reduction for Non-Parametric Regression

    Locally-weighted regression is a computationally efficient technique for non-linear regression. However, for high-dimensional data, it becomes numerically brittle and computationally too expensive if many local models must be maintained simultaneously. Thus, local linear dimensionality reduction combined with locally-weighted regression seems a promising solution. In this context, we review linear dimensionality-reduction methods, compare their performance on non-parametric locally-linear regression, and discuss their ability to extend to incremental learning. The considered methods fall into three groups: (1) reducing dimensionality only on the input data, (2) modeling the joint input-output data distribution, and (3) optimizing the correlation between projection directions and output data. Group 1 contains principal component regression (PCR); group 2 contains principal component analysis (PCA) in joint input and output space, factor analysis, and probabilistic PCA; and group 3 contains reduced-rank regression (RRR) and partial least squares (PLS) regression. Among the tested methods, only group 3 achieved robust performance even for a non-optimal number of components (factors or projection directions). In contrast, groups 1 and 2 failed when given fewer components, since these methods rely on a correct estimate of the true intrinsic dimensionality. Within group 3, PLS is the only method for which a computationally efficient incremental implementation exists.
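The failure mode separating group 1 from group 3 can be seen in a toy example. A minimal NumPy sketch on synthetic data (invented here, not from the paper): PCR's first component is chosen from input variance alone, so it misses a low-variance but predictive input direction, whereas the first PLS direction, which maximizes covariance with the output, finds it.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: only the low-variance third input actually drives y
X = rng.normal(size=(2000, 3)) * np.array([3.0, 2.0, 1.0])
y = X[:, 2] + 0.1 * rng.normal(size=2000)

Xc, yc = X - X.mean(axis=0), y - y.mean()

# Group 1 (PCR): the first principal component is chosen blind to y,
# so it latches onto the highest-variance input axis
pc1 = np.linalg.svd(Xc, full_matrices=False)[2][0]

# Group 3 (PLS, first direction): maximizes covariance with the output
w = Xc.T @ yc
w /= np.linalg.norm(w)

print(np.argmax(np.abs(pc1)))  # 0 -> dominant-variance axis, useless for y
print(np.argmax(np.abs(w)))    # 2 -> the axis that actually predicts y
```

This is exactly why groups 1 and 2 need the number of components to match the true intrinsic dimensionality, while group 3 degrades gracefully when given too few.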

    I-POMDP: An infomax model of eye movement

    Abstract—Modeling eye movements during search is important for building intelligent robotic vision systems and for understanding how humans select relevant information and structure behavior in real time. Previous models of visual search (VS) rely on the idea of “saliency maps,” which indicate likely locations for targets of interest; in these models the eyes move to the location of maximum saliency. This approach has several drawbacks: (1) it assumes that oculomotor control is a greedy process, i.e., every eye movement is planned as if no further eye movements were possible after it; (2) it does not account for temporal dynamics and how information is integrated over time; (3) it does not provide a formal basis for understanding how optimal search should vary as a function of the operating characteristics of the visual system. To address these limitations, we reformulate the problem of VS as an Information-gathering Partially Observable Markov Decision Process (I-POMDP). We find that the optimal control law depends heavily on the Foveal-Peripheral Operating Characteristic (FPOC) of the visual system.
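The core ingredients named in the abstract (a belief over target locations, an FPOC-style observation model, and information-driven fixation choice) can be sketched as follows. Note this is a one-step greedy toy, i.e. exactly the simplification the I-POMDP formulation goes beyond, and the accuracy numbers and prior are invented for illustration.

```python
import math
from itertools import product

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def accuracy(eccentricity, fovea=0.95, periphery=0.6):
    """Toy FPOC: detection accuracy is high at the fixated location
    (fovea) and drops off in the periphery. Numbers are invented."""
    return fovea if eccentricity == 0 else periphery

def expected_posterior_entropy(prior, fixation):
    """Expected uncertainty about the target after fixating `fixation`,
    given one noisy binary report ("target here?") per location."""
    n = len(prior)
    acc = [accuracy(abs(j - fixation)) for j in range(n)]
    total = 0.0
    for obs in product([0, 1], repeat=n):       # enumerate all reports
        joint = []
        for t in range(n):                      # hypothesis: target at t
            like = 1.0
            for j in range(n):
                truth = 1 if j == t else 0
                like *= acc[j] if obs[j] == truth else 1 - acc[j]
            joint.append(prior[t] * like)
        p_obs = sum(joint)
        if p_obs > 0:
            total += p_obs * entropy([q / p_obs for q in joint])
    return total

prior = [0.1, 0.3, 0.6]                         # belief over 3 locations
gains = [entropy(prior) - expected_posterior_entropy(prior, f)
         for f in range(3)]
best = max(range(3), key=lambda f: gains[f])    # greedy infomax fixation
print(best, [round(g, 3) for g in gains])
```

In the full I-POMDP, fixations are planned over sequences rather than one step at a time, and the resulting control law shifts as the foveal and peripheral accuracies (the FPOC) change, which is the dependence the paper analyzes.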