
    Underspecified beta reduction

    For ambiguous sentences, traditional semantics construction produces large numbers of higher-order formulas, which must then be beta-reduced individually. Underspecified versions can produce compact descriptions of all readings, but it is not known how to perform beta reduction on these descriptions. We show how to do this using beta reduction constraints in the constraint language for lambda structures (CLLS).

    Beta reduction constraints

    The constraint language for lambda structures (CLLS) can model lambda terms that are known only partially. In this paper, we introduce beta reduction constraints to describe beta reduction steps between partially known lambda terms. We show that beta reduction constraints can be expressed in an extension of CLLS by group parallelism. We then extend a known semi-decision procedure for CLLS to also deal with group parallelism, and thus with beta reduction constraints.
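    For contrast with the underspecified setting these abstracts address, the following is a minimal sketch of ordinary beta reduction on fully known lambda terms. It is an illustration only, not the CLLS constraint machinery; the tuple encoding and the assumption of distinct variable names (which lets us skip capture-avoiding renaming) are choices made here for brevity.

```python
# Terms are tuples: ("var", name), ("lam", name, body), ("app", fun, arg).
# Variable names are assumed globally distinct, so capture-avoiding
# renaming is not needed in this sketch.

def substitute(term, name, value):
    """Replace free occurrences of `name` in `term` with `value`."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        if term[1] == name:  # `name` is shadowed by this binder; stop here
            return term
        return ("lam", term[1], substitute(term[2], name, value))
    return ("app", substitute(term[1], name, value),
                   substitute(term[2], name, value))

def beta_step(term):
    """Perform one leftmost-outermost beta reduction step, if any."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, param, body), arg = term
        return substitute(body, param, arg)   # the actual beta step
    if term[0] == "app":
        return ("app", beta_step(term[1]), beta_step(term[2]))
    if term[0] == "lam":
        return ("lam", term[1], beta_step(term[2]))
    return term

# (\x. x x) y  beta-reduces to  y y
redex = ("app", ("lam", "x", ("app", ("var", "x"), ("var", "x"))), ("var", "y"))
print(beta_step(redex))  # ('app', ('var', 'y'), ('var', 'y'))
```

    The point of the underspecified work above is that when a sentence has many readings, each reading is such a term; rather than enumerating and reducing them one by one as `beta_step` would, beta reduction constraints describe all the reduction steps at once on a compact description.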

    Processing underspecified semantic representations in the constraint language for lambda structures

    The constraint language for lambda structures (CLLS) is an expressive language of tree descriptions which combines dominance constraints with powerful parallelism and binding constraints. CLLS was introduced as a uniform framework for defining underspecified semantic representations of natural language sentences, covering scope, ellipsis, and anaphora. This article presents saturation-based algorithms for processing the complete language of CLLS. It also gives an overview of previous results on questions of processing and complexity.

    Investigations of model validity using residuals

    Procedures for assessing model adequacy have been investigated. Since any detection of model misspecification usually begins with an examination of a set of sample residuals resulting from a fitted model, residual predictors have been examined. A necessary and sufficient condition for a residual predictor to have zero expectation and a specified covariance matrix has been obtained. Within this class of residual predictors, the one that minimizes the expected sum of squared prediction errors was found.

    Towards the detection of model misspecification, it was shown that a set of predicted residuals can be transformed to a set of independent Beta variables. Since the expected values of the Beta variables have a known ordering under the null hypothesis, each ordering of a set of Beta values can be ranked from most likely to least likely based upon the probability of each possible ordering. Thus, if incorrect model specification causes an extreme ordering to appear, then in principle it can be detected by calculating the extremeness of the ordering.

    The particular type of misspecification caused by the choice of an inappropriate degree in polynomial regression has been investigated, and a new procedure, based upon the Durbin-Watson d-statistic, has been proposed for determining the appropriate polynomial degree. It was shown that the d-statistic can be transformed to another statistic, F_d, whose distribution, for the case of polynomial regression, differs from a central F-distribution only by quantities on the order of 1/n^{1/2}. In the context of selecting the proper polynomial degree, the power of the F_d-test was compared to that of the forward selection F-test through examination of the probability limits of the two test statistics; this study showed that the F_d-test appears to be the more sensitive to underspecification. The probability limit of the Durbin-Watson d-statistic was also examined to derive a relationship between the amount of autocorrelation among the true residuals which would be needed to produce the same probability limit as that produced by an omitted variable. Finally, an example was given where the F_d-test did detect the real need for a quadratic term which the usual forward selection F-test failed to detect.
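    The Durbin-Watson d-statistic at the heart of this procedure is simple to state: it is the sum of squared successive differences of the residuals divided by their sum of squares. A minimal sketch follows (it computes d itself, not the dissertation's transformed F_d statistic):

```python
# Durbin-Watson d-statistic on a residual series.
# d near 2 suggests no first-order autocorrelation; d well below 2 suggests
# positive autocorrelation; d well above 2 suggests negative autocorrelation.

def durbin_watson(residuals):
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Alternating residuals (strong negative autocorrelation) push d toward 4;
# a slowly varying residual series would push d toward 0.
print(durbin_watson([1, -1, 1, -1, 1, -1]))  # 20/6 ≈ 3.33
```

    A residual pattern like this alternating one is exactly what an omitted quadratic term can induce in polynomial regression, which is why a d-based test can signal an underspecified polynomial degree.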

    Research and development at ORNL/CESAR towards cooperating robotic systems for hazardous environments

    One of the frontiers in intelligent machine research is the understanding of how constructive cooperation among multiple autonomous agents can be effected. The effort at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) focuses on two problem areas: (1) cooperation by multiple mobile robots in dynamic, incompletely known environments; and (2) cooperating robotic manipulators. Particular emphasis is placed on experimental evaluation of research and development using the CESAR robot system testbeds, including three mobile robots and a seven-axis, kinematically redundant mobile manipulator. This paper summarizes initial results of research addressing the decoupling of position and force control for two manipulators holding a common object, and path planning for multiple robots in a common workspace.

    SpanDrop: Simple and Effective Counterfactual Learning for Long Sequences

    Distilling supervision signal from a long sequence to make predictions is a challenging task in machine learning, especially when not all elements in the input sequence contribute equally to the desired output. In this paper, we propose SpanDrop, a simple and effective data augmentation technique that helps models identify the true supervision signal in a long sequence with very few examples. By directly manipulating the input sequence, SpanDrop randomly ablates parts of the sequence at a time and asks the model to perform the same task, to emulate counterfactual learning and achieve input attribution. Based on theoretical analysis of its properties, we also propose a variant of SpanDrop based on the beta-Bernoulli distribution, which yields diverse augmented sequences while providing a learning objective that is more consistent with the original dataset. We demonstrate the effectiveness of SpanDrop on a set of carefully designed toy tasks, as well as various natural language processing tasks that require reasoning over long sequences to arrive at the correct answer, and show that it helps models improve performance both when data is scarce and abundant. Comment: Peng Qi and Guangtao Wang contributed equally.
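    The core augmentation can be sketched in a few lines. This is a hedged illustration in the spirit of the abstract, not the paper's implementation: the span granularity (here, whole tokens), the drop rate, and the Beta parameters are assumptions chosen for the example.

```python
import random

def span_drop(spans, p=0.1, rng=random):
    """Randomly ablate each span independently with probability p."""
    return [s for s in spans if rng.random() >= p]

def beta_span_drop(spans, alpha=1.0, beta=9.0, rng=random):
    """Beta-Bernoulli variant: first draw a sequence-level drop rate
    p ~ Beta(alpha, beta), then drop each span with that probability.
    With alpha=1, beta=9, E[p] = alpha / (alpha + beta) = 0.1, but the
    rate varies per sequence, yielding more diverse augmentations."""
    p = rng.betavariate(alpha, beta)
    return [s for s in spans if rng.random() >= p]

sentence = "the quick brown fox jumps over the lazy dog".split()
random.seed(0)
print(" ".join(span_drop(sentence, p=0.2)))   # a random subsequence
print(" ".join(beta_span_drop(sentence)))     # another random subsequence
```

    Each augmented sequence is paired with the original label, so the model must learn to answer correctly even when some distractor spans are ablated, approximating the counterfactual "would the answer change without this span?"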

    Context-driven natural language interpretation
