98,063 research outputs found

    Five Surprisingly Simple Complexities

    We describe five fairly formidable-looking expressions that turn out to be rather simple. They are furthermore connected by a chain of implications.

    Effects of semantic and syntactic complexities and aspectual class on past tense production

    This paper reports results from a series of experiments that investigated whether semantic and/or syntactic complexity influences young Dutch children’s production of past tense forms. The constructions used in the three experiments were (i) simple sentences (the Simple Sentence Experiment), (ii) complex sentences with CP complements (the Complement Clause Experiment) and (iii) complex sentences with relative clauses (the Relative Clause Experiment). The stimuli involved both atelic and telic predicates. The goal of this paper is to address the following questions. Q1. Does semantic complexity regarding temporal anchoring influence the types of errors that children make in the experiments? For example, do children make certain types of errors when a past tense has to be anchored to the Utterance Time (UT), as compared to when it has to be anchored to the matrix topic time (TT)? Q2. Do different syntactic positions influence children’s performance on past-tense production? Do children perform better in the Simple Sentence Experiment than in complex sentences involving two finite clauses (the Complement Clause Experiment and the Relative Clause Experiment)? In complex sentence trials, do children perform differently when the CPs are complements vs. when the CPs are adjunct clauses (Lebeaux 1990, 2000)? Q3. Do Dutch children make more errors with certain types of predicates (such as atelic predicates)? Alternatively, do children produce a certain type of error with certain types of predicates (such as producing a perfect aspect with punctual predicates)? Bronckart and Sinclair (1973), for example, found that until the age of 6, French children showed a tendency to use the passé composé with perfective events and the simple present with imperfective events; we will investigate whether the equivalent of this is observed in Dutch.

    Time's Barbed Arrow: Irreversibility, Crypticity, and Stored Information

    We show why the amount of information communicated between the past and future--the excess entropy--is not in general the amount of information stored in the present--the statistical complexity. This is a puzzle, and a long-standing one, since the latter is what is required for optimal prediction, but the former describes observed behavior. We lay out a classification scheme for dynamical systems and stochastic processes that determines when these two quantities are the same or different. We do this by developing closed-form expressions for the excess entropy in terms of optimal causal predictors and retrodictors--the epsilon-machines of computational mechanics. A process's causal irreversibility and crypticity are key determining properties.
    Comment: 4 pages, 2 figures
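    To make the "stored vs. communicated" distinction concrete, here is a minimal Python sketch (our illustration, not code from the paper) that computes both quantities for the Golden Mean Process, a standard two-state example from computational mechanics. Block probabilities come from symbol-labeled transfer matrices, the excess entropy E is the limit of H(L) - h*L, and the statistical complexity C_mu is the entropy of the stationary causal-state distribution.

    import numpy as np
    from itertools import product

    # Golden Mean Process epsilon-machine: state A emits 0 or 1 with prob 1/2;
    # state B (entered after a 1) always emits 0, so "11" never occurs.
    # T[x][i, j] = Pr(emit symbol x and move from state i to state j).
    T = {0: np.array([[0.5, 0.0], [1.0, 0.0]]),
         1: np.array([[0.0, 0.5], [0.0, 0.0]])}

    M = T[0] + T[1]                                  # state-to-state chain
    vals, vecs = np.linalg.eig(M.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])  # stationary distribution
    pi /= pi.sum()                                   # pi = [2/3, 1/3]

    def word_prob(word):
        """Exact stationary probability of a word, via transfer matrices."""
        v = pi.copy()
        for x in word:
            v = v @ T[x]
        return v.sum()

    def block_entropy(L):
        """H(L): Shannon entropy (bits) of length-L blocks."""
        ps = (word_prob(w) for w in product((0, 1), repeat=L))
        return -sum(p * np.log2(p) for p in ps if p > 0)

    h = pi[0] * 1.0                           # entropy rate: 1 bit in A, 0 in B
    C_mu = -sum(p * np.log2(p) for p in pi)   # statistical complexity

    for L in (2, 6, 10):                      # E = lim_L [H(L) - h*L]
        print(f"L={L:2d}: H(L) - h*L = {block_entropy(L) - h * L:.4f} bits")
    print(f"C_mu = {C_mu:.4f} bits")

    Here H(L) - h*L settles at E ≈ 0.2516 bits while C_mu ≈ 0.9183 bits, so the process stores more information than it transmits; the gap C_mu - E ≈ 0.667 bits is precisely the crypticity the abstract refers to.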

    Learning Complexity-Aware Cascades for Deep Pedestrian Detection

    The design of complexity-aware cascaded detectors, combining features of very different complexities, is considered. A new cascade design procedure is introduced by formulating cascade learning as the Lagrangian optimization of a risk that accounts for both accuracy and complexity. A boosting algorithm, denoted complexity-aware cascade training (CompACT), is then derived to solve this optimization. CompACT cascades are shown to seek an optimal trade-off between accuracy and complexity by pushing features of higher complexity to the later cascade stages, where only a few difficult candidate patches remain to be classified. This enables the use of features of vastly different complexities in a single detector. As a result, the feature pool can be expanded to include features previously impractical for cascade design, such as the responses of a deep convolutional neural network (CNN). This is demonstrated through the design of a pedestrian detector with a pool of features whose complexities span orders of magnitude. The resulting cascade generalizes the combination of a CNN with an object proposal mechanism: rather than a pre-processing stage, CompACT cascades seamlessly integrate CNNs into their stages. This enables state-of-the-art performance on the Caltech and KITTI datasets at fairly fast speeds.
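    The Lagrangian trade-off is easy to sketch. In the toy Python example below (our construction; the error and cost numbers are invented, and the selection rule is a simplification of the paper's boosting formulation), each stage picks the weak learner minimizing error + eta * survival * cost, where survival is the fraction of candidate patches still alive at that stage. Discounting complexity by the survival rate is what pushes expensive features to late stages.

    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str
        error: float   # weighted training error of its weak learner (invented)
        cost: float    # evaluation cost per patch, in relative FLOPs (invented)

    def select(pool, survival, eta):
        # Lagrangian risk (sketch): accuracy term plus eta times the expected
        # complexity, which shrinks as fewer patches reach this stage.
        return min(pool, key=lambda f: f.error + eta * survival * f.cost)

    pool = [Feature("channel-sum",  error=0.40, cost=1.0),
            Feature("HOG-like",     error=0.25, cost=10.0),
            Feature("CNN-response", error=0.05, cost=1000.0)]

    for stage, survival in enumerate([1.0, 0.1, 0.001]):
        best = select(pool, survival, eta=0.02)
        print(f"stage {stage} (survival {survival}): {best.name}")
    # -> channel-sum, then HOG-like, then CNN-response: the expensive CNN
    #    feature only becomes optimal once almost no patches remain.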

    Input Fast-Forwarding for Better Deep Learning

    This paper introduces a new architectural framework, known as input fast-forwarding, that can enhance the performance of deep networks. The main idea is to incorporate a parallel path that sends representations of input values forward to deeper network layers. This scheme is substantially different from "deep supervision", in which the loss layer is re-introduced at earlier layers. The parallel path provided by fast-forwarding enhances the training process in two ways. First, it enables the individual layers to combine higher-level information (from the standard processing path) with lower-level information (from the fast-forward path). Second, this new architecture reduces the problem of vanishing gradients substantially because the fast-forwarding path provides a shorter route for gradient backpropagation. In order to evaluate the utility of the proposed technique, a Fast-Forward Network (FFNet), with 20 convolutional layers along with parallel fast-forward paths, has been created and tested. The paper presents empirical results that demonstrate improved learning capacity of FFNet due to fast-forwarding, as compared to GoogLeNet (with deep supervision) and CaffeNet, which are 4x and 18x larger in size, respectively. All of the source code and deep learning models described in this paper will be made available to the entire research community.
    Comment: Accepted in the 14th International Conference on Image Analysis and Recognition (ICIAR) 2017, Montreal, Canada
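    A minimal PyTorch sketch of the idea (our reconstruction; the three-stage depth, channel widths, and concatenate-then-1x1 merge are assumptions for illustration, not the 20-layer FFNet from the paper): every stage receives both the previous stage's output and a cheap projection of the raw input, so low-level information reaches deep layers and gradients get a short route back.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FastForwardStage(nn.Module):
        """One stage merging the standard path with a fast-forward path."""
        def __init__(self, in_ch, img_ch, out_ch):
            super().__init__()
            self.main = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                nn.ReLU(inplace=True))
            self.ff = nn.Conv2d(img_ch, out_ch, 1)   # cheap input projection
            self.merge = nn.Conv2d(2 * out_ch, out_ch, 1)

        def forward(self, h, img):
            h = self.main(h)
            # resize the fast-forwarded input to this stage's spatial size
            f = F.adaptive_avg_pool2d(self.ff(img), h.shape[-2:])
            return self.merge(torch.cat([h, f], dim=1))

    class TinyFFNet(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            widths = [3, 16, 32, 64]
            self.stages = nn.ModuleList(
                FastForwardStage(widths[i], 3, widths[i + 1]) for i in range(3))
            self.head = nn.Linear(widths[-1], num_classes)

        def forward(self, img):
            h = img
            for stage in self.stages:
                h = stage(h, img)                 # input fast-forwarded to every stage
            return self.head(h.mean(dim=(2, 3)))  # global average pool

    logits = TinyFFNet()(torch.randn(2, 3, 32, 32))   # -> shape (2, 10)

    Note the contrast with deep supervision: no auxiliary loss is attached to early layers; instead, the input itself is re-injected along the parallel path.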