
    A Neural Model of Biased Oscillations in Aplysia Head-Waving Behavior

    A long-term bias in the exploratory head-waving behavior of Aplysia can be induced using bright lights as an aversive stimulus: coupling onset of the lights with head movements to one side results in a bias away from that side (Cook & Carew, 1986). This bias has been interpreted as a form of operant conditioning, and has previously been simulated with a neural network model based on associative synaptic facilitation (Raymond, Baxter, Buonomano, & Byrne, 1992). In this article we simulate the head-waving behavior using a recurrent gated dipole, a nonlinear dynamical neural model that has previously been used to explain various data including oscillatory behavior in biological pacemakers. Within the recurrent gated dipole, two channels operate antagonistically to generate oscillations, which drive the side-to-side head waving. The frequency of oscillations depends on transmitter mobilization dynamics, which exhibit both short- and long-term adaptation. We assume that light onset results in a nonspecific increase in arousal to both channels of the dipole. Repeated pairing of arousal increments with activation of one channel (the "reinforced" channel) of the dipole leads to a bias in transmitter dynamics, which causes the oscillation to last a shorter time on the reinforced channel than on the non-reinforced channel. Our model provides a parsimonious explanation of the observed behavior, and it avoids some of the unexpected results obtained with the Raymond et al. model. In addition, our model makes predictions concerning the rate of onset and extinction of the biases, and it suggests new lines of experimentation to test the nature of the head-waving behavior. Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N0014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499); A.P. Sloan Foundation (BR-3122)
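    As a rough illustration of the opponent mechanism described above, the Python sketch below simulates two antagonistic channels whose transmitter gates deplete with use and slowly recover, so that control alternates from side to side. It is a toy simplification, not the authors' recurrent gated dipole: the function name, the self-excitation bonus fb given to the active channel, the switching rule, and all parameter values are illustrative assumptions.

        import numpy as np

        def dipole_alternation(steps=4000, dt=0.05, I=1.0, fb=1.0,
                               beta=0.02, gamma=0.15, arousal=0.0):
            # Toy sketch (not the paper's model): both channels receive a tonic
            # input I plus a nonspecific arousal term; the currently active channel
            # also gets a self-excitation bonus fb, a crude stand-in for recurrence.
            z = np.array([1.0, 0.95])      # transmitter stores, slight asymmetry
            active = 0                     # which side currently drives the head
            trace = []
            for _ in range(steps):
                s = np.full(2, I + arousal)
                s[active] += fb                        # hysteresis for the winner
                gated = s * z                          # transmitter-gated signals
                active = int(np.argmax(gated))         # competition between channels
                use = np.zeros(2)
                use[active] = s[active]                # only the winner spends transmitter
                z += (beta * (1.0 - z) - gamma * use * z) * dt   # deplete / recover
                trace.append(active)
            return np.array(trace)         # 0/1 sequence of side-to-side alternation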

    Skeleton-aided Articulated Motion Generation

    This work makes the first attempt to generate an articulated human motion sequence from a single image. On the one hand, we utilize paired inputs, including human skeleton information as a motion embedding and a single human image as an appearance reference, to generate novel motion frames based on the conditional GAN infrastructure. On the other hand, a triplet loss is employed to encourage appearance smoothness between consecutive frames. As the proposed framework is capable of jointly exploiting the image appearance space and the articulated/kinematic motion space, it generates realistic articulated motion sequences, in contrast to most previous video generation methods, which yield blurred motion effects. We test our model on two human action datasets, KTH and Human3.6M, and the proposed framework generates very promising results on both datasets. Comment: ACM MM 201
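    The triplet term can be read roughly as follows. The PyTorch sketch below is one plausible construction, not the authors' exact formulation: the anchor and positive are consecutive generated frames, the negative is a temporally distant frame from the same sequence, and the margin value is an assumption. In training, such a term would be added to the conditional GAN objective with some weight.

        import torch
        import torch.nn.functional as F

        def temporal_triplet_loss(frames, margin=0.2):
            # frames: (T, C, H, W) tensor holding one generated sequence.
            # Anchor = frame t, positive = frame t+1 (should look similar),
            # negative = a temporally distant frame from the same sequence
            # (allowed to look different).
            T = frames.size(0)
            anchor = frames[:-1].flatten(1)
            positive = frames[1:].flatten(1)
            negative = torch.roll(frames, shifts=T // 2, dims=0)[:-1].flatten(1)
            return F.triplet_margin_loss(anchor, positive, negative, margin=margin)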

    Chaoskampf

    Excerpt: The road to Camp On High was a two-lane highway that snaked uneasily up the side of Cedar Mountain. Quinn sat in the back of the van, next to a window that looked out into empty space. Somehow, the other kids were sleeping through this, three neat rows of lolling heads, ear buds dangling. Earlier in the ride, sturdy evergreens had covered the mountainside, jutting upward and waiting to catch the van that would, any second-Quinn was convinced-careen over the edge. But by now the trees had grown fragile and sparse, exposing gashes of red-orange rock and promising nothing

    Early Recognition of Human Activities from First-Person Videos Using Onset Representations

    In this paper, we propose a methodology for early recognition of human activities from videos taken with a first-person viewpoint. Early recognition, also known as activity prediction, is the ability to infer an ongoing activity at its early stage. We present an algorithm to recognize activities targeted at the camera from streaming videos, enabling the system to predict intended activities of the interacting person and to avoid harmful events before they actually happen. We introduce the novel concept of 'onset', which efficiently summarizes pre-activity observations, and design an approach that considers event history in addition to ongoing video observation for early first-person recognition of activities. We propose to represent onset using cascade histograms of time-series gradients, and we describe a novel algorithmic setup to take advantage of onset for early recognition of activities. The experimental results clearly illustrate that the proposed concept of onset enables better and earlier recognition of human activities from first-person videos.
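    One plausible way to realize "cascade histograms of time-series gradients" is sketched below; the per-frame feature input, bin count, gradient range, and number of cascade levels are illustrative assumptions rather than the authors' settings.

        import numpy as np

        def onset_descriptor(pre_activity_feats, n_bins=8, levels=3,
                             grad_range=(-1.0, 1.0)):
            # pre_activity_feats: (T, D) array of per-frame features observed
            # before the activity of interest begins.
            grads = np.diff(pre_activity_feats, axis=0)    # time-series gradients
            hists = []
            for level in range(levels):                    # cascade: 1, 2, 4, ... windows
                for chunk in np.array_split(grads, 2 ** level, axis=0):
                    h, _ = np.histogram(chunk, bins=n_bins, range=grad_range)
                    hists.append(h / max(h.sum(), 1))      # normalized histogram
            return np.concatenate(hists)                   # fixed-length onset vector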

    On the Status of Highly Entropic Objects

    It has been proposed that the entropy of any object must satisfy fundamental (holographic or Bekenstein) bounds set by the object's size and perhaps its energy. However, most discussions of these bounds have ignored the possibility that objects violating the putative bounds could themselves become important components of Hawking radiation. We show that this possibility cannot a priori be neglected in existing derivations of the bounds. Thus this effect could potentially invalidate these derivations; but it might also lead to observational evidence for the bounds themselves. Comment: 6 pages, RevTex, a few editorial changes
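    For context, the bounds at issue are usually written in the following standard forms (quoted as background, not taken from this paper): the Bekenstein bound ties entropy to the object's energy E and circumscribing radius R, while the holographic bound involves only the area A of an enclosing surface.

        S \;\le\; \frac{2\pi k_B E R}{\hbar c} \qquad \text{(Bekenstein)},
        \qquad
        S \;\le\; \frac{k_B c^3 A}{4 G \hbar} \;=\; \frac{k_B A}{4\,\ell_P^2} \qquad \text{(holographic)}.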