
    Invariant visual object recognition: biologically plausible approaches

    Key properties of inferior temporal cortex neurons are described, and then the biological plausibility of two leading approaches to invariant visual object recognition in the ventral visual system is assessed to investigate whether they account for these properties. Experiment 1 shows that VisNet performs object classification with random exemplars comparably to HMAX, except that the final-layer C neurons of HMAX have a very non-sparse representation (unlike that in the brain) that provides little information in the single-neuron responses about the object class. Experiment 2 shows that VisNet forms invariant representations when trained with different views of each object, whereas HMAX performs poorly when assessed with a biologically plausible pattern association network, as HMAX has no mechanism to learn view invariance. Experiment 3 shows that VisNet neurons do not respond to scrambled images of faces, and thus encode shape information. HMAX neurons, in contrast, responded with similarly high rates to the unscrambled and scrambled faces, indicating that low-level features including texture may be relevant to HMAX performance. Experiment 4 shows that VisNet can learn to recognize objects even when the view provided by the object changes catastrophically as it transforms, whereas HMAX has no learning mechanism in its S-C hierarchy that provides for view-invariant learning. These results highlight some requirements for the neurobiological mechanisms of high-level vision, and show how different approaches perform, to help clarify the fundamental principles underlying invariant visual object recognition in the ventral visual stream.
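    The contrast between sparse and non-sparse final-layer codes can be quantified with the Treves-Rolls population sparseness measure commonly used in this literature. The sketch below illustrates that measure on hypothetical firing-rate vectors; it is not code from the paper.

```python
import numpy as np

def population_sparseness(rates):
    """Treves-Rolls sparseness of a vector of firing rates.

    Values near the fraction of active neurons indicate a sparse code;
    values near 1 indicate a dense (non-sparse) code.
    """
    r = np.asarray(rates, dtype=float)
    n = r.size
    return (r.sum() / n) ** 2 / ((r ** 2).sum() / n + 1e-12)

# Hypothetical example: dense vs. sparse response vectors over 100 neurons.
dense = np.ones(100)                      # every neuron responds equally
sparse = np.zeros(100); sparse[:5] = 1.0  # only 5% of neurons respond
print(population_sparseness(dense))       # ~1.0, like the dense code described for the HMAX C layer
print(population_sparseness(sparse))      # ~0.05, a sparse, more brain-like code
```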

    Correlation between tunneling magnetoresistance and magnetization in dipolar coupled nanoparticle arrays

    The tunneling magnetoresistance (TMR) of a hexagonal array of dipolar coupled anisotropic magnetic nanoparticles is studied using a resistor network model and a realistic micromagnetic configuration obtained by Monte Carlo simulations. Analysis of the field-dependent TMR and the corresponding magnetization curve shows that dipolar interactions suppress the maximum TMR effect, increase or decrease the field sensitivity depending on the direction of the applied field, and introduce a strong dependence of the TMR on the field direction. For off-plane magnetic fields, maximum values in the TMR signal are associated with the critical field for irreversible rotation of the magnetization. This behavior is more pronounced in strongly interacting systems (magnetically soft), while for weakly interacting systems (magnetically hard) the maximum of TMR (Hmax) occurs below the coercive field (Hc), in contrast to the situation for non-interacting nanoparticles or in-plane fields (Hmax = Hc). The relation of our simulations to recent TMR measurements in self-assembled Co nanoparticle arrays is discussed. Comment: 21 pages, 8 figures, submitted to Physical Review.
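    A minimal sketch of the general resistor-network idea: each inter-particle tunnel junction is assigned a Slonczewski-type conductance G = G0(1 + P^2 cos θ_ij) that depends on the angle between the particles' moments, and the two-terminal resistance follows from nodal (Laplacian) analysis. The spin polarization P, the three-particle chain, and the moment configurations below are hypothetical placeholders, not the paper's micromagnetic configuration.

```python
import numpy as np

def junction_conductance(m_i, m_j, g0=1.0, p=0.4):
    """Slonczewski-type tunnel conductance between two single-domain particles:
    G = G0 * (1 + P^2 * cos(theta_ij)), with m_i, m_j unit moment vectors."""
    return g0 * (1.0 + p ** 2 * np.dot(m_i, m_j))

def two_terminal_resistance(moments, edges, source, drain, **kw):
    """Effective resistance of the junction network via nodal (Laplacian) analysis."""
    n = len(moments)
    L = np.zeros((n, n))
    for i, j in edges:
        g = junction_conductance(moments[i], moments[j], **kw)
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    keep = [k for k in range(n) if k != drain]   # ground the drain node
    I = np.zeros(n)
    I[source] = 1.0                              # inject unit current at the source
    V = np.zeros(n)
    V[keep] = np.linalg.solve(L[np.ix_(keep, keep)], I[keep])
    return V[source] - V[drain]                  # R = (V_source - V_drain) / 1 A

# Hypothetical three-particle chain: TMR = (R_AP - R_P) / R_P.
up, down = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])
edges = [(0, 1), (1, 2)]
r_p = two_terminal_resistance([up, up, up], edges, source=0, drain=2)
r_ap = two_terminal_resistance([up, down, up], edges, source=0, drain=2)
print("TMR =", (r_ap - r_p) / r_p)
```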

    The Effects of Acute Lipopolysaccharide Induced Inflammation on Spinal Cord Excitability

    Peripheral inflammation alters the excitability of dorsal horn interneurons and increases flexor reflex strength (Dubner & Ruda, 1992); however, its effect on the spinal stretch reflex is not well understood. The stretch reflex is a muscle contraction in response to muscle stretch. We hypothesize that the acute inflammation caused by an injection of lipopolysaccharide (LPS) will cause an increase in spinal cord excitability. To test this hypothesis, we measured the Hoffmann (H) reflex, the electrical analog of the stretch reflex, in adult mice receiving an injection of LPS (0.5 mg/kg) or saline (200 μl). Adult male and female mice (C57Bl/6) were anesthetized; then the sciatic nerve was exposed and stimulated at current strengths from H-wave threshold (T) to 8T (20 × 0.1 ms pulses at 0.1 Hz). Recording electrodes were placed in the foot. We measured the maximum M-wave amplitude (Mmax), the maximum H-wave amplitude (Hmax), and the latencies of both waves. We compared the ratio of the maximal H wave to the maximal M wave (Hmax/Mmax), which reports the percentage of motor neurons activated by electrical stimulation of Group Ia muscle sensory neurons. Increased spinal cord excitability would be reflected in a larger Hmax/Mmax. We found that LPS-induced inflammation does not alter the Hmax/Mmax ratio. While we found no evidence of changes in spinal cord excitability, inflammation could be altering Group Ia muscle spindle afferent responses to stretch. Future studies will test whether stretch reflex strength is altered by inflammation.
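    For illustration, the Hmax/Mmax computation amounts to taking the peak-to-peak amplitudes of the M and H waves within their latency windows on each sweep and dividing the two maxima. The sketch below assumes hypothetical latency windows and sampling rate; in practice the windows are set from the observed M- and H-wave latencies for the preparation.

```python
import numpy as np

def peak_to_peak(trace, window, fs):
    """Peak-to-peak amplitude of an EMG trace within a latency window (seconds)."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    seg = np.asarray(trace, dtype=float)[lo:hi]
    return seg.max() - seg.min()

def h_to_m_ratio(traces, fs, m_window=(0.001, 0.004), h_window=(0.004, 0.010)):
    """Hmax/Mmax from sweeps recorded at increasing stimulus intensities.

    The default latency windows are hypothetical placeholders.
    """
    m_amps = [peak_to_peak(t, m_window, fs) for t in traces]
    h_amps = [peak_to_peak(t, h_window, fs) for t in traces]
    return max(h_amps) / max(m_amps)

# Hypothetical usage with a list of recorded sweeps sampled at 20 kHz:
# ratio = h_to_m_ratio(sweeps, fs=20000)
```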

    Distribution of wave heights in Phitti Creek during southwest monsoon

    A waverider buoy was deployed in Phitti Creek (24°33'N; 67°03'E) for wave measurements during April-July 1986. Using Tucker's method, the wave records were analysed in terms of significant wave height (Hs) and maximum wave height (Hmax). For each parameter, weekly mean and standard deviation values were also computed for statistical analysis. The lowest weekly mean values, 0.8 m for Hs and 1.51 m for Hmax, were observed in the fourth week of April, whereas the highest, 3.02 m for Hs and 4.94 m for Hmax, occurred in the fourth week of June 1986.
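    As an illustration of the quantities reported here, the sketch below extracts individual wave heights from a surface-elevation record by zero up-crossing analysis, takes Hs as the mean of the highest one-third of waves and Hmax as the largest wave, and aggregates weekly means and standard deviations. It is a generic sketch, not the Tucker-method calculation applied to the original waverider records.

```python
import numpy as np

def wave_heights_zero_upcross(eta):
    """Individual wave heights from a surface-elevation record (zero up-crossing)."""
    eta = np.asarray(eta, dtype=float)
    eta = eta - eta.mean()
    up = np.where((eta[:-1] < 0) & (eta[1:] >= 0))[0]   # zero up-crossing indices
    return np.array([eta[i:j].max() - eta[i:j].min() for i, j in zip(up[:-1], up[1:])])

def hs_hmax(heights):
    """Significant wave height (mean of the highest one-third) and maximum height."""
    h = np.sort(heights)[::-1]
    n3 = max(1, len(h) // 3)
    return h[:n3].mean(), h[0]

def weekly_stats(values, weeks):
    """Weekly mean and standard deviation of a parameter such as Hs or Hmax."""
    out = {}
    for w in sorted(set(weeks)):
        v = np.array([x for x, wk in zip(values, weeks) if wk == w])
        out[w] = (v.mean(), v.std(ddof=1))
    return out
```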

    A simple parameter-free and adaptive approach to optimization under a minimal local smoothness assumption

    We study the problem of optimizing a function under a budgeted number of evaluations. We only assume that the function is locally smooth around one of its global optima. The difficulty of optimization is measured in terms of 1) the amount of noise b in the function evaluations and 2) the local smoothness, d, of the function. A smaller d results in a smaller optimization error. We propose a new, simple, and parameter-free approach. First, for all values of b and d, this approach recovers at least the state-of-the-art regret guarantees. Second, it obtains these results while being agnostic to the values of both b and d. This leads to the first algorithm that naturally adapts to an unknown range of noise b, and yields significant improvements in the moderate- and low-noise regimes. Third, our approach also obtains a remarkable improvement over the state-of-the-art SOO algorithm when the noise is very low, including the case of optimization under deterministic feedback (b = 0). There, under our minimal local smoothness assumption, this improvement is of exponential magnitude and holds for a class of functions that covers the vast majority of functions that practitioners optimize (d = 0). We show that our algorithmic improvement is borne out in experiments, with empirically faster convergence on common benchmarks.
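    For context, the SOO baseline referred to above (Munos's Simultaneous Optimistic Optimization) expands, at each depth of a hierarchical partition of the domain, the best-valued cell whose value beats everything already expanded at shallower depths. The sketch below is a compact 1-D version with interval trisection and a default depth cap of roughly sqrt(t); it is an illustration of that baseline, not the authors' parameter-free algorithm.

```python
import math

def soo_maximize(f, a, b, budget, hmax_fn=lambda t: int(math.sqrt(t)) + 1):
    """Simultaneous Optimistic Optimization (Munos, 2011), 1-D trisection sketch.

    Maximizes f on [a, b] using `budget` function evaluations; `hmax_fn` caps
    the search-tree depth as a function of the number of expansions t.
    """
    # A leaf is (value at interval midpoint, left end, right end, depth).
    leaves = [(f((a + b) / 2.0), a, b, 0)]
    best = leaves[0]
    evals, t = 1, 1
    while evals < budget:
        vmax = -float("inf")
        max_depth = max(leaf[3] for leaf in leaves)
        for h in range(min(max_depth, hmax_fn(t)) + 1):
            at_h = [leaf for leaf in leaves if leaf[3] == h]
            if not at_h:
                continue
            leaf = max(at_h, key=lambda l: l[0])
            if leaf[0] > vmax:          # optimistic rule: beat all shallower expansions
                vmax = leaf[0]
                leaves.remove(leaf)
                _, la, lb, d = leaf
                third = (lb - la) / 3.0
                for k in range(3):      # trisect; a careful implementation reuses the
                    ca = la + k * third         # parent's value for the centre child
                    cb = la + (k + 1) * third
                    child = (f((ca + cb) / 2.0), ca, cb, d + 1)
                    leaves.append(child)
                    evals += 1
                    if child[0] > best[0]:
                        best = child
                    if evals >= budget:
                        break
                t += 1
            if evals >= budget:
                break
    return (best[1] + best[2]) / 2.0, best[0]

# Hypothetical usage: maximize a simple function on [0, 1] with 200 evaluations.
x_star, f_star = soo_maximize(lambda x: -(x - 0.39) ** 2, 0.0, 1.0, budget=200)
```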