37 research outputs found

    Handlungsoptionen für den Klimaschutz in der deutschen Agrar- und Forstwirtschaft (Options for action on climate protection in German agriculture and forestry)

    The report addresses the role of the UNFCCC source categories agriculture and land use, land use change and forestry (LULUCF) in the future reduction of greenhouse gas (GHG) emissions in Germany. Chapter 2 presents the current level and development of GHG emissions from these source categories on the basis of data from the national emission reporting. Chapter 3 traces the evolution of the climate policy framework, focusing on the aspects relevant to agriculture and LULUCF. Building on an overview of federal and state climate policy activities in agriculture, forestry and the wood sector (Chapter 4), Chapter 5 describes and evaluates concrete options for action. The report closes with recommendations for implementation. Since elaborated strategies already exist for the forestry and wood sector, the main need for action concerns the question of how agriculture should be incorporated into national climate targets in the future.

    Paradoxical Evidence Integration in Rapid Decision Processes

    Decisions about noisy stimuli require evidence integration over time. Traditionally, evidence integration and decision making are described as a one-stage process: a decision is made when evidence for the presence of a stimulus crosses a threshold. Here, we show that one-stage models cannot explain psychophysical experiments on feature fusion, where two visual stimuli are presented in rapid succession. Paradoxically, the second stimulus biases decisions more strongly than the first one, contrary to predictions of one-stage models and intuition. We present a two-stage model where sensory information is integrated and buffered before it is fed into a drift diffusion process. The model is tested in a series of psychophysical experiments and explains both accuracy and reaction time distributions.
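    The two-stage account can be made concrete with a small simulation. The Python sketch below is only illustrative: it assumes a leaky integrator as the sensory/buffering stage and made-up parameter values, and the names buffered_evidence and drift_diffusion are hypothetical rather than taken from the paper. Because the evidence from the first stimulus has partly decayed by the time the buffer is read out, the second of two briefly presented stimuli contributes more to the drift of the diffusion stage, reproducing the qualitative bias described above.

    import numpy as np

    rng = np.random.default_rng(0)

    def buffered_evidence(offsets, dur_ms=30.0, leak=0.02, dt=1.0):
        """Stage 1 (assumed form): leaky integration of successive stimulus
        segments; earlier evidence partly decays before read-out."""
        v = 0.0
        for offset in offsets:                  # stimuli in order of presentation
            for _ in range(int(dur_ms / dt)):
                v += dt * (-leak * v + offset)  # leaky accumulation
        return v                                # buffered net evidence

    def drift_diffusion(drift, threshold=20.0, noise=1.0, dt=1.0, max_steps=2000):
        """Stage 2: a standard drift diffusion race to one of two bounds."""
        x = 0.0
        for step in range(1, max_steps + 1):
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            if abs(x) >= threshold:
                return np.sign(x), step * dt    # choice (+1 / -1), RT in ms
        return np.sign(x), max_steps * dt

    # First stimulus offset +1, second offset -1: under these illustrative
    # parameters the decision follows the second stimulus on most trials.
    choices = [drift_diffusion(0.01 * buffered_evidence([+1.0, -1.0]))[0]
               for _ in range(2000)]
    print("proportion following the second stimulus:",
          np.mean(np.array(choices) == -1.0))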

    The temporal dynamics of visual feature integration.

    Vision is dynamic. After their onset, visual stimuli undergo a complex cascade of processes before awareness is reached. Even after more than a century of research, the timing of these processes is still largely unknown. In particular, how the brain determines which visual features belong together and therefore have to be integrated remains an enigma. Previously, it was found that the processing of a stimulus significantly outlasts its actual presentation. Such stimulus persistence is the basis for feature integration over time. Here, I show that, contrary to what is often assumed, simple visual features are integrated over a surprisingly long period of time before awareness is reached. Further, I show that this integration is mandatory and that human observers have no access to the original simple visual features.

    Dynamics of Visual Feature Integration and Decision Making

    The human brain analyzes a visual object first by means of basic feature detectors. These features are integrated in subsequent stages of the visual hierarchy. Generally, it is assumed that the information about these basic features is lost once it is sent to the next stage in the visual hierarchy. To investigate the time course of feature integration, I used transcranial magnetic stimulation (TMS) and the feature fusion paradigm. In feature fusion, two stimuli that differ in one feature are presented in rapid succession such that they are not perceived individually but as one single stimulus only. The fused percept is an integration of the features of both stimuli. Here, I show that, first, the original feature information persists in the visual system for a surprisingly long time. Second, the neural representations of the features interact when the two are integrated into a fused percept, but not when they are perceptually separated. Third, the "window of integration" within which features can be integrated spans about 100 ms. Fourth, the integration process precedes not only consciousness but also decision making.
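    As a toy illustration of this kind of temporal pooling, the sketch below treats the fused percept as a weighted average of the individual feature values (here, vernier offsets) that fall inside a roughly 100 ms integration window. The function fused_offset and all numbers are my own illustrative assumptions, not the model used in this work; re-weighting the two contributions merely stands in for manipulations, such as occipital TMS, that bias the fused percept toward one of the two stimuli.

    import numpy as np

    def fused_offset(offsets, durations_ms, window_ms=100.0, weights=None):
        """Toy fusion rule: pool the offsets of all stimuli ending inside a
        ~100 ms integration window into one weighted-average percept."""
        offsets = np.asarray(offsets, dtype=float)
        t_end = np.cumsum(durations_ms)           # time at which each stimulus ends
        inside = t_end <= window_ms               # crude test for "inside the window"
        if weights is None:
            weights = np.where(inside, durations_ms, 0.0)
        return float(np.average(offsets, weights=weights))

    # Two 30 ms verniers with opposite offsets fuse to a near-aligned percept:
    print(fused_offset([+1.0, -1.0], [30.0, 30.0]))                      # 0.0
    # Re-weighting (standing in for an occipital TMS pulse) shifts the percept:
    print(fused_offset([+1.0, -1.0], [30.0, 30.0], weights=[0.3, 0.7]))  # -0.4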

    The dynamics of perceptual decision making

    Decisions about noisy stimuli require evidence integration over time. Traditionally, evidence integration and decision making are described as a one-stage process: a decision is made when evidence for the presence of a stimulus crosses a threshold. However, this model is incompatible with psychophysical experiments on feature fusion, where two visual stimuli are presented in rapid succession. Paradoxically, the second stimulus biases decisions more strongly than the first one, contrary to predictions of one-stage models and intuition. This can only be explained using a two-stage model in which sensory information is integrated and buffered before it is fed into a drift diffusion process. I will present a series of psychophysical experiments that test both the accuracy and the reaction time distributions predicted by the model.

    When transcranial magnetic stimulation (TMS) modulates feature integration

    How the brain integrates visual information across time into coherent percepts is an open question. Here, we presented two verniers with opposite offset directions one after the other. A vernier consists of two vertical bars that are horizontally offset. When the two verniers are separated by a blank screen (interstimulus interval, ISI), they are perceived either as two separate entities or as one vernier with the offset moving from one side to the other, depending on the ISI. In both cases, their offsets can be reported independently. Transcranial magnetic stimulation (TMS) over the occipital cortex does not interfere with the offset discrimination of either vernier. When a grating is presented instead of the ISI, the two verniers are not perceived separately anymore, but as ‘one’ vernier with ‘one’ fused vernier offset. TMS strongly modulates the percept of the fused vernier offset even though the spatio-temporal position of the verniers is identical in the ISI and grating conditions. We suggest that the grating suppresses the termination signal of the first vernier and the onset signal of the second vernier. As a consequence, perception of the individual verniers is suppressed. Neural representations of the first and the second vernier inhibit each other, which renders them vulnerable to TMS for at least 300 ms, even though stimulus presentation lasted only 100 ms. Our data suggest that stimulus features can be flexibly integrated in the occipital cortex, mediated by neural interactions that outlast stimulus presentation by far.

    Investigation of feature integration and feature separation by TMS

    How the brain integrates visual information across time into coherent percepts is an open question. Here, we investigated this integration using a feature fusion paradigm. In feature fusion, two stimuli are presented in immediate succession. The stimuli are not perceived individually but as one fused stimulus. For example, a red and a green disc presented in rapid succession are seen as a yellow disc. It has been shown that feature fusion can be modulated by transcranial magnetic stimulation (TMS) over occipital cortex for a surprisingly long duration of 400 ms, suggesting that neural representations interact for this duration. If fusion always took place, it would lead to considerable smear. Hence, the question arises under which conditions stimuli fuse. Our current results suggest that stimulus transients prevent fusion because transients signal the presence of different objects. TMS cannot modulate this process. Suppressing the transients with a mask leads to fusion of the stimuli, which can then be modulated by occipital TMS. Our results suggest that long-lasting feature integration in occipital cortex occurs only when features 'belong' to one object, but not when the very same features, presented at the very same spatio-temporal location, 'belong' to different objects.

    Feature integration but not feature separation can be modulated by TMS

    How the brain integrates visual information across time into coherent percepts is an open question. Here, we investigated this integration using a feature fusion paradigm. In feature fusion, two stimuli are presented in immediate succession. The stimuli are not perceived individually but as one fused stimulus. For example, a left- and a right-offset Vernier presented in rapid succession are perceived as one Vernier that is nearly aligned. It has been shown that feature fusion can be modulated by transcranial magnetic stimulation (TMS) over the occipital cortex for a surprisingly long duration of 400 ms, showing that neural representations interact for this duration. Here, we show that these Vernier stimuli are perceived individually if separated by an interstimulus interval (ISI) of only 20 ms. TMS has no effect on the perception of either of the two Verniers. If a pattern mask is presented for 20 ms instead of an ISI, the Verniers fuse again and fusion can be modulated by TMS. Our results suggest that TMS can affect Vernier representations only when features are bound into one object, but not when the very same features belong to different objects. [Johannes Rüter is supported by the Swiss National Science Foundation.]

    Two Psychophysical Channels of Whisker Deflection in Rats Align with Two Neuronal Classes of Primary Afferents

    The rat whisker system has evolved into an excellent model system for sensory processing from the periphery to cortical stages. However, to elucidate how sensory processing finally relates to percepts, methods are needed to assess psychophysical performance with respect to precise stimulus kinematics. Here, we present a head-fixed, behaving rat preparation that allowed us to measure detectability of a single whisker deflection as a function of amplitude and peak velocity. We found that velocity thresholds for detection of small-amplitude stimuli (<3°) were considerably higher than for detection of large-amplitude stimuli (>3°). This finding suggests the existence of two psychophysical channels mediating detection of whisker deflection: one channel exhibiting a high amplitude and a low velocity threshold (W1), and the other channel exhibiting a high velocity and a low amplitude threshold (W2). The correspondence of W1 to slowly adapting (SA) and of W2 to rapidly adapting (RA) neuronal classes in the trigeminal ganglion was revealed in acute neurophysiological experiments. Neurometric plots of SA and RA cells were closely aligned to psychophysical performance in the corresponding W1 and W2 parameter ranges. Interestingly, neurometric data of SA cells fit the behavior best when based on a short time window integrating action potentials during the initial phasic response, in contrast to integrating across the tonic portion of the response. This suggests that detection performance in both channels is based on the assessment of very few spikes in their corresponding groups of primary afferents.
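    To make the two-channel account concrete, the sketch below models detection probability by probability summation over two channels: W1 with a high amplitude threshold but a low velocity threshold (SA-like), and W2 with a low amplitude threshold but a high velocity threshold (RA-like). All function names and parameter values are illustrative assumptions, not fitted values from this study.

    import numpy as np

    def sigmoid(x, threshold, slope):
        """A simple psychometric function for one stimulus dimension."""
        return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

    def channel_response(amplitude_deg, velocity_dps, amp_thr, vel_thr,
                         amp_slope=2.0, vel_slope=0.05):
        """A channel responds only when both its amplitude and velocity criteria are met."""
        return (sigmoid(amplitude_deg, amp_thr, amp_slope)
                * sigmoid(velocity_dps, vel_thr, vel_slope))

    def p_detect(amplitude_deg, velocity_dps):
        """Detection of a single whisker deflection via two channels.

        W1 (SA-like): high amplitude threshold, low velocity threshold.
        W2 (RA-like): low amplitude threshold, high velocity threshold.
        The channels are combined by probability summation."""
        p_w1 = channel_response(amplitude_deg, velocity_dps, amp_thr=3.0, vel_thr=50.0)
        p_w2 = channel_response(amplitude_deg, velocity_dps, amp_thr=0.5, vel_thr=800.0,
                                vel_slope=0.01)
        return 1.0 - (1.0 - p_w1) * (1.0 - p_w2)   # detected by either channel

    # A large, slow deflection is carried by W1; a small, fast one by W2;
    # a small, slow deflection goes mostly undetected.
    for amp, vel in [(6.0, 100.0), (1.0, 2000.0), (1.0, 100.0)]:
        print(amp, vel, round(p_detect(amp, vel), 3))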