
    Three-Dimensional, Tomographic Super-Resolution Fluorescence Imaging of Serially Sectioned Thick Samples

    Three-dimensional fluorescence imaging of thick tissue samples with near-molecular resolution remains a fundamental challenge in the life sciences. To tackle this, we developed tomoSTORM, an approach combining single-molecule localization-based super-resolution microscopy with array tomography of structurally intact brain tissue. Consecutive sections organized in a ribbon were serially imaged with a lateral resolution of 28 nm and an axial resolution of 40 nm in tissue volumes of up to 50 µm × 50 µm × 2.5 µm. Using targeted expression of membrane-bound (m)GFP and immunohistochemistry at the calyx of Held, a model synapse for central glutamatergic neurotransmission, we delineated the course of the membrane and the fine structure of mitochondria. This method allows multiplexed super-resolution imaging in large tissue volumes with a resolution three orders of magnitude better than confocal microscopy.
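    Conceptually, assembling such a tomographic dataset amounts to stacking per-section localization tables with a z offset for each physical section. The Python sketch below illustrates that idea only; the function name, the mock data, and the section-thickness parameter are illustrative assumptions, not the authors' pipeline, which would also include lateral registration between sections.

        import numpy as np

        def assemble_volume(section_locs, section_thickness_nm):
            """Stack per-section localization tables into one 3D point cloud.

            section_locs: list of (N_i, 3) arrays of (x, y, z) positions in nm,
                one array per physical section, in cutting order.
            section_thickness_nm: nominal section thickness (hypothetical value;
                lateral registration between sections is omitted here).
            """
            stacked = []
            for i, locs in enumerate(section_locs):
                shifted = locs.copy()
                shifted[:, 2] += i * section_thickness_nm  # push each section deeper in z
                stacked.append(shifted)
            return np.vstack(stacked)

        # Mock example: three sections of random localizations in a 50 µm field
        rng = np.random.default_rng(0)
        sections = [rng.uniform([0, 0, 0], [50_000, 50_000, 100], size=(1000, 3))
                    for _ in range(3)]
        volume = assemble_volume(sections, section_thickness_nm=100.0)
        print(volume.shape)  # (3000, 3)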

    How many motoric body representations can we grasp?

    At present there is a debate on the number of body representations in the brain. The most commonly used dichotomy contrasts the body image, thought to underlie perception and proven to be susceptible to bodily illusions, with the body schema, hypothesized to guide actions and so far proven to be robust against bodily illusions. In this rubber hand illusion study we investigated the susceptibility of the body schema by manipulating the amount of stimulation on the rubber hand and the participant’s hand, adjusting the postural configuration of the hand, and probing a grasping rather than a pointing response. Our results showed, for the first time, grasping responses altered as a consequence of the grip aperture of the rubber hand. This illusion-sensitive motor response challenges one of the foundations on which the dichotomy is based, and underscores the importance of illusion induction versus type of response when investigating body representations.

    Manipulable Objects Facilitate Cross-Modal Integration in Peripersonal Space

    Previous studies have shown that tool use often modifies one’s peripersonal space – i.e., the space directly surrounding our body. Given our profound experience with manipulable objects (e.g. a toothbrush, a comb or a teapot), in the present study we hypothesized that observing pictures of manipulable objects would likewise result in a remapping of peripersonal space. Subjects were required to report the location of vibrotactile stimuli delivered to the right hand while ignoring visual distractors superimposed on pictures of everyday objects. The pictured objects were of high manipulability (e.g. a cell phone), medium manipulability (e.g. a soap dispenser) or low manipulability (e.g. a computer screen). In the first experiment, in which subjects attended to the action associated with the objects, a strong cross-modal congruency effect (CCE), reflected in faster reaction times when the vibrotactile stimulus and the visual distractor were in the same location, was observed for pictures of medium and high manipulability objects, whereas no CCE was observed for low manipulability objects. This finding was replicated in a second experiment in which subjects attended to the visual properties of the objects. These findings suggest that the observation of manipulable objects facilitates cross-modal integration in peripersonal space.
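    The congruency effect reported above boils down to a reaction-time difference: mean RT on incongruent trials minus mean RT on congruent trials, computed per manipulability level. The sketch below shows that arithmetic on hypothetical trial data; the column names and values are illustrative and not taken from the study.

        import pandas as pd

        # Hypothetical trial-level data: reaction time (ms), whether the visual
        # distractor appeared at the same location as the vibrotactile target,
        # and the manipulability level of the pictured object.
        trials = pd.DataFrame({
            "rt_ms":          [412, 455, 398, 470, 430, 480, 405, 440],
            "congruent":      [True, False, True, False, True, False, True, False],
            "manipulability": ["high", "high", "medium", "medium",
                               "low", "low", "high", "medium"],
        })

        # Cross-modal congruency effect (CCE): mean incongruent RT minus mean
        # congruent RT, per manipulability level; larger values indicate a
        # stronger pull of the visual distractor on the tactile judgment.
        mean_rt = trials.groupby(["manipulability", "congruent"])["rt_ms"].mean().unstack()
        cce = mean_rt[False] - mean_rt[True]
        print(cce)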

    Fast multicolor 3D imaging using aberration-corrected multifocus microscopy

    Conventional acquisition of three-dimensional (3D) microscopy data requires sequential z scanning and is often too slow to capture biological events. We report an aberration-corrected multifocus microscopy method capable of producing an instant focal stack of nine 2D images. Appended to an epifluorescence microscope, the multifocus system enables high-resolution 3D imaging in multiple colors with single-molecule sensitivity, at speeds limited by the camera readout time of a single image.
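    A single multifocus exposure can be turned into a z stack by cutting the camera frame into its focal-plane tiles. The sketch below assumes a regular 3 × 3 tile layout on a square sensor purely for illustration; a real instrument would rely on calibrated tile positions and per-plane registration rather than this even split.

        import numpy as np

        def split_multifocus_frame(frame, n_rows=3, n_cols=3):
            """Cut one camera frame into an instant focal stack of tiles.

            Assumes the focal-plane images sit on a regular n_rows x n_cols grid
            (an illustrative simplification, not the instrument's calibration).
            Returns an array of shape (n_rows * n_cols, tile_h, tile_w).
            """
            h, w = frame.shape
            tile_h, tile_w = h // n_rows, w // n_cols
            tiles = [frame[r * tile_h:(r + 1) * tile_h,
                           c * tile_w:(c + 1) * tile_w]
                     for r in range(n_rows) for c in range(n_cols)]
            return np.stack(tiles)

        # Mock 1536 x 1536 frame -> focal stack of nine 512 x 512 planes
        stack = split_multifocus_frame(np.zeros((1536, 1536)))
        print(stack.shape)  # (9, 512, 512)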