
    Parallel integral projection transform for straight electrode localization in 3-D ultrasound images

    In surgical practice, small metallic instruments are frequently used to perform various tasks inside the human body. We address the problem of their accurate localization in the tissue. Recent experiments using medical ultrasound have shown that this modality is suitable for real-time visualization of anatomical structures as well as of the position of surgical instruments. We propose an image-processing algorithm that permits automatic estimation of the position of a line-segment-shaped object. The method was applied to the localization of a thin metallic electrode in biological tissue. We show that the electrode axis can be found by maximizing the parallel integral projection transform, a form of the Radon transform. To accelerate this step, a hierarchical mesh-grid algorithm is implemented. Once the axis position is known, the electrode tip is localized. The method was tested on simulated images, on ultrasound images of a tissue-mimicking phantom containing a metallic electrode, and on real ultrasound images from breast biopsy. The results indicate that the algorithm is robust with respect to variations in electrode position and speckle noise. Localization accuracy is of the order of hundreds of micrometers and is comparable to the axial resolution of the ultrasound system.
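    The axis-estimation principle described above can be illustrated in two dimensions: a bright line in an image concentrates its mass into a single bin of the parallel integral (Radon-type) projection taken along its own direction, so maximizing the projection over angle and offset recovers the line. The sketch below is a minimal 2-D numpy illustration under that assumption; the function name `radon_peak` is hypothetical, and it uses an exhaustive angle scan rather than the hierarchical mesh-grid acceleration described in the abstract.

    ```python
    import numpy as np

    def radon_peak(img, n_angles=180):
        """Return (projection value, angle, offset) maximizing the
        parallel integral projection of `img` -- a discrete Radon slice.
        A bright straight line yields a sharp peak at its own angle."""
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        best = (-np.inf, 0.0, 0.0)
        for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
            # signed distance of each pixel from the line through the origin
            r = xs * np.cos(theta) + ys * np.sin(theta)
            r_idx = np.round(r - r.min()).astype(int)
            # bin pixel intensities by offset: one projection profile
            proj = np.bincount(r_idx.ravel(), weights=img.ravel())
            k = proj.argmax()
            if proj[k] > best[0]:
                best = (proj[k], theta, k + r.min())
        return best

    # Demo: a horizontal unit-intensity line at row 20 of a 64x64 image
    img = np.zeros((64, 64))
    img[20, :] = 1.0
    val, theta, offset = radon_peak(img)
    ```

    For the demo image the peak occurs at theta = pi/2 (integration along the row) with offset 20 and projection value 64, the total mass of the line.
    
    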

    The Spatial and Temporal Deployment of Voluntary Attention across the Visual Field

    Several studies have addressed the question of how long it takes for attention to shift from one position in space to another. Here we present a behavioural paradigm that offers direct access to an estimate of voluntary shift time by comparing, within the same task, a situation in which subjects must re-engage their attention at the same spatial location with a situation in which they must shift their attention to another location, all other sensory, cognitive and motor parameters being equal. We show that spatial attention takes on average 55 ms to shift voluntarily from one hemifield to the other and 38 ms to shift within the same hemifield. In addition, we show that attentional processes differ across and within hemifields. In particular, dividing the attentional spotlight appears to be more difficult within than across hemifields.

    Spatial and Temporal Dynamics of Attentional Guidance during Inefficient Visual Search

    Spotting a prey or a predator is crucial in the natural environment and relies on the ability to quickly extract pertinent visual information. The experimental counterpart of this behavior is visual search (VS), in which subjects have to identify a target amongst several distractors. In difficult VS tasks, the reaction time (RT) is influenced by salience factors such as target-distractor similarity, a finding usually regarded as evidence that attention is guided by preattentive mechanisms. However, RT measurements, which depend on multiple factors, allow only very indirect inferences about the underlying attentional mechanisms. The purpose of the present study was to determine the influence of salience factors on attentional guidance during VS by measuring attentional allocation directly. We studied attention allocation using a dual covert VS task in which subjects had 1) to detect a target amongst different items and 2) to report letters briefly flashed inside those items at different delays. As predicted, we showed that parallel processes guide attention towards the most relevant item by virtue of both goal-directed and stimulus-driven factors, and we demonstrated that this attentional selection is a prerequisite for target detection. In addition, we show that when the target is characterized by two features (conjunction VS), the goal-directed effects of both features are initially combined into a unique salience value; at a later stage, grouping phenomena interact with the salience computation and lead to the selection of a whole group of items. These results, in line with Guided Search Theory, show that efficient and rapid preattentive processes guide attention towards the most salient item, reducing the number of attentional shifts needed to find the target.

    Functional involvement of the lateral intraparietal area (LIP) and the frontal eye field (FEF) in selective visual attention in the macaque

    Selective visual attention is essential for processing visual information and guiding behavior. This function is supported by a network of cerebral structures. In this thesis we studied two areas of the macaque brain, the lateral intraparietal area (LIP) and the frontal eye field (FEF). On the basis of their neuronal activity, these areas have generally been considered to participate in the generation of ocular saccades. Using reversible inactivation of LIP and FEF, we revealed distinct behavioral deficits for each area, mainly attentional, even in the absence of eye movements. These deficits resemble those of patients with neglect. Only inactivation of FEF produced saccadic deficits. Our results suggest that LIP has no saccadic role but is involved in the top-down control of attention, whereas FEF, in addition to its saccadic role, supports displacement of the attentional focus.

    Control of gaze movements (1)

    Eye movements provide a privileged means of access to the world around us. By bringing objects of interest into the central part of the visual field, they allow us to explore visual scenes, identify their significant components, and acquire the information needed to act on them (grasping, avoidance, etc.). Many processing stages succeed one another between the arrival of photons on the retina and the contraction of the ocular muscles. In this article we examine the place of the parietal cortex in this chain of neurophysiological mechanisms. We propose that it is involved in the representation of space and the selection of relevant objects in the environment, that is, after perceptual visual processing and before elaboration of the motor signal.

    Representation of space and multisensory integration in the ventral intraparietal area of the primate

    The aim of this thesis is to describe the neuronal mechanisms underlying sensory representations and to understand how these representations merge to produce a unified perception of multimodal space. Our model is the ventral intraparietal area (VIP), a site of multisensory convergence dedicated to the processing of spatial information. Our method of investigation relies on extracellular recordings in the behaving primate. We demonstrate that the multisensory representation of space in VIP does not require a common reference frame and that several possible codings coexist there, at the single-cell and population levels. Moreover, VIP neurons integrate inputs from several sensory modalities in a nonlinear manner, and their responses are amplified or depressed according to precise spatiotemporal rules. The interactions identified at the single-cell level are correlated with multisensory detection behavior.

    The role of the parietal cortex

    Eye movements constitute one of the most basic means of interacting with our environment, allowing us to orient to, localize and scrutinize the variety of potentially interesting objects that surround us. In this review we discuss the role of the parietal cortex in the control of saccadic and smooth-pursuit eye movements, whose purposes are, respectively, to rapidly displace the line of gaze and to maintain a moving object on the central retina. From single-cell recording studies in the monkey we know that distinct sub-regions of the parietal lobe are implicated in these two kinds of movement. The middle temporal (MT) and medial superior temporal (MST) areas show neuronal activities related to moving visual stimuli and to ocular pursuit. The lateral intraparietal (LIP) area exhibits visual and saccadic neuronal responses. Electrophysiology, which in essence is a correlational method, cannot entirely resolve the question of the functional implication of these areas: are they primarily involved in sensory processing, in motor processing, or in some intermediate function?
Lesion approaches (reversible or permanent) in the monkey can provide important information in this respect. Lesions of MT or MST produce deficits in the perception of visual motion, which argues for a role in the sensory guidance of ocular pursuit rather than in directing motor commands to the eye muscles. Lesions of LIP do not produce specific visual impairments and cause only subtle saccadic deficits. However, recent results have shown severe deficits in spatial attention tasks. LIP could thus be implicated in the selection of relevant objects in the visual scene and provide a signal for directing the eyes toward those objects. Functional imaging studies in humans confirm the role of the parietal cortex in the pursuit, saccadic, and attentional networks, and show a high degree of overlap with the monkey data. Parietal lobe lesions in humans also result in behavioral deficits very similar to those observed in the monkey. Altogether, these different sources of data consistently point to the involvement of the parietal cortex in the representation of space, at an intermediate stage between vision and action.

    Face cells in orbitofrontal cortex represent social categories

    Perceiving social and emotional information from faces is a critical primate skill. For this purpose, primates evolved dedicated cortical architecture, especially in occipitotemporal areas, utilizing face-selective cells. Less well understood are the face-selective neurons present in the orbitofrontal cortex (OFC), which are the object of this study. We examined 179 face-selective cells in the lateral sulcus of the OFC by characterizing their responses to a rich set of photographs of conspecific faces varying in age, gender, and facial expression. Principal component analysis and unsupervised cluster analysis of the stimulus space both revealed that face cells encode face dimensions for social categories and emotions. The categories represented most strongly were facial expressions (grin and threat versus lip smack), juvenile monkeys, and female monkeys. Cluster analyses of a control population of nearby cells lacking face selectivity did not categorize face stimuli in a meaningful way, suggesting that only face-selective cells directly support face categorization in the OFC. Time-course analyses of face-cell activity from stimulus onset showed that faces were discriminated from nonfaces early, followed by within-face categorization of social and emotional content (i.e., age and facial expression). Face cells showed no response to acoustic stimuli such as vocalizations and were poorly modulated by vocalizations added to faces. Neuronal responses remained stable when paired with positive or negative reinforcement, implying that face cells encode social information but not learned reward value associated with faces. Overall, our results shed light on a substantial role of the OFC in the characterization of facial information bearing on social and emotional behavior.
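    The PCA-plus-unsupervised-clustering analysis mentioned above can be sketched generically: arrange responses as a stimulus-by-cell matrix, project the stimuli onto the leading principal components, and cluster the projections. The snippet below is a minimal illustration on synthetic data, not the paper's recordings; the two-category structure and all variable names are assumptions for demonstration, using plain numpy (SVD for PCA and a bare-bones k-means).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: responses of 50 "cells" to 30 stimuli drawn from
    # two artificial stimulus categories with well-separated means.
    cat_a = rng.normal(0.0, 1.0, (15, 50)) + 3.0
    cat_b = rng.normal(0.0, 1.0, (15, 50)) - 3.0
    X = np.vstack([cat_a, cat_b])

    # PCA via SVD of the mean-centred response matrix
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ vt[:2].T  # stimuli projected onto the first 2 PCs

    # Bare-bones k-means (k = 2) on the PC scores
    centres = scores[[0, -1]].copy()  # init from two far-apart points
    for _ in range(20):
        dists = ((scores[:, None] - centres[None]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        centres = np.array([scores[labels == k].mean(axis=0) for k in range(2)])
    ```

    With clearly separated synthetic categories the recovered cluster labels coincide with the generating categories, which is the pattern the abstract reports for face-selective (but not control) cells.
    
    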