
    A new algorithm for point spread function subtraction in high-contrast imaging: a demonstration with angular differential imaging

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimized reference PSF image from a set of reference images. This image is built as a linear combination of the available reference images, and the coefficients of the combination are optimized independently inside multiple subsections of the image to minimize the residual noise within each subsection. The algorithm can be used with many high-contrast imaging observing strategies relying on PSF subtraction, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separations over the algorithm used in Marois et al. (2006). Comment: 7 pages, 11 figures, to appear in May 10, 2007 issue of Ap
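    The core of the algorithm — a least-squares combination of reference images fitted independently per subsection — can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' pipeline; the function name and toy data are hypothetical.

```python
import numpy as np

def optimized_reference(target, refs):
    """Least-squares combination of reference images that best matches
    the target within one image subsection.

    target : (npix,) flattened subsection of the science image
    refs   : (nref, npix) flattened subsections of the reference images
    Returns the combination coefficients and the post-subtraction residual.
    """
    # Solve min_c || target - c @ refs ||^2 for the coefficients c
    coeffs, *_ = np.linalg.lstsq(refs.T, target, rcond=None)
    residual = target - coeffs @ refs
    return coeffs, residual

# Toy demonstration: a quasi-static "speckle" pattern shared (at different
# amplitudes) by the target and the references, plus small random noise.
rng = np.random.default_rng(0)
speckles = rng.normal(size=100)
refs = np.stack([2.0 * speckles + 0.01 * rng.normal(size=100),
                 0.5 * speckles + 0.01 * rng.normal(size=100)])
target = speckles + 0.01 * rng.normal(size=100)
coeffs, residual = optimized_reference(target, refs)
print(residual.std() < 0.1 * target.std())  # speckle pattern is removed
```

    In the full algorithm this fit is repeated in many subsections so the coefficients can track spatial variations of the speckle pattern.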

    Working memory encoding delays top-down attention to visual cortex

    The encoding of information from one event into working memory can delay high-level, central decision-making processes for subsequent events [e.g., Jolicoeur, P., & Dell'Acqua, R. The demonstration of short-term consolidation. Cognitive Psychology, 36, 138-202, 1998, doi:10.1006/cogp.1998.0684]. Working memory, however, is also believed to interfere with the deployment of top-down attention [de Fockert, J. W., Rees, G., Frith, C. D., & Lavie, N. The role of working memory in visual selective attention. Science, 291, 1803-1806, 2001, doi:10.1126/science.1056496]. It is, therefore, possible that, in addition to delaying central processes, the engagement of working memory encoding (WME) also postpones perceptual processing. Here, we tested this hypothesis with time-resolved fMRI by assessing whether WME serially postpones the action of top-down attention on low-level sensory signals. In three experiments, participants viewed a skeletal rapid serial visual presentation sequence that contained two target items (T1 and T2) separated by either a short (550 msec) or long (1450 msec) SOA. During single-target runs, participants attended and responded only to T1, whereas in dual-target runs, participants attended and responded to both targets. To determine whether T1 processing delayed top-down attentional enhancement of T2, we examined the T2 BOLD response in visual cortex by subtracting the single-task waveforms from the dual-task waveforms for each SOA. When the WME demands of T1 were high (Experiments 1 and 3), the T2 BOLD response was delayed at the short SOA relative to the long SOA. This was not the case when T1 encoding demands were low (Experiment 2). We conclude that encoding of a stimulus into working memory delays the deployment of attention to subsequent target representations in visual cortex.

    Confidence Level and Sensitivity Limits in High Contrast Imaging

    In long adaptive optics corrected exposures, exoplanet detections are currently limited by speckle noise originating from the telescope and instrument optics, and it is expected that such noise will also limit future high-contrast imaging instruments for both ground- and space-based telescopes. Previous theoretical analyses have shown that the time intensity variations of a single speckle follow a modified Rician distribution. It is first demonstrated here that for a circular pupil this temporal intensity distribution also represents the speckle spatial intensity distribution at a fixed separation from the point spread function center; this fact is demonstrated using numerical simulations for coronagraphic and non-coronagraphic data. The real statistical distribution of the noise needs to be taken into account explicitly when selecting a detection threshold appropriate for some desired confidence level. In this paper, a technique is described to obtain the pixel intensity distribution of an image and its corresponding confidence level as a function of the detection threshold. Using numerical simulations, it is shown that in the presence of speckle noise, a detection threshold up to three times higher is required to obtain a confidence level equivalent to that at 5 sigma for Gaussian noise. The technique is then tested using TRIDENT CFHT and angular differential imaging NIRI Gemini adaptive optics data. It is found that the angular differential imaging technique produces quasi-Gaussian residuals, a remarkable result compared to classical adaptive optics imaging. A power-law is finally derived to predict the 1-3*10^-7 confidence level detection threshold when averaging a partially correlated non-Gaussian noise.Comment: 29 pages, 13 figures, accepted to Ap
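    The heavier-than-Gaussian tail of modified Rician speckle statistics can be illustrated with a small Monte Carlo experiment. This is a toy sketch, not the paper's analysis; the amplitude values are arbitrary assumptions chosen for illustration.

```python
import numpy as np

# A single speckle intensity follows a modified Rician: I = |A_c + A_s|^2,
# with a static coherent amplitude A_c and a circular Gaussian random
# amplitude A_s (toy values below).
rng = np.random.default_rng(1)
n = 2_000_000
A_c = 1.0
A_s = rng.normal(0.0, 0.5, n) + 1j * rng.normal(0.0, 0.5, n)
intensity = np.abs(A_c + A_s) ** 2

# A Gaussian variable exceeds its mean by about 3.72 sigma with
# probability 1e-4; find the threshold with the same tail probability
# for the simulated speckle intensities.
p_fa = 1e-4
mu, sigma = intensity.mean(), intensity.std()
thresh = np.quantile(intensity, 1.0 - p_fa)
n_sigma = (thresh - mu) / sigma
print(n_sigma)  # noticeably larger than the Gaussian value of 3.72
```

    The same false-alarm probability thus requires a markedly higher threshold (in units of the noise standard deviation) than Gaussian statistics would suggest, which is the qualitative effect the paper quantifies.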

    Disrupting prefrontal cortex prevents performance gains from sensory-motor training

    Humans show large and reliable performance impairments when required to make more than one simple decision simultaneously. Such multitasking costs are thought to largely reflect capacity limits in response selection (Welford, 1952; Pashler, 1984, 1994), the information processing stage at which sensory input is mapped to a motor response. Neuroimaging has implicated the left posterior lateral prefrontal cortex (pLPFC) as a key neural substrate of response selection (Dux et al., 2006, 2009; Ivanoff et al., 2009). For example, activity in left pLPFC tracks improvements in response selection efficiency typically observed following training (Dux et al., 2009). To date, however, there has been no causal evidence that pLPFC contributes directly to sensory-motor training effects, or the operations through which training occurs. Moreover, the left hemisphere lateralization of this operation remains controversial (Jiang and Kanwisher, 2003; Sigman and Dehaene, 2008; Verbruggen et al., 2010). We used anodal (excitatory), cathodal (inhibitory), and sham transcranial direct current stimulation (tDCS) to left and right pLPFC and measured participants' performance on high and low response selection load tasks after different amounts of training. Both anodal and cathodal stimulation of the left pLPFC disrupted training effects for the high load condition relative to sham. No disruption was found for the low load and right pLPFC stimulation conditions. The findings implicate the left pLPFC in both response selection and training effects. They also suggest that training improves response selection efficiency by fine-tuning activity in pLPFC relating to sensory-motor translations.

    Brain Imaging for Legal Thinkers: A Guide for the Perplexed

    It has become increasingly common for brain images to be proffered as evidence in criminal and civil litigation. This Article - the collaborative product of scholars in law and neuroscience - provides three things. First, it provides the first introduction, specifically for legal thinkers, to brain imaging. It describes in accessible ways the new techniques and methods that the legal system increasingly encounters. Second, it provides a tutorial on how to read and understand a brain-imaging study. It does this by providing an annotated walk-through of the recently-published work (by three of the authors - Buckholtz, Jones, and Marois) that discovered the brain activity underlying a person's decisions: a) whether to punish someone; and b) how much to punish. The annotation uses the 'Comment' feature of the Word software to supply contextual and step-by-step commentary on what unfamiliar terms mean, how and why brain imaging experiments are designed as they are, and how to interpret the results. Third, the Article offers some general guidelines about how to avoid misunderstanding brain images in legal contexts and how to identify when others are misusing brain images. The Article is a product of the 'Law and Neuroscience Project', supported by the MacArthur Foundation.

    Accurate Astrometry and Photometry of Saturated and Coronagraphic Point Spread Functions

    Accurate astrometry and photometry of saturated and coronagraphic point spread functions (PSFs) are fundamental to both ground- and space-based high contrast imaging projects. For ground-based adaptive optics imaging, differential atmospheric refraction and flexure introduce a small drift of the PSF with time, and seeing and sky transmission variations modify the PSF flux distribution. For space-based imaging, vibrations, thermal fluctuations and pointing jitter can modify the PSF core position and flux. These effects need to be corrected to properly combine the images and obtain optimal signal-to-noise ratios, accurate relative astrometry and photometry of detected objects as well as precise detection limits. Usually, one can easily correct for these effects by using the PSF core, but this is impossible when high dynamic range observing techniques are used, such as coronagraphy with a non-transmissive occulting mask, or if the stellar PSF core is saturated. We present a new technique that can solve these issues by using off-axis satellite PSFs produced by a periodic amplitude or phase mask conjugated to a pupil plane. It will be shown that these satellite PSFs precisely track the PSF position, its Strehl ratio and its intensity and can thus be used to register and flux-normalize the PSF. A laboratory experiment is also presented to validate the theory. This approach can be easily implemented in existing adaptive optics instruments and should be considered for future extreme adaptive optics coronagraph instruments and in high-contrast imaging space observatories. Comment: 25 pages, 6 figures, accepted for publication in Ap
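    The registration step can be sketched in a few lines: because the satellite spots are placed symmetrically about the star by the pupil-plane mask, their mean centroid recovers the hidden stellar position, and their mean flux, divided by the (calibrated) satellite-to-star flux ratio, recovers the stellar flux. This is a minimal illustration with hypothetical names and toy numbers, not the paper's measurement procedure.

```python
import numpy as np

def register_from_satellites(spot_xy, spot_flux, flux_ratio):
    """Estimate the (hidden) stellar PSF position and flux from
    satellite spots produced by a periodic pupil-plane mask.

    spot_xy    : (n, 2) measured spot centroids, symmetric about the star
    spot_flux  : (n,) measured spot fluxes
    flux_ratio : assumed satellite-to-star flux ratio set by the mask
    """
    center = spot_xy.mean(axis=0)               # spots average to the star position
    star_flux = spot_flux.mean() / flux_ratio   # scale up by the known ratio
    return center, star_flux

# Toy example: four spots 20 px from a star hidden at (101.0, 99.5)
spots = np.array([[121.0, 99.5], [81.0, 99.5], [101.0, 119.5], [101.0, 79.5]])
fluxes = np.array([10.1, 9.9, 10.0, 10.0])
center, flux = register_from_satellites(spots, fluxes, flux_ratio=1e-3)
print(center, flux)  # recovers (101.0, 99.5) and a stellar flux of 1e4
```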

    The International Deep Planet Survey II: The frequency of directly imaged giant exoplanets with stellar mass

    Radial velocity and transit methods are effective for the study of short orbital period exoplanets but they hardly probe objects at large separations, for which direct imaging can be used. We carried out the international deep planet survey of 292 young nearby stars to search for giant exoplanets and determine their frequency. We developed a pipeline for a uniform processing of all the data that we have recorded with NIRC2/Keck II, NIRI/Gemini North, NICI/Gemini South, and NACO/VLT over 14 years. The pipeline first applies cosmetic corrections and then reduces the speckle intensity to enhance the contrast in the images. The main result of the international deep planet survey is the discovery of the HR 8799 exoplanets. We also detected 59 visual multiple systems including 16 new binary stars and 2 new triple stellar systems, as well as 2,279 point-like sources. We used Monte Carlo simulations and Bayes' theorem to determine that 1.05[+2.80-0.70]% of stars harbor at least one giant planet between 0.5 and 14M_J and between 20 and 300 AU. This result is obtained assuming uniform distributions of planet masses and semi-major axes. If we instead consider power law distributions as measured for close-in planets, the derived frequency is 2.30[+5.95-1.55]%, underscoring the strong impact of assumptions on Monte Carlo output distributions. We also find no evidence that the derived frequency depends on the mass of the hosting star, whereas it does for close-in planets. The international deep planet survey provides a database of confirmed background sources that may be useful for other exoplanet direct imaging surveys. It also puts new constraints on the number of stars with at least one giant planet, reducing the frequencies derived by almost all previous works by a factor of two. Comment: 83 pages, 13 figures, 15 Tables, accepted in A&
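    The shape of the Bayesian frequency estimate can be sketched with a binomial likelihood and a uniform prior. This is a minimal illustration of the machinery only: the detection count k=3 is a toy number, and the per-star detection completeness that the survey folds in via Monte Carlo simulations is omitted here.

```python
import numpy as np

def frequency_posterior(k, n, f_grid):
    """Posterior on the planet frequency f for k detections among n
    stars, assuming a binomial likelihood and a uniform prior on f."""
    # Work in log space for numerical stability, then normalize on the grid
    log_like = k * np.log(f_grid) + (n - k) * np.log1p(-f_grid)
    post = np.exp(log_like - log_like.max())
    return post / post.sum()

f = np.linspace(1e-4, 0.2, 20000)           # frequency grid (0.01% to 20%)
post = frequency_posterior(k=3, n=292, f_grid=f)
mode = f[np.argmax(post)]
print(round(100 * mode, 2))  # posterior mode in percent, close to 100*k/n
```

    Credible intervals like the quoted [+2.80-0.70]% would follow from the cumulative sum of such a posterior, after replacing the idealized binomial likelihood with one weighted by each star's completeness map.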

    Sorting Guilty Minds

    Because punishable guilt requires that bad thoughts accompany bad acts, the Model Penal Code (MPC) typically requires that jurors infer the past mental state of a criminal defendant. More specifically, jurors must sort that mental state into one of four specific categories - purposeful, knowing, reckless, or negligent - which in turn defines the nature of the crime and the extent of the punishment. The MPC therefore assumes that ordinary people naturally sort mental states into these four categories with a high degree of accuracy, or at least can reliably do so when properly instructed. It also assumes that ordinary people will order these categories of mental state, by increasing amount of punishment, in the same severity hierarchy that the MPC prescribes. The MPC, now turning 50 years old, has previously escaped the scrutiny of comprehensive empirical research on these assumptions underlying its culpability architecture. Our new empirical studies, reported here, find that most of the mens rea assumptions embedded in the MPC are reasonably accurate as a behavioral matter. Even without the aid of the MPC definitions, subjects were able to regularly and accurately distinguish among purposeful, negligent, and blameless conduct. Nevertheless, our subjects failed to distinguish reliably between knowing and reckless conduct. This failure can have significant sentencing consequences in some types of crimes, especially homicide.

    Amodal processing in human prefrontal cortex

    Information enters the cortex via modality-specific sensory regions, whereas actions are produced by modality-specific motor regions. Intervening central stages of information processing map sensation to behavior. Humans perform this central processing in a flexible, abstract manner such that sensory information in any modality can lead to response via any motor system. Cognitive theories account for such flexible behavior by positing amodal central information processing (e.g., "central executive," Baddeley and Hitch, 1974; "supervisory attentional system," Norman and Shallice, 1986; "response selection bottleneck," Pashler, 1994). However, the extent to which brain regions embodying central mechanisms of information processing are amodal remains unclear. Here we apply multivariate pattern analysis to functional magnetic resonance imaging (fMRI) data to compare response selection, a cognitive process widely believed to recruit an amodal central resource, across sensory and motor modalities. We show that most frontal and parietal cortical areas known to activate across a wide variety of tasks code modality, casting doubt on the notion that these regions embody a central processor devoid of modality representation. Importantly, regions of anterior insula and dorsolateral prefrontal cortex consistently failed to code modality across four experiments. However, these areas code at least one other task dimension, process (instantiated as response selection vs response execution), ensuring that failure to find coding of modality is not driven by insensitivity of multivariate pattern analysis in these regions. We conclude that abstract encoding of information modality is primarily a property of subregions of the prefrontal cortex.

    AttentionMNIST: A Mouse-Click Attention Tracking Dataset for Handwritten Numeral and Alphabet Recognition

    Multiple attention-based models that recognize objects via a sequence of glimpses have reported results on handwritten numeral recognition. However, no attention-tracking data for handwritten numeral or alphabet recognition is available. Availability of such data would allow attention-based models to be evaluated in comparison to human performance. We collect mouse-click attention tracking (mcAT) data from 382 participants trying to recognize handwritten numerals and alphabets (upper and lowercase) from images via sequential sampling. Images from benchmark datasets are presented as stimuli. The collected dataset, called AttentionMNIST, consists of a sequence of sample (mouse click) locations, predicted class label(s) at each sampling, and the duration of each sampling. On average, our participants observe only 12.8% of an image for recognition. We propose a baseline model to predict the location and the class(es) a participant will select at the next sampling. When exposed to the same stimuli and experimental conditions as our participants, a highly-cited attention-based reinforcement model falls short of human efficiency.