
    Visualizing plasmon-exciton polaritons at the nanoscale using electron microscopy

    Polaritons are composite light-matter quasiparticles that have recently enabled remarkable breakthroughs in quantum and nonlinear optics, as well as in materials science. Despite this enormous progress, however, a direct nanometer-scale visualization of polaritons has remained an open challenge. Here, we demonstrate that plasmon-exciton polaritons, or plexcitons, generated by a hybrid system composed of an individual silver nanoparticle and a few-layer transition metal dichalcogenide can be spectroscopically mapped with nanometer spatial resolution using electron energy loss spectroscopy in a scanning transmission electron microscope. Our experiments reveal previously unreported insights into the coupling process, including nanoscale variation of the Rabi splitting and of the plasmon-exciton detuning, as well as absorption-dominated extinction signals, which together provide definitive evidence for plasmon-exciton hybridization in the strong coupling regime. These findings open new possibilities for in-depth studies of polariton-related phenomena with nanometer spatial resolution.
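
    The Rabi splitting and detuning mentioned above are commonly extracted by fitting a two-level coupled-oscillator model to the measured spectra. The sketch below is a minimal illustration of that standard model, not the authors' analysis code; the exciton energy and coupling strength g are purely illustrative values.

```python
import numpy as np

def polariton_branches(E_plasmon, E_exciton, g):
    """Eigenvalues of the 2x2 coupled-oscillator Hamiltonian
    [[E_plasmon, g], [g, E_exciton]] (all energies in eV)."""
    mean = 0.5 * (E_plasmon + E_exciton)
    detuning = E_plasmon - E_exciton
    split = np.sqrt(g**2 + (detuning / 2) ** 2)
    return mean + split, mean - split  # upper and lower plexciton energies

# Illustrative values only: sweep a plasmon through a TMD exciton at 2.0 eV.
E_x, g = 2.0, 0.05
for E_p in np.linspace(1.8, 2.2, 5):
    up, lo = polariton_branches(E_p, E_x, g)
    print(f"E_p={E_p:.2f} eV  upper={up:.3f}  lower={lo:.3f}  gap={up - lo:.3f} eV")
```

    At zero detuning (E_p = E_x) the branch separation reaches its minimum, 2g, which is the Rabi splitting; an anticrossing of this kind in position-resolved EELS spectra is the usual signature of strong coupling.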

    The formation of IRIS diagnostics. III. Near-ultraviolet Spectra and Images

    The Mg II h&k lines are the prime chromospheric diagnostics of NASA's Interface Region Imaging Spectrograph (IRIS). In the previous papers of this series we used a realistic three-dimensional radiative magnetohydrodynamics model to calculate the h&k lines in detail and investigated how their spectral features relate to the underlying atmosphere. In this work, we employ the same approach to investigate how the h&k diagnostics fare when taking into account the finite resolution of IRIS and different noise levels. In addition, we investigate the diagnostic potential of several other photospheric lines and near-continuum regions present in the near-ultraviolet (NUV) window of IRIS and study the formation of the NUV slit-jaw images. We find that the instrumental resolution of IRIS has a small effect on the quality of the h&k diagnostics; the relations between the spectral features and atmospheric properties are mostly unchanged. The peak separation is the most affected diagnostic, but mainly due to limitations of the simulation. The effects of noise become noticeable at a signal-to-noise ratio (S/N) of 20, but we show that with noise filtering one can obtain reliable diagnostics at least down to a S/N of 5. The many photospheric lines present in the NUV window provide velocity information for at least eight distinct photospheric heights. Using line-free regions in the h&k far wings we derive good estimates of photospheric temperature for at least three heights. Both of these diagnostics, in particular the latter, can be obtained even at S/Ns as low as 5.
    Comment: 16 pages, 13 figures. Accepted for publication in ApJ. Updated version with fixed typos in line list and language edits.
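
    The degradation experiment described here (convolve with the instrumental profile, add noise at a chosen S/N, then filter) can be reproduced in miniature on a toy double-peaked profile. The sketch below is our own illustration, not the paper's pipeline; the profile shape, kernel widths, and S/N values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

rng = np.random.default_rng(0)

# Toy double-peaked emission profile, loosely mimicking Mg II k.
wav = np.linspace(-1.0, 1.0, 400)          # wavelength offset, arbitrary units
spec = np.exp(-(np.abs(wav) - 0.15)**2 / 0.004) + 0.2 * np.exp(-wav**2 / 0.5)

def degrade(spectrum, sigma_px, snr, rng):
    """Convolve with a Gaussian instrumental profile, then add noise
    at the requested signal-to-noise ratio."""
    smoothed = gaussian_filter1d(spectrum, sigma_px)
    return smoothed + rng.normal(0.0, smoothed.max() / snr, smoothed.size)

noisy = degrade(spec, sigma_px=3, snr=5, rng=rng)
filtered = gaussian_filter1d(noisy, 2)      # crude noise filtering

for label, s in [("clean", spec), ("S/N=5 raw", noisy), ("filtered", filtered)]:
    peaks, _ = find_peaks(s, height=0.5 * s.max(), distance=20)
    sep = wav[peaks[-1]] - wav[peaks[0]] if len(peaks) >= 2 else float("nan")
    print(f"{label:10s} peak separation: {sep:.3f}")
```

    Runs of this kind show the qualitative behavior the abstract reports: raw S/N = 5 data can lose peak-based diagnostics to spurious maxima, while modest smoothing largely recovers them.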

    Precision preparation of strings of trapped neutral atoms

    We have recently demonstrated the creation of regular strings of neutral caesium atoms in a standing wave optical dipole trap using optical tweezers [Y. Miroshnychenko et al., Nature, in press (2006)]. The rearrangement is realized atom-by-atom, extracting an atom and re-inserting it at the desired position with sub-micrometer resolution. We describe our experimental setup and present detailed measurements as well as simple analytical models for the resolution of the extraction process, for the precision of the insertion, and for heating processes. We compare two different methods of insertion, one of which permits the placement of two atoms into one optical micropotential. The theoretical models largely explain our experimental results and allow us to identify the main limiting factors for the precision and efficiency of the manipulations. Strategies for future improvements are discussed.
    Comment: 25 pages, 18 figures.
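
    The atom-by-atom rearrangement has a simple combinatorial core: deciding which loaded atom is moved to which target site. The sketch below is only a 1D planning toy under our own assumptions (a sorted one-to-one assignment, which avoids crossing moves); the optical physics of extraction, transport, and insertion that the paper actually analyzes is not modeled.

```python
def plan_moves(occupied_sites, target_sites):
    """Pair sorted sources with sorted targets so tweezers moves never
    cross; for equal counts this minimizes total 1D travel distance.
    Surplus atoms beyond the target count are left where they are."""
    src = sorted(occupied_sites)
    dst = sorted(target_sites)
    assert len(src) >= len(dst), "need at least as many atoms as target sites"
    return [(s, d) for s, d in zip(src, dst) if s != d]

# Example: 6 atoms loaded at random lattice sites, target is sites 0..5.
loaded = [2, 3, 7, 11, 12, 18]
moves = plan_moves(loaded, range(6))
print(moves)   # [(2, 0), (3, 1), (7, 2), (11, 3), (12, 4), (18, 5)]
```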

    Providing stringent star formation rate limits of z∼2 QSO host galaxies at high angular resolution

    We present integral field spectrograph (IFS) observations with laser guide star adaptive optics (LGS-AO) of z=2 quasi-stellar objects (QSOs), designed to resolve extended nebular line emission from the host galaxy. Our data were obtained at the W. M. Keck and Gemini-North Observatories using OSIRIS and NIFS coupled with the LGS-AO systems. We have conducted a pilot survey of five QSOs, three observed with NIFS+AO and two observed with OSIRIS+AO, at an average redshift of z=2.15. We demonstrate that the combination of AO and IFS provides the spatial and spectral resolution required to separate QSO emission from that of its host. We present our technique for generating a PSF from the broad-line region of the QSO and performing PSF subtraction of the QSO emission to detect the host galaxy. We detect Hα and [NII] for two sources, SDSS J1029+6510 and SDSS J0925+06, that have both star formation and extended narrow-line emission. Assuming that the majority of the narrow-line Hα is from star formation, we infer a star formation rate for SDSS J1029+6510 of 78.4 M⊙ yr⁻¹ originating from a compact region that is kinematically offset by 290-350 km/s. For SDSS J0925+06 we infer a star formation rate of 29 M⊙ yr⁻¹ distributed over three clumps that are spatially offset by ∼7 kpc. The null detections for three of the QSOs are used to infer surface brightness limits, and we find that at a distance of 1.4 kpc from the QSO the unreddened star formation limit is < 0.3 M⊙ yr⁻¹ kpc⁻². If we assume typical extinction values for z=2 type-1 QSOs, the dereddened star formation rate limit for our null detections would be < 0.6 M⊙ yr⁻¹ kpc⁻². These IFS observations indicate that if star formation is present in the host it would have to occur diffusely, with significant extinction, and not in compact, clumpy regions.
    Comment: 17 pages, 7 figures, 7 tables. Accepted to ApJ.
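
    The PSF-subtraction step lends itself to a compact illustration. The sketch below is a generic version of the technique the abstract describes (build an empirical PSF from channels dominated by unresolved broad-line emission, then fit and subtract a scaled copy from every spectral channel); the array shapes, names, and least-squares scaling are our assumptions, not the authors' pipeline.

```python
import numpy as np

def psf_subtract_cube(cube, broad_line_channels):
    """Toy quasar PSF subtraction for an IFS cube of shape (n_wav, ny, nx).

    broad_line_channels: indices dominated by unresolved broad-line
    emission, from which the empirical PSF image is built.
    """
    psf = cube[broad_line_channels].mean(axis=0)
    psf /= psf.sum()
    residual = np.empty_like(cube)
    for i, channel in enumerate(cube):
        # Least-squares amplitude of the PSF in this channel; subtracting
        # the scaled PSF leaves any extended host emission in the residual.
        amp = (channel * psf).sum() / (psf * psf).sum()
        residual[i] = channel - amp * psf
    return residual

# Usage with a fake cube: a pure point source in 100 channels.
yy, xx = np.mgrid[-16:16, -16:16]
point = np.exp(-(xx**2 + yy**2) / 18.0)
cube = np.repeat(point[None], 100, axis=0) * np.linspace(1, 2, 100)[:, None, None]
resid = psf_subtract_cube(cube, broad_line_channels=np.arange(40, 60))
print(float(np.abs(resid).max()))   # ~0: a perfect point source subtracts away
```

    For a perfect point source the residual vanishes, so any extended flux left over in the narrow-line channels is a candidate host-galaxy detection.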

    Molecular Contrast Optical Coherence Tomography: A Review

    This article reviews the current state of research on the use of molecular contrast agents in optical coherence tomography (OCT) imaging techniques. After a brief discussion of the basic principle of OCT and the importance of incorporating molecular contrast agents into this imaging modality, we shall present an overview of the different molecular contrast OCT (MCOCT) methods that have been developed thus far. We will then discuss several important practical issues that define the possible range of contrast agent choices, the design criteria for engineered molecular contrast agents, and the implementability of a given MCOCT method for clinical or biological applications. We will conclude by outlining a few areas of pursuit that deserve a greater degree of research and development.
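
    The basic OCT principle the review builds on can be shown in a few lines: reflectors at different depths modulate the source spectrum with different fringe frequencies, and a Fourier transform over wavenumber recovers the depth profile (A-scan). The sketch below illustrates that spectral-domain picture with invented numbers; it is not drawn from the review.

```python
import numpy as np

k = np.linspace(6.0, 8.0, 2048)                 # wavenumber grid, 1/um
source = np.exp(-(k - 7.0) ** 2 / 0.08)         # Gaussian source spectrum
depths_um, reflectivities = [50.0, 120.0], [1.0, 0.4]

# Each reflector at depth z adds fringes cos(2*k*z) under the source envelope.
interferogram = np.zeros_like(k)
for z, r in zip(depths_um, reflectivities):
    interferogram += r * source * np.cos(2 * k * z)

# FFT over k recovers the A-scan; fringe frequency f maps to depth z = pi*f.
a_scan = np.abs(np.fft.fft(interferogram))
z_axis = np.pi * np.fft.fftfreq(k.size, d=k[1] - k[0])
half = k.size // 2
peak = np.argmax(a_scan[:half])
print(f"strongest A-scan peak at depth ~{z_axis[peak]:.0f} um (true: 50 um)")
```

    Molecular contrast methods add a further step on top of this structural signal: the agent's absorption or scattering change is encoded in, and then demodulated from, such A-scans.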

    Analysis of the quality of image data acquired by the LANDSAT-4 thematic mapper and multispectral scanners

    Image products and numeric data were extracted from both TM and MSS data in an effort to evaluate the quality of these data for interpreting major agricultural resources and conditions in California's Central Valley. The utility of TM data appears excellent for meeting most of the inventory objectives of the agricultural resource specialist. These data should be extremely valuable for crop type and area proportion estimation, for updating agricultural land use survey maps at 1:24,000 scale and smaller, for field boundary definition, and for determining the size and location of individual farmsteads.

    Dense semantic labeling of sub-decimeter resolution images with convolutional neural networks

    Semantic labeling (or pixel-level land-cover classification) of ultra-high resolution imagery (< 10 cm) requires statistical models able to learn high-level concepts from spatial data with large appearance variations. Convolutional Neural Networks (CNNs) achieve this goal by discriminatively learning a hierarchy of representations of increasing abstraction. In this paper we present a CNN-based system relying on a downsample-then-upsample architecture. Specifically, it first learns a rough spatial map of high-level representations by means of convolutions and then learns to upsample this map back to the original resolution by deconvolutions. By doing so, the CNN learns to densely label every pixel at the original resolution of the image. This yields many advantages, including i) state-of-the-art numerical accuracy, ii) improved geometric accuracy of predictions, and iii) high efficiency at inference time. We test the proposed system on the Vaihingen and Potsdam sub-decimeter resolution datasets, involving semantic labeling of aerial images at 9 cm and 5 cm resolution, respectively. These datasets consist of many large, fully annotated tiles, allowing an unbiased evaluation of models making use of spatial information. We compare two standard CNN architectures to the proposed one: standard patch classification, prediction of local label patches employing only convolutions, and full patch labeling employing deconvolutions. All the systems compare favorably to or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors. The proposed full patch labeling CNN outperforms these models by a large margin, while also showing a very appealing inference time.
    Comment: Accepted in IEEE Transactions on Geoscience and Remote Sensing, 2016.
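
    A minimal PyTorch sketch of a downsample-then-upsample network of this kind is shown below. The layer count, channel widths, and six-class output are illustrative assumptions, not the paper's exact architecture; the point is the structure: strided convolutions shrink the spatial map of features, and transposed convolutions ("deconvolutions") restore it so every pixel receives class scores.

```python
import torch
import torch.nn as nn

class DownUpNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=6):
        super().__init__()
        # Downsample: learn a coarse map of high-level representations.
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Upsample: transposed convolutions recover the input resolution.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.up(self.down(x))   # dense per-pixel class scores

x = torch.randn(1, 3, 128, 128)        # one 128x128 RGB tile
print(DownUpNet()(x).shape)            # torch.Size([1, 6, 128, 128])
```

    Because a single forward pass labels an entire tile, inference is far cheaper than classifying one patch per pixel, which is the efficiency advantage the abstract highlights.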

    Image Understanding at the GRASP Laboratory

    Research in the GRASP Laboratory has two main themes: parameterized multi-dimensional segmentation and robust decision making under uncertainty. The multi-dimensional approach interweaves segmentation with representation. The data are explained as a best fit in terms of parametric primitives. These primitives are based on physical and geometric properties of objects and are limited in number. We use primitives at the volumetric level, the surface level, and the occluding-contour level, and combine the results. The robust decision making allows us to combine data from multiple sensors. Sensor measurements have bounds based on the physical limitations of the sensors. We use this information without making a priori assumptions about distributions within the intervals or about the probability of a given result.
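
    The interval-based fusion idea can be illustrated in a few lines: each sensor reports a value with hard bounds from its physical limitations, and consistent sensors are combined by intersecting their intervals, with no distribution assumed inside them. The sketch below is our own toy version; the names and numbers are invented.

```python
def fuse(intervals):
    """Intersect measurement intervals (lo, hi); None if they conflict."""
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return (lo, hi) if lo <= hi else None

# Three sensors bounding the same depth (in mm), each with its own limits.
readings = [(9.0, 11.0), (9.5, 12.0), (8.0, 10.5)]
print(fuse(readings))                  # (9.5, 10.5): all sensors consistent
print(fuse([(1.0, 2.0), (3.0, 4.0)]))  # None: the sensors disagree
```

    An empty intersection flags an inconsistent sensor, which is exactly the kind of robust check such a framework supports without committing to a probability model.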