
    Design, fabrication and evaluation of chalcogenide glass Luneburg lenses for LiNbO3 integrated optical devices

    Optical waveguide Luneburg lenses of arsenic trisulfide glass are described. The lenses are formed by thermal evaporation of As2S3 through suitably placed masks onto the surface of LiNbO3:Ti indiffused waveguides. The lenses are designed for input apertures of up to 1 cm and for speeds of f/5 or better, and focus the TM0 guided mode of a beam whose wavelength, external to the guide, is 633 nm. The refractive index of the As2S3 films, and the changes induced in it by exposure to short-wavelength light, were measured; some correlation between film thickness and optical properties was noted. The short-wavelength photosensitivity was used to shorten the lens focal length from its as-deposited value. Lenses of rectangular shape (as viewed from above the guide), as well as conventional circular Luneburg lenses, were made. Measurements made on the lenses include thickness profile, general optical quality, focal length, quality of the focal spot, and the effect of ultraviolet irradiation on optical properties.
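
    For context, a standard result not specific to this work: an ideal full-aperture Luneburg lens has a radially graded index that brings a collimated beam to a focus on the lens rim. In the waveguide version, the role of n(r) is played by the local effective mode index, set here by the thickness profile of the As2S3 overlay; practical designs with an external focal point (as needed for f/5 operation) use generalized profiles computed numerically. With the index normalized to the background guide value, the classical profile reads

    n(r) = \sqrt{\,2 - (r/R)^{2}\,}, \qquad 0 \le r \le R,

    so that n(R) = 1 at the rim and n(0) = \sqrt{2} at the lens centre.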

    Traction force microscopy with optimized regularization and automated Bayesian parameter selection for comparing cells

    Adherent cells exert traction forces onto their environment, which allows them to migrate, to maintain tissue integrity, and to form complex multicellular structures. This traction can be measured in a perturbation-free manner with traction force microscopy (TFM). In TFM, traction is usually calculated via the solution of a linear system, which is complicated by undersampled input data, acquisition noise, and large condition numbers for some methods. Therefore, standard TFM algorithms employ either data filtering or regularization. However, these approaches require a manual selection of filter or regularization parameters and consequently exhibit a substantial degree of subjectivity. This shortcoming is particularly serious when cells in different conditions are to be compared, because optimal noise suppression needs to be adapted for every situation, which invariably results in systematic errors. Here, we systematically test the performance of new methods from computer vision and Bayesian inference for solving the inverse problem in TFM. We compare two classical schemes, L1- and L2-regularization, with three previously untested schemes, namely Elastic Net regularization, Proximal Gradient Lasso, and Proximal Gradient Elastic Net. Overall, we find that Elastic Net regularization, which combines L1 and L2 regularization, outperforms all other methods with regard to the accuracy of traction reconstruction. Next, we develop two methods, Bayesian L2 regularization and Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization. Using artificial and experimental data, we show that these methods enable robust reconstruction of traction without requiring a difficult selection of regularization parameters specifically for each data set. Thus, Bayesian methods can mitigate the considerable uncertainty inherent in comparing cellular traction forces.
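
    As an illustration of the regularized inversion being compared, the sketch below reconstructs traction t from measured displacements u related by a precomputed forward matrix G (u ≈ G t) using off-the-shelf Elastic Net regression. It is a minimal toy example, not the authors' implementation; the matrix G, the penalty strength alpha, and the L1/L2 mixing ratio are placeholder assumptions.

    import numpy as np
    from sklearn.linear_model import ElasticNet

    def reconstruct_traction(G, u, alpha=1e-3, l1_ratio=0.5):
        """Estimate traction t from displacements u, where u ~= G @ t.

        G        : (n_disp, n_trac) forward (Green's function) matrix, assumed precomputed
        u        : (n_disp,) measured substrate displacements
        alpha    : overall penalty strength (illustrative value)
        l1_ratio : mix between the L1 (sparsity) and L2 (smoothness) penalties
        """
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                           fit_intercept=False, max_iter=10_000)
        model.fit(G, u)
        return model.coef_

    # toy usage with synthetic data: a sparse traction pattern plus measurement noise
    rng = np.random.default_rng(0)
    G = rng.normal(size=(200, 100))
    t_true = np.zeros(100)
    t_true[::10] = 1.0
    u = G @ t_true + 0.05 * rng.normal(size=200)
    t_hat = reconstruct_traction(G, u)

    Choosing alpha and l1_ratio by hand is exactly the subjective step that the Bayesian schemes described in the abstract aim to remove.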

    The coronagraphic Modal Wavefront Sensor: a hybrid focal-plane sensor for the high-contrast imaging of circumstellar environments

    The raw coronagraphic performance of current high-contrast imaging instruments is limited by the presence of a quasi-static speckle (QSS) background resulting from instrumental non-common path errors (NCPEs). Rapid development of efficient speckle subtraction techniques in data reduction has enabled final contrasts of up to 10^-6 to be obtained; however, it remains preferable to eliminate the underlying NCPEs at the source. In this work we introduce the coronagraphic Modal Wavefront Sensor (cMWS), a new wavefront sensor suitable for real-time NCPE correction. This pupil-plane optic combines the apodizing phase plate coronagraph with a holographic modal wavefront sensor to provide simultaneous coronagraphic imaging and focal-plane wavefront sensing using the science point spread function. We first characterise the baseline performance of the cMWS via idealised closed-loop simulations, showing that the sensor successfully recovers diffraction-limited coronagraph performance over an effective dynamic range of +/-2.5 radians root-mean-square (RMS) wavefront error within 2-10 iterations. We then present the results of initial on-sky testing at the William Herschel Telescope and demonstrate that the sensor is able to retrieve injected wavefront aberrations to an accuracy of 10 nm RMS under realistic seeing conditions. We also find that the cMWS is capable of real-time broadband measurement of atmospheric wavefront variance at a cadence of 50 Hz across an uncorrected telescope sub-aperture. When combined with a suitable closed-loop adaptive optics system, the cMWS holds the potential to deliver an improvement in raw contrast of up to two orders of magnitude over the uncorrected QSS floor. Such a sensor would be eminently suitable for the direct imaging and spectroscopy of exoplanets with both existing and future instruments, including EPICS and METIS for the E-ELT.
    Comment: 14 pages, 12 figures; accepted for publication in Astronomy & Astrophysics
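
    The modal readout behind holographic modal wavefront sensors can be sketched as follows: each sensed mode produces a pair of oppositely biased PSF copies, and the normalized difference of their integrated intensities is, to first order, proportional to that mode's coefficient. The sketch below is a schematic illustration of this principle only, not the cMWS pipeline; the per-mode gains, the placeholder spot intensities, and the simple integrator loop are assumptions for the example.

    import numpy as np

    def modal_coefficients(I_plus, I_minus, gain):
        """First-order modal estimate from biased holographic spot pairs.

        I_plus, I_minus : integrated intensities of the +bias / -bias spots, per mode
        gain            : per-mode calibration factors (assumed known from calibration)
        """
        I_plus = np.asarray(I_plus, dtype=float)
        I_minus = np.asarray(I_minus, dtype=float)
        return np.asarray(gain) * (I_plus - I_minus) / (I_plus + I_minus)

    # illustrative closed-loop update: a plain integrator driving the residual NCPE to zero
    loop_gain = 0.3
    correction = np.zeros(10)                      # current modal correction applied upstream
    coeffs = modal_coefficients(np.full(10, 1.1),  # placeholder spot intensities
                                np.full(10, 0.9),
                                gain=1.0)
    correction -= loop_gain * coeffs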

    Horizontal accuracy assessment of very high resolution Google Earth images in the city of Rome, Italy

    Google Earth (GE) has recently become the focus of increasing interest and popularity among the online virtual globes used in scientific research projects, owing to the free and easily accessed satellite imagery it provides with global coverage. Nevertheless, the use of this service raises several research questions on the quality and uncertainty of spatial data (e.g. positional accuracy, precision, consistency), with implications for potential uses such as data collection and validation. This paper aims to analyze the horizontal accuracy of very high resolution (VHR) GE images in the city of Rome (Italy) for the years 2007, 2011, and 2013. The evaluation was conducted by using both Global Positioning System ground-truth data and cadastral photogrammetric vertices as independent check points. The validation process includes the comparison of histograms, graph plots, tests of normality, azimuthal direction errors, and the calculation of standard statistical parameters. The results show that the GE VHR imagery of Rome has an overall positional accuracy close to 1 m, sufficient for deriving ground-truth samples, measurements, and large-scale planimetric maps.
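
    A minimal sketch of the kind of check-point comparison described above: given planar coordinates of the same points measured on the GE imagery and from the GPS/cadastral reference, it computes the per-point horizontal error, its azimuthal direction, and the usual summary statistics. Function and variable names are illustrative, and coordinates are assumed to be in a projected metric system (e.g. UTM).

    import numpy as np

    def horizontal_accuracy(ge_xy, ref_xy):
        """Summary statistics of horizontal error at independent check points.

        ge_xy, ref_xy : (n, 2) arrays of easting/northing coordinates (metres)
                        from the GE imagery and the reference survey, respectively.
        """
        d = np.asarray(ge_xy, float) - np.asarray(ref_xy, float)
        dist = np.hypot(d[:, 0], d[:, 1])                         # per-point horizontal error
        azimuth = np.degrees(np.arctan2(d[:, 0], d[:, 1])) % 360  # error direction, degrees from north
        return {
            "mean_m": dist.mean(),
            "rmse_m": np.sqrt(np.mean(dist ** 2)),
            "std_m": dist.std(ddof=1),
            "azimuth_deg": azimuth,
        }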

    The Sensing Capacity of Sensor Networks

    This paper demonstrates fundamental limits of sensor networks for detection problems in which the number of hypotheses is exponentially large. Such problems characterize many important applications, including the detection and classification of targets in a geographical area using a network of sensors, and the detection of complex substances with a chemical sensor array. We refer to such applications as large-scale detection problems. Using the insight that these problems share fundamental similarities with the problem of communicating over a noisy channel, we define a quantity called the sensing capacity and lower bound it for a number of sensor network models. The sensing capacity expression differs significantly from the channel capacity due to the fact that a fixed sensor configuration encodes all states of the environment. As a result, codewords are dependent and non-identically distributed. The sensing capacity provides a bound on the minimal number of sensors required to detect the state of an environment to within a desired accuracy. The results differ significantly from classical detection theory and provide an intriguing connection between sensor networks and communications. In addition, we discuss the insight that sensing capacity provides for the problem of sensor selection.
    Comment: Submitted to IEEE Transactions on Information Theory, November 200
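
    An illustrative paraphrase of how such a bound is used (not the paper's exact statement): if the environment is described by n discrete variables, so that the number of hypotheses M grows as roughly 2^n, and the network achieves a sensing capacity of C bits per sensor measurement, then reliable detection to the desired accuracy requires on the order of

    k \;\gtrsim\; \frac{\log_2 M}{C} \;=\; \frac{n}{C}

    sensors, with each sensor measurement playing the role of one channel use in the communication analogy.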