
    Dose, exposure time, and resolution in Serial X-ray Crystallography

    The resolution of X-ray diffraction microscopy is limited by the maximum dose that can be delivered prior to sample damage. In the proposed Serial Crystallography method, the damage problem is addressed by distributing the total dose over many identical hydrated macromolecules running continuously in a single-file train across a continuous X-ray beam, and resolution is then limited only by the available molecular and X-ray fluxes and molecular alignment. Orientation of the diffracting molecules is achieved by laser alignment. We evaluate the incident X-ray fluence (energy/area) required to obtain a given resolution from (1) an analytical model, giving the count rate at the maximum scattering angle for a model protein, (2) explicit simulation of diffraction patterns for a GroEL-GroES protein complex, and (3) the frequency cut-off of the transfer function following iterative solution of the phase problem, and reconstruction of an electron density map in the projection approximation. These calculations include counting shot noise and multiple starts of the phasing algorithm. The results indicate the counting time and the number of proteins needed within the beam at any instant for a given resolution and X-ray flux. We confirm an inverse fourth power dependence of exposure time on resolution, with important implications for all coherent X-ray imaging. We find that multiple single-file protein beams will be needed for sub-nanometer resolution on current third generation synchrotrons, but not on fourth generation designs, where reconstruction of secondary protein structure at a resolution of 0.7 nm should be possible with short exposures. Comment: 19 pages, 7 figures, 1 table
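
    The inverse fourth power result quoted above can be stated compactly. Writing d for the target resolution length and t for the exposure time needed to reach a fixed signal level at the corresponding maximum scattering angle, the scaling (a sketch of the stated relation, not the paper's full derivation) is:

```latex
% Sketch of the inverse fourth power scaling stated in the abstract:
% t = exposure time, d = resolution length (smaller d = finer resolution).
t \propto d^{-4}
\qquad\Longrightarrow\qquad
\frac{t_2}{t_1} = \left(\frac{d_1}{d_2}\right)^{4}
% Example: halving d from 1.4 nm to 0.7 nm costs (1.4/0.7)^4 = 16 times the exposure
% (or, equivalently, 16 times the combined molecular and X-ray flux).
```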

    Evaluating and combining digital video shot boundary detection algorithms

    The development of standards for video encoding, coupled with the increased power of computing, means that content-based manipulation of digital video information is now feasible. Shots are a basic structural building block of digital video, and the boundaries between shots need to be determined automatically to allow for content-based manipulation. A shot can be thought of as a continuous sequence of images from one camera at a time. In this paper we examine a variety of automatic techniques for shot boundary detection that we have implemented and evaluated on a baseline of 720,000 frames (8 hours) of broadcast television. This extends our previous work on evaluating a single technique based on comparing colour histograms. A description of each of our three currently working methods is given, along with how they are evaluated. We find that although the different methods are of roughly the same order of magnitude in effectiveness, they detect different shot boundaries. We then look at combining the three shot boundary detection methods to produce a single output result, and at the benefits in accuracy and performance that this brought to our system. Each method was changed from using a static threshold value, as one of three unconnected methods, to using three dynamic threshold values within one connected method. In a final summing up we look at future directions for this work.
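
    As an illustration of the colour-histogram approach with a dynamic threshold described above, the following sketch flags a shot boundary wherever the inter-frame histogram difference exceeds a locally adaptive threshold. The function names, bin count, window size, and the mean-plus-k-standard-deviations rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of colour-histogram shot-boundary detection with a
# dynamic (adaptive) threshold; names and parameters are illustrative.
import numpy as np

def histogram_difference(frame_a, frame_b, bins=64):
    """Sum of absolute differences between normalised per-channel colour histograms."""
    diff = 0.0
    for channel in range(3):  # R, G, B
        ha, _ = np.histogram(frame_a[..., channel], bins=bins, range=(0, 255), density=True)
        hb, _ = np.histogram(frame_b[..., channel], bins=bins, range=(0, 255), density=True)
        diff += np.abs(ha - hb).sum()
    return diff

def detect_shot_boundaries(frames, window=25, k=3.0):
    """Flag a boundary where the inter-frame difference exceeds a dynamic
    threshold: mean + k * std of the differences in a trailing window."""
    diffs = np.array([histogram_difference(a, b) for a, b in zip(frames, frames[1:])])
    boundaries = []
    for i, d in enumerate(diffs):
        lo, hi = max(0, i - window), i
        local = diffs[lo:hi] if hi > lo else diffs[:1]
        if d > local.mean() + k * local.std():
            boundaries.append(i + 1)  # boundary lies just before frame i + 1
    return boundaries
```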

    Horizontal accuracy assessment of very high resolution Google Earth images in the city of Rome, Italy

    Google Earth (GE) has recently become the focus of increasing interest and popularity among the online virtual globes used in scientific research projects, owing to the free and easily accessed satellite imagery it provides with global coverage. Nevertheless, the use of this service raises several research questions on the quality and uncertainty of its spatial data (e.g. positional accuracy, precision, consistency), with implications for potential uses such as data collection and validation. This paper analyzes the horizontal accuracy of very high resolution (VHR) GE images of the city of Rome (Italy) for the years 2007, 2011, and 2013. The evaluation was conducted using both Global Positioning System ground truth data and cadastral photogrammetric vertices as independent check points. The validation process includes the comparison of histograms, graph plots, tests of normality, azimuthal direction errors, and the calculation of standard statistical parameters. The results show that the GE VHR imagery of Rome has an overall positional accuracy close to 1 m, sufficient for deriving ground truth samples, measurements, and large-scale planimetric maps.
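
    The kind of horizontal-accuracy summary described above (planimetric error magnitudes, RMSE, and the azimuth of each error vector) can be computed as in the following sketch. The function name, the CE90 percentile summary, and the assumption of projected coordinates in metres are illustrative, not the paper's exact procedure.

```python
# Sketch of a horizontal-accuracy assessment against independent check points.
import numpy as np

def horizontal_accuracy(ge_xy, ref_xy):
    """ge_xy, ref_xy: (N, 2) arrays of projected (easting, northing) coordinates in metres
    for the image-derived points and the reference check points, respectively."""
    ge_xy, ref_xy = np.asarray(ge_xy, float), np.asarray(ref_xy, float)
    dx, dy = (ge_xy - ref_xy).T                  # per-point error components (E, N)
    dist = np.hypot(dx, dy)                      # planimetric error magnitude
    rmse = np.sqrt(np.mean(dist ** 2))           # horizontal RMSE
    # Azimuth of each error vector, measured clockwise from north (0-360 deg).
    azimuth = (np.degrees(np.arctan2(dx, dy)) + 360.0) % 360.0
    return {"mean_error_m": dist.mean(),
            "rmse_m": rmse,
            "ce90_m": np.percentile(dist, 90),   # 90th-percentile circular error
            "azimuth_deg": azimuth}
```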

    GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging

    Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same: a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. In many scientific applications, however, the number of projections that can be measured is limited by geometric constraints, tolerable radiation dose and/or acquisition speed. It therefore becomes an important problem to obtain the best possible reconstruction from a limited number of projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE). By iterating between real and reciprocal space, GENFIRE searches for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations, and experimentally by reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. Equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines. Comment: 18 pages, 6 figures
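
    The core iteration between real and reciprocal space can be illustrated with a deliberately simplified sketch: measured Fourier samples (assumed already gridded from the 2D projections onto a 3D grid at indices measured_idx) are re-imposed each cycle, and positivity is enforced in real space. This is an illustration of the general idea only, not the released GENFIRE implementation, which also incorporates angular refinement and other features described in the abstract.

```python
# Simplified real/reciprocal-space iteration in the spirit of Fourier-based
# iterative tomographic reconstruction; gridding of projections is assumed done.
import numpy as np

def fourier_iterative_reconstruction(measured_idx, measured_kvals, shape, n_iter=100):
    """measured_idx: tuple of index arrays into the 3D Fourier grid.
    measured_kvals: complex Fourier samples gridded from the 2D projections."""
    recon = np.zeros(shape)
    for _ in range(n_iter):
        F = np.fft.fftn(recon)
        F[measured_idx] = measured_kvals      # data consistency in reciprocal space
        recon = np.real(np.fft.ifftn(F))      # return to real space
        recon[recon < 0] = 0.0                # physical constraint: positivity
    return recon
```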

    The IPAC Image Subtraction and Discovery Pipeline for the intermediate Palomar Transient Factory

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operation at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, "bogus" candidates from processing artifacts and imperfect image subtractions outnumber real transients by ~10:1. This ratio can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of ~97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF. Comment: 66 pages, 21 figures, 7 tables, accepted by PASP
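
    The completeness-at-fixed-false-positive-rate figure quoted above can be illustrated with a small evaluation sketch: given real-bogus scores and ground-truth labels for a set of candidates, it scans score thresholds and reports the efficiency achievable while the false-positive rate stays below a tolerance. The function name and scanning scheme are illustrative; this shows the metric, not the iPTF pipeline code.

```python
# Sketch: efficiency (completeness) at a maximum tolerable false-positive rate.
import numpy as np

def efficiency_at_fpr(scores, labels, max_fpr=0.01):
    """scores: ML real-bogus scores in [0, 1]; labels: 1 = real, 0 = bogus."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    thresholds = np.unique(scores)[::-1]              # scan strictest to loosest
    best_eff, best_thresh = 0.0, 1.0
    for t in thresholds:
        predicted_real = scores >= t
        fpr = predicted_real[labels == 0].mean()      # fraction of bogus passed
        if fpr <= max_fpr:
            best_eff = predicted_real[labels == 1].mean()  # fraction of real kept
            best_thresh = t
        else:
            break                                     # looser thresholds only raise FPR
    return best_eff, best_thresh
```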

    A numerical procedure for recovering true scattering coefficients from measurements with wide-beam antennas

    A numerical procedure is presented for estimating the true scattering coefficient, σ⁰, from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of σ⁰ if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure is proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of σ⁰. An exponential model is assumed to account for the variation of σ⁰ with incidence angle, and the model parameters are estimated from the measured data. Based on this model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ⁰ obtained with wide-beam antennas, and is also shown to be insensitive to the assumed σ⁰ model.
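
    A minimal sketch of the correction idea follows: with an exponential angular model σ⁰(θ) = A·exp(−bθ) and a known antenna gain pattern, the bias introduced by the narrow-beam approximation can be computed as the pattern-weighted mean of σ⁰ relative to its boresight value, and then divided out. The Gaussian pattern, the parameter values, and the neglect of range and area variation across the footprint are simplifying assumptions, not the paper's full procedure.

```python
# Sketch: estimate and remove the narrow-beam-approximation bias for a
# wide-beam antenna, assuming an exponential sigma^0(theta) model.
import numpy as np

def narrow_beam_bias(theta0_deg, b_per_deg, beamwidth_deg, n=2001):
    """Ratio of the gain-weighted mean of sigma^0 across the footprint to its
    value at the boresight incidence angle theta0 (range/area factors ignored)."""
    theta = np.linspace(theta0_deg - 3 * beamwidth_deg,
                        theta0_deg + 3 * beamwidth_deg, n)
    # Illustrative Gaussian power pattern with the given half-power beamwidth.
    gain = np.exp(-4 * np.log(2) * ((theta - theta0_deg) / beamwidth_deg) ** 2)
    weights = gain / np.trapz(gain, theta)
    # Exponential model relative to boresight; the amplitude A cancels in the ratio.
    sigma_rel = np.exp(-b_per_deg * (theta - theta0_deg))
    return np.trapz(weights * sigma_rel, theta)

# Example: correct a narrow-beam estimate made at 40 deg incidence with a
# 12 deg beamwidth, assuming sigma^0 falls off as exp(-0.1 * theta) (illustrative values).
sigma0_narrow = 0.05
sigma0_corrected = sigma0_narrow / narrow_beam_bias(40.0, 0.1, 12.0)
```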