
    h5fortran: object-oriented polymorphic Fortran interface for HDF5 file IO

    h5fortran provides an object-oriented and functional interface to the HDF5 library for Fortran. h5fortran prioritizes ease of use, robust self-tests, and Fortran 2008 standard syntax for broad compiler, operating-system, and computing-platform support, from Raspberry Pi to HPC. https://engrxiv.org/u85s4 First author draft.
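A rough Python/h5py analogue can illustrate the kind of polymorphic, high-level read/write interface the abstract describes (h5fortran itself is Fortran; the file and dataset names below are purely illustrative, not part of h5fortran):

```python
import numpy as np
import h5py

# Write datasets of different types and ranks through one uniform syntax
# (a sketch of the "polymorphic" convenience h5fortran aims for, expressed
# with h5py; names are illustrative).
with h5py.File("demo.h5", "w") as f:
    f["scalar"] = 42                            # integer scalar dataset
    f["matrix"] = np.arange(6.0).reshape(2, 3)  # 2-D float dataset

with h5py.File("demo.h5", "r") as f:
    s = f["scalar"][()]   # type and rank are inferred on read
    m = f["matrix"][:]

print(s, m.shape)
```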

    Belief Semantics of Authorization Logic

    Authorization logics have been used in the theory of computer security to reason about access control decisions. In this work, a formal belief semantics for authorization logics is given. The belief semantics is proved to subsume a standard Kripke semantics. The belief semantics yields a direct representation of principals' beliefs, without resorting to the technical machinery used in Kripke semantics. A proof system is given for the logic; that system is proved sound with respect to both the belief and Kripke semantics. The soundness proof for the belief semantics, and for a variant of the Kripke semantics, is mechanized in Coq.

    Nexus Authorization Logic (NAL): Logical Results

    Nexus Authorization Logic (NAL) [Schneider et al. 2011] is a logic for reasoning about authorization in distributed systems. A revised version of NAL is given here, including revised syntax, a revised proof theory using localized hypotheses, and a new Kripke semantics. The proof theory is proved sound with respect to the semantics, and that proof is formalized in Coq.
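The Kripke semantics both abstracts refer to evaluates formulas over possible worlds: "A says φ" holds at a world when φ holds at every world the principal A considers possible from there. A minimal sketch of that evaluation rule (the formula encoding is illustrative, not the NAL/Coq formalization):

```python
# Minimal Kripke-style evaluator for an authorization-logic "says" modality.
# access maps each principal to its accessibility relation (a set of
# (world, world) pairs); val maps (atom, world) to a truth value.
def holds(world, formula, access, val):
    op = formula[0]
    if op == "atom":
        return val[(formula[1], world)]
    if op == "and":
        return holds(world, formula[1], access, val) and \
               holds(world, formula[2], access, val)
    if op == "says":  # A says phi: phi holds in every world A deems possible
        principal, sub = formula[1], formula[2]
        return all(holds(v, sub, access, val)
                   for (w, v) in access[principal] if w == world)
    raise ValueError(f"unknown operator: {op}")

# Two worlds; principal A considers world 1 possible from world 0.
access = {"A": {(0, 1)}}
val = {("p", 0): False, ("p", 1): True}
print(holds(0, ("says", "A", ("atom", "p")), access, val))  # True
```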

    Automatic Estimation of Modulation Transfer Functions

    The modulation transfer function (MTF) is widely used to characterise the performance of optical systems. Measuring it is costly, and it is thus rarely available for a given lens specimen. Instead, MTFs based on simulations or, at best, MTFs measured on other specimens of the same lens are used. Fortunately, images recorded through an optical system contain ample information about its MTF, although it is confounded with the statistics of the images. This work presents a method to estimate the MTF of camera lens systems directly from photographs, without the need for expensive equipment. We use a custom grid display to accurately measure the point response of lenses to acquire ground-truth training data. We then use the same lenses to record natural images and employ a data-driven supervised learning approach using a convolutional neural network to estimate the MTF on small image patches, aggregating the information into MTF charts over the entire field of view. It generalises to unseen lenses and can be applied to single photographs, with performance improving if multiple photographs are available.
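For contrast with the learned approach, the classical route from a measured point response to the MTF is a Fourier transform: the MTF is the normalized magnitude of the optical transfer function. A small sketch with a synthetic Gaussian point spread function (the PSF here stands in for the ground truth the authors measure with their grid display):

```python
import numpy as np

def mtf_from_psf(psf):
    """Classical MTF: magnitude of the PSF's Fourier transform, DC-normalized."""
    otf = np.fft.fft2(psf)       # optical transfer function
    mtf = np.abs(otf)
    return mtf / mtf[0, 0]       # normalize so zero spatial frequency = 1

# Synthetic Gaussian blur kernel standing in for a measured point response.
x = np.arange(-16, 16)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))

mtf = mtf_from_psf(psf)
print(mtf[0, 0])  # 1.0 at zero frequency; contrast falls off at higher frequencies
```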

    PyGEMINI: PyHC community update

    Published version.

    Alfvén waves underlying ionospheric destabilization: ground-based observations

    During geomagnetic storms, terawatts of power in the million-mile-per-hour solar wind pierce the Earth's magnetosphere. Geomagnetic storms and substorms create transverse magnetic waves known as Alfvén waves. In the auroral acceleration region, Alfvén waves accelerate electrons up to one-tenth the speed of light via wave-particle interactions. These inertial Alfvén wave (IAW) accelerated electrons are imbued with sub-100 meter structure perpendicular to the geomagnetic field B. The IAW electric field parallel to B accelerates electrons up to about 10 keV along B. The IAW dispersion relation quantifies the precipitating electron striation observed with high-speed cameras as spatiotemporally dynamic, finely structured aurora. A network of tightly synchronized tomographic auroral observatories using model-based iterative reconstruction (MBIR) techniques was developed in this dissertation. The TRANSCAR electron penetration model creates a basis set of monoenergetic electron-beam eigenprofiles of auroral volume emission rate for the given location and ionospheric conditions. Each eigenprofile consists of nearly 200 broadband line spectra modulated by atmospheric attenuation, a bandstop filter, and imager quantum efficiency. The L-BFGS-B minimization routine, combined with a sub-pixel-registered electron-multiplying CCD video stream at order 10 ms cadence, yields estimates of electron differential number flux at the top of the ionosphere. Our automatic data curation algorithm reduces one terabyte/camera/day into accurate MBIR-processed estimates of IAW-driven electron precipitation microstructure. This computer-vision structured-aurora discrimination algorithm was developed using a multiscale dual-camera system observing a 175 km and a 14 km swath of sky simultaneously. This collective-behavior algorithm exploits the "swarm" behavior of aurora, detectable even as video SNR approaches zero.
A modified version of the algorithm is applied to topside ionospheric radar at Mars and broadcast FM passive radar. The fusion of data from coherent radar backscatter and optical data at order 10 ms cadence confirms and further quantifies the relation of strong Langmuir turbulence and streaming plasma upflows in the ionosphere with the finest spatiotemporal auroral dynamics associated with IAW acceleration. The software programs developed in this dissertation solve the century-old problem of automatically discriminating finely structured aurora from other forms and push the observational wave-particle science frontiers forward.
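The eigenprofile inversion described above can be sketched as a bounded least-squares problem: observed brightness is modeled as a linear combination of monoenergetic-beam eigenprofiles, and L-BFGS-B recovers the nonnegative flux coefficients. This is a toy sketch with synthetic values, not the TRANSCAR/MBIR pipeline itself:

```python
import numpy as np
from scipy.optimize import minimize

# Columns of A are synthetic stand-ins for TRANSCAR eigenprofiles
# (volume emission rate vs. altitude per monoenergetic beam); phi is the
# differential number flux to recover from observed brightness b.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(50, 8)))          # 50 altitude bins, 8 beam energies
phi_true = np.array([0, 0, 3.0, 1.0, 0, 0, 0.5, 0])
b = A @ phi_true                              # noiseless synthetic observation

def misfit(phi):
    r = A @ phi - b
    return 0.5 * r @ r                        # least-squares data misfit

# L-BFGS-B with lower bounds enforces the physical constraint phi >= 0.
res = minimize(misfit, x0=np.ones(8), method="L-BFGS-B",
               bounds=[(0, None)] * 8)
print(np.round(res.x, 3))                     # recovers phi_true to solver tolerance
```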

    Online Video Deblurring via Dynamic Temporal Blending Network

    State-of-the-art video deblurring methods are capable of removing non-uniform blur caused by unwanted camera shake and/or object motion in dynamic scenes. However, most existing methods are based on batch processing and thus need access to all recorded frames, rendering them computationally demanding and time-consuming, which limits their practical use. In contrast, we propose an online (sequential) video deblurring method based on a spatio-temporal recurrent network that allows for real-time performance. In particular, we introduce a novel architecture which extends the receptive field while keeping the overall size of the network small to enable fast execution. In doing so, our network is able to remove even large blur caused by strong camera shake and/or fast-moving objects. Furthermore, we propose a novel network layer that enforces temporal consistency between consecutive frames by dynamic temporal blending, which compares and adaptively (at test time) shares features obtained at different time steps. We show the superiority of the proposed method in an extensive experimental evaluation.
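The dynamic temporal blending idea can be sketched in a few lines: features from the previous time step are compared with the current ones, and a per-element gate, computed at test time, decides how much past information to reuse. The gating function below is illustrative, not the paper's exact layer:

```python
import numpy as np

def temporal_blend(feat_prev, feat_curr):
    """Blend features across time with a similarity-driven gate (toy sketch)."""
    sim = -np.abs(feat_curr - feat_prev)   # closer features -> larger gate
    w = 1.0 / (1.0 + np.exp(-sim))         # sigmoid gate in (0, 0.5]
    return w * feat_prev + (1.0 - w) * feat_curr

prev = np.zeros((2, 2))
curr = np.array([[0.0, 4.0], [0.0, 4.0]])
out = temporal_blend(prev, curr)
# Where features agree (0 vs 0) the gate is 0.5 and blending is even;
# where they differ (0 vs 4) the gate shrinks and the current frame dominates.
print(out)
```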