
    Robust Optimisation Monte Carlo

    This paper addresses Bayesian inference for parametric statistical models defined by a stochastic simulator that specifies how data are generated. Exact sampling is then possible, but evaluating the likelihood function is typically prohibitively expensive. Approximate Bayesian Computation (ABC) is a framework for performing approximate inference in such situations. While basic ABC algorithms are widely applicable, they are notoriously slow, and much research has focused on increasing their efficiency. Optimisation Monte Carlo (OMC) has recently been proposed as an efficient and embarrassingly parallel method that leverages optimisation to accelerate the inference. In this paper, we demonstrate an important, previously unrecognised failure mode of OMC: it generates strongly overconfident approximations by collapsing regions of similar or near-constant likelihood into a single point. We propose an efficient, robust generalisation of OMC that corrects this. It makes fewer assumptions, retains the main benefits of OMC, and can be performed either as post-processing to OMC or as a stand-alone computation. We demonstrate the effectiveness of the proposed Robust OMC on toy examples and on inverse-graphics tasks where we perform Bayesian inference with a complex image renderer.
    Comment: 8 pages + 6 page appendix; v2: made clarifications, added a second possible algorithm implementation and its results; v3: small clarifications, to be published in AISTATS 202
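    The ABC setting described above can be illustrated with a minimal rejection sampler. This is a generic sketch of basic ABC, not the paper's OMC or Robust OMC algorithms; the simulator, prior, distance, and tolerance used here are toy stand-ins.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_samples):
    """Basic ABC rejection sampling: keep parameters whose simulated
    data fall within distance eps of the observed data."""
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sample()               # draw a parameter from the prior
        data = simulate(theta)               # run the stochastic simulator
        if distance(data, observed) <= eps:  # accept if the simulation is close
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a unit-variance Gaussian from a sample mean.
random.seed(0)
observed_mean = 2.0
posterior = abc_rejection(
    observed=observed_mean,
    simulate=lambda th: sum(random.gauss(th, 1.0) for _ in range(20)) / 20,
    prior_sample=lambda: random.uniform(-5, 5),
    distance=lambda a, b: abs(a - b),
    eps=0.2,
    n_samples=200,
)
estimate = sum(posterior) / len(posterior)
```

    The slowness the abstract mentions is visible here: most prior draws are rejected, and every trial costs a full simulator run, which motivates optimisation-based accelerations such as OMC.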

    Increasing the Fine Structure Visibility of the Hinode SOT Ca II H Filtergrams

    We present the improved so-called Madmax (OMC) operator, which selects maxima of convexities computed in multiple directions around each pixel; it has been rewritten in MatLab and shown to be very efficient for pattern recognition. The aim of the algorithm is to trace bright hair-like features (e.g. thin chromospheric jets or spicules) in solar observations polluted by noise of different origins. This popular spatial operator uses the second derivative in the optimally selected direction, namely the direction in which its absolute value is maximal. Accordingly, it relies on the positivity of the resulting intensity signal affected by superposed noise. The results are illustrated using an artificially generated test image as well as real SOT (Hinode) images, allowing the reader to choose the sensitive parameters for improving the visibility of images.
    Comment: 12 pages, 3 figures, submitted in Solar Physic
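    The core idea, a second derivative evaluated in several directions with the maximal-magnitude direction retained per pixel, can be sketched as follows. This is an illustrative NumPy reimplementation of the principle, not the authors' MatLab code; the direction set and sign convention are our assumptions.

```python
import numpy as np

def madmax_like(img, offsets=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Madmax-style operator sketch: at each pixel, compute the discrete
    second difference along several directions and keep the value from
    the direction with the largest absolute convexity."""
    img = np.asarray(img, dtype=float)
    best = np.zeros_like(img)
    best_abs = np.zeros_like(img)
    for dy, dx in offsets:
        fwd = np.roll(img, (-dy, -dx), axis=(0, 1))
        bwd = np.roll(img, (dy, dx), axis=(0, 1))
        second = fwd - 2.0 * img + bwd        # discrete second derivative
        mask = np.abs(second) > best_abs
        best = np.where(mask, second, best)
        best_abs = np.maximum(best_abs, np.abs(second))
    # Bright hair-like ridges give a strongly negative second derivative,
    # so the negated response serves as an enhancement map.
    return -best

# A single bright vertical line on a dark background.
test_img = np.zeros((7, 7))
test_img[:, 3] = 1.0
resp = madmax_like(test_img)
```

    On the test image the response is strongly positive on the thin bright line and negative just beside it, which is what makes hair-like features stand out against noise.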

    Efficient parallel processing with optical interconnections

    With the advances in VLSI technology, it is now possible to build chips which can each contain thousands of processors. The efficiency of such chips in executing parallel algorithms heavily depends on the interconnection topology of the processors. It is not possible to build a fully interconnected network of processors with constant fan-in/fan-out using electrical interconnections. Free-space optics is a remedy to this limitation. Qualities exclusive to the optical medium are its ability to be directed for propagation in free space and the property that optical channels can cross in space without any interference. In this thesis, we present an electro-optical interconnected architecture named Optical Reconfigurable Mesh (ORM). It is based on an existing optical model of computation. There are two layers in the architecture. The processing layer is a reconfigurable mesh and the deflecting layer contains optical devices to deflect light beams. ORM provides three types of communication mechanisms. The first is for arbitrary planar connections among sets of locally connected processors using the reconfigurable mesh. The second is for arbitrary connections among N of the processors using the electrical buses on the processing layer and N² fixed passive deflecting units on the deflection layer. The third is for arbitrary connections among any of the N² processors using the N² mechanically reconfigurable deflectors in the deflection layer. The third type of communication mechanism is significantly slower than the other two. Therefore, it is desirable to avoid reconfiguring this type of communication during the execution of an algorithm. Instead, the optical reconfiguration can be done before the execution of each algorithm begins. Determining a configuration that is suitable for the entire execution of a task is studied in this thesis. The basic data movements for each of the mechanisms are studied.
    Finally, to show the power of ORM, we use all three types of communication mechanisms in the first O(log N) time algorithm for finding the convex hulls of all figures in an N x N binary image presented in this thesis.
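    The configure-before-execute discipline for the third (mechanically reconfigurable) mechanism can be modelled in a few lines. This is a toy software model we made up to illustrate the idea; the class and method names are not from the thesis.

```python
class DeflectionLayer:
    """Toy model of ORM's third communication mechanism: N*N mechanically
    reconfigurable deflectors, set once before an algorithm runs and then
    treated as fixed during execution."""

    def __init__(self, n):
        self.n = n
        self.route = {}       # source processor -> destination processor
        self.frozen = False   # becomes True once the algorithm starts

    def configure(self, mapping):
        # Mechanical reconfiguration is slow, so it is only allowed
        # before execution begins.
        if self.frozen:
            raise RuntimeError("deflectors cannot be moved mid-execution")
        self.route = dict(mapping)

    def start(self):
        self.frozen = True

    def send(self, src, message, mailboxes):
        # A beam from src is deflected to its preconfigured destination.
        mailboxes[self.route[src]].append(message)

# Configure a layer for 2x2 = 4 processors before execution, then communicate.
layer = DeflectionLayer(2)
layer.configure({0: 3, 1: 2, 2: 1, 3: 0})
layer.start()
boxes = {i: [] for i in range(4)}
layer.send(0, "hello", boxes)
```

    Choosing the single mapping that best serves a whole task, rather than paying for mechanical reconfiguration mid-run, is the optimisation problem the thesis studies.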

    General method of synthesis by PLIC/FPGA digital devices to perform discrete orthogonal transformations

    A general method is proposed for synthesizing digital devices that perform discrete orthogonal transformations (DOT) on programmable logic integrated circuits (PLIC) of the FPGA class. The basic and "slowest" operation in performing a DOT is multiplication by a constant factor (OMC). To perform DOTs, digital devices are implemented using identical IP cores that realize the OMC. According to the proposed method, the OMC is defined on the basis of a mapping over the elements of a Galois field. Due to the distributed computation of systems of nonlinear polynomial functions defined over the Galois field in the PLIC/FPGA architecture, a reduction in the time complexity of performing the OMC is achieved. Each nonlinear polynomial function, like the OMC, is realized from the same type of IP cores according to one of the structural schemes, in accordance with the requirements for the device performing the DOT. The use of IP cores significantly reduces the cost of designing a device that implements a DOT in the PLIC/FPGA architecture.
    Keywords: digital signal processing, discrete orthogonal transformations, distributed computing, nonlinear polynomial functions, Galois fields, FPGAs, digital device
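    The field arithmetic underlying such a constant multiplier can be sketched in software. This is not the paper's synthesis method; it only shows multiplication in GF(2^8) and an OMC realised as a precomputed lookup table, with the reduction polynomial (x^8 + x^4 + x^3 + x + 1, the choice used by AES) picked by us for illustration.

```python
def gf256_mul(a, b, poly=0x11B):
    """Carry-less multiplication in GF(2^8), reduced modulo a primitive
    polynomial (here x^8 + x^4 + x^3 + x + 1)."""
    result = 0
    while b:
        if b & 1:
            result ^= a       # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & 0x100:         # reduce whenever the degree reaches 8
            a ^= poly
    return result

def make_omc_table(constant):
    """An OMC realised as a lookup table: multiplication of every field
    element by a fixed constant, precomputed once."""
    return [gf256_mul(x, constant) for x in range(256)]

table = make_omc_table(0x03)  # multiply-by-3, as used in AES MixColumns
```

    In hardware, such a fixed mapping of 8-bit inputs to 8-bit outputs is exactly the kind of function that one small IP core (or block RAM) can implement, which is why the constant multiplier is a natural unit of reuse.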

    Quality assurance of CT scanning for industrial applications


    Development of a stereovision-based technique to measure the spread patterns of granular fertilizer spreaders

    Centrifugal fertilizer spreaders are by far the most commonly used granular fertilizer spreader type in Europe. Their spread pattern, however, is error-prone, potentially leading to an undesired distribution of particles in the field and to losses out of the field, often caused by poor calibration of the spreader for the specific fertilizer used. Due to the large environmental impact of fertilizer use, it is important to optimize the spreading process and minimize these errors. Spreader calibrations can be performed by using collection trays to determine the (field) spread pattern, but this is very time-consuming and expensive for the farmer and hence not common practice. Therefore, we developed an innovative multi-camera system to predict the spread pattern in a fast and accurate way, independent of the spreader configuration. Using high-speed stereovision, ejection parameters of particles leaving the spreader vanes were determined relative to a coordinate system associated with the spreader. The landing positions and subsequent spread patterns were determined using a ballistic model incorporating the effect of tractor motion and wind. Experiments were conducted with a commercial spreader and showed a high repeatability. The results were transformed to one spatial dimension to enable comparison with transverse spread patterns determined in the field, and showed similar results.
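    The chain from measured ejection parameters to landing positions can be sketched with a minimal ballistic model. This is our simplification, not the authors' model: drag is neglected, and tractor motion and wind are folded in as constant horizontal velocity offsets.

```python
import math

def landing_position(pos, vel, tractor_vel=(0.0, 0.0),
                     wind_drift=(0.0, 0.0), g=9.81):
    """Minimal ballistic sketch: a particle leaves the vane at height
    pos[2] with velocity vel; tractor motion and wind are added as
    horizontal drift velocities, and air drag is ignored."""
    x0, y0, z0 = pos
    vx = vel[0] + tractor_vel[0] + wind_drift[0]
    vy = vel[1] + tractor_vel[1] + wind_drift[1]
    vz = vel[2]
    # Flight time: positive root of z0 + vz*t - g*t^2/2 = 0.
    t = (vz + math.sqrt(vz * vz + 2.0 * g * z0)) / g
    return (x0 + vx * t, y0 + vy * t)

# A particle ejected horizontally at 20 m/s from 0.9 m height, no wind.
x, y = landing_position((0.0, 0.0, 0.9), (20.0, 0.0, 0.0))
```

    Summing many such landing points over measured particle ensembles, then binning them on a grid, yields the predicted spread pattern that the paper compares against collection-tray measurements.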

    Data Acquisition, Management, and Analysis in Support of the Audiology and Hearing Conservation and the Orbital Debris Program Office

    My internship at Johnson Space Center, Houston TX consisted of working simultaneously in the Space Life Science Directorate (Clinical Services Branch, SD3) in Audiology and Hearing Conservation and in the Astromaterials Research and Exploration Sciences Directorate in the Orbital Debris Program Office (KX). The purpose of the project supporting the Audiology and Hearing Conservation Clinic (AuHCon) was to organize and analyze auditory test data obtained from tests conducted onboard the International Space Station (ISS) and in Johnson Space Center's clinic. Astronauts undergo a special type of auditory test called an On-Orbit Hearing Assessment (OOHA), which monitors hearing function while crewmembers are exposed to noise and microgravity during long-duration spaceflight. Data needed to be formatted to assist the audiologist in studying, analyzing, and reporting OOHA results from all ISS missions, with comparison to conventional preflight and post-flight audiometric test results of crewmembers. Orbital debris is the #1 threat to manned spacecraft; NASA is therefore investing in different measurement techniques to acquire information on orbital debris. These measurements are taken with telescopes in different parts of the world to acquire brightness variations over time, from which size, rotation rates, and material information can be determined for orbital debris. Currently, many assumptions are made to resolve size and material from observed brightness; therefore a laboratory (the Optical Measurement Center) is used to simulate the space environment and acquire information on known targets best suited to model the orbital debris population. In the Orbital Debris Program Office (ODPO), telescopic data were acquired and analyzed to better assess the orbital debris population.
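    The role of the albedo assumption in converting observed brightness to size can be illustrated with the standard asteroid-style relation D = 1329 km / sqrt(p) * 10^(-H/5), where H is absolute magnitude and p the assumed geometric albedo. This generic relation is shown only to illustrate why assumptions matter; it is not the ODPO's actual debris photometry model.

```python
def diameter_from_abs_magnitude(H, albedo=0.175):
    """Standard asteroid diameter-albedo relation: converts absolute
    magnitude H to a diameter in km, given an assumed geometric albedo."""
    return 1329.0 / (albedo ** 0.5) * 10.0 ** (-H / 5.0)

# The same observed brightness implies very different sizes depending on
# the assumed albedo: halving the albedo inflates the diameter by sqrt(2).
d_bright_surface = diameter_from_abs_magnitude(15.0, albedo=0.2)
d_dark_surface = diameter_from_abs_magnitude(15.0, albedo=0.1)
```

    This sensitivity to assumed surface properties is precisely what motivates laboratory measurements of known targets in the Optical Measurement Center.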