
    Ultrafast optical ranging using microresonator soliton frequency combs

    Light detection and ranging (LIDAR) is critical to many fields in science and industry. Over the last decade, optical frequency combs were shown to offer unique advantages in optical ranging, in particular when it comes to fast distance acquisition with high accuracy. However, current comb-based concepts are not suited for emerging high-volume applications such as drone navigation or autonomous driving. These applications critically rely on LIDAR systems that are not only accurate and fast, but also compact, robust, and amenable to cost-efficient mass-production. Here we show that integrated dissipative Kerr-soliton (DKS) comb sources provide a route to chip-scale LIDAR systems that combine sub-wavelength accuracy and unprecedented acquisition speed with the opportunity to exploit advanced photonic integration concepts for wafer-scale mass production. In our experiments, we use a pair of free-running DKS combs, each providing more than 100 carriers for massively parallel synthetic-wavelength interferometry. We demonstrate dual-comb distance measurements with record-low Allan deviations down to 12 nm at averaging times of 14 μs as well as ultrafast ranging at unprecedented measurement rates of up to 100 MHz. We prove the viability of our technique by sampling the naturally scattering surface of air-gun projectiles flying at 150 m/s (Mach 0.47). Combining integrated dual-comb LIDAR engines with chip-scale nanophotonic phased arrays, the approach could allow widespread use of compact ultrafast ranging systems in emerging mass applications.
    Comment: 9 pages, 3 figures, Supplementary information is attached in 'Ancillary files'
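    The quoted precision is an Allan deviation as a function of averaging time. As a point of reference only, here is a minimal numpy sketch of the standard non-overlapping Allan deviation applied to a simulated distance record; the sampling interval, noise level, and all variable names are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def allan_deviation(x, dt, m):
    """Non-overlapping Allan deviation of samples x (spacing dt) at
    averaging time tau = m * dt. Returns (tau, adev)."""
    n = len(x) // m                                   # number of full averaging windows
    means = x[:n * m].reshape(n, m).mean(axis=1)      # window averages
    diffs = np.diff(means)                            # successive window differences
    adev = np.sqrt(0.5 * np.mean(diffs ** 2))
    return m * dt, adev

# Hypothetical example: distance samples at an assumed 100 MHz update rate
# with 50 nm of white measurement noise around a 0.25 m target distance.
rng = np.random.default_rng(0)
dist = 0.25 + 50e-9 * rng.standard_normal(200_000)    # metres
for m in (1, 10, 100, 1000):
    tau, adev = allan_deviation(dist, dt=10e-9, m=m)
    print(f"tau = {tau * 1e6:8.2f} us   Allan deviation = {adev * 1e9:6.1f} nm")
```

    For purely white measurement noise the deviation falls roughly as the square root of the averaging time, which is the qualitative trend such a plot is typically used to check.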

    Dwarfs on Accelerators: Enhancing OpenCL Benchmarking for Heterogeneous Computing Architectures

    For reasons of both performance and energy efficiency, high-performance computing (HPC) hardware is becoming increasingly heterogeneous. The OpenCL framework supports portable programming across a wide range of computing devices and is gaining influence in programming next-generation accelerators. Characterizing the performance of these devices across a range of applications requires a diverse, portable and configurable benchmark suite, and OpenCL is an attractive programming model for this purpose. We present an extended and enhanced version of the OpenDwarfs OpenCL benchmark suite, with a strong focus on the robustness of applications, the curation of additional benchmarks, and an increased emphasis on correctness of results and choice of problem size. Preliminary results and analysis are reported for eight benchmark codes on a diverse set of architectures -- three Intel CPUs, five Nvidia GPUs, six AMD GPUs and a Xeon Phi.
    Comment: 10 pages, 5 figures
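    As context for readers unfamiliar with OpenCL's device portability, the sketch below runs a trivially small kernel through PyOpenCL on whichever device is available; it is an assumed minimal example, not a benchmark from the OpenDwarfs suite.

```python
import time
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()        # may prompt to choose a platform/device
queue = cl.CommandQueue(ctx)

src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""
prg = cl.Program(ctx, src).build()

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

t0 = time.perf_counter()
evt = prg.vadd(queue, (n,), None, a_buf, b_buf, c_buf)
evt.wait()                            # block until the kernel has finished
elapsed = time.perf_counter() - t0

c = np.empty_like(a)
cl.enqueue_copy(queue, c, c_buf)
assert np.allclose(c, a + b)
print(f"{ctx.devices[0].name}: {elapsed * 1e3:.2f} ms for {n} elements")
```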

    Standardised convolutional filtering for radiomics

    The Image Biomarker Standardisation Initiative (IBSI) aims to improve reproducibility of radiomics studies by standardising the computational process of extracting image biomarkers (features) from images. We have previously established reference values for 169 commonly used features, created a standard radiomics image processing scheme, and developed reporting guidelines for radiomic studies. However, several aspects are not standardised. Here we present a preliminary version of a reference manual on the use of convolutional image filters in radiomics. Filters, such as wavelets or Laplacian of Gaussian filters, play an important part in emphasising specific image characteristics such as edges and blobs. Features derived from filter response maps have been found to be poorly reproducible. This reference manual forms the basis of ongoing work on standardising convolutional filters in radiomics, and will be updated as this work progresses.
    Comment: 62 pages. For additional information see https://theibsi.github.io
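    As a concrete illustration of the filter class the manual covers, here is a minimal scipy sketch that computes Laplacian-of-Gaussian response maps and a toy feature from them; the synthetic image, the sigma values, and the "energy" feature are assumptions for illustration and are not IBSI reference definitions.

```python
import numpy as np
from scipy import ndimage

# Hypothetical 2D "image": a dark background with two Gaussian blobs.
y, x = np.mgrid[0:128, 0:128]
image = (np.exp(-((x - 40) ** 2 + (y - 40) ** 2) / 50.0)
         + np.exp(-((x - 90) ** 2 + (y - 80) ** 2) / 200.0))

# Laplacian-of-Gaussian response maps: the response is strongest at blob
# centres whose scale matches the chosen sigma.
log_small = ndimage.gaussian_laplace(image, sigma=2.0)
log_large = ndimage.gaussian_laplace(image, sigma=6.0)

# A simple feature derived from each response map, e.g. its total energy.
print("LoG energy (sigma=2):", float(np.sum(log_small ** 2)))
print("LoG energy (sigma=6):", float(np.sum(log_large ** 2)))
```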

    nbodykit: an open-source, massively parallel toolkit for large-scale structure

    We present nbodykit, an open-source, massively parallel Python toolkit for analyzing large-scale structure (LSS) data. Using Python bindings of the Message Passing Interface (MPI), we provide parallel implementations of many commonly used algorithms in LSS. nbodykit is both an interactive and scalable piece of scientific software, performing well in a supercomputing environment while still taking advantage of the interactive tools provided by the Python ecosystem. Existing functionality includes estimators of the power spectrum, 2- and 3-point correlation functions, a Friends-of-Friends grouping algorithm, mock catalog creation via the halo occupation distribution technique, and approximate N-body simulations via the FastPM scheme. The package also provides a set of distributed data containers, insulated from the algorithms themselves, that enable nbodykit to provide a unified treatment of both simulation and observational data sets. nbodykit can be easily deployed in a high performance computing environment, overcoming some of the traditional difficulties of using Python on supercomputers. We provide performance benchmarks illustrating the scalability of the software. The modular, component-based approach of nbodykit allows researchers to easily build complex applications using its tools. The package is extensively documented at http://nbodykit.readthedocs.io, which also includes an interactive set of example recipes for new users to explore. As open-source software, we hope nbodykit provides a common framework for the community to use and develop in confronting the analysis challenges of future LSS surveys.
    Comment: 18 pages, 7 figures. Feedback very welcome. Code available at https://github.com/bccp/nbodykit and for documentation, see http://nbodykit.readthedocs.io
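    Independently of nbodykit's own API, the power-spectrum estimator mentioned above can be illustrated with a bare-bones numpy sketch of a spherically averaged P(k) on a gridded overdensity field; the FFT normalisation, binning scheme, grid size, and box size below are illustrative assumptions rather than nbodykit's conventions.

```python
import numpy as np

def simple_power_spectrum(delta, box_size, n_bins=20):
    """Spherically averaged power spectrum P(k) of a gridded overdensity
    field delta (shape (N, N, N)) in a periodic box of side box_size."""
    n = delta.shape[0]
    cell = box_size / n
    delta_k = np.fft.rfftn(delta) * cell ** 3         # approximate continuous FT
    power = np.abs(delta_k) ** 2 / box_size ** 3      # simple P(k) estimator

    k = 2 * np.pi * np.fft.fftfreq(n, d=cell)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=cell)
    kmag = np.sqrt(k[:, None, None] ** 2 + k[None, :, None] ** 2
                   + kz[None, None, :] ** 2)

    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
    idx = np.digitize(kmag.ravel(), bins)
    pk = []
    for i in range(1, n_bins + 1):                    # average modes in each k shell
        vals = power.ravel()[idx == i]
        pk.append(vals.mean() if vals.size else np.nan)
    return 0.5 * (bins[1:] + bins[:-1]), np.array(pk)

# Hypothetical use: a Gaussian random field on a 64^3 grid in a 500-unit box.
delta = np.random.standard_normal((64, 64, 64))
k_centres, pk = simple_power_spectrum(delta, box_size=500.0)
```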

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade.
    Comment: Major revision, to appear in SIAM Review

    SpectroMap: Peak detection algorithm for audio fingerprinting

    We present SpectroMap, an open-source GitHub repository for audio fingerprinting written in the Python programming language. It is composed of a peak-search algorithm that extracts topological prominences from a spectrogram via time-frequency bands. In this paper, we describe how the algorithm works and evaluate its effectiveness through two experimental applications: a high-quality urban sound dataset and environmental audio recordings.
    Comment: 7 pages, 3 figures
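    Peak picking on a time-frequency representation is the core operation described above. The sketch below is a generic local-maximum approach built from scipy primitives, not SpectroMap's actual implementation; the window length, neighbourhood size, threshold, and function names are assumptions.

```python
import numpy as np
from scipy import ndimage, signal

def spectrogram_peaks(audio, sample_rate, neighbourhood=(15, 15), min_db=-40.0):
    """Return (times, freqs) of local maxima in a log-magnitude spectrogram.
    A peak is a time-frequency bin that equals the maximum of its local
    neighbourhood and exceeds an absolute floor in dB."""
    freqs, times, sxx = signal.spectrogram(audio, fs=sample_rate, nperseg=1024)
    log_sxx = 10.0 * np.log10(sxx + 1e-12)            # dB scale, avoid log(0)
    local_max = ndimage.maximum_filter(log_sxx, size=neighbourhood)
    peaks = (log_sxx == local_max) & (log_sxx > min_db)
    f_idx, t_idx = np.nonzero(peaks)
    return times[t_idx], freqs[f_idx]

# Hypothetical use: two tones plus faint noise, sampled at 22.05 kHz.
sr = 22_050
t = np.arange(sr * 2) / sr
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1870 * t)
audio += 0.01 * np.random.standard_normal(audio.size)
pk_t, pk_f = spectrogram_peaks(audio, sr)
print(f"{pk_t.size} spectral peaks found")
```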