Laboratory and telescope demonstration of the TP3-WFS for the adaptive optics segment of AOLI
AOLI (Adaptive Optics Lucky Imager) is a state-of-the-art instrument that combines adaptive optics (AO) and lucky imaging (LI) with the objective of obtaining diffraction-limited images at visible wavelengths on mid- and large-sized ground-based telescopes. The key innovation of AOLI is the development and use of the new TP3-WFS (Two Pupil Plane Positions Wavefront Sensor). The TP3-WFS, working in the visible band, represents an advance over classical wavefront sensors such as the Shack-Hartmann WFS (SH-WFS) because it can theoretically use fainter natural reference stars, which would ultimately provide better sky coverage to AO instruments using this newer sensor. This paper describes the software, algorithms and procedures that enabled AOLI to become the first astronomical instrument performing real-time adaptive optics corrections in a telescope with this new type of WFS, including the first control-related results at the William Herschel Telescope (WHT).
This work was supported by the Spanish Ministry of Economy under the projects AYA2011-29024, ESP2014-56869-C2-2-P, ESP2015-69020-C2-2-R and DPI2015-66458-C2-2-R, by project 15345/PI/10 from the Fundación Séneca, by the Spanish Ministry of Education under the grant FPU12/05573, by project ST/K002368/1 from the Science and Technology Facilities Council and by ERDF funds from the European Commission. The results presented in this paper are based on observations made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. Special thanks go to Lara Monteagudo and Marcos Pellejero for their timely contributions
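For context on the comparison above, the classical SH-WFS reduces each camera frame to per-subaperture centroid shifts, which are proportional to the local wavefront slope. A minimal NumPy sketch of that centroiding step (function name and grid layout are illustrative, not AOLI's actual pipeline):

```python
import numpy as np

def subaperture_slopes(frame, n_sub):
    """Estimate local wavefront slopes from a Shack-Hartmann frame.

    frame: 2-D intensity image; n_sub: subapertures per axis.
    Returns an (n_sub, n_sub, 2) array of centroid offsets in pixels
    from each subaperture centre, proportional to the local tilt.
    """
    h, w = frame.shape
    sy, sx = h // n_sub, w // n_sub
    ys, xs = np.mgrid[0:sy, 0:sx]
    slopes = np.zeros((n_sub, n_sub, 2))
    for i in range(n_sub):
        for j in range(n_sub):
            cell = frame[i * sy:(i + 1) * sy, j * sx:(j + 1) * sx]
            total = cell.sum()
            if total > 0:
                # Centre-of-mass minus geometric centre of the cell.
                slopes[i, j, 0] = (ys * cell).sum() / total - (sy - 1) / 2
                slopes[i, j, 1] = (xs * cell).sum() / total - (sx - 1) / 2
    return slopes
```

A flat wavefront (uniform spots centred in each cell) yields zero slopes; a displaced spot yields an offset with the corresponding sign.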
Deep Cytometry: Deep learning with Real-time Inference in Cell Sorting and Flow Cytometry
Deep learning has achieved spectacular performance in image and speech
recognition and synthesis. It outperforms other machine learning algorithms in
problems where large amounts of data are available. In the area of measurement
technology, instruments based on the photonic time stretch have established
record real-time measurement throughput in spectroscopy, optical coherence
tomography, and imaging flow cytometry. These extreme-throughput instruments
generate approximately 1 Tbit/s of continuous measurement data and have led to
the discovery of rare phenomena in nonlinear and complex systems as well as new
types of biomedical instruments. Owing to the abundance of data they generate,
time-stretch instruments are a natural fit to deep learning classification.
Previously we had shown that high-throughput label-free cell classification
with high accuracy can be achieved through a combination of time-stretch
microscopy, image processing and feature extraction, followed by deep learning
for finding cancer cells in the blood. Such a technology holds promise for
early detection of primary cancer or metastasis. Here we describe a new deep
learning pipeline, which entirely avoids the slow and computationally costly
signal processing and feature extraction steps by a convolutional neural
network that directly operates on the measured signals. The improvement in
computational efficiency enables low-latency inference and makes this pipeline
suitable for cell sorting via deep learning. Our neural network takes only a
few milliseconds to classify a cell, fast enough to provide a decision to
a cell sorter for real-time separation of individual target cells. We
demonstrate the applicability of our new method in the classification of OT-II
white blood cells and SW-480 epithelial cancer cells with more than 95%
accuracy in a label-free fashion
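The core idea above is a convolutional network applied directly to the digitized waveform, skipping image reconstruction and feature extraction. A toy NumPy sketch of that forward pass, with random untrained parameters purely for illustration (the actual architecture and training of the system are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=1):
    """Valid 1-D convolution of signal x with a bank of kernels, plus ReLU."""
    k = kernels.shape[1]
    n_out = (x.shape[0] - k) // stride + 1
    out = np.empty((kernels.shape[0], n_out))
    for c, w in enumerate(kernels):
        for t in range(n_out):
            out[c, t] = np.dot(x[t * stride:t * stride + k], w)
    return np.maximum(out, 0.0)

def classify(signal, kernels, weights):
    """Tiny CNN head: conv -> global average pool -> linear -> class scores."""
    features = conv1d(signal, kernels).mean(axis=1)
    return features @ weights

# Illustrative random parameters; a real pipeline would learn these
# from labelled time-stretch waveforms.
kernels = rng.standard_normal((4, 16))   # 4 filters of length 16
weights = rng.standard_normal((4, 2))    # 2 output classes
scores = classify(rng.standard_normal(1024), kernels, weights)
```

Because the convolution operates on the raw signal, the per-cell cost is a fixed number of multiply-adds, which is what makes millisecond-scale inference plausible.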
Hardware-accelerated interactive data visualization for neuroscience in Python.
Large datasets are becoming more and more common in science, particularly in neuroscience where experimental techniques are rapidly evolving. Obtaining interpretable results from raw data can sometimes be done automatically; however, there are numerous situations where there is a need, at all processing stages, to visualize the data in an interactive way. This enables the scientist to gain intuition, discover unexpected patterns, and find guidance about subsequent analysis steps. Existing visualization tools mostly focus on static publication-quality figures and do not support interactive visualization of large datasets. While working on Python software for visualization of neurophysiological data, we developed techniques to leverage the computational power of modern graphics cards for high-performance interactive data visualization. We were able to achieve very high performance despite the interpreted and dynamic nature of Python, by using state-of-the-art, fast libraries such as NumPy, PyOpenGL, and PyTables. We present applications of these methods to visualization of neurophysiological data. We believe our tools will be useful in a broad range of domains, in neuroscience and beyond, where there is an increasing need for scalable and fast interactive visualization
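One standard technique for keeping very long traces interactive, sketched here as an assumption rather than the exact method used in the software described, is min/max decimation: each screen-pixel column is reduced to its extrema before vertices are uploaded to the GPU, preserving the visual envelope of the signal:

```python
import numpy as np

def minmax_decimate(signal, n_bins):
    """Reduce a long 1-D signal to n_bins (min, max) pairs for plotting.

    Drawing only per-column extrema keeps the visible envelope of the
    trace while uploading far fewer vertices to the graphics card.
    """
    m = (len(signal) // n_bins) * n_bins   # drop the ragged tail
    chunks = signal[:m].reshape(n_bins, -1)
    return np.column_stack([chunks.min(axis=1), chunks.max(axis=1)])
```

A million-sample trace drawn on a 1000-pixel-wide axis then needs only 2000 vertices per refresh instead of a million.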
Optical coherence tomography measurements of biological fluid flows with picolitre spatial localization
This paper was presented at the 4th Micro and Nano Flows Conference (MNF2014), which was held at University College, London, UK. The conference was organised by Brunel University and supported by the Italian Union of Thermofluiddynamics, IPEM, the Process Intensification Network, the Institution of Mechanical Engineers, the Heat Transfer Society, HEXAG - the Heat Exchange Action Group, and the Energy Institute, ASME Press, LCN London Centre for Nanotechnology, UCL University College London, UCL Engineering, the International NanoScience Community, www.nanopaprika.eu.
Interest in studying the human and animal microcirculation has burgeoned in recent years. In part
this has been driven by recent advances in volumetric microscopy modalities, which allow the study of the
3-D morphology of the microcirculation without the limitations of 2-D intra-vital microscopy. In this paper
we highlight the power of optical coherence tomography (OCT) to image the normal and pathological
microcirculation with picolitre voxel sizes. Both Doppler and speckle-variance methods are employed to
characterize complex rheological flows both in-vitro and in-vivo. GPU accelerated image registration
methods are demonstrated in order to mitigate problems of bulk tissue motion in methods based on speckle
decorrelation. In-vivo images of the human nailfold microcirculation are shown
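A minimal sketch of the speckle-variance idea, with a simple integer-pixel registration step standing in for the GPU-accelerated registration mentioned above (the function names and the FFT cross-correlation approach are illustrative assumptions, not the authors' exact method):

```python
import numpy as np

def register_shift(ref, frame):
    """Integer-pixel (axial, lateral) shift that best aligns frame to ref,
    found as the peak of the FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dz, dx = np.unravel_index(np.argmax(corr), corr.shape)
    nz, nx = ref.shape
    if dz > nz // 2: dz -= nz   # map wrapped indices to signed shifts
    if dx > nx // 2: dx -= nx
    return dz, dx

def speckle_variance(bscans):
    """Variance across registered repeat B-scans of one position.

    Static tissue stays correlated frame-to-frame and appears dark;
    flowing scatterers decorrelate the speckle and appear bright.
    Registration first removes bulk tissue motion that would otherwise
    masquerade as flow.
    """
    ref = bscans[0]
    aligned = [ref]
    for f in bscans[1:]:
        dz, dx = register_shift(ref, f)
        aligned.append(np.roll(f, (dz, dx), axis=(0, 1)))
    return np.stack(aligned).var(axis=0)
```

With perfect registration, a rigidly shifted copy of a frame contributes zero variance, which is exactly the bulk-motion artefact this step suppresses.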
Adaptive Real Time Imaging Synthesis Telescopes
The digital revolution is transforming astronomy from a data-starved to a
data-submerged science. Instruments such as the Atacama Large Millimeter Array
(ALMA), the Large Synoptic Survey Telescope (LSST), and the Square Kilometer
Array (SKA) will measure their accumulated data in petabytes. The capacity to
produce enormous volumes of data must be matched with the computing power to
process that data and produce meaningful results. In addition to handling huge
data rates, we need adaptive calibration and beamforming to handle atmospheric
fluctuations and radio frequency interference, and to provide a user
environment which makes the full power of large telescope arrays accessible to
both expert and non-expert users. Delayed calibration and analysis limit the
science which can be done. To make the best use of both telescope and human
resources we must reduce the burden of data reduction.
Our instrumentation comprises a flexible correlator, beamformer and
imager with digital signal processing closely coupled to a computing cluster.
This instrumentation will be highly accessible to scientists, engineers, and
students for research and development of real-time processing algorithms, and
will tap into the pool of talented and innovative students and visiting
scientists from engineering, computing, and astronomy backgrounds.
Adaptive real-time imaging will transform radio astronomy by providing
real-time feedback to observers. Calibration of the data is made in close to
real time using a model of the sky brightness distribution. The derived
calibration parameters are fed back into the imagers and beam formers. The
regions imaged are used to update and improve the a priori model, which becomes
the final calibrated image by the time the observations are complete
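At its simplest, the closed-loop calibration described above means repeatedly fitting instrumental parameters against a sky model. For per-antenna complex gains with V_ij ≈ g_i · conj(g_j) · M_ij, an alternating least-squares solver (in the spirit of StEFCal-style algorithms; a sketch under that assumption, not the instrument's actual pipeline) looks like:

```python
import numpy as np

def solve_gains(vis, model, n_iter=50):
    """Per-antenna complex gains g minimising |V_ij - g_i conj(g_j) M_ij|^2.

    vis:   measured visibility matrix (n_ant x n_ant, complex)
    model: model visibilities from the sky brightness distribution
    Each sweep holds the conjugated gains fixed, making the problem
    linear in g; damping the update stabilises convergence.
    """
    n = vis.shape[0]
    g = np.ones(n, dtype=complex)
    for _ in range(n_iter):
        z = model * g[None, :].conj()            # z_ij = M_ij conj(g_j)
        g_new = (vis * z.conj()).sum(axis=1) / (np.abs(z) ** 2).sum(axis=1)
        g = 0.5 * (g + g_new)                    # damped update
    return g
```

The solved gains would then be fed back into the beamformers and imagers, closing the real-time loop the text describes.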
Inviwo -- A Visualization System with Usage Abstraction Levels
The complexity of today's visualization applications demands specific
visualization systems tailored for the development of these applications.
Frequently, such systems utilize levels of abstraction to improve the
application development process, for instance by providing a data flow network
editor. Unfortunately, these abstractions result in several issues, which need
to be circumvented through an abstraction-centered system design. Often, a high
level of abstraction hides low-level details, making it difficult to directly
access the underlying computing platform, although such access would be
important for achieving optimal performance. Therefore, we propose a layer structure
developed for modern and sustainable visualization systems allowing developers
to interact with all contained abstraction levels. We refer to these interaction
capabilities as usage abstraction levels, since we target application
developers with various levels of experience. We formulate the requirements for
such a system, derive the desired architecture, and present how the concepts
have been realized, by way of example, within the Inviwo visualization system.
Furthermore, we address several specific challenges that arise during the
realization of such a layered architecture, such as communication between
different computing platforms, performance centered encapsulation, as well as
layer-independent development by supporting cross layer documentation and
debugging capabilities
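At the highest usage abstraction level, a data-flow network editor boils down to evaluating a directed graph of processors. A minimal, hypothetical sketch of such a network (illustrative only, not Inviwo's API):

```python
class Processor:
    """Minimal node in a data-flow network: named inputs, one output."""

    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

    def evaluate(self, cache):
        # Memoise results so shared upstream processors run only once
        # per network evaluation, as a real scheduler would.
        if self.name not in cache:
            args = [p.evaluate(cache) for p in self.inputs]
            cache[self.name] = self.fn(*args)
        return cache[self.name]

# Illustrative pipeline a network editor might wire up:
# data source -> filter -> renderer.
source   = Processor("source", lambda: list(range(10)))
filtered = Processor("filter", lambda xs: [x for x in xs if x % 2 == 0],
                     [source])
render   = Processor("render", lambda xs: f"rendered {len(xs)} items",
                     [filtered])

print(render.evaluate({}))  # rendered 5 items
```

Lower abstraction layers would then replace the lambdas with processors that talk directly to the GPU, which is exactly the cross-layer access the paper argues a system must keep open.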