
    The Quest for the Most Spherical Bubble

    We describe a recently realized experiment producing the most spherical cavitation bubbles to date. The bubbles grow inside a liquid from a point plasma generated by a nanosecond laser pulse. Unlike in previous studies, the laser is focused by a parabolic mirror, resulting in a plasma of unprecedented symmetry. The ensuing bubbles are sufficiently spherical that the hydrostatic pressure gradient caused by gravity becomes the dominant source of asymmetry in the collapse and rebound of the cavitation bubbles. To avoid this natural source of asymmetry, the whole experiment is therefore performed in microgravity conditions (ESA, 53rd and 56th parabolic flight campaigns). Cavitation bubbles were observed in microgravity (~0g), where their collapse and rebound remain spherical, and in normal gravity (1g) to hyper-gravity (1.8g), where a gravity-driven jet appears. Here, we describe the experimental setup and technical results, and give an overview of the science data. A selection of high-quality shadowgraphy movies and time-resolved pressure data is published online.
    Comment: 18 pages, 14 figures, 1 table
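
    For scale, the collapse of a laser-induced bubble like those described above is commonly estimated with the classical Rayleigh collapse time, τ ≈ 0.915 R₀√(ρ/Δp). A minimal sketch; the formula and the water-at-atmospheric-pressure numbers are standard textbook values, not taken from this abstract:

```python
import math

def rayleigh_collapse_time(r0, rho=998.0, dp=101325.0):
    """Rayleigh collapse time of an empty spherical bubble of initial
    radius r0 [m] in a liquid of density rho [kg/m^3], driven by the
    pressure difference dp [Pa] between liquid and bubble interior."""
    return 0.915 * r0 * math.sqrt(rho / dp)

# a millimetre-scale bubble in water at atmospheric pressure
print(rayleigh_collapse_time(1e-3))  # ~9.1e-5 s, i.e. tens of microseconds
```

    This order-of-magnitude timescale is what makes the nanosecond laser excitation and high-speed shadowgraphy mentioned above necessary.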

    Apparatus to control and visualize the impact of a high-energy laser pulse on a liquid target

    We present an experimental apparatus to control and visualize the response of a liquid target to laser-induced vaporization. We use a millimeter-sized drop as target and present two liquid-dye solutions that allow a variation of the absorption coefficient of the laser light in the drop by seven orders of magnitude. The excitation source is a Q-switched Nd:YAG laser at its frequency-doubled wavelength, emitting nanosecond pulses with energy densities above the local vaporization threshold. The absorption of the laser energy leads to a large-scale liquid motion at timescales that are separated by several orders of magnitude, which we spatiotemporally resolve by a combination of ultra-high-speed and stroboscopic high-resolution imaging in two orthogonal views. Surprisingly, the large-scale liquid motion upon laser impact is completely controlled by the spatial energy distribution obtained by a precise beam-shaping technique. The apparatus demonstrates the potential for accurate and quantitative studies of laser-matter interactions.
    Comment: Submitted to Review of Scientific Instruments

    A General Framework for Flexible Multi-Cue Photometric Point Cloud Registration

    The ability to build maps is a key functionality for the majority of mobile robots. A central ingredient to most mapping systems is the registration or alignment of the recorded sensor data. In this paper, we present a general methodology for photometric registration that can deal with multiple different cues. We provide examples for registering RGBD as well as 3D LIDAR data. In contrast to popular point cloud registration approaches such as ICP, our method does not rely on explicit data association and exploits multiple modalities such as raw range and image data streams. Color, depth, and normal information are handled in a uniform manner, and the registration is obtained by minimizing the pixel-wise difference between two multi-channel images. We developed a flexible and general framework and implemented our approach inside it. We also released our implementation as open-source C++ code. The experiments show that our approach allows for an accurate registration of the sensor data without requiring explicit data association or model-specific adaptations to datasets or sensors. Our approach exploits the different cues in a natural and consistent way, and the registration can be done at frame rate for a typical range or imaging sensor.
    Comment: 8 pages
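
    The pixel-wise multi-channel objective described above can be illustrated with a toy sketch. This is not the paper's C++ implementation; it is a brute-force search over 1-D integer image shifts that minimizes the squared difference across all stacked channels at once, with hypothetical names and synthetic data:

```python
import numpy as np

def photometric_error(ref, cur, shift):
    """Mean squared pixel-wise difference over the overlap of ref and
    cur after shifting cur horizontally by `shift` pixels; all channels
    (e.g. colour, depth, normals stacked along the last axis) count."""
    if shift == 0:
        a, b = ref, cur
    elif shift > 0:
        a, b = ref[:, shift:], cur[:, :-shift]
    else:
        a, b = ref[:, :shift], cur[:, -shift:]
    return float(np.mean((a - b) ** 2))

def register_shift(ref, cur, max_shift=5):
    """Pick the shift with the lowest photometric error."""
    shifts = range(-max_shift, max_shift + 1)
    return min(shifts, key=lambda s: photometric_error(ref, cur, s))

# toy multi-channel "images": the same scene shifted by 3 pixels
rng = np.random.default_rng(0)
ref = rng.random((32, 32, 4))
cur = np.roll(ref, -3, axis=1)
print(register_shift(ref, cur))  # → 3
```

    The actual method minimizes this kind of error over a full rigid-body transform rather than a single translation, but the principle, no explicit point-to-point data association, is the same.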

    High-speed imaging in fluids

    High-speed imaging is in popular demand for a broad range of experiments in fluids. It allows for a detailed visualization of the event under study by acquiring a series of image frames captured at high temporal and spatial resolution. This review covers high-speed imaging basics, defining criteria for high-speed imaging experiments in fluids and giving rules of thumb for a series of cases. It also considers stroboscopic imaging, triggering and illumination, and scaling issues, and provides guidelines for testing and calibration. Ultra-high-speed imaging at frame rates exceeding 1 million frames per second is reviewed, and the combination of conventional fluid-experiment techniques with high-speed imaging is discussed. The review concludes with a high-speed imaging chart, which summarizes criteria for temporal and spatial scale and facilitates the selection of a high-speed imaging system for the application.
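
    One common rule of thumb of the kind such a review formalizes: choose the frame rate so that the object moves at most one pixel between frames. A minimal sketch; the function and the numbers are illustrative, not taken from the review:

```python
def min_frame_rate(velocity, field_of_view, pixels):
    """Frames per second needed so an object moving at `velocity` [m/s]
    travels at most one pixel per frame, given a field of view [m]
    imaged onto `pixels` sensor pixels."""
    pixel_size = field_of_view / pixels  # object-space size of one pixel [m]
    return velocity / pixel_size

# droplet at 10 m/s imaged over a 10 mm field of view on 1000 pixels
print(min_frame_rate(10.0, 10e-3, 1000))  # on the order of 10^6 fps
```

    Simple estimates like this quickly show why ultra-high-speed cameras in the million-frames-per-second range are needed for fast small-scale fluid phenomena.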

    Femto-photography: capturing and visualizing the propagation of light

    We present femto-photography, a novel imaging technique to capture and visualize the propagation of light. With an effective exposure time of 1.85 picoseconds (ps) per frame, we reconstruct movies of ultrafast events at an equivalent resolution of about one half trillion frames per second. Because cameras with this shutter speed do not exist, we re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. We introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through macroscopic scenes; at such fast resolution, we must consider the notion of time-unwarping between the camera's and the world's space-time coordinate systems to take into account effects associated with the finite speed of light. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. In addition, our time-resolved technique may motivate new forms of computational photography.
    Funding: MIT Media Lab Consortium; Lincoln Laboratory; Massachusetts Institute of Technology, Institute for Soldier Nanotechnologies; Alfred P. Sloan Foundation (Research Fellowship); United States Defense Advanced Research Projects Agency (Young Faculty Award)
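
    In its simplest form, the time-unwarping mentioned above amounts to subtracting each scene point's light travel time to the sensor from the recorded arrival time. A minimal sketch under that line-of-sight assumption; this is not the paper's full reconstruction pipeline:

```python
C = 2.998e8  # speed of light [m/s]

def unwarp_time(t_camera, depth):
    """Convert a camera-space arrival time [s] to world-space event
    time by removing the travel time from a scene point at `depth` [m]
    to the sensor (straight-line path assumed)."""
    return t_camera - depth / C

# light from a point 0.3 m away, recorded at t' = 2.0 ns
print(unwarp_time(2.0e-9, 0.3))  # ≈ 1.0 ns event time
```

    At picosecond frame times, this correction is essential: 0.3 m of extra path already shifts the apparent timing by a full nanosecond, hundreds of frames.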

    3D ShapeNets: A Deep Representation for Volumetric Shapes

    3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
    Comment: To appear in CVPR 2015
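
    The volumetric input described above, binary variables on a 3D voxel grid, can be sketched as a simple occupancy voxelization of a point set. This illustrates only the representation, not the Convolutional Deep Belief Network itself; the function name and grid size are hypothetical:

```python
import numpy as np

def voxelize(points, grid=30):
    """Map 3-D points into a binary occupancy grid, the kind of
    volumetric representation a 3-D deep network consumes."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    # scale each point into [0, grid-1] and mark its voxel occupied
    idx = ((pts - lo) / (hi - lo + 1e-9) * (grid - 1)).astype(int)
    vol = np.zeros((grid, grid, grid), dtype=np.uint8)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vol

rng = np.random.default_rng(1)
vol = voxelize(rng.random((500, 3)))
print(vol.shape, int(vol.sum()))  # a 30x30x30 grid and its occupied-voxel count
```

    Each occupied voxel becomes one binary variable; a generative model over such grids is what lets the network complete unseen parts of a shape from a single 2.5D view.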