INFORMATION TECHNOLOGY FOR NEXT-GENERATION OF SURGICAL ENVIRONMENTS
Minimally invasive surgeries (MIS) are fundamentally constrained by image quality, access to the operative field, and the visualization environment on which the surgeon relies for real-time information. Although minimally invasive access benefits the patient, it also leads to more challenging procedures, which require better skills and training. Endoscopic surgeries rely heavily on 2D interfaces, introducing additional challenges due to the loss of depth perception, the lack of three-dimensional imaging, and the reduction of degrees of freedom. By using state-of-the-art technology within a distributed computational architecture, it is possible to incorporate multiple sensors, hybrid display devices, and 3D visualization algorithms within a flexible surgical environment. Such environments can assist the surgeon with valuable information that goes far beyond what is currently available. In this thesis, we will discuss how 3D visualization and reconstruction, stereo displays, high-resolution display devices, and tracking techniques are key elements in the next generation of surgical environments.
Fast Radiometric Compensation for Nonlinear Projectors
Radiometric compensation can be accomplished on nonlinear projector-camera systems through the use of pixelwise lookup tables. Existing methods are both computationally and memory intensive, making them impractical for current high-end projector technology. In this paper, a novel computationally efficient method for nonlinear radiometric compensation of projectors is proposed. The compensation accuracy of the proposed method is assessed with the use of a spectroradiometer. Experimental results show both the effectiveness of the method and the reduction in compensation time compared to a recent state-of-the-art method.
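The pixelwise lookup-table idea this abstract builds on can be sketched as follows. This is an illustrative assumption, not the paper's actual method: the function names, the gamma-style response used in testing, and the use of linear interpolation to invert the measured response are all choices made here for the sketch.

```python
import numpy as np

def build_compensation_lut(measured, inputs, levels=256):
    """Build a per-pixel lookup table inverting a projector's response.

    measured : (S, H, W) array of camera-observed intensities for the S
               projected gray levels `inputs` (shape (S,)), all in [0, 255].
    Returns a (levels, H, W) uint8 LUT: lut[t, y, x] is the projector
    input expected to produce target intensity t at pixel (y, x).
    """
    S, H, W = measured.shape
    targets = np.arange(levels, dtype=np.float64)
    lut = np.empty((levels, H, W), dtype=np.uint8)
    for y in range(H):
        for x in range(W):
            # np.interp needs increasing sample points; a physical
            # projector response is monotone once sorted.
            resp = measured[:, y, x]
            order = np.argsort(resp)
            lut[:, y, x] = np.clip(
                np.interp(targets, resp[order], inputs[order]), 0, 255)
    return lut

def compensate(target_image, lut):
    """Replace each desired pixel value with its LUT-compensated input."""
    H, W = target_image.shape
    ys, xs = np.indices((H, W))
    return lut[target_image, ys, xs]
```

Projecting `compensate(target, lut)` instead of `target` should then make the camera observe approximately `target`, despite the projector's nonlinear, spatially varying response.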
Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect
Recently, Microsoft released the new Kinect One, the next generation of real-time range-sensing devices, based on the Time-of-Flight (ToF) principle. Since the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of each Kinect device in as isolated a manner as possible, and in a way that allows them to be adopted for any other range-sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range-sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of each. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).
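The two sensing principles being compared can be summarized in a short sketch. The formulas are the standard textbook relations for each principle; the example parameter values in the test are assumptions for illustration, not measurements from the paper.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_depth(phase_rad, mod_freq_hz):
    """Continuous-wave Time-of-Flight (Kinect One principle): the sensor
    measures the phase shift between emitted and reflected modulated
    light. Depth is unambiguous only up to C / (2 * mod_freq_hz)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def structured_light_depth(focal_px, baseline_m, disparity_px):
    """Structured light (first-generation Kinect principle): the projected
    pattern is matched against a reference image, and depth follows from
    triangulation over the projector-camera baseline."""
    return focal_px * baseline_m / disparity_px
```

The differing error characteristics the paper investigates follow directly from these models: ToF error depends on phase-measurement noise and modulation frequency, while structured-light error grows quadratically with distance as disparity shrinks.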
COMMERCIALIZATION AND OPTIMIZATION OF THE PIXEL ROUTER
The Pixel Router was developed at the University of Kentucky with the intent of supporting multi-projector displays by combining the scalability of commercial software solutions with the flexibility of commercial hardware solutions. This custom hardware solution uses a Look Up Table for an arbitrary input-to-output pixel mapping, but suffers from high memory latencies due to random SDRAM accesses. In order for this device to achieve marketability, the image interpolation method needed improvement as well. The previous design used the nearest-neighbor interpolation method, which produces poor-looking results but requires the fewest memory accesses. A cache was implemented to support bilinear interpolation, simultaneously increasing the output frame rate and image quality. A number of software simulations were conducted to test and refine the cache design, and these results were verified by testing the implementation on hardware. The frame rate was improved by a factor of 6 versus bilinear interpolation on the previous design, and by as much as 50% versus nearest neighbor on the previous design. The Pixel Router was also certified for FCC conducted and radiated emissions compliance, and potential commercial market areas were explored.
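The trade-off between the two interpolation methods mentioned above can be sketched as follows. This is a plain-Python illustration of the sampling arithmetic, not the Pixel Router's hardware implementation.

```python
import numpy as np

def nearest_sample(img, sx, sy):
    """Nearest-neighbor: one memory access per output pixel,
    but visibly blocky results under arbitrary pixel remapping."""
    return img[int(round(sy)), int(round(sx))]

def bilinear_sample(img, sx, sy):
    """Bilinear: four memory accesses per output pixel, weighted by
    the fractional source position. This is why a cache pays off in
    hardware: the four source pixels of neighboring outputs overlap
    heavily, so most accesses hit the cache instead of SDRAM."""
    x0, y0 = int(sx), int(sy)
    fx, fy = sx - x0, sy - y0
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + (fy) * bot
```

With a per-pixel lookup table supplying the fractional source coordinates `(sx, sy)`, each output pixel becomes one of these sample operations over the input frame.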
THE UNIVERSAL MEDIA BOOK
We explore the integration of projected imagery with a physical book that acts as a tangible interface to multimedia data. Using a camera and projector pair, a tracking framework is presented wherein the 3D positions of planar pages are monitored as they are turned back and forth by a user, and data is correctly warped and projected onto each page at interactive rates to provide the user with an intuitive mixed-reality experience. The book pages are blank, so traditional camera-based approaches to tracking physical features on the display surface do not apply. Instead, in each frame, feature points are independently extracted from the camera and projector images, and matched to recover the geometry of the pages in motion. The book can be loaded with multimedia content, including images and videos. In addition, volumetric datasets can be explored by removing a page from the book and using it as a tool to navigate through a virtual 3D volume.
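Recovering the geometry of a planar page from matched feature points, as described above, amounts to estimating a homography between views. The following is a minimal NumPy sketch using the standard direct linear transform; it illustrates the geometric step only and is not the paper's actual tracking pipeline.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: least-squares homography from >= 4
    point correspondences (here, features matched between the camera
    and projector views of a page)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector with smallest
    # singular value of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map 2-D points through H with the homogeneous divide."""
    pts = np.asarray(pts, dtype=np.float64)
    ones = np.ones((len(pts), 1))
    mapped = np.hstack([pts, ones]) @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

Once the page's homography is known for the current frame, the multimedia content can be pre-warped by its inverse so that the projection lands undistorted on the moving page.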
Astrometry with the Wide-Field InfraRed Space Telescope
The Wide-Field InfraRed Space Telescope (WFIRST) will be capable of
delivering precise astrometry for faint sources over the enormous field of view
of its main camera, the Wide-Field Imager (WFI). This unprecedented combination
will be transformative for the many scientific questions that require precise
positions, distances, and velocities of stars. We describe the expectations for
the astrometric precision of the WFIRST WFI in different scenarios, illustrate
how a broad range of science cases will see significant advances with such
data, and identify aspects of WFIRST's design where small adjustments could
greatly improve its power as an astrometric instrument.Comment: version accepted to JATI
The Robo-AO-2 facility for rapid visible/near-infrared AO imaging and the demonstration of hybrid techniques
We are building a next-generation laser adaptive optics system, Robo-AO-2,
for the UH 2.2-m telescope that will deliver robotic, diffraction-limited
observations at visible and near-infrared wavelengths in unprecedented numbers.
The superior Maunakea observing site, expanded spectral range and rapid
response to high-priority events represent a significant advance over the
prototype. Robo-AO-2 will include a new reconfigurable natural guide star
sensor for exquisite wavefront correction on bright targets and the
demonstration of potentially transformative hybrid AO techniques that promise
to extend the faintness limit on current and future exoplanet adaptive optics
systems. Comment: 15 pages.