Panoramic optical and near-infrared SETI instrument: optical and structural design concepts
We propose a novel instrument design to greatly expand the current optical
and near-infrared SETI search parameter space by monitoring the entire
observable sky during all observable time. This instrument aims to search
for technosignatures by detecting nanosecond- to microsecond-scale light pulses
that could have been emitted, for instance, for the purpose of interstellar
communications or energy transfer. We present an instrument conceptual design
based upon an assembly of 198 refracting 0.5-m telescopes tessellating two
geodesic domes. This design produces a regular layout of hexagonal collecting
apertures that optimizes the instrument footprint, aperture diameter,
instrument sensitivity and total field-of-view coverage. We also present the
optical performance of some Fresnel lenses envisaged to develop a dedicated
panoramic SETI (PANOSETI) observatory that will dramatically increase sky-area
searched (pi steradians per dome), wavelength range covered, number of stellar
systems observed, interstellar space examined and duration of time monitored
with respect to previous optical and near-infrared technosignature finders.
Comment: 14 pages, 5 figures, 3 tables
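Pulsed-technosignature searches of this kind typically suppress detector noise by requiring that a pulse be seen by two independent apertures at (nearly) the same time; the two-dome layout described above lends itself to such coincidence vetoes. The sketch below is illustrative only and is not the PANOSETI pipeline: the function name, the 100 ns window, and the timestamps are all hypothetical.

```python
def coincident_pulses(times_a, times_b, window_ns=100.0):
    """Return pairs of pulse timestamps (in ns) from two independent
    detectors that fall within `window_ns` of each other.
    Uses a two-pointer sweep over the sorted timestamp lists."""
    times_a, times_b = sorted(times_a), sorted(times_b)
    pairs = []
    i = j = 0
    while i < len(times_a) and j < len(times_b):
        dt = times_b[j] - times_a[i]
        if abs(dt) <= window_ns:
            pairs.append((times_a[i], times_b[j]))
            i += 1
            j += 1
        elif dt < 0:
            j += 1  # detector B's pulse is too early; advance B
        else:
            i += 1  # detector A's pulse is too early; advance A
    return pairs
```

A pulse that appears in only one detector (e.g. a cosmic-ray hit in a single photomultiplier) produces no pair and is discarded.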
Single-breath-hold photoacoustic computed tomography of the breast
We have developed a single-breath-hold photoacoustic computed tomography (SBH-PACT) system to reveal detailed angiographic structures in human breasts. SBH-PACT features a deep penetration depth (4 cm in vivo) with high spatial and temporal resolutions (255 µm in-plane resolution and a 10 Hz 2D frame rate). By scanning the entire breast within a single breath hold (~15 s), a volumetric image can be acquired and subsequently reconstructed using 3D back-projection with negligible breathing-induced motion artifacts. SBH-PACT clearly reveals tumors through the higher blood vessel densities associated with them at high spatial resolution, showing early promise for high sensitivity in radiographically dense breasts. In addition to blood vessel imaging, the high imaging speed enables dynamic studies, such as photoacoustic elastography, which identifies tumors by their lower compliance. We imaged breast cancer patients with breast sizes ranging from B cup to DD cup, and skin pigmentations ranging from light to dark. SBH-PACT identified all the tumors without resorting to ionizing radiation or exogenous contrast, posing no health risks.
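The back-projection reconstruction mentioned above rests on the delay-and-sum idea: each pixel's value is the sum of the sensor signals sampled at the acoustic time of flight from that pixel to each sensor. The following is a minimal 2D sketch of that idea, not the paper's 3D algorithm; the function name, grid, and sampling parameters are invented for illustration.

```python
import numpy as np

def delay_and_sum(signals, sensor_xy, grid_xy, fs, c=1500.0):
    """Naive 2D delay-and-sum photoacoustic reconstruction.
    signals:   (n_sensors, n_samples) pressure traces
    sensor_xy: (n_sensors, 2) sensor positions in metres
    grid_xy:   (n_pixels, 2) pixel positions in metres
    fs: sampling rate in Hz; c: speed of sound in m/s (~1500 in tissue)."""
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(grid_xy))
    for s in range(n_sensors):
        # time of flight from every pixel to this sensor -> sample index
        d = np.linalg.norm(grid_xy - sensor_xy[s], axis=1)
        idx = np.clip(np.round(d / c * fs).astype(int), 0, n_samples - 1)
        image += signals[s, idx]
    return image / n_sensors
```

A pixel that actually emitted a pressure wave accumulates in-phase contributions from every sensor, while other pixels sample mostly noise.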
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
introduce event cameras, covering their working principle, the sensors that are
available, and the tasks they have been applied to, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
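The event stream described above (each event encoding a time, a pixel location, and the sign of the brightness change) can be sketched as a simple data structure, with a common first processing step: accumulating events over a time window into a 2D frame. The representation and window here are illustrative assumptions, not any specific camera's API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float  # timestamp in seconds
    x: int    # pixel column
    y: int    # pixel row
    p: int    # polarity: +1 brightness increase, -1 decrease

def accumulate(events, width, height, t0, t1):
    """Sum event polarities per pixel over [t0, t1) to form a 2D frame."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        if t0 <= e.t < t1:
            frame[e.y][e.x] += e.p
    return frame
```

Because events are timestamped with microsecond resolution, the window [t0, t1) can be made arbitrarily short, which is what gives event-based methods their low latency compared to fixed-rate frames.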
Assessment of pulmonary edema: principles and practice
Pulmonary edema is increasingly recognized as a perioperative complication that affects outcome. Several risk factors have been identified, including those of cardiogenic origin, such as heart failure or excessive fluid administration, and those related to increased pulmonary capillary permeability secondary to inflammatory mediators.
Effective treatment requires prompt diagnosis and early intervention. Consequently, over the past two centuries there has been a concentrated effort to develop clinical tools that rapidly diagnose pulmonary edema and track the response to treatment. The ideal tool would offer high sensitivity and specificity, be readily available, and detect early accumulation of lung water before the full clinical presentation develops. In addition, clinicians highly value the ability to precisely quantify extravascular lung water accumulation and to differentiate hydrostatic from high-permeability etiologies of pulmonary edema.
In this review, advances in understanding the physiology of extravascular lung water accumulation in health and in disease, and the various mechanisms that protect against the development of pulmonary edema under physiologic conditions, are discussed. In addition, the various bedside modalities available to diagnose early accumulation of extravascular lung water and pulmonary edema, including chest auscultation, chest roentgenography, lung ultrasonography, and transpulmonary thermodilution, are examined. Furthermore, advantages and limitations of these methods for the operating room and intensive care unit that are critical for proper modality selection in each individual case are explored.
04251 -- Imaging Beyond the Pinhole Camera
From 13.06.04 to 18.06.04, the Dagstuhl Seminar 04251 ``Imaging Beyond the Pin-hole Camera. 12th Seminar on Theoretical Foundations of Computer Vision'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
A Smartphone-Based Tool for Rapid, Portable, and Automated Wide-Field Retinal Imaging.
Purpose: High-quality, wide-field retinal imaging is a valuable method for screening preventable, vision-threatening diseases of the retina. Smartphone-based retinal cameras hold promise for increasing access to retinal imaging, but variable image quality and restricted field of view can limit their utility. We developed and clinically tested a smartphone-based system that addresses these challenges with automation-assisted imaging. Methods: The system was designed to improve smartphone retinal imaging by combining automated fixation guidance, photomontage, and multicolored illumination with optimized optics, user-tested ergonomics, and a touch-screen interface. System performance was evaluated from images of ophthalmic patients taken by nonophthalmic personnel. Two masked ophthalmologists evaluated images for abnormalities and disease severity. Results: The system automatically generated 100° retinal photomontages from five overlapping images in under 1 minute at full resolution (52.3 pixels per retinal degree) fully on-phone, revealing numerous retinal abnormalities. Feasibility of the system for diabetic retinopathy (DR) screening using the retinal photomontages was assessed in 71 diabetic patients by masked graders. DR grade matched the dilated clinical examination exactly in 55.1% of eyes and within 1 severity level in 85.2% of eyes. For referral-warranted DR, average sensitivity was 93.3% and specificity 56.8%. Conclusions: Automation-assisted imaging produced high-quality, wide-field retinal images that demonstrate the potential of smartphone-based retinal cameras for retinal disease screening. Translational Relevance: Enhancement of smartphone-based retinal imaging through automation and software intelligence holds great promise for increasing the accessibility of retinal screening.
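Sensitivity and specificity figures like those reported above come directly from a screening confusion matrix. As a reminder of the arithmetic (the counts below are made up for illustration, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of diseased eyes flagged.
    Specificity = TN / (TN + FP): fraction of healthy eyes cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening outcome: 50 eyes with referral-warranted disease,
# 50 without.
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=30, fp=20)
```

For a screening tool, high sensitivity (few missed cases) is usually prioritized over specificity, since false positives are caught at the follow-up examination.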
A dataset of annotated omnidirectional videos for distancing applications
Omnidirectional (or 360°) cameras are acquisition devices that, in the next few years, could have a big impact on video surveillance applications, research, and industry, as they can record a spherical view of a whole environment from every perspective. This paper presents two new contributions to the research community: the CVIP360 dataset, an annotated dataset of 360° videos for distancing applications, and a new method to estimate the distances of objects in a scene from a single 360° image. The CVIP360 dataset includes 16 videos acquired outdoors and indoors, annotated by adding information about the pedestrians in the scene (bounding boxes) and the distances to the camera of some points in the 3D world, using markers placed at fixed and known intervals. The proposed distance estimation algorithm is based on geometric properties of the omnidirectional acquisition process and is uncalibrated in practice: the only required parameter is the camera height. The proposed algorithm was tested on the CVIP360 dataset, and empirical results demonstrate that the estimation error is negligible for distancing applications.
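A camera-height-only distance estimate of the kind described above typically exploits a standard property of equirectangular panoramas: a ground-plane point seen at angle φ below the horizon lies at distance h / tan(φ), where h is the camera height. The sketch below illustrates that geometric relation under stated assumptions (rows spanning 180° of elevation, horizon at mid-image, flat ground); the paper's exact formulation may differ, and the function name and parameters are hypothetical.

```python
import math

def ground_distance(v, image_height, camera_height):
    """Distance along the ground to a point at pixel row v of an
    equirectangular panorama whose rows span 180 deg of elevation.
    v is measured from the top of the image; camera_height is in metres.
    Valid only for ground points below the horizon (v > image_height / 2)."""
    # angle below the horizon, in radians
    phi = math.pi * (v - image_height / 2) / image_height
    if phi <= 0:
        raise ValueError("pixel row is at or above the horizon")
    return camera_height / math.tan(phi)
```

Note why the method is "uncalibrated in practice": the angle φ follows directly from the equirectangular projection, so no focal length or lens model is needed, only h.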