
    Edge adaptive filtering of depth maps for mobile devices

    Abstract. Mobile phone cameras have an almost unlimited depth of field, so the images captured with them have wide areas in focus. When the depth of field is digitally manipulated through image processing, accurate perception of depth in the captured scene is important. Capturing depth data requires advanced imaging methods. When a stereo lens system is used, depth information is calculated from the disparities between stereo frames. The resulting depth map is often noisy or lacks information for some pixels, so it has to be filtered before it is used for emphasizing depth. Edges must be taken into account in this process to create natural-looking shallow depth-of-field images. In this study, five filtering methods are compared with each other. The main focus is on the Fast Bilateral Solver, because of its novelty and high reported quality. Mobile imaging requires fast filtering in uncontrolled environments, so optimizing the processing time of the filters is essential. In the evaluations, the depth maps are filtered, and the quality and speed are determined for every method. The results show that the Fast Bilateral Solver filters the depth maps well and handles noisy depth maps better than the other evaluated methods. However, for mobile imaging it is slow and needs further optimization.
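The edge-aware filtering this abstract describes can be illustrated with a joint bilateral filter: depth values are averaged only with neighbours that are both spatially close and similar in a guide image, so depth is smoothed within objects but not across intensity edges, and invalid pixels are filled from valid neighbours. This is a minimal NumPy sketch of the principle, not the Fast Bilateral Solver itself; all names and parameter values here are illustrative.

```python
import numpy as np

def joint_bilateral_depth_filter(depth, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Edge-aware smoothing of a noisy depth map, guided by the image.

    Invalid depth (NaN) is ignored in the weighted average, which also
    fills small holes. sigma_s / sigma_r control the spatial and range
    (guide-intensity) falloff.
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    ys = np.arange(-radius, radius + 1)
    # Precompute the spatial Gaussian kernel once.
    gy, gx = np.meshgrid(ys, ys, indexing="ij")
    spatial = np.exp(-(gx**2 + gy**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            d = depth[y0:y1, x0:x1]
            g = guide[y0:y1, x0:x1]
            s = spatial[y0 - y + radius:y1 - y + radius,
                        x0 - x + radius:x1 - x + radius]
            # Range weight: penalise guide-image intensity differences,
            # so averaging does not cross edges.
            wgt = s * np.exp(-((g - guide[y, x]) ** 2) / (2 * sigma_r**2))
            valid = ~np.isnan(d)
            wsum = (wgt * valid).sum()
            out[y, x] = (np.where(valid, d, 0.0) * wgt).sum() / wsum if wsum > 0 else np.nan
    return out
```

The Fast Bilateral Solver reaches a similar edge-aware result far faster by solving a sparse optimization in a bilateral grid rather than looping over windows as above.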

    Focus Is All You Need: Loss Functions For Event-based Vision

    Event cameras are novel vision sensors that output pixel-level brightness changes ("events") instead of traditional video frames. These asynchronous sensors offer several advantages over traditional cameras, such as high temporal resolution, very high dynamic range, and no motion blur. To unlock the potential of such sensors, motion compensation methods have recently been proposed. We present a collection and taxonomy of twenty-two objective functions to analyze event alignment in motion compensation approaches (Fig. 1). We call them Focus Loss Functions since they have strong connections with functions used in traditional shape-from-focus applications. The proposed loss functions allow mature computer vision tools to be brought to the realm of event cameras. We compare the accuracy and runtime performance of all loss functions on a publicly available dataset, and conclude that the variance, the gradient magnitude, and the Laplacian magnitude are among the best loss functions. The applicability of the loss functions is shown on multiple tasks: rotational motion, depth, and optical flow estimation. The proposed focus loss functions unlock the outstanding properties of event cameras.
    Comment: 29 pages, 19 figures, 4 tables
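The three losses the abstract singles out (variance, gradient magnitude, Laplacian magnitude) all score the sharpness of an image of warped events (IWE): well-compensated motion concentrates events into sharp edges, raising each score. A minimal NumPy sketch of these scores, with illustrative function names not taken from the paper:

```python
import numpy as np

def variance_loss(iwe):
    # Contrast of the IWE: sharper event alignment concentrates events
    # into fewer pixels, increasing the variance of the pixel counts.
    return np.var(iwe)

def gradient_magnitude_loss(iwe):
    # Mean squared gradient magnitude; sharp edges have large gradients.
    gy, gx = np.gradient(iwe.astype(np.float64))
    return np.mean(gx**2 + gy**2)

def laplacian_magnitude_loss(iwe):
    # Mean squared Laplacian via a 5-point stencil (wrap-around borders).
    lap = (-4.0 * iwe
           + np.roll(iwe, 1, 0) + np.roll(iwe, -1, 0)
           + np.roll(iwe, 1, 1) + np.roll(iwe, -1, 1))
    return np.mean(lap**2)
```

In a motion-compensation pipeline one would warp the events under candidate motion parameters, accumulate them into an IWE, and maximise one of these scores over the parameters.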

    Development of a handheld fiber-optic probe-based Raman imaging instrumentation: Raman chemlighter

    Raman systems based on handheld fiber-optic probes offer advantages in terms of smaller size and easier access to the measurement site, which are favorable for biomedical and clinical applications in complex environments. However, applying probes has several common drawbacks: (1) the fixed working distance requires the user to maintain a certain distance from the sample to acquire strong Raman signals; (2) single-point measurement precludes a mapping or scanning procedure; (3) there is no real-time data processing or straightforward co-registration method to link the Raman information with the respective measurement position. The thesis proposed and experimentally demonstrated approaches to overcome each of these drawbacks. A handheld fiber-optic Raman probe with an autofocus unit was presented to overcome the limitations of fixed-focus lenses: a liquid lens serves as the objective, allowing dynamic adjustment of the focal length of the probe. Computer-vision-based positional tracking of the laser spot in brightfield images co-registers each Raman spectroscopic measurement with its spatial location, enabling fast recording of a Raman image from a large tissue sample. The visualization of the Raman image has been extended to augmented and mixed reality and combined with a 3D reconstruction method and projector-based visualization to offer an intuitive and easily understandable way of presenting the Raman image. All these advances are substantial and highly beneficial to further drive the clinical translation of Raman spectroscopy as potential image-guided instrumentation.

    The Mark 3 Haploscope

    A computer-operated binocular vision testing device was developed as one part of a system designed for NASA to evaluate the visual function of astronauts during spaceflight. This particular device, called the Mark 3 Haploscope, employs semi-automated psychophysical test procedures to measure visual acuity, stereopsis, phoria, fixation disparity, refractive state and accommodation/convergence relationships. Test procedures are self-administered and can be used repeatedly without subject memorization. The Haploscope was designed as one module of the complete NASA Vision Testing System. However, it is capable of stand-alone operation. Moreover, the compactness and portability of the Haploscope make possible its use in a broad variety of testing environments

    Visually guided vergence in a new stereo camera system

    People move their eyes several times each second, to selectively analyze visual information from specific locations. This is important, because analyzing the whole scene in foveal detail would require a beachball-sized brain and thousands of additional calories per day. As artificial vision becomes more sophisticated, it may face analogous constraints. Anticipating this, we previously developed a robotic head with biologically realistic oculomotor capabilities. Here we present a system for accurately orienting the cameras toward a three-dimensional point. The robot's cameras converge when looking at something nearby, so each camera should ideally centre the same visual feature. At the end of a saccade, we combine priors with cross-correlation of the images from each camera to iteratively fine-tune their alignment, and we use the orientations to set focus distance. This system allows the robot to accurately view a visual target with both eyes.
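The cross-correlation step described in this abstract can be illustrated with a normalised cross-correlation (NCC) search over horizontal shifts between the two cameras' image patches; the argmax is the residual vergence error to correct. The 1-D search and all names below are our simplification for illustration, not the authors' implementation:

```python
import numpy as np

def best_horizontal_shift(left, right, max_shift=8):
    """Find the horizontal shift that best aligns two image patches.

    Slides `right` over `left` along x and scores each candidate shift
    with NCC on the overlapping columns; returns the best-scoring shift.
    """
    h, w = left.shape
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Select the columns that overlap at this shift.
        if s >= 0:
            a, b = left[:, s:], right[:, :w - s]
        else:
            a, b = left[:, :w + s], right[:, -s:]
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a**2).sum() * (b**2).sum())
        score = (a * b).sum() / denom if denom > 0 else -np.inf
        if score > best_score:
            best, best_score = s, score
    return best
```

In a converged stereo head, the recovered shift would be fed back as a small camera rotation, and the loop repeated until both cameras centre the same feature.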