
    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in the recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be addressed by the computer vision community.

    3-D surface modelling of the human body and 3-D surface anthropometry

    This thesis investigates three-dimensional (3-D) surface modelling of the human body and 3-D surface anthropometry. These are two separate, but closely related, areas. 3-D surface modelling is an essential technology for representing and describing the surface shape of an object on a computer. 3-D surface modelling of the human body has wide applications in engineering design, work space simulation, the clothing industry, medicine, biomechanics and animation. These applications require increasingly realistic surface models of the human body. 3-D surface anthropometry is a new interdisciplinary subject. It is defined in this thesis as the art, science, and technology of acquiring, modelling and interrogating 3-D surface data of the human body. [Continues.]

    Option data and modeling BSM implied volatility

    This contribution to the Handbook of Computational Finance, Springer-Verlag, gives an overview of modeling implied volatility data. After introducing the concept of Black-Scholes-Merton implied volatility (IV), the empirical stylized facts of IV data are reviewed. We then discuss recent results on IV surface dynamics and the computational aspects of IV. The main focus is on various parametric, semi- and nonparametric modeling strategies for IV data, including ones which respect no-arbitrage bounds.
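
    To make the notion of implied volatility concrete, the following minimal Python sketch (not taken from the handbook chapter; all numerical inputs are illustrative) inverts the Black-Scholes-Merton call price for the volatility parameter with Brent's root-finding method:

        from math import exp, log, sqrt
        from scipy.optimize import brentq
        from scipy.stats import norm

        def bsm_call(S, K, T, r, sigma):
            """Black-Scholes-Merton price of a European call."""
            d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
            d2 = d1 - sigma * sqrt(T)
            return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

        def implied_vol(price, S, K, T, r):
            """Find sigma such that the model price matches the market price."""
            return brentq(lambda s: bsm_call(S, K, T, r, s) - price, 1e-6, 5.0)

        # Illustrative example: an at-the-money one-year call quoted at 10.45
        print(implied_vol(10.45, S=100.0, K=100.0, T=1.0, r=0.05))  # ~0.20

    Repeating this inversion across strikes and expiries yields the IV surface whose stylized facts and dynamics the chapter reviews.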

    Common Correlation and Calibrating the Lognormal Forward Rate Model

    In 1997, three papers that introduced very similar lognormal diffusion processes for interest rates appeared virtually simultaneously. These models, now commonly called the 'LIBOR models', are based on either lognormal diffusions of forward rates, as in Brace, Gatarek & Musiela (1997) and Miltersen, Sandmann & Sondermann (1997), or lognormal diffusions of swap rates, as in Jamshidian (1997). The consequent research interest in the calibration of the LIBOR models has engendered a growing empirical literature, including many papers by Brigo and Mercurio, and by Riccardo Rebonato (www.fabiomercurio.it, www.damianobrigo.it and www.rebonato.com). The art of model calibration requires a reasonable knowledge of option pricing and a thorough background in statistics - skills that are quite different from those required to design no-arbitrage pricing models. Researchers will find the book by Brigo and Mercurio (2001) and the forthcoming book by Rebonato (2002) invaluable aids to their understanding.
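
    As a concrete illustration of the lognormal forward-rate setup (a minimal sketch under standard assumptions, not code from the papers cited): under its own forward measure, each forward LIBOR rate is a driftless geometric Brownian motion, so a caplet prices in closed form by Black's formula, which is the basic building block in calibrating these models to cap markets. All parameter values below are illustrative.

        from math import log, sqrt
        from scipy.stats import norm

        def black_caplet(F, K, sigma, T, P, tau):
            """Black caplet price: forward rate F, strike K, Black volatility
            sigma, expiry T, discount bond P = P(0, T + tau), accrual tau."""
            d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
            d2 = d1 - sigma * sqrt(T)
            return P * tau * (F * norm.cdf(d1) - K * norm.cdf(d2))

        # Illustrative: 5% forward, ATM strike, 20% vol, 1y expiry, semiannual accrual
        print(black_caplet(F=0.05, K=0.05, sigma=0.20, T=1.0, P=0.93, tau=0.5))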

    3D Geometric Analysis of Tubular Objects based on Surface Normal Accumulation

    This paper proposes a simple and efficient method for the reconstruction and extraction of geometric parameters from 3D tubular objects. Our method constructs an image that accumulates surface normal information; peaks within this image are then located by tracking, and finally the positions of these peaks are optimized to lie precisely on the centerline of the tubular shape. The method is very versatile and can process various input data types, such as full or partial meshes acquired from 3D laser scans, 3D height maps, or discrete volumetric images. The proposed algorithm is simple to implement, has few parameters, and runs in linear time with respect to the number of surface faces. Since the extracted tube centerline is accurate, we are able to decompose the tube into rectilinear parts and torus-like parts. This is done with a new linear-time 3D torus detection algorithm, which follows the same principle as previous work on 2D circular arc recognition. Detailed experiments show the versatility, accuracy and robustness of our new method.
    Comment: in the 18th International Conference on Image Analysis and Processing, Sep 2015, Genova, Italy.
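
    The following Python sketch is an assumed toy version of the accumulation step, not the authors' implementation; the grid placement, voxel size, and ray length are illustrative parameters. Each face votes into a voxel grid along its inward normal, so voxels near the tube centerline collect the most votes, and the total work stays linear in the number of faces.

        import numpy as np

        def accumulate_normals(centers, normals, grid_shape, voxel_size, max_dist):
            """centers: (N,3) face centroids; normals: (N,3) unit inward normals."""
            acc = np.zeros(grid_shape, dtype=np.int32)
            origin = centers.min(axis=0)          # place the grid over the data
            steps = np.arange(0.0, max_dist, voxel_size)
            for c, n in zip(centers, normals):
                pts = c + np.outer(steps, n)      # sample along the normal ray
                idx = np.floor((pts - origin) / voxel_size).astype(int)
                ok = np.all((idx >= 0) & (idx < grid_shape), axis=1)
                for i, j, k in idx[ok]:
                    acc[i, j, k] += 1             # one vote per traversed voxel
            return acc  # peaks in acc approximate the tube centerline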

    Camera Calibration with Non-Central Local Camera Models

    Camera calibration is an important prerequisite for many computer vision algorithms such as stereo vision and visual odometry. The goal of camera calibration is to determine both the poses of the cameras and their imaging model. The imaging model of a camera describes the relationship between the 3D world and the image plane. Currently, simple global camera models are commonly estimated with a calibration process that can be carried out with comparatively little effort and a large tolerance for error. To evaluate the resulting camera model, the reprojection error is usually taken as the measure of quality. However, even simple camera models that cannot precisely describe the imaging behaviour of an optical system can achieve low reprojection errors, so poorly calibrated camera models repeatedly go unrecognized. To counter this, this work proposes a new continuous non-central camera model based on B-splines. This imaging model makes it possible to accurately represent different lenses as well as non-central displacements that arise, for example, when the camera is placed behind a windshield. Despite this general formulation, the camera model can be estimated with an easy-to-use checkerboard calibration procedure. To evaluate calibration results, a calibration benchmark is proposed in place of the mean reprojection error. The ground truth of the camera model is described by a discrete ray-based model. To estimate this model, a calibration procedure that uses an active display as the target is presented, together with a local parametrization of the viewing rays and a way to estimate the surface of the display jointly with the intrinsic camera parameters. Estimating the surface reduces the mean point-to-line distance by a factor of more than 20; only then can the estimated camera model serve as ground truth. The proposed camera model and the associated calibration procedures are evaluated in an extensive study, in simulation and in the real world, using the new calibration benchmark. It is shown that even in the simplified case of a flat glass pane placed in front of the camera, the proposed model is superior to both a central and a non-central global camera model. Finally, the practicality of the proposed model is demonstrated by calibrating an automated vehicle equipped with six cameras pointing in different directions; the new model reduces the mean reprojection error by a factor of two to three for all cameras. In the future, the calibration benchmark will make it possible to compare the results of different calibration methods and to accurately determine the accuracy of an estimated camera model against the ground truth. The reduction in calibration error achieved by the newly proposed camera model helps to increase the accuracy of downstream algorithms such as stereo vision, visual odometry and 3D reconstruction.
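
    For reference, the mean reprojection error that the thesis argues is an insufficient quality measure can be computed as in the generic sketch below for a simple central pinhole model; this is not the proposed B-spline model, and all names and shapes are illustrative.

        import numpy as np

        def mean_reprojection_error(K, R, t, points_3d, points_2d):
            """Mean pixel distance between observed and projected points.
            K: 3x3 intrinsic matrix; R (3x3), t (3,): world-to-camera pose."""
            cam = points_3d @ R.T + t        # world points into the camera frame
            pix = cam @ K.T                  # apply the intrinsics
            pix = pix[:, :2] / pix[:, 2:3]   # perspective divide to pixel coords
            return float(np.mean(np.linalg.norm(pix - points_2d, axis=1)))

    As the abstract argues, a low value of this measure alone does not guarantee that the model describes the optics well, which motivates benchmarking against a ray-based ground-truth model instead.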

    Hubble Space Telescope Weak-lensing Study of the Galaxy Cluster XMMU J2235.3-2557 at z=1.4: A Surprisingly Massive Galaxy Cluster when the Universe is One-third of its Current Age

    We present a weak-lensing analysis of the z=1.4 galaxy cluster XMMU J2235.3-2557, based on deep Advanced Camera for Surveys images. Despite the observational challenge set by the high redshift of the lens, we detect a substantial lensing signal at the >~ 8 sigma level. This clear detection is enabled in part by the high mass of the cluster, which is verified by both our parametric and non-parametric estimates of the cluster mass. Assuming that the cluster follows a Navarro-Frenk-White mass profile, we estimate that the projected mass of the cluster within r=1 Mpc is (8.5 ± 1.7) × 10^14 solar masses, where the error bar includes the statistical uncertainty of the shear profile, the effect of possible interloping background structures, the scatter in the concentration parameter, and the error in our estimate of the mean redshift of the background galaxies. The high X-ray temperature of the cluster, 8.6 (+1.3, -1.2) keV, recently measured with Chandra, is consistent with this high lensing mass. When we adopt the 1-sigma lower limit as a mass threshold and use the cosmological parameters favored by the Wilkinson Microwave Anisotropy Probe 5-year (WMAP5) result, the expected number of similarly massive clusters at z >~ 1.4 in the 11 square degree survey is N ~ 0.005. Therefore, the discovery of the cluster within the survey volume is a rare event with a probability < 1%, and may open new scenarios in our current understanding of cluster formation within the standard cosmological model.
    Comment: Accepted for publication in ApJ. 40 pages and 14 figures.
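
    For reference, the Navarro-Frenk-White profile assumed above has a closed-form enclosed mass. The sketch below (illustrative, not the paper's pipeline) gives the 3D mass within radius r; the value quoted in the abstract is the projected mass within 1 Mpc, which requires an additional line-of-sight integral. The parameters rho_s and r_s are the usual NFW characteristic density and scale radius, in consistent units.

        import numpy as np

        def nfw_enclosed_mass(r, rho_s, r_s):
            """Mass inside 3D radius r for an NFW halo with density
            rho(r) = rho_s / ((r/r_s) * (1 + r/r_s)**2)."""
            x = r / r_s
            return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))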

    A non-invasive technique for burn area measurement

    The need for a reliable and accurate method for assessing the surface area of burn wounds currently exists in the branch of medicine involved with burn care and treatment. The percentage of the surface area burned is of critical importance in evaluating fluid replacement amounts and nutritional support during the first 24 hours of postburn therapy. A noninvasive technique has been developed which facilitates the measurement of burn area. The method described here is an inexpensive technique for measuring burn areas accurately. Our imaging system is based on a technique known as structured light. Most structured light computer imaging systems, including ours, use triangulation to determine the location of points in three dimensions as the intersection of two lines: a ray of light originating from the structured light projector and the line of sight determined by the location of the image point in the camera plane. The geometry used to determine 3D location by triangulation is identical to the geometry of other stereo-based vision systems, including the human vision system. Our system projects a square grid pattern from a 35 mm slide onto the patient. The grid on the slide is composed of uniformly spaced orthogonal stripes which may be indexed by row and column. Each slide also has square markers placed between the lines of the grid, in both the horizontal and vertical directions, in the center of the slide. Our system locates intersections of the projected grid stripes in the camera image and determines the 3D location of the corresponding points on the body by triangulation. Four steps are necessary in order to reconstruct the 3D locations of points on the surface of the skin: camera and projector calibration; image processing to locate the grid intersections in the camera image; grid labeling to establish the correspondence between projected and imaged intersections; and triangulation to determine three-dimensional position. Three steps are required to segment the burned portion of the image: edge detection to find the strongest edges of the region; edge following to form a closed boundary; and region filling to identify the burn region. After combining the reconstructed 3D locations and the segmented image, numerical analysis and geometric modeling techniques are used to calculate the burn area. We use cubic spline interpolation, bicubic surface patches and Gaussian quadrature double integration to calculate the burn wound area. The accuracy of this technique is demonstrated. The benefits and advantages of this technique are, first, that no assumptions need to be made about the shape of the human body, and second, that there is no need for either the Rule-of-Nines or the weight and height of the patient. This technique can be used for any human body shape, regardless of weight proportion, size, sex or skin pigmentation. The low cost, intuitive method, and demonstrated efficiency of this computer imaging technique make it a desirable alternative to current methods and provide the burn care specialist with a sterile, safe, and effective diagnostic tool in assessing and investigating burn areas.
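
    As an illustration of the triangulation step (a minimal sketch, not the authors' code), the 3D surface point can be recovered as the point closest to both the projector ray and the camera's line of sight, solved in closed form by least squares; the two rays rarely intersect exactly, so the midpoint of their shortest connecting segment is used.

        import numpy as np

        def triangulate(o1, d1, o2, d2):
            """Point closest to two lines o1 + s*d1 and o2 + t*d2
            (o: 3-vector origins, d: 3-vector unit directions)."""
            # Normal equations for minimizing |(o1 + s*d1) - (o2 + t*d2)|^2;
            # the system is singular only if the rays are parallel.
            A = np.array([[d1 @ d1, -(d1 @ d2)],
                          [d1 @ d2, -(d2 @ d2)]])
            b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
            s, t = np.linalg.solve(A, b)
            p1, p2 = o1 + s * d1, o2 + t * d2
            return 0.5 * (p1 + p2)   # midpoint of the shortest segment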