19 research outputs found
A variational method for dejittering large fluorescence line scanner images
We propose a variational method dedicated to jitter correction of large fluorescence scanner images. Our method consists in minimizing a global energy functional to estimate a dense displacement field representing the spatially varying jitter. The computational approach is based on a half-quadratic splitting of the energy functional, which decouples the realignment data term from the dedicated differential-based regularizer. The resulting problem amounts to alternately solving two convex and nonconvex optimization subproblems with appropriate algorithms. Experimental results on artificial and large real fluorescence images demonstrate that our method is not only capable of handling large displacements but is also efficient in terms of subpixel precision, without introducing additional intensity artifacts.
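The alternating scheme behind half-quadratic splitting can be illustrated on a toy problem. The sketch below is not the paper's actual functional: the quadratic data term and the L1 penalty are hypothetical stand-ins for the realignment term and the differential-based regularizer, chosen so that both subproblems have closed-form solutions.

```python
import numpy as np

# Toy half-quadratic splitting: minimize D(u) + R(u) by introducing an
# auxiliary variable v and alternating over
#   u-step:  D(u) + (beta/2)||u - v||^2   (quadratic -> closed form)
#   v-step:  (beta/2)||u - v||^2 + R(v)   (proximal step)
# Here D(u) = (1/2)||u - f||^2 and R(v) = lam*||v||_1 are illustrative
# stand-ins, not the paper's realignment term and regularizer.

def half_quadratic_splitting(f, lam=0.5, beta=1.0, iters=50):
    u = f.copy()
    v = f.copy()
    for _ in range(iters):
        # u-step: closed-form minimizer of (1/2)||u-f||^2 + (beta/2)||u-v||^2
        u = (f + beta * v) / (1.0 + beta)
        # v-step: soft-thresholding, the proximal operator of the L1 norm
        v = np.sign(u) * np.maximum(np.abs(u) - lam / beta, 0.0)
    return u

f = np.array([3.0, -0.2, 0.1, -2.5])
u = half_quadratic_splitting(f)
```

The decoupling is the point: each subproblem is simple on its own, even though the joint objective mixes a smooth data term with a nonsmooth penalty.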
Digital 3D Technologies for Humanities Research and Education: An Overview
Digital 3D modelling and visualization technologies have been widely applied to support research in the humanities since the 1980s. Since technological backgrounds, project opportunities, and methodological considerations for application are widely discussed in the literature, one of the next tasks is to validate these techniques within a wider scientific community and establish them in the culture of academic disciplines. This article resulted from a postdoctoral thesis and is intended to provide a comprehensive overview of the use of digital 3D technologies in the humanities with regard to (1) scenarios, user communities, and epistemic challenges; (2) technologies, UX design, and workflows; and (3) framework conditions such as legislation, infrastructures, and teaching programs. Although the results are relevant for 3D modelling in all humanities disciplines, the focus of our studies is on modelling past architectural and cultural landscape objects via interpretative 3D reconstruction methods.
Computer-supported movement guidance: investigating visual/visuotactile guidance and informing the design of vibrotactile body-worn interfaces
This dissertation explores the use of interactive systems to support movement guidance, with applications in various fields such as sports, dance, physiotherapy, and immersive sketching. The research focuses on visual, haptic, and visuohaptic approaches and aims to overcome the limitations of traditional guidance methods, such as dependence on an expert and high costs for the novice. The main contributions of the thesis are (1) an evaluation of the suitability of various types of displays and visualizations of the human body for posture guidance, (2) an investigation into the influence of different viewpoints/perspectives, the addition of haptic feedback, and various movement properties on movement guidance in virtual environments, (3) an investigation into the effectiveness of visuotactile guidance for hand movements in a virtual environment, (4) two in-depth studies of haptic perception on the body to inform the design of wearable and handheld interfaces that leverage tactile output technologies, and (5) an investigation into new interaction techniques for tactile guidance of arm movements. The results of this research advance the state of the art in the field, provide design and implementation insights, and pave the way for new investigations in computer-supported movement guidance.
Optical flow estimation via steered-L1 norm
Global variational methods for estimating optical flow are among the best-performing methods due to the subpixel accuracy and the "fill-in" effect they provide. The fill-in effect allows optical flow displacements to be estimated even in weakly textured and untextured areas of the image; the estimation of such displacements is driven by the smoothness term. The L1 norm provides a robust regularisation term for the optical flow energy function with very good edge-preserving performance. However, this norm suffers from several issues, among them its isotropic nature, which reduces the fill-in effect and eventually the accuracy of estimation in areas near motion boundaries. In this paper we propose an enhancement to the L1 norm that improves the fill-in effect of this smoothness term. To do this, we analyse the structure tensor matrix and use its eigenvectors to steer the smoothness term into components that are "orthogonal to" and "aligned with" image structures. This is done in a primal-dual formulation. Results show a reduced end-point error and improved accuracy compared to the conventional L1 norm.
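The structure-tensor analysis can be sketched as follows. This is a minimal illustration of extracting the "aligned with" and "orthogonal to" directions from the per-pixel tensor eigenvectors, not the paper's primal-dual scheme; the Gaussian pre-smoothing of the tensor that such methods typically use is omitted for brevity.

```python
import numpy as np

# Per-pixel structure tensor J = [[Ix^2, Ix*Iy], [Ix*Iy, Iy^2]].
# Its eigenvector for the largest eigenvalue points across local image
# structure (the gradient direction); the other eigenvector points along
# it. These two directions are what a steered smoothness term uses.

def structure_tensor_directions(img):
    Iy, Ix = np.gradient(img.astype(float))  # gradients along rows, cols
    Jxx, Jxy, Jyy = Ix * Ix, Ix * Iy, Iy * Iy
    H, W = img.shape
    aligned = np.zeros((H, W, 2))  # along image structures
    ortho = np.zeros((H, W, 2))    # across image structures
    for i in range(H):
        for j in range(W):
            J = np.array([[Jxx[i, j], Jxy[i, j]],
                          [Jxy[i, j], Jyy[i, j]]])
            w, V = np.linalg.eigh(J)   # eigenvalues in ascending order
            aligned[i, j] = V[:, 0]    # smallest eigenvalue: along edge
            ortho[i, j] = V[:, 1]      # largest eigenvalue: across edge
    return aligned, ortho

img = np.outer(np.ones(8), np.arange(8.0))  # ramp: purely horizontal gradient
a, o = structure_tensor_directions(img)
```

For the horizontal ramp, the "orthogonal" direction recovered at interior pixels is the x-axis (the gradient direction), and the "aligned" direction is the y-axis, as expected.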
Gaze and Peripheral Vision Analysis for Human-Environment Interaction: Applications in Automotive and Mixed-Reality Scenarios
This thesis studies eye-based user interfaces which integrate information about the user's perceptual focus of attention into multimodal systems to enrich interaction with the surrounding environment. We examine two new modalities: gaze input and output in the peripheral field of view. All modalities are considered across the whole spectrum of the mixed-reality continuum. We show the added value of these new forms of multimodal interaction in two important application domains: Automotive User Interfaces and Human-Robot Collaboration. We present experiments that analyze gaze under various conditions and help to design a 3D model for peripheral vision. Furthermore, this work presents several new algorithms for eye-based interaction, such as deictic reference in mobile scenarios, non-intrusive user identification, and exploiting the peripheral field of view for advanced multimodal presentations. These algorithms have been integrated into a number of software tools for eye-based interaction, which are used to implement 15 use cases for intelligent-environment applications. These use cases cover a wide spectrum of applications, from spatial interaction with a rapidly changing environment from within a moving vehicle to mixed-reality interaction between teams of humans and robots.