259 research outputs found

    Similarity, Retrieval, and Classification of Motion Capture Data

    Get PDF
    Three-dimensional motion capture data is a digital representation of the complex spatio-temporal structure of human motion. Mocap data is widely used for the synthesis of realistic computer-generated characters in data-driven computer animation and also plays an important role in motion analysis tasks such as activity recognition. For both efficiency and cost reasons, methods for the reuse of large collections of motion clips are gaining in importance in the field of computer animation. Here, an active field of research is the application of morphing and blending techniques for the creation of new, realistic motions from prerecorded motion clips. This requires the identification and extraction of logically related motions scattered within some data set. Such content-based retrieval of motion capture data, which is a central topic of this thesis, constitutes a difficult problem due to possible spatio-temporal deformations between logically related motions. Recent approaches to motion retrieval apply techniques such as dynamic time warping, which, however, are not applicable to large data sets due to their quadratic space and time complexity. In our approach, we introduce various kinds of relational features describing Boolean geometric relations between specified body points and show how these features induce a temporal segmentation of motion capture data streams. By incorporating spatio-temporal invariance into the relational features and induced segments, we are able to adopt indexing methods allowing for flexible and efficient content-based retrieval in large motion capture databases. As a further application of relational motion features, a new method for fully automatic motion classification and retrieval is presented. We introduce the concept of motion templates (MTs), by which the spatio-temporal characteristics of an entire motion class can be learned from training data, yielding an explicit, compact matrix representation. The resulting class MT has a direct, semantic interpretation, and it can be manually edited, mixed, combined with other MTs, extended, and restricted. Furthermore, a class MT exhibits the characteristic as well as the variational aspects of the underlying motion class at a semantically high level. Classification is then performed by comparing a set of precomputed class MTs with unknown motion data and labeling matching portions with the respective motion class label. Here, the crucial point is that the variational (hence uncharacteristic) motion aspects encoded in the class MT are automatically masked out in the comparison, which can be thought of as locally adaptive feature selection.
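    To make the idea of relational features concrete, the following is a minimal sketch of how a few Boolean geometric relations between body points could induce a temporal segmentation of a mocap stream; the joint names, the particular relations, and the 30 cm threshold are illustrative assumptions, not the feature set defined in the thesis.

```python
import numpy as np

# A tiny illustrative set of Boolean relational features. Joint names and
# thresholds are assumptions for this sketch, not the thesis' feature set.
def relational_features(frame):
    """frame: dict mapping joint name -> np.array of shape (3,), with y up."""
    f1 = frame["rhand"][1] > frame["head"][1]                      # right hand raised above the head
    f2 = np.linalg.norm(frame["lhand"] - frame["rhand"]) < 0.3     # hands closer than 30 cm
    return (f1, f2)

def segment(frames):
    """Merge consecutive frames with identical feature vectors into segments.
    Returns a list of (feature_vector, start_index, end_index) runs."""
    segments = []
    for i, frame in enumerate(frames):
        feats = relational_features(frame)
        if segments and segments[-1][0] == feats:
            segments[-1] = (feats, segments[-1][1], i)   # extend the current run
        else:
            segments.append((feats, i, i))               # start a new run
    return segments
```

    Because each segment collapses a run of frames with identical feature values, two logically related motions that differ only by local time warping yield the same or very similar segment sequences, which is what makes index-based retrieval without dynamic time warping feasible.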

    Automated 3D model generation for urban environments [online]

    Get PDF
    In this thesis, we present a fast approach to automated generation of textured 3D city models with both high detail at ground level and complete coverage for a bird's-eye view. A ground-based facade model is acquired by driving a vehicle equipped with two 2D laser scanners and a digital camera under normal traffic conditions on public roads. One scanner is mounted horizontally and is used to determine the approximate component of relative motion along the movement of the acquisition vehicle via scan matching; the obtained relative motion estimates are concatenated to form an initial path. Assuming that features such as buildings are visible from both ground-based and airborne views, this initial path is globally corrected by Monte Carlo Localization techniques using an aerial photograph or a Digital Surface Model as a global map. The second scanner is mounted vertically and is used to capture the 3D shape of the building facades. Applying a series of automated processing steps, a texture-mapped 3D facade model is reconstructed from the vertical laser scans and the camera images. In order to obtain an airborne model containing the roof and terrain shape complementary to the facade model, a Digital Surface Model is created from airborne laser scans, then triangulated, and finally texture-mapped with aerial imagery. Finally, the facade model and the airborne model are fused into a single model usable for both walk-throughs and fly-throughs. The developed algorithms are evaluated on a large data set acquired in downtown Berkeley, and the results are shown and discussed.
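    As an illustration of the path-initialization step, the sketch below chains relative 2D motion estimates from scan matching into a dead-reckoned path; it assumes each estimate is given as (dx, dy, dtheta) in the frame of the previous pose, which is a simplification of the actual processing pipeline.

```python
import math

def concatenate_path(relative_motions, start=(0.0, 0.0, 0.0)):
    """Chain relative 2D pose estimates (dx, dy, dtheta), each expressed in the
    frame of the previous pose, into a global path. This dead-reckoned initial
    path is what would later be corrected globally (e.g. by Monte Carlo
    Localization against an aerial image or a Digital Surface Model)."""
    x, y, theta = start
    path = [start]
    for dx, dy, dtheta in relative_motions:
        x += dx * math.cos(theta) - dy * math.sin(theta)
        y += dx * math.sin(theta) + dy * math.cos(theta)
        theta = (theta + dtheta + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
        path.append((x, y, theta))
    return path
```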

    Enabling Automated, Reliable and Efficient Aerodynamic Shape Optimization With Output-Based Adapted Meshes

    Full text link
    Simulation-based aerodynamic shape optimization has been greatly pushed forward during the past several decades, largely due to developments in computational fluid dynamics (CFD), geometry parameterization methods, mesh deformation techniques, sensitivity computation, and numerical optimization algorithms. Effective integration of these components has made aerodynamic shape optimization a highly automated process, requiring less and less human intervention. Mesh generation, on the other hand, has become the main overhead in setting up the optimization problem. Obtaining a good computational mesh is essential for accurate output predictions in CFD simulations and therefore significantly affects the reliability of optimization results. However, this is in general a nontrivial task, heavily reliant on the user’s experience, and it becomes even harder with emerging high-fidelity requirements or in the design of novel configurations. Moreover, mesh quality and the associated numerical errors are typically only studied before and after the optimization, leaving the design search path exposed to numerical errors. This work tackles these issues by integrating an additional component, output-based mesh adaptation, into traditional aerodynamic shape optimization. First, we develop a more suitable error estimator for optimization problems by taking into account errors in both the objective and constraint outputs. The localized output errors are then used to drive mesh adaptation to achieve the desired accuracy on both the objective and constraint outputs. With the variable fidelity offered by the adaptive meshes, multi-fidelity optimization frameworks are developed to tightly couple mesh adaptation and shape optimization. The objective functional and its sensitivity are first evaluated on an initial coarse mesh, which is subsequently adapted as the shape optimization proceeds. The effort to set up the optimization is minimal since the initial mesh can be fairly coarse and easy to generate. Meanwhile, the proposed framework reduces computational cost by reducing the mesh size at the early stages of the optimization, when the design is far from optimal, and by avoiding exhaustive search on low-fidelity meshes when the outputs are inaccurate. To further improve the computational efficiency, we also introduce new methods to accelerate the error estimation and mesh adaptation using machine learning techniques. Surrogate models are developed to predict the localized output error and optimal mesh anisotropy to guide the adaptation. The proposed machine learning approaches demonstrate good performance in two-dimensional test problems, encouraging further study and development to incorporate them within aerodynamic optimization techniques. Although CFD has been extensively used in aircraft design and optimization, the design automation, reliability, and efficiency are largely limited by the mesh generation process and the fixed-mesh optimization paradigm. With emerging high-fidelity requirements and the continued development of unconventional configurations, CFD-based optimization has to be made more accurate and more efficient to achieve higher design reliability and lower computational cost. Furthermore, future aerodynamic optimization needs to avoid unnecessary overhead in mesh generation and optimization setup to further automate the design process.
The author expects the methods developed in this work to be key to enabling more automated, reliable, and efficient aerodynamic shape optimization, making CFD-based optimization a more powerful tool in aircraft design.
    PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/163034/1/cgderic_1.pd
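    As a rough illustration of how localized output errors for both the objective and the constraints might drive adaptation, the sketch below combines per-element error indicators and marks a fixed fraction of the worst elements for refinement; the weighting and the fixed-fraction marking strategy are assumptions made for the sketch, not the dissertation's estimator or adaptation mechanics.

```python
import numpy as np

def mark_for_adaptation(obj_error, constraint_errors, weights=None, frac=0.1):
    """Combine localized output-error indicators and pick elements to refine.

    obj_error         : (n_elem,) array, error indicator for the objective output
    constraint_errors : list of (n_elem,) arrays, one per constraint output
    weights           : optional relative importance of each constraint (assumed)
    frac              : fraction of elements marked for refinement (assumed)
    """
    combined = np.abs(obj_error).astype(float)
    weights = weights or [1.0] * len(constraint_errors)
    for w, err in zip(weights, constraint_errors):
        combined += w * np.abs(err)
    n_mark = max(1, int(frac * combined.size))
    return np.argsort(combined)[-n_mark:]   # indices of the elements with the largest combined error
```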

    Between the scales: water from different perspectives

    No full text
    Water is one of the most frequently studied fluids on earth. In this thesis, water was investigated at two resolutions using multi-scale computer simulation techniques. First, the atomistic and coarse-grained resolutions were studied separately. In the atomistic resolution, a water molecule is described, in accordance with its chemical structure, by three atoms, while in the coarse-grained case a molecule is modeled by a single sphere. Various coarse-grained models were developed using different coarse-graining techniques, mainly iterative Boltzmann inversion and iterative inverse Monte Carlo, which are structure-based approaches that aim to reproduce distributions of the underlying atomistic model, such as the pair distribution functions. In this context, the Versatile Object-oriented Toolkit for Coarse-Graining Applications (VOTCA) was developed to automate the application of these methods. It was studied to what extent the coarse-grained models can simultaneously reproduce several properties of the underlying atomistic model, such as thermodynamic properties like pressure and compressibility, or structural properties that were not used in the coarse-graining process, e.g. the tetrahedral packing behavior, which is responsible for many of water's special properties. Subsequently, the two resolutions were combined using the adaptive resolution scheme, which combines the advantage of atomistic detail in a small high-resolution region with the computational efficiency of the coarse-grained model, giving access to larger time and length scales. In this scheme, the introduced coarse-grained models were used to study the influence of the hydrogen-bond network on the hydration of small fullerenes. It was found that the interface structure depends more on the nature of the interaction between the solute and the water molecules than on the presence of the hydrogen-bond network.
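    Iterative Boltzmann inversion, one of the coarse-graining methods named above, updates a tabulated pair potential by the logarithmic mismatch between the current and target radial distribution functions. The sketch below shows this core update; the damping factor and the 300 K value of kT are illustrative assumptions.

```python
import numpy as np

def ibi_update(potential, g_current, g_target, kT=2.494, alpha=0.2, eps=1e-8):
    """One iterative Boltzmann inversion step for a tabulated pair potential.

    potential : (n,) current coarse-grained pair potential U_i(r)
    g_current : (n,) radial distribution function measured with U_i(r)
    g_target  : (n,) target (atomistic) radial distribution function
    kT        : thermal energy in the potential's units (2.494 kJ/mol ~ 300 K)
    alpha     : damping factor to stabilize the iteration (assumed value)
    """
    correction = kT * np.log((g_current + eps) / (g_target + eps))
    return potential + alpha * correction
```

    The update is repeated, re-running the coarse-grained simulation with the corrected potential each time, until the measured distribution matches the atomistic target.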

    Natural Parameterization

    Get PDF
    The objective of this project has been to develop an approach for imitating physical objects with an underlying stochastic variation. The key assumption is that a set of “natural parameters” can be extracted by a new subdivision algorithm so that they reflect what is called the object’s “geometric DNA”. A case study on one hundred wheat grain cross-sections (Triticum aestivum) showed that it was possible to extract thirty-six such parameters and to reuse them for Monte Carlo simulation of “new” stochastic phantoms that possess the same stochastic behavior as the “original” cross-sections.
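    As a sketch of the Monte Carlo step, one could fit a joint distribution to the 100 x 36 matrix of extracted parameters and sample new parameter vectors from it; the multivariate Gaussian model below is an assumption made for illustration, since the project's actual subdivision algorithm and parameter distribution are not specified here.

```python
import numpy as np

def fit_and_sample(params, n_new=10, seed=0):
    """Monte Carlo generation of new parameter vectors.

    params : (n_samples, n_params) matrix of 'natural parameters'
             (e.g. 100 cross-sections x 36 parameters). A multivariate
             Gaussian is an assumed model of their joint variation.
    Returns an (n_new, n_params) array of simulated parameter vectors,
    which would then be fed back through the shape model to build phantoms.
    """
    rng = np.random.default_rng(seed)
    mean = params.mean(axis=0)
    cov = np.cov(params, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_new)
```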

    Toward Robots with Peripersonal Space Representation for Adaptive Behaviors

    Get PDF
    The abilities to adapt and act autonomously in an unstructured and human-oriented environment are vital for the next generation of robots, which aim to cooperate safely with humans. While this adaptability is natural and feasible for humans, it is still very complex and challenging for robots. Observations and findings from psychology and neuroscience with respect to the development of the human sensorimotor system can inform the development of novel approaches to adaptive robotics. Among these is the formation of the representation of space closely surrounding the body, the Peripersonal Space (PPS), from multisensory sources like vision, hearing, touch and proprioception, which helps to facilitate human activities within their surroundings. Taking inspiration from the virtual safety margin formed by the PPS representation in humans, this thesis first constructs an equivalent model of the safety zone for each body part of the iCub humanoid robot. This PPS layer serves as a distributed collision predictor, which translates visually detected objects approaching a robot's body parts (e.g., arm, hand) into the probabilities of a collision between those objects and body parts. This leads to adaptive avoidance behaviors in the robot via an optimization-based reactive controller. Notably, this visual reactive control pipeline can also seamlessly incorporate tactile input to guarantee safety in both pre- and post-collision phases in physical Human-Robot Interaction (pHRI). Concurrently, the controller is also able to take into account multiple targets (of manipulation reaching tasks) generated by a multiple Cartesian point planner. All components, namely the PPS, the multi-target motion planner (for manipulation reaching tasks), the reaching-with-avoidance controller and the human-centred visual perception, are combined harmoniously to form a hybrid control framework designed to provide safety for robots' interactions in a cluttered environment shared with human partners. Later, motivated by the development of manipulation skills in infants, in which multisensory integration is thought to play an important role, a learning framework is proposed to allow a robot to learn the processes of forming sensory representations, namely visuomotor and visuotactile, from its own motor activities in the environment. Both multisensory integration models are constructed with Deep Neural Networks (DNNs) in such a way that their outputs are represented in motor space to facilitate the robot's subsequent actions.
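    A toy sketch of the idea behind the PPS layer: map the distance and approach speed of a visually detected object to a collision probability for a body part, and use that probability to scale a simple repulsive velocity term. The thresholds and the repulsive stand-in for the optimization-based reactive controller are illustrative assumptions, not the learned representation used on the iCub.

```python
import numpy as np

def collision_probability(distance, approach_speed, d_max=0.45, v_scale=0.5):
    """Map an object's distance to a body part (m) and its approach speed (m/s)
    to a collision probability in [0, 1]. The linear falloff within an assumed
    0.45 m margin and the speed scaling are illustrative choices."""
    p = max(0.0, 1.0 - distance / d_max)            # closer object -> higher probability
    p *= 1.0 + v_scale * max(0.0, approach_speed)   # faster approach -> higher probability
    return float(min(1.0, p))

def avoidance_velocity(body_part_pos, obstacle_pos, prob, gain=0.6):
    """Simple reactive stand-in: push the body part away from the obstacle with a
    magnitude proportional to the predicted collision probability."""
    away = np.asarray(body_part_pos, dtype=float) - np.asarray(obstacle_pos, dtype=float)
    direction = away / (np.linalg.norm(away) + 1e-9)
    return gain * prob * direction
```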

    Interlacing Self-Localization, Moving Object Tracking and Mapping for 3D Range Sensors

    Get PDF
    This work presents a solution for autonomous vehicles to detect arbitrary moving traffic participants and to precisely determine the motion of the vehicle. The solution is based on three-dimensional images captured with modern range sensors such as high-resolution laser scanners. As a result, objects are tracked and a detailed 3D model is built for each object and for the static environment. The performance is demonstrated in challenging urban environments that contain many different objects.

    Molecular Dynamics Simulation

    Get PDF
    Condensed matter systems, ranging from simple fluids and solids to complex multicomponent materials and even biological matter, are governed by well understood laws of physics, within the formal theoretical framework of quantum theory and statistical mechanics. On the relevant scales of length and time, the appropriate ‘first-principles’ description needs only the Schrödinger equation together with Gibbs averaging over the relevant statistical ensemble. However, this program cannot be carried out straightforwardly: dealing with electron correlations is still a challenge for the methods of quantum chemistry. Similarly, standard statistical mechanics makes precise explicit statements only on the properties of systems for which the many-body problem can be effectively reduced to one of independent particles or quasi-particles. [...]

    Blickpunktabhängige Computergraphik (Gaze-Contingent Computer Graphics)

    Get PDF
    Contemporary digital displays feature multi-million pixels at ever-increasing refresh rates. Reality, on the other hand, provides us with a view of the world that is continuous in space and time. The discrepancy between viewing the physical world and its sampled depiction on digital displays gives rise to perceptual quality degradations. By measuring or estimating where we look, gaze-contingent algorithms aim at exploiting the way we visually perceive to remedy visible artifacts. This dissertation presents a variety of novel gaze-contingent algorithms and respective perceptual studies. Chapters 4 and 5 present methods to boost the perceived visual quality of conventional video footage when viewed on commodity monitors or projectors. Chapter 6 describes a novel head-mounted display with real-time gaze tracking; the device enables a large variety of applications in the context of Virtual Reality and Augmented Reality. Building on the gaze-tracking VR headset, Chapter 7 describes a novel gaze-contingent rendering method in which shading quality is analyzed and adapted per pixel in real time on the basis of a perceptual model. This gaze-aware approach greatly reduces the computational effort for shading virtual worlds. The described methods and studies show that gaze-contingent algorithms can improve the quality of displayed images and videos or reduce the computational effort for image generation, while the display quality perceived by the user does not change.
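    As a toy illustration of gaze-contingent shading, the sketch below maps a pixel's angular distance from the measured gaze point to a shading-quality level; the 5 degree full-quality radius, the hyperbolic falloff, and the quality floor are assumptions made for illustration, not the perceptual model developed in the dissertation.

```python
import math

def shading_level(pixel, gaze, pixels_per_degree, full_quality_deg=5.0, min_level=0.125):
    """Toy gaze-contingent shading schedule: pixels within ~5 degrees of the gaze
    point are shaded at full quality; quality then falls off with eccentricity
    down to a floor. All thresholds are illustrative assumptions."""
    dx = pixel[0] - gaze[0]
    dy = pixel[1] - gaze[1]
    eccentricity_deg = math.hypot(dx, dy) / pixels_per_degree
    if eccentricity_deg <= full_quality_deg:
        return 1.0
    falloff = full_quality_deg / eccentricity_deg   # quality decays with angular distance
    return max(min_level, falloff)
```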

    Aria 1.5: user manual.

    Full text link
