
    Motion in place: a case study of archaeological reconstruction using motion capture

    Human movement constitutes a fundamental part of the archaeological process, and of any interpretation of a site's usage; yet there has to date been little or no consideration of how movement observed (in contemporary situations) and inferred (in archaeological reconstruction) can be documented. This paper reports on the Motion in Place Platform project, which seeks to use motion capture hardware and data to test human responses to Virtual Reality (VR) environments and their real-world equivalents, using round houses of the Southern British Iron Age, which have been both modelled in 3D and reconstructed in the present day, as a case study. This allows us to frame questions in new ways about the assumptions which are implicitly hardwired into VR presentations of archaeology and cultural heritage. In the future, this will lead to new insights into how VR models can be constructed, used and transmitted.

    Reconfiguring experimental archaeology using 3D reconstruction

    Experimental archaeology has long yielded valuable insights into the tools and techniques that featured in past peoples' relationship with the material world around them. We can determine, for example, how many trees would need to be felled to construct a large round-house of the southern British Iron Age (over one hundred), infer the exact angle needed to strike a flint core in order to knap an arrowhead in the manner of a Neolithic hunter-gatherer, or recreate the precise environmental conditions needed to store grain in underground silos over the winter months with only the technologies and materials available to Romano-British villagers (see Coles 1973; Reynolds 1993). However, experimental archaeology has hitherto confined itself to rather rigid, empirical and quantitative questions such as those posed in these examples. This is quite understandable, and in line with good scientific practice, which stipulates that any 'experiment' must be based on replicable data and be reproducible. Despite their potential in this area, however, digital reconstruction technologies have yet to play a significant role in experimental archaeology. Whilst many excellent examples of digital 3D reconstruction of heritage sites exist (for example the Digital Roman Forum project: http://dlib.etc.ucla.edu/projects/Forum), most, if not all, of these are characterised by a drive to establish a photorealistic re-creation of physical features. This paper discusses possibilities that lie beyond straightforward positivist re-creation of heritage sites, in the experimental reconstruction of intangible heritage. Between 2010 and 2012, the authors led the Motion in Place Platform project (MiPP: http://www.motioninplace.org/), funded by a capital grant under the AHRC's DEDEFI scheme, which developed motion capture and analysis tools for exploring how people move through spaces. In the course of MiPP, a series of experiments was conducted using motion capture hardware and software at the Silchester Roman town archaeological excavation in Hampshire, and at the Butser Ancient Farm facility, where Romano-British and Iron Age dwellings have been constructed according to the best experimental practice. As well as reconstructing such Roman and early British dwellings in 3D, the authors were able to use motion capture to reconstruct the kinds of activities that, according to the material evidence, are likely to have been carried out by the occupants who used them. Bespoke motion capture suits developed for the project were employed, and the traces captured and rendered with a combination of Autodesk and Unity3D software. This sheds new light on how the reconstructed spaces - and, by inference, their ancient counterparts - were most likely to have been used. In particular, the exercises allowed the evaluation and visualisation of changes in behaviour which occur as a result of familiarity with an environment and the acquisition of expertise over time, and an assessment of how interaction between different actors affects how everyday tasks are carried out.

    Acquisition and distribution of synergistic reactive control skills

    Learning from demonstration is an efficient way to attain a new skill. In the context of autonomous robots, using a demonstration to teach a robot accelerates the robot learning process significantly. It helps to identify feasible solutions as starting points for future exploration, or to avoid actions that lead to failure. But the acquisition of pertinent observations is predicated on first segmenting the data into meaningful sequences. These segments form the basis for learning models capable of recognising future actions and reconstructing the motion to control a robot. Furthermore, learning algorithms for generative models are generally not tuned to produce stable trajectories and suffer from parameter redundancy for high-degree-of-freedom robots. This thesis addresses these issues by firstly investigating algorithms, based on dynamic programming and mixture models, for segmentation sensitivity and recognition accuracy on human motion capture data sets of repetitive and categorical motion classes. A stability analysis of the non-linear dynamical systems derived from the resultant mixture model representations aims to ensure that any trajectories converge to the intended target motion as observed in the demonstrations. Finally, these concepts are extended to humanoid robots by deploying a factor analyser for each mixture model component and coordinating the structure into a low-dimensional representation of the demonstrated trajectories. This representation can be constructed once a correspondence map between the demonstrator and the robot has been learned for joint-space actions. Applying these algorithms for demonstrating movement skills to robots is a further step towards autonomous incremental robot learning.
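
    The segmentation step this abstract describes can be illustrated with a minimal sketch: frames of a motion capture sequence are clustered with a Gaussian mixture model, and segment boundaries are placed wherever the most likely component changes. This is a deliberate simplification of the thesis's dynamic-programming and mixture-model approach; the feature layout, component count and function names below are illustrative assumptions, not the thesis implementation.

        # Minimal sketch (assumed, not the thesis code): GMM-based segmentation
        # of motion capture frames into meaningful sequences.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def segment_motion(frames, n_components=3):
            """frames: (T, D) array of per-frame pose features, e.g. joint angles."""
            gmm = GaussianMixture(n_components=n_components, covariance_type="full")
            labels = gmm.fit_predict(frames)                  # per-frame component assignment
            boundaries = np.flatnonzero(np.diff(labels)) + 1  # frames where the label switches
            segments = np.split(np.arange(len(frames)), boundaries)
            return [(s[0], s[-1]) for s in segments]          # (start, end) frame indices

        # Synthetic stand-in for mocap features: three distinct motion regimes.
        rng = np.random.default_rng(0)
        frames = np.vstack([rng.normal(c, 1.0, size=(100, 12)) for c in (0.0, 4.0, 8.0)])
        print(segment_motion(frames, n_components=3))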

    Shape basis interpretation for monocular deformable 3D reconstruction

    In this paper, we propose a novel interpretable shape model to encode object non-rigidity. We first use the initial frames of a monocular video to recover a rest shape, used later to compute a dissimilarity measure based on a distance matrix measurement. Spectral analysis is then applied to this matrix to obtain a reduced shape basis that, in contrast to existing approaches, can be physically interpreted. In turn, these pre-computed shape bases are used to linearly span the deformation of a wide variety of objects. We introduce the low-rank basis into a sequential approach to recover both camera motion and non-rigid shape from the monocular video, by simply optimising the weights of the linear combination using bundle adjustment. Since the number of parameters to optimise per frame is relatively small, especially when physical priors are considered, our approach is fast and can potentially run in real time. Validation is done on a wide variety of real-world objects undergoing both inextensible and extensible deformations. Our approach achieves remarkable robustness to artifacts such as noisy and missing measurements, and shows improved performance over competing methods.
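
    The shape-basis construction described in this abstract can be sketched roughly as follows: a rest shape recovered from the initial frames yields a pairwise distance matrix, spectral analysis of that matrix gives a small set of dominant modes, and per-frame weights on those modes linearly span the deformation (in the paper, the weights are optimised within bundle adjustment). The dissimilarity measure, normalisation and names below are assumptions for illustration, not the authors' exact formulation.

        # Hedged sketch: a spectral shape basis from a rest-shape distance matrix.
        import numpy as np

        def shape_basis(rest_shape, rank=4):
            """rest_shape: (N, 3) points; returns an (N, rank) basis of dominant modes."""
            diff = rest_shape[:, None, :] - rest_shape[None, :, :]
            dist = np.linalg.norm(diff, axis=-1)     # N x N symmetric distance matrix
            eigvals, eigvecs = np.linalg.eigh(dist)  # spectral analysis of the matrix
            order = np.argsort(-np.abs(eigvals))     # keep the most dominant modes
            return eigvecs[:, order[:rank]]

        def deform(rest_shape, basis, weights):
            """weights: (rank, 3) per-frame coefficients; free parameters here,
            optimised per frame (e.g. in bundle adjustment) in the paper."""
            return rest_shape + basis @ weights      # linear span of the deformation

        rng = np.random.default_rng(0)
        rest = rng.random((50, 3))
        basis = shape_basis(rest, rank=4)
        frame_shape = deform(rest, basis, 0.01 * rng.standard_normal((4, 3)))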