
    Unwind: Interactive Fish Straightening

    The ScanAllFish project is a large-scale effort to scan all the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species, which are available on open-access platforms such as the Open Science Framework. To achieve the scanning rate required for a project of this magnitude, many specimens are grouped together into a single tube and scanned all at once. The resulting data contain many fish which are often bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool which extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes to remove the undesired torque and bending using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the marine biologist expert user. We have developed Unwind in collaboration with a team of marine biologists; our system has been deployed in their labs and is presently being used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication.
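    A minimal sketch of the skeleton-extraction idea described in the abstract: solve for a harmonic (Laplace) field on the fish voxel mask with Dirichlet values of 0 at a head seed and 1 at a tail seed, then average the voxels around each isovalue to obtain one joint of a piecewise-linear skeleton per level set. The array names, seed handling, and simple Jacobi solver below are illustrative assumptions, not the Unwind code base.

```python
import numpy as np

def harmonic_skeleton(mask, head_idx, tail_idx, n_joints=20, n_iters=2000):
    """mask: 3D bool array of the fish; head_idx/tail_idx: voxel index tuples."""
    u = np.zeros(mask.shape, dtype=np.float64)
    u[tail_idx] = 1.0
    fixed = np.zeros(mask.shape, dtype=bool)
    fixed[head_idx] = True
    fixed[tail_idx] = True

    # Jacobi iterations of the discrete Laplace equation inside the mask
    # (volume borders are assumed empty, so np.roll wrap-around is harmless).
    for _ in range(n_iters):
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) +
               np.roll(u, 1, 2) + np.roll(u, -1, 2)) / 6.0
        u = np.where(mask & ~fixed, avg, u)

    # Average each isosurface (here: a thin band of voxels around the isovalue)
    # to obtain one skeleton joint per level set.
    coords = np.argwhere(mask)
    vals = u[mask]
    joints = []
    for iso in np.linspace(0.05, 0.95, n_joints):
        band = np.abs(vals - iso) < 0.5 / n_joints
        if band.any():
            joints.append(coords[band].mean(axis=0))
    return np.array(joints)  # (n_joints, 3) piecewise-linear skeleton
```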

    A rigorous definition of axial lines: ridges on isovist fields

    We suggest that 'axial lines', defined by Hillier and Hanson (1984) as lines of uninterrupted movement within urban streetscapes or buildings, appear as ridges in isovist fields (Benedikt, 1979). These are formed from the maximum diametric lengths of the individual isovists, sometimes called viewsheds, that make up these fields (Batty and Rana, 2004). We present an image processing technique for the identification of lines from ridges, discuss current strengths and weaknesses of the method, and show how it can be implemented easily and effectively.
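    As a sketch of the kind of image-processing step described above, the snippet below thins the high-valued region of a precomputed isovist field (a 2D array of maximum diametric isovist lengths over free space) into one-pixel-wide ridge curves. The Otsu threshold and morphological skeletonization are illustrative choices, not necessarily the authors' exact method.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def ridge_lines(field):
    """field: 2D array of max diametric isovist length per cell, 0 in walls."""
    free = field > 0
    # Keep only cells whose isovist length is high relative to the rest of the
    # free space, then thin that region to one-pixel-wide ridge curves.
    high = field > threshold_otsu(field[free])
    ridges = skeletonize(high & free)
    return ridges  # boolean image; connected components approximate axial lines
```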

    Cross-domain self-supervised complete geometric representation learning for real-scanned point cloud based pathological gait analysis

    Accurate lower-limb pose estimation is a prerequisite of skeleton-based pathological gait analysis. To achieve this goal in free-living environments for long-term monitoring, a single depth sensor has been proposed in research. However, the depth map acquired from a single viewpoint encodes only partial geometric information of the lower limbs and exhibits large variations across different viewpoints. Existing off-the-shelf three-dimensional (3D) pose tracking algorithms and public datasets for depth-based human pose estimation are mainly targeted at activity recognition applications. They are relatively insensitive to skeleton estimation accuracy, especially at the foot segments. Furthermore, acquiring ground-truth skeleton data for detailed biomechanics analysis also requires considerable effort. To address these issues, we propose a novel cross-domain self-supervised complete geometric representation learning framework, with knowledge transfer from unlabelled synthetic point clouds of full lower-limb surfaces. The proposed method can significantly reduce the number of ground-truth skeletons (to only 1%) in the training phase, while ensuring accurate and precise pose estimation and capturing discriminative features across different pathological gait patterns compared to other methods.
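    A minimal PyTorch-style sketch of the two-stage strategy the abstract describes: self-supervised pretraining on unlabelled synthetic point clouds (here posed as completing the full lower-limb surface from a partial, single-view cloud), followed by pose fine-tuning on the small labelled fraction of real scans. The network shapes, the Chamfer completion loss, and the specific pretext task are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Tiny PointNet-like encoder: per-point MLP followed by max pooling."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))
    def forward(self, pts):                      # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values   # (B, feat_dim)

class CompletionHead(nn.Module):
    """Decodes a global feature into a coarse complete point cloud."""
    def __init__(self, feat_dim=256, n_out=512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_out * 3)
        self.n_out = n_out
    def forward(self, feat):
        return self.fc(feat).view(-1, self.n_out, 3)

class PoseHead(nn.Module):
    """Regresses 3D joint positions of the lower-limb skeleton."""
    def __init__(self, feat_dim=256, n_joints=12):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_joints * 3)
        self.n_joints = n_joints
    def forward(self, feat):
        return self.fc(feat).view(-1, self.n_joints, 3)

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a: (B, N, 3), b: (B, M, 3)."""
    d = torch.cdist(a, b)                        # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def pretrain_step(encoder, completion, partial, full, opt):
    """Self-supervised step on synthetic data (no skeleton labels needed)."""
    opt.zero_grad()
    loss = chamfer(completion(encoder(partial)), full)
    loss.backward(); opt.step()
    return loss.item()

def finetune_step(encoder, pose_head, partial, joints, opt):
    """Supervised step on the ~1% of real scans with ground-truth skeletons."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(pose_head(encoder(partial)), joints)
    loss.backward(); opt.step()
    return loss.item()
```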

    Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality

    3D human models play an important role in computer graphics applications from a wide range of domains, including education, entertainment, medical care simulation and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be able to be controlled by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements in contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software. In our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK. Second, color image, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with a skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its corresponding texture map. The whole modeling process takes only a few seconds, and the resulting human model looks like the real person. The geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people. This human control is commonly done through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system in which the participants can manipulate virtual objects, and in which these virtual objects can affect the participant, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
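    A sketch of the ShortStraw corner-finding idea that the contour segmentation above builds on: over a stroke resampled at roughly even spacing, a "straw" is the chord between points a fixed window apart, and corners appear where the straw length falls to a local minimum below a threshold derived from the median straw. The IStraw refinements and the corner-based contour segmentation described in the dissertation are not shown.

```python
import numpy as np

def shortstraw_corners(points, window=3, thresh_factor=0.95):
    """points: (N, 2) stroke resampled at roughly constant spacing."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    straws = np.full(n, np.inf)
    for i in range(window, n - window):
        straws[i] = np.linalg.norm(pts[i + window] - pts[i - window])
    thresh = np.median(straws[window:n - window]) * thresh_factor

    corners = []
    for i in range(window, n - window):
        # A corner is a local minimum of straw length below the threshold.
        if straws[i] < thresh and straws[i] == straws[i - window:i + window + 1].min():
            corners.append(i)
    return corners
```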

    Three-dimensional image technology in forensic anthropology: assessing the validity of biological profiles derived from CT-3D images of the skeleton

    This project explores the reliability of building a biological profile for an unknown individual based on three-dimensional (3D) images of the individual's skeleton. 3D imaging technology has been widely researched for medical and engineering applications, and it is increasingly being used as a tool for anthropological inquiry. While the question of whether a biological profile can be derived from 3D images of a skeleton with the same accuracy as achieved when using dry bones has been explored, larger sample sizes, a standardized scanning protocol and more interobserver error data are needed before 3D methods can become widely and confidently used in forensic anthropology. 3D images of Computed Tomography (CT) scans were obtained from 130 innominate bones from Boston University's skeletal collection (School of Medicine). For each bone, both the 3D images and the original bones were assessed using the Phenice and Suchey-Brooks methods. Statistical analysis was used to determine the agreement between 3D image assessment and traditional assessment. A pool of six individuals with varying experience in the field of forensic anthropology scored a subsample (n = 20) to explore interobserver error. While a high agreement was found for age and sex estimation for specimens scored by the author, the interobserver study shows that observers found it difficult to apply standard methods to 3D images. Contrary to expectation, higher levels of experience did not result in higher agreement between observers. Thus, a need for training in 3D visualization before applying anthropological methods to 3D bones is suggested. Future research should explore interobserver error using a larger sample size in order to test the hypothesis that training in 3D visualization will result in higher agreement between scores. The need for the development of a standard scanning protocol focusing on the optimization of 3D image resolution is highlighted. Applications of this research include the possibility of digitizing skeletal collections in order to expand their use, as well as deriving skeletal collections from living populations and creating population-specific standards. Further research into the development of a standard scanning and processing protocol is needed before 3D methods in forensic anthropology can be considered reliable tools for generating biological profiles.
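    The abstract does not name the agreement statistic used; as an illustration only, a weighted Cohen's kappa is one common way to quantify agreement between ordinal phase scores (e.g., Suchey-Brooks phases) assigned from dry bones versus from 3D CT surfaces. The scores below are made up.

```python
from sklearn.metrics import cohen_kappa_score

dry_bone_phases = [3, 4, 2, 5, 6, 3, 4, 4, 2, 5]   # hypothetical phase scores
ct_image_phases = [3, 4, 3, 5, 6, 3, 5, 4, 2, 4]   # same specimens, 3D images

# Quadratic weighting penalizes large disagreements more than adjacent phases.
kappa = cohen_kappa_score(dry_bone_phases, ct_image_phases, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```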

    Modelling human pose and shape based on a database of human 3D scans

    Generating realistic human shapes and motion is an important task both in the motion picture industry and in computer games. In feature films, high quality and believability are the most important characteristics. Additionally, when creating virtual doubles, the generated characters have to match the given real persons as closely as possible. In contrast, in computer games the level of realism does not need to be as high, but real-time performance is essential. It is desirable to meet all these requirements with a general model of human pose and shape. In addition, many markerless human tracking methods applied, e.g., in biomedicine or sports science can benefit greatly from the availability of such a model, because most methods require a 3D model of the tracked subject as input, which can be generated on the fly given a suitable shape and pose model. In this thesis, a comprehensive procedure is presented to generate different general models of human pose and shape. A database of 3D scans spanning the space of human pose and shape variations is introduced. Then, four different approaches for transforming the database into a general model of human pose and shape are presented, which improve the current state of the art. Experiments are performed to evaluate and compare the proposed models on real-world problems, i.e., characters are generated from semantic constraints, and the underlying shape and pose of humans is estimated from 3D scans, multi-view video, or uncalibrated monocular images.
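    The thesis presents four different models; as a minimal, generic sketch of one common building block for such shape spaces, the snippet below fits a PCA model to scans that have already been brought into vertex-wise correspondence and samples a new body from shape coefficients. Data registration and the pose (skeleton) component are omitted, and nothing here is specific to the thesis's actual methods.

```python
import numpy as np

def fit_shape_space(registered_scans, n_components=10):
    """registered_scans: (S, V, 3) array of S registered scans with V vertices."""
    S, V, _ = registered_scans.shape
    X = registered_scans.reshape(S, V * 3)
    mean = X.mean(axis=0)
    # PCA via SVD of the centered data matrix.
    U, sing, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]                      # (n_components, V*3)
    stdev = sing[:n_components] / np.sqrt(S - 1)   # per-component std deviation
    return mean, basis, stdev

def sample_shape(mean, basis, stdev, coeffs):
    """coeffs: shape coefficients in units of standard deviations."""
    flat = mean + (np.asarray(coeffs) * stdev) @ basis
    return flat.reshape(-1, 3)                     # (V, 3) vertex positions
```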

    Pushing the envelope for estimating poses and actions via full 3D reconstruction

    Estimating the poses and actions of human bodies and hands is an important task in the computer vision community due to its vast applications, including human-computer interaction, virtual and augmented reality, and medical image analysis. Challenges: There are many in-the-wild challenges in this task (see chapter 1). Among them, in this thesis we focused on two challenges which could be relieved by incorporating 3D geometry: (1) the inherent 2D-to-3D ambiguity driven by the non-linear 2D projection process when capturing 3D objects; (2) the lack of sufficient, high-quality annotated datasets due to the high dimensionality of the subjects' attribute space and the inherent difficulty of annotating 3D coordinate values. Contributions: We first tried to jointly tackle the 2D-to-3D ambiguity and the insufficient data issue by (1) explicitly reconstructing 2.5D and 3D samples and using them as new training data to train a pose estimator. Next, we tried to (2) encode 3D geometry in the training process of the action recognizer to reduce the 2D-to-3D ambiguity. In the appendix, we propose (3) a new synthetic hand pose dataset that can be used for more complete attribute changes and multi-modal experiments in the future. Experiments: Throughout the experiments, we found that: (1) 2.5D depth map reconstruction and data augmentation can improve the accuracy of depth-based hand pose estimation algorithms; (2) 3D mesh reconstruction can be used to generate new RGB data, and it improves the accuracy of RGB-based dense hand pose estimation algorithms; (3) 3D geometry from 3D poses and scene layouts can be successfully utilized to reduce the 2D-to-3D ambiguity in the action recognition problem.
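    A sketch of the kind of 3D-aware depth augmentation referred to in finding (1): back-project a depth map to a point cloud with the camera intrinsics, rotate it, and re-render a new depth map by z-buffering. The intrinsics, rotation and missing-data handling here are illustrative assumptions; the thesis's 2.5D reconstruction pipeline is more involved.

```python
import numpy as np

def augment_depth(depth, fx, fy, cx, cy, R):
    """depth: (H, W) depth map in metres, 0 = missing; R: 3x3 rotation matrix."""
    H, W = depth.shape
    v, u = np.indices((H, W))
    z = depth.ravel()
    valid = z > 0

    # Back-project valid pixels to camera-space 3D points, then rotate them.
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)[valid] @ R.T
    pts = pts[pts[:, 2] > 0]                       # keep points in front of camera

    # Re-project and z-buffer into a fresh depth map (nearest point wins).
    u2 = np.round(pts[:, 0] * fx / pts[:, 2] + cx).astype(int)
    v2 = np.round(pts[:, 1] * fy / pts[:, 2] + cy).astype(int)
    ok = (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    out = np.full((H, W), np.inf)
    np.minimum.at(out, (v2[ok], u2[ok]), pts[ok, 2])
    out[np.isinf(out)] = 0.0
    return out
```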

    Computer-assisted animation creation techniques for hair animation and shade, highlight, and shadow

    Degree system: new; report number: Kou 3062; type of degree: Doctor of Engineering; date conferred: 2010/2/25; Waseda University degree record number: Shin 532

    Three-dimensional body scanning: methods and applications for anthropometry

    In this thesis we describe the computer methods developed and the experiments performed to apply whole-body 3D scanner technology in support of anthropometry. The output of whole-body scanners is a cloud of points, usually transformed into a triangulated mesh by specific algorithms in order to support 3D visualization of the surface and the extraction of meaningful anthropometric landmarks and measurements. Digital anthropometry has already been used in various studies to assess important health-related parameters. Digital anthropometric analysis is usually performed using device-specific, closed software solutions provided by scanner manufacturers, and often requires careful acquisition, with strong constraints on subject pose. This may create problems in comparing data acquired in different places and performing large-scale multi-centric studies, as well as in applying advanced shape analysis tools to the captured models. The aim of our work is to overcome these problems by selecting and customizing geometric processing tools able to create an open and device-independent method for the analysis of body scanner data. We also developed and validated methods to automatically extract feature points, body segments and relevant measurements that can be used in anthropometric and metabolic research. In particular, we present three experiments. In the first, using specific digital anthropometry software, we evaluated the performance of the Breuckmann BodySCAN in anthropometric measurement. The subjects of the experiment were 12 young adults who underwent both manual and 3D digital anthropometry (25 measurements) while wearing close-fitting underwear. Duplicate manual measurements taken by one experienced anthropometrist showed correlations r = 0.975-0.999; their means were significantly different in four out of 25 measurements by Student's t test. Duplicate digital measurements taken by one experienced anthropometrist and two naïve anthropometrists showed individual correlation coefficients r ranging from 0.975 to 0.999, and means that were significantly different in one out of 25 measurements. Most measurements taken by the experienced anthropometrist in the manual and digital modes showed significant correlation (intraclass correlation coefficients ranging from 0.855 to 0.995, p < 0.0001). We conclude that the Breuckmann BodySCAN is a reliable and effective tool for digital anthropometry. In a second experiment, we compare easily detectable geometric features obtained from 3D scans of obese female (BMI > 30) subjects with the body composition (measured with a DXA device) of the same subjects, in order to investigate which shape descriptors correlate best with torso and body fat. The results show that some of the tested geometric parameters have a relevant correlation with body fat, while others do not. These results support the role of digital anthropometry in investigating health-related physical characteristics and encourage further studies analyzing the relationships between shape descriptors and body composition. Finally, we present a novel method to characterize 3D surfaces through the computation of a function called the Area Projection Transform, which measures the likelihood of points in 3D space being centers of radial symmetry at selected scales (radii). The transform can be used to robustly detect and characterize salient regions (approximately spherical and cylindrical parts) and is therefore suitable for applications such as anatomical feature detection. In particular, we show that it is possible to build graphs joining these points by following maximal values of the MAPT (Radial Symmetry Graphs) and that these graphs can be used to extract relevant shape properties or to establish point correspondences that are robust to holes, topological noise and articulated deformations. It is concluded that the applications of whole-body scanning technology to anthropometry are potentially countless, limited only by the ability of science to connect the biological phenomena with the appropriate mathematical/geometrical descriptions.
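    A minimal sketch of the Area Projection Transform idea described above: each surface sample votes, with its associated area, at the location reached by moving a distance r along its inward normal, and voxels that accumulate a large amount of area are likely centers of radial symmetry at that radius (e.g., limb axes). Mesh sampling, the multi-radius search, and the Radial Symmetry Graph construction are omitted; the voxel size and function names are arbitrary choices for illustration.

```python
import numpy as np

def area_projection_transform(points, normals, areas, radius, voxel=0.01):
    """points, normals: (N, 3) surface samples with inward unit normals;
    areas: (N,) surface area associated with each sample; radius: symmetry radius."""
    votes = points + radius * normals            # offsets along the inward normals
    origin = votes.min(axis=0)
    idx = np.floor((votes - origin) / voxel).astype(int)
    accum = np.zeros(idx.max(axis=0) + 1)
    np.add.at(accum, (idx[:, 0], idx[:, 1], idx[:, 2]), areas)
    # High values in `accum` mark likely centres of radial symmetry at `radius`.
    return accum, origin
```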