
    Advances in identifying osseous fractured areas and virtually reducing bone fractures

    The aim of this work is the development of computer-assisted techniques to help specialists in the pre-operative planning of bone fracture reduction. As a result, intervention time may be reduced and potential misinterpretations circumvented, with consequent benefits for the patient's treatment and recovery time. The computer-assisted planning of a bone fracture reduction may be divided into three main stages: identification of bone fragments from medical images, computation of the reduction and subsequent stabilization of the fracture, and evaluation of the obtained results. The identification stage may also include the generation of 3D models of bone fragments, with the purpose of obtaining useful models for the two subsequent stages. This thesis deals with the identification of bone fragments from CT scans, the generation of 3D models of bone fragments, and the computation of the fracture reduction, excluding the use of fixation devices. Thesis, Univ. Jaén, Departamento de Informática. Defended 19 September 201
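
    As an illustration of the identification stage described above, the following minimal sketch (not the method developed in the thesis) separates bone fragments in a CT volume by intensity thresholding followed by connected-component labelling; the threshold value and the synthetic input are assumptions chosen only for the example.

        import numpy as np
        from scipy import ndimage

        def label_bone_fragments(ct_volume, bone_threshold=300):
            """Return a label volume where each connected bone region gets an id."""
            bone_mask = ct_volume >= bone_threshold           # crude bone mask in Hounsfield units
            labels, num_fragments = ndimage.label(bone_mask)  # 3D connected components
            return labels, num_fragments

        # Usage on synthetic data: two separated "fragments" in a small volume.
        volume = np.zeros((64, 64, 64), dtype=np.int16)
        volume[10:20, 10:20, 10:20] = 1200
        volume[40:50, 40:50, 40:50] = 1200
        labels, n = label_bone_fragments(volume)
        print(n)  # -> 2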

    A Framework for the Semantics-aware Modelling of Objects

    The evolution of 3D visual content calls for innovative methods for modelling shapes based on their intended usage, function and role in a complex scenario. Although various attempts have been made in this direction, shape modelling still focuses mainly on geometry. However, 3D models have a structure, given by the arrangement of salient parts, and shape and structure are deeply related to semantics and functionality. Changing geometry without semantic clues may invalidate such functionalities or the meaning of objects or their parts. We approach the problem by considering semantics as the formalised knowledge related to a category of objects; the geometry can vary provided that the semantics is preserved. We represent the semantics and the variable geometry of a class of shapes through the parametric template: an annotated 3D model whose geometry can be deformed provided that some semantic constraints remain satisfied. In this work, we design and develop a framework for the semantics-aware modelling of shapes, offering the user a single application environment where the whole workflow of defining the parametric template and applying semantics-aware deformations can take place. In particular, the system provides tools for the selection and annotation of geometry based on formalised contextual knowledge; shape analysis methods to derive new knowledge implicitly encoded in the geometry, and possibly enrich the given semantics; a set of constraints that the user can apply to salient parts; and a deformation operation that takes the semantic constraints into account and provides an optimal solution. The framework is modular, so that new tools can be continuously added. While it produces some innovative results in specific areas, the goal of this work is the development of a comprehensive framework combining state-of-the-art techniques and new algorithms, enabling the user to conceptualise her/his knowledge and model geometric shapes. The original contributions concern the formalisation of the concept of annotation, with attached properties, and of the relations between significant parts of objects; a new technique for guaranteeing the persistence of annotations after significant changes in a shape's resolution; the exploitation of shape descriptors for the extraction of quantitative information and the assessment of shape variability within a class; and the extension of popular cage-based deformation techniques to include constraints on the allowed displacement of vertices. In this thesis, we report the design and development of the framework, as well as results in two application scenarios, namely product design and archaeological reconstruction.
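
    The final contribution listed above, cage-based deformation with constraints on vertex displacement, can be sketched as follows: vertices move by interpolating the displacements of cage nodes, and a cap on the per-vertex displacement stands in for a semantic constraint on salient parts. The inverse-distance weighting is an assumed stand-in for proper generalized barycentric coordinates such as mean value coordinates, and all names and values are illustrative.

        import numpy as np

        def cage_weights(vertices, cage):
            """Per-vertex weights w.r.t. cage nodes (assumed inverse-distance scheme)."""
            d = np.linalg.norm(vertices[:, None, :] - cage[None, :, :], axis=2)
            w = 1.0 / (d + 1e-9)
            return w / w.sum(axis=1, keepdims=True)

        def deform(vertices, cage, cage_deformed, max_disp=None):
            """Interpolate cage-node displacements onto the vertices; optionally cap
            the displacement to mimic a constraint on how far a part may move."""
            w = cage_weights(vertices, cage)
            disp = w @ (cage_deformed - cage)                 # interpolated displacement
            if max_disp is not None:                          # constraint on allowed displacement
                norm = np.linalg.norm(disp, axis=1, keepdims=True)
                disp = disp * np.minimum(1.0, max_disp / np.maximum(norm, 1e-9))
            return vertices + disp

        # Usage: lift the whole cage slightly, but allow vertices to move at most 0.1.
        cage = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
        verts = np.random.rand(100, 3)
        new_verts = deform(verts, cage, cage + np.array([0.0, 0.0, 0.2]), max_disp=0.1)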

    Computer graphics simulation of organic and inorganic optical and morphological appearance changes.

    Organic bodies are subject to internal biological, chemical and physical processes, as well as environmental interactions after death, which cause significant structural and optical changes. Simulating corpse decomposition and the environmental effects on its surface can help improve the realism of computer-generated scenes and provide the impression of a living, dynamic environment. The aim of this doctoral thesis is to simulate post mortem processes of the human body and their visual effects on its appearance. The proposed method is divided into three processes: surface weathering due to environmental activity, livor mortis, and natural mummification by desiccation. The decomposing body is modelled by a layered model consisting of a tetrahedral mesh representing the volume and a high-resolution triangle surface mesh representing the skin. A particle-based surface weathering approach is employed to add environmental effects; the particles transport substances that are deposited on the object's surface. A novel, biologically inspired blood-pooling simulation is used to recreate the physical processes of livor mortis and its visual effects on the corpse's appearance. For the mummification, a physically based approach is used to simulate the moisture diffusion process inside the object and the resulting deformations of the volume and skin. In order to simulate the colouration changes associated with livor mortis and mummification, a chemically based layered skin shader that considers time- and spatially-varying haemoglobin, oxygen and moisture contents is proposed. The suggested approach is able to model changes in the internal structure and surface appearance of the body that resemble the post mortem processes of livor mortis, natural mummification by desiccation, and surface weathering. The surface weathering approach is able to add blemishes, such as rust and moss, to an object's surface while avoiding inconsistencies in deposit sizes and discontinuities at texture seams. The livor mortis approach is able to model the pink colouration changes caused by blood pooling, pressure-induced blanching effects, fixation of hypostasis, and the purple discolouration due to oxygen loss in the blood. The mummification method is able to reproduce volume shrinkage caused by moisture loss, skin wrinkling and skin darkening that are comparable to those of real mummies.
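
    The moisture-diffusion process mentioned above can be illustrated with a single explicit diffusion step. The regular grid (instead of the tetrahedral volume mesh), the periodic boundaries implied by np.roll, and all constants are assumptions made only to keep the sketch self-contained and numerically stable.

        import numpy as np

        def diffuse_moisture(m, diffusivity=0.1, dt=0.1, dx=1.0):
            """One explicit Euler step of isotropic diffusion (6-point Laplacian)."""
            lap = (
                np.roll(m, 1, 0) + np.roll(m, -1, 0) +
                np.roll(m, 1, 1) + np.roll(m, -1, 1) +
                np.roll(m, 1, 2) + np.roll(m, -1, 2) - 6.0 * m
            ) / dx**2
            return m + dt * diffusivity * lap

        # Usage: a saturated block drying out from one face (desiccation front).
        moisture = np.ones((32, 32, 32))
        moisture[:, :, -1] = 0.0
        for _ in range(100):
            moisture = diffuse_moisture(moisture)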

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science are increasing constantly. Technical evolution allows ever smaller features and more complex structures to be captured in the data. To make this data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field. Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, neuroscience combines three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscience needs specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common basis for developing and using visualization techniques with our neuroscientific collaborators. With OpenWalnut, standard and novel visualization approaches are also available to neuroscientific researchers. Afterwards, I introduce a very specialized method to illustrate causal relations between brain areas, which previously could only be represented via abstract graph models. I conclude the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify, for the neuroscientific community, the advantages and disadvantages of the visualization techniques used; we exemplified these using clinically relevant scenarios. Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential for understanding structures and features in the data. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualizations -- improving the interface. Unfortunately, visual improvements based on computer graphics methods from the game industry are often viewed sceptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering". Its advantage, amongst others, is its seamless applicability to nearly every visualization technique. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. Those mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shading renderings of line and point data. With this technique, it is possible for the first time to emphasize both local details and global spatial relations in dense line and point data.
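
    As a generic example of the screen-space paradigm mentioned above (not the specific shading technique developed in this work), the following sketch applies unsharp masking of the depth buffer on the CPU: pixels that lie behind their blurred depth neighbourhood are darkened, which emphasises depth discontinuities in dense line or point renderings. The function name and parameter values are illustrative assumptions.

        import numpy as np
        from scipy import ndimage

        def depth_darkening(color, depth, sigma=4.0, strength=1.5):
            """color: (H, W, 3) floats in [0, 1]; depth: (H, W) normalized depth."""
            blurred = ndimage.gaussian_filter(depth, sigma)
            occlusion = np.clip(depth - blurred, 0.0, None)    # farther than neighbourhood
            shade = np.clip(1.0 - strength * occlusion, 0.0, 1.0)
            return color * shade[..., None]

        # Usage on synthetic buffers:
        h, w = 128, 128
        depth = np.random.rand(h, w)
        color = np.ones((h, w, 3))
        shaded = depth_darkening(color, depth)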

    Vertex classification for non-uniform geometry reduction.

    Complex models created from isosurface extraction or CAD and highly accurate 3D models produced by high-resolution scanners are useful, for example, for medical simulation, Virtual Reality and entertainment. Models often require some manual editing before they can be incorporated into a walkthrough, simulation, computer game or movie. The visualization challenges of a 3D editing tool may be regarded as similar to those of other applications that include an element of visualization, such as Virtual Reality. However, the rendering and interaction requirements of these applications vary according to their purpose. For rendering photo-realistic images in movies, render farms can run uninterrupted for weeks, whereas a 3D editing tool requires fast access to a model's fine data. In Virtual Reality, rendering acceleration techniques such as level of detail (LoD) can temporarily render parts of a scene with alternative, lower-complexity versions in order to maintain a frame rate tolerable to the user. These alternative versions can be dynamic increments of complexity or static models that were uniformly simplified across the model by minimizing some cost function. Scanners typically have a fixed sampling rate for the entire model being scanned, and therefore may generate large amounts of data in areas of little interest or that contribute little to the application at hand. It is therefore desirable to simplify such models non-uniformly. Features such as areas of very high curvature or borders can be detected automatically and simplified differently from other areas without any interaction or visualization. However, a problem arises when one wishes to manually select features of interest in the original model to preserve, and to create stand-alone, non-uniformly reduced versions of large models, for example for medical simulation. When inspecting and viewing such models, the memory requirements of LoD representations can be prohibitive and can prevent a model from being stored in main memory. Furthermore, although asynchronous rendering of a base simplified model ensures a frame rate tolerable to the user whilst detail is paged, no guarantees can be made that what the user is selecting is at the original resolution of the model or at an appropriate LoD, owing to disk lag or the complexity of a particular view selected by the user. This thesis presents an interactive method, in the context of a 3D editing application, for feature selection from any model that fits in main memory. We present a new compression/decompression technique for triangle normals and colours which does not require dedicated hardware, achieves 87.4% memory reduction with at most 1.3/2.5 degrees of error on triangle normals, and allows larger models to fit in main memory and be viewed interactively. To address scale and available hardware resources, we reference a hierarchy of volumes of different sizes. The distances of the volumes at each level of the hierarchy to the intersection point of the line of sight with the model are calculated and sorted. At startup, an appropriate level of the tree is automatically chosen by separating the time required for rendering from that required for sorting and constraining the latter according to the resources available. A clustered navigation skin and depth-buffer strategy allows for the interactive visualisation of models of any size, ensuring that triangles from the closest volumes are rendered over the navigation skin even when the clustered skin may be closer to the viewer than the original model. We show results with scanned models, CAD, textured models and an isosurface. This thesis also addresses numerical issues arising from the optimisation of cost functions in LoD algorithms and presents a semi-automatic solution for selecting the threshold on the condition number of the matrix to be inverted for optimal placement of the new vertex created by an edge collapse. We show that the units in which a model is expressed may inadvertently affect the condition of these matrices, hence affecting the evaluation of different LoD methods with different solvers. We use the same solver with an automatically calibrated threshold to evaluate different uniform geometry reduction techniques. We then present a framework for non-uniform reduction of regular scanned models that can be used in conjunction with a variety of LoD algorithms. The benefits of non-uniform reduction are presented in the context of an animation system. (Abstract shortened by UMI.)
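
    The condition-number issue raised above can be sketched for the standard quadric-based edge collapse: the 3x3 system for the optimal vertex position is only solved when its condition number lies below a threshold, otherwise a cheaper fallback position is chosen. The threshold value here is a placeholder; the thesis selects it semi-automatically.

        import numpy as np

        def collapse_position(Q, v1, v2, cond_threshold=1e7):
            """Q: 4x4 summed quadric of the edge; v1, v2: endpoint positions."""
            A = Q[:3, :3]
            b = -Q[:3, 3]
            if np.linalg.cond(A) < cond_threshold:
                return np.linalg.solve(A, b)      # well-conditioned: optimal position
            # Ill-conditioned system: instead of inverting a near-singular matrix,
            # fall back to the cheapest of the endpoints and the midpoint.
            candidates = [v1, v2, 0.5 * (v1 + v2)]
            costs = [np.append(c, 1.0) @ Q @ np.append(c, 1.0) for c in candidates]
            return candidates[int(np.argmin(costs))]

    Because rescaling a model changes the entries of these matrices, the same geometry can appear well or badly conditioned depending on its units, which is exactly why a calibrated threshold is needed.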

    Doctor of Philosophy

    Despite the progress that has been made since the inception of the finite element method, the field of biomechanics has generally relied on software tools that were not specifically designed to target this particular area of application. Software designed specifically for the field of computational biomechanics does not appear to exist. To overcome this limitation, FEBio was developed, an acronym for “Finite Elements for Biomechanics”, which provided an open-source framework for developing finite element software that is tailored to the specific needs of the biomechanics and biophysics communities. The proposed work added an extensible framework to FEBio that greatly facilitates the implementation of novel features and provides an ideal platform for exploring novel computational approaches. This framework supports plugins, which simplify the process of adding new features even further, since plugins can be developed independently of the main source code. Using this new framework, this work extended FEBio in two important areas of interest in biomechanics. First, as tetrahedral elements continue to be the preferred modeling primitive for representing complex geometries, several tetrahedral formulations were investigated in terms of their robustness and accuracy for solving problems in computational biomechanics. The focus was on the performance of quadratic tetrahedral formulations in large-deformation contact analyses, as this is an important area of application in biomechanics. Second, the application of prestrain to computational models has been recognized as an important component in simulations of biological tissues in order to accurately predict the mechanical response. As this remains challenging to do in existing software packages, a general computational framework for applying prestrain was incorporated into the FEBio software. The work demonstrated via several examples how plugins greatly simplify the development of novel features. In addition, it showed that the quadratic tetrahedral formulations studied in this work are viable alternatives for contact analyses. Finally, it demonstrated the newly developed prestrain plugin and showed how it can be used in various prestrain applications.
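
    One common way to formalise the prestrain idea mentioned above is a multiplicative composition of deformation gradients; the notation below is assumed for illustration and is not taken from the FEBio documentation.

        % Assumed multiplicative prestrain formulation (illustrative notation):
        % the constitutive model is evaluated with the elastic deformation
        % gradient F_e, obtained by composing the applied deformation F with a
        % prescribed prestrain gradient F_0.
        \[
          \mathbf{F}_e = \mathbf{F}\,\mathbf{F}_0, \qquad
          \boldsymbol{\sigma} = \frac{1}{\det \mathbf{F}_e}
          \frac{\partial \Psi}{\partial \mathbf{F}_e}\,\mathbf{F}_e^{\mathsf{T}},
        \]
        % so that even for F = I (no applied load) the tissue carries a
        % residual stress determined by F_0.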

    Ray tracing techniques for computer games and isosurface visualization

    Ray tracing is a powerful image synthesis technique that has been used for high-quality offline rendering for decades. In recent years, this technique has become more important for real-time applications, but it still plays only a minor role in many areas. Some of the reasons are that ray tracing is compute-intensive and has to rely on preprocessed data structures to achieve fast performance. This dissertation investigates methods to broaden the applicability of ray tracing and is divided into two parts. The first part explores the opportunities offered by ray-tracing-based game technology in the context of current and expected future performance levels. In this regard, novel methods are developed to efficiently support certain kinds of dynamic scenes while avoiding the burden of fully recomputing the required data structures. Furthermore, today's ray tracing performance levels are below what is needed for 3D games. Therefore, the multi-core CPU of the Playstation 3 is investigated, and an optimized ray tracing architecture is presented to take steps towards the required performance. In part two, the focus shifts to isosurface ray tracing. Isosurfaces are particularly important for understanding the distribution of values in volumetric data. Since the structure of volumetric data sets is diverse, optimized algorithms and data structures are developed for rectilinear as well as unstructured data sets, which allow for real-time rendering of isosurfaces including advanced shading and visualization effects. This also includes techniques for out-of-core and time-varying data sets.
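
    A minimal sketch of ray-isosurface intersection in the spirit of part two: the scalar field is sampled along the ray and a sign change of (value - isovalue) is refined by bisection. This stands in for the cell-accurate traversal and trilinear root finding developed in the dissertation; the callable field, step size and iteration count are assumptions.

        import numpy as np

        def intersect_isosurface(origin, direction, field, iso, t_max=10.0, step=0.05):
            direction = direction / np.linalg.norm(direction)
            prev_t, prev_v = 0.0, field(origin) - iso
            t = step
            while t <= t_max:
                v = field(origin + t * direction) - iso
                if prev_v * v < 0.0:                       # sign change: surface crossed
                    lo, hi = prev_t, t
                    for _ in range(20):                    # bisection refinement
                        mid = 0.5 * (lo + hi)
                        if (field(origin + mid * direction) - iso) * prev_v < 0.0:
                            hi = mid
                        else:
                            lo = mid
                    return origin + 0.5 * (lo + hi) * direction
                prev_t, prev_v = t, v
                t += step
            return None                                    # no hit along the ray

        # Usage: unit isosurface of a sphere-like field, hit from outside.
        sphere = lambda p: np.dot(p, p)
        hit = intersect_isosurface(np.array([0.3, 0.0, -3.0]),
                                   np.array([0.0, 0.0, 1.0]), sphere, 1.0)
        print(hit)  # -> approximately [0.3, 0.0, -0.954]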

    Numerical investigation of bone adaptation to exercise and fracture in Thoroughbred racehorses

    Third metacarpal bone (MC3) fracture has a massive welfare and economic impact on horse racing, representing 45% of all fatal lower-limb fractures, which in themselves represent more than 80% of reasons for death or euthanasia on UK racecourses. Most of these fractures occur due to the accumulation of tissue fatigue as a result of repetitive loading rather than a specific traumatic event. Despite considerable research in the field, including the application of various diagnostic methods, it remains a challenge to accurately predict fracture risk and prevent this type of injury. The objective of this thesis is to develop computational tools to quantify bone adaptation and resistance to fracture, thereby providing the basis for a viable and robust solution. Recent advances in subject-specific finite element model generation, for example computed tomography imaging and efficient segmentation algorithms, have significantly improved the accuracy of finite element modelling. Numerical analysis techniques are widely used to enhance understanding of fracture in bones and provide better insight into relationships between load transfer and bone morphology. This thesis proposes a finite-element-based framework allowing for integrated simulation of bone remodelling under specific loading conditions, followed by the evaluation of its fracture resistance. Accurate representations of bone geometry and heterogeneous material properties are obtained from calibrated computed tomography scans. The material mapping between CT-scan data and discretised geometries for the finite element method is carried out using Moving Least Squares approximation and L2-projection. This is then used for numerical investigation and assessment of density gradients at the common site of fracture. Bone is able to adapt its density to changes in external conditions. This property is one of the most important mechanisms for the development of resistance to fracture. Therefore, a finite element approach for simulating adaptive bone changes (also called bone remodelling) is proposed. The implemented method uses a phenomenological model of the macroscopic behaviour of bone based on the thermodynamics of open systems. Numerical results showed that the proposed technique has the potential to accurately simulate the long-term bone response to specified training conditions and also to improve possible treatment options for bone implants. Assessment of fracture risk was conducted with crack-propagation analysis. The potential of two different approaches was investigated: the smeared phase-field approach and the discrete configurational mechanics approach. The popular phase-field method represents a crack by a smooth damage variable, leading to a phase-field approximation of the variational formulation for brittle fracture. A robust monolithic solution scheme with arc-length control was implemented. In the configurational mechanics approach, the crack driving forces and fracture energy release rate are expressed in terms of nodal quantities, enabling a fully implicit formulation for modelling the evolving crack front. The approach was extended for the first time to capture the influence of a heterogeneous density distribution. The outcomes of this study showed that discrete and smeared crack approximations are capable of predicting crack paths in three-dimensional heterogeneous bodies with comparable results. However, due to the necessity of using significantly finer meshes, the phase-field approach was found to be less numerically efficient. Finally, the current state of the framework's development was assessed using numerical simulations of bone adaptation and subsequent fracture propagation, including analysis of an equine metacarpal bone. Numerical convergence was demonstrated for all examples, and the use of singularity elements proved to further improve the rate of convergence. It was shown that bone adaptation history and bone density distribution influence both fracture resistance and the resulting crack path. The promising results of this study offer a novel framework to simulate changes in bone structure in response to exercise and to quantify the likelihood of fracture.
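
    Two ingredients mentioned above, CT-based material mapping and strain-energy-driven remodelling, can be sketched as follows; the calibration constants, the power law and the remodelling rate are placeholder values, not the ones identified in the thesis.

        import numpy as np

        def hu_to_density(hu, a=0.0, b=0.0007):
            """Hounsfield units -> apparent density [g/cm^3] (assumed linear calibration)."""
            return np.maximum(a + b * hu, 1e-3)

        def density_to_modulus(rho, c=6850.0, p=1.49):
            """Density -> Young's modulus [MPa] via an assumed power law E = c * rho**p."""
            return c * rho**p

        def remodel_step(rho, psi, dt=1.0, rate=1.0, psi_ref=0.01, rho0=1.0, m=2.0):
            """One explicit step of d(rho)/dt = rate * ((rho/rho0)**(-m) * psi - psi_ref),
            where psi is the strain energy density acting as the mechanical stimulus."""
            return rho + dt * rate * ((rho / rho0) ** (-m) * psi - psi_ref)

    Density increases where the mechanical stimulus exceeds the reference value and decreases otherwise; iterating such an update per element yields an adapted density field that can then feed the fracture analysis.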