
    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn attention not only from computer science but also from a variety of scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., of failing to mark true edges), accuracy, and a consistent response to a single edge. Moreover, it should be pointed out that most of the work in feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where needs constantly change, we must change the way we think accordingly. In a digital world where the use of images, for a variety of purposes, continues to increase, researchers who are serious about addressing the aforementioned limitations must think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a two-dimensional (2D) signal into its bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is accordingly dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria and boundary adjustment made the realization of this multi-feature detector possible. The further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
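
    To make the sifting idea concrete, the following is a minimal, illustrative sketch of one BEMD sifting pass in Python (NumPy/SciPy assumed). The morphological extrema detection, linear envelope interpolation, window size, and SD-style stopping threshold are generic stand-ins, not the dissertation's flexible envelope estimation, stopping criteria, or boundary adjustment.

```python
# A minimal sketch of one BEMD sifting pass; parameters are illustrative.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import griddata

def sift_bimf(image, max_iter=10, sd_thresh=0.3, win=3):
    """Extract one bidimensional intrinsic mode function (BIMF) by sifting."""
    h = image.astype(float)
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    for _ in range(max_iter):
        # Local extrema via morphological max/min filters.
        maxima = (h == maximum_filter(h, size=win))
        minima = (h == minimum_filter(h, size=win))
        if maxima.sum() < 4 or minima.sum() < 4:
            break  # too few extrema to build envelopes
        # Upper/lower envelopes by scattered-data interpolation.
        pts = lambda m: np.column_stack([yy[m], xx[m]])
        upper = griddata(pts(maxima), h[maxima], (yy, xx), method='linear')
        lower = griddata(pts(minima), h[minima], (yy, xx), method='linear')
        mean_env = np.nan_to_num((upper + lower) / 2.0)
        h_new = h - mean_env
        # Standard-deviation-style stopping criterion on successive iterates.
        sd = np.sum((h - h_new) ** 2) / (np.sum(h ** 2) + 1e-12)
        h = h_new
        if sd < sd_thresh:
            break
    return h  # the BIMF; the residue is image - h

# Repeating the sift on the residue (image - bimf) yields further BIMFs.
```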

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. These data are corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that are not handled well by existing noise reduction and smoothing techniques. This research is focused on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors – including laser range scanners, video cameras, and pose estimation hardware – on a mobile platform for the quick acquisition of 3D models of real-world environments. The data acquired by such systems are extremely noisy, often with significant details being of the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry while removing the effects of noise on the overall model. The developed algorithm can be useful for a variety of digitized 3D models, not just those produced by mobile scanning systems. The challenges faced in this study were the need for automatic processing in the enhancement algorithm, and the need to fill a hole in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool. The new technologies featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The research as a whole has been applied towards the detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.
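
    As an illustration of the decompose-then-enhance idea, the hedged sketch below denoises a gridded range image: it splits the signal into a coarse base and a fine residual, estimates where the residual's local energy rises above a robust noise floor, and keeps the fine scale only there. The function, parameters, and median-based noise estimate are illustrative assumptions, not the dissertation's actual 3D analysis tool.

```python
# Detail-aware denoising of a height field z sampled on a grid (sketch).
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def detail_preserving_denoise(z, sigma=2.0, win=7, k=1.5):
    """Smooth a noisy height field z while keeping high-detail regions."""
    base = gaussian_filter(z, sigma)          # coarse geometry
    detail = z - base                         # fine scale: detail + noise
    # Local energy of the fine scale; high energy marks genuine detail.
    local_var = uniform_filter(detail**2, win) - uniform_filter(detail, win)**2
    noise_var = np.median(local_var)          # robust noise-floor estimate
    weight = np.clip((local_var - noise_var) / (k * noise_var + 1e-12), 0, 1)
    # Keep detail where it stands above the noise floor, suppress elsewhere.
    return base + weight * detail
```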

    A study of information-theoretic metaheuristics applied to functional neuroimaging datasets

    This dissertation presents a new metaheuristic related to two-dimensional ensemble empirical mode decomposition (2DEEMD). It is based on Green’s functions and is called Green’s Function in Tension - Bidimensional Empirical Mode Decomposition (GiT-BEMD). It is employed for decomposing images and extracting their hidden information. A natural image (a face image) as well as images with artificial textures were used to test and validate the proposed approach. The images were selected to demonstrate the efficiency and performance of the GiT-BEMD algorithm in extracting textures on various spatial scales from the different images. In addition, a comparison of the performance of the new GiT-BEMD algorithm with a canonical BEEMD is discussed. Then, GiT-BEMD as well as the canonical bidimensional EEMD (BEEMD) are applied to an fMRI study of a contour integration task, exploring the potential of GiT-BEMD to extract such textures, so-called bidimensional intrinsic mode functions (BIMFs), from functional biomedical images. Because of the enormous computational load and the artifacts accompanying the textures extracted with a canonical BEEMD, GiT-BEMD was developed to cope with such challenges. The computational cost is decreased dramatically, and the quality of the extracted textures is enhanced considerably. Consequently, GiT-BEMD achieves a higher quality of the estimated BIMFs, as can be seen from a direct comparison of the results obtained with different variants of BEEMD and GiT-BEMD. Moreover, results generated by BEEMD, especially in the case of GiT-BEMD, distinctly show a superior precision in the spatial localization of activity blobs when compared with a canonical general linear model (GLM) analysis employing statistical parametric mapping (SPM). Furthermore, to identify the most informative textures, i.e. BIMFs, a support vector machine (SVM) as well as a random forest (RF) classifier are employed. The classification performance demonstrates the potential of the extracted BIMFs in supporting the decision making of the classifier. With GiT-BEMD, the classification performance improved significantly, which might also be a consequence of the clearer structure of these modes compared to the ones obtained with canonical BEEMD. Altogether, there is strong reason to believe that the newly proposed metaheuristic GiT-BEMD offers a highly competitive alternative to existing BEMD algorithms and represents a promising technique for blindly decomposing images and extracting textures thereof, which may be used for further analysis.
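
    A brief sketch of the classification stage described above, assuming per-BIMF feature vectors have already been extracted for each trial; the feature layout and hyperparameters are illustrative assumptions, while the SVM and random forest classifiers are those named in the abstract.

```python
# Ranking BIMF-derived feature sets by cross-validated accuracy (sketch).
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def score_bimf(bimf_features, labels):
    """bimf_features: (n_trials, n_voxels) from one BIMF; labels: (n_trials,)."""
    svm = SVC(kernel='linear')
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    return (cross_val_score(svm, bimf_features, labels, cv=5).mean(),
            cross_val_score(rf, bimf_features, labels, cv=5).mean())

# The BIMF whose features yield the highest cross-validated accuracy is
# taken as the most informative texture scale.
```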

    Estimation of Dose Distribution for Lu-177 Therapies in Nuclear Medicine

    In nuclear medicine, two frequent applications of 177Lu therapy exist: DOTATOC therapy for patients with a neuroendocrine tumor and PSMA therapy for prostate cancer. During the therapy a pharmaceutical is injected intravenously, which attaches to tumor cells due to its molecular composition. Since the pharmaceutical contains a radioactive 177Lu isotope, tumor cells are destroyed through irradiation. Afterwards the substance is excreted via the kidneys. Since the latter are very sensitive to high-energy radiation, it is necessary to compute exactly how much radioactivity can be administered to the patient without endangering healthy organs. This calculation is called dosimetry and is currently performed according to the state-of-the-art MIRD method. At the beginning of this work, an error assessment of the established method is presented, which determined an overall error of 25% in the renal dose value. The presented study improves and personalizes the MIRD method in several respects and reduces individual error estimates considerably. In order to estimate the amount of activity, a test dose is first injected into the patient. Subsequently, SPECT images are taken after 4 h, 24 h, 48 h and 72 h. From these images the activity at each voxel can be obtained at the specified time points, i.e., the physical decay and physiological metabolization of the pharmaceutical can be followed in time. To calculate the number of decays in each voxel from the four SPECT registrations, a time-activity curve must be integrated. In this work, a statistical method was developed to estimate the time-dependent activity and then integrate the time-activity curve voxel by voxel. This procedure results in a decay map for each of the 26 available patients (13 PSMA/13 DOTATOC). After the decay map has been estimated, a full Monte Carlo simulation is carried out on the basis of these decay maps to determine the corresponding dose distribution. The simulation results are taken as the reference (“Gold Standard”) and compared with methods for an approximate but faster estimation of the dose distribution. Recently, convolution with Dose Voxel Kernels (DVK) has been established as a standard dose estimation method (Soft Tissue Scaling, STS). Thereby a radioactive lutetium isotope is placed in a cube consisting of soft tissue, and the radiation interactions are simulated for 10^10 decays. The resulting Dose Voxel Kernel is then convolved with the estimated decay map. The result is a dose distribution which, however, does not take any tissue density differences into account. To take tissue inhomogeneities into account, three methods are described in the literature, namely Center Scaling (CS), Density Scaling (DS), and Percentage Scaling (PS). However, their application did not improve the results of the STS method, as is demonstrated in this study. Consequently, a neural network was finally trained to estimate DVKs adapted to the respective individual tissue density distribution. During the convolution process, it uses for each voxel an adapted DVK deduced from the corresponding tissue density kernel. This method outperformed the MIRD method, which resulted in an uncertainty of the renal dose between -42.37% and 10.22%, reducing the uncertainty to a range between -26.00% and 7.93%. These dose deviations were calculated for all 26 patients and relate to the mean renal dose compared with the respective result of the Monte Carlo simulation.
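
    The voxel-wise integration and STS convolution steps might look roughly as follows; this sketch assumes a mono-exponential time-activity model fitted to the four SPECT time points, which is a common simplification rather than the statistical method actually developed in this thesis.

```python
# Voxel-wise time-activity integration and DVK convolution (illustrative).
import numpy as np
from scipy.signal import fftconvolve

t = np.array([4.0, 24.0, 48.0, 72.0])  # imaging time points [h]

def decays_per_voxel(activity):
    """activity: (4, nx, ny, nz) SPECT activity [Bq] at the four time points.

    Fit A(t) = A0 * exp(-lam * t) per voxel; the integral from 0 to
    infinity is A0 / lam, i.e. the total number of decays in that voxel.
    """
    logA = np.log(np.clip(activity, 1e-9, None))
    # Least-squares line fit of log-activity vs time, vectorized over voxels.
    tm, lm = t.mean(), logA.mean(axis=0)
    slope = ((t[:, None, None, None] - tm) * (logA - lm)).sum(axis=0) \
            / ((t - tm) ** 2).sum()
    lam = np.clip(-slope, 1e-6, None) / 3600.0   # effective decay const [1/s]
    A0 = np.exp(lm - slope * tm)                 # back-extrapolated A(0) [Bq]
    return A0 / lam                              # total decays per voxel

def dose_sts(decay_map, dvk):
    """Soft-tissue-scaling dose: convolve the decay map with a dose voxel
    kernel (dvk in Gy per decay) to obtain a dose distribution in Gy."""
    return fftconvolve(decay_map, dvk, mode='same')
```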
In order to improve the estimates of the dose distribution even further, a 3D-2D neural network was trained in the second part of the work. This network predicts the dose distribution of an entire patient. In combination with an Empirical Mode Decomposition, this method achieved deviations of only -12.21% to 2.13%. The mean deviation of the dose estimates is in the range of the statistical error of the Monte Carlo simulation. In the third part of the work, a neural network was used to automatically segment the kidneys, spleen and tumors. Compared to an established segmentation algorithm, the method developed in this work can also segment tumors, because it uses not only the CT image as input but also the SPECT image.

    Data-driven parameter and model order reduction for industrial optimisation problems with applications in naval engineering

    In this work we study data-driven reduced order models with a specific focus on reduction in parameter space to fight the curse of dimensionality, especially for functions with low intrinsic structure, in the context of digital twins. To this end we proposed two different methods to improve the accuracy of response surfaces built using Active Subspaces (AS): a kernel-based approach which maps the inputs onto a higher-dimensional space before applying AS, and a local approach in which a clustering induced by the presence of a global active subspace is exploited to construct localized regressors. We also used AS within a multi-fidelity nonlinear autoregressive scheme to reduce the approximation error of high-dimensional scalar functions using only high-fidelity data. This multi-fidelity approach has also been integrated within a non-intrusive Proper Orthogonal Decomposition (POD) based framework in which every modal coefficient is reconstructed with greater precision. Moving to optimization algorithms, we devised an extension of the classical genetic algorithm exploiting AS to accelerate convergence, especially for high-dimensional optimization problems. We applied different combinations of these methods to a diverse range of engineering problems such as the structural optimization of cruise ships, the shape optimization of a combatant hull and a NACA airfoil profile, and the prediction of hydroacoustic noise. Specific attention has been devoted to the naval engineering applications, and many of the methodological advances in this work have been inspired by them. This work has been conducted within the framework of the IRONTH project, an industrial Ph.D. grant financed by Fincantieri S.p.A.
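
    For readers unfamiliar with Active Subspaces, the core construction can be sketched in a few lines: the uncentered covariance of sampled gradients is eigendecomposed, and the leading eigenvectors span the active directions onto which the inputs are projected. Function and variable names here are illustrative, not the thesis's code.

```python
# Active subspace construction from sampled gradients (sketch).
import numpy as np

def active_subspace(grads, n_active=1):
    """grads: (n_samples, n_params) gradients of f; returns projection W1."""
    C = grads.T @ grads / grads.shape[0]      # uncentered gradient covariance
    eigval, eigvec = np.linalg.eigh(C)        # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]          # sort descending
    return eigvec[:, order[:n_active]]        # active directions W1

# Response-surface workflow: y = f(x) is then regressed against the reduced
# coordinate u = x @ W1 instead of the full parameter vector x.
```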

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary area between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible through rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while the understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.

    Validation of the ISSF criterion applied to adhesive joints using numerical methods

    Due to the limitations presented by conventional joining techniques, like bolted and welded joints, industry has turned its attention to adhesively-bonded joints. Lower weight and decreased stress concentrations are some of the advantages made possible by this technique. Over the years, diverse analytical and numerical approaches to the failure of these joints have been investigated. The work presented in this report aims to propose and validate a fracture-mechanics-based approach to joint failure, named the Intensity of Singular Stress Fields (ISSF). For this purpose, aluminium and composite single-lap joints bonded with a brittle adhesive were tested. Different overlap lengths (LO) were also considered in order to evaluate the influence of this parameter on the final results. The experimental data were processed and the average maximum loads sustained by the joints were collected. Then, a numerical method for joint strength prediction was proposed, consisting of a combination of experimental and numerical information. The numerical data were obtained through simulations resorting to the Finite Element Method (FEM) and a meshless technique, the Radial Point Interpolation Method (RPIM). The validation of the approach was achieved by analysing the polar stress components and comparing the experimental and numerical results. It was experimentally verified that increasing LO leads to an increase in the strength of the joints. The proposed technique was successfully applied to both aluminium and composite adherends even though they have different formulations. The results attained with the proposed method were promising given its simplicity compared with previously proposed methodologies. The method's applicability to meshless methods was also confirmed, since the RPIM presented results very similar to those of the FEM, despite presenting some oscillations.
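
    As a rough illustration of how an ISSF can be recovered from FEM or RPIM stress data, the singular field near an interface corner is commonly modelled as sigma(r) ~ H * r^(lambda - 1), so a log-log fit of stress against distance from the singular point yields the singularity order lambda and the intensity H. This textbook-style sketch is an assumption for illustration, not this thesis's procedure.

```python
# Fitting the singularity order and ISSF from near-corner stress data (sketch).
import numpy as np

def fit_issf(r, sigma):
    """r: distances from the singular point; sigma: stress at those points."""
    # log(sigma) = log(H) + (lambda - 1) * log(r)  ->  straight-line fit
    slope, intercept = np.polyfit(np.log(r), np.log(sigma), 1)
    lam = slope + 1.0            # singularity order
    H = np.exp(intercept)        # intensity of the singular stress field
    return lam, H

# Failure prediction would then compare H at the experimental failure load
# across overlap lengths and adherend materials.
```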

    Selected Papers from the 5th International Electronic Conference on Sensors and Applications

    This Special Issue comprises selected papers from the proceedings of the 5th International Electronic Conference on Sensors and Applications, held on 15–30 November 2018 on sciforum.net, an online platform for hosting scholarly e-conferences and discussion groups. In this 5th edition of the electronic conference, contributors were invited to provide papers and presentations from the field of sensors and applications at large, resulting in a wide variety of excellent submissions and topic areas. Papers which attracted the most interest on the web, or which provided a particularly innovative contribution, were selected for publication in this collection. These peer-reviewed papers are published with the aim of rapid and wide dissemination of research results, developments, and applications. We hope this conference series will grow rapidly in the future and become recognized as a new way and venue by which to (electronically) present new developments related to the field of sensors and their applications.

    Frictional Contact in Interactive Deformable Environments

    The use of simulations provides great advantages in terms of economy, realism, and adaptability to user requirements in many research and technological fields. For this reason simulations are currently exploited, for example, in the prototyping of machinery parts and in assembly-disassembly testing or training and, recently, simulations have also allowed the development of many useful and promising tools for the assistance and learning of surgical procedures. This is particularly true for laparoscopic intervention. Laparoscopy, in fact, represents the gold standard for many surgical procedures. The principal difference from standard surgery is the reduction in the surgeon's ability to perceive the surgical scenario, both from the visual and the tactile point of view. This represents a great limitation for surgeons, who undergo long training before being able to perform laparoscopic interventions with proficiency. This, on the other hand, makes laparoscopy an excellent candidate for the use of simulations in training. Some commercial training software is already available on the market, but it is usually based on rigid body models that completely lack physical realism. The introduction of deformable models may lead to a great increase in realism and accuracy and, in the case of a laparoscopy trainer, it may allow the user to learn not only basic motor skills but also higher-level capabilities and knowledge. Rigid bodies, in fact, represent a good approximation of reality only in some situations and in very restricted ranges of solicitations. In particular, when non-engineering materials are involved, as happens in surgical simulations, deformations cannot be neglected without completely losing the realism of the environment. The use of deformable models, however, is limited by the high computational cost involved in computing the physics underlying the deformations, and by the reduction in precomputable data, in particular for collision detection between bodies.
This represents a very limiting factor in interactive environments where, to allow the user to interactively control the virtual bodies, the simulation should be performed in real time. In this thesis we address the simulation of interactive environments populated with deformable models that interact through frictional contacts. This includes the analysis and development of different techniques implementing the various parts of the simulation: mainly the methods for the simulation of deformable models and the collision detection and collision solution techniques, but also the modelling and integration of suitable friction models in the simulation. In particular, we evaluated the principal methods that represent the state of the art in soft tissue modelling. Our analysis is based on the physical background of each method, and thus on its realism in terms of the deformations the method can mimic and on its ease of use (i.e. method understanding, calibration and ability to adapt to different scenarios), but we also compared the computational complexity of the different models, as it represents an extremely important factor in the choice and use of models in simulations. The comparison of the features of the analyzed methods motivated the development of an innovative method to wrap different soft tissue simulation methodologies in a common representation framework. This framework has the advantage of providing a unified interface for all the deformable models, and thus the ability to switch between deformable models while keeping all the other data structures and methods of the simulation unchanged. The use of this unique interface allows one single method to perform the collision detection phase for all the analyzed deformable models; this greatly helped during the identification of the requirements and features of this software module. The collision detection phase, when applied to rigid bodies, usually takes advantage of precomputation to subdivide body shapes into convex elements or to construct partitions of the space in which the bodies are defined in order to speed up the computation. When handling deformable models this is not possible because of the continuous changes in body shape. The collision detection method used in this work takes this problem into account and regularly adapts its data structures to the body configuration. After collisions have been detected and contact points have been identified on the colliding bodies, it is necessary to solve the collisions in a physics-based way. To this end we have to ensure that objects never interpenetrate during the simulation and that, when solving collisions, all the physical phenomena involved in the contact of real bodies are taken into account: these include the elastic response of the bodies during contact and the frictional force exerted between each pair of colliding bodies. The innovative collision solution method that we describe in this thesis ensures the realism of the simulation and seamless interaction with the common framework used to integrate the deformable models. One important feature of biological tissues is their anisotropic behavior, which usually stems from their fibrous structure. In this thesis we propose a new method to introduce anisotropy into mass-spring models. The method has the advantage of preserving the speed and ease of implementation of the classical mass-spring model, while effectively differentiating the model's response to solicitations along the chosen directions.
The described techniques have been integrated in two applications that allow the physical simulation of environments populated with deformable models. The first application implements all the described methods for simulating deformable models; it performs precise collision detection and solution, with the possibility of choosing the most suitable friction model for the simulation, and demonstrates the effectiveness of the proposed framework. The main limitation of this simulator, i.e. its high computation time, is tackled and solved in a second application that exploits the intrinsic parallelism of physical simulations to optimize the implementation and to exploit the computational power of parallel architectures. To obtain the performance required for an interactive environment, this simulation is based on a simplified collision detection algorithm, but it features all the other techniques described in this thesis. The parallel implementation exploits the graphics card's processor, a highly parallel architecture, and updates the scene every millisecond. This allows the rendering of smooth haptic feedback to the user and ensures the realism of the underlying physics simulation. The implemented applications prove the feasibility of simulating complex interactions between deformable models with physical realism. In addition, the parallel implementation of the simulator represents a promising starting point for the development of interactive simulations that can be used in different fields of research, such as surgeon training or rapid prototyping.
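
    The anisotropic mass-spring idea can be sketched as follows: each spring's stiffness is modulated by its alignment with a prescribed fibre direction, so the mesh responds more stiffly along the fibres. The specific scaling law used here is an illustrative assumption, not the thesis's formulation.

```python
# Anisotropic mass-spring forces: stiffness depends on fibre alignment (sketch).
import numpy as np

def spring_forces(x, springs, k_base, fibre, aniso=3.0):
    """x: (n, 3) node positions; springs: iterable of (i, j, rest_len);
    fibre: unit vector giving the fibre direction."""
    f = np.zeros_like(x)
    for i, j, L0 in springs:
        d = x[j] - x[i]
        L = np.linalg.norm(d)
        u = d / L
        # Stiffness grows with |cos| of the angle between spring and fibre.
        k = k_base * (1.0 + aniso * abs(np.dot(u, fibre)))
        fij = k * (L - L0) * u          # Hooke's law along the spring axis
        f[i] += fij
        f[j] -= fij
    return f
```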