
    Modeling and Simulation in Engineering

    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the design process of products across various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are united by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to make real-time simulation possible), but without altering the precision of the results.

    From Molecules to the Masses: Visual Exploration, Analysis, and Communication of Human Physiology

    The overarching theme of this thesis is the cross-disciplinary application of medical illustration and visualization techniques to address challenges in exploring, analyzing, and communicating aspects of physiology to audiences with differing expertise. Describing the myriad biological processes occurring in living beings over time, the science of physiology is complex and critical to our understanding of how life works. It spans many spatio-temporal scales to combine and bridge the basic sciences (biology, physics, and chemistry) to medicine. Recent years have seen an explosion of new and finer-grained experimental and acquisition methods to characterize these data. The volume and complexity of these data necessitate effective visualizations to complement standard analysis practice.
Visualization approaches must carefully consider and be adaptable to the user's main task, be it exploratory, analytical, or communication-oriented. This thesis contributes to the areas of theory, empirical findings, methods, applications, and research replicability in visualizing physiology. Our contributions open with a state-of-the-art report exploring the challenges and opportunities in visualization for physiology. This report is motivated by the need for visualization researchers, as well as researchers in various application domains, to have a centralized, multiscale overview of visualization tasks and techniques. Using a mixed-methods search approach, this is the first report of its kind to broadly survey the space of visualization for physiology. Our approach to organizing the literature in this report enables the lookup of topics of interest according to spatio-temporal scale. It further subdivides works according to any combination of three high-level visualization tasks: exploration, analysis, and communication. This provides an easily-navigable foundation for discussion and future research opportunities for audience- and task-appropriate visualization for physiology. From this report, we identify two key areas for continued research that begin narrowly and subsequently broaden in scope: (1) exploratory analysis of multifaceted physiology data for expert users, and (2) communication for experts and non-experts alike. Our investigation of multifaceted physiology data takes place over two studies. Each targets processes occurring at different spatio-temporal scales and includes a case study with experts to assess the applicability of our proposed method. At the molecular scale, we examine data from magnetic resonance spectroscopy (MRS), an advanced biochemical technique used to identify small molecules (metabolites) in living tissue that are indicative of metabolic pathway activity. Although highly sensitive and specific, the output of this modality is abstract and difficult to interpret. Our design study investigating the tasks and requirements for expert exploratory analysis of these data led to SpectraMosaic, a novel application enabling domain researchers to analyze any permutation of metabolites in ratio form for an entire cohort, or by sample region, individual, acquisition date, or brain activity status at the time of acquisition. A second approach considers the exploratory analysis of multidimensional physiological data at the opposite end of the spatio-temporal scale: population. An effective exploratory data analysis workflow critically must identify interesting patterns and relationships, which becomes increasingly difficult as data dimensionality increases. Although this can be partially addressed with existing dimensionality reduction techniques, the nature of these techniques means that subtle patterns may be lost in the process. In this approach, we describe DimLift, an iterative dimensionality reduction technique enabling user identification of interesting patterns and relationships that may lie subtly within a dataset through dimensional bundles. Key to this method is the user's ability to steer the dimensionality reduction technique to follow their own lines of inquiry. Our third question considers the crafting of visualizations for communication to audiences with different levels of expertise. It is natural to expect that experts in a topic may have different preferences and criteria to evaluate a visual communication relative to a non-expert audience. 
This impacts the success of an image in communicating a given scenario. Drawing from diverse techniques in biomedical illustration and visualization, we conducted an exploratory study of the criteria that audiences use when evaluating a biomedical process visualization targeted for communication. From this study, we identify opportunities for further convergence of biomedical illustration and visualization techniques for more targeted visual communication design. One opportunity that we discuss in greater depth is the development of semantically-consistent guidelines for the coloring of molecular scenes. The intent of such guidelines is to elevate the scientific literacy of non-expert audiences in the context of molecular visualization, which is particularly relevant to public health communication. All application code and empirical findings are open-sourced and available for reuse by the scientific community and public. The methods and findings presented in this thesis contribute to a foundation of cross-disciplinary biomedical illustration and visualization research, opening several opportunities for continued work in visualization for physiology.
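
    To make the idea of dimensional bundles more concrete, the following Python sketch groups correlated columns of a dataset into bundles, represents each bundle by a single component, and lets the analyst "lift" one bundle back out for closer inspection. It is a minimal illustration built on assumed design choices (correlation-based hierarchical clustering and per-bundle PCA); it is not the published DimLift algorithm, and all names and parameters are hypothetical.

# Illustrative sketch only: an assumed, simplified take on dimensionality reduction
# via "dimensional bundles"; not the DimLift implementation.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

def bundle_dimensions(X, n_bundles=4):
    """Group correlated columns of X into 'dimensional bundles'."""
    corr = np.corrcoef(X, rowvar=False)
    dist = np.clip(1.0 - np.abs(corr), 0.0, None)        # dissimilarity between dimensions
    condensed = dist[np.triu_indices_from(dist, k=1)]    # condensed distance matrix
    labels = fcluster(linkage(condensed, method="average"),
                      t=n_bundles, criterion="maxclust")
    return {b: np.where(labels == b)[0] for b in np.unique(labels)}

def reduce_bundles(X, bundles):
    """Represent each bundle by its first principal component."""
    reps = [PCA(n_components=1).fit_transform(X[:, cols]).ravel()
            for cols in bundles.values()]
    return np.column_stack(reps)

def lift(X, bundles, bundle_id):
    """'Lift' one bundle back to its raw dimensions for closer inspection."""
    return X[:, bundles[bundle_id]]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))               # stand-in for a multidimensional population dataset
bundles = bundle_dimensions(X, n_bundles=4)
overview = reduce_bundles(X, bundles)        # compact view for spotting candidate patterns
detail = lift(X, bundles, bundle_id=1)       # the user steers: expand one bundle of interest

    In the real workflow the user would iterate: inspect the overview, expand a bundle, and refine the bundling along their own lines of inquiry, which is the steering behaviour the thesis describes.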

    Enhancing the information content of geophysical data for nuclear site characterisation

    Our knowledge and understanding of the heterogeneous structure and processes occurring in the Earth’s subsurface is limited and uncertain. This is true even for the upper 100 m of the subsurface, yet many processes that occur within it (e.g. migration of solutes, landslides, crop water uptake) are important to human activities. Geophysical methods such as electrical resistivity tomography (ERT) greatly improve our ability to observe the subsurface thanks to their higher sampling frequency (especially with autonomous time-lapse systems), larger spatial coverage, and less invasive operation, in addition to being more cost-effective than traditional point-based sampling. However, the process of using geophysical data for inference is prone to uncertainty. There is a need to better understand the uncertainties embedded in geophysical data and how they propagate when the data are subsequently used, for example, for hydrological or site-management interpretations and decisions. This understanding is critical to maximizing the extraction of information from geophysical data. To this end, in this thesis I examine various aspects of uncertainty in ERT and develop new methods to make better quantitative use of geophysical data. The core of the thesis is based on two literature reviews and three papers. In the first review, I provide a comprehensive overview of the use of geophysical data for nuclear site characterisation, especially in the context of site clean-up and leak detection. In the second review, I survey the various sources of uncertainty in ERT studies and the existing work to quantify or reduce them. I propose that the various steps in the general workflow of an ERT study can be viewed as a pipeline for information and uncertainty propagation, and suggest that some areas have been understudied. One of these areas is measurement errors. In paper 1, I compare various methods to estimate and model ERT measurement errors using two long-term ERT monitoring datasets, and I develop a new error model that accounts for the fact that each electrode is used to make multiple measurements. In paper 2, I describe the development and implementation of a new method for geoelectrical leak detection. While existing methods rely on first obtaining resistivity images through inversion of ERT data, the approach described here estimates leak parameters directly from raw ERT data. This is achieved by constructing hydrological models from prior site information, coupling them with an ERT forward model, and then updating the leak (and other hydrological) parameters through data assimilation. The approach shows promising results when applied to data from a controlled injection experiment in Yorkshire, UK; it complements ERT imaging and provides a new way to use ERT data to inform site characterisation. In addition to leak detection, ERT is also commonly used to monitor soil moisture in the vadose zone, increasingly in a quantitative manner. Although both the petrophysical relationships (i.e., the choice of model and its parameterization) and the derived moisture content are known to be subject to uncertainty, they are commonly treated as exact and error-free. In paper 3, I examine the impact of uncertain petrophysical relationships on the moisture content estimates derived from electrical geophysics.
Data from a collection of core samples show that the variability in such relationships can be large; this variability can in turn lead to high uncertainty in moisture content estimates and appears to be the dominant source of uncertainty in many cases. In the closing chapters, I discuss and synthesize the findings of the thesis within the larger context of enhancing the information content of geophysical data, and provide an outlook on further research on this topic.
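
    As a rough numerical illustration of the point about petrophysical uncertainty, the sketch below propagates uncertainty in the exponents of an assumed Archie-type resistivity-saturation relation into the derived volumetric moisture content by Monte Carlo sampling. The relation, parameter values, and uncertainty ranges are assumptions made for demonstration; they are not the models or data used in the thesis.

# Hedged illustration: assumed Archie-type relation rho = rho_w * phi**(-m) * Sw**(-n),
# with the exponents m and n treated as uncertain rather than exact.
import numpy as np

rng = np.random.default_rng(42)
rho = 400.0      # bulk resistivity from ERT (ohm·m), example value
rho_w = 25.0     # pore-water resistivity (ohm·m), assumed known
phi = 0.35       # porosity, assumed known

m = rng.normal(1.8, 0.15, 10_000)    # cementation exponent samples
n = rng.normal(2.0, 0.20, 10_000)    # saturation exponent samples

Sw = (rho_w / (phi ** m * rho)) ** (1.0 / n)   # water saturation per parameter sample
theta = phi * np.clip(Sw, 0.0, 1.0)            # volumetric moisture content

print(f"theta median {np.median(theta):.3f}, "
      f"95% interval [{np.percentile(theta, 2.5):.3f}, {np.percentile(theta, 97.5):.3f}]")

    Even in this toy setup, the spread in the exponents alone produces a visible spread in the moisture estimate; paper 3 quantifies the analogous effect using measured core-sample relationships.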

    A 3D environment for surgical planning and simulation

    The use of Computed Tomography (CT) images and their three-dimensional (3D) reconstruction has spread over the last decade in implantology and surgery. Acquired CT datasets are commonly handled by dedicated software that provides a working context for preoperative planning. Such software exploits image processing and computer graphics techniques to provide the fundamental information needed to work safely and to minimize the surgeon's possible errors during the operation. However, most of these systems have shortcomings and flaws that compromise the precision and the additional safety that their use should provide. The research carried out during my PhD concerned the development of an optimized software system for surgical preoperative planning. To this end, the state of the art was analyzed and its main deficiencies were identified. To produce practical solutions, these shortcomings were then contextualized in a specific medical field: oral implantology was chosen, owing to the available support of a pool of implantologists. It emerged that most software systems for oral implantology, which are based on a multi-view approach often accompanied by a 3D rendered model, are affected by the following problems: unreliability of measurements computed on misleading views (the panoramic one), a suboptimal use of the 3D environment, significant planning errors induced by the software's working context (incorrect cross-sectional planes), and the absence of automatic recognition of fundamental anatomical structures (such as the mandibular canal). A fully 3D approach, and a planning software system in particular, was therefore defined, in which image processing and computer graphics techniques are used to create a smooth, user-friendly, completely 3D environment for oral implant planning and simulation. Interpolation of the axial slices is used to produce a continuous radiographic volume with isotropic voxels, in order to achieve a correct working context. The freedom to choose, arbitrarily and during the planning phase, the best cross-sectional plane for obtaining correct measurements is provided through interpolation and texture generation. The correct orientation of the planned implants is also easily computed by exploiting a radiological mask with radio-opaque markers, worn by the patient during the CT scan, and by reconstructing the cross-sectional images along the preferred directions. The mandibular canal is automatically recognised by a purpose-built adaptive, surface-extracting, statistical-segmentation algorithm. To complete the overall approach, interfacing between the software and an anthropomorphic robot, so that the planning can be transferred to a surgical guide, was achieved through an appropriate change of coordinates exploiting a physical reference frame in the radiological stent. Finally, every software feature was evaluated and validated, statistically or clinically, and the precision achieved outperforms that reported in the literature.
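
    The coordinate change between the planning (CT) frame and the robot frame can be sketched with a standard SVD-based rigid registration of paired fiducial markers, as below. The marker coordinates, rotation, and function names are hypothetical; the sketch shows the general technique rather than the system implemented in the thesis.

# Minimal sketch: estimate the rigid transform mapping fiducial markers measured in the
# planning (CT) frame to the same markers measured in the robot frame (Kabsch method).
import numpy as np

def rigid_transform(P, Q):
    """Return R, t such that Q ~= R @ P + t for paired 3xN point sets."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Hypothetical radio-opaque marker positions in the radiological stent (planning frame, mm) ...
P = np.array([[10.0, 62.5, 35.1, 80.4],
              [12.3, 18.9, 70.2, 55.0],
              [ 5.0,  7.5, 11.0,  9.8]])
# ... and the same markers as located in the robot frame (rotation about z plus a translation).
a = np.deg2rad(30.0)
true_R = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
true_t = np.array([[100.0], [-40.0], [250.0]])
Q = true_R @ P + true_t

R, t = rigid_transform(P, Q)
implant_point_ct = np.array([[45.0], [33.0], [8.0]])    # a planned point in the CT frame
implant_point_robot = R @ implant_point_ct + t          # the same point in the robot frame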

    Computer integrated system: medical imaging & visualization

    The intent of this book is to present research work using a user-centered design approach. Due to space constraints, the story of the journey included in this book is relatively brief. However, we believe that it adequately represents that journey, from its humble beginnings in 2008 to the point where it envisions future trends among both researchers and practitioners across the Computer Science and Medical disciplines. This book aims not only to present a representative sampling of real-world collaboration between these disciplines but also to provide insights into the different aspects related to the use of real-world Computer Assisted Medical applications. Readers and potential clients should find the information particularly useful in analyzing the benefits of collaboration between these two fields and the products of their institutions. The work discussed here is a compilation of the work of several PhD students under my supervision, who have since graduated and produced several publications in journals or conference proceedings. As their work has already been published, this book focuses more on the research methodology, based on the medical technology used in their research. The research work presented in this book partially encompasses the work under the MOA for collaborative Research and Development in the field of Computer Assisted Surgery and Diagnostics pertaining to Thoracic and Cardiovascular Diseases between UPM, UKM and IJN, spanning five years beginning 15 Feb 2013.

    Determining the Biomechanical Behavior of the Liver Using Medical Image Analysis and Evolutionary Computation

    Modeling liver deformation forms the basis for the development of new clinical applications that improve diagnosis, planning, and guidance in liver surgery. However, patient-specific modeling of this organ and its validation are still a challenge in biomechanics, because of the difficulty of measuring the mechanical response of in vivo liver tissue. The current approach consists of performing minimally invasive or open surgery aimed at estimating the elastic constants of the proposed biomechanical models. This dissertation presents how the use of medical image analysis and evolutionary computation allows the characterization of the biomechanical behavior of the liver while avoiding such invasive techniques. In particular, similarity coefficients commonly used in medical image analysis have made it possible, on the one hand, to estimate the patient-specific biomechanical model of the liver without invasive measurement of its mechanical response and, on the other hand, to validate the proposed biomechanical models. The Jaccard coefficient and the Hausdorff distance have been used to validate the models proposed to simulate the behavior of ex vivo lamb livers, by calculating the error between the volume of the experimentally deformed liver samples and the volume obtained from biomechanical simulations of these deformations. These coefficients provide information such as the shape of the samples and the error distribution over their volume. For this reason, both coefficients have also been used to formulate a novel function, the Geometric Similarity Function (GSF). This function has made it possible to establish a methodology for estimating the elastic constants of the models proposed for the human liver using evolutionary computation. Several optimization strategies using the GSF as cost function have been developed to estimate the patient-specific elastic constants of the biomechanical models proposed for the human liver. Finally, this methodology has been used to define and validate a biomechanical model proposed for an in vitro human liver. Martínez Martínez, F. (2014). Determining the Biomechanical Behavior of the Liver Using Medical Image Analysis and Evolutionary Computation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/39337
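
    The validation metrics named above can be made concrete with a short sketch: the Jaccard coefficient and the symmetric Hausdorff distance between a simulated and an observed binary liver volume, combined here into a GSF-like cost that an evolutionary optimizer could minimize. The abstract does not state the actual GSF formula, so the weighted combination (gsf_like_cost, with weight w and normalization d_max) and the toy volumes are stand-in assumptions.

# Jaccard and Hausdorff terms for comparing simulated vs. observed volumes; the combined
# cost below is a hypothetical stand-in for the GSF, not its published form.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def jaccard(a, b):
    """Jaccard coefficient between two boolean volumes of equal shape."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the voxel sets of two boolean volumes."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

def gsf_like_cost(simulated, observed, w=0.5, d_max=50.0):
    """Hypothetical cost mixing overlap error and boundary error, to be minimized."""
    return (w * (1.0 - jaccard(simulated, observed))
            + (1.0 - w) * hausdorff(simulated, observed) / d_max)

# Toy volumes: a sphere and a slightly shifted sphere stand in for observed vs. simulated livers.
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
observed  = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 12 ** 2
simulated = (zz - 22) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 12 ** 2
print(jaccard(simulated, observed), hausdorff(simulated, observed),
      gsf_like_cost(simulated, observed))

    A cost of this kind is what the evolutionary optimization strategies described in the thesis minimize while searching for patient-specific elastic constants.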

    CELLmicrocosmos - Integrative cell modeling at the molecular, mesoscopic and functional level

    Sommer B. CELLmicrocosmos - Integrative cell modeling at the molecular, mesoscopic and functional level. Bielefeld: Bielefeld University; 2012. The modeling of cells is an important application area of Systems Biology. In the context of this work, three cytological levels are defined: the mesoscopic, the molecular and the functional level. A number of quite diverse related approaches, which can be categorized into these levels, are introduced during this work, but none of them covers all areas. This work presents the combination of all three aforementioned cytological levels, realized by the CELLmicrocosmos project, which combines and extends different Bioinformatics-related methods. The mesoscopic level is covered by CellEditor, a simple tool for generating eukaryotic or prokaryotic cell models. These models are based on cell components represented by three-dimensional shapes. Different methods to generate these shapes - abstract, interpretative, 3D-microscopy-based and molecular-structure-based cell component modeling - are discussed, partly using external tools such as Amira, 3ds Max and/or Blender. To communicate with these tools, CellEditor provides import as well as export capabilities based on the VRML97 format. In addition, different cytological coloring methods that can be applied to the cell models are discussed. MembraneEditor operates at the molecular level. This tool solves heterogeneous Membrane Packing Problems by distributing lipids on rectangular areas using collision detection. It provides fast and intuitive methods supporting a wide range of application areas based on the PDB format. Moreover, a plugin interface enables the use of custom algorithms. In the context of this work, a high-density-generating lipid packing algorithm, The Wanderer, is evaluated. The semi-automatic integration of proteins into the membrane is enabled by using data from the OPM and PDBTM databases. In contrast to the aforementioned structural levels, the third level covers the functional aspects of the cell. Here, protein-related networks or data sets can be imported and mapped into the previously generated cell models using the PathwayIntegration. For this purpose, data integration methods are applied, represented by the data warehouse DAWIS-M.D., which includes a number of established databases. This information is enriched by text-mining data acquired from the ANDCell database. The localization of proteins is supported by different tools such as the interactive Localization Table and the Localization Charts. The correlation of partly multi-layered cell components with protein-related networks is covered by the Network Mapping Problem; a special implementation of the ISOM layout is used for this purpose. Finally, a first approach to combining all these interrelated levels is presented: CellExplorer, which integrates CellEditor as well as PathwayIntegration and imports structures generated with MembraneEditor. For this purpose, the shape-based cell components can be correlated with networks as well as molecular membrane structures using Membrane Mapping. It is shown that the tools discussed here can be applied to scientific as well as educational tasks: educational cell visualization, initial membrane modeling for molecular simulations, analysis of interrelated protein sets, and cytological disease mapping. These tasks are supported by the user-friendly combination of Java, Java 3D and Web Start technology.
In the last part of this thesis, the future of Integrative Cell Modeling is discussed. While the approaches discussed here essentially represent three-dimensional snapshots of the cell, prospective approaches will have to be extended into the fourth dimension: time.
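
    As a toy illustration of the Membrane Packing Problem that MembraneEditor addresses, the sketch below places a heterogeneous mixture of lipids on a rectangular patch by random sequential insertion with circular collision detection. Patch size, radii, and the insertion strategy are arbitrary assumptions; this is not MembraneEditor's code and does not reproduce The Wanderer algorithm.

# Toy membrane packing: random sequential insertion of circular "lipids" with collision checks.
import math
import random

def pack_lipids(width, height, radii, max_tries=500, seed=7):
    """Place as many lipids as possible without overlaps; returns (x, y, r) tuples."""
    random.seed(seed)
    placed = []
    for r in radii:
        for _ in range(max_tries):
            x = random.uniform(r, width - r)
            y = random.uniform(r, height - r)
            if all(math.hypot(x - px, y - py) >= r + pr for px, py, pr in placed):
                placed.append((x, y, r))
                break
    return placed

# Heterogeneous mixture: two lipid species with different (made-up) head-group radii.
lipids = pack_lipids(200.0, 200.0, radii=[4.5] * 150 + [3.2] * 150)
print(f"placed {len(lipids)} of 300 lipids")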

    Archaeobotanical applications of microCT imaging

    This thesis explores the ways in which the three-dimensional and non-destructive imaging technique of microCT can be applied to archaeobotanical materials to extract additional information previously inaccessible using traditional two-dimensional techniques. Across a series of eight publications, two microCT imaging protocols focusing on the imaging and analysis of two distinct types of archaeobotanical remains are presented, along with archaeological case studies to which they have been successfully applied. Both protocols seek to utilise the relatively new imaging technique of microCT in order to explore the histories of some of the world's most important, yet in some cases understudied, food crops, including rice (Oryza sativa) in Island Southeast Asia, sorghum (Sorghum bicolor) and pearl millet (Pennisetum glaucum) in Africa, and taro (Colocasia esculenta), sweet potato (Ipomoea batatas), and yams (Dioscoreaceae) in Southeast Asia and the Pacific. The first protocol outlines how organic cereal tempers can be virtually extracted from inside pottery sherds through the use of microCT scanning and 3D digital segmentation techniques. These extracted digital remains can then be taxonomically identified and their domesticated status assessed using morphological information only accessible with the penetrative X-rays of microCT. This protocol has been successfully applied to extract new rice and sorghum assemblages from previously excavated pottery sherds, and their analysis has expanded our knowledge of the dispersal and early cultivation histories of these staple food crops. The second protocol uses microCT to build the first virtual reference collection of a greatly understudied type of archaeobotanical evidence, archaeological parenchyma. This protocol was developed by imaging samples of important root crops in the Southeast Asia and Pacific region from Jon Hather's parenchyma reference collection and applying his taxonomic identification method developed in the 1980s and 90s. Here his method is updated and adapted to include the added three-dimensional contextual information provided by microCT scanning as well as the greater range of anatomical variation captured both within and between species. The microCT datasets of these reference samples will form part of the first publicly accessible online virtual archaeological parenchyma reference collection, which will hopefully encourage wider adoption and application of the technique. Both archaeobotanical microCT protocols presented here demonstrate the enormous potential of the technique to expand on our current sources of archaeobotanical evidence. The digital nature of the datasets presents the possibility of increasing analytical efficiency in the future with the development of automated archaeobotanical analyses.
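
    The virtual extraction described in the first protocol can be illustrated, in a much simplified form, by thresholding a CT volume for the low-attenuation voids left by organic temper and labelling them as 3D connected components. The synthetic volume, threshold, and measurements below are assumptions for demonstration only; they do not reproduce the published protocol or any real scan parameters.

# Simplified stand-in for temper extraction: threshold a synthetic "sherd" volume for
# low-density voids, label them in 3D, and measure their sizes.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
sherd = rng.normal(1000.0, 30.0, size=(80, 80, 80))    # stand-in CT intensities for ceramic
zz, yy, xx = np.mgrid[0:80, 0:80, 0:80]
for _ in range(25):                                    # carve a few air-filled voids
    cz, cy, cx = rng.integers(10, 70, size=3)
    sherd[(zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 < 9] = -900.0

voids = sherd < 0.0                                    # air attenuates far less than ceramic
labels, n = ndimage.label(voids)
sizes = np.asarray(ndimage.sum(voids, labels, index=range(1, n + 1)))  # voxel count per void
print(f"{n} candidate temper voids, largest {int(sizes.max())} voxels")

    In practice, each labelled component would then be rendered and measured in 3D so that its morphology can be compared against reference grains for taxonomic identification.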