748 research outputs found

    Enabling Techniques for Expressive Flow Field Visualization and Exploration

    Flow visualization plays an important role in many scientific and engineering disciplines such as climate modeling, turbulent combustion, and automobile design. The most common method for flow visualization is to display integral flow lines such as streamlines computed from particle tracing. Effective streamline visualization should capture flow patterns and display them with appropriate density, so that critical flow information can be visually acquired. In this dissertation, we present several approaches that facilitate expressive flow field visualization and exploration. First, we design a unified information-theoretic framework to model streamline selection and viewpoint selection as symmetric problems. Two interrelated information channels are constructed between a pool of candidate streamlines and a set of sample viewpoints. Based on these information channels, we define streamline information and viewpoint information to select the best streamlines and viewpoints, respectively. Second, we present a focus+context framework to magnify small features and reduce occlusion around them while compacting the context region in a full view. This framework partitions the volume into blocks and deforms them to guide streamline repositioning. The desired deformation is formulated into energy terms and achieved by minimizing the energy function. Third, measuring the similarity of integral curves is fundamental to many tasks such as feature detection, pattern querying, streamline clustering, and hierarchical exploration. We introduce FlowString, which extracts shape-invariant features from streamlines to form an alphabet of characters and encodes each streamline into a string. The similarity of two streamline segments then becomes a specially designed edit distance between two strings. Leveraging the suffix tree, FlowString provides a string-based method for exploratory streamline analysis and visualization. A universal alphabet is learned from multiple data sets to capture basic flow patterns that exist in a variety of flow fields. This allows easy comparison and efficient querying across data sets. Fourth, for exploration of vascular data sets, which contain a series of vector fields together with multiple scalar fields, we design a web-based approach for users to investigate the relationship among different properties guided by histograms. The vessel structure is mapped from the 3D volume space to a 2D graph, which allows more efficient interaction and effective visualization on websites. A segmentation scheme is proposed to divide the vessel structure based on a user-specified property to further explore the distribution of that property over space.
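
    The FlowString idea in the abstract above can be illustrated with a small sketch. The following is not the dissertation's implementation: it assumes a hypothetical curvature-based alphabet and a plain Levenshtein distance, whereas FlowString learns its alphabet from shape-invariant features and uses a specially designed edit distance with suffix-tree queries.

```python
import numpy as np

# Hypothetical 4-letter alphabet: letters encode increasing turning angle (a curvature proxy).
ALPHABET = "abcd"

def encode_streamline(points):
    """Quantize local turning angles along a polyline (n, 3) into a character string."""
    chars = []
    for i in range(1, len(points) - 1):
        v1, v2 = points[i] - points[i - 1], points[i + 1] - points[i]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        angle = np.arccos(np.clip(cos, -1.0, 1.0))
        bin_width = np.pi / len(ALPHABET)
        chars.append(ALPHABET[min(int(angle / bin_width), len(ALPHABET) - 1)])
    return "".join(chars)

def edit_distance(a, b):
    """Plain Levenshtein distance between two encoded streamlines."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0], d[0, :] = np.arange(len(a) + 1), np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return int(d[len(a), len(b)])
```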

    Pycortex: an interactive surface visualizer for fMRI.

    Surface visualizations of fMRI provide a comprehensive view of cortical activity. However, surface visualizations are difficult to generate, and most common visualization techniques rely on unnecessary interpolation, which limits the fidelity of the resulting maps. Furthermore, it is difficult to understand the relationship between flattened cortical surfaces and the underlying 3D anatomy using currently available tools. To address these problems, we have developed pycortex, a Python toolbox for interactive surface mapping and visualization. Pycortex exploits the power of modern graphics cards to sample volumetric data on a per-pixel basis, allowing dense and accurate mapping of the voxel grid across the surface. Anatomical and functional information can be projected onto the cortical surface. The surface can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet. The output of pycortex can be viewed using WebGL, a technology compatible with modern web browsers. This allows complex fMRI surface maps to be distributed broadly online without requiring installation of complex software.
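
    A minimal usage sketch of the workflow described above, based on the example subject ("S1") and transform ("fullhead") that ship with pycortex's demo data; exact names and data availability depend on the installed pycortex database.

```python
import numpy as np
import cortex  # pycortex

# Functional data on the voxel grid of the "fullhead" transform for demo subject "S1"
# (shapes and names follow the pycortex demos; substitute your own subject and transform).
data = np.random.randn(31, 100, 100)
vol = cortex.Volume(data, "S1", "fullhead")

cortex.quickshow(vol)            # static flatmap rendering with matplotlib
cortex.webgl.show(data=vol)      # interactive WebGL viewer in the browser
```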

    Enabling viewpoint learning through dynamic label generation

    Optimal viewpoint prediction is an essential task in many computer graphics applications. Unfortunately, common viewpoint qualities suffer from two major drawbacks: dependency on clean surface meshes, which are not always available, and the lack of closed-form expressions, which requires a costly search involving rendering. To overcome these limitations we propose to separate viewpoint selection from rendering through an end-to-end learning approach, whereby we reduce the influence of the mesh quality by predicting viewpoints from unstructured point clouds instead of polygonal meshes. While this makes our approach insensitive to the mesh discretization during evaluation, it only becomes possible when resolving label ambiguities that arise in this context. Therefore, we additionally propose to incorporate the label generation into the training procedure, making the label decision adaptive to the current network predictions. We show how our proposed approach allows for learning viewpoint predictions for models from different object categories and for different viewpoint qualities. Additionally, we show that prediction times are reduced from several minutes to a fraction of a second, as compared to state-of-the-art (SOTA) viewpoint quality evaluation. Code and training data are available at https://github.com/schellmi42/viewpoint_learning, which is to our knowledge the biggest viewpoint quality dataset available. This work was supported in part by project TIN2017-88515-C2-1-R (GEN3DLIVE), from the Spanish Ministerio de Economía y Competitividad, by 839 FEDER (EU) funds. Peer Reviewed. Postprint (published version).

    Enabling Viewpoint Learning through Dynamic Label Generation

    Optimal viewpoint prediction is an essential task in many computer graphics applications. Unfortunately, common viewpoint qualities suffer from two major drawbacks: dependency on clean surface meshes, which are not always available, and the lack of closed-form expressions, which requires a costly search involving rendering. To overcome these limitations we propose to separate viewpoint selection from rendering through an end-to-end learning approach, whereby we reduce the influence of the mesh quality by predicting viewpoints from unstructured point clouds instead of polygonal meshes. While this makes our approach insensitive to the mesh discretization during evaluation, it only becomes possible when resolving label ambiguities that arise in this context. Therefore, we additionally propose to incorporate the label generation into the training procedure, making the label decision adaptive to the current network predictions. We show how our proposed approach allows for learning viewpoint predictions for models from different object categories and for different viewpoint qualities. Additionally, we show that prediction times are reduced from several minutes to a fraction of a second, as compared to state-of-the-art (SOTA) viewpoint quality evaluation. We will further release the code and training data, which will to our knowledge be the biggest viewpoint quality dataset available.
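
    A rough sketch of the dynamic-label idea described above, under assumed shapes and names (viewpoints as direction vectors, a fixed candidate set per sample); the paper's actual training setup differs in its viewpoint-quality labels and network architecture.

```python
import torch

def dynamic_label_loss(pred, candidates):
    """pred: (B, 3) predicted view directions; candidates: (B, K, 3) ambiguous,
    equally valid viewpoint labels. The target is re-chosen every step as the
    candidate closest to the current prediction, making the label decision
    adaptive to the network."""
    dists = torch.norm(candidates - pred.unsqueeze(1), dim=-1)      # (B, K)
    idx = dists.argmin(dim=1)                                       # nearest label per sample
    target = candidates[torch.arange(pred.shape[0]), idx]           # (B, 3)
    return ((pred - target) ** 2).sum(dim=-1).mean()
```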

    Lattice-Boltzmann simulations of cerebral blood flow

    Computational haemodynamics plays a central role in the understanding of blood behaviour in the cerebral vasculature, increasing our knowledge of the onset of vascular diseases and their progression, improving diagnosis, and ultimately providing better patient prognosis. Computer simulations hold the potential to accurately characterise the motion of blood and its interaction with the vessel wall, providing the capability to assess surgical treatments with no danger to the patient. These aspects contribute considerably to a better understanding of blood circulation processes as well as to improved pre-treatment planning. Existing software environments for treatment planning consist of several stages, each requiring significant user interaction and processing time, which limits their use in clinical scenarios. The aim of this PhD is to provide clinicians and researchers with a tool to aid in the understanding of human cerebral haemodynamics. This tool employs a high-performance fluid solver based on the lattice-Boltzmann method (coined HemeLB), high-performance distributed and grid computing, and various advanced software applications useful to efficiently set up and run patient-specific simulations. A graphical tool is used to segment the vasculature from patient-specific CT or MR data and configure boundary conditions with ease, creating models of the vasculature in real time. Blood flow visualisation is done in real time using in situ rendering techniques implemented within the parallel fluid solver and aided by steering capabilities; these programming strategies allow the clinician to interactively display the simulation results on a local workstation. A separate software application is used to numerically compare simulation results carried out at different spatial resolutions, providing a strategy to approach numerical validation. The developed software and supporting computational infrastructure were used to study various patient-specific intracranial aneurysms with the collaborating interventionalists at the National Hospital for Neurology and Neuroscience (London), using three-dimensional rotational angiography data to define the patient-specific vasculature. Blood flow motion was depicted in detail by the visualisation capabilities, clearly showing vortex flow features and stress distribution at the inner surface of the aneurysms and their surrounding vasculature. These investigations permitted the clinicians to rapidly assess the risk associated with the growth and rupture of each aneurysm. The ultimate goal of this work is to aid clinical practice with an efficient, easy-to-use toolkit for real-time decision support.
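
    For orientation, the core of any lattice-Boltzmann solver is a collide-and-stream update. The sketch below is a generic two-dimensional D2Q9 BGK step in NumPy, not HemeLB itself (which is a parallel C++ code operating on three-dimensional lattices and sparse vascular geometries).

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def bgk_step(f, tau):
    """One collide-and-stream step; f has shape (9, ny, nx), tau is the relaxation time."""
    rho = f.sum(axis=0)                                   # macroscopic density
    u = np.einsum('qi,qyx->iyx', c, f) / rho              # macroscopic velocity (2, ny, nx)
    cu = np.einsum('qi,iyx->qyx', c, u)                   # c_q . u per direction
    usq = (u ** 2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)
    f = f - (f - feq) / tau                               # BGK collision
    for q in range(9):                                    # streaming along lattice links
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(1, 0))
    return f
```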

    From Molecules to the Masses: Visual Exploration, Analysis, and Communication of Human Physiology

    The overarching theme of this thesis is the cross-disciplinary application of medical illustration and visualization techniques to address challenges in exploring, analyzing, and communicating aspects of physiology to audiences with differing expertise. Describing the myriad biological processes occurring in living beings over time, the science of physiology is complex and critical to our understanding of how life works. It spans many spatio-temporal scales to combine and bridge the basic sciences (biology, physics, and chemistry) to medicine. Recent years have seen an explosion of new and finer-grained experimental and acquisition methods to characterize these data. The volume and complexity of these data necessitate effective visualizations to complement standard analysis practice. Visualization approaches must carefully consider and be adaptable to the user's main task, be it exploratory, analytical, or communication-oriented. This thesis contributes to the areas of theory, empirical findings, methods, applications, and research replicability in visualizing physiology.

    Our contributions open with a state-of-the-art report exploring the challenges and opportunities in visualization for physiology. This report is motivated by the need for visualization researchers, as well as researchers in various application domains, to have a centralized, multiscale overview of visualization tasks and techniques. Using a mixed-methods search approach, this is the first report of its kind to broadly survey the space of visualization for physiology. Our approach to organizing the literature in this report enables the lookup of topics of interest according to spatio-temporal scale. It further subdivides works according to any combination of three high-level visualization tasks: exploration, analysis, and communication. This provides an easily navigable foundation for discussion and future research opportunities for audience- and task-appropriate visualization for physiology. From this report, we identify two key areas for continued research that begin narrowly and subsequently broaden in scope: (1) exploratory analysis of multifaceted physiology data for expert users, and (2) communication for experts and non-experts alike.

    Our investigation of multifaceted physiology data takes place over two studies. Each targets processes occurring at different spatio-temporal scales and includes a case study with experts to assess the applicability of our proposed method. At the molecular scale, we examine data from magnetic resonance spectroscopy (MRS), an advanced biochemical technique used to identify small molecules (metabolites) in living tissue that are indicative of metabolic pathway activity. Although highly sensitive and specific, the output of this modality is abstract and difficult to interpret. Our design study investigating the tasks and requirements for expert exploratory analysis of these data led to SpectraMosaic, a novel application enabling domain researchers to analyze any permutation of metabolites in ratio form for an entire cohort, or by sample region, individual, acquisition date, or brain activity status at the time of acquisition. A second approach considers the exploratory analysis of multidimensional physiological data at the opposite end of the spatio-temporal scale: population. An effective exploratory data analysis workflow must critically identify interesting patterns and relationships, which becomes increasingly difficult as data dimensionality increases. Although this can be partially addressed with existing dimensionality reduction techniques, the nature of these techniques means that subtle patterns may be lost in the process. In this approach, we describe DimLift, an iterative dimensionality reduction technique enabling user identification of interesting patterns and relationships that may lie subtly within a dataset through dimensional bundles. Key to this method is the user's ability to steer the dimensionality reduction technique to follow their own lines of inquiry.

    Our third question considers the crafting of visualizations for communication to audiences with different levels of expertise. It is natural to expect that experts in a topic may have different preferences and criteria for evaluating a visual communication relative to a non-expert audience. This impacts the success of an image in communicating a given scenario. Drawing from diverse techniques in biomedical illustration and visualization, we conducted an exploratory study of the criteria that audiences use when evaluating a biomedical process visualization targeted for communication. From this study, we identify opportunities for further convergence of biomedical illustration and visualization techniques for more targeted visual communication design. One opportunity that we discuss in greater depth is the development of semantically consistent guidelines for the coloring of molecular scenes. The intent of such guidelines is to elevate the scientific literacy of non-expert audiences in the context of molecular visualization, which is particularly relevant to public health communication. All application code and empirical findings are open-sourced and available for reuse by the scientific community and public. The methods and findings presented in this thesis contribute to a foundation of cross-disciplinary biomedical illustration and visualization research, opening several opportunities for continued work in visualization for physiology.
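
    To make the dimensional-bundle idea concrete, the sketch below groups correlated variables into bundles and summarizes each bundle by its first principal component. This is only an illustration of the concept under assumed choices (absolute correlation as similarity, average-linkage clustering), not the DimLift implementation or its interactive steering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def bundle_dimensions(X, n_bundles=4):
    """X: (samples, variables). Returns {bundle id: (member columns, 1-D bundle summary)}."""
    corr = np.abs(np.corrcoef(X, rowvar=False))            # |correlation| between variables
    dist = 1.0 - corr
    np.fill_diagonal(dist, 0.0)
    labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                      t=n_bundles, criterion="maxclust")   # bundle assignment per variable
    bundles = {}
    for b in np.unique(labels):
        cols = np.where(labels == b)[0]
        Xc = X[:, cols] - X[:, cols].mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # first PC summarizes the bundle
        bundles[int(b)] = (cols, Xc @ vt[0])
    return bundles
```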

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
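
    As a purely illustrative companion to the review above (not a model from any surveyed paper), a compact 1D convolutional network of the kind commonly applied to single-lead ECG segments might look as follows; the class name, layer sizes, and the 360 Hz segment length are assumptions.

```python
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    """Small 1D CNN for beat- or rhythm-level classification of ECG segments."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                       # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                                  # x: (batch, 1, samples)
        return self.classifier(self.features(x).squeeze(-1))

logits = ECGNet()(torch.randn(8, 1, 360))                  # e.g. one-second beats at 360 Hz
```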

    Aerospace medicine and biology: A continuing bibliography with indexes, supplement 128, May 1974

    This special bibliography lists 282 reports, articles, and other documents introduced into the NASA scientific and technical information system in April 1974.