15 research outputs found

    A Multiresolution Markovian Fusion Model for the Color Visualization of Hyperspectral Images


    Review of Fluorescence Guided Surgery Visualization and Overlay Techniques

    In fluorescence guided surgery, data visualization represents a critical step between signal capture and the display needed for clinical decisions informed by that signal. The diversity of methods for displaying surgical images is reviewed, with a particular focus on electronically detected and visualized signals, as required for near-infrared or low-concentration tracers. Factors driving the choices, such as human perception, the need for rapid decision making in a surgical environment, and biases induced by display choices, are outlined. Five practical suggestions are offered for optimal display orientation, color map, transparency/alpha function, dynamic range compression, and color perception checks.
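    A minimal sketch of the transparency/alpha and color-map suggestions (the green color map, the signal-proportional alpha ramp, and all function names are illustrative assumptions, not the review's specific recommendations):

```python
import numpy as np

def overlay_fluorescence(gray, signal, alpha_max=0.8, threshold=0.1):
    """Alpha-composite a pseudocolored fluorescence signal over a grayscale
    white-light image. `gray` and `signal` are 2-D arrays scaled to [0, 1]."""
    # Simple green color map for the NIR signal (the signal is invisible to
    # the eye, so any display color is a design choice).
    color = np.zeros(gray.shape + (3,))
    color[..., 1] = signal
    # Transparency (alpha) ramps with signal strength; weak signal stays
    # fully transparent so the underlying anatomy remains visible.
    alpha = np.where(signal > threshold, alpha_max * signal, 0.0)[..., None]
    base = np.repeat(gray[..., None], 3, axis=2)
    return (1.0 - alpha) * base + alpha * color
```

    A signal-dependent alpha like this keeps low-signal regions from occluding anatomy, one of the perceptual concerns the review discusses.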

    Visualisation methods for polarimetric imaging

    Polarimetric imaging is a technique for measuring the spatial variation of the polarisation of light. Since human vision is essentially unable to detect polarisation, the data obtained from this imaging technique must be converted into the channels of the human visual system in order to visually process the spatial correlations in the data. The technique of converting non-visual data into a visual representation is known as data visualisation. While techniques for visualising other types of data are well studied, techniques specific to polarimetric imaging are understudied. This research aims to survey the current state of polarimetric imaging visualisation, to analyse the current methods using metrics from visualisation research, to improve on the existing techniques, to test the effectiveness of different methods in terms of user performance, and to develop novel colour-mapping methods.
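    One common way to map polarisation data into the channels of the human visual system, sketched here assuming linear-polarisation Stokes images (the function name and the angle-to-hue / degree-to-saturation assignment are illustrative; surveys of this field describe several such schemes):

```python
import colorsys
import numpy as np

def stokes_to_rgb(s0, s1, s2):
    """Map linear-polarisation Stokes images to colour: angle of linear
    polarisation (AoLP) -> hue, degree of linear polarisation (DoLP) ->
    saturation, intensity -> value."""
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)          # AoLP is cyclic with period pi
    hue = (aolp + np.pi / 2) / np.pi         # normalise to [0, 1), like hue
    value = s0 / s0.max()
    rgb = np.empty(s0.shape + (3,))
    for idx in np.ndindex(s0.shape):         # per-pixel HSV -> RGB
        rgb[idx] = colorsys.hsv_to_rgb(hue[idx], dolp[idx], value[idx])
    return rgb
```

    Because both AoLP and hue are cyclic, this mapping avoids false discontinuities at the angular wrap-around; unpolarised pixels collapse to grayscale.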

    Visualization of Multivariate Image Data using Image Fusion and Perceptually Optimized Color Scales based on sRGB

    Saalbach A, Twellmann T, Nattkemper TW, White MJ, Khazen M, Leach MO. Visualization of Multivariate Image Data using Image Fusion and Perceptually Optimized Color Scales based on sRGB. In: Robert L. Galloway Jr., ed. Medical Imaging 2004: Visualization, Image-Guided Procedures, and Display. Proceedings of SPIE. Vol 5367. San Diego, CA: SPIE; 2004. Due to the rapid progress in medical imaging technology, analysis of multivariate image data is receiving increased interest. However, their visual exploration is a challenging task since it requires the integration of information from many different sources which usually cannot be perceived at once by an observer. Image fusion techniques are commonly used to obtain information from multivariate image data, while psychophysical aspects of data visualization are usually not considered. Visualization is typically achieved by means of device-derived color scales. With respect to psychophysical aspects of visualization, more sophisticated color mapping techniques based on device-independent (and perceptually uniform) color spaces like CIELUV have been proposed. Nevertheless, the benefit of these techniques is limited by the fact that they require complex color space transformations to account for device characteristics and viewing conditions. In this paper we present a new framework for the visualization of multivariate image data using image fusion and color mapping techniques. In order to overcome problems of consistent image presentations and color space transformations, we propose perceptually optimized color scales based on CIELUV in combination with the sRGB (IEC 61966-2-1) color specification. In contrast to color definitions based purely on CIELUV, sRGB data can be used directly under reasonable conditions, without complex transformations and additional information. In the experimental section we demonstrate the advantages of our approach in an application of these techniques to the visualization of DCE-MRI images from breast cancer research.
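    The core conversion underlying such color scales, a CIELUV colour rendered as sRGB output, can be sketched as follows (a D65 white point and the standard IEC 61966-2-1 transfer function are assumed; the function name is illustrative, not the paper's API):

```python
import numpy as np

# D65 white point in u'v' chromaticity coordinates.
UN, VN = 0.19784, 0.46832

def luv_to_srgb(L, u, v):
    """Convert a CIELUV colour (D65 white) to an sRGB triple in [0, 1]."""
    # CIELUV -> XYZ (relative to white with Y_n = 1).
    Y = ((L + 16.0) / 116.0) ** 3 if L > 8 else L * (3.0 / 29.0) ** 3
    up = u / (13.0 * L) + UN if L > 0 else UN
    vp = v / (13.0 * L) + VN if L > 0 else VN
    X = Y * 9.0 * up / (4.0 * vp)
    Z = Y * (12.0 - 3.0 * up - 20.0 * vp) / (4.0 * vp)
    # XYZ -> linear sRGB (IEC 61966-2-1 matrix), then gamma encoding.
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    lin = np.clip(M @ np.array([X, Y, Z]), 0.0, 1.0)
    return tuple(12.92 * c if c <= 0.0031308 else
                 1.055 * c ** (1 / 2.4) - 0.055 for c in lin)
```

    Designing the scale inside CIELUV (perceptually uniform) and emitting sRGB is what lets the display use the values directly, without per-device transformations.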

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, covering both solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance. However, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require high amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. One is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions of the rendering budgets.
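    The traditional ray marching baseline mentioned above can be sketched as follows (midpoint sampling and the function name are illustrative choices; a full renderer would also accumulate in-scattered radiance, not only transmittance):

```python
import math

def ray_march_transmittance(sigma_t, t_max, steps=256):
    """March a ray through a (possibly heterogeneous) medium, accumulating
    the Beer-Lambert transmittance T = exp(-integral of sigma_t dt).
    `sigma_t` is a callable returning the extinction coefficient at
    distance t along the ray."""
    dt = t_max / steps
    optical_depth = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt          # sample the midpoint of each interval
        optical_depth += sigma_t(t) * dt
    return math.exp(-optical_depth)
```

    The step count is exactly the kind of cost/accuracy knob that optimizations for interactive frame rates must tune: fewer, smarter samples for nearly the same image.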

    A computational approach to the quantification of animal camouflage

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2014. Evolutionary pressures have led to some astonishing camouflage strategies in the animal kingdom. Cephalopods like cuttlefish and octopus have mastered a rather unique skill: they can rapidly adapt the way their skin looks in color, texture and pattern, blending in with their backgrounds. Showing a general resemblance to a visual background is one of the many camouflage strategies used in nature. For animals like cuttlefish that can dynamically change the way they look, we would like to be able to determine which camouflage strategy a given pattern serves. For example, does an inexact match to a particular background mean the animal has physiological limitations to the patterns it can show, or is it employing a different camouflage strategy (e.g., disrupting its outline)? This thesis uses a computational and data-driven approach to quantify camouflage patterns of cuttlefish in terms of color and pattern. First, we assess the color match of cuttlefish to the features in its natural background in the eyes of its predators. Then, we study overall body patterns to discover relationships and limitations between chromatic components. To facilitate repeatability of our work by others, we also explore ways for unbiased data acquisition using consumer cameras and conventional spectrometers, which are the optical instruments most commonly used in studies of animal coloration and camouflage. This thesis makes the following contributions: (1) Proposes a methodology for scene-specific color calibration for the use of RGB cameras for accurate and consistent data acquisition. (2) Introduces an equation relating the numerical aperture and diameter of the optical fiber of a spectrometer to measurement distance and angle, quantifying the degree of spectral contamination. (3) Presents the first study assessing the color match of cuttlefish (S. officinalis) to its background using in situ spectrometry. (4) Develops a computational approach to pattern quantification using techniques from computer vision, image processing, statistics and pattern recognition; and introduces Cuttlefish72x5, the first database of calibrated raw (linear) images of cuttlefish. Funding was provided by the National Science Foundation, Office of Naval Research, NIH-NEI, and the Woods Hole Oceanographic Institution Academic Programs Office.
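    Scene-specific colour calibration of the kind named in contribution (1) commonly reduces to fitting a linear correction from photographed colour-chart patches; a minimal sketch under that assumption (function names illustrative, not the thesis's exact method):

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit a 3x3 linear correction M (least squares) so that
    measured RGB @ M.T ~= reference RGB.  Rows are colour-chart patches:
    `measured` as photographed in the scene, `reference` as known."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M.T

def apply_color_correction(M, rgb):
    """Apply the fitted correction to RGB values (last axis is RGB)."""
    return rgb @ M.T
```

    Fitting against patches photographed in the scene itself, rather than a generic camera profile, is what makes the calibration scene-specific and the measurements comparable across images.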

    Multimodal breast imaging: Registration, visualization, and image synthesis

    The benefit of registration and fusion of functional images with anatomical images is well appreciated with the advent of combined positron emission tomography and x-ray computed tomography (PET/CT) scanners. This is especially true in breast cancer imaging, where modalities such as high-resolution and dynamic contrast-enhanced magnetic resonance imaging (MRI) and F-18-FDG positron emission tomography (PET) have steadily gained acceptance in addition to x-ray mammography, the primary detection tool. The increased interest in combined PET/MRI images has created a demand for appropriate registration and fusion algorithms. A new approach to MRI-to-PET non-rigid breast image registration was developed and evaluated based on the location of a small number of fiducial skin markers (FSMs) visible in both modalities. The observed FSM displacement vectors between MRI and PET, distributed piecewise linearly over the breast volume, produce a deformed finite-element mesh that reasonably approximates the non-rigid deformation of the breast tissue between the MRI and PET scans. The method does not require a biomechanical breast tissue model, and is robust and fast. The method was evaluated both qualitatively and quantitatively on patients and a deformable breast phantom, and yields quality images with average target registration error (TRE) below 4 mm. The importance of appropriately jointly displaying (i.e. fusing) the registered images has often been neglected and underestimated. A combined MRI/PET image has the benefit of directly showing the spatial relationships between the two modalities, increasing the sensitivity, specificity, and accuracy of diagnosis. Additional information on morphology and on the dynamic behavior of a suspicious lesion can be provided, allowing more accurate lesion localization, including mapping of hyper- and hypo-metabolic regions, as well as better lesion-boundary definition, improving accuracy when grading the breast cancer and assessing the need for biopsy. Eight promising fusion-for-visualization techniques were evaluated by radiologists from University Hospital in Syracuse, NY. Preliminary results indicate that the radiologists were better able to perform a series of tasks when reading the fused PET/MRI data sets using color tables generated by a newly developed genetic algorithm, as compared to other commonly used schemes. The lack of a known ground truth hinders the development and evaluation of new algorithms for tasks such as registration and classification. A preliminary mesh-based breast phantom containing 12 distinct tissue classes, along with the tissue properties necessary for the simulation of dynamic positron emission tomography scans, was created. The phantom contains multiple components which can be separately manipulated, utilizing geometric transformations, to represent populations or a single individual being imaged in multiple positions. This phantom will support future multimodal breast imaging work.
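    The FSM-based registration hinges on spreading displacement vectors known at a few markers over the whole breast volume. The work above does this piecewise linearly over a finite-element mesh; the sketch below substitutes a simpler inverse-distance weighting with the same input/output contract (all names are illustrative, and this is explicitly not the thesis's method):

```python
import numpy as np

def interpolate_displacement(points, fsm_pos, fsm_disp, power=2.0, eps=1e-9):
    """Estimate a displacement vector at each query point from a handful of
    fiducial skin marker (FSM) displacements, by inverse-distance weighting.
    points: (N, 3) query positions; fsm_pos, fsm_disp: (M, 3) each."""
    # Pairwise distances from every query point to every marker.
    d = np.linalg.norm(points[:, None, :] - fsm_pos[None, :, :], axis=2)
    # Closer markers dominate; eps keeps the weight finite at a marker.
    w = 1.0 / (d ** power + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ fsm_disp                     # (N, 3) interpolated vectors
```

    Any such interpolant reproduces the marker displacements exactly at the markers, which is the minimal sanity check for this class of deformation model.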

    Computer Vision for Marine Environmental Monitoring

    Osterloff J. Computer Vision for Marine Environmental Monitoring. Bielefeld: Universität Bielefeld; 2018. Ocean exploration using imaging techniques has recently become very popular as camera systems have become affordable and the technology has developed further. Marine imaging provides a unique opportunity to monitor the marine environment. Visual exploration using images makes it possible to study the variety of fauna, flora and geological structures of the marine environment. This monitoring creates a bottleneck, as manual evaluation of the large amounts of underwater image data is very time-consuming. Information encapsulated in the images needs to be extracted so that it can be included in statistical analyses. Objects of interest (OOI) have to be localized and identified in the recorded images. In order to overcome the bottleneck, computer vision (CV) is applied in this thesis to extract the image information (semi-)automatically. A pre-evaluation of the images by marking OOIs manually, i.e. the manual annotation process, is necessary to provide examples for the applied CV methods. Five major challenges are identified in this thesis in applying CV for marine environmental monitoring. The challenges can be grouped into those caused by underwater image acquisition and those caused by the use of manual annotations for machine learning (ML). The image acquisition challenges are the optical properties challenge, e.g. wavelength-dependent attenuation underwater, and the dynamics of these properties, as different amounts of matter in the water column affect colors and illumination in the images. The manual annotation challenges for applying ML to underwater images are the low number of available manual annotations, the quality of the annotations in terms of correctness and reproducibility, and their spatial uncertainty. The latter is caused by allowing a spatial uncertainty to speed up the manual annotation process, e.g. using point annotations instead of fully outlining OOIs at the pixel level. The challenges are resolved individually in four different new CV approaches. The individual CV approaches make it possible to extract new biologically relevant information from time-series images recorded underwater. Manual annotations provide the ground truth for the CV systems and therefore for the included ML. Placing annotations manually in underwater images is a challenging task. In order to assess their quality in terms of correctness and reproducibility, a detailed quality assessment for manual annotations is presented. This includes the computation of a gold standard to increase the quality of the ground truth for the ML. In the individually tailored CV systems, different ML algorithms are applied and adapted for marine environmental monitoring purposes. The applied ML algorithms cover a broad variety from unsupervised to supervised methods, including deep learning algorithms. Depending on the biologically motivated research question, the systems are evaluated individually. The first two CV systems were developed for the _in-situ_ monitoring of the sessile species _Lophelia pertusa_. Visual information on the cold-water coral is extracted automatically from time-series images recorded by a fixed underwater observatory (FUO) located at 260 m depth and 22 km off the Norwegian coast. The color change of a cold-water coral reef over time is quantified and the polyp activity of the imaged coral is estimated (semi-)automatically. The systems allow, for the first time, documentation of an _in-situ_ color change of a _Lophelia pertusa_ coral reef and estimation of the polyp activity over half a year with a temporal resolution of one hour. The third CV system presented in this thesis monitors the mobile species shrimp _in-situ_. Shrimp are semitransparent, creating additional challenges for localization and identification in images using CV. Shrimp are localized and identified in time-series images recorded by the same FUO. Changes in spatial distribution and temporal occurrence are observed by comparing two different time periods. The last CV system presented in this thesis was developed to quantify the impact of sedimentation on calcareous algae samples in a _wet-lab_ experiment. The size and color change of the imaged samples over time is quantified using a consumer camera and a color reference plate placed in the field of view of each recorded image. Extracting biologically relevant information from underwater images is only the first step for marine environmental monitoring. The extracted image information, such as behavior or color change, needs to be related to other environmental parameters. Therefore, data science methods are also applied in this thesis to unveil some of the relations between individual species' information extracted semi-automatically from underwater images and other environmental parameters.
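    The colour-reference-plate step can be sketched as a per-channel gain correction (a deliberately minimal stand-in with illustrative names; the thesis's actual correction may be more elaborate):

```python
import numpy as np

def normalize_with_reference(image, patch_rgb, patch_true):
    """Per-channel linear correction using a colour reference plate placed
    in the field of view: scale each channel so the imaged reference patch
    matches its known value, making colours comparable across a time series
    of images taken under drifting illumination."""
    gain = np.asarray(patch_true, float) / np.asarray(patch_rgb, float)
    return image * gain          # broadcasts over the trailing RGB axis
```

    With every frame normalized against the same physical plate, a measured colour change of the sample reflects the sample, not the changing light or water column.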

    Control mechanisms for the procedural generation of visual pattern designs


    Exploratory search in time-oriented primary data

    In a variety of research fields, primary data that describes scientific phenomena in an original condition is obtained. Time-oriented primary data, in particular, is an indispensable data type, derived from complex measurements depending on time. Today, time-oriented primary data is collected at rates that exceed the domain experts’ abilities to seek valuable information undiscovered in the data. It is widely accepted that these masses of uninvestigated data will disclose tremendous knowledge in data-driven research, provided that domain experts are able to gain insight into the data. Domain experts involved in data-driven research urgently require analytical capabilities. In scientific practice, the predominant activities are the generation and validation of hypotheses. In analytical terms, these activities are often expressed as confirmatory and exploratory data analysis. Ideally, analytical support would combine the strengths of both types of activities. Exploratory search (ES) is a concept that seamlessly includes information-seeking behaviors ranging from search to exploration. ES supports domain experts both in gaining an understanding of huge and potentially unknown data collections and in the drill-down to relevant subsets, e.g., to validate hypotheses. As such, ES combines the predominant tasks of domain experts applied to data-driven research. For the design of useful and usable ES systems (ESS), data scientists have to incorporate different sources of knowledge and technology. Of particular importance is the state-of-the-art in interactive data visualization and data analysis. Research on these factors is at the heart of Information Visualization (IV) and Visual Analytics (VA). Approaches in IV and VA provide meaningful visualization and interaction designs, allowing domain experts to perform the information-seeking process in an effective and efficient way. 
    Today, best-practice ESS almost exclusively exist for textual data content, e.g., put into practice in digital libraries to facilitate the reuse of digital documents. For time-oriented primary data, ES mainly remains at a theoretical stage. Motivation and Problem Statement. This thesis is motivated by two main assumptions. First, we expect that ES will have a tremendous impact on data-driven research in many research fields. In this thesis, we focus on time-oriented primary data, as a complex and important data type for data-driven research. Second, we assume that research conducted in IV and VA will particularly facilitate ES. For time-oriented primary data, however, novel concepts and techniques are required that enhance the design and the application of ESS. In particular, we observe a lack of methodological research in ESS for time-oriented primary data. In addition, the size, the complexity, and the quality of time-oriented primary data hamper content-based access, as well as the design of visual interfaces for gaining an overview of the data content. Furthermore, the question arises of how ESS can incorporate techniques for seeking relations between data content and metadata to foster data-driven research. Overarching challenges for data scientists are to create usable and useful designs, urgently requiring the involvement of the targeted user group, and support techniques for choosing meaningful algorithmic models and model parameters. Throughout this thesis, we will resolve these challenges from conceptual, technical, and systemic perspectives. In turn, domain experts can benefit from novel ESS as a powerful analytical support to conduct data-driven research. Concepts for Exploratory Search Systems (Chapter 3). We postulate concepts for ES in time-oriented primary data. Based on a survey of analysis tasks supported in IV and VA research, we present a comprehensive selection of tasks and techniques relevant for search and exploration activities. 
    The assembly guides data scientists in the choice of meaningful techniques presented in IV and VA. Furthermore, we present a reference workflow for the design and the application of ESS for time-oriented primary data. The workflow divides the data processing and transformation process into four steps, and thus divides the complexity of the design space into manageable parts. In addition, the reference workflow describes how users can be involved in the design. The reference workflow is the framework for the technical contributions of this thesis. Visual-Interactive Preprocessing of Time-Oriented Primary Data (Chapter 4). We present a visual-interactive system that enables users to construct workflows for preprocessing time-oriented primary data. In this way, we introduce a means of providing content-based access. Based on a rich set of preprocessing routines, users can create individual solutions for data cleansing, normalization, segmentation, and other preprocessing tasks. In addition, the system supports the definition of time series descriptors and time series distance measures. Guidance concepts support users in assessing the workflow generalizability, which is important for large data sets. The execution of the workflows transforms time-oriented primary data into feature vectors, which can subsequently be used for downstream search and exploration techniques. We demonstrate the applicability of the system in usage scenarios and case studies. Content-Based Overviews (Chapter 5). We introduce novel guidelines and techniques for the design of content-based overviews. The three key factors are the creation of meaningful data aggregates, the visual mapping of these aggregates into the visual space, and the view transformation providing layouts of these aggregates in the display space. For each of these steps, we characterize important visualization and interaction design parameters allowing the involvement of users. 
    We introduce guidelines supporting data scientists in choosing meaningful solutions. In addition, we present novel visual-interactive quality assessment techniques enhancing the choice of algorithmic model and model parameters. Finally, we present visual interfaces enabling users to formulate visual queries of the time-oriented data content. In this way, we provide means of combining content-based exploration with content-based search. Relation Seeking Between Data Content and Metadata (Chapter 6). We present novel visual interfaces enabling domain experts to seek relations between data content and metadata. These interfaces can be integrated into ESS to bridge analytical gaps between the data content and attached metadata. In three different approaches, we focus on different types of relations and define algorithmic support to guide users towards the most interesting relations. Furthermore, each of the three approaches comprises individual visualization and interaction designs, enabling users to explore both the data and the relations in an efficient and effective way. We demonstrate the applicability of our interfaces with usage scenarios, each conducted together with domain experts. The results confirm that our techniques are beneficial for seeking relations between data content and metadata, particularly for data-centered research. Case Studies - Exploratory Search Systems (Chapter 7). In two case studies, we put our concepts and techniques into practice. We present two ESS constructed in design studies with real users, real ES tasks, and real time-oriented primary data collections. The web-based VisInfo ESS is a digital library system facilitating visual access to time-oriented primary data content. A content-based overview enables users to explore large collections of time series measurements and serves as a baseline for content-based queries by example. In addition, VisInfo provides a visual interface for querying time-oriented data content by sketch. 
    A result visualization combines different views of the data content and metadata with faceted search functionality. The MotionExplorer ESS supports domain experts in human motion analysis. Two content-based overviews enhance the exploration of large collections of human motion capture data from two perspectives. MotionExplorer provides a search interface, allowing domain experts to query human motion sequences by example. Retrieval results are depicted in a visual-interactive view enabling the exploration of variations of human motions. Field study evaluations performed for both ESS confirm the applicability of the systems in the environment of the involved user groups. The systems yield a significant improvement of both the effectiveness and the efficiency in the day-to-day work of the domain experts. As such, both ESS demonstrate how large collections of time-oriented primary data can be reused to enhance data-centered research. In essence, our contributions cover the entire time series analysis process, starting from accessing raw time-oriented primary data, through processing and transforming time series data, to visual-interactive analysis of time series. We present visual search interfaces providing content-based access to time-oriented primary data. In a series of novel exploration-support techniques, we facilitate both gaining an overview of large and complex time-oriented primary data collections and seeking relations between data content and metadata. Throughout this thesis, we introduce VA as a means of designing effective and efficient visual-interactive systems. Our VA techniques empower data scientists to choose appropriate models and model parameters, as well as to involve users in the design. With both principles, we support the design of usable and useful interfaces which can be included into ESS. 
In this way, our contributions bridge the gap between search systems requiring exploration support and exploratory data analysis systems requiring visual querying capability. In the ESS presented in two case studies, we prove that our techniques and systems support data-driven research in an efficient and effective way.
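    The preprocessing workflow of Chapter 4 (cleansing, normalization, then descriptor computation that turns a time series into a feature vector for downstream search) can be sketched as follows; the specific routines shown, gap interpolation, z-normalization, and piecewise means, are illustrative picks from the rich set such a system offers:

```python
import numpy as np

def preprocess(series, n_segments=8):
    """Turn a raw time series into a fixed-length feature vector:
    cleanse -> normalize -> compute a descriptor."""
    x = np.array(series, dtype=float)       # copy; we mutate below
    # Cleansing: linearly interpolate over missing values (NaNs).
    bad = np.isnan(x)
    if bad.any():
        x[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(~bad), x[~bad])
    # Normalization: zero mean, unit variance.
    x = (x - x.mean()) / (x.std() + 1e-12)
    # Descriptor: the mean of each of n_segments equal slices
    # (a piecewise aggregate, so series of any length become comparable).
    return np.array([seg.mean() for seg in np.array_split(x, n_segments)])

def distance(a, b):
    """Euclidean distance between two feature vectors, usable as a simple
    time series distance measure for query-by-example."""
    return float(np.linalg.norm(a - b))
```

    Once every series in a collection is reduced to such a vector, content-based overviews and queries by example reduce to nearest-neighbor search in feature space.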