Autoplot: A browser for scientific data on the web
Autoplot is software developed for the Virtual Observatories in Heliophysics
to provide intelligent and automated plotting capabilities for many typical
data products that are stored in a variety of file formats or databases.
Autoplot has proven to be a flexible tool for exploring, accessing, and viewing
data resources as typically found on the web, usually in the form of a
directory containing data files with multiple parameters contained in each
file. Data from a data source is abstracted into a common internal data model
called QDataSet. Autoplot is built from individually useful components that can
be extended and reused to create specialized data handling and analysis tools,
and it is already used in a variety of science visualization and analysis
applications. Although originally developed for viewing
heliophysics-related time series and spectrograms, its flexible and generic
data representation model makes it potentially useful for the Earth sciences.
Comment: 16 page
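The abstraction into a common internal data model can be pictured with a short sketch. This is an illustrative plain-Python analogue of the idea, not Autoplot's actual Java API; the class and function names below are invented for illustration only.

```python
# Illustrative sketch (hypothetical names, not Autoplot's QDataSet API):
# each file format gets an adapter that produces one common dataset type,
# so plotting code never depends on the source format.

class CommonDataSet:
    """Minimal common data model: values plus named metadata."""
    def __init__(self, values, units="", label=""):
        self.values = list(values)
        self.units = units
        self.label = label

def read_csv_column(text: str, column: int) -> CommonDataSet:
    """Adapter for one source type: a CSV column becomes a CommonDataSet."""
    header, *rows = text.strip().splitlines()
    label = header.split(",")[column]
    values = [float(r.split(",")[column]) for r in rows]
    return CommonDataSet(values, label=label)

# Downstream plotting code depends only on CommonDataSet:
ds = read_csv_column("time,flux\n0,1.5\n1,2.5\n2,3.0", column=1)
```

Additional adapters (CDF, HDF, databases) would return the same type, which is the decoupling the abstract describes.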
Using high resolution displays for high resolution cardiac data
The ability to perform fast, accurate, high resolution visualization is fundamental
to improving our understanding of anatomical data. As the volumes of data
increase from improvements in scanning technology, the methods applied to rendering
and visualization must evolve. In this paper we address the interactive display of
data from high resolution MRI scanning of a rabbit heart and subsequent histological
imaging. We describe a visualization environment involving a tiled LCD panel
display wall and associated software which provide an interactive and intuitive user
interface.
The oView software is an OpenGL application which is written for the VRJuggler
environment. This environment abstracts displays and devices away from the
application itself, aiding portability between different systems, from desktop PCs to
multi-tiled display walls. Portability between display walls has been demonstrated
through its use on walls at both Leeds and Oxford Universities. We discuss important
factors to be considered for interactive 2D display of large 3D datasets,
including the use of intuitive input devices and level-of-detail aspects.
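The level-of-detail point can be made concrete with a small sketch. The heuristic below is a generic mip-level selection assumed for illustration, not oView's actual algorithm.

```python
import math

# Illustrative level-of-detail choice (an assumption, not oView's code):
# pick the mip level whose sample size best matches on-screen pixel size,
# so a zoomed-out view never streams full-resolution data.

def choose_mip_level(dataset_px: int, displayed_px: int, n_levels: int) -> int:
    """Return the mip level (0 = full resolution) for a dataset slice of
    dataset_px samples drawn across displayed_px screen pixels."""
    if displayed_px <= 0:
        return n_levels - 1
    ratio = dataset_px / displayed_px          # samples per screen pixel
    level = max(0, math.floor(math.log2(max(ratio, 1.0))))
    return min(level, n_levels - 1)
```

On a tiled display wall, `displayed_px` is large, so higher-resolution levels are selected than on a desktop PC showing the same dataset.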
Improving perceptual multimedia quality with an adaptable communication protocol
Copyright 2005 University Computing Centre Zagreb.
Innovations and developments in networking technology have been driven by technical considerations with little analysis of the benefit to the user. In this paper we argue that the network parameters that define network Quality of Service (QoS) must be driven by user-centric parameters such as user expectations and requirements for multimedia transmitted over a network. To this end, a mechanism for mapping user-oriented parameters to network QoS parameters is outlined. The paper surveys existing methods for mapping user requirements to the network. An adaptable communication system is implemented to validate the mapping. The architecture adapts to varying network conditions caused by congestion so as to maintain user expectations and requirements. The paper also surveys research in the area of adaptable communication architectures and protocols. Our results show that such a user-biased approach to networking does bring tangible benefits to the user.
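The user-to-network mapping idea can be sketched in a few lines. The profiles and thresholds below are invented for illustration and are not the paper's actual mapping.

```python
# Hypothetical sketch of mapping a user-level quality target to network
# QoS parameters. All numbers here are illustrative assumptions.

def map_user_to_network_qos(quality: int) -> dict:
    """Map a user-perceived quality level (1 = poor .. 5 = excellent)
    to candidate network QoS parameters for a multimedia stream."""
    if not 1 <= quality <= 5:
        raise ValueError("quality must be between 1 and 5")
    profiles = {
        1: {"bandwidth_kbps": 256,  "max_delay_ms": 400, "max_loss_pct": 5.0},
        2: {"bandwidth_kbps": 512,  "max_delay_ms": 300, "max_loss_pct": 3.0},
        3: {"bandwidth_kbps": 1024, "max_delay_ms": 200, "max_loss_pct": 1.0},
        4: {"bandwidth_kbps": 2048, "max_delay_ms": 150, "max_loss_pct": 0.5},
        5: {"bandwidth_kbps": 4096, "max_delay_ms": 100, "max_loss_pct": 0.1},
    }
    return profiles[quality]

def adapt_to_congestion(quality: int, measured_loss_pct: float) -> int:
    """Degrade the quality target while measured loss exceeds what the
    current profile tolerates, mimicking the adaptive architecture."""
    while quality > 1 and measured_loss_pct > map_user_to_network_qos(quality)["max_loss_pct"]:
        quality -= 1
    return quality
```

The adaptation loop captures the paper's core claim: the protocol reacts to congestion in terms of the user-facing quality level, not raw network counters.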
Continuity in cognition
Designing for continuous interaction requires
designers to consider the way in which human users can
perceive and evaluate an artefact’s observable behaviour,
in order to make inferences about its state, and to plan and
execute their own continuous behaviour. Understanding
the human point of view in continuous interaction requires
an understanding of human causal reasoning, of
the way in which humans perceive and structure the
world, and of human cognition. We present a framework
for representing human cognition, and show briefly how it
relates to the analysis of structure in continuous interaction,
and the ways in which it may be applied in design.
Building Near-Real-Time Processing Pipelines with the Spark-MPI Platform
Advances in detectors and computational technologies provide new
opportunities for applied research and the fundamental sciences. Concurrently,
dramatic increases in the three Vs (Volume, Velocity, and Variety) of
experimental data and the scale of computational tasks have created a demand for
new real-time processing systems at experimental facilities. Recently, this
demand was addressed by the Spark-MPI approach connecting the Spark
data-intensive platform with the MPI high-performance framework. In contrast
with existing data management and analytics systems, Spark introduced a new
middleware based on resilient distributed datasets (RDDs), which decoupled
various data sources from high-level processing algorithms. The RDD middleware
significantly broadened the scope of data-intensive applications, ranging from
SQL queries to machine learning to graph processing. Spark-MPI further extended
the Spark ecosystem with MPI applications using the Process Management
Interface. The paper explores this integrated platform within the context of
online ptychographic and tomographic reconstruction pipelines.
Comment: New York Scientific Data Summit, August 6-9, 201
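The decoupling that RDDs provide can be illustrated schematically. The toy class below is plain Python, not actual Spark code; it only imitates the lazy-transformation idea with invented names.

```python
# Schematic sketch (plain Python, NOT Spark) of the RDD idea: a lazy
# lineage of transformations that decouples the data source from the
# high-level processing algorithm.

class MiniRDD:
    """Toy stand-in for a resilient distributed dataset: records a
    lineage of transformations and materializes only on collect()."""
    def __init__(self, source, lineage=None):
        self._source = source          # any callable yielding records
        self._lineage = lineage or []  # queued transformations

    def map(self, fn):
        return MiniRDD(self._source, self._lineage + [("map", fn)])

    def filter(self, pred):
        return MiniRDD(self._source, self._lineage + [("filter", pred)])

    def collect(self):
        data = list(self._source())
        for kind, fn in self._lineage:
            data = [fn(x) for x in data] if kind == "map" else [x for x in data if fn(x)]
        return data

# Any detector frame stream could be plugged in as the source:
frames = MiniRDD(lambda: range(6))
result = frames.filter(lambda x: x % 2 == 0).map(lambda x: x * x).collect()
```

In real Spark the lineage is partitioned across a cluster and recomputed on failure; here it only shows why SQL, machine learning, and MPI stages can share one source abstraction.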
Multi-contrast imaging and digital refocusing on a mobile microscope with a domed LED array
We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope, a low-cost, smartphone-based point-of-care microscope. We replace the single-LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing are performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities.
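The multi-contrast synthesis can be sketched numerically. The function below is a generic differential-phase-contrast-style combination of per-LED images assumed for illustration; the angle threshold and all names are assumptions, not the device's actual processing pipeline.

```python
import numpy as np

# Illustrative sketch: combining one image per LED into brightfield,
# darkfield, and a DPC-style phase-contrast image. The NA threshold and
# function names are assumptions, not the CellScope add-on's firmware.

def multi_contrast(imgs, angles_deg, objective_na_angle=20.0):
    """imgs: (n_leds, H, W) stack, one image per LED.
    angles_deg: (n_leds, 2) illumination angles as (polar, azimuth)."""
    imgs = np.asarray(imgs, dtype=float)
    angles = np.asarray(angles_deg, dtype=float)
    inner = angles[:, 0] <= objective_na_angle   # LEDs inside the objective NA
    left = inner & (angles[:, 1] < 180.0)
    right = inner & (angles[:, 1] >= 180.0)

    brightfield = imgs[inner].mean(axis=0)       # inner LEDs: transmitted light
    darkfield = imgs[~inner].mean(axis=0)        # outer LEDs: scattered light only
    dpc = (imgs[left].mean(axis=0) - imgs[right].mean(axis=0)) / (brightfield + 1e-9)
    return brightfield, darkfield, dpc

# Tiny synthetic example: four LEDs, two inside and two outside the NA.
imgs = np.stack([2 * np.ones((2, 2)), 4 * np.ones((2, 2)),
                 np.ones((2, 2)), 3 * np.ones((2, 2))])
angles = [(10, 90), (10, 270), (30, 90), (30, 270)]
bf, df, dpc = multi_contrast(imgs, angles)
```

Because every contrast mode is computed from the same angle-scanned stack, no hardware change is needed to switch modes, which is the point the abstract makes.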
Distributed Object Medical Imaging Model
Abstract: Digital medical informatics and images are commonly used in hospitals today. Because of the interrelatedness of the radiology department and other departments, especially the intensive care unit and the emergency department, the transmission and sharing of medical images has become a critical issue. Our research group has developed a Java-based Distributed Object Medical Imaging Model (DOMIM) to facilitate the rapid development and deployment of medical imaging applications in a distributed environment that can be shared and used by related departments and mobile physicians. DOMIM is a unique suite of multimedia telemedicine applications developed for use by medical organizations. The applications support real-time exchange of patient data, image files, and audio and video diagnosis annotations. DOMIM enables joint collaboration between radiologists and physicians while they are at distant geographical locations. The DOMIM environment consists of heterogeneous, autonomous, and legacy resources. The Common Object Request Broker Architecture (CORBA), Java Database Connectivity (JDBC), and the Java language provide the capability to combine the DOMIM resources into an integrated, interoperable, and scalable system. The underlying technology, including IDL, ORB, the Event Service, IIOP, JDBC/ODBC, legacy system wrapping, and the Java implementation, is explored. This paper explores a distributed, collaborative CORBA/JDBC-based framework that will enhance medical information management and development. It encompasses a new paradigm for the delivery of health services that requires process reengineering, cultural changes, and organizational changes.
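The distributed-object pattern itself can be shown in miniature. The sketch below uses Python's standard-library XML-RPC as a stand-in for CORBA/IIOP; the service and method names are hypothetical, not DOMIM's actual IDL interface.

```python
# Minimal distributed-object sketch (XML-RPC standing in for CORBA):
# a remote imaging service exposes methods that clients in other
# departments invoke over the network. Names are hypothetical.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

class ImagingService:
    """Remote object: stores diagnosis annotations keyed by patient ID."""
    def __init__(self):
        self._annotations = {}

    def add_annotation(self, patient_id, text):
        self._annotations.setdefault(patient_id, []).append(text)
        return True

    def get_annotations(self, patient_id):
        return self._annotations.get(patient_id, [])

# Serve the object in the background on an OS-assigned port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_instance(ImagingService())
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "radiology workstation" client elsewhere on the network:
proxy = ServerProxy(f"http://127.0.0.1:{port}")
proxy.add_annotation("patient-42", "possible lesion, left ventricle")
notes = proxy.get_annotations("patient-42")
server.shutdown()
```

In CORBA the same shape appears as an IDL-declared interface, an ORB in place of the HTTP transport, and IIOP on the wire; the client still calls remote methods as if the object were local.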
EPiK: A Workflow for Electron Tomography in Kepler
Scientific workflows integrate data and computing interfaces as configurable, semi-automatic graphs to solve a scientific problem. Kepler is such a software system for designing, executing, reusing, evolving, archiving and sharing scientific workflows. Electron tomography (ET) enables high-resolution views of complex cellular structures, such as cytoskeletons, organelles, viruses and chromosomes. Imaging investigations produce large datasets. For instance, in electron tomography, the size of a 16-fold image tilt series is about 65 gigabytes, with each projection image comprising 4096 by 4096 pixels. When serial sections or montage techniques are used for large-field ET, the dataset becomes even larger. For higher-resolution images with multiple tilt series, the data size may be in the terabyte range. The demands of mass data processing and complex algorithms require the integration of diverse codes into flexible software structures. This paper describes a workflow for Electron Tomography Programs in Kepler (EPiK). The EPiK workflow embeds the tracking process of IMOD and implements the main algorithms, including filtered backprojection (FBP) from TxBR and iterative reconstruction methods. We have tested the three-dimensional (3D) reconstruction process using EPiK on ET data. EPiK can be a potential toolkit for biology researchers, with the advantages of logical viewing, easy handling, convenient sharing and future extensibility.
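The filtered backprojection step can be sketched generically. This is a textbook parallel-beam FBP in NumPy, assumed for illustration; it is not TxBR's actual implementation, which handles the more general tilt-series geometry.

```python
import numpy as np

# Hedged sketch of filtered backprojection (FBP): a generic textbook
# parallel-beam version, not TxBR's code. Ramp-filter each projection
# in the Fourier domain, then smear it back across the image grid.

def fbp(sinogram, angles_deg):
    """Reconstruct an n x n slice from an (n_angles, n) parallel-beam sinogram."""
    n_angles, n = sinogram.shape
    # Ram-Lak (ramp) filter applied per projection in the Fourier domain.
    freqs = np.fft.fftfreq(n)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
    # Backproject each filtered projection along its viewing angle.
    grid = np.arange(n) - n / 2
    xx, yy = np.meshgrid(grid, grid)
    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = xx * np.cos(theta) + yy * np.sin(theta) + n / 2
        recon += np.interp(t.ravel(), np.arange(n), proj, left=0, right=0).reshape(n, n)
    return recon * np.pi / n_angles

# Synthetic check: a point at the slice centre projects to the central
# detector bin at every angle, and FBP should recover a central peak.
sino = np.zeros((60, 64))
sino[:, 32] = 1.0
slice_2d = fbp(sino, np.linspace(0.0, 180.0, 60, endpoint=False))
```

Iterative methods, the workflow's other reconstruction family, instead refine an estimate by repeatedly comparing its reprojections with the measured tilt series.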