Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences
Results: We present an application that enables the quantitative analysis of
multichannel 5-D (x, y, z, t, channel) and large montage confocal fluorescence
microscopy images. The image sequences show stem cells together with blood
vessels, enabling quantification of the dynamic behaviors of stem cells in
relation to their vascular niche, with applications in developmental and cancer
biology. Our application automatically segments, tracks, and lineages the image
sequence data and then allows the user to view and edit the results of
automated algorithms in a stereoscopic 3-D window while simultaneously viewing
the stem cell lineage tree in a 2-D window. Using the GPU to store and render
the image sequence data enables a hybrid computational approach. An
inference-based approach utilizing user-provided edits to automatically correct
related mistakes executes interactively on the system CPU while the GPU handles
3-D visualization tasks. Conclusions: By exploiting commodity computer gaming
hardware, we have developed an application that can be run in the laboratory to
facilitate rapid iteration through biological experiments. There is a pressing
need for visualization and analysis tools for 5-D live cell image data. We
combine accurate unsupervised processes with an intuitive visualization of the
results. Our validation interface allows for each data set to be corrected to
100% accuracy, ensuring that downstream data analysis is accurate and
verifiable. Our tool is the first to combine all of these aspects, leveraging
the synergies obtained by utilizing validation information from stereo
visualization to improve the low-level image processing tasks.
Comment: BioVis 2014 conference
PlaceRaider: Virtual Theft in Physical Spaces with Smartphones
As smartphones become more pervasive, they are increasingly targeted by
malware. At the same time, each new generation of smartphone features
increasingly powerful onboard sensor suites. A new strain of sensor malware has
been developing that leverages these sensors to steal information from the
physical environment (e.g., researchers have recently demonstrated how malware
can listen for spoken credit card numbers through the microphone, or feel
keystroke vibrations using the accelerometer). Yet the possibilities of what
malware can see through a camera have been understudied. This paper introduces
a novel visual malware called PlaceRaider, which allows remote attackers to
engage in remote reconnaissance and what we call virtual theft. Through
completely opportunistic use of the camera on the phone and other sensors,
PlaceRaider constructs rich, three-dimensional models of indoor environments.
Remote burglars can thus download the physical space, study the environment
carefully, and steal virtual objects from the environment (such as financial
documents, information on computer monitors, and personally identifiable
information). Through two human subject studies we demonstrate the
effectiveness of using mobile devices as powerful surveillance and virtual
theft platforms, and we suggest several possible defenses against visual
malware.
Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images
The quality of modern astronomical data, the power of modern computers and
the agility of current image-processing software enable the creation of
high-quality images in a purely digital form. Together, these technological
advancements have created a new ability to make color astronomical images and,
in many ways, have led to a new philosophy of how to create
them. A practical guide is presented on how to generate astronomical images
from research data with powerful image-processing programs. These programs use
a layering metaphor that allows for an unlimited number of astronomical
datasets to be combined in any desired color scheme, creating an immense
parameter space to be explored using an iterative approach. Several examples of
image creation are presented.
A philosophy is also presented on how to use color and composition to create
images that simultaneously highlight scientific detail and are aesthetically
appealing. This philosophy is necessary because most datasets do not correspond
to the wavelength range of sensitivity of the human eye. The use of visual
grammar, defined as the elements which affect the interpretation of an image,
can maximize the richness and detail in an image while maintaining scientific
accuracy. By properly using visual grammar, one can imply qualities that a
two-dimensional image intrinsically cannot show, such as depth, motion and
energy. In addition, composition can be used to engage viewers and keep them
interested for a longer period of time. The use of these techniques can result
in a striking image that will effectively convey the science within the image,
to scientists and to the public.
Comment: 104 pages, 38 figures, submitted to A
Mapping hybrid functional-structural connectivity traits in the human connectome
One of the crucial questions in neuroscience is how a rich functional
repertoire of brain states relates to its underlying structural organization.
How to study the associations between these structural and functional layers is
an open problem that involves novel conceptual ways of tackling this question.
We propose an extension of the Connectivity Independent Component Analysis
(connICA) framework to identify joint structural-functional connectivity
traits. This extension integrates structural and functional
connectomes by merging them into common hybrid connectivity patterns that
represent the connectivity fingerprint of a subject. We test this extended
approach on the 100 unrelated subjects from the Human Connectome Project. The
method is able to extract main independent structural-functional connectivity
patterns from the entire cohort that are sensitive to the realization of
different tasks. The hybrid connICA extracted two main task-sensitive hybrid
traits: the first encompasses the within- and between-network connections of dorsal
attentional and visual areas, as well as fronto-parietal circuits; the second
mainly encompasses the connectivity between visual, attentional, DMN and
subcortical networks. Overall, these findings confirm the potential of the
hybrid connICA for the compression of structural/functional connectomes into
integrated patterns from a set of individual brain networks.
Comment: article: 34 pages, 4 figures; supplementary material: 5 pages, 5
figures