Multi-Perspective, Simultaneous Embedding
We describe MPSE: a Multi-Perspective Simultaneous Embedding method for
visualizing high-dimensional data, based on multiple pairwise distances between
the data points. Specifically, MPSE computes positions for the points in 3D and
provides different views into the data by means of 2D projections (planes) that
preserve each of the given distance matrices. We consider two versions of the
problem: fixed projections and variable projections. MPSE with fixed
projections takes as input a set of pairwise distance matrices defined on the
data points, along with the same number of projections and embeds the points in
3D so that the pairwise distances are preserved in the given projections. MPSE
with variable projections takes as input a set of pairwise distance matrices
and embeds the points in 3D while also computing the appropriate projections
that preserve the pairwise distances. The proposed approach can be useful in
multiple scenarios: from creating simultaneous embedding of multiple graphs on
the same set of vertices, to reconstructing a 3D object from multiple 2D
snapshots, to analyzing data from multiple points of view. We provide a
functional prototype of MPSE that is based on an adaptive and stochastic
generalization of multi-dimensional scaling to multiple distances and multiple
variable projections. We provide an extensive quantitative evaluation with
datasets of different sizes and different numbers of projections, as well as
several examples that illustrate the quality of the resulting solutions.
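The fixed-projection objective described above can be sketched as a sum of MDS-style stress terms, one per (projection, distance matrix) pair. The following is a minimal illustration based on the abstract, not the authors' exact formulation; the function and variable names are assumptions for the sketch.

```python
import numpy as np

def mpse_stress(X, projections, distance_matrices):
    """Sum of MDS-style stress terms, one per (projection, distance matrix) pair.

    X : (n, 3) array of 3D point positions.
    projections : list of (2, 3) projection matrices.
    distance_matrices : list of (n, n) target pairwise distance matrices.
    """
    total = 0.0
    for P, D in zip(projections, distance_matrices):
        Y = X @ P.T                        # project the 3D points to 2D
        diff = Y[:, None, :] - Y[None, :, :]
        d = np.linalg.norm(diff, axis=-1)  # pairwise 2D distances
        total += np.sum((d - D) ** 2) / 2  # count each pair once
    return total

# Toy usage: two axis-aligned views of four random 3D points, with target
# distances taken from the true projections so the stress is exactly zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
P1 = np.array([[1., 0., 0.], [0., 1., 0.]])  # drop z
P2 = np.array([[1., 0., 0.], [0., 0., 1.]])  # drop y
D1 = np.linalg.norm((X @ P1.T)[:, None] - (X @ P1.T)[None], axis=-1)
D2 = np.linalg.norm((X @ P2.T)[:, None] - (X @ P2.T)[None], axis=-1)
print(mpse_stress(X, [P1, P2], [D1, D2]))  # → 0.0
```

In the variable-projections version, the matrices P would themselves be optimized alongside X rather than held fixed.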
A New Perspective on Clustered Planarity as a Combinatorial Embedding Problem
The clustered planarity problem (c-planarity) asks whether a hierarchically
clustered graph admits a planar drawing such that the clusters can be nicely
represented by regions. We introduce the cd-tree data structure and give a new
characterization of c-planarity. It leads to efficient algorithms for
c-planarity testing in the following cases. (i) Every cluster and every
co-cluster (complement of a cluster) has at most two connected components. (ii)
Every cluster has at most five outgoing edges.
Moreover, the cd-tree reveals interesting connections between c-planarity and
planarity with constraints on the order of edges around vertices. On the one
hand, this gives rise to a number of new open problems related to c-planarity;
on the other hand, it provides a new perspective on previous results. (17
pages, 2 figures)
Time-Contrastive Networks: Self-Supervised Learning from Video
We propose a self-supervised approach for learning representations and
robotic behaviors entirely from unlabeled videos recorded from multiple
viewpoints, and study how this representation can be used in two robotic
imitation settings: imitating object interactions from videos of humans, and
imitating human poses. Imitation of human behavior requires a
viewpoint-invariant representation that captures the relationships between
end-effectors (hands or robot grippers) and the environment, object attributes,
and body pose. We train our representations using a metric learning loss, where
multiple simultaneous viewpoints of the same observation are attracted in the
embedding space, while being repelled from temporal neighbors which are often
visually similar but functionally different. In other words, the model
simultaneously learns to recognize what is common between different-looking
images, and what is different between similar-looking images. This signal
causes our model to discover attributes that do not change across viewpoint,
but do change across time, while ignoring nuisance variables such as
occlusions, motion blur, lighting and background. We demonstrate that this
representation can be used by a robot to directly mimic human poses without an
explicit correspondence, and that it can be used as a reward function within a
reinforcement learning algorithm. While representations are learned from an
unlabeled collection of task-related videos, robot behaviors such as pouring
are learned by watching a single 3rd-person demonstration by a human. Reward
functions obtained by following the human demonstrations under the learned
representation enable efficient reinforcement learning that is practical for
real-world robotic systems. Video results, open-source code and dataset are
available at https://sermanet.github.io/imitat
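The metric learning loss described above can be sketched as a triplet objective: embeddings of the same moment captured from two viewpoints are pulled together, while an embedding of a temporally nearby frame is pushed away by a margin. The function name and margin value below are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def time_contrastive_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss on embedding vectors.

    anchor   : embedding of frame t from camera 1
    positive : embedding of the same frame t from camera 2 (co-occurring view)
    negative : embedding of a temporally nearby frame, often visually similar
    """
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)

# Usage: a well-trained embedding places co-occurring views closer together
# than temporal neighbors, driving the loss to zero.
a = np.array([1.0, 0.0])
p = np.array([1.0, 0.1])   # same moment, other viewpoint
n = np.array([0.0, 1.0])   # nearby moment, functionally different
print(time_contrastive_loss(a, p, n))  # → 0.0
```

Minimizing this loss over many triplets is what encourages the representation to be viewpoint-invariant while remaining sensitive to change over time.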
Co-axial wet-spinning in 3D Bioprinting: state of the art and future perspective of microfluidic integration
Nowadays, 3D bioprinting technologies are rapidly emerging in the field of tissue engineering and regenerative medicine as effective tools enabling the fabrication of advanced tissue constructs that can recapitulate organ/tissue functions in vitro. Selecting the best strategy for bioink deposition is often a challenging and time-consuming process, as bioink properties (in the first instance, rheological and gelation behaviour) strongly influence the suitable paradigms for its deposition. In this short review, we critically discuss one of the available approaches used for bioprinting, namely co-axial wet-spinning extrusion. Such a deposition system has in fact proven promising in terms of printing resolution, shape fidelity and versatility when compared to other methods. An overview of the performance of co-axial technology in the deposition of cellularized hydrogel fibres is discussed, highlighting its main features. Furthermore, we show how this approach allows (i) decoupling the printing accuracy from the bioink's rheological behaviour, thus notably simplifying the development of new bioinks, and (ii) building heterogeneous multi-material and/or multicellular constructs that can better mimic native tissues when combined with microfluidic systems. Finally, the ongoing challenges and the future perspectives for the ultimate fabrication of functional constructs for advanced research studies are highlighted. © 2018 IOP Publishing Ltd
ManiNetCluster: a novel manifold learning approach to reveal the functional links between gene networks.
BACKGROUND: The coordination of genomic functions is a critical and complex process across biological systems such as phenotypes or states (e.g., time, disease, organism, environmental perturbation). Understanding how the complexity of genomic function relates to these states remains a challenge. To address this, we have developed a novel computational method, ManiNetCluster, which simultaneously aligns and clusters gene networks (e.g., co-expression) to systematically reveal the links of genomic function between different conditions. Specifically, ManiNetCluster employs manifold learning to uncover and match local and non-linear structures among networks, and identifies cross-network functional links. RESULTS: We demonstrated that ManiNetCluster better aligns the orthologous genes from their developmental expression profiles across model organisms than state-of-the-art methods (p-value < 2.2×10⁻¹⁶). This indicates the potential non-linear interactions of evolutionarily conserved genes across species in development. Furthermore, we applied ManiNetCluster to time series transcriptome data measured in the green alga Chlamydomonas reinhardtii to discover the genomic functions linking various metabolic processes between the light and dark periods of a diurnally cycling culture. We identified a number of genes putatively regulating processes across each lighting regime. CONCLUSIONS: ManiNetCluster provides a novel computational tool to uncover the genes linking various functions from different networks, providing new insight into how gene functions coordinate across different conditions. ManiNetCluster is publicly available as an R package at https://github.com/daifengwanglab/ManiNetCluster
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images
In hyperspectral remote sensing data mining, it is important to take into
account both spectral and spatial information, such as the spectral signature,
texture features and morphological properties, to improve performance, e.g.,
image classification accuracy. From a feature representation point of view, a
natural approach to this situation is to concatenate the spectral and spatial
features into a single, high-dimensional vector and then apply a dimension
reduction technique directly to that concatenated vector before feeding it into
the subsequent classifier. However, multiple features from various domains have
different physical meanings and statistical properties, so such concatenation
does not efficiently exploit the complementary properties among the different
features, which would otherwise boost feature discriminability. Furthermore, it
is also difficult to interpret the transformed results of the concatenated
vector. Consequently, finding a physically meaningful, consensus
low-dimensional feature representation of the original multiple features
remains a challenging task. To address these issues, we propose a novel feature
learning framework, the simultaneous spectral-spatial feature selection and
extraction algorithm, for spectral-spatial feature representation and
classification of hyperspectral images. Specifically, the proposed method
learns a latent low-dimensional subspace by projecting the spectral-spatial
features into a common feature space, where the complementary information is
effectively exploited and, simultaneously, only the most significant original
features are transformed. Encouraging experimental results on three publicly
available hyperspectral remote sensing datasets confirm that our proposed
method is both effective and efficient.
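The contrast the abstract draws, between naive concatenation and a shared low-dimensional subspace, can be illustrated with a minimal sketch. The example below uses a plain truncated SVD on standardized, concatenated features; it is an assumption-laden stand-in, since the proposed algorithm additionally selects which original features contribute, which plain SVD does not do.

```python
import numpy as np

def shared_subspace(spectral, spatial, dim=2):
    """Project per-pixel spectral and spatial features into a common subspace.

    spectral : (n, d1) array of spectral features (e.g., band signatures).
    spatial  : (n, d2) array of spatial features (e.g., texture, morphology).
    Returns the (n, dim) joint low-dimensional representation.
    """
    F = np.hstack([spectral, spatial])
    # Standardize so domains with different units contribute comparably.
    F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-12)
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    return F @ Vt[:dim].T  # coordinates along the top-dim right singular vectors

# Usage on synthetic per-pixel features (dimensions are illustrative).
rng = np.random.default_rng(1)
Z = shared_subspace(rng.normal(size=(50, 10)), rng.normal(size=(50, 6)), dim=3)
print(Z.shape)  # → (50, 3)
```

The low-dimensional coordinates Z could then be fed to any downstream classifier in place of the raw concatenated vector.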