Codonopsis pilosula twines either to the left or to the right
We report the twining handedness of Codonopsis pilosula, which forms either a left- or right-handed helix among different plants, among different tillers within a single plant, and among different branches within a single tiller. The handedness was randomly distributed among different plants and among the tillers within the same plant, but not among the branches within the same tiller. Moreover, the handedness of the stems can be strongly influenced by external forces: compulsory left and right winding inclined the plants to produce more left- and right-handed twining stems, respectively, and reversing the winding could turn a left-handed stem right-handed and vice versa. We also discuss the probable mechanisms by which these curious cases arise.
OVSNet : Towards One-Pass Real-Time Video Object Segmentation
Video object segmentation aims to accurately segment the target object
regions across consecutive frames. It is technically challenging to cope
with complicating factors (e.g., shape deformations, occlusion, and objects
moving out of the lens). Recent approaches have largely addressed these by
using back-and-forth re-identification and bi-directional mask propagation.
However, these methods are extremely slow and only support offline inference,
so in principle they cannot be applied in real time. Motivated by this
observation, we propose an efficient detection-based paradigm for video
object segmentation. We propose a unified One-Pass Video Segmentation
framework (OVS-Net) for modeling spatial-temporal representation in a unified
pipeline, which seamlessly integrates object detection, object segmentation,
and object re-identification. The proposed framework lends itself to one-pass
inference that performs video object segmentation effectively and
efficiently. Moreover, we propose a mask-guided attention module for modeling
multi-scale object boundaries and multi-level feature fusion. Experiments on
the challenging DAVIS 2017 benchmark demonstrate the effectiveness of the
proposed framework, with performance comparable to the state-of-the-art and
high efficiency (about 11.5 FPS, more than 5 times faster than other
state-of-the-art methods), making it, to our knowledge, a pioneering step
towards real-time video object segmentation.

Comment: 10 pages, 6 figures
