
    S4Net: Single Stage Salient-Instance Segmentation

    In this paper, we consider an interesting problem: salient instance segmentation. In addition to producing bounding boxes, our network also outputs high-quality instance-level segments. Taking into account the category-independent property of each target, we design a single-stage salient instance segmentation framework with a novel segmentation branch. Our new branch takes into account not only the local context inside each detection window but also its surrounding context, enabling us to distinguish instances in the same scope even under occlusion. Our network is end-to-end trainable and runs at a fast speed (40 fps when processing a 320x320 image). We evaluate our approach on a publicly available benchmark and show that it outperforms alternative solutions. We also provide a thorough analysis of the design choices to help readers better understand the function of each part of our network. The source code can be found at https://github.com/RuochenFan/S4Net.
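
    As a rough illustration of the segmentation-branch idea described in this abstract, the sketch below pools features from each detection window and from an enlarged surrounding window, then fuses the two to predict an instance mask. This is a minimal sketch under my own assumptions, not the authors' released code; the module name, expansion factor, and layer sizes are illustrative.

```python
# Hedged sketch of a context-aware mask branch: pool features inside each
# detection box AND from an enlarged "surrounding context" box, then fuse.
import torch
import torch.nn as nn
from torchvision.ops import roi_align


def expand_boxes(boxes, scale=1.5):
    """Enlarge (x1, y1, x2, y2) boxes around their centers to capture context."""
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    w = (boxes[:, 2] - boxes[:, 0]) * scale
    h = (boxes[:, 3] - boxes[:, 1]) * scale
    return torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=1)


class ContextAwareMaskBranch(nn.Module):
    def __init__(self, in_channels=256, pool_size=14):
        super().__init__()
        self.pool_size = pool_size
        # Fuse window features with surrounding-context features.
        self.fuse = nn.Sequential(
            nn.Conv2d(in_channels * 2, in_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(in_channels, 1, 1),  # per-pixel instance-mask logit
        )

    def forward(self, feats, boxes, spatial_scale=0.25):
        # feats: (1, C, H, W) feature map; boxes: (N, 4) in image coordinates.
        rois = torch.cat([torch.zeros(len(boxes), 1), boxes], dim=1)                 # inside window
        ctx = torch.cat([torch.zeros(len(boxes), 1), expand_boxes(boxes)], dim=1)    # surrounding window
        inner = roi_align(feats, rois, self.pool_size, spatial_scale)
        outer = roi_align(feats, ctx, self.pool_size, spatial_scale)
        return self.fuse(torch.cat([inner, outer], dim=1))  # (N, 1, 14, 14) mask logits


feats = torch.randn(1, 256, 80, 80)                      # stride-4 features of a 320x320 image
boxes = torch.tensor([[40.0, 40.0, 160.0, 200.0]])
print(ContextAwareMaskBranch()(feats, boxes).shape)      # torch.Size([1, 1, 14, 14])
```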

    Referring Camouflaged Object Detection

    In this paper, we consider the problem of referring camouflaged object detection (Ref-COD), a new task that aims to segment specified camouflaged objects based on some form of reference, e.g., an image or text. We first assemble a large-scale dataset, called R2C7K, which consists of 7K images covering 64 object categories in real-world scenarios. We then develop a simple but strong dual-branch framework, dubbed R2CNet, with a reference branch that learns common representations from the referring information and a segmentation branch that identifies and segments camouflaged objects under the guidance of those common representations. In particular, we design a Referring Mask Generation module to generate a pixel-level prior mask and a Referring Feature Enrichment module to enhance the capability of identifying camouflaged objects. Extensive experiments show the superiority of our Ref-COD methods over their COD counterparts in segmenting specified camouflaged objects and identifying the main body of the target objects. Our code and dataset are publicly available at https://github.com/zhangxuying1004/RefCOD.
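
    To make the dual-branch idea concrete, the sketch below shows one plausible reading of the abstract: a reference embedding is correlated with per-pixel image features to form a prior mask (the Referring Mask Generation step), and that prior is injected back into the segmentation features (the Referring Feature Enrichment step). It is an assumption-laden sketch, not the released R2CNet code; all layer choices are illustrative.

```python
# Hedged sketch of a dual-branch Ref-COD model: reference embedding -> prior
# mask via pixel-wise correlation -> feature enrichment -> final mask.
import torch
import torch.nn as nn
import torch.nn.functional as F


class R2CNetSketch(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.ref_proj = nn.Linear(dim, dim)                   # reference-branch head
        self.img_proj = nn.Conv2d(dim, dim, 1)                # segmentation-branch head
        self.enrich = nn.Conv2d(dim + 1, dim, 3, padding=1)   # enrichment step (assumed form)
        self.predict = nn.Conv2d(dim, 1, 1)                   # camouflaged-object mask logits

    def forward(self, img_feat, ref_feat):
        # img_feat: (B, C, H, W) image features; ref_feat: (B, C) reference embedding.
        ref = F.normalize(self.ref_proj(ref_feat), dim=-1)            # (B, C)
        pix = F.normalize(self.img_proj(img_feat), dim=1)             # (B, C, H, W)
        prior = torch.einsum("bc,bchw->bhw", ref, pix).unsqueeze(1)   # pixel-level prior mask
        fused = F.relu(self.enrich(torch.cat([img_feat, prior], dim=1)))
        return self.predict(fused), prior


model = R2CNetSketch()
mask, prior = model(torch.randn(2, 256, 44, 44), torch.randn(2, 256))
print(mask.shape, prior.shape)  # torch.Size([2, 1, 44, 44]) torch.Size([2, 1, 44, 44])
```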

    Effects of replacing dietary fishmeal with zymolytic black soldier fly larvae on the growth performance of the mud crab (Scylla paramamosain) larvae

    Black soldier fly larvae have been shown to be one of the most promising alternatives to fishmeal (FM), but there are few reports on the effects of zymolytic black soldier fly larvae (ZBSFL) on the growth and digestion of crustaceans. An 8-week feeding trial was conducted to evaluate the effects of different replacement levels of ZBSFL on the growth performance, body composition, and digestive enzyme activity of mud crab larvae. Four diets were formulated by replacing 0%, 5%, 10%, and 15% of the FM in the basal diet with ZBSFL. Crab larvae were randomly divided into four groups of three replicates each and fed twice daily. The results showed that the survival rate (SR) of crab larvae was higher than that of the no-substitution group when the substitution rate reached 5% (P < 0.05); there was no significant change in SR when the substitution rate was further increased. The weight growth rate and specific growth rate followed a similar pattern, both peaking at the 10% substitution ratio. The crude protein content of whole crab larvae gradually increased as the proportion of FM replaced by ZBSFL increased. The lipid content of whole crab larvae in the 5% substitution group was significantly higher than that in all other groups (P < 0.05). Meanwhile, the activities of amylase, protease, and lipase gradually increased. In this experiment, growth performance was optimal when ZBSFL replaced 10% of the FM, with a higher SR, fewer negative effects, and more balanced indicators overall. Further increasing the substitution rate might increase the digestive burden of the crabs and negatively affect their growth.

    Underestimated ecosystem carbon turnover time and sequestration under the steady state assumption: a perspective from long‐term data assimilation

    It is critical to accurately estimate carbon (C) turnover time, as it dominates the uncertainty in ecosystem C sinks and their response to future climate change. In the absence of direct observations of ecosystem C losses, C turnover times are commonly estimated under the steady state assumption (SSA), which has been applied across a large range of temporal and spatial scales, including many at which the validity of the assumption is likely to be violated. However, the errors associated with improperly applying the SSA to estimate C turnover time, its covariance with climate, and ecosystem C sequestration have yet to be fully quantified. Here, we developed a novel model-data fusion framework and systematically analyzed the SSA-induced biases using time-series data collected from 10 permanent forest plots in the eastern China monsoon region. The results showed that (a) the SSA significantly underestimated mean turnover times (MTTs) by 29%, thereby leading to a 4.83-fold underestimation of net ecosystem productivity (NEP) in these forest ecosystems, a major C sink globally; (b) the SSA-induced bias in MTT and NEP correlates negatively with forest age, which provides a significant caveat against applying the SSA to young ecosystems; and (c) the sensitivity of MTT to temperature and precipitation was 22% and 42% lower, respectively, under the SSA. Thus, under the expected climate change, spatiotemporal changes in MTT are likely to be underestimated, resulting in large errors in the variability of predicted global NEP. With the development of observation technology and the accumulation of spatiotemporal data, we suggest estimating MTTs at the disequilibrium state via long-term data assimilation, thereby effectively reducing the uncertainty in ecosystem C sequestration estimates and providing a better understanding of regional and global C cycle dynamics and C-climate feedback.
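
    The toy calculation below illustrates, under my own assumed numbers, why the steady state assumption biases turnover time low in a growing forest: the (unobserved) C loss is replaced by the input flux, which exceeds the true loss whenever the ecosystem is a net C sink. It is not the paper's model-data fusion framework, only a one-pool arithmetic sketch of the direction of the bias.

```python
# Hedged numerical sketch: MTT estimated as carbon stock / carbon loss.
# Under the SSA the loss flux is assumed equal to the input flux (NPP),
# so MTT is underestimated whenever outputs < inputs (a C sink).
stock = 12.0   # kg C m^-2, total ecosystem carbon stock (illustrative value)
npp = 0.80     # kg C m^-2 yr^-1, carbon input (illustrative value)
rh = 0.62      # kg C m^-2 yr^-1, true carbon loss (illustrative value)

mtt_true = stock / rh    # disequilibrium estimate, uses the actual loss flux
mtt_ssa = stock / npp    # SSA estimate, assumes loss == input
nep = npp - rh           # net ecosystem productivity (the C sink)

print(f"True MTT  : {mtt_true:.1f} yr")                         # ~19.4 yr
print(f"SSA  MTT  : {mtt_ssa:.1f} yr (underestimated)")          # ~15.0 yr
print(f"Bias      : {100 * (mtt_ssa / mtt_true - 1):.0f}%")      # ~-22%
print(f"NEP (sink): {nep:.2f} kg C m^-2 yr^-1")
```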
