134 research outputs found

    On the processing of structurally ambiguous relative-clause constructions by adult learners of Japanese: focusing on Mongolian-Chinese monolingual and bilingual learners of Japanese

    Get PDF
    Degree type: Doctorate by coursework. Dissertation committee: (Chair) Associate Professor Yuki Hirose (University of Tokyo), Professor Takane Ito (University of Tokyo), Professor Hideki Ono (University of Tokyo), Associate Professor Yo Usami (University of Tokyo), Professor Hiromu Sakai (Waseda University). University of Tokyo (東京大学)

    Effect of mycotoxin-contaminated corn on growth, nutrient digestibility, and in vitro rumen fermentation in goats

    Get PDF
    Two trials (in vivo and in vitro) were conducted to evaluate the effects of corn naturally contaminated with mycotoxins, predominantly aflatoxin B1 (AFB1), on performance, nutrient digestion, and rumen fermentation in growing goats. Twelve China Lezhi black goats, weighing 16.39 to 16.45 kg, were fed a diet of 40% concentrate (the naturally contaminated diet containing 74.49 μg/kg AFB1, 2.08 μg/kg AFB2, 59.71 μg/kg DON, and 36.51 μg/kg ZEN) for 28 days. The results showed that the contaminated corn had no significant effect on feed intake but decreased the average daily gain (ADG) and feed conversion ratio (FCR) in growing goats. Digestibility of crude protein (CP) in the trial group was significantly lower than in the control group, and the digestibilities of acid detergent fibre (ADF) and neutral detergent fibre (NDF) also decreased, though not significantly. Neither volatile fatty acid (VFA) concentration nor pH differed significantly between the two groups. Ammonia nitrogen (NH3-N) in the trial group was lower in both the in vivo trial and the in vitro trial (0 h to 3 h). In the in vitro experiment, ruminal fluids were collected from 4 China Lezhi goats and incubated at 39°C for 48 h with control corn or AFB1-contaminated corn. Total gas production and gas production rate in the trial group were significantly lower than in the control group. These reductions demonstrate the negative effects of naturally AFB1-contaminated corn on nutrient digestibility and rumen function in growing goats.
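    The digestibility and FCR measures reported above follow standard definitions; a minimal sketch of how such metrics are typically computed (the functions and example numbers are illustrative assumptions, not values from the study):

    ```python
    def apparent_digestibility(intake_g, fecal_g):
        """Apparent nutrient digestibility (%), e.g. for CP, ADF, or NDF:
        the fraction of ingested nutrient not recovered in faeces."""
        return (intake_g - fecal_g) / intake_g * 100

    def feed_conversion_ratio(feed_intake_g, weight_gain_g):
        """FCR: grams of feed consumed per gram of body-weight gain
        (a lower value indicates better feed efficiency)."""
        return feed_intake_g / weight_gain_g

    # Hypothetical daily figures for one goat, for illustration only:
    cp_dig = apparent_digestibility(120.0, 42.0)     # → 65.0 %
    fcr = feed_conversion_ratio(800.0, 110.0)        # ≈ 7.27
    ```
    
    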

    SPTS v2: Single-Point Scene Text Spotting

    Full text link
    End-to-end scene text spotting has made significant progress due to the intrinsic synergy between text detection and recognition. Previous methods commonly regard manual annotations such as horizontal rectangles, rotated rectangles, quadrangles, and polygons as a prerequisite, which are much more expensive than single points. For the first time, we demonstrate that scene text spotting models can be trained with extremely low-cost single-point annotations using the proposed framework, termed SPTS v2. SPTS v2 retains the advantage of the auto-regressive Transformer with an Instance Assignment Decoder (IAD) that sequentially predicts the center points of all text instances within the same predicted sequence, while employing a Parallel Recognition Decoder (PRD) for text recognition in parallel. The two decoders share the same parameters and are interactively connected through a simple but effective information transmission process that passes gradients and information. Comprehensive experiments on various existing benchmark datasets demonstrate that SPTS v2 outperforms previous state-of-the-art single-point text spotters with fewer parameters while achieving 19× faster inference speed. Most importantly, within the scope of SPTS v2, extensive experiments further reveal an important phenomenon: the single point is the optimal annotation setting for scene text spotting compared to non-point, rectangular bounding-box, and polygonal bounding-box annotations. Such an attempt provides a significant opportunity for scene text spotting applications beyond the realms of existing paradigms. Code will be available at https://github.com/bytedance/SPTSv2.
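    The IAD described above serializes every instance as nothing more than its quantized center point. A minimal sketch of how such a target sequence might be built; the bin count, EOS token, and token layout are assumptions for illustration, not the paper's exact values:

    ```python
    N_BINS = 1000          # coordinate quantization bins (assumed)
    EOS = N_BINS           # end-of-sequence token (assumed vocabulary layout)

    def quantize(v, size, n_bins=N_BINS):
        """Map a pixel coordinate to a discrete bin token in [0, n_bins)."""
        return min(int(v / size * n_bins), n_bins - 1)

    def iad_target_sequence(centers, img_w, img_h):
        """Serialize all instance center points into one token sequence;
        recognition is handled separately by the parallel decoder (PRD)."""
        seq = []
        for cx, cy in centers:
            seq.append(quantize(cx, img_w))
            seq.append(quantize(cy, img_h))
        seq.append(EOS)
        return seq

    seq = iad_target_sequence([(320.0, 240.0), (100.0, 50.0)], 640, 480)
    # → [500, 500, 156, 104, 1000]
    ```

    Keeping only two tokens per instance is what makes the detection sequence so short, which in turn is where the claimed inference speedup comes from.
    
    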

    SPTS: Single-Point Text Spotting

    Full text link
    Existing scene text spotting (i.e., end-to-end text detection and recognition) methods rely on costly bounding-box annotations (e.g., text-line, word-level, or character-level bounding boxes). For the first time, we demonstrate that scene text spotting models can be trained with an extremely low-cost annotation of a single point per instance. We propose an end-to-end method that tackles scene text spotting as a sequence prediction task. Given an image as input, we formulate the desired detection and recognition results as a sequence of discrete tokens and use an auto-regressive Transformer to predict the sequence. The proposed method is simple yet effective and achieves state-of-the-art results on widely used benchmarks. Most significantly, we show that performance is not very sensitive to the position of the point annotation, meaning that it can be much easier to annotate, or even generated automatically, than a bounding box that requires precise positions. We believe that such a pioneering attempt indicates a significant opportunity for scene text spotting applications at a much larger scale than previously possible. The code will be publicly available.
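    In the SPTS formulation above, detection and recognition share one discrete token stream: each instance contributes its quantized point followed by its transcription and a separator. A hedged sketch of one plausible sequence construction; the vocabulary layout, bin count, and special tokens are assumptions, not the paper's exact scheme:

    ```python
    N_BINS = 1000                       # coordinate bins (assumed)
    CHARS = "abcdefghijklmnopqrstuvwxyz0123456789"
    CHAR_BASE = N_BINS                  # character tokens follow coordinates
    SEP = CHAR_BASE + len(CHARS)        # instance separator (assumed)
    EOS = SEP + 1                       # end of the whole sequence (assumed)

    def encode_instance(cx, cy, text, img_w, img_h):
        """One instance: [x, y, char tokens..., SEP]."""
        xs = min(int(cx / img_w * N_BINS), N_BINS - 1)
        ys = min(int(cy / img_h * N_BINS), N_BINS - 1)
        return [xs, ys] + [CHAR_BASE + CHARS.index(c) for c in text.lower()] + [SEP]

    def build_sequence(instances, img_w, img_h):
        """Flatten all (cx, cy, text) instances into one prediction target."""
        seq = []
        for cx, cy, text in instances:
            seq += encode_instance(cx, cy, text, img_w, img_h)
        return seq + [EOS]
    ```

    Because the whole image reduces to one token sequence, a single auto-regressive Transformer can emit both where each instance is and what it reads, with no box regression head.
    
    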