167 research outputs found

    Attentional Neural Network: Feature Selection Using Cognitive Feedback

    Full text link
    Attentional Neural Network is a new framework that integrates top-down cognitive bias and bottom-up feature extraction in one coherent architecture. The top-down influence is especially effective when dealing with high noise or difficult segmentation problems. Our system is modular and extensible. It is also easy to train and cheap to run, and yet can accommodate complex behaviors. We obtain classification accuracy better than or competitive with state-of-the-art results on the MNIST variation dataset, and successfully disentangle overlaid digits with high success rates. We view such a general-purpose framework as an essential foundation for a larger system emulating the cognitive abilities of the whole brain. Comment: Poster in Neural Information Processing Systems (NIPS) 201
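    The interplay of top-down bias and bottom-up features described above can be pictured as multiplicative gating: a class hypothesis produces per-feature gates that modulate the extracted features. The numpy sketch below is purely illustrative and is not the paper's actual architecture; all dimensions, weights, and the sigmoid gating form are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Bottom-up pathway: feature vector extracted from a (noisy) input image.
    features = rng.normal(size=(16,))        # hypothetical 16-dim features

    # Top-down pathway: a class hypothesis mapped to per-feature gates in (0, 1).
    hypothesis = rng.normal(size=(4,))       # hypothetical cognitive state
    W_feedback = rng.normal(size=(16, 4))    # feedback weights (random stand-in)
    gates = 1.0 / (1.0 + np.exp(-(W_feedback @ hypothesis)))  # sigmoid gating

    # Attended representation: bottom-up features modulated by top-down bias.
    attended = gates * features
    ```

    Because the gates lie strictly between 0 and 1, the top-down signal can only attenuate features, which is one simple way such feedback suppresses noise before classification.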

    Fluorescence tracer method for analysis of droplet deposition pattern characteristics of the sprays applied via unmanned aerial vehicle

    Get PDF
    With the development of agricultural aviation technologies and their application in agricultural production, the plant protection unmanned aerial vehicle (UAV) has been widely used to control pests and diseases of crops. The high-speed rotation of the UAV rotor produces a powerful downwash that affects the distribution of pesticide droplets on the ground. Understanding the spatial distribution of these droplets on the ground is important for evaluating the application quality of the pesticides and plays an important role in improving the UAV spray system and optimizing its operating parameters. Current methods for measuring droplet deposition distributions use a number of collectors placed regularly on the ground to receive the droplets and measure their sizes; it is difficult for them to effectively capture the deposition of all droplets due to the downwash of the UAV. This paper presents a new method to resolve this problem by improving the accuracy and spatial continuity of measuring pesticide droplets applied by an unmanned helicopter. The flying parameters of a 3WQF-80-10 unmanned helicopter used to spray pesticides were obtained from the high-precision Beidou navigation system, and the RQT-C-3 fluorescent whitening tracer with a mass fraction of 1.0% was used as a proxy for the pesticides. Two droplet collection methods were used to measure the droplet deposition distribution: one used continuous strip paper and the other used individual water-sensitive paper. We divided the experimental field into three areas, with Areas 1 and 2 spaced 3 m apart and Areas 2 and 3 spaced 1 m apart. A metal bracket 8 m long and 0.5 m above the ground was placed in each area. Prior to the experiment, a paper tape was fixed on the surface of the bracket and the water-sensitive paper cards were placed evenly in the area 0.5 m away from the paper tape.
There were one paper tape and 15 water-sensitive papers in each area, and a total of six spray tests were performed based on pre-designed flight parameters. The combinations of flight speed and flight height were: 2 m/s and 3 m, 2 m/s and 6 m, 2 m/s and 9 m, 3 m/s and 3 m, 3 m/s and 6 m, and 4 m/s and 9 m. The paper tape was analyzed by fluorescence spectroscopy, and the water-sensitive papers were scanned and processed with image analysis software to obtain the droplet deposition coverage rate. The results showed that the coverage-rate distribution curves obtained by the paper tape method coupled with the fluorescence spectrum tracer were consistent with those obtained from the images of the water-sensitive paper method, with R2 ranging from 0.88 to 0.96. Because not all fine droplets fell on the water-sensitive papers due to the effect of the high-speed rotating rotor, the coverage-rate curve measured by the continuous fluorescence method had multiple peaks and its coverage-rate values were higher than those measured by the water-sensitive paper method. When the unmanned helicopter flew at a speed of 2 m/s and a height of 3 m, the coverage ratio obtained from the continuous fluorescence method was 16.92% higher than that sampled from the individual water-sensitive paper method, while when the flight speed was 4 m/s at a height of 9 m, the coverage ratio in the latter was 97.77% higher than in the former. In terms of the impact of the unmanned helicopter's operating conditions on the coverage rate, when the helicopter flew at 2 m/s and a height of 3 m, the coverage rate of the droplets was the highest for both methods, being 8.34% for the continuous fluorescence method and 7.14% for the individual paper method. As flight height and speed increased, the spatial coverage rate of the droplets decreased.
In summary, the high-speed rotor of the UAV generates a downwash, making the pesticide droplets move in different directions and resulting in a large spatial difference in their deposition on the ground. Therefore, the continuous sampling method is more adequate for evaluating the spatial distribution of the droplets. This study has implications for detecting the deposition of pesticides and other agrochemicals applied by UAVs.
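The agreement between the two sampling methods above is summarized by R2 between the two coverage-rate curves. The sketch below shows one standard way such a coefficient of determination is computed; the sample values are invented for illustration and are not the study's measurements.

```python
import numpy as np

def r_squared(y_ref, y_est):
    """Coefficient of determination between a reference curve and an estimate."""
    y_ref = np.asarray(y_ref, dtype=float)
    y_est = np.asarray(y_est, dtype=float)
    ss_res = np.sum((y_ref - y_est) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_ref - y_ref.mean()) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical coverage-rate samples (%) along the 8 m bracket:
tape  = [1.2, 3.5, 6.8, 8.3, 7.1, 4.0, 2.2, 1.0]   # continuous fluorescence tape
cards = [1.0, 3.2, 6.5, 8.0, 6.8, 3.8, 2.0, 0.9]   # water-sensitive paper cards

agreement = r_squared(tape, cards)
```

Two curves that track each other closely, as in this toy pair, yield an R2 near 1, matching the 0.88 to 0.96 range reported for the real measurements.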

    TCBERT: A Technical Report for Chinese Topic Classification BERT

    Full text link
    Bidirectional Encoder Representations from Transformers, or BERT~\cite{devlin-etal-2019-bert}, has been one of the base models for various NLP tasks due to its remarkable performance. Variants customized for different languages and tasks have been proposed to further improve performance. In this work, we investigate supervised continued pre-training~\cite{gururangan-etal-2020-dont} on BERT for the Chinese topic classification task. Specifically, we incorporate prompt-based learning and contrastive learning into the pre-training. To adapt to the task of Chinese topic classification, we collect around 2.1M Chinese samples spanning various topics. The pre-trained Chinese Topic Classification BERTs (TCBERTs) with different parameter sizes are open-sourced at \url{https://huggingface.co/IDEA-CCNL}.
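    The contrastive component mentioned above is commonly realized as an in-batch InfoNCE-style loss over sentence embeddings. The numpy sketch below illustrates that general recipe only; the exact loss, temperature, and batch construction used for TCBERT are not specified in the abstract, so everything here is an assumption.

    ```python
    import numpy as np

    def info_nce_loss(anchors, positives, temperature=0.05):
        """In-batch contrastive loss: row i of `positives` is the positive
        example for row i of `anchors`; all other rows act as negatives."""
        a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
        p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
        logits = (a @ p.T) / temperature              # scaled cosine similarities
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))           # NLL of the matching pairs

    rng = np.random.default_rng(1)
    anchors = rng.normal(size=(8, 32))                       # toy embeddings
    positives = anchors + 0.01 * rng.normal(size=(8, 32))    # near-duplicate pairs
    loss_matched = info_nce_loss(anchors, positives)
    loss_random = info_nce_loss(anchors, rng.normal(size=(8, 32)))
    ```

    Matched pairs produce a much lower loss than unrelated pairs, which is the pressure that pulls same-topic representations together during pre-training.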

    Ziya-Visual: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning

    Full text link
    Recent advancements enlarge the capabilities of large language models (LLMs) in zero-shot image-to-text generation and understanding by integrating multi-modal inputs. However, such success is typically limited to English scenarios due to the lack of large-scale, high-quality non-English multi-modal resources, making it extremely difficult to establish competitive counterparts in other languages. In this paper, we introduce the Ziya-Visual series, a set of bilingual large-scale vision-language models (LVLMs) designed to incorporate visual semantics into an LLM for multi-modal dialogue. Composed of Ziya-Visual-Base and Ziya-Visual-Chat, our models adopt the Querying Transformer from BLIP-2, further exploring the assistance of optimization schemes such as instruction tuning, multi-stage training, and a low-rank adaptation module for visual-language alignment. In addition, we leverage the understanding ability of GPT-4 in multi-modal scenarios, translating our gathered English image-text datasets into Chinese and generating instruction-response pairs through in-context learning. The experimental results demonstrate that, compared to existing LVLMs, Ziya-Visual achieves competitive performance across a wide range of English-only tasks, including zero-shot image-text retrieval, image captioning, and visual question answering. The evaluation leaderboard assessed by GPT-4 also indicates that our models possess satisfactory image-text understanding and generation capabilities in Chinese multi-modal scenario dialogues. Code, demo, and models are available at \url{https://huggingface.co/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1}.

    Discovery and Survey of a New Mandarivirus Associated with Leaf Yellow Mottle Disease of Citrus in Pakistan.

    Get PDF
    During biological indexing for viruses in citrus trees, in a collection of Symons sweet orange (SSO) (Citrus sinensis L. Osbeck) graft-inoculated with bark tissues of citrus trees from the Punjab Province in Pakistan, several SSO trees exhibited leaf symptoms of vein yellowing and mottle. High-throughput Illumina sequencing of an RNA preparation depleted of ribosomal RNAs from one symptomatic tree, followed by BLAST analyses, allowed the identification of a novel virus, tentatively named citrus yellow mottle-associated virus (CiYMaV). Genome features of CiYMaV are typical of members of the genus Mandarivirus (family Alphaflexiviridae). Virus particles with an elongated flexuous shape and a size resembling those of mandariviruses were observed by transmission electron microscopy. The proteins encoded by CiYMaV share high sequence identity, conserved motifs, and phylogenetic relationships with the corresponding proteins encoded by Indian citrus ringspot virus (ICRSV) and citrus yellow vein clearing virus (CYVCV), the two current members of the genus Mandarivirus. Although CYVCV is the virus most closely related to CiYMaV, the two viruses can be serologically and biologically discriminated from each other. A reverse-transcription PCR method designed to specifically detect CiYMaV revealed a high prevalence (62%) of this virus in 120 citrus trees from the Punjab Province, Pakistan, where the novel virus was found mainly in mixed infection with CYVCV and citrus tristeza virus. However, a preliminary survey of samples from 200 citrus trees from the Yunnan Province, China, failed to detect CiYMaV in this region, suggesting that the molecular, serological, and biological data provided here are timely and can help prevent the spread of this virus in citrus-producing countries.

    Never Lost in the Middle: Improving Large Language Models via Attention Strengthening Question Answering

    Full text link
    While large language models (LLMs) are equipped with longer text input capabilities than before, they struggle to find correct information in long contexts. The "lost in the middle" problem challenges most LLMs: accuracy declines dramatically when the correct information is located in the middle of the context. To overcome this crucial issue, this paper proposes to enhance the information searching and reflection ability of LLMs in long contexts via specially designed tasks called Attention Strengthening Multi-doc QA (ASM QA). Following these tasks, our model excels at focusing more precisely on the desired information. Experimental results show substantial improvement in Multi-doc QA and other benchmarks, surpassing state-of-the-art models by a 13.7% absolute gain in shuffled settings and by 21.5% in the passage retrieval task. We release our model, Ziya-Reader, to promote related research in the community.
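    The "lost in the middle" effect is usually made visible by plotting QA accuracy against the position of the gold passage. The harness below sketches that measurement with a toy answering function that only attends to the edges of the context; it is an illustration of the evaluation idea, not the paper's benchmark or model.

    ```python
    import random

    def position_accuracy(answer_fn, num_docs=10, trials_per_pos=200, seed=0):
        """Measure QA accuracy as a function of where the gold passage sits,
        exposing a 'lost in the middle' curve for a given answering function."""
        rng = random.Random(seed)
        accuracy = []
        for pos in range(num_docs):
            correct = 0
            for _ in range(trials_per_pos):
                gold = rng.random()                       # stand-in gold passage
                docs = [rng.random() for _ in range(num_docs)]
                docs[pos] = gold                          # plant gold at position pos
                if answer_fn(docs) == gold:
                    correct += 1
            accuracy.append(correct / trials_per_pos)
        return accuracy

    def edge_biased_model(docs):
        """Toy 'model' that only attends to the first and last two documents."""
        return max(docs[:2] + docs[-2:])

    curve = position_accuracy(edge_biased_model)
    ```

    For such an edge-biased reader the curve is U-shaped: accuracy is nonzero only when the gold passage lands near the beginning or end, which is exactly the failure mode the ASM QA tasks are designed to train away.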

    Salt Freeze-Thaw Damage Characteristics of Concrete based on Computed Tomography

    Get PDF
    Freeze–thaw damage and salt erosion are important factors that influence the durability of concrete. In this study, the degradation laws of concrete in a salt freeze–thaw environment were discussed from the microscopic perspective based on the 3D reconstruction of computed tomography images. A damage model based on concrete aggregate volume and porosity was constructed. Furthermore, the main causes of concrete degradation in the salt freeze–thaw environment were analyzed. Results reveal that, with an increasing number of salt freeze–thaw cycles, the damage to the concrete intensifies gradually and the uniaxial compressive strength declines steadily. Concrete damage has two causes, namely, changes in concrete porosity and variations in concrete aggregate volume. Damage caused by aggregate volume changes is divided into frost heaving and peeling. According to the constructed damage model, the porosity of the concrete materials changes slightly, whereas the concrete aggregate volume varies significantly. Aggregate volume changes are the main cause of intensified concrete damage and decreased compressive strength. These conclusions provide a theoretical reference for disclosing the microscopic damage mechanism of concrete in the salt freeze–thaw environment.