
    Reduced Growth in Eustoma under High Nutrient and Phytotoxic Organic Acid Concentrations and Recovery through Hot Water Conditioning of Continuously Cropped Soil

    Stunted vegetative growth and delayed or absent flowering are commonly observed in eustoma (Eustoma grandiflorum) when it is cultivated continuously in the same greenhouse soil. These effects are likely caused by the excessive accumulation of soluble salts and/or phytotoxic organic acids in the soil. This study aimed to clarify the mechanism of continuous cropping obstacles in eustoma and to formulate preventive measures. Seedlings of eustoma ‘Croma III White’ were grown hydroponically with 0%, 25%, 50%, 75%, 100% (full), 125%, 150%, 175%, or 200% strength of Johnson’s solution. Plant height, leaf area, and shoot dry weight increased steadily as solution strength increased from 25% to 125% [solution electrical conductivity (EC) of 2.4 dS⋅m−1] and then gradually decreased as solution strength further increased from 125% to 200% (solution EC of 3.8 dS⋅m−1). When grown hydroponically in 200% strength Johnson’s solution, plant height, leaf area, and root length increased with increasing concentrations of an equimolar mixture of organic acids (maleic acid, benzoic acid, malic acid, and hydroxybenzoic acid) up to 1.2 to 1.6 mM and decreased thereafter. Node number and the percentage of plants with visible flower buds declined beyond 1.6 mM. Plants treated with 2.0 and 2.4 mM organic acid mixtures had the lowest net photosynthetic rate, stomatal conductance, transpiration, and intercellular carbon dioxide concentration. Plants grew normally and produced flower buds when the continuously cropped soil was preconditioned with 100 °C reverse-osmosis water before planting.

    EmbeddingTree: Hierarchical Exploration of Entity Features in Embedding

    Embedding learning transforms discrete data entities into continuous numerical representations that encode the features and properties of the entities. Despite the outstanding performance reported for different embedding learning algorithms, few efforts have been devoted to structurally interpreting how features are encoded in the learned embedding space. This work proposes EmbeddingTree, a hierarchical embedding exploration algorithm that relates the semantics of entity features to the less-interpretable embedding vectors. An interactive visualization tool is also developed based on EmbeddingTree to explore high-dimensional embeddings. The tool helps users discover nuanced features of data entities, perform feature denoising/injecting in embedding training, and generate embeddings for unseen entities. We demonstrate the efficacy of EmbeddingTree and our visualization tool through embeddings generated for industry-scale merchant data and the public 30Music listening/playlists dataset. Comment: 5 pages, 3 figures, accepted by PacificVis 202
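
    To make the idea of a feature-driven hierarchy over an embedding space concrete, the following is a minimal sketch, not the authors' algorithm: it greedily splits a set of entities on whichever categorical feature value most reduces the variance of their embedding vectors. The feature names, the variance criterion, and the toy data are all illustrative assumptions.

        # Minimal sketch (not the EmbeddingTree implementation): build a feature
        # hierarchy over embeddings by greedily choosing, at each node, the binary
        # feature split that most reduces embedding variance within the children.
        import numpy as np

        def variance(emb):
            # mean squared distance of embedding vectors to their centroid
            return float(np.mean(np.sum((emb - emb.mean(axis=0)) ** 2, axis=1))) if len(emb) else 0.0

        def build_tree(features, emb, depth=0, max_depth=3, min_size=5):
            """features: list of dicts of categorical attributes; emb: (n, d) array."""
            node = {"size": len(emb), "variance": variance(emb), "children": None}
            if depth >= max_depth or len(emb) < min_size:
                return node
            best = None
            for key in features[0]:
                for value in {f[key] for f in features}:
                    mask = np.array([f[key] == value for f in features])
                    if mask.sum() == 0 or mask.sum() == len(emb):
                        continue
                    # size-weighted child variance; smaller is a better split
                    score = (mask.sum() * variance(emb[mask]) +
                             (~mask).sum() * variance(emb[~mask])) / len(emb)
                    if best is None or score < best[0]:
                        best = (score, key, value, mask)
            if best is None:
                return node
            _, key, value, mask = best
            node["split"] = f"{key} == {value}"
            node["children"] = [
                build_tree([f for f, m in zip(features, mask) if m], emb[mask], depth + 1, max_depth, min_size),
                build_tree([f for f, m in zip(features, mask) if not m], emb[~mask], depth + 1, max_depth, min_size),
            ]
            return node

        # Toy usage: 200 entities with two hypothetical categorical features and 8-d embeddings.
        rng = np.random.default_rng(0)
        feats = [{"country": rng.choice(["US", "UK"]), "genre": rng.choice(["pop", "rock", "jazz"])}
                 for _ in range(200)]
        embs = rng.normal(size=(200, 8)) + np.array([[3.0 if f["genre"] == "jazz" else 0.0] * 8 for f in feats])
        tree = build_tree(feats, embs)
        print(tree["split"], tree["variance"])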

    How Does Attention Work in Vision Transformers? A Visual Analytics Attempt

    Vision transformer (ViT) expands the success of transformer models from sequential data to images. The model decomposes an image into many smaller patches and arranges them into a sequence. Multi-head self-attention is then applied to the sequence to learn the attention between patches. Despite many successful interpretations of transformers on sequential data, little effort has been devoted to the interpretation of ViTs, and many questions remain unanswered. For example, among the numerous attention heads, which ones are more important? How strongly do individual patches attend to their spatial neighbors in different heads? What attention patterns have individual heads learned? In this work, we answer these questions through a visual analytics approach. Specifically, we first identify which heads are more important in ViTs by introducing multiple pruning-based metrics. Then, we profile the spatial distribution of attention strengths between patches inside individual heads, as well as the trend of attention strengths across attention layers. Third, using an autoencoder-based learning solution, we summarize all possible attention patterns that individual heads could learn. By examining the attention strengths and patterns of the important heads, we explain why they are important. Through concrete case studies with experienced deep learning experts on multiple ViTs, we validate the effectiveness of our solution, which deepens the understanding of ViTs in terms of head importance, head attention strength, and head attention patterns. Comment: Accepted by PacificVis 2023 and selected to be published in TVC
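
    As a rough illustration of the kind of per-head statistics such an analysis starts from, the sketch below takes one layer's attention tensor (heads × tokens × tokens) for a ViT with a 14×14 patch grid plus a [CLS] token and computes an entropy-based focus proxy per head and the average attention each patch sends to its spatial neighbors. The entropy proxy, the 8-neighbor definition, and the random attention are assumptions for demonstration, not the paper's metrics.

        # Illustrative per-head attention statistics (assumed metrics, toy data).
        import numpy as np

        GRID = 14                      # 14x14 patches, e.g. ViT-B/16 on a 224x224 input
        N_TOKENS = GRID * GRID + 1     # +1 for the [CLS] token at index 0

        def head_entropy(attn):
            """Mean entropy of a head's attention rows; lower = more focused head."""
            p = np.clip(attn, 1e-12, 1.0)
            return float(-(p * np.log(p)).sum(axis=-1).mean())

        def neighbor_attention(attn, grid=GRID):
            """Average attention mass each patch sends to its 8 spatial neighbors."""
            patch = attn[1:, 1:]  # drop the [CLS] row and column
            total, count = 0.0, 0
            for idx in range(grid * grid):
                r, c = divmod(idx, grid)
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        if dr == dc == 0:
                            continue
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < grid and 0 <= cc < grid:
                            total += patch[idx, rr * grid + cc]
                            count += 1
            return total / count

        # Toy usage with random softmax-normalized attention for 12 heads.
        rng = np.random.default_rng(0)
        logits = rng.normal(size=(12, N_TOKENS, N_TOKENS))
        attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
        for h in range(12):
            print(f"head {h}: entropy={head_entropy(attn[h]):.2f}, "
                  f"neighbor attention={neighbor_attention(attn[h]):.4f}")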

    PDT: Pretrained Dual Transformers for Time-aware Bipartite Graphs

    Pre-training large models has become prevalent with the ever-growing volume of user-generated content in many machine learning application categories. It has been recognized that learning contextual knowledge from datasets depicting user-content interaction plays a vital role in downstream tasks. Despite several studies attempting to learn contextual knowledge via pre-training methods, finding an optimal training objective and strategy for this type of task remains challenging. In this work, we contend that there are two distinct aspects of contextual knowledge, namely the user-side and the content-side, for datasets where user-content interaction can be represented as a bipartite graph. To learn contextual knowledge, we propose a pre-training method that learns a bi-directional mapping between the spaces of the user-side and the content-side. We formulate the training goal as a contrastive learning task and propose a dual-Transformer architecture to encode the contextual knowledge. We evaluate the proposed method on the recommendation task. The empirical studies demonstrate that the proposed method outperforms all baselines with significant gains.
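
    To show what a dual-Transformer contrastive objective over a bipartite interaction graph can look like in practice, here is a hedged sketch: two small Transformer encoders, one per side, trained with a symmetric InfoNCE loss using in-batch negatives so that interacting user/content pairs map close together. The dimensions, pooling, temperature, and toy data are assumptions, not the paper's exact design.

        # Sketch of a dual-encoder contrastive pre-training step (assumed architecture).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SideEncoder(nn.Module):
            """One Transformer encoder for either the user side or the content side."""
            def __init__(self, vocab_size, dim=64, heads=4, layers=2):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, dim)
                layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, layers)

            def forward(self, tokens):                     # tokens: (batch, seq_len)
                h = self.encoder(self.embed(tokens))       # (batch, seq_len, dim)
                return F.normalize(h.mean(dim=1), dim=-1)  # mean-pooled, unit-norm

        def info_nce(user_vec, item_vec, temperature=0.07):
            """Symmetric contrastive loss with in-batch negatives."""
            logits = user_vec @ item_vec.t() / temperature
            labels = torch.arange(logits.size(0), device=logits.device)
            return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

        # Toy training step on random interaction histories.
        user_enc, item_enc = SideEncoder(1000), SideEncoder(1000)
        opt = torch.optim.Adam(list(user_enc.parameters()) + list(item_enc.parameters()), lr=1e-3)
        user_hist = torch.randint(0, 1000, (32, 20))   # 32 users, 20 past items each
        item_hist = torch.randint(0, 1000, (32, 20))   # matching content-side sequences
        loss = info_nce(user_enc(user_hist), item_enc(item_hist))
        loss.backward(); opt.step()
        print(float(loss))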

    Matrix Profile XXVII: A Novel Distance Measure for Comparing Long Time Series

    The most useful data mining primitives are distance measures. With an effective distance measure, it is possible to perform classification, clustering, anomaly detection, segmentation, etc. For single-event time series, Euclidean distance and Dynamic Time Warping distance are known to be extremely effective. However, for time series containing cyclical behaviors, the semantic meaningfulness of such comparisons is less clear. For example, the telemetry from an athlete's workout routine might be very similar on two separate days, yet on the second day the push-ups and squats may be performed in a different order, repetitions of pull-ups may be added, or dumbbell curls may be omitted completely. Any of these minor changes would defeat existing time series distance measures. Some bag-of-features methods have been proposed to address this problem, but we argue that in many cases similarity is intimately tied to the shapes of subsequences within these longer time series. In such cases, summative features will lack discrimination ability. In this work we introduce PRCIS, which stands for Pattern Representation Comparison in Series. PRCIS is a distance measure for long time series that exploits recent progress in our ability to summarize time series with dictionaries. We demonstrate the utility of our ideas on diverse tasks and datasets. Comment: Accepted at IEEE ICKG 2022 (previously entitled IEEE ICBK). Abridged abstract as per arXiv's requirement.
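
    The following is a rough sketch of the dictionary idea in general, not the PRCIS algorithm itself: each long series is summarized by clustering its z-normalized sliding windows into a small codebook, and two series are compared by how well each codebook reconstructs the other's windows. The window length, codebook size, and symmetric reconstruction score are illustrative choices.

        # Illustrative dictionary-based distance for long time series (assumed design).
        import numpy as np

        def windows(ts, w):
            out = np.lib.stride_tricks.sliding_window_view(ts, w)
            # z-normalize each window so only shape matters
            return (out - out.mean(axis=1, keepdims=True)) / (out.std(axis=1, keepdims=True) + 1e-8)

        def codebook(ts, w=32, k=8, iters=20, seed=0):
            """Tiny k-means over subsequences; returns k representative shapes."""
            X = windows(ts, w)
            rng = np.random.default_rng(seed)
            C = X[rng.choice(len(X), k, replace=False)]
            for _ in range(iters):
                d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
                assign = d.argmin(1)
                for j in range(k):
                    if (assign == j).any():
                        C[j] = X[assign == j].mean(0)
            return C

        def reconstruction_cost(ts, C, w=32):
            X = windows(ts, w)
            d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            return float(d.min(1).mean())

        def dict_distance(a, b, w=32, k=8):
            """Symmetric cross-reconstruction distance between two long series."""
            Ca, Cb = codebook(a, w, k), codebook(b, w, k)
            return 0.5 * (reconstruction_cost(a, Cb, w) + reconstruction_cost(b, Ca, w))

        # Toy usage: two noisy routines whose sections appear in a different order.
        t = np.linspace(0, 20 * np.pi, 2000)
        a = np.concatenate([np.sin(t), np.sin(3 * t)]) + 0.1 * np.random.randn(4000)
        b = np.concatenate([np.sin(3 * t), np.sin(t)]) + 0.1 * np.random.randn(4000)
        print(dict_distance(a, b))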

    Sharpness-Aware Graph Collaborative Filtering

    Graph Neural Networks (GNNs) have achieved impressive performance in collaborative filtering. However, GNNs tend to yield inferior performance when the distributions of training and test data are not well aligned. In addition, training GNNs requires optimizing non-convex neural networks with an abundance of local and global minima, which may differ widely in their performance at test time. Thus, it is essential to choose the minima carefully. Here we propose an effective training schema, called gSAM, based on the principle that flatter minima generalize better than sharper ones. To achieve this goal, gSAM regularizes the flatness of the weight loss landscape by forming a bi-level optimization: the outer problem conducts the standard model training, while the inner problem helps the model jump out of sharp minima. Experimental results show the superiority of our gSAM.
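
    For readers unfamiliar with the underlying mechanism, here is a simplified sketch of the generic sharpness-aware (SAM-style) update that this kind of method builds on: first ascend to a nearby point within an L2 ball of radius rho (the inner problem), then take the real optimizer step using the gradient evaluated there (the outer problem). This is a generic single-loop version, not the paper's exact gSAM formulation, and the toy linear model merely stands in for the GNN-based recommender.

        # Simplified SAM-style training step (generic technique, not gSAM itself).
        import torch

        def sam_step(model, loss_fn, batch, base_opt, rho=0.05):
            # 1) gradient at the current weights
            loss = loss_fn(model, batch)
            loss.backward()
            grads = [p.grad.detach().clone() for p in model.parameters() if p.grad is not None]
            norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
            # 2) perturb weights toward higher loss (the inner ascent)
            eps = []
            with torch.no_grad():
                for p, g in zip([p for p in model.parameters() if p.grad is not None], grads):
                    e = rho * g / norm
                    p.add_(e)
                    eps.append(e)
            model.zero_grad()
            # 3) gradient at the perturbed point, then undo the perturbation and step
            loss_fn(model, batch).backward()
            with torch.no_grad():
                for p, e in zip([p for p in model.parameters() if p.grad is not None], eps):
                    p.sub_(e)
            base_opt.step()
            base_opt.zero_grad()
            return float(loss)

        # Toy usage with a linear model standing in for the recommender.
        model = torch.nn.Linear(16, 1)
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        x, y = torch.randn(64, 16), torch.randn(64, 1)
        mse = lambda m, b: torch.nn.functional.mse_loss(m(b[0]), b[1])
        print(sam_step(model, mse, (x, y), opt))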

    FATA-Trans: Field And Time-Aware Transformer for Sequential Tabular Data

    Sequential tabular data is one of the most commonly used data types in real-world applications. Unlike conventional tabular data, where rows in a table are independent, sequential tabular data contains rich contextual and sequential information, in which some fields change dynamically over time while others are static. Existing transformer-based approaches to sequential tabular data overlook the differences between dynamic and static fields by replicating and filling static fields into each transformer, and they ignore temporal information between rows, which leads to three major disadvantages: (1) computational overhead, (2) artificially simplified data for the masked language modeling pre-training task, which may yield less meaningful representations, and (3) disregard of the temporal behavioral patterns implied by time intervals. In this work, we propose FATA-Trans, a model with two field transformers for modeling sequential tabular data, in which static and dynamic field information are processed separately. FATA-Trans is field- and time-aware for sequential tabular data. The field-type embedding enables FATA-Trans to capture differences between static and dynamic fields, while the time-aware position embedding exploits both order and time-interval information between rows, which helps the model detect underlying temporal behavior in a sequence. Our experiments on three benchmark datasets demonstrate that the learned representations from FATA-Trans consistently outperform state-of-the-art solutions on downstream tasks. We also present visualization studies to highlight the insights captured by the learned representations, enhancing our understanding of the underlying data. Our code is available at https://github.com/zdy93/FATA-Trans. Comment: This work is accepted by the ACM International Conference on Information and Knowledge Management (CIKM) 202
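
    To make the two embedding ideas in the abstract concrete, here is a hedged sketch rather than the released code: each row of a sequential table receives a field-value embedding plus a field-type embedding (static vs. dynamic), a positional embedding for row order, and a time-aware embedding derived from the gap to the previous row. The layer names, sizes, and log-bucketing of time gaps are assumptions for illustration.

        # Illustrative field- and time-aware input embedding (assumed design details).
        import torch
        import torch.nn as nn

        class FieldTimeAwareEmbedding(nn.Module):
            def __init__(self, vocab_size, dim=64, max_len=128, n_gap_buckets=32):
                super().__init__()
                self.value = nn.Embedding(vocab_size, dim)        # tokenized field values
                self.field_type = nn.Embedding(2, dim)            # 0 = static, 1 = dynamic
                self.position = nn.Embedding(max_len, dim)        # row order in the sequence
                self.time_gap = nn.Embedding(n_gap_buckets, dim)  # bucketed time intervals

            def forward(self, values, is_dynamic, timestamps):
                # values: (batch, rows), is_dynamic: (batch, rows) in {0, 1},
                # timestamps: (batch, rows) in seconds, non-decreasing per sequence
                pos = torch.arange(values.size(1), device=values.device).expand_as(values)
                gaps = timestamps - torch.roll(timestamps, 1, dims=1)
                gaps[:, 0] = 0                                    # no gap before the first row
                buckets = torch.clamp(torch.log1p(gaps.float()).long(),
                                      0, self.time_gap.num_embeddings - 1)
                return (self.value(values) + self.field_type(is_dynamic)
                        + self.position(pos) + self.time_gap(buckets))

        # Toy usage: a batch of 4 sequences, 10 rows each.
        emb = FieldTimeAwareEmbedding(vocab_size=500)
        values = torch.randint(0, 500, (4, 10))
        is_dyn = torch.randint(0, 2, (4, 10))
        ts = torch.cumsum(torch.randint(1, 10_000, (4, 10)), dim=1)
        print(emb(values, is_dyn, ts).shape)   # torch.Size([4, 10, 64])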