
    Developing a soil health testing protocol for arable cropping systems in Saskatchewan

    Farmers are looking for appropriate tools and methods for assessing and interpreting the health status of their soils; however, no standardized, prairie-based soil health test is available for Saskatchewan. I therefore focused on developing a soil health testing protocol for arable cropping systems in Saskatchewan by building on the Comprehensive Assessment of Soil Health (CASH) framework developed in the USA. In September and October 2018, soil samples (0-15, 15-30, and 30-60 cm depths) were collected from 55 arable fields across Saskatchewan, along with a couple of native prairie samples. Various soil chemical, physical, and biological attributes were measured (23 attributes in total). Based on the data distribution for each attribute, I developed scoring functions. The results from multivariate analyses were used to determine the weighting factors needed to integrate the individual scores from each soil attribute into a single Saskatchewan Soil Health Score (SSHS). Soil C and N indices (soil organic C, active C, total N, and soil protein) and total P produced the highest weighting factors. I also tested whether there were linkages between the soil health scores and crop productivity by assessing cereal yield trends over the past 10 years for the same rural municipalities where the soil samples were collected. A positive relationship between soil health and yields was most apparent during dry years; thus, I recommend further research to explore this linkage at a finer scale. Overall, this research forms the foundation of a promising tool for Saskatchewan producers who are interested in tracking soil health and using the results to inform management practices.
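    A minimal sketch, with hypothetical attribute scores and weighting factors, of the final integration step described above: individual attribute scores are combined into a single weighted index in the spirit of the SSHS. The thesis's actual scoring functions and weights are not reproduced here.

        # Hypothetical 0-100 scores for a few of the 23 measured attributes.
        scores = {
            "soil_organic_C": 78.0,
            "active_C": 65.0,
            "total_N": 71.0,
            "soil_protein": 60.0,
            "total_P": 55.0,
        }

        # Hypothetical weighting factors (e.g. derived from a multivariate analysis),
        # normalized to sum to 1 over the attributes used.
        weights = {
            "soil_organic_C": 0.25,
            "active_C": 0.22,
            "total_N": 0.21,
            "soil_protein": 0.18,
            "total_P": 0.14,
        }

        # Weighted average of attribute scores gives a single soil health index.
        sshs = sum(scores[k] * weights[k] for k in scores) / sum(weights.values())
        print(f"Illustrative soil health score: {sshs:.1f}")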

    FedDD: Toward Communication-efficient Federated Learning with Differential Parameter Dropout

    Federated Learning (FL) requires frequent exchange of model parameters, which leads to long communication delays, especially when clients' network environments vary greatly. Moreover, the parameter server needs to wait for the slowest client (i.e., the straggler, which may have the largest model size, lowest computing capability, or worst network condition) to upload its parameters, which can significantly degrade communication efficiency. Commonly used client selection methods such as partial client selection waste computing resources and weaken the generalization of the global model. To tackle this problem along a different line, in this paper we advocate model parameter dropout instead of client selection and accordingly propose a novel Federated learning scheme with Differential parameter Dropout (FedDD). FedDD consists of two key modules: dropout rate allocation and uploaded parameter selection, which optimize the model parameter uploading ratios tailored to different clients' heterogeneous conditions and select a proper set of important model parameters for uploading subject to clients' dropout rate constraints. Specifically, the dropout rate allocation is formulated as a convex optimization problem that takes system heterogeneity, data heterogeneity, and model heterogeneity among clients into consideration. The uploaded parameter selection strategy prioritizes eliciting important parameters for uploading to speed up convergence. Furthermore, we theoretically analyze the convergence of the proposed FedDD scheme. Extensive performance evaluations demonstrate that the proposed FedDD scheme achieves outstanding performance in both communication efficiency and model convergence, and also possesses strong generalization capability to data of rare classes.
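    A minimal sketch, with hypothetical names, of the uploaded-parameter-selection idea: keep only the most important updates (approximated here by magnitude) subject to a client's dropout-rate constraint. The paper's actual importance measure and the convex dropout-rate allocation are not reproduced.

        import numpy as np

        def select_upload(delta, dropout_rate):
            """Keep only the largest-magnitude parameter updates, dropping a
            `dropout_rate` fraction before uploading (sparse-upload sketch)."""
            flat = delta.ravel()
            k = max(1, int(round((1.0 - dropout_rate) * flat.size)))  # number kept
            keep_idx = np.argsort(np.abs(flat))[-k:]                  # most important
            sparse = np.zeros_like(flat)
            sparse[keep_idx] = flat[keep_idx]
            return sparse.reshape(delta.shape)

        # Example: a client assigned a 70% dropout rate uploads only 30% of its update.
        rng = np.random.default_rng(0)
        update = rng.normal(size=(4, 5))
        uploaded = select_upload(update, dropout_rate=0.7)
        print(np.count_nonzero(uploaded), "of", update.size, "parameters uploaded")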

    ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces

    In recent years, neural implicit surface reconstruction has emerged as a popular paradigm for multi-view 3D reconstruction. Unlike traditional multi-view stereo approaches, neural implicit surface-based methods leverage neural networks to represent 3D scenes as signed distance functions (SDFs). However, they tend to disregard the reconstruction of individual objects within the scene, which limits their performance and practical applications. To address this issue, the previous work ObjectSDF introduced a framework of object-compositional neural implicit surfaces, which uses 2D instance masks to supervise individual object SDFs. In this paper, we propose a new framework called ObjectSDF++ to overcome the limitations of ObjectSDF. First, in contrast to ObjectSDF, whose performance is primarily restricted by its converted semantic field, the core component of our model is an occlusion-aware object opacity rendering formulation that directly volume-renders object opacity to be supervised with instance masks. Second, we design a novel regularization term for object distinction, which effectively mitigates the issue that ObjectSDF may produce unexpected reconstructions in invisible regions due to the lack of constraints preventing collisions. Our extensive experiments demonstrate that our framework not only produces superior object reconstruction results but also significantly improves the quality of scene reconstruction. Code and more resources can be found at \url{https://qianyiwu.github.io/objectsdf++}
    Comment: ICCV 2023. Project Page: https://qianyiwu.github.io/objectsdf++ Code: https://github.com/QianyiWu/objectsdf_plu
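    A minimal sketch, with hypothetical names and a generic SDF-to-density mapping, of how a per-object opacity could be volume-rendered along one ray from per-object SDFs and then supervised with instance masks. This illustrates the general idea only; it is not the paper's exact formulation.

        import numpy as np

        def sdf_to_density(sdf, beta=0.1):
            """Laplace-CDF style mapping from signed distance to volume density,
            as commonly used in SDF-based volume rendering (illustrative only)."""
            return (0.5 + 0.5 * np.sign(sdf) * (np.exp(-np.abs(sdf) / beta) - 1.0)) / beta

        def render_object_opacity(object_sdfs, deltas):
            """object_sdfs: (num_objects, num_samples) SDF values along one ray.
            deltas: (num_samples,) spacing between consecutive samples.
            Returns per-object accumulated opacity, which could be compared
            against 2D instance masks."""
            density = sdf_to_density(object_sdfs)                   # per-object densities
            scene_density = density.sum(axis=0)                     # composed scene density
            alpha = 1.0 - np.exp(-scene_density * deltas)           # per-sample alpha
            trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]  # transmittance
            weights = trans * alpha                                 # rendering weights
            frac = density / np.clip(scene_density, 1e-8, None)     # object share per sample
            return (weights[None, :] * frac).sum(axis=1)            # per-object opacity

        sdfs = np.array([[0.5, 0.1, -0.2, -0.5], [0.8, 0.6, 0.4, 0.2]])
        print(render_object_opacity(sdfs, deltas=np.full(4, 0.1)))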

    Alive Caricature from 2D to 3D

    Caricature is an art form that expresses subjects in an abstract, simple, and exaggerated way. While many caricatures are 2D images, this paper presents an algorithm for creating expressive 3D caricatures from 2D caricature images with minimal user interaction. The key idea of our approach is to introduce an intrinsic deformation representation with the capacity for extrapolation, enabling us to create a deformation space from a standard face dataset that maintains face constraints while being sufficiently large to produce exaggerated face models. Built upon the proposed deformation representation, an optimization model is formulated to automatically find the 3D caricature that captures the style of the 2D caricature image. Experiments show that our approach has a better capability for expressing caricatures than fitting approaches that directly use classical parametric face models such as 3DMM and FaceWareHouse. Moreover, our approach is based on standard face datasets and avoids constructing a complicated 3D caricature training set, which provides great flexibility in real applications.
    Comment: Accepted to CVPR 201
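    A minimal sketch, using a generic linear deformation space as a stand-in for the paper's intrinsic deformation representation, of how extrapolating deformation coefficients beyond the range spanned by standard faces yields exaggerated shapes; all names and data below are hypothetical.

        import numpy as np

        # Linear stand-in for a deformation space built from standard faces:
        # vertices = mean_shape + basis @ coeffs. The paper's actual representation
        # is intrinsic (deformation-based) rather than a simple vertex-space basis.
        rng = np.random.default_rng(0)
        num_verts, num_modes = 500, 10
        mean_shape = rng.normal(size=(num_verts, 3))
        basis = rng.normal(size=(num_verts * 3, num_modes)) * 0.01

        def deform(coeffs):
            return mean_shape + (basis @ coeffs).reshape(num_verts, 3)

        coeffs = rng.normal(size=num_modes)
        normal_face = deform(coeffs)        # coefficients within the dataset's range
        caricature = deform(3.0 * coeffs)   # extrapolated coefficients exaggerate the shape
        print(np.abs(normal_face - mean_shape).mean(), "->",
              np.abs(caricature - mean_shape).mean())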

    Explicit Correspondence Matching for Generalizable Neural Radiance Fields

    We present a new generalizable NeRF method that is able to directly generalize to new, unseen scenarios and perform novel view synthesis with as few as two source views. The key to our approach lies in the explicitly modeled correspondence matching information, which provides a geometry prior for the prediction of NeRF color and density in volume rendering. The explicit correspondence matching is quantified with the cosine similarity between image features sampled at the 2D projections of a 3D point in different views, which provides reliable cues about the surface geometry. Unlike previous methods where image features are extracted independently for each view, we model the cross-view interactions via Transformer cross-attention, which greatly improves the feature matching quality. Our method achieves state-of-the-art results on different evaluation settings, with experiments showing a strong correlation between our learned cosine feature similarity and volume density, demonstrating the effectiveness and superiority of our proposed method. Code is at https://github.com/donydchen/matchnerf
    Comment: Code and pre-trained models: https://github.com/donydchen/matchnerf Project Page: https://donydchen.github.io/matchnerf
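    A minimal sketch, with hypothetical cameras and feature maps, of the core quantity described above: project a 3D point into two source views, sample image features there, and take their cosine similarity as a cue about surface geometry. Nearest-neighbour sampling is used for brevity; a real implementation would interpolate.

        import numpy as np

        def project(point_3d, K, R, t):
            """Project a 3D world point into pixel coordinates of a view with
            intrinsics K and extrinsics [R | t]."""
            cam = R @ point_3d + t
            uv = K @ cam
            return uv[:2] / uv[2]

        def sample_feature(feat_map, uv):
            """Nearest-neighbour feature lookup; feat_map has shape (H, W, C)."""
            h, w = feat_map.shape[:2]
            x = int(np.clip(np.round(uv[0]), 0, w - 1))
            y = int(np.clip(np.round(uv[1]), 0, h - 1))
            return feat_map[y, x]

        def cosine_similarity(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

        # Hypothetical setup: a point on the true surface should yield consistent
        # (high-similarity) features across the two source views.
        rng = np.random.default_rng(0)
        K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
        feats = [rng.normal(size=(64, 64, 16)) for _ in range(2)]
        point = np.array([0.1, -0.2, 2.0])
        f0 = sample_feature(feats[0], project(point, K, np.eye(3), np.zeros(3)))
        f1 = sample_feature(feats[1], project(point, K, np.eye(3), np.array([0.05, 0, 0])))
        print("cosine feature similarity:", cosine_similarity(f0, f1))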

    Masked Lip-Sync Prediction by Audio-Visual Contextual Exploitation in Transformers

    Previous studies have explored generating accurately lip-synced talking faces for arbitrary targets given audio conditions. However, most of them deform or generate the whole facial area, leading to unrealistic results. In this work, we delve into the formulation of altering only the mouth shape of the target person. This requires masking a large portion of the original image and seamlessly inpainting it with the aid of audio and reference frames. To this end, we propose the Audio-Visual Context-Aware Transformer (AV-CAT) framework, which produces accurate lip-sync with photo-realistic quality by predicting the masked mouth shapes. Our key insight is to thoroughly exploit the contextual information provided by the audio and visual modalities with carefully designed Transformers. Specifically, we propose a convolution-Transformer hybrid backbone and design an attention-based fusion strategy for filling in the masked parts, which uniformly attends to the textural information of the unmasked regions and the reference frame. The semantic audio information is then incorporated to enhance the self-attention computation. Additionally, a refinement network with audio injection improves both image and lip-sync quality. Extensive experiments validate that our model can generate high-fidelity lip-synced results for arbitrary subjects.
    Comment: Accepted to SIGGRAPH Asia 2022 (Conference Proceedings). Project page: https://hangz-nju-cuhk.github.io/projects/AV-CA
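    A minimal sketch, with hypothetical token shapes, of an attention-based fusion step in the spirit of the one described above: queries from the masked mouth region attend to context assembled from the unmasked face, a reference frame, and audio features. The actual AV-CAT architecture is not reproduced here.

        import numpy as np

        def attention(q, k, v):
            """Plain scaled dot-product attention."""
            scores = q @ k.T / np.sqrt(q.shape[-1])
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True)
            return weights @ v

        # Hypothetical token sets: masked-region queries gather information from the
        # visible face, a reference frame, and audio features to fill in the mouth.
        rng = np.random.default_rng(0)
        dim = 32
        masked_tokens = rng.normal(size=(16, dim))      # queries: masked mouth region
        unmasked_tokens = rng.normal(size=(48, dim))    # context: visible face
        reference_tokens = rng.normal(size=(48, dim))   # context: reference frame
        audio_tokens = rng.normal(size=(8, dim))        # context: audio features

        context = np.concatenate([unmasked_tokens, reference_tokens, audio_tokens])
        filled = attention(masked_tokens, context, context)
        print(filled.shape)  # (16, 32): one fused feature per masked token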