244 research outputs found

    Monitoring the Coastal Environment Using Remote Sensing and GIS Techniques

    The coastal zone is important for economic development and ecological restoration because of its rich natural resources and vulnerable ecosystems. Remote sensing techniques have proven to be powerful tools for monitoring the Earth’s surface and atmosphere at global, regional, and even local scales, providing broad coverage and enabling the mapping and classification of land-cover features such as vegetation, soil, water and forests. This chapter introduces methods for monitoring the coastal environment using remote sensing and GIS techniques. Case studies of port-expansion monitoring in representative coastal regions, together with analyses of coastal environmental change, are also presented.
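    As a concrete illustration of the kind of index-based land-cover mapping described above, the sketch below computes NDVI from near-infrared and red bands and thresholds it into coarse cover classes. The band arrays, thresholds, and class labels are illustrative assumptions, not taken from the chapter.

    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """Normalized Difference Vegetation Index from near-infrared and red bands."""
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        return (nir - red) / (nir + red + 1e-10)  # small epsilon avoids division by zero

    def classify_cover(ndvi_map: np.ndarray) -> np.ndarray:
        """Very coarse thresholding into water/bare, sparse and dense vegetation (thresholds are illustrative)."""
        classes = np.zeros(ndvi_map.shape, dtype=np.uint8)  # 0 = water / bare surfaces
        classes[ndvi_map > 0.2] = 1                          # 1 = sparse vegetation / mixed soil
        classes[ndvi_map > 0.5] = 2                          # 2 = dense vegetation
        return classes

    # Synthetic band data standing in for a satellite scene
    rng = np.random.default_rng(0)
    nir_band = rng.uniform(0.0, 1.0, size=(256, 256))
    red_band = rng.uniform(0.0, 1.0, size=(256, 256))
    cover = classify_cover(ndvi(nir_band, red_band))
    print(np.bincount(cover.ravel(), minlength=3))  # pixel count per class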

    Data-driven approach for modeling Reynolds stress tensor with invariance preservation

    This study presents a data-driven turbulence model with Galilean invariance preservation based on machine learning. A fully connected neural network (FCNN) and a tensor basis neural network (TBNN) [Ling et al. (2016)] are established. The models are trained on five flow cases using Reynolds-Averaged Navier-Stokes (RANS) and high-fidelity data. The networks learn mappings from invariants of the mean strain-rate and mean rotation-rate tensors, optionally augmented with invariants of the turbulent kinetic energy gradient, to the Reynolds stress anisotropy tensor. The predicted anisotropy tensor is used as a user-defined RANS turbulence model with a modified turbulent kinetic energy transport equation. The results show that both the FCNN and TBNN models provide more accurate predictions of the anisotropy tensor and turbulent state in square duct flow and periodic flow cases than the RANS model. Including the turbulent-kinetic-energy-gradient invariants improves prediction accuracy over models based only on the mean strain-rate and rotation-rate tensors. The TBNN model predicts better flow velocity profiles than the FCNN model owing to its embedded prior physical knowledge. Comment: 23 pages
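    The TBNN idea described above can be sketched roughly as follows: a small network maps the scalar invariants to coefficients that weight a fixed set of basis tensors, and the weighted sum gives the predicted anisotropy tensor. This PyTorch snippet is a minimal sketch under assumed dimensions (five invariants, ten basis tensors, following the Ling et al. formulation), not the authors' released model.

    import torch
    import torch.nn as nn

    class TensorBasisNN(nn.Module):
        """Sketch of a TBNN: scalar invariants -> coefficients g_n, then
        b = sum_n g_n * T^(n) over a fixed set of basis tensors."""

        def __init__(self, n_invariants: int = 5, n_basis: int = 10, hidden: int = 64):
            super().__init__()
            self.coeff_net = nn.Sequential(
                nn.Linear(n_invariants, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_basis),
            )

        def forward(self, invariants: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
            # invariants: (batch, n_invariants); basis: (batch, n_basis, 3, 3)
            g = self.coeff_net(invariants)                 # coefficients, (batch, n_basis)
            return torch.einsum("bn,bnij->bij", g, basis)  # anisotropy tensor, (batch, 3, 3)

    # Toy forward pass with random stand-ins for the invariants and basis tensors
    model = TensorBasisNN()
    lam = torch.randn(8, 5)
    basis = torch.randn(8, 10, 3, 3)
    print(model(lam, basis).shape)  # torch.Size([8, 3, 3])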

    Running with a Mask? The Effect of Air Pollution on Marathon Runners’ Performance

    Using a sample of over 0.3 million marathon runners in 37 cities and 55 races in China in 2014 and 2015, we estimate the air pollution elasticity of finish time to be 0.041. Our causal identification comes from the exogeneity of air pollution on race day, because runners are required to register for a race a few months in advance, and we control for city fixed effects, seasonal effects, and weather conditions on race day. Including individual fixed effects also provides consistent evidence. Our study contributes to the emerging literature on the effect of air pollution on short-run productivity, particularly the performance of athletes engaged in outdoor sports and other workers whose jobs require intensive physical activity.
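    To read the headline estimate concretely: an elasticity of 0.041 implies that a 10% increase in race-day air pollution is associated with roughly a 0.4% longer finish time. One plausible log-log specification consistent with the description (the notation below is illustrative, not necessarily the authors' exact equation) is

        \ln(\mathrm{FinishTime}_{irc}) = \beta \, \ln(\mathrm{Pollution}_{rc}) + X_{rc}' \gamma + \mu_c + \tau_{s(r)} + \varepsilon_{irc}

    where i indexes runners, r races, and c cities; X_{rc} collects race-day weather controls; \mu_c and \tau_{s(r)} are city and seasonal fixed effects; an individual effect \alpha_i can be added; and \beta is the pollution elasticity of finish time.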

    Active Implicit Object Reconstruction using Uncertainty-guided Next-Best-View Optimization

    Actively planning sensor views during object reconstruction is essential for autonomous mobile robots. This task is usually performed by evaluating information gain from an explicit uncertainty map. Existing algorithms compare a set of preset candidate views and select the next-best-view from them. In contrast, we adopt the emerging implicit representation as the object model and seamlessly combine it with the active reconstruction task. To fully integrate observation information into the model, we propose a supervision method specifically for object-level reconstruction that considers both valid and free space. Additionally, to evaluate view information directly from the implicit object model, we introduce a sample-based uncertainty evaluation method. It samples points on rays directly from the object model and uses the variation of implicit-function inferences as the uncertainty metric, with no need for voxel traversal or an additional information map. Leveraging the differentiability of our metric, the next-best-view can be optimized by continuously maximizing the uncertainty. This does away with the traditionally used candidate-view setting, which may yield sub-optimal results. Experiments in simulation and real-world scenes show that our method effectively improves the reconstruction accuracy and view-planning efficiency of active reconstruction tasks. The proposed system will be open-sourced at https://github.com/HITSZ-NRSL/ActiveImplicitRecon.git. Comment: 8 pages, 10 figures, submitted to IEEE Robotics and Automation Letters (RA-L)
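    The continuous next-best-view idea described above can be sketched as follows: sample points along rays cast from a candidate view, take the disagreement of an implicit object model's predictions as a differentiable uncertainty, and run gradient ascent on the view parameters. Everything here (the untrained MLP ensemble standing in for a trained implicit model, the toy camera geometry, the optimizer settings) is an illustrative assumption, not the released system.

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a trained implicit object model: an ensemble of
    # small MLPs mapping 3D points to occupancy; the variance across members is
    # used as the per-point uncertainty.
    def make_mlp():
        return nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

    ensemble = [make_mlp() for _ in range(4)]

    def view_uncertainty(azim, elev, n_rays=32, n_samples=16):
        # Camera centre on a unit sphere, rays roughly towards the origin (toy geometry).
        cam = torch.stack([torch.cos(elev) * torch.cos(azim),
                           torch.cos(elev) * torch.sin(azim),
                           torch.sin(elev)])
        dirs = -cam + 0.05 * torch.randn(n_rays, 3)              # jittered ray directions
        dirs = dirs / dirs.norm(dim=-1, keepdim=True)
        t = torch.linspace(0.2, 1.8, n_samples).view(1, n_samples, 1)
        pts = cam.view(1, 1, 3) + t * dirs.view(n_rays, 1, 3)    # sample points along each ray
        preds = torch.stack([m(pts.reshape(-1, 3)) for m in ensemble])
        return preds.var(dim=0).mean()                           # scalar view uncertainty

    # Continuous next-best-view search: gradient ascent on the view parameters.
    azim = torch.tensor(0.3, requires_grad=True)
    elev = torch.tensor(0.2, requires_grad=True)
    opt = torch.optim.Adam([azim, elev], lr=1e-2)
    for _ in range(50):
        opt.zero_grad()
        (-view_uncertainty(azim, elev)).backward()  # maximize uncertainty
        opt.step()
    print(float(azim), float(elev))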

    Phase Fluctuation Analysis in Functional Brain Networks of Scaling EEG for Driver Fatigue Detection

    The characterization of complex patterns arising from the electroencephalogram (EEG) is an important problem with significant applications in identifying different mental states. Based on the operational EEG of drivers, a method is proposed to characterize and distinguish different EEG patterns. EEG measurements from seven professional taxi drivers were collected under different states. A phase characterization method was used to calculate the instantaneous phase from the EEG measurements. The drivers' EEG signals were then optimized by performing common spatial pattern analysis. The structures and scaling components of the brain networks derived from the optimized EEG measurements are sensitive to the EEG patterns. The effectiveness of the method is demonstrated, and its applicability is articulated.
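    The instantaneous-phase step mentioned above is commonly computed from the analytic signal via the Hilbert transform. The sketch below does this for a synthetic signal standing in for one EEG channel; the sampling rate, frequency band, and filter order are illustrative assumptions, not the paper's settings.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 250.0                                    # sampling rate in Hz (illustrative)
    t = np.arange(0, 10, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # synthetic 10 Hz rhythm + noise

    # Band-pass to the alpha band (8-13 Hz) before extracting the phase
    b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
    alpha = filtfilt(b, a, eeg)

    analytic = hilbert(alpha)                     # analytic signal via the Hilbert transform
    phase = np.unwrap(np.angle(analytic))         # instantaneous phase
    print(phase[:5])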

    Negative Frames Matter in Egocentric Visual Query 2D Localization

    The recently released Ego4D dataset and benchmark significantly scale and diversify first-person visual perception data. In Ego4D, the Visual Queries 2D Localization task aims to retrieve objects that appeared in the past from a first-person recording. This task requires a system to spatially and temporally localize the most recent appearance of a given object query, where the query is specified by a single tight visual crop of the object in a different scene. Our study is based on the three-stage baseline introduced in the Episodic Memory benchmark. The baseline solves the problem by detection and tracking: detect similar objects in all frames, then run a tracker from the most confident detection. In the VQ2D challenge, we identified two limitations of the current baseline. (1) The training configuration has redundant computation. Although the training set has millions of instances, most of them are repetitive and the number of unique objects is only around 14.6k. Repeated gradient computation on the same object leads to inefficient training. (2) The false positive rate is high on background frames. This is due to the distribution gap between training and evaluation: during training, the model sees only clean, stable, labeled frames, whereas egocentric videos also contain noisy, blurry, or unlabeled background frames. To this end, we developed a more efficient and effective solution. Concretely, we bring the training loop from ~15 days to less than 24 hours, and we achieve 0.17% spatial-temporal AP, which is 31% higher than the baseline. Our solution ranked first on the public leaderboard. Our code is publicly available at https://github.com/facebookresearch/vq2d_cvpr. Comment: First place winning solution for the VQ2D task in the CVPR 2022 Ego4D Challenge. Our code is publicly available at https://github.com/facebookresearch/vq2d_cvpr
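    The two fixes described above (training on unique objects rather than repeated annotations, and mixing unlabeled background frames into batches as negatives) can be sketched as follows. The data structures, field names, and negative ratio are illustrative assumptions, not the released training code.

    import random

    def dedup_by_object(annotations):
        """Keep one annotation per unique object id to avoid redundant gradient steps."""
        seen, unique = set(), []
        for ann in annotations:
            if ann["object_id"] not in seen:
                seen.add(ann["object_id"])
                unique.append(ann)
        return unique

    def make_batch(positive_anns, background_frames, batch_size=8, neg_ratio=0.25):
        """Sample a mini-batch with a fixed fraction of background (negative) frames."""
        n_neg = int(batch_size * neg_ratio)
        negatives = [{"frame": f, "label": "background"}
                     for f in random.sample(background_frames, n_neg)]
        positives = random.sample(positive_anns, batch_size - n_neg)
        batch = positives + negatives
        random.shuffle(batch)
        return batch

    anns = [{"object_id": i % 20, "frame": i, "label": "object"} for i in range(200)]  # many repeated objects
    unique_anns = dedup_by_object(anns)
    batch = make_batch(unique_anns, background_frames=list(range(1000)))
    print(len(unique_anns), len(batch))  # 20 unique objects, batch of 8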

    Where is my Wallet? Modeling Object Proposal Sets for Egocentric Visual Query Localization

    This paper deals with the problem of localizing objects in image and video datasets from visual exemplars. In particular, we focus on the challenging problem of egocentric visual query localization. We first identify grave implicit biases in current query-conditioned model design and visual query datasets. Then, we directly tackle these biases at both the frame and object-set levels. Concretely, our method addresses them by expanding limited annotations and dynamically dropping object proposals during training. Additionally, we propose a novel transformer-based module that allows object-proposal set context to be considered while incorporating query information. We name our module the Conditioned Contextual Transformer, or CocoFormer. Our experiments show that the proposed adaptations improve egocentric query detection, leading to a better visual query localization system in both 2D and 3D configurations. We thus improve frame-level detection performance from 26.28% to 31.26% AP, which correspondingly improves the VQ2D and VQ3D localization scores by significant margins. Our improved context-aware query object detector ranked first and second in the VQ2D and VQ3D tasks of the 2nd Ego4D challenge. In addition, we showcase the relevance of our proposed model on the Few-Shot Detection (FSD) task, where we also achieve SOTA results. Our code is available at https://github.com/facebookresearch/vq2d_cvpr. Comment: We ranked first and second in the VQ2D and VQ3D tasks of the 2nd Ego4D challenge
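    One way to picture a query-conditioned set module like the one described above: fuse each object-proposal feature with the query embedding, let the proposals exchange information through self-attention, and score each proposal against the query. The dimensions, fusion choice, and layer counts below are illustrative assumptions, not CocoFormer's actual architecture.

    import torch
    import torch.nn as nn

    class QueryConditionedSetScorer(nn.Module):
        """Toy query-conditioned set module: fuse each proposal with the query,
        run self-attention over the proposal set, and score each proposal."""

        def __init__(self, dim=256, heads=8, layers=2):
            super().__init__()
            enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
            self.fuse = nn.Linear(2 * dim, dim)    # fuse proposal and query features
            self.score = nn.Linear(dim, 1)         # per-proposal similarity score

        def forward(self, proposals, query):
            # proposals: (batch, n_props, dim); query: (batch, dim)
            q = query.unsqueeze(1).expand_as(proposals)
            fused = self.fuse(torch.cat([proposals, q], dim=-1))
            context = self.encoder(fused)           # proposal-set context via self-attention
            return self.score(context).squeeze(-1)  # (batch, n_props)

    scorer = QueryConditionedSetScorer()
    proposals = torch.randn(2, 50, 256)
    query = torch.randn(2, 256)
    print(scorer(proposals, query).shape)  # torch.Size([2, 50])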

    The genomic and bulked segregant analysis of Curcuma alismatifolia revealed its diverse bract pigmentation

    Compared with most flowers, in which the showy part comprises specialized leaves (petals) directly subtending the reproductive structures, most Zingiberaceae species produce showy "flowers" through modifications of the leaves (bracts) subtending the true flowers throughout an inflorescence. Curcuma alismatifolia, a Zingiberaceae species originating from Southeast Asia, has become increasingly popular in the flower market worldwide because of the varied and esthetically pleasing bracts produced by its different cultivars. Here, we present a chromosome-scale genome assembly of C. alismatifolia "Chiang Mai Pink" and explore the underlying mechanisms of bract pigmentation. Comparative genomic analysis revealed that C. alismatifolia contains a residual signal of whole-genome duplication. Duplicated genes, including pigment-related genes, exhibit functional and structural differentiation, resulting in diverse bract colors among C. alismatifolia cultivars. In addition, by conducting transcriptomic analysis, bulked segregant analysis using both DNA and RNA data, and population genomic analysis, we identified the key genes that produce differently colored bracts in C. alismatifolia, such as F3'5'H, DFR, ANS and several transcription factors for anthocyanin synthesis, as well as chlH and CAO in the chlorophyll synthesis pathway. This work provides data for understanding the mechanism of bract pigmentation, will accelerate the breeding of novel cultivars with richly colored bracts in C. alismatifolia and related species, and is also important for understanding evolutionary variation within the Zingiberaceae family.
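    Bulked segregant analysis, mentioned above, essentially compares allele frequencies between two pooled bulks that differ in the trait of interest. The sketch below computes the SNP index per bulk and their difference for a handful of loci; the read counts are invented for illustration, and a real pipeline would start from aligned sequencing data rather than hard-coded arrays.

    import numpy as np

    # Allele-depth counts at a few loci for two contrasting bulks (invented numbers)
    alt_high = np.array([18, 22, 30, 5, 40])   # alt-allele reads, bulk with the trait
    ref_high = np.array([22, 18, 10, 35, 2])
    alt_low  = np.array([19, 21, 8, 20, 6])    # alt-allele reads, contrasting bulk
    ref_low  = np.array([21, 19, 32, 20, 34])

    snp_index_high = alt_high / (alt_high + ref_high)   # alt-allele frequency per bulk
    snp_index_low  = alt_low / (alt_low + ref_low)
    delta_snp = snp_index_high - snp_index_low          # peaks flag trait-linked regions
    print(np.round(delta_snp, 2))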