The effects of predation risk, shelter, and food availability on the reproduction of Aegean Wall lizards (Podarcis erhardii) in the Greek islands
Reproductive investment, including the average number of offspring produced by an organism, is one of the fundamental characteristics of a species. Among other things, it predicts a species' resilience to environmental disruption: taxa that produce more offspring are able to recover more quickly from environmental perturbations and survive long-term environmental change. Despite the clear importance of this trait, ecologists do not have a good understanding of the primary drivers shaping the reproductive investment of each species. To answer this question, I compare the reproductive efforts of numerous island populations of the Aegean Wall Lizard (Podarcis erhardii), which differ in multiple key environmental characteristics. I test three hypotheses, namely that reproductive investment (measured as clutch size, clutch volume and egg volume) is: 1) positively associated with predation risk ["Predation Risk Hypothesis"]; 2) positively associated with the presence of reliable vegetation cover that provides shelter ["Gravid Female Protection Hypothesis"]; and 3) limited by (and hence positively correlated with) food availability ["Food Limitation Hypothesis"]. Although field data are consistent with all three hypotheses, statistical analysis shows strong support for the Predation Risk Hypothesis. The results not only shed light on which fundamental forces shape reproductive investment in island vertebrates, but can also help set conservation priorities by identifying the most sensitive populations, because reduced reproductive ability can be predicted from easily quantifiable island characteristics (the number of sympatric predator species).
Master of Science, School for Environment and Sustainability, University of Michigan. https://deepblue.lib.umich.edu/bitstream/2027.42/145438/1/Zhao_Yilun_Thesis.pd
MusiCoder: A Universal Music-Acoustic Encoder Based on Transformers
Music annotation has always been one of the critical topics in the field of
Music Information Retrieval (MIR). Traditional models use supervised learning
for music annotation tasks. However, as supervised machine learning approaches
grow in complexity, their increasing need for annotated training data often
cannot be met by the data that is available. In this paper, a new
self-supervised music acoustic representation learning approach named MusiCoder
is proposed. Inspired by the success of BERT, MusiCoder builds upon the
architecture of self-attention bidirectional transformers. Two pre-training
objectives, including Contiguous Frames Masking (CFM) and Contiguous Channels
Masking (CCM), are designed to adapt BERT-like masked reconstruction
pre-training to the continuous acoustic frame domain. The performance of MusiCoder
is evaluated in two downstream music annotation tasks. The results show that
MusiCoder outperforms the state-of-the-art models in both music genre
classification and auto-tagging tasks. The effectiveness of MusiCoder indicates
the great potential of a new self-supervised learning approach to understanding
music: first, masked reconstruction tasks are applied to pre-train a
transformer-based model on massive unlabeled music acoustic data; the model is
then fine-tuned on specific downstream tasks with labeled data.
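To make the pre-training objective more concrete, the sketch below shows one plausible PyTorch implementation of Contiguous Frames Masking (CFM): contiguous spans of acoustic frames are zeroed out and the transformer is trained to reconstruct only those frames. The span length, mask ratio, zero-fill strategy, and L1 reconstruction loss are assumptions for illustration, not necessarily MusiCoder's exact settings.

```python
import torch

def contiguous_frames_mask(features: torch.Tensor, span: int = 8, mask_ratio: float = 0.15):
    """Zero out contiguous spans of acoustic frames.

    features: (batch, time, channels) acoustic frames (e.g., a log-mel spectrogram).
    Returns the masked input and a boolean mask marking the frames to reconstruct.
    """
    batch, time, _ = features.shape
    mask = torch.zeros(batch, time, dtype=torch.bool)
    n_spans = max(1, int(time * mask_ratio / span))
    for b in range(batch):
        for start in torch.randint(0, max(1, time - span), (n_spans,)).tolist():
            mask[b, start:start + span] = True
    masked = features.clone()
    masked[mask] = 0.0  # replace masked frames with zeros (adding noise is another option)
    return masked, mask

def cfm_loss(model, features):
    # Reconstruct only the masked frames with an L1 objective (an assumed choice).
    masked, mask = contiguous_frames_mask(features)
    pred = model(masked)  # transformer encoder + projection back to the frame dimension
    return torch.nn.functional.l1_loss(pred[mask], features[mask])
```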
Intelligent optical performance monitor using multi-task learning based artificial neural network
An intelligent optical performance monitor using multi-task learning based
artificial neural network (MTL-ANN) is designed for simultaneous OSNR
monitoring and modulation format identification (MFI). Signals' amplitude
histograms (AHs) after the constant modulus algorithm are selected as the input
features for MTL-ANN. The experimental results of 20-Gbaud NRZ-OOK, PAM4 and
PAM8 signals demonstrate that MTL-ANN could achieve OSNR monitoring and MFI
simultaneously with higher accuracy and stability compared with single-task
learning based ANNs (STL-ANNs). The results show an MFI accuracy of 100% and
OSNR monitoring root-mean-square error of 0.63 dB for the three modulation
formats under consideration. Furthermore, the number of neurons needed for the
single MTL-ANN is almost half that required by the STL-ANNs, which enables
reduced-complexity optical performance monitoring devices for real-time performance monitoring.
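As a rough illustration of the multi-task setup described above, the sketch below builds a small shared-trunk network in PyTorch with one regression head for OSNR and one classification head for modulation format, taking amplitude-histogram bins as input. The layer sizes, number of histogram bins, and loss weighting are hypothetical, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MTLANN(nn.Module):
    """Shared trunk with two task heads: OSNR regression and MFI classification."""
    def __init__(self, n_bins: int = 100, n_formats: int = 3, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_bins, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.osnr_head = nn.Linear(hidden, 1)          # regression: OSNR in dB
        self.mfi_head = nn.Linear(hidden, n_formats)   # classification: NRZ-OOK / PAM4 / PAM8

    def forward(self, ah):                             # ah: (batch, n_bins) amplitude histogram
        h = self.trunk(ah)
        return self.osnr_head(h).squeeze(-1), self.mfi_head(h)

def mtl_loss(osnr_pred, fmt_logits, osnr_true, fmt_true, alpha: float = 1.0):
    # Joint objective: MSE for OSNR plus cross-entropy for the modulation format.
    return nn.functional.mse_loss(osnr_pred, osnr_true) + \
           alpha * nn.functional.cross_entropy(fmt_logits, fmt_true)
```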
Another R&D Anomaly?
In this paper, we investigate the relation between stock returns and R&D spending under different market conditions. Our empirical evidence suggests that investors' response to R&D activities varies according to stock market status. Following the conventional definitions of markets, we first categorize the market into four different states: slightly up (up by 0-20%), bull (up by more than 20%), slightly down (down by 0-20%), and bear (down by more than 20%). Using firms in high-tech industries from 1992 to 2009 as our sample, we show that investors value R&D spending consistently positively only when the market (proxied by the S&P 500) is up. R&D is valued less in downward markets, and R&D response coefficients even turn negative during bear markets. However, earnings response coefficients are consistently positive regardless of market status. The results remain unchanged after we control for beta, bankruptcy risk, size, and different measuring windows. Our findings cannot be explained by a risk-based hypothesis. The study advances our understanding of the relation between stock returns and R&D activities by empirically documenting its variations in market valuation across different market states; in particular, we find empirical evidence that R&D response coefficients in down markets are negative. The study also provides additional input to the ongoing debate on finding the appropriate accounting treatment for intangible assets.
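As a small illustration of the market-state classification used above, the sketch below (in Python) labels a trailing S&P 500 return with one of the four states; the function name and the use of a single cumulative return are assumptions for illustration.

```python
def market_state(trailing_return: float) -> str:
    """Classify a market period by its cumulative return, following the conventional
    definitions cited above: slightly up (0 to +20%), bull (> +20%),
    slightly down (0 to -20%), bear (< -20%)."""
    if trailing_return > 0.20:
        return "bull"
    if trailing_return >= 0.0:
        return "slightly up"
    if trailing_return >= -0.20:
        return "slightly down"
    return "bear"

# Example: a 25% run-up in the S&P 500 is a bull market; a 12% decline is slightly down.
assert market_state(0.25) == "bull"
assert market_state(-0.12) == "slightly down"
```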
Nordhaus–Gaddum-Type Results for the Steiner Gutman Index of Graphs
Building upon the notion of the Gutman index SGut(G), Mao and Das recently introduced the Steiner Gutman index by incorporating the Steiner distance for a connected graph G. The Steiner Gutman k-index SGut_k(G) of G is defined by $SGut_k(G)=\sum_{S\subseteq V(G),\,|S|=k}\big(\prod_{v\in S}\deg_G(v)\big)\,d_G(S)$, in which $d_G(S)$ is the Steiner distance of S and $\deg_G(v)$ is the degree of v in G. In this paper, we derive new sharp upper and lower bounds on $SGut_k$, and then investigate the Nordhaus–Gaddum-type results for the parameter $SGut_k$. We obtain sharp upper and lower bounds of $SGut_k(G)+SGut_k(\overline{G})$ and $SGut_k(G)\cdot SGut_k(\overline{G})$ for a connected graph G of order n with m edges, maximum degree $\Delta$ and minimum degree $\delta$.
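To make the definition concrete, here is a small Python sketch (assuming networkx) that computes the index for the special case k = 2, where the Steiner distance of a two-vertex set reduces to the ordinary shortest-path distance and SGut_2 coincides with the classical Gutman index. The function name and the brute-force enumeration are purely illustrative.

```python
from itertools import combinations
import networkx as nx

def steiner_gutman_index_k2(G: nx.Graph) -> int:
    """SGut_2(G): sum over vertex pairs {u, v} of deg(u)*deg(v)*d(u, v),
    i.e. the classical Gutman index, since the Steiner distance of a pair
    is just its shortest-path distance."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    total = 0
    for u, v in combinations(G.nodes, 2):
        total += G.degree[u] * G.degree[v] * dist[u][v]
    return total

# Example: the path graph P4, with degree sequence 1, 2, 2, 1.
print(steiner_gutman_index_k2(nx.path_graph(4)))
```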
VectorMapNet: End-to-end Vectorized HD Map Learning
Autonomous driving systems require a good understanding of surrounding
environments, including moving obstacles and static High-Definition (HD)
semantic map elements. Existing methods approach the semantic map problem by
offline manual annotation, which suffers from serious scalability issues.
Recent learning-based methods produce dense rasterized segmentation predictions
to construct maps. However, these predictions do not include instance
information of individual map elements and require heuristic post-processing to
obtain vectorized maps. To tackle these challenges, we introduce an end-to-end
vectorized HD map learning pipeline, termed VectorMapNet. VectorMapNet takes
onboard sensor observations and predicts a sparse set of polylines in the
bird's-eye view. This pipeline can explicitly model the spatial relation
between map elements and generate vectorized maps that are friendly to
downstream autonomous driving tasks. Extensive experiments show that
VectorMapNet achieves strong map learning performance on both the nuScenes and
Argoverse2 datasets, surpassing previous state-of-the-art methods by 14.2 mAP
and 14.6 mAP, respectively. Qualitatively, we also show that VectorMapNet is capable of
generating comprehensive maps and capturing more fine-grained details of road
geometry. To the best of our knowledge, VectorMapNet is the first work designed
towards end-to-end vectorized map learning from onboard observations. Our
project website is available at
https://tsinghua-mars-lab.github.io/vectormapnet/
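As a rough illustration of what a vectorized (rather than rasterized) map output looks like downstream, the sketch below models each predicted element as a typed, ordered polyline in the bird's-eye view. The class, field names, category labels, and confidence threshold are hypothetical and not VectorMapNet's actual interface.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MapElement:
    """One vectorized HD-map element predicted in the bird's-eye view."""
    category: str                        # e.g. "lane_divider", "ped_crossing", "road_boundary"
    polyline: List[Tuple[float, float]]  # ordered BEV vertices in meters, ego-centric frame
    score: float                         # detection confidence

def to_downstream_format(elements: List[MapElement], min_score: float = 0.5):
    """Keep confident elements and hand them to planning/prediction as sparse polylines,
    instead of a dense rasterized segmentation map."""
    return [(e.category, e.polyline) for e in elements if e.score >= min_score]
```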
Large Language Models are Effective Table-to-Text Generators, Evaluators, and Feedback Providers
Large language models (LLMs) have shown remarkable ability on controllable
text generation. However, the potential of LLMs in generating text from
structured tables remains largely under-explored. In this paper, we study the
capabilities of LLMs for table-to-text generation tasks, particularly aiming to
investigate their performance in generating natural language statements that
can be logically entailed by a provided table. First, we investigate how LLMs
compare to state-of-the-art table-to-text fine-tuned models, and demonstrate
that LLMs can generate statements with higher faithfulness compared with
previous state-of-the-art fine-tuned models. Given this finding, we next
explore whether LLMs can serve as faithfulness-level automated evaluation
metrics. Through human evaluation, we show that evaluation metrics adopted from
LLMs correlate better with human judgments than existing
faithfulness-level metrics. Finally, we demonstrate that LLMs using
chain-of-thought prompting can generate high-fidelity natural language feedback
for other table-to-text models' generations, providing insights for future work
regarding the distillation of text generation capabilities from LLMs to smaller models.
Comment: Work in progress.
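As a minimal sketch of how an LLM might be used as a faithfulness-level evaluator, the helper below assembles a chain-of-thought-style prompt from a table and a generated statement. The prompt wording, the Faithful/Unfaithful labels, and the function name are assumptions for illustration, and the actual call to an LLM API is left to the caller.

```python
def faithfulness_eval_prompt(table_markdown: str, statement: str) -> str:
    """Build an LLM prompt asking for a faithfulness judgment on a table-to-text
    statement; sending it to a chat/completions API is up to the caller."""
    return (
        "You are given a table and a statement generated from it.\n\n"
        f"Table:\n{table_markdown}\n\n"
        f"Statement: {statement}\n\n"
        "Think step by step about whether every claim in the statement is logically "
        "entailed by the table, then answer on the final line with exactly "
        "'Faithful' or 'Unfaithful', followed by a one-sentence explanation."
    )
```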
ODSum: New Benchmarks for Open Domain Multi-Document Summarization
Open-domain Multi-Document Summarization (ODMDS) is a critical tool for
condensing vast arrays of documents into coherent, concise summaries. Because
the document sets are more interrelated, there does not necessarily exist a
single correct answer for retrieval, which makes retrieval performance hard to measure.
We propose a rule-based method to process query-based document summarization
datasets into ODMDS datasets. Based on this method, we introduce a novel
dataset, ODSum, a sophisticated case whose document indices are interdependent and
often interrelated. We tackle ODMDS with the \textit{retrieve-then-summarize}
method, and the performance of a list of retrievers and summarizers is
investigated. Through extensive experiments, we identify variances in
evaluation metrics and provide insights into their reliability. We also find
that LLMs suffer a substantial performance loss from retrieval errors. We further
experiment with methods to improve performance and investigate their
robustness against imperfect retrieval. We will release our data and code at
https://github.com/yale-nlp/ODSum
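For concreteness, here is a minimal, API-agnostic sketch of the retrieve-then-summarize setup studied above; the function signatures are assumptions, and any retriever (e.g., BM25 or a dense retriever) and any summarizer (e.g., an LLM) could be plugged in.

```python
from typing import Callable, List

def retrieve_then_summarize(
    query: str,
    corpus: List[str],
    retriever: Callable[[str, List[str], int], List[str]],
    summarizer: Callable[[str, List[str]], str],
    top_k: int = 5,
) -> str:
    """Two-stage ODMDS pipeline: retrieve the top-k documents for the query,
    then summarize only the retrieved subset. Retrieval errors propagate
    directly into the summary, which is the failure mode examined above."""
    retrieved = retriever(query, corpus, top_k)
    return summarizer(query, retrieved)
```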
MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask
Feature warping is a core technique in optical flow estimation; however, the
ambiguity caused by occluded areas during warping is a major problem that
remains unsolved. In this paper, we propose an asymmetric occlusion-aware
feature matching module, which can learn a rough occlusion mask that filters
useless (occluded) areas immediately after feature warping without any explicit
supervision. The proposed module can be easily integrated into end-to-end
network architectures and enjoys performance gains while introducing negligible
computational cost. The learned occlusion mask can be further fed into a
subsequent network cascade with dual feature pyramids with which we achieve
state-of-the-art performance. At the time of submission, our method, called
MaskFlownet, surpasses all published optical flow methods on the MPI Sintel,
KITTI 2012 and 2015 benchmarks. Code is available at
https://github.com/microsoft/MaskFlownet.
Comment: CVPR 2020 (Oral).
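As a rough PyTorch sketch of the general idea, the module below predicts a soft occlusion mask from warped features and uses it to gate out unreliable regions before matching, with no explicit supervision on the mask itself. The layer configuration and the simple multiplicative gating are assumptions for illustration, not MaskFlownet's actual (asymmetric) module.

```python
import torch
import torch.nn as nn

class OcclusionAwareMatching(nn.Module):
    """After warping target features toward the source, predict a soft occlusion
    mask from the warped features and down-weight occluded (unreliable) regions
    before the matching / cost-volume stage. The mask is learned implicitly,
    driven only by the downstream flow loss."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels // 2, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, warped_feat: torch.Tensor) -> torch.Tensor:
        occ_mask = self.mask_head(warped_feat)   # (B, 1, H, W), values in [0, 1]
        return warped_feat * occ_mask            # filtered features for matching
```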
- …