Unstructured data, especially text, continues to grow rapidly in various
domains. In the financial sphere in particular, there is a wealth of
accumulated unstructured financial data, such as the textual disclosure
documents that companies regularly submit to regulatory agencies like the
Securities and Exchange Commission (SEC). These documents are typically
very long and tend to contain valuable soft information about a company's
performance. It is therefore of great interest to learn predictive models from
these long textual documents, especially for forecasting numerical key
performance indicators (KPIs). While there has been great progress in
pre-trained language models (LMs) trained on very large corpora of textual
data, such models still struggle to represent long documents effectively. Our
work addresses this gap, namely how to develop better models that extract
useful information from long textual documents and learn effective features
that leverage their soft financial and risk information for text regression
(prediction) tasks. In this paper, we propose and implement
a deep learning framework that splits long documents into chunks and utilizes
pre-trained LMs to process and aggregate the chunks into vector
representations, followed by self-attention to extract valuable document-level
features. We evaluate our model on a collection of 10-K public disclosure
reports from US banks, and another dataset of reports submitted by US
companies. Overall, our framework outperforms strong baseline methods for
textual modeling as well as a baseline regression model using only numerical
data. Our work provides insight into how pre-trained domain-specific and
fine-tuned long-input LMs can improve the quality of long-document
representations and, in turn, predictive analyses.
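
To make the chunk-and-aggregate idea concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes a generic bert-base-uncased encoder, 512-token chunks, 8 attention heads, and mean pooling over chunks; the model name, chunk length, and pooling choice are assumptions, and a domain-specific or long-input LM could be substituted for the encoder.

```python
# Sketch: split a long filing into chunks, encode each chunk with a pre-trained LM,
# aggregate chunk vectors with self-attention, and regress a numerical KPI.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"  # assumption; any compatible encoder would work
CHUNK_LEN = 512                   # max tokens per chunk, including special tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def chunk_document(text: str):
    """Tokenize a long document into fixed-length chunks.

    Returns input_ids and attention_mask of shape (num_chunks, CHUNK_LEN).
    """
    enc = tokenizer(
        text,
        truncation=True,
        max_length=CHUNK_LEN,
        return_overflowing_tokens=True,  # one row per chunk instead of dropping the tail
        padding="max_length",
        return_tensors="pt",
    )
    return enc["input_ids"], enc["attention_mask"]

class ChunkedDocRegressor(nn.Module):
    """Encode chunks with a pre-trained LM, apply self-attention over the
    chunk vectors, and predict a single numerical target (e.g., a KPI)."""

    def __init__(self, model_name: str = MODEL_NAME, num_heads: int = 8):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Self-attention over the sequence of chunk vectors (document level).
        self.chunk_attn = nn.MultiheadAttention(hidden, num_heads, batch_first=True)
        self.regressor = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        # input_ids / attention_mask: (num_chunks, CHUNK_LEN) for one document.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        chunk_vecs = out.last_hidden_state[:, 0]   # [CLS] vector per chunk
        chunk_vecs = chunk_vecs.unsqueeze(0)       # (1, num_chunks, hidden)
        attended, _ = self.chunk_attn(chunk_vecs, chunk_vecs, chunk_vecs)
        doc_vec = attended.mean(dim=1)             # pool chunks into one document vector
        return self.regressor(doc_vec).squeeze(-1) # predicted KPI value

if __name__ == "__main__":
    model = ChunkedDocRegressor()
    ids, mask = chunk_document("Risk factors and results of operations. " * 500)  # stand-in for a long filing
    with torch.no_grad():
        print(model(ids, mask))  # one scalar prediction for the document
```

In this sketch each chunk contributes one [CLS] vector, and the self-attention layer lets every chunk attend to every other chunk before pooling; swapping in a fine-tuned or long-input encoder would only change the `MODEL_NAME` and chunking parameters.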