CDS Pricing with Counterparty Risk
This thesis focuses on the impact of counterparty risk in CDS (Credit Default
Swap) pricing. The exponential growth of the credit derivatives market over the
last decade has driven demand for the fair valuation of credit derivatives
such as the Credit Default Swap (CDS) and the Collateralized Debt Obligation
(CDO). Financial institutions suffered great losses from credit derivatives
in the sub-prime mortgage market during the credit crunch. Counterparty
risk in CDS contracts has been studied intensively, with a focus on losses
to protection buyers arising from joint defaults of the counterparty and the
reference entity. Using the contagion framework introduced by Jarrow and Yu
(2001) [48], we calculate the swap premium rate via a change-of-measure
technique, and further extend both the two-firm and the three-firm model (with
a defaultable protection buyer) to continuous premium payment. The results show
more explanatory power than the discrete case. We improve the continuous
contagion model by relaxing the constant-intensity assumption and find close
results without loss of generality. Empirically, this thesis studies the
behaviour of the historical credit spreads of 55 sample corporates and
financial institutions; a Cox–Ingersoll–Ross model is applied to calibrate the
spread parameters. A proxy for the counterparty spread is introduced as the
difference between the spread over the benchmark rate and the spread over the
swap rate for 5-year-maturity CDS. We then investigate counterparty risk during
the crisis and study the shape of the term structure of the counterparty
spread, where Rebonato's framework is deployed to model the dynamics of the
term structure using a regime-switching approach.
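The thesis calibrates spread parameters with a Cox–Ingersoll–Ross model. As a minimal sketch of that building block (not the thesis's calibration), one CIR path can be simulated with a full-truncation Euler–Maruyama step; the parameter values below are hypothetical:

```python
import numpy as np

def simulate_cir(kappa, theta, sigma, x0, T=1.0, n_steps=250, seed=0):
    """Simulate one CIR path dx = kappa*(theta - x) dt + sigma*sqrt(x) dW
    using Euler-Maruyama with full truncation at zero."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        xp = max(x[i], 0.0)  # truncation keeps sqrt well-defined
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + kappa * (theta - xp) * dt + sigma * np.sqrt(xp) * dw
    return x

# Hypothetical spread parameters (not from the thesis): mean level 120 bps.
path = simulate_cir(kappa=2.0, theta=0.012, sigma=0.05, x0=0.020)
```

In practice the drift and diffusion parameters would be estimated from the historical spread series rather than fixed by hand.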
Linear Context Transform Block
Squeeze-and-Excitation (SE) block presents a channel attention mechanism for
modeling global context via explicitly capturing dependencies across channels.
However, we are still far from understanding how the SE block works. In this
work, we first revisit the SE block, and then present a detailed empirical
study of the relationship between global context and attention distribution,
based on which we propose a simple yet effective module, called Linear Context
Transform (LCT) block. We divide all channels into different groups and
normalize the globally aggregated context features within each channel group,
reducing the disturbance from irrelevant channels. Through linear transform of
the normalized context features, we model global context for each channel
independently. The LCT block is extremely lightweight and easy to plug
into different backbone models, adding negligible parameters and
computational cost. Extensive experiments show that the LCT block
outperforms the SE block in image classification task on the ImageNet and
object detection/segmentation on the COCO dataset with different backbone
models. Moreover, LCT yields consistent performance gains over existing
state-of-the-art detection architectures, e.g., 1.5–1.7% AP and
1.0–1.2% AP improvements on the COCO benchmark, irrespective of
different baseline models of varied capacities. We hope our simple yet
effective approach will shed some light on future research of attention-based
models.
Comment: Accepted to AAAI-2020
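The operations the abstract describes can be sketched in plain numpy: globally pooled context is normalized within channel groups, then each channel is gated by an independent linear transform followed by a sigmoid. This is a simplified sketch of the idea (in practice the scale and bias are learned parameters in the backbone):

```python
import numpy as np

def lct_block(x, groups=4, eps=1e-5, w=None, b=None):
    """Sketch of a Linear Context Transform block on one feature map
    x of shape (C, H, W): group-normalize the pooled context, then
    gate each channel with a per-channel linear transform + sigmoid."""
    c = x.shape[0]
    assert c % groups == 0
    ctx = x.mean(axis=(1, 2))                # squeeze: global average pooling -> (C,)
    g = ctx.reshape(groups, c // groups)     # split channels into groups
    g = (g - g.mean(axis=1, keepdims=True)) / np.sqrt(g.var(axis=1, keepdims=True) + eps)
    ctx = g.reshape(c)                       # normalized context, per group
    w = np.ones(c) if w is None else w      # per-channel scale (learned in practice)
    b = np.zeros(c) if b is None else b     # per-channel bias (learned in practice)
    gate = 1.0 / (1.0 + np.exp(-(w * ctx + b)))  # sigmoid gating in (0, 1)
    return x * gate[:, None, None]           # re-weight channels

feat = np.random.default_rng(0).normal(size=(16, 4, 4))
out = lct_block(feat, groups=4)
```

Because the gate lies in (0, 1), the block can only attenuate channels, which is what lets it suppress the disturbance from irrelevant channel groups.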
An adaptation reference-point-based multiobjective evolutionary algorithm
It is well known that maintaining a good balance between convergence and diversity is crucial to the performance of multiobjective evolutionary algorithms (MOEAs). However, the shape of the Pareto front (PF) of multiobjective optimization problems (MOPs) affects the performance of MOEAs, especially reference-point-based ones. This paper proposes a reference-point-based adaptive method that studies the PF of MOPs according to the candidate solutions of the population. In addition, a proportion-and-angle function is presented to select elites during environmental selection. Compared with five state-of-the-art MOEAs, the proposed algorithm shows highly competitive effectiveness on MOPs with six complex characteristics.
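A simplified sketch of the kind of reference-point-based environmental selection the abstract refers to: each solution is associated with its closest reference direction by angle, and one elite per direction is kept using a crude convergence measure (objective-vector norm). The selection rule here is illustrative, not the paper's actual proportion-and-angle function:

```python
import numpy as np

def angle_based_selection(objs, ref_dirs, n_select):
    """Associate each solution with its nearest reference direction
    (maximum cosine = minimum angle), then keep the solution with the
    smallest objective norm per direction until n_select are chosen."""
    o = objs / (np.linalg.norm(objs, axis=1, keepdims=True) + 1e-12)
    r = ref_dirs / (np.linalg.norm(ref_dirs, axis=1, keepdims=True) + 1e-12)
    assoc = np.argmax(o @ r.T, axis=1)   # index of closest reference direction
    selected = []
    for d in range(len(ref_dirs)):
        members = np.where(assoc == d)[0]
        if members.size:
            best = members[np.argmin(np.linalg.norm(objs[members], axis=1))]
            selected.append(int(best))
        if len(selected) == n_select:
            break
    return selected

objs = np.array([[0.1, 0.9], [0.9, 0.1], [0.5, 0.5], [0.6, 0.6]])
refs = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
picked = angle_based_selection(objs, refs, n_select=3)  # one elite per direction
```

The association step is what adaptive reference-point methods adjust: reference directions with no associated candidates indicate regions the PF does not cover.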
Monte Carlo Simulation for the Morphology and Kinetics of Spherulites and Shish-Kebabs in Isothermal Polymer Crystallization
A Monte Carlo method is used to capture the evolution of spherulites and shish-kebabs and to predict the crystallization kinetics in isothermal polymer crystallization. The effects on crystallization of the nucleation density and growth rate of spherulites, and of the nucleation density and length growth rate of shish-kebabs, are investigated. Results show that the nucleation densities of both spherulites and shish-kebabs strongly affect the crystallization rate as well as the morphology. An increase in the nucleation density of either spherulites or shish-kebabs leads to a quicker crystallization rate and a smaller average spherulite size. It is also shown that the nucleation density of shish-kebabs has the stronger impact on crystallization rate. The growth rate of spherulites and the length growth rate of shish-kebabs also have significant effects on crystallization rate and morphology. An increase in either speeds up the crystallization rate; additionally, a decrease in the growth rate of spherulites or an increase in the length growth rate of shish-kebabs results in a more highly anisotropic shish-kebab structure and a smaller average spherulite size. Results also show that the effect of the growth rate of spherulites on crystallization is more important than that of the length growth rate of shish-kebabs.
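A toy version of the spherulite part of such a simulation can be sketched on a 2-D lattice: nuclei appear at time zero and grow as discs at a constant radial rate, and the crystallized fraction is tracked over time. This is a deliberately minimal sketch (no shish-kebabs, no sporadic nucleation) with made-up parameter values:

```python
import numpy as np

def spherulite_fraction(grid=100, nuclei=20, growth=1.0, steps=60, seed=0):
    """Toy 2-D isothermal spherulite growth: all nuclei present at t=0,
    discs grow at a constant radial rate `growth`; a cell is crystalline
    once any disc reaches it. Returns crystallized fraction vs. time."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0, grid, size=(nuclei, 2))
    yy, xx = np.mgrid[0:grid, 0:grid]
    d = np.sqrt((xx[None] - centers[:, 0, None, None]) ** 2
                + (yy[None] - centers[:, 1, None, None]) ** 2)
    nearest = d.min(axis=0)               # distance to the closest nucleus
    times = np.arange(1, steps + 1)
    return np.array([(nearest <= growth * t).mean() for t in times])

frac = spherulite_fraction()
```

Raising `nuclei` or `growth` shifts the curve left, which mirrors the abstract's finding that higher nucleation density or growth rate speeds up crystallization.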
A Cross-Domain Approach to Analyzing the Short-Run Impact of COVID-19 on the U.S. Electricity Sector
The novel coronavirus disease (COVID-19) has rapidly spread around the globe
in 2020, with the U.S. becoming the epicenter of COVID-19 cases since late
March. As the U.S. begins to gradually resume economic activity, it is
imperative for policymakers and power system operators to take a scientific
approach to understanding and predicting the impact on the electricity sector.
Here, we release a first-of-its-kind cross-domain open-access data hub,
integrating data from across all existing U.S. wholesale electricity markets
with COVID-19 case, weather, cellular location, and satellite imaging data.
Leveraging cross-domain insights from public health and mobility data, we
uncover a significant reduction in electricity consumption across all markets
that is strongly correlated with the rise in the number of COVID-19 cases,
the degree of
social distancing, and level of commercial activity.
Comment: This paper has been accepted for publication by Joule. The manuscript
can also be accessed from EnerarXiv:
http://www.enerarxiv.org/page/thesis.html?id=198
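The cross-domain association the abstract reports is, at its core, a correlation between load reduction and public-health/mobility indicators. A minimal sketch on entirely synthetic series (not data from the hub):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Synthetic illustration: as cumulative cases rise, the percentage change
# in electricity consumption becomes more negative.
cases = np.array([10, 50, 200, 800, 2000, 5000], float)
load_change_pct = np.array([-0.5, -1.0, -3.0, -6.0, -9.0, -12.0])
r = pearson(np.log(cases), load_change_pct)  # strongly negative correlation
```

The paper's analysis additionally controls for weather and commercial activity, which a raw correlation like this does not.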
UrbanFM: Inferring Fine-Grained Urban Flows
Urban flow monitoring systems play important roles in smart city efforts
around the world. However, the ubiquitous deployment of monitoring devices,
such as CCTVs, induces a long-lasting and enormous cost for maintenance and
operation. This suggests the need for a technology that can reduce the number
of deployed devices, while preventing the degeneration of data accuracy and
granularity. In this paper, we aim to infer the real-time and fine-grained
crowd flows throughout a city based on coarse-grained observations. This task
is challenging due to two reasons: the spatial correlations between coarse- and
fine-grained urban flows, and the complexities of external impacts. To tackle
these issues, we develop a method entitled UrbanFM based on deep neural
networks. Our model consists of two major parts: 1) an inference network to
generate fine-grained flow distributions from coarse-grained inputs by using a
feature extraction module and a novel distributional upsampling module; 2) a
general fusion subnet to further boost the performance by considering the
influences of different external factors. Extensive experiments on two
real-world datasets, namely TaxiBJ and HappyValley, validate the effectiveness
and efficiency of our method compared to seven baselines, demonstrating the
state-of-the-art performance of our approach on the fine-grained urban flow
inference problem.
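The structural constraint behind this inference task can be sketched directly: fine-grained flows inside each coarse cell should form a distribution over the sub-cells times the coarse observation, so that sub-blocks sum exactly to the coarse value. The sketch below (a hypothetical simplification of the distributional upsampling idea, with random logits standing in for the network's output) illustrates that constraint:

```python
import numpy as np

def distributional_upsample(coarse, logits, scale):
    """For each coarse cell, softmax the corresponding scale x scale block
    of logits and multiply by the coarse flow, so every fine sub-block
    sums exactly to its coarse observation."""
    H, W = coarse.shape
    fine = np.zeros((H * scale, W * scale))
    for i in range(H):
        for j in range(W):
            block = logits[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
            e = np.exp(block - block.max())          # stable softmax
            fine[i*scale:(i+1)*scale, j*scale:(j+1)*scale] = \
                coarse[i, j] * e / e.sum()
    return fine

coarse = np.array([[8.0, 4.0], [2.0, 6.0]])          # coarse flow observations
logits = np.random.default_rng(0).normal(size=(4, 4))
fine = distributional_upsample(coarse, logits, scale=2)
```

The constraint guarantees the inferred fine-grained map is always consistent with the coarse-grained sensor readings, whatever the network predicts.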
Gradient-Guided Dynamic Efficient Adversarial Training
Adversarial training is arguably an effective but time-consuming way to train
robust deep neural networks that can withstand strong adversarial attacks. As a
response to the inefficiency, we propose the Dynamic Efficient Adversarial
Training (DEAT), which gradually increases the adversarial iteration during
training. Moreover, we theoretically reveal the connection between the lower
bound of the Lipschitz constant of a given network and the magnitude of its
partial derivative with respect to adversarial examples. Supported by this theoretical finding,
we utilize the gradient's magnitude to quantify the effectiveness of
adversarial training and determine the timing to adjust the training procedure.
This magnitude-based strategy is computationally friendly and easy to implement.
It is especially suited for DEAT and can also be transplanted into a wide range
of adversarial training methods. Our post-investigation suggests that
maintaining the quality of the training adversarial examples at a certain level
is essential to achieve efficient adversarial training, which may shed some
light on future studies.
Comment: 14 pages, 8 figures
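The scheduling idea can be sketched as a simple rule: start with few adversarial iterations and add one whenever the observed gradient magnitude drops below a fraction of the magnitude recorded at the last increase. The threshold rule and values below are illustrative, not the paper's exact criterion:

```python
def deat_schedule(grad_mags, k0=1, gamma=0.8):
    """Toy sketch of magnitude-guided scheduling: k0 adversarial iterations
    initially; add one whenever the gradient magnitude falls below
    gamma * (magnitude at the last increase), then re-anchor the threshold."""
    k, ref, history = k0, grad_mags[0], []
    for m in grad_mags:
        if m < gamma * ref:
            k += 1       # weaker gradients -> strengthen the inner attack
            ref = m      # re-anchor the threshold at the current magnitude
        history.append(k)
    return history

# Simulated per-epoch gradient magnitudes that decay as training converges.
mags = [1.0, 0.95, 0.7, 0.65, 0.5, 0.3, 0.28]
ks = deat_schedule(mags)  # adversarial iterations grow as magnitudes shrink
```

The point of such a schedule is that early epochs get by with cheap, weak attacks, and the expensive multi-step attacks are reserved for later epochs where they matter.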
VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification
One fundamental challenge of vehicle re-identification (re-id) is to learn
robust and discriminative visual representation, given the significant
intra-class vehicle variations across different camera views. As the existing
vehicle datasets are limited in terms of training images and viewpoints, we
propose to build a unique large-scale vehicle dataset (called VehicleNet) by
harnessing four public vehicle datasets, and design a simple yet effective
two-stage progressive approach to learning more robust visual representation
from VehicleNet. The first stage of our approach is to learn the generic
representation for all domains (i.e., source vehicle datasets) by training with
the conventional classification loss. This stage relaxes the full alignment
between the training and testing domains, as it is agnostic to the target
vehicle domain. The second stage is to fine-tune the trained model purely based
on the target vehicle set, by minimizing the distribution discrepancy between
our VehicleNet and any target domain. We discuss our proposed multi-source
dataset VehicleNet and evaluate the effectiveness of the two-stage progressive
representation learning through extensive experiments. We achieve the
state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity
Challenge, and competitive results on two other public vehicle re-id datasets,
i.e., VeRi-776 and VehicleID. We hope this new VehicleNet dataset and the
learned robust representations can pave the way for vehicle re-id in
real-world environments.
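A small but essential step in building such a multi-source training set is merging the vehicle-ID label spaces of the source datasets so a single classification loss can be used in stage one. A minimal sketch of that bookkeeping (the file names and IDs below are made up):

```python
def merge_label_spaces(datasets):
    """Offset the local vehicle IDs of each source dataset into one
    disjoint global label space, as a VehicleNet-style merged training
    set requires for its stage-one classification loss."""
    merged, offset = [], 0
    for samples in datasets:  # each: list of (image_path, local_id) pairs
        n_ids = max(vid for _, vid in samples) + 1
        merged.extend((path, vid + offset) for path, vid in samples)
        offset += n_ids
    return merged, offset     # offset == total number of classes

d1 = [("a.jpg", 0), ("b.jpg", 1)]        # hypothetical source dataset 1
d2 = [("c.jpg", 0), ("d.jpg", 2)]        # hypothetical source dataset 2
merged, n_classes = merge_label_spaces([d1, d2])
```

Stage two then discards this merged classifier head and fine-tunes on the target set alone, which is what relaxes the alignment between training and testing domains.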