A Unified Model for the Two-stage Offline-then-Online Resource Allocation
With the popularity of the Internet, traditional offline resource allocation
has evolved into a new form, called online resource allocation. It features the
online arrivals of agents in the system and the real-time decision-making
requirement upon the arrival of each online agent. Both offline and online
resource allocation have wide applications in various real-world matching
markets ranging from ridesharing to crowdsourcing. There are some emerging
applications such as rebalancing in bike sharing and trip-vehicle dispatching
in ridesharing, which involve a two-stage resource allocation process. The
process consists of an offline phase and another sequential online phase, and
both phases compete for the same set of resources. In this paper, we propose a
unified model which incorporates both offline and online resource allocation
into a single framework. Our model assumes non-uniform and known arrival
distributions for online agents in the second online phase, which can be
learned from historical data. We propose a parameterized linear programming
(LP)-based algorithm, which is shown to be within a constant factor of the
optimal. Experimental results on a real dataset show that our
LP-based approaches outperform the LP-agnostic heuristics in terms of
robustness and effectiveness.
Comment: Accepted by IJCAI 2020
(http://static.ijcai.org/2020-accepted_papers.html); sole copyright holder is
IJCAI (International Joint Conferences on Artificial Intelligence), all rights
reserved.
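The two-stage setting described above can be illustrated with a toy fractional LP in which an offline phase and an expected online phase compete for the same capacities. This is a minimal sketch under assumed data (the resources, rewards, and arrival rates below are invented for illustration), not the paper's actual parameterized formulation.

```python
# Toy two-stage allocation LP: offline allocation x and expected online
# allocation y share the same resource capacities.
import numpy as np
from scipy.optimize import linprog

capacity = np.array([5.0, 3.0])          # units of each shared resource
offline_reward = np.array([2.0, 1.0])    # reward per unit allocated offline
online_reward = np.array([3.0, 2.5])     # expected reward per unit online
arrival_rate = np.array([4.0, 2.0])      # expected online demand per resource

# Maximize offline_reward.x + online_reward.y
# s.t. x_i + y_i <= capacity_i (phases compete), y_i <= arrival_rate_i, x,y >= 0.
n = len(capacity)
c = -np.concatenate([offline_reward, online_reward])  # linprog minimizes
A_ub = np.hstack([np.eye(n), np.eye(n)])              # x_i + y_i <= capacity_i
b_ub = capacity
bounds = [(0, None)] * n + [(0, r) for r in arrival_rate]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x, y = res.x[:n], res.x[n:]
print("offline:", x, "online:", y, "total reward:", -res.fun)
```

Since the (expected) online rewards here exceed the offline ones, the LP reserves capacity for online arrivals first and gives the remainder to the offline phase, which is exactly the tension a two-stage model must resolve.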
The Impact of Derivative Hedging on Risk: Evidence from China
The use of derivatives by companies is increasing, and while there is substantial research on derivative hedging by European and American companies, relatively little examines Asian companies. This paper selects 2020 and 2021 data from 327 companies listed on the Hong Kong Stock Exchange, collects information on derivative use from the companies' annual reports, and performs regression analysis against the companies' risk and other data. The study finds that the use of derivatives can both reduce and increase company risk. Further sub-group analysis shows that derivative hedging is more effective at reducing risk when a company's risk is higher, and that the hedging effect becomes less pronounced as the measurement window lengthens. Based on a study of Asian companies, the findings provide a new research perspective on the impact of derivatives on corporate risk.
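The kind of cross-sectional regression described above can be sketched as an OLS fit of firm risk on a derivative-use indicator plus a control. The variable names, coefficients, and synthetic data below are illustrative stand-ins, not the paper's actual sample or model.

```python
# Sketch: risk_i = b0 + b1 * uses_derivatives_i + b2 * size_i + e_i,
# fit by ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 327                                    # sample size, matching the abstract
uses_derivatives = rng.integers(0, 2, n)   # 1 if the annual report discloses use
size = rng.normal(10.0, 1.0, n)            # hypothetical control, e.g. log assets
risk = 0.5 - 0.2 * uses_derivatives + 0.05 * size + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), uses_derivatives, size])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
print("intercept, derivative effect, size effect:", beta)
```

A negative coefficient on the derivative-use dummy would correspond to the risk-reducing effect the abstract reports for high-risk sub-groups; the sub-group analysis itself would simply rerun this fit on the split samples.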
Distributionally Robust Circuit Design Optimization under Variation Shifts
Due to the significant process variations, designers have to optimize the
statistical performance distribution of nano-scale IC design in most cases.
This problem has been investigated for decades under the formulation of
stochastic optimization, which minimizes the expected value of a performance
metric while assuming that the distribution of process variation is exactly
given. This paper rethinks the variation-aware circuit design optimization from
a new perspective. First, we discuss the variation shift problem, which means
that the actual density function of process variations almost always differs
from the given model and is often unknown. Consequently, we propose to
formulate the variation-aware circuit design optimization as a distributionally
robust optimization problem, which does not require the exact distribution of
process variations. By selecting an appropriate uncertainty set for the
probability density function of process variations, we solve the shift-aware
circuit optimization problem using distributionally robust Bayesian
optimization. This method is validated with both a photonic IC and an
electronic IC. The optimized circuits show excellent robustness against
variation shifts, maintaining strong performance under many possible
distributions of process variations that differ from the given statistical
model. This work has the potential to open a new research direction and
inspire subsequent work at different levels of the EDA flow under the setting
of variation shift.
Comment: Accepted by ICCAD 2023, 8 pages.
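For a finite uncertainty set, the distributionally robust idea reduces to optimizing the worst-case expected performance over several candidate variation distributions. The sketch below uses a toy one-dimensional "circuit metric" and a coarse grid search in place of Bayesian optimization; the metric and the candidate shifts are invented for illustration.

```python
# Distributionally robust design over a finite uncertainty set of
# process-variation distributions (nominal, mean-shifted, variance-shifted).
import numpy as np

rng = np.random.default_rng(1)

def metric(design, variation):
    # Toy performance metric (smaller is better); variation perturbs the design.
    return (design + variation - 1.0) ** 2

# Fixed Monte Carlo samples per candidate distribution (common random numbers).
samples = [
    rng.normal(0.0, 0.1, 20000),   # nominal variation model
    rng.normal(0.2, 0.1, 20000),   # mean shift
    rng.normal(0.0, 0.3, 20000),   # variance shift
]

def worst_case_expected(design):
    # Worst-case (max over the uncertainty set) of the expected metric.
    return max(np.mean(metric(design, v)) for v in samples)

# Coarse 1-D search standing in for distributionally robust Bayesian opt.
designs = np.linspace(0.0, 2.0, 201)
best = min(designs, key=worst_case_expected)
print("robust design:", best)
```

Minimizing the max over candidate distributions, rather than the expectation under one assumed model, is what makes the chosen design insensitive to which distribution actually materializes.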
DiffMorpher: Unleashing the Capability of Diffusion Models for Image Morphing
Diffusion models have achieved remarkable image generation quality surpassing
previous generative models. However, a notable limitation of diffusion models,
in comparison to GANs, is their difficulty in smoothly interpolating between
two image samples, due to their highly unstructured latent space. Such a smooth
interpolation is intriguing as it naturally serves as a solution for the image
morphing task with many applications. In this work, we present DiffMorpher, the
first approach enabling smooth and natural image interpolation using diffusion
models. Our key idea is to capture the semantics of the two images by fitting
two LoRAs to them respectively, and interpolate between both the LoRA
parameters and the latent noises to ensure a smooth semantic transition, where
correspondence automatically emerges without the need for annotation. In
addition, we propose an attention interpolation and injection technique and a
new sampling schedule to further enhance the smoothness between consecutive
images. Extensive experiments demonstrate that DiffMorpher achieves starkly
better image morphing effects than previous methods across a variety of object
categories, bridging a critical functional gap that distinguished diffusion
models from GANs.
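The interpolation scheme described above can be sketched as a linear interpolation of two sets of (toy) LoRA weights together with a spherical interpolation of two latent noise tensors. The shapes and names below are illustrative; the real method operates on the LoRA modules and latents of a diffusion model.

```python
# Sketch: interpolate LoRA parameters linearly and latent noise spherically.
import numpy as np

def lerp_params(params_a, params_b, t):
    """Linear interpolation of matching parameter dicts."""
    return {k: (1 - t) * params_a[k] + t * params_b[k] for k in params_a}

def slerp(z_a, z_b, t, eps=1e-8):
    """Spherical interpolation of two noise tensors."""
    a, b = z_a.ravel(), z_b.ravel()
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(cos, -1.0, 1.0))
    if omega < eps:                       # nearly parallel: fall back to lerp
        return (1 - t) * z_a + t * z_b
    return (np.sin((1 - t) * omega) * z_a + np.sin(t * omega) * z_b) / np.sin(omega)

rng = np.random.default_rng(0)
lora_a = {"down": rng.normal(size=(4, 8)), "up": rng.normal(size=(8, 4))}
lora_b = {"down": rng.normal(size=(4, 8)), "up": rng.normal(size=(8, 4))}
noise_a, noise_b = rng.normal(size=64), rng.normal(size=64)

for t in (0.0, 0.5, 1.0):
    mid_lora = lerp_params(lora_a, lora_b, t)   # semantic interpolation
    mid_noise = slerp(noise_a, noise_b, t)      # latent interpolation
```

Slerp is the usual choice for Gaussian latents because it keeps intermediate noise at roughly the norm the diffusion model expects, which linear interpolation does not.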
A compactness-based saliency approach for leakage detection in fluorescein angiograms
This study develops a novel saliency detection method based on a compactness feature for detecting three common types of leakage in retinal fluorescein angiograms: large focal, punctate focal, and vessel segment leakage. Leakage from retinal vessels occurs in a wide range of retinal diseases, such as diabetic maculopathy and paediatric malarial retinopathy. The proposed framework consists of three major steps: saliency detection, saliency refinement, and leakage detection. First, Retinex theory is adapted to address the illumination inhomogeneity problem. Then two saliency cues, intensity and compactness, are proposed to estimate the saliency map of each superpixel at each level. The saliency maps at different levels over the same cues are fused using an averaging operator. Finally, the leaking sites are detected by masking out the vessel and optic disc regions. The effectiveness of this framework has been evaluated on different types of leakage images from patients with cerebral malaria. The sensitivity in detecting large focal, punctate focal, and vessel segment leakage is 98.1%, 88.2%, and 82.7%, respectively, when compared to a reference standard of manual annotations by expert human observers. The developed framework will become a powerful new tool for studying retinal conditions involving retinal leakage.
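The compactness cue above can be illustrated on a toy segmentation: a region whose pixels cluster tightly around their spatial centroid scores as more compact (and, combined with intensity, more salient) than an elongated one. The label grid and the specific compactness formula below are illustrative, not the paper's superpixel pipeline.

```python
# Compactness cue: inverse of the mean squared distance of a region's
# pixels to the region's spatial centroid.
import numpy as np

def compactness(labels, region_id):
    """1 / (1 + mean squared pixel distance to the region centroid)."""
    ys, xs = np.nonzero(labels == region_id)
    cy, cx = ys.mean(), xs.mean()
    spread = np.mean((ys - cy) ** 2 + (xs - cx) ** 2)
    return 1.0 / (1.0 + spread)

labels = np.zeros((20, 20), dtype=int)
labels[8:12, 8:12] = 1        # small compact blob, like a focal leakage site
labels[0, :] = 2              # long thin strip, like a vessel segment

print(compactness(labels, 1), compactness(labels, 2))
```

The compact blob scores well above the elongated strip, which is why a compactness cue separates focal leakage from vessel-like structures before the vessel mask is even applied.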