Transmission efficiency limit for nonlocal metalenses
The rapidly advancing capabilities in nanophotonic design are enabling
complex functionalities limited mainly by physical bounds. The efficiency of
transmission is a major consideration, but its ultimate limit remains unknown
for most systems. Here, we introduce a matrix formalism that puts a fundamental
bound on the channel-averaged transmission efficiency of any passive
multi-channel optical system based only on energy conservation and the desired
functionality, independent of the interior structure and material composition.
Applying this formalism to diffraction-limited nonlocal metalenses with a wide
field of view, we show that the transmission efficiency must decrease with the
numerical aperture for the commonly adopted designs with equal entrance and
output aperture diameters. We also show that reducing the size of the entrance
aperture can raise the efficiency bound. This work reveals a fundamental limit
on the transmission efficiency and provides guidance for the design of
high-efficiency multi-channel optical systems.
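The abstract's key quantities can be illustrated with a short numerical sketch (this is not the paper's bound, only the energy-conservation constraint it builds on): for a passive multi-channel system, no singular value of the transmission matrix may exceed 1, and the channel-averaged transmission efficiency is the mean transmitted power over the input channels. The matrix, channel count, and normalization below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 64  # number of input/output channels (illustrative)

    # A candidate transmission matrix t (output channels x input channels),
    # rescaled so that its largest singular value stays below 1.
    t = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    t /= 1.1 * np.linalg.norm(t, 2)

    # Passivity / energy conservation: no singular value may exceed 1.
    sigma = np.linalg.svd(t, compute_uv=False)
    assert sigma.max() <= 1.0

    # Channel-averaged transmission efficiency: mean transmitted power over
    # all input channels, i.e. the mean squared column norm of t.
    eta_avg = np.mean(np.sum(np.abs(t) ** 2, axis=0))
    print(f"max singular value = {sigma.max():.3f}, "
          f"channel-averaged transmission = {eta_avg:.3f}")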
Robustness and Generalizability of Deepfake Detection: A Study with Diffusion Models
The rise of deepfake images, especially of well-known personalities, poses a
serious threat to the dissemination of authentic information. To tackle this,
we present a thorough investigation into how deepfakes are produced and how
they can be identified. The cornerstone of our research is a rich collection of
artificial celebrity faces, titled DeepFakeFace (DFF). We crafted the DFF
dataset using advanced diffusion models and have shared it with the community
through online platforms. This data serves as a robust foundation to train and
test algorithms designed to spot deepfakes. We carried out a thorough review of
the DFF dataset and suggest two evaluation methods to gauge the strength and
adaptability of deepfake recognition tools. The first method tests whether an
algorithm trained on one type of fake images can recognize those produced by
other methods. The second evaluates the algorithm's performance with imperfect
images, like those that are blurry, of low quality, or compressed. Given varied
results across deepfake methods and image changes, our findings stress the need
for better deepfake detectors. Our DFF dataset and tests aim to boost the
development of more effective tools against deepfakes. Comment: 8 pages, 5 figures.
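The two evaluation protocols can be summarized, under assumed interfaces (a detector with fit/score methods, a split loader, and a corruption function; none of these names come from the paper), as follows:

    from typing import Callable, Dict, Iterable, Sequence

    def cross_generator_eval(detector, load_split: Callable[[str, str], object],
                             train_gen: str, test_gens: Iterable[str]) -> Dict[str, float]:
        """Protocol 1: train on fakes from one diffusion model, test on the others."""
        detector.fit(load_split(train_gen, "train"))
        return {g: detector.score(load_split(g, "test")) for g in test_gens}

    def robustness_eval(detector, load_split, corrupt: Callable[[object, str], object],
                        corruptions: Sequence[str] = ("blur", "jpeg", "downscale")) -> Dict[str, float]:
        """Protocol 2: score the trained detector on degraded test images
        (blurred, compressed, or low-quality copies)."""
        clean = load_split("all", "test")
        return {c: detector.score(corrupt(clean, c)) for c in corruptions}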
The Impact of Social Movement on Racial Diversification Initiatives: Evidence From the Movie Industry
The movie industry is facing rising advocacy for racially inclusive casting. However, it remains an open question whether the promised benefits of racial diversification will materialize. Using data from 540 movies nested in 258 sequels released from 2008 to 2021, we find that, on average, increasing the number of racial-minority actors in the main cast depresses movie evaluations. More importantly, this negative effect of racial diversification attenuates after Black Lives Matter (#BLM), a social movement enabled by new media. Further, incorporating insights from tokenism and discrimination theories, we probe the heterogeneity in the bias-mitigation effects of #BLM and find that movie type and the core production team's credentials are important boundary conditions. The present research shows that a social movement that seeks to address racial inequality can, indeed, lead to meaningful changes in public opinion toward racially inclusive initiatives. It also provides perspectives for thinking about the mechanisms underlying such changes.
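Under loose assumptions about variable names (none taken from the paper), the core comparison resembles a regression with an interaction between minority-cast representation and a post-#BLM indicator; the sketch below is generic and omits the authors' full specification and controls.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical columns: 'rating' (movie evaluation), 'minority_cast' (number
    # of racial-minority actors in the main cast), 'post_blm' (1 if released
    # after #BLM), 'sequel_id' (the franchise a movie is nested in).
    def fit_blm_interaction(df: pd.DataFrame):
        # The minority_cast:post_blm coefficient tests whether the negative
        # effect of racial diversification attenuates after the movement.
        model = smf.ols("rating ~ minority_cast * post_blm + C(sequel_id)", data=df)
        return model.fit(cov_type="cluster", cov_kwds={"groups": df["sequel_id"]})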
Self-Tuned Deep Super Resolution
Deep learning has been successfully applied to image super resolution (SR).
In this paper, we propose a deep joint super resolution (DJSR) model to exploit
both external and self similarities for SR. A Stacked Denoising Convolutional
Auto Encoder (SDCAE) is first pre-trained on external examples with proper data
augmentations. It is then fine-tuned with multi-scale self examples from each
input, where the reliability of self examples is explicitly taken into account.
We also enhance the model performance by sub-model training and selection. The
DJSR model is extensively evaluated and compared with state-of-the-art methods, and shows
noticeable performance improvements, both quantitatively and perceptually, on a
wide range of images.
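A compact sketch of the two-stage schedule described above; the tiny network stands in for the SDCAE and the training loop is generic, so treat every name here as an assumption rather than the paper's implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyConvAE(nn.Module):
        """Stand-in for the stacked denoising convolutional auto-encoder (SDCAE)."""
        def __init__(self, ch: int = 64):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
            self.dec = nn.Conv2d(ch, 1, 3, padding=1)

        def forward(self, x):
            return self.dec(self.enc(x))

    def train_stage(model, pairs, epochs, lr, weights=None):
        """One training stage over (low-res, high-res) patch pairs; optional
        per-pair weights encode the reliability of self examples."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for i, (lo, hi) in enumerate(pairs):
                up = F.interpolate(lo, size=hi.shape[-2:], mode="bicubic",
                                   align_corners=False)
                loss = F.mse_loss(model(up), hi)
                if weights is not None:
                    loss = weights[i] * loss
                opt.zero_grad(); loss.backward(); opt.step()

    # Stage 1: pre-train on external LR/HR patch pairs with data augmentation.
    # Stage 2: fine-tune on multi-scale self examples cropped from the input
    #          image, weighting each pair by the reliability of the match.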
Stability and Generalization of ℓp-Regularized Stochastic Learning for GCN
Graph convolutional networks (GCN) are viewed as one of the most popular
representations among the variants of graph neural networks over graph data and
have shown powerful performance in empirical experiments. The ℓ2-based
graph smoothing enforces the global smoothness of GCN, while (soft)
ℓ1-based sparse graph learning tends to promote signal sparsity to trade
for discontinuity. This paper aims to quantify the trade-off of GCN between
smoothness and sparsity, with the help of a general ℓp-regularized
stochastic learning scheme proposed within. While stability-based
generalization analyses have been given in prior work for objective functions
that admit a second derivative, our ℓp-regularized learning scheme does not
satisfy such a smoothness condition. To tackle this issue, we propose a novel SGD
proximal algorithm for GCNs with an inexact operator. For a single-layer GCN,
we establish an explicit theoretical understanding of GCN with the
ℓp-regularized stochastic learning by analyzing the stability of our SGD
proximal algorithm. We conduct multiple empirical experiments to validate our
theoretical findings. Comment: Accepted to IJCAI 202
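For intuition about the non-smooth update, here is a minimal sketch of an inexact proximal SGD step for an ℓp penalty (1 < p ≤ 2), approximating the coordinate-wise proximal operator with a few Newton iterations; it illustrates the kind of inexact operator the abstract mentions, not the authors' exact algorithm.

    import numpy as np

    def inexact_prox_lp(v: np.ndarray, lam: float, p: float, iters: int = 5) -> np.ndarray:
        """Approximate prox of lam*|w|^p, coordinate-wise:
        argmin_w 0.5*(w - v)**2 + lam*|w|**p, via a few (unsafeguarded) Newton steps."""
        w = v.copy()
        for _ in range(iters):
            aw = np.abs(w) + 1e-12  # avoid division by zero at w = 0
            grad = (w - v) + lam * p * np.sign(w) * aw ** (p - 1)
            hess = 1.0 + lam * p * (p - 1) * aw ** (p - 2)
            w = w - grad / hess
        return w

    def prox_sgd_step(w, grad_w, lr, lam, p):
        """One inexact proximal SGD update: gradient step on the smooth GCN loss,
        then an approximate prox of the lp regularizer."""
        return inexact_prox_lp(w - lr * grad_w, lr * lam, p)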
A new fracture permeability model of CBM reservoir with high-dip angle in the southern Junggar Basin, NW China
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by the National Major Research Program for Science and Technology of China (2016ZX05043-001), the National Natural Science Fund of China (grant nos. 41602170 and 41772160), the Royal Society International Exchanges-China NSFC Joint Project (grant nos. 4161101405 and RG13991-10), and Key Research and Development Projects of the Xinjiang Uygur Autonomous Region (2017B03019-01).
High-efficiency high-NA metalens designed by maximizing the efficiency limit
Theoretical bounds are commonly used to assess the limitations of photonic
design. Here we introduce a more active way to use theoretical bounds,
integrating them into part of the design process and identifying optimal system
parameters that maximize the efficiency limit itself. As an example, we
consider wide-field-of-view high-numerical-aperture metalenses, which can be
used for high-resolution imaging in microscopy and endoscopy, but no existing
design has achieved a high efficiency. By choosing aperture sizes to maximize
an efficiency bound, setting the thickness according to a thickness bound, and
then performing inverse design, we come up with high-numerical-aperture (NA =
0.9) metalens designs with record-high 98% transmission efficiency and 92%
Strehl ratio across all incident angles within a 60-deg field of view, reaching
the maximized bound. This maximizing-efficiency-limit approach applies to any
multi-channel system and can help a wide range of optical devices reach their
highest possible performance.
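The "design by maximizing the bound" step can be pictured as a simple parameter scan; efficiency_bound below is a hypothetical callable standing in for the paper's channel-averaged efficiency limit, and everything else is an illustrative assumption.

    import numpy as np

    def pick_entrance_aperture(efficiency_bound, output_diameter, candidates):
        """Scan candidate entrance-aperture diameters and keep the one whose
        theoretical efficiency bound is highest."""
        bounds = np.array([efficiency_bound(d, output_diameter) for d in candidates])
        best = int(np.argmax(bounds))
        return candidates[best], bounds[best]

    # With the aperture fixed (and a thickness chosen from a thickness bound),
    # the remaining structure is obtained by inverse design.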