The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent and possibilities are endless, through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when properly developed,
including technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, allowing
users, scholars, and entrepreneurs to gain an in-depth understanding of the
Metaverse ecosystem and identify their opportunities for contribution.
Computational approach to the Schottky problem
We present a computational approach to the classical Schottky problem based
on Fay's trisecant identity for genus g. For a given Riemann matrix B,
the Fay identity establishes linear dependence
of secants in the Kummer variety if and only if the Riemann matrix corresponds
to a Jacobian variety, as shown by Krichever. The theta functions in terms of
which these secants are expressed depend on the Abel maps of four arbitrary
points on a Riemann surface. However, there is no concept of an Abel map for
a general Riemann matrix B. To establish linear dependence of the
secants, four components of the vectors entering the theta functions can be
chosen freely. The remaining components are determined by a Newton iteration to
minimize the residual of the Fay identity. Krichever's theorem assures that if
this residual vanishes within the finite numerical precision for a generic
choice of input data, then the Riemann matrix is, to this numerical precision,
the period matrix of a Riemann surface. In genus 4, the algorithm is compared
on several examples to the Schottky-Igusa modular form, which is known to give
the Jacobi locus in this case. It is shown that the Schottky-Igusa form and the
approach based on the Fay identity achieve the same residuals in this case. In
genera 5, 6 and 7, we discuss known examples of Riemann matrices and
perturbations thereof for which the Fay identity is not satisfied.
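The core numerical step above, a Newton iteration that drives a residual to zero in the components not fixed in advance, can be sketched generically. Below is a minimal numpy sketch with a finite-difference Jacobian; the function name and the toy residual in the usage note are illustrative stand-ins, not the paper's actual Fay residual.

```python
import numpy as np

def newton_minimize_residual(residual, x0, tol=1e-12, max_iter=50, h=1e-7):
    """Generic Newton iteration driving a vector residual to zero.

    Stand-in for the procedure described above: the free components of
    the theta-function arguments are fixed, and the remaining ones
    (collected in x) are updated until the residual vanishes to
    numerical precision.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.atleast_1d(residual(x))
        if np.linalg.norm(r) < tol:
            break
        # Finite-difference Jacobian of the residual.
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (np.atleast_1d(residual(xp)) - r) / h
        # Newton step via least squares (robust if J is ill-conditioned).
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x, np.linalg.norm(np.atleast_1d(residual(x)))
```

For instance, on the toy system x0^2 + x1 = 3, x0 + x1^2 = 5 the iteration converges from (1, 1) to the root (1, 2) with residual at machine-precision level.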
Bayesian Optimization with Conformal Prediction Sets
Bayesian optimization is a coherent, ubiquitous approach to decision-making
under uncertainty, with applications including multi-arm bandits, active
learning, and black-box optimization. Bayesian optimization selects decisions
(i.e. objective function queries) with maximal expected utility with respect to
the posterior distribution of a Bayesian model, which quantifies reducible,
epistemic uncertainty about query outcomes. In practice, subjectively
implausible outcomes can occur regularly for two reasons: 1) model
misspecification and 2) covariate shift. Conformal prediction is an uncertainty
quantification method with coverage guarantees even for misspecified models and
a simple mechanism to correct for covariate shift. We propose conformal
Bayesian optimization, which directs queries towards regions of search space
where the model predictions have guaranteed validity, and investigate its
behavior on a suite of black-box optimization tasks and tabular ranking tasks.
In many cases we find that query coverage can be significantly improved without
harming sample-efficiency.
Comment: For code, see
https://www.github.com/samuelstanton/conformal-bayesopt.gi
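The coverage mechanism the abstract relies on can be illustrated with split conformal regression: calibrate a quantile of nonconformity scores on held-out data, then widen every prediction by that quantile. The resulting intervals cover at rate at least 1 - alpha even when the model is misspecified. This is a minimal numpy sketch of the standard split conformal procedure; the function name and the absolute-residual score are illustrative choices, and the paper's method goes further by integrating such sets into the acquisition step.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction intervals around a fitted regressor.

    Coverage >= 1 - alpha holds for exchangeable data even if `model`
    is misspecified, which is the guarantee conformal Bayesian
    optimization exploits to keep queries in regions of valid predictions.
    """
    scores = np.abs(y_cal - model(X_cal))        # nonconformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))      # conformal quantile index
    q = np.sort(scores)[min(k, n) - 1]
    mu = model(X_test)
    return mu - q, mu + q
```

As a sanity check, a deliberately misspecified linear fit to quadratic data still yields roughly 90% empirical coverage at alpha = 0.1, because the calibration quantile absorbs the model error.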
Patching Weak Convolutional Neural Network Models through Modularization and Composition
Despite great success in many applications, deep neural networks are not
always robust in practice. For instance, a convolutional neural network (CNN)
model for classification tasks often performs unsatisfactorily in classifying
some particular classes of objects. In this work, we are concerned with
patching the weak part of a CNN model instead of improving it through the
costly retraining of the entire model. Inspired by the fundamental concepts of
modularization and composition in software engineering, we propose a compressed
modularization approach, CNNSplitter, which decomposes a strong CNN model for
N-class classification into N smaller CNN modules. Each module is a
sub-model containing a part of the convolution kernels of the strong model. To
patch a weak CNN model that performs unsatisfactorily on a target class (TC),
we compose the weak CNN model with the corresponding module obtained from a
strong CNN model. The ability of the weak CNN model to recognize the TC can
thus be improved through patching. Moreover, the ability to recognize non-TCs
is also improved, as the samples misclassified as TC could be classified as
non-TCs correctly. Experimental results with two representative CNNs on three
widely used datasets show that the average improvements on the TC in terms of
precision and recall are 12.54% and 2.14%, respectively. Moreover, patching
improves the accuracy on non-TCs by 1.18%. The results demonstrate that
CNNSplitter can patch a weak CNN model through modularization and composition,
thus providing a new solution for developing robust CNN models.
Comment: Accepted at ASE'2
3d mirror symmetry of braided tensor categories
We study the braided tensor structure of line operators in the topological A
and B twists of abelian 3d N=4 gauge theories, as accessed via
boundary vertex operator algebras (VOA's). We focus exclusively on abelian
theories. We first find a non-perturbative completion of boundary VOA's in the
B twist, which start out as certain affine Lie superalgebras; and we construct
free-field realizations of both A and B-twist VOA's, finding an interesting
interplay with the symmetry fractionalization group of bulk theories. We use
the free-field realizations to establish an isomorphism between A and B VOA's
related by 3d mirror symmetry. Turning to line operators, we extend previous
physical classifications of line operators to include new monodromy defects and
bound states. We also outline a mechanism by which continuous global symmetries
in a physical theory are promoted to higher symmetries in a topological twist
-- in our case, these are infinite one-form symmetries, related to boundary
spectral flow, which structure the categories of lines and control abelian
gauging. Finally, we establish the existence of braided tensor structure on
categories of line operators, viewed as non-semisimple categories of modules
for boundary VOA's. In the A twist, we obtain the categories by extending
modules of symplectic boson VOA's, corresponding to gauging free
hypermultiplets; in the B twist, we instead extend Kazhdan-Lusztig categories
for affine Lie superalgebras. We prove braided tensor equivalences among the
categories of 3d-mirror theories. All results on VOA's and their module
categories are mathematically rigorous; they rely strongly on recently
developed techniques to access non-semisimple extensions.
Comment: 158 pages, comments welcome
Explicit spectral gap for Schottky subgroups of SL(2,Z)
Let Γ be a Schottky subgroup of SL(2,Z). We establish a uniform and explicit
lower bound on the second eigenvalue of the Laplace-Beltrami operator of
congruence coverings of the associated hyperbolic surface, provided the limit
set of Γ is thick enough.
Comment: 31 pages
A general framework for modelling limit order book dynamics
We present a mathematical framework for modelling the dynamics of limit order books, built on the combination of two modelling ingredients: the order flow, modelled as a general spatial point process, and market clearing, modelled via a deterministic 'mass transport' operator acting on distributions of buy and sell orders. At the mathematical level, this corresponds to a natural decomposition of the infinitesimal generator describing the evolution of the limit order book into two operators: the generator of the order flow and the clearing operator. Our model provides a flexible framework for modelling and simulating order book dynamics and studying various scaling limits of discrete order book models. We show that our framework includes previous models as special cases and yields insights into the interaction between the order flow and price dynamics, and into the use of order book data for the prediction of intraday price movements.
The framework also allows for model comparison and for the study of the order flow. The modular structure of the model is well suited to simulation and allows the stochastic model for the order flow and the clearing mechanism to be specified independently. As a simple demonstration, models with different assumptions on the order intensities are compared. The simulation results show that orders of relatively large size also play an essential role in the evolution of the order book process.
We further investigate the asymptotic behaviour of the order book processes, including the fluid scaling and the diffusion scaling. The decomposition relation between the order flow and the order book holds when the ask and bid price do not move. In general, the price processes depend on the order flow and the current state of the order book. We prove that as the tick size becomes small, the ask price and bid price converge to the same limiting process.
Open Access
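The two modelling ingredients, a point process for the order flow and a deterministic clearing operator, can be exercised in a toy simulation. All choices below (unit orders, uniform placement on a fixed price grid, the function name) are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def simulate_book(n_events=2000, n_levels=20, p_buy=0.5, seed=1):
    """Toy order book: random order flow followed by deterministic clearing.

    Order flow places unit buy/sell orders uniformly on a price grid;
    the clearing operator then matches crossed volume, so the book never
    ends an event with the best bid at or above the best ask.
    """
    rng = np.random.default_rng(seed)
    buys = np.zeros(n_levels)   # resting buy volume per price level
    sells = np.zeros(n_levels)  # resting sell volume per price level
    for _ in range(n_events):
        level = rng.integers(n_levels)
        if rng.random() < p_buy:
            buys[level] += 1
        else:
            sells[level] += 1
        # Clearing operator: transact overlapping volume while crossed.
        while buys.any() and sells.any():
            bb = np.flatnonzero(buys).max()   # best bid level
            ba = np.flatnonzero(sells).min()  # best ask level
            if bb < ba:
                break
            m = min(buys[bb], sells[ba])
            buys[bb] -= m
            sells[ba] -= m
    return buys, sells
```

Separating the flow step from the clearing step mirrors the generator decomposition described above: swapping in a different order-flow model leaves the clearing code untouched.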
Image classification over unknown and anomalous domains
A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting.
Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each.
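The per-mode-statistics idea can be sketched as follows: instead of pooling batch statistics across heterogeneous sources, each sample is normalized with the mean and variance of its own mode. This is a minimal numpy sketch; in the thesis the modes may be latent, whereas here they are given explicitly.

```python
import numpy as np

def mode_conditional_normalize(x, modes, eps=1e-5):
    """Normalize each sample with the statistics of its own visual mode
    (e.g. cartoons vs. photos) rather than pooled batch statistics.

    x     : (n, d) feature batch drawn from multiple sources
    modes : (n,) integer mode assignment per sample
    """
    out = np.empty_like(x, dtype=float)
    for m in np.unique(modes):
        idx = modes == m
        mu = x[idx].mean(axis=0)
        var = x[idx].var(axis=0)
        out[idx] = (x[idx] - mu) / np.sqrt(var + eps)
    return out
```

With pooled statistics, a batch mixing a high-mean and a low-mean source would leave each source's features systematically shifted; the per-mode version centers each source on its own statistics.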
While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so.
In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and anomaly detection in particular. While recent work has focused on developing self-supervised solutions for the one-class setting, in this thesis new methods based on transfer learning are formulated. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems that have recently been proposed in the anomaly detection literature, in particular challenging semantic detection tasks.
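A simple transfer-based anomaly detector of the kind this line of work builds on scores each test point by its mean distance to the k nearest features of the normal class; with features from a pretrained network this is a strong baseline. This is a minimal numpy sketch under that assumption: the feature extractor is external, and any embedding can be plugged in.

```python
import numpy as np

def knn_anomaly_scores(train_feats, test_feats, k=5):
    """Anomaly score = mean distance to the k nearest normal-class features.

    train_feats : (n_train, d) features of normal (one-class) data
    test_feats  : (n_test, d) features to score; higher = more anomalous
    """
    # Pairwise Euclidean distances between test and normal features.
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :],
                       axis=-1)
    knn = np.sort(d, axis=1)[:, :k]    # k smallest distances per test point
    return knn.mean(axis=1)
```

Points far from the normal-class feature cluster receive large scores, so thresholding the score yields a one-class classifier without any anomaly labels at training time.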