An Asymptotic Analysis on Generalized Secretary Problem
As a famous result, the ``37\% Law'' for the Secretary Problem has widely influenced people's perception of online decision strategies about choice. However, under this strategy too many attractive candidates may be rejected in the first 37\%, and in practice people also tend to stop earlier\cite{Bearden_early}. In this paper, we argued that in most cases the best-only optimization does not yield an optimal outcome, while the optimal cutoff should be . We also showed that under some strict objectives that only care about the several best candidates, skips are still needed.
Comment: low quality as is my undergraduate work
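As a quick, self-contained illustration of the cutoff strategy discussed in the abstract above (a minimal Monte Carlo sketch, not code from the paper), the following snippet estimates the probability of hiring the single best candidate for several cutoffs; under the classical best-only objective the maximum sits near n/e, i.e. roughly 37% of n.

```python
import random

def simulate_cutoff(n, cutoff, trials=20000):
    """Estimate P(select the very best) for the classical cutoff rule:
    skip the first `cutoff` candidates, then take the first one that
    beats everything seen so far (or the last candidate if none does)."""
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))                 # rank 0 is the best candidate
        random.shuffle(ranks)
        best_seen = min(ranks[:cutoff], default=n)
        chosen = ranks[-1]                     # forced to take the last one otherwise
        for r in ranks[cutoff:]:
            if r < best_seen:
                chosen = r
                break
        wins += (chosen == 0)
    return wins / trials

# The "37% Law": with a best-only objective, a cutoff near n/e maximizes
# the chance of hiring the single best candidate.
n = 100
for c in (10, 25, 37, 50):
    print(c, round(simulate_cutoff(n, c), 3))
```

The paper's point is that this best-only objective is usually not the right one; the simulation only reproduces the classical setting it argues against.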
Noise-Stable Rigid Graphs for Euclidean Embedding
We proposed a new criterion, \textit{noise-stability}, which revises classical rigidity theory, for evaluating whether MDS algorithms can truthfully represent the fidelity of global structure reconstruction. We then proved the noise-stability of the cMDS algorithm under generic conditions, which provides a rigorous theoretical guarantee of precision and theoretical bounds for Euclidean embedding and its applications in fields including wireless sensor network localization and satellite positioning.
Furthermore, we revisited previous work on minimum-cost globally rigid spanning subgraphs and proposed an algorithm to construct a minimum-cost noise-stable spanning graph in Euclidean space, which enables reliable localization on sparse graphs of noisy distance constraints with a linear number of edges and sublinear total edge length. Additionally, this algorithm suggests a scheme to reconstruct point clouds from pairwise distances at a minimum of time complexity, down from for cMDS.
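For context on the baseline referenced above, here is a minimal sketch of classical MDS (cMDS), which recovers coordinates from a full matrix of pairwise Euclidean distances via double centering and an eigendecomposition; the paper's noise-stable spanning graph construction itself is not reproduced here.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS (cMDS): recover coordinates (up to a rigid transform)
    from a full matrix D of pairwise Euclidean distances, via double
    centering and an eigendecomposition of the Gram matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:dim]      # top `dim` eigenpairs
    scale = np.sqrt(np.clip(eigvals[idx], 0.0, None))
    return eigvecs[:, idx] * scale             # n x dim embedding

# Sanity check: embed 4 planar points and compare recovered pairwise distances.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
X_hat = classical_mds(D)
D_hat = np.linalg.norm(X_hat[:, None] - X_hat[None, :], axis=-1)
print(np.allclose(D, D_hat, atol=1e-8))        # True: exact distances are preserved
```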
Musical Instrument Classification via Low-Dimensional Feature Vectors
Music is a mysterious language that conveys feelings and thoughts through different tones and timbres. For a better understanding of timbre in music, we chose music data for 6 representative instruments, analysed their timbre features and classified them. Instead of following the current trend of black-box classification with neural networks, our project is based on a combination of MFCC and LPC, augmented with a 6-dimensional feature vector that we designed ourselves from observation and experimentation. In our white-box model, we observed significant patterns of sound that distinguish different timbres, and discovered some connections between objective data and subjective senses. With a 32-dimensional feature vector in total and a naive all-pairs SVM, we achieved improved classification accuracy compared to a single tool. We also attempted to analyze music pieces downloaded from the Internet, found different performance on different instruments, explored the reasons and suggested possible ways to improve the performance.
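As a rough sketch of the feature-plus-SVM pipeline described above (not the authors' code: the LPC terms and the custom 6-dimensional timbre features are omitted, and toy sine/sawtooth tones stand in for real recordings), MFCC features with a one-vs-one SVM might look like this:

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(y, sr, n_mfcc=13):
    """Mean MFCCs over time: a simple stand-in for the paper's combined
    MFCC/LPC/custom 32-dimensional timbre vector (not reproduced here)."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Toy 'timbres': sine vs. sawtooth tones at random pitches stand in for recordings.
sr = 22050
t = np.linspace(0.0, 1.0, sr, endpoint=False)
rng = np.random.default_rng(0)
feats, labels = [], []
for _ in range(20):
    f0 = rng.uniform(200.0, 400.0)
    sine = np.sin(2 * np.pi * f0 * t)
    saw = 2.0 * ((f0 * t) % 1.0) - 1.0
    for sig, name in ((sine, "sine"), (saw, "saw")):
        feats.append(mfcc_features(sig.astype(np.float32), sr))
        labels.append(name)

# One-vs-one ("all-pairs") SVM over the feature vectors, as in the abstract.
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(np.array(feats), labels)
print(clf.score(np.array(feats), labels))      # training accuracy on the toy data
```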
Bayesian Mechanism Design for Blockchain Transaction Fee Allocation
In blockchain systems, the design of transaction fee mechanisms is essential for the stability of the system and the satisfaction of both miners and users. A recent work proved the impossibility of collusion-proof mechanisms that achieve both non-zero miner revenue and Dominant-Strategy-Incentive-Compatibility (DSIC) for users. However, a positive miner revenue is important in practice to motivate miners. To address this challenge, we consider a Bayesian game setting and relax the DSIC requirement for users to Bayesian-Nash-Incentive-Compatibility (BNIC). In particular, we propose an auxiliary mechanism method that connects BNIC and DSIC mechanisms. With the auxiliary mechanism method, we design a transaction fee mechanism (TFM) based on the multinomial logit (MNL) choice model, and prove that the TFM is both BNIC and collusion-proof with an asymptotic constant-factor approximation of the optimal miner revenue for i.i.d. bounded valuations. Our result breaks the zero-revenue barrier while preserving truthfulness and collusion-proofness.
Comment: 58 pages, CESC 202
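For readers unfamiliar with the multinomial logit (MNL) choice model the mechanism builds on, a minimal sketch of MNL choice probabilities follows; the utilities and the outside option are illustrative assumptions, and this is not the paper's transaction fee mechanism itself.

```python
import numpy as np

def mnl_choice_probs(utilities, outside_utility=0.0):
    """Multinomial logit (MNL) choice probabilities: alternative i is chosen
    with probability exp(u_i) / (exp(u_out) + sum_j exp(u_j)), where u_out
    is an 'outside option' such as not bidding / not being included."""
    u = np.append(np.asarray(utilities, dtype=float), outside_utility)
    u = u - u.max()                              # stabilize the softmax
    p = np.exp(u) / np.exp(u).sum()
    return p[:-1], p[-1]                         # (alternative probs, outside prob)

# Toy example: three transactions whose utilities are value minus fee.
# The numbers are purely illustrative and not taken from the paper.
values = np.array([5.0, 3.0, 1.0])
fees = np.array([1.0, 1.0, 1.0])
probs, p_outside = mnl_choice_probs(values - fees)
print(probs.round(3), round(p_outside, 3))
```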
RecRecNet: Rectangling Rectified Wide-Angle Images by Thin-Plate Spline Model and DoF-based Curriculum Learning
The wide-angle lens shows appealing applications in VR technologies, but it introduces severe radial distortion into its captured image. To recover the realistic scene, previous works are devoted to rectifying the content of the wide-angle image. However, such a rectification solution inevitably distorts the image boundary, which changes related geometric distributions and misleads current vision perception models. In this work, we explore constructing a win-win representation for both content and boundary by contributing a new learning model, i.e., the Rectangling Rectification Network (RecRecNet). In particular, we propose a thin-plate spline (TPS) module to formulate the non-linear and non-rigid transformation for rectangling images. By learning the control points on the rectified image, our model can flexibly warp the source structure to the target domain and achieves end-to-end unsupervised deformation. To relieve the complexity of structure approximation, we then guide our RecRecNet to learn gradual deformation rules with DoF (Degree of Freedom)-based curriculum learning. By increasing the DoF in each curriculum stage, namely, from similarity transformation (4-DoF) to homography transformation (8-DoF), the network is capable of investigating more detailed deformations, offering fast convergence on the final rectangling task. Experiments show the superiority of our solution over the compared methods in both quantitative and qualitative evaluations. The code and dataset are available at https://github.com/KangLiao929/RecRecNet.
Comment: Accepted to ICCV 202
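To make the DoF-based curriculum concrete, here is a small sketch that samples random 4-DoF similarity warps for the early stage and 8-DoF homographies for the later stage; the parameter ranges and the stage schedule are illustrative assumptions, not values from the paper, and the TPS module itself is not reproduced.

```python
import numpy as np

def random_warp(dof, rng):
    """Sample a random 3x3 warp for a DoF-based curriculum: a 4-DoF
    similarity (scale, rotation, translation) for the easy stage, an
    8-DoF homography for the hard stage. Ranges are illustrative only."""
    if dof == 4:                                   # similarity: s*R | t
        s = rng.uniform(0.9, 1.1)
        theta = rng.uniform(-0.1, 0.1)
        c, si = np.cos(theta), np.sin(theta)
        H = np.array([[s * c, -s * si, rng.uniform(-5.0, 5.0)],
                      [s * si,  s * c,  rng.uniform(-5.0, 5.0)],
                      [0.0,     0.0,    1.0]])
    elif dof == 8:                                 # full projective warp
        H = np.eye(3) + rng.uniform(-0.05, 0.05, size=(3, 3))
        H[2, 2] = 1.0                              # fix the scale: 8 free parameters
    else:
        raise ValueError("this sketch only covers the 4- and 8-DoF stages")
    return H

# Curriculum: train on easy 4-DoF warps first, then switch to 8-DoF warps.
rng = np.random.default_rng(0)
schedule = [4] * 3 + [8] * 3                       # toy stage schedule
warps = [random_warp(d, rng) for d in schedule]
print(warps[0].round(3), warps[-1].round(3), sep="\n")
```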
Analysis on almost Abelian Lie groups: Groups, subgroups and quotients
The subjects of investigation are real almost Abelian Lie groups and their Lie-group-theoretical aspects, such as the exponential map, faithful matrix representations, discrete and connected subgroups, quotients and automorphisms. The emphasis is put on an explicit description of all technical details.
ClusterSLAM: A SLAM backend for simultaneous rigid body clustering and motion estimation
We present a practical backend for stereo visual SLAM which can simultaneously discover individual rigid bodies and compute their motions in dynamic environments. While recent factor graph based state optimization algorithms have shown their ability to robustly solve SLAM problems by treating dynamic objects as outliers, their dynamic motions are rarely considered. In this paper, we exploit the consensus of 3D motions for landmarks extracted from the same rigid body for clustering, and to identify static and dynamic objects in a unified manner. Specifically, our algorithm builds a noise-aware motion affinity matrix from landmarks, and uses agglomerative clustering to distinguish rigid bodies. Using decoupled factor graph optimization to revise their shapes and trajectories, we obtain an iterative scheme to update both cluster assignments and motion estimation reciprocally. Evaluations on both synthetic scenes and KITTI demonstrate the capability of our approach, and further experiments considering online efficiency also show the effectiveness of our method for simultaneously tracking ego-motion and multiple objects.
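As a minimal sketch of the clustering step described above (not the authors' implementation; the noise-aware affinity construction and the factor graph optimization are omitted), agglomerative clustering of landmarks from a toy motion affinity matrix might look like this:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_landmarks(affinity, num_clusters):
    """Agglomerative clustering of landmarks from a symmetric motion affinity
    matrix (values in [0, 1]; 1 means 'moves like the same rigid body')."""
    dist = 1.0 - affinity                          # affinity -> dissimilarity
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=num_clusters, criterion="maxclust")

# Toy example: 6 landmarks, the first 3 on one rigid body, the last 3 on another.
affinity = np.full((6, 6), 0.1)
affinity[:3, :3] = 0.9
affinity[3:, 3:] = 0.9
print(cluster_landmarks(affinity, num_clusters=2))  # e.g. [1 1 1 2 2 2]
```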
Measurement of the ratios of branching fractions and
The ratios of branching fractions
and are measured, assuming isospin symmetry, using a
sample of proton-proton collision data corresponding to 3.0 fb$^{-1}$ of
integrated luminosity recorded by the LHCb experiment during 2011 and 2012. The
tau lepton is identified in the decay mode
. The measured values are
and
, where the first uncertainty is
statistical and the second is systematic. The correlation between these
measurements is . Results are consistent with the current average
of these quantities and are at a combined 1.9 standard deviations from the
predictions based on lepton flavor universality in the Standard Model.
Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-039.html (LHCb public pages).
Measurement of forward charged hadron flow harmonics in peripheral PbPb collisions at √sNN = 5.02 TeV with the LHCb detector
Flow harmonic coefficients, $v_n$, which are the key to studying the hydrodynamics of the quark-gluon plasma (QGP) created in heavy-ion collisions, have been measured in various collision systems and kinematic regions and using various particle species. The study of flow harmonics over a wide pseudorapidity range is particularly valuable for understanding the temperature dependence of the shear-viscosity-to-entropy-density ratio of the QGP. This paper presents the first LHCb results on the second- and third-order flow harmonic coefficients of charged hadrons as a function of transverse momentum in the forward region, corresponding to pseudorapidities between 2.0 and 4.9, using data collected from PbPb collisions in 2018 at a center-of-mass energy of 5.02 TeV. The coefficients measured using the two-particle angular correlation analysis method are smaller than the central-pseudorapidity measurements by ALICE and ATLAS in the same collision system, but share similar features.
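To illustrate the two-particle correlation method in its simplest form (a toy sketch only: no pseudorapidity gap, no non-flow suppression and no detector effects, unlike the LHCb analysis), one can estimate $v_n$ from the pair average of cos(nΔφ):

```python
import numpy as np

def c2_two_particle(phi, n):
    """Two-particle correlator c_n{2} = < cos(n*(phi_i - phi_j)) > over all
    distinct pairs in one event; with pure flow, c_n{2} = v_n^2."""
    dphi = phi[:, None] - phi[None, :]
    mask = ~np.eye(len(phi), dtype=bool)           # drop self-pairs
    return np.mean(np.cos(n * dphi[mask]))

# Toy events: accept-reject sampling from dN/dphi ~ 1 + 2*v2*cos(2*phi),
# then average c_2{2} over events and take the square root.
rng = np.random.default_rng(1)
v2_true, c2_values = 0.08, []
for _ in range(50):
    phi = rng.uniform(-np.pi, np.pi, 4000)
    keep = rng.uniform(0.0, 2.0, phi.size) < 1.0 + 2.0 * v2_true * np.cos(2.0 * phi)
    c2_values.append(c2_two_particle(phi[keep], n=2))
v2_est = np.sqrt(max(np.mean(c2_values), 0.0))
print(round(v2_est, 3))                            # close to v2_true = 0.08
```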