Pointwise equidistribution for almost smooth functions with an error rate and Weighted Lévy-Khintchin theorem
The purpose of this article is twofold: to prove a pointwise equidistribution
theorem with an error rate for almost smooth functions, which strengthens the
main result of Kleinbock, Shi and Weiss (2017); and to obtain a
Lévy-Khintchin theorem for weighted best approximations, which extends the
main theorem of Cheung and Chevallier (2019).
To do so, we employ techniques from homogeneous dynamics and the methods
developed in the work of Cheung-Chevallier (2019) and Shapira-Weiss (2022).
Comment: 32 pages
Calculation of static transmission errors associated with thermo-elastic coupling contacts of spur gears
The static transmission error is one of the key excitation sources in gear dynamics and has been addressed by many scholars. However, the traditional method for calculating the static transmission errors of spur gears does not account for the influence of thermo-elastic coupling caused by the gear temperature field, which limits the accuracy of dynamic characteristic analysis. In this study, a calculation method for static transmission errors that accounts for thermo-elastic coupling is therefore proposed. Furthermore, the differences between static transmission errors under thermo-elastic coupling contacts and those from the traditional method are discussed. The study helps to improve the accuracy of dynamic analysis of gear transmission systems.
Discrete forecast reconciliation
While forecast reconciliation has seen great success for real valued data,
the method has not yet been comprehensively extended to the discrete case. This
paper defines and develops a formal discrete forecast reconciliation framework
based on optimising scoring rules using quadratic programming. The proposed
framework produces coherent joint probabilistic forecasts for count
hierarchical time series. Two discrete reconciliation algorithms are proposed and
compared to generalisations of the top-down and bottom-up approaches to count
data. Two simulation experiments and two empirical examples are conducted to
validate that the proposed reconciliation algorithms improve forecast accuracy.
The empirical applications are to forecast criminal offences in Washington D.C.
and the exceedance of thresholds in age-specific mortality rates in Australia.
Compared to the top-down and bottom-up approaches, the proposed framework shows
superior performance in both simulations and empirical studies.
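The coherence constraint and the quadratic-programming flavour of the framework can be illustrated with a toy sketch (a minimal illustration, not the paper's scoring-rule optimisation; all series, supports, and distributions below are invented). For a two-level count hierarchy, a coherent joint forecast places mass only on outcomes where the total equals the sum of the bottom-level counts, so an incoherent base forecast can be reconciled by Euclidean projection onto that constraint set:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a vector onto the probability simplex
    (the sort-and-threshold algorithm of Duchi et al.)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Toy two-level count hierarchy: Total = B1 + B2, bottom counts in {0, 1, 2}.
rng = np.random.default_rng(0)
p_total = rng.dirichlet(np.ones(5))   # marginal forecast for Total on {0,...,4}
p_b1 = rng.dirichlet(np.ones(3))      # marginal forecast for B1
p_b2 = rng.dirichlet(np.ones(3))      # marginal forecast for B2

# Incoherent base joint forecast: independent product of the three marginals.
q = np.einsum("t,i,j->tij", p_total, p_b1, p_b2)

# Coherent outcomes are exactly those with Total equal to B1 + B2.
t, b1, b2 = np.indices(q.shape)
coherent = (t == b1 + b2)

# Reconcile: project the base probabilities of the coherent outcomes onto the
# simplex (a small quadratic programme); incoherent outcomes get zero mass.
p = np.zeros_like(q)
p[coherent] = project_to_simplex(q[coherent])

print(p.sum())             # sums to 1 (up to float error): a coherent joint pmf
print(p[~coherent].max())  # 0.0: no probability on incoherent outcomes
```

The paper optimises scoring rules rather than squared distance, but the constraint set, probability mass restricted to coherent count combinations, is the same ingredient.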
A supramolecular radical cation: folding-enhanced electrostatic effect for promoting radical-mediated oxidation.
We report a supramolecular strategy to promote radical-mediated Fenton oxidation through the rational design of a folded host-guest complex based on cucurbit[8]uril (CB[8]). In the supramolecular complex between CB[8] and a derivative of 1,4-diketopyrrolo[3,4-c]pyrrole (DPP), the carbonyl groups of CB[8] and the DPP moiety are brought together through the formation of a folded conformation. In this way, the electrostatic effect of the carbonyl groups of CB[8] is fully exploited to greatly enhance the reactivity of the DPP radical cation, the key intermediate of the Fenton oxidation. As a result, the Fenton oxidation is accelerated by over 100 times. We anticipate that this strategy can be applied to other radical reactions, enriching the field of supramolecular radical chemistry in radical polymerization, photocatalysis, and organic radical batteries, and that it holds potential in supramolecular catalysis and biocatalysis.
Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and Relaxed Assumptions
We provide a simple convergence proof for AdaGrad optimizing non-convex
objectives under only affine noise variance and bounded smoothness assumptions.
The proof is essentially based on a novel auxiliary function that helps
eliminate the complexity of handling the correlation between the numerator and
denominator of AdaGrad's update. Leveraging simple proofs, we are able to
obtain tighter results than existing ones (Faw et al., 2022) and extend
the analysis to several new and important cases. Specifically, for the
over-parameterized regime, we show that AdaGrad needs only
$\mathcal{O}(1/\varepsilon^{2})$ iterations to ensure a gradient norm
smaller than $\varepsilon$, which matches the rate of SGD and is significantly
tighter than existing rates for AdaGrad.
We then discard the bounded smoothness assumption and consider a realistic
assumption on smoothness, the $(L_0, L_1)$-smooth condition, which allows the
local smoothness to grow with the gradient norm. Again based on the auxiliary
function, we prove that AdaGrad succeeds in converging under the
$(L_0, L_1)$-smooth condition as long as the learning rate is lower than a
threshold. Interestingly, we further show that the requirement on the learning
rate under the $(L_0, L_1)$-smooth condition is necessary, via a proof by
contradiction, in contrast with the case of uniform smoothness, where
convergence is guaranteed regardless of learning rate choices. Together, our
analyses broaden the understanding of AdaGrad and demonstrate the power of the
new auxiliary function in the investigations of AdaGrad.
Comment: COLT 2023
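The numerator/denominator correlation mentioned above is visible directly in AdaGrad's update: the current gradient appears in the numerator while its square is folded into the accumulator in the denominator. Below is a minimal diagonal-AdaGrad sketch; the test objective, starting point, and step size are illustrative choices, not from the paper.

```python
import numpy as np

def adagrad(grad, x0, lr=0.5, eps=1e-8, steps=200):
    """Minimal diagonal AdaGrad: each coordinate's step size shrinks with its
    accumulated squared gradients (the denominator), while the current
    gradient (the numerator) also feeds that same accumulator."""
    x = np.asarray(x0, dtype=float)
    acc = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        acc += g * g                        # denominator state
        x -= lr * g / (np.sqrt(acc) + eps)  # correlated numerator/denominator
    return x

# Smooth non-convex test objective f(x) = sum(x_i^2 + cos(2 x_i)).
grad_f = lambda x: 2 * x - 2 * np.sin(2 * x)
x_star = adagrad(grad_f, x0=[2.0, -1.5])
print(np.linalg.norm(grad_f(x_star)))  # gradient norm at the final iterate
```

The correlation is the coupling on the last update line: `g` and `acc` are never independent, which is exactly what the auxiliary function in the paper is designed to disentangle.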
Closing the Gap Between the Upper Bound and the Lower Bound of Adam's Iteration Complexity
Recently, Arjevani et al. [1] established a lower bound on the iteration
complexity of first-order optimization under an $L$-smooth condition and a
bounded noise variance assumption. However, a thorough review of the existing
literature on Adam's convergence reveals a noticeable gap: none of the existing
results meet this lower bound. In this paper, we close the gap by deriving a
new convergence guarantee for Adam, with only an $L$-smooth condition and a bounded
noise variance assumption. Our results remain valid across a broad spectrum of
hyperparameters. Especially with properly chosen hyperparameters, we derive an
upper bound of the iteration complexity of Adam and show that it meets the
lower bound for first-order optimizers. To the best of our knowledge, this is
the first to establish such a tight upper bound for Adam's convergence. Our
proof utilizes novel techniques to handle the entanglement between momentum and
adaptive learning rate and to convert the first-order term in the Descent Lemma
to the gradient norm, which may be of independent interest.
Comment: Accepted to NeurIPS 2023
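The entanglement between momentum and the adaptive learning rate that the proof handles is built into Adam's update itself: the first- and second-moment estimates are exponential averages of the same gradient stream. A minimal sketch of the standard bias-corrected form (hyperparameters and test objective are illustrative choices, not from the paper):

```python
import numpy as np

def adam(grad, x0, lr=0.005, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    """Minimal Adam: momentum m and second-moment estimate v are built from
    the same gradients, so the effective step m/sqrt(v) couples the two --
    the entanglement the convergence analysis must handle."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Smooth non-convex test objective f(x) = sum(x_i^2 + cos(2 x_i)).
grad_f = lambda x: 2 * x - 2 * np.sin(2 * x)
x_star = adam(grad_f, x0=[2.0, -1.5])
print(np.linalg.norm(grad_f(x_star)))  # gradient norm at the final iterate
```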
End-to-end One-shot Human Parsing
Previous human parsing methods are limited to parsing humans into pre-defined
classes, which is inflexible for practical fashion applications that often have
new fashion item classes. In this paper, we define a novel one-shot human
parsing (OSHP) task that requires parsing humans into an open set of classes
defined by any test example. During training, only base classes are exposed,
which only overlap with part of the test-time classes. To address three main
challenges in OSHP, i.e., small sizes, testing bias, and similar parts, we
devise an End-to-end One-shot human Parsing Network (EOP-Net). Firstly, an
end-to-end human parsing framework is proposed to parse the query image into
both coarse-grained and fine-grained human classes, which embeds rich semantic
information that is shared across different granularities to identify the
small-sized human classes. Then, we gradually smooth the training-time static
prototypes to get robust class representations. Moreover, we employ a dynamic
objective that encourages the network to enhance the features' representational
capability in the early training phase and to improve their transferability in
the late training phase. Therefore, our method can quickly
adapt to the novel classes and mitigate the testing bias issue. In addition, we
add a contrastive loss at the prototype level to enforce inter-class distances,
thereby discriminating the similar parts. For comprehensive evaluations on the
new task, we tailor three existing popular human parsing benchmarks to the OSHP
task. Experiments demonstrate that EOP-Net outperforms representative one-shot
segmentation models by large margins and serves as a strong baseline for
further research. The source code is available at
https://github.com/Charleshhy/One-shot-Human-Parsing.
Comment: Accepted to IEEE Trans. Pattern Analysis and Machine Intelligence (TPAMI)
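The prototype-level mechanics in the abstract (labelling pixel features by their nearest class prototype, plus a contrastive penalty that enforces inter-class distances between prototypes) can be sketched schematically; the loss form, feature dimensions, and data below are invented stand-ins, not EOP-Net's actual implementation.

```python
import numpy as np

def parse_pixels(features, prototypes):
    """Label each pixel feature with its nearest class prototype (cosine)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (f @ p.T).argmax(axis=1)

def prototype_contrastive_loss(prototypes, temperature=0.1):
    """Schematic prototype-level contrastive penalty: small when every pair of
    class prototypes is far apart in cosine similarity."""
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = (p @ p.T) / temperature
    logits = sim - 1e9 * np.eye(len(p))  # mask each prototype's self-similarity
    # log-sum-exp over the other prototypes: dominated by the closest rival pair
    return np.log(np.exp(logits).sum(axis=1)).mean()

rng = np.random.default_rng(0)
protos = rng.normal(size=(4, 16))                     # 4 part classes, 16-dim
pixels = protos[[0, 0, 2, 3]] + 0.05 * rng.normal(size=(4, 16))

print(parse_pixels(pixels, protos))                   # [0 0 2 3]
print(prototype_contrastive_loss(protos))
```

In the actual method the prototypes come from support-set features and are smoothed over training; here they are random vectors purely to show the computation.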