Distributed Workplace: a new office typology for the 21st century workstyle
We are now at a critical juncture in history where we desperately need to redefine [workplace] so that it can better accommodate the lifestyles of today's workers.
On Hypothesis Transfer Learning of Functional Linear Models
We study transfer learning (TL) for functional linear regression (FLR) under
the Reproducing Kernel Hilbert Space (RKHS) framework, observing that the TL
techniques used in existing high-dimensional linear regression are not
compatible with truncation-based FLR methods, as functional data are
intrinsically infinite-dimensional and generated by smooth underlying
processes. We measure similarity across tasks using the RKHS distance, so that
the type of information transferred is tied to the properties of the imposed
RKHS. Building on the hypothesis offset transfer learning paradigm, two
algorithms are proposed: one conducts the transfer when positive sources are
known, while the other leverages aggregation techniques to achieve robust
transfer without prior information about the sources. We establish lower bounds
for this learning problem and show that the proposed algorithms enjoy a
matching asymptotic upper bound. These analyses provide statistical insight into
factors that contribute to the dynamics of the transfer. We also extend the
results to functional generalized linear models. The effectiveness of the
proposed algorithms is demonstrated on extensive synthetic data as well as a
financial data application.
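The offset-transfer paradigm the abstract builds on can be illustrated, in a much simpler finite-dimensional setting, with ordinary kernel ridge regression: fit a hypothesis on the large source sample, then learn only the source-to-target offset on the small target sample. This is a hypothetical sketch of the general idea, not the paper's RKHS-based FLR estimator; all function names, data, and hyperparameters below are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gaussian (RBF) kernel matrix between two point sets
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam, gamma):
    # kernel ridge regression: solve (K + lam * I) alpha = y
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

rng = np.random.default_rng(0)

# Phase 1: fit a hypothesis on the (large) source sample.
Xs = rng.uniform(-1, 1, (200, 1))
ys = np.sin(3 * Xs[:, 0]) + 0.1 * rng.standard_normal(200)
f_source = krr_fit(Xs, ys, lam=1e-2, gamma=10.0)

# Phase 2: fit only the offset on the (small) target sample,
# whose regression function differs from the source by a drift term.
Xt = rng.uniform(-1, 1, (30, 1))
yt = np.sin(3 * Xt[:, 0]) + 0.3 * Xt[:, 0] + 0.1 * rng.standard_normal(30)
f_offset = krr_fit(Xt, yt - f_source(Xt), lam=1e-1, gamma=10.0)

def f_target(Xq):
    # transferred estimator = source hypothesis + learned offset
    return f_source(Xq) + f_offset(Xq)
```

When the offset is smoother or smaller than the target function itself, the few target samples go much further in phase 2 than they would in target-only learning, which is the intuition the paper's bounds make precise in the RKHS setting.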
Differentially Private Functional Summaries via the Independent Component Laplace Process
In this work, we propose a new mechanism for releasing differentially private
functional summaries called the Independent Component Laplace Process, or ICLP,
mechanism. By treating the functional summaries of interest as truly
infinite-dimensional objects and perturbing them with the ICLP noise, this new
mechanism relaxes assumptions on data trajectories and preserves higher utility
compared to classical finite-dimensional subspace embedding approaches in the
literature. We establish the feasibility of the proposed mechanism in multiple
function spaces. Several statistical estimation problems are considered, and we
demonstrate that, by slightly over-smoothing the summary, the privacy cost does
not dominate the statistical error and is asymptotically negligible. Numerical
experiments on synthetic and real datasets demonstrate the efficacy of the
proposed mechanism.
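The core idea — perturbing a functional summary coefficient-by-coefficient with independent Laplace noise whose scales form a summable sequence, after slight over-smoothing — can be sketched as follows. This toy version is not calibrated to any real sensitivity or privacy budget; `iclp_release`, the weight sequence, and the smoothing factors are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

def iclp_release(coefs, weights, epsilon, rng):
    """Perturb each basis coefficient with independent Laplace noise.

    `weights` plays the role of the per-component noise scales and
    `epsilon` is a nominal privacy budget. The scales here are
    illustrative and NOT calibrated to the summary's true sensitivity.
    """
    scales = weights / epsilon
    return coefs + rng.laplace(scale=scales)

rng = np.random.default_rng(0)
K = 50  # number of retained basis components
ks = np.arange(1, K + 1)

# Toy functional summary: decaying basis coefficients of a smooth mean function.
true_coefs = ks.astype(float) ** -2.0

# Slight over-smoothing: damp high-frequency components before release,
# so the noise added there stays comparable to the (small) signal.
smoothing = 1.0 / (1.0 + 0.01 * ks ** 2)

# Summable noise-scale sequence, mimicking an infinite-dimensional process.
weights = 0.1 * ks.astype(float) ** -2.5

private_coefs = iclp_release(smoothing * true_coefs, weights,
                             epsilon=1.0, rng=rng)
```

The point of the decaying `weights` is that the noise, like the summary itself, lives in the function space as a genuinely infinite-dimensional object, rather than being confined to a fixed finite-dimensional subspace.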
Smoothness Adaptive Hypothesis Transfer Learning
Many existing two-phase kernel-based hypothesis transfer learning algorithms
employ the same kernel regularization across phases and rely on the known
smoothness of functions to obtain optimality. Therefore, they fail to adapt to
the varying and unknown smoothness between the target/source and their offset
in practice. In this paper, we address these problems by proposing Smoothness
Adaptive Transfer Learning (SATL), a two-phase kernel ridge
regression (KRR)-based algorithm. We first prove that employing a misspecified
fixed-bandwidth Gaussian kernel in target-only KRR learning can achieve minimax
optimality, and we derive a procedure that adapts to the unknown Sobolev
smoothness.
Leveraging these results, SATL employs Gaussian kernels in both phases so that
the estimators can adapt to the unknown smoothness of the target/source and
their offset function. We derive the minimax lower bound of the learning
problem in excess risk and show that SATL enjoys a matching upper bound up to a
logarithmic factor. The minimax convergence rate sheds light on the factors
influencing transfer dynamics and demonstrates the superiority of SATL compared
to non-transfer learning settings. While our main objective is a theoretical
analysis, we also conduct several experiments to confirm our results.
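As a practical stand-in for the adaptation described above, one can select the Gaussian bandwidth and ridge penalty by holdout validation over a grid. This is a hedged sketch of bandwidth adaptation in general, not SATL's theoretical construction; the grid values, data, and helper names are arbitrary choices for illustration.

```python
import numpy as np

def gauss_kernel(X, Y, bandwidth):
    # Gaussian kernel with an explicit bandwidth parameter
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def krr(Xtr, ytr, lam, bandwidth):
    K = gauss_kernel(Xtr, Xtr, bandwidth)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    return lambda Xq: gauss_kernel(Xq, Xtr, bandwidth) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (120, 1))
y = np.sin(4 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(120)

# Holdout split: validation over (bandwidth, regularization) pairs stands
# in for the theoretical procedure adapting to unknown Sobolev smoothness.
Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

best = None
for bw in [0.02, 0.05, 0.1, 0.2, 0.5]:
    for lam in [1e-4, 1e-3, 1e-2, 1e-1]:
        f = krr(Xtr, ytr, lam, bw)
        err = np.mean((f(Xva) - yva) ** 2)
        if best is None or err < best[0]:
            best = (err, bw, lam)

best_err, best_bw, best_lam = best
```

The same selection can be run independently in each of the two phases, which is the practical analogue of letting the target/source fit and the offset fit each adapt to their own smoothness.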
Dynamic PlenOctree for Adaptive Sampling Refinement in Explicit NeRF
The explicit neural radiance field (NeRF) has gained considerable interest
for its efficient training and fast inference capabilities, making it a
promising direction for applications such as virtual reality and gaming. In
particular,
PlenOctree (POT) [1], an explicit hierarchical multi-scale octree
representation, has emerged as a structural and influential framework. However,
POT's fixed structure for direct optimization is sub-optimal as the scene
complexity evolves continuously with updates to cached color and density,
necessitating refining the sampling distribution to capture signal complexity
accordingly. To address this issue, we propose the dynamic PlenOctree DOT,
which adaptively refines the sample distribution to adjust to changing scene
complexity. Specifically, DOT proposes a concise yet novel hierarchical feature
fusion strategy during the iterative rendering process. Firstly, it identifies
the regions of interest through training signals to ensure adaptive and
efficient refinement. Next, rather than directly filtering out valueless nodes,
DOT introduces sampling and pruning operations for octrees to aggregate
features, enabling rapid parameter learning. Compared with POT, DOT enhances
visual quality, reduces the number of parameters, and delivers 1.7/1.9 times
the FPS on NeRF-synthetic and Tanks & Temples, respectively. Project homepage:
https://vlislab22.github.io/DOT.
[1] Yu, Alex, et al. "Plenoctrees for real-time rendering of neural radiance
fields." Proceedings of the IEEE/CVF International Conference on Computer
Vision. 2021.
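The refine-by-training-signal loop described above can be caricatured with a minimal leaf-level pass: subdivide octree leaves whose accumulated signal is large, keep moderate ones, and drop negligible ones. The thresholds, bookkeeping, and `refine` helper are hypothetical and far simpler than DOT's actual sampling and pruning operations.

```python
def refine(leaves, signal, split_thresh, prune_thresh):
    """One refinement pass over octree leaves.

    `leaves` maps a leaf id to its depth; `signal` maps a leaf id to an
    accumulated training signal (e.g. gradient magnitude). Leaves with a
    strong signal are subdivided into 8 children; leaves with a
    negligible signal are pruned; the rest are kept unchanged.
    """
    new_leaves = {}
    for leaf, depth in leaves.items():
        s = signal.get(leaf, 0.0)
        if s >= split_thresh:
            for child in range(8):  # subdivide into 8 octants
                new_leaves[(leaf, child)] = depth + 1
        elif s > prune_thresh:
            new_leaves[leaf] = depth  # keep as-is
        # else: prune (drop the leaf entirely)
    return new_leaves

leaves = {0: 0, 1: 0, 2: 0}
signal = {0: 5.0, 1: 0.5, 2: 0.001}
refined = refine(leaves, signal, split_thresh=1.0, prune_thresh=0.01)
# leaf 0 splits into 8 children, leaf 1 is kept, leaf 2 is pruned
```

Running such a pass periodically during training is what lets the sampling distribution track the evolving scene complexity instead of being frozen at initialization, which is the fixed-structure limitation of POT that the abstract points to.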