Decentralization Reform in Ukraine: Assessment of the Chosen Transformation Model
Ukraine has to reform its spatial organization of power, which involves altering the administrative-territorial division under very difficult socio-economic and political conditions. Despite great interest in the Ukrainian decentralization reform in scientific publications and the media, the influence of the chosen voluntary consolidation model on the newly formed territorial communities, including their spatial configuration, economic potential, and institutional capacity, remains largely unexplored. To shed light on the issue, the authors attempt to reveal the advantages and disadvantages of the selected reform model using the example of the Perspective Plan of Territorial Communities Formation in the Kyiv Region.
Continual Learning with Foundation Models: An Empirical Study of Latent Replay
Rapid development of large-scale pre-training has resulted in foundation models that can act as effective feature extractors on a variety of downstream tasks and domains. Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios. Our goal is twofold. First, we want to understand the compute-accuracy trade-off between CL in the raw-data space and in the latent space of pre-trained encoders. Second, we investigate how the characteristics of the encoder, the pre-training algorithm and data, as well as the resulting latent space, affect CL performance. For this, we compare the efficacy of various pre-trained models in large-scale benchmarking scenarios with a vanilla replay setting applied in both the latent and the raw-data space. Notably, this study shows how transfer, forgetting, task similarity, and learning depend on the input data characteristics and not necessarily on the CL algorithms. First, we show that under some circumstances reasonable CL performance can readily be achieved with a non-parametric classifier at negligible compute cost. We then show how models pre-trained on broader data result in better performance for various replay buffer sizes, which we explain through the representational similarity and transfer properties of their representations. Finally, we show the effectiveness of self-supervised pre-training for downstream domains that are out-of-distribution relative to the pre-training domain. We point out and validate several research directions that can further increase the efficacy of latent CL, including representation ensembling. The diverse set of datasets used in this study can serve as a compute-efficient playground for further CL research. The codebase is available at https://github.com/oleksost/latent_CL
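The two key ideas in the abstract, replaying stored *latent* vectors rather than raw data and classifying with a cheap non-parametric rule, can be sketched as follows. This is an illustrative toy, not the authors' actual code: the frozen encoder is stood in for by a fixed random projection, and all class names (`LatentReplayBuffer`, `NearestClassMean`) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained encoder: a fixed random projection.
# In the study this would be a pre-trained vision model's feature extractor.
W = rng.standard_normal((20, 8))

def encode(x):
    """Map raw inputs to the latent space of the frozen encoder."""
    return x @ W

class LatentReplayBuffer:
    """Stores latent vectors instead of raw data, so replay is cheap."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.feats, self.labels = [], []

    def add(self, z, y):
        for zi, yi in zip(z, y):
            if len(self.feats) < self.capacity:
                self.feats.append(zi)
                self.labels.append(yi)

class NearestClassMean:
    """Non-parametric classifier: predict the class with the closest latent mean."""
    def fit(self, z, y):
        self.means = {c: z[y == c].mean(axis=0) for c in np.unique(y)}

    def predict(self, z):
        classes = sorted(self.means)
        dists = np.stack([np.linalg.norm(z - self.means[c], axis=1) for c in classes])
        return np.array(classes)[dists.argmin(axis=0)]

# A toy class-incremental stream: two tasks, each introducing two new classes.
buffer = LatentReplayBuffer(capacity=200)
clf = NearestClassMean()
task_centers = []
for task_id in range(2):
    centers = 3.0 * rng.standard_normal((2, 20))
    task_centers.append(centers)
    x = np.vstack([c + 0.1 * rng.standard_normal((50, 20)) for c in centers])
    y = np.repeat([2 * task_id, 2 * task_id + 1], 50)
    buffer.add(encode(x), y)
    # Refit on everything seen so far, replayed from the latent buffer.
    clf.fit(np.stack(buffer.feats), np.array(buffer.labels))

# Evaluate on fresh samples from all tasks: earlier classes are retained
# because their latents still live in the buffer.
correct, total = 0, 0
for task_id, centers in enumerate(task_centers):
    x = np.vstack([c + 0.1 * rng.standard_normal((20, 20)) for c in centers])
    y = np.repeat([2 * task_id, 2 * task_id + 1], 20)
    correct += (clf.predict(encode(x)) == y).sum()
    total += len(y)
print(f"accuracy over all tasks: {correct / total:.2f}")
```

Because only 8-dimensional latents are stored and the classifier is just per-class means, both memory and compute stay negligible, which is the "non-parametric classifier at negligible compute" regime the abstract refers to.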
Sequoia: A Software Framework to Unify Continual Learning Research
The field of Continual Learning (CL) seeks to develop algorithms that accumulate knowledge and skills over time through interaction with non-stationary environments. In practice, a plethora of evaluation procedures (settings) and algorithmic solutions (methods) exist, each with their own potentially disjoint set of assumptions. This variety makes measuring progress in CL difficult. We propose a taxonomy of settings, where each setting is described as a set of assumptions. A tree-shaped hierarchy emerges from this view, where more general settings become the parents of those with more restrictive assumptions. This makes it possible to use inheritance to share and reuse research, as developing a method for a given setting also makes it directly applicable to any of its children. We instantiate this idea as a publicly available software framework called Sequoia, which features a wide variety of settings from both the Continual Supervised Learning (CSL) and Continual Reinforcement Learning (CRL) domains. Sequoia also includes a growing suite of methods that are easy to extend and customize, in addition to more specialized methods from external libraries. We hope that this new paradigm and its first implementation can help unify and accelerate research in CL. You can help us grow the tree by visiting github.com/lebrice/Sequoia
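The core mechanism described above, a tree of settings where a method written for a general setting automatically applies to every more restrictive child, maps naturally onto class inheritance. The sketch below illustrates that idea only; the class and method names are assumptions for the example and are not Sequoia's actual API.

```python
# Settings form a tree: each subclass adds a more restrictive assumption.
class Setting:
    """Root: the most general continual-learning assumptions."""

class ContinualSLSetting(Setting):
    """Adds the assumption that supervised labels are available."""

class TaskIncrementalSetting(ContinualSLSetting):
    """Adds the assumption that task identity is known at test time."""

class Method:
    # The most general setting this method is designed for.
    target_setting = Setting

    def is_applicable_to(self, setting_cls):
        # Inheritance does the work: a method developed for a parent
        # setting applies to all of that setting's descendants.
        return issubclass(setting_cls, self.target_setting)

class ReplayMethod(Method):
    target_setting = ContinualSLSetting

m = ReplayMethod()
print(m.is_applicable_to(TaskIncrementalSetting))  # child of its target setting
print(m.is_applicable_to(Setting))  # more general than its target setting
```

Under this design, registering a new method at one node of the tree immediately makes it runnable, and comparable, on every descendant setting, which is how the framework turns the taxonomy into reusable research.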