Accuracy guarantees for recovery of block-sparse signals
We introduce a general framework to handle structured models (sparse and
block-sparse with possibly overlapping blocks). We discuss new methods for
their recovery from incomplete observations corrupted with deterministic and
stochastic noise, using block-ℓ1 regularization. While the current theory
provides promising bounds for the recovery errors under a number of different,
yet mostly hard to verify conditions, our emphasis is on verifiable conditions
on the problem parameters (sensing matrix and the block structure) which
guarantee accurate recovery. Verifiability of our conditions not only leads to
efficiently computable bounds for the recovery error but also allows us to
optimize these error bounds with respect to the method parameters, and
therefore construct estimators with improved statistical properties. To justify
our approach, we also provide an oracle inequality, which links the properties
of the proposed recovery algorithms and the best estimation performance.
Furthermore, utilizing these verifiable conditions, we develop a
computationally cheap alternative to block-ℓ1 minimization, the
non-Euclidean Block Matching Pursuit algorithm. We close by presenting a
numerical study to investigate the effect of different block regularizations
and demonstrate the performance of the proposed recovery procedures. Comment:
Published at http://dx.doi.org/10.1214/12-AOS1057 in the Annals of Statistics
(http://www.imstat.org/aos/) by the Institute of Mathematical Statistics
(http://www.imstat.org).
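The block-ℓ1 regularization mentioned above corresponds to the group-lasso penalty, which shrinks whole blocks of coefficients to zero at once. A minimal proximal-gradient (ISTA) sketch of such a recovery is shown below; this is a generic illustration, not the authors' algorithm, and the matrix, block structure, and parameters are made up:

```python
import numpy as np

def block_soft_threshold(v, t):
    """Proximal operator of t * ||v||_2: shrink the whole block toward zero."""
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= t else (1 - t / n) * v

def group_lasso_ista(A, y, blocks, lam, step, n_iter=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam * sum_b ||x_b||_2."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                 # gradient of the quadratic term
        z = x - step * g                      # gradient step
        for b in blocks:                      # blockwise shrinkage
            x[b] = block_soft_threshold(z[b], step * lam)
    return x

# Tiny demo: recover a block-sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
blocks = [np.arange(i, i + 4) for i in range(0, 20, 4)]   # 5 disjoint blocks
x_true = np.zeros(20)
x_true[0:4] = 1.0                                         # one active block
y = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2                    # 1 / Lipschitz const
x_hat = group_lasso_ista(A, y, blocks, lam=0.1, step=step)
```

With a small penalty and noiseless data, the estimate lands close to the true block-sparse vector while the inactive blocks stay (near) zero.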
On a class of optimization-based robust estimators
We consider in this paper the problem of estimating a parameter matrix from
observations which are affected by two types of noise components: (i) a sparse
noise sequence which, whenever nonzero, can have arbitrarily large amplitude,
and (ii) a dense and bounded noise sequence of "moderate" amount. This is
termed a robust regression problem. To tackle it, a quite general
optimization-based framework is proposed and analyzed. When only the sparse
noise is present, a sufficient bound is derived on the number of nonzero
elements in the sparse noise sequence that can be accommodated by the estimator
while still returning the true parameter matrix. Whereas almost all the
restricted isometry-based bounds from the literature are not verifiable, our
bound can be easily computed by solving a convex optimization problem.
Moreover, empirical evidence tends to suggest that it is generally tight. If in
addition to the sparse noise sequence, the training data are affected by a
bounded dense noise, we derive an upper bound on the estimation error. Comment:
To appear in IEEE Transactions on Automatic Control.
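A standard convex formulation in the same spirit models the observations as a clean regression plus a sparse outlier vector and penalizes that vector with an ℓ1 norm. The sketch below uses exact alternating minimization (each subproblem is closed-form); it is an illustrative stand-in, not the paper's estimator, and the data and penalty weight are invented:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def robust_ls(X, y, lam, n_iter=200):
    """Solve min_{theta,e} 0.5*||y - X@theta - e||^2 + lam*||e||_1
    by alternating minimization; both subproblems are closed-form."""
    theta = np.zeros(X.shape[1])
    e = np.zeros_like(y)
    X_pinv = np.linalg.pinv(X)            # reused for the least-squares step
    for _ in range(n_iter):
        theta = X_pinv @ (y - e)          # LS fit with outlier estimate removed
        e = soft(y - X @ theta, lam)      # sparse outlier estimate
    return theta, e

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
theta_true = np.arange(1.0, 6.0)
y = X @ theta_true
y[::20] += 50.0                           # 5 gross outliers of huge amplitude
theta_hat, e_hat = robust_ls(X, y, lam=1.0)
```

The outliers are absorbed almost entirely by the sparse component e, so the parameter estimate stays close to the truth despite corruptions far larger than the signal.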
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference. Comment: 69 pages, 24 extended
abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Cryptographic Tools for Privacy Preservation
Data permeates every aspect of our daily life and is the backbone of our
digitalized society. Smartphones, smartwatches and many more smart devices
measure, collect, modify and share data in what is known as the Internet of
Things. Often, these devices don't have enough computation power or storage
space and thus outsource some aspects of data management to the Cloud.
Outsourcing computation or storage to a third party raises natural questions
regarding the security and privacy of the shared sensitive data. Intuitively,
Cryptography is a toolset of primitives and protocols whose security properties
are formally proven, while Privacy typically captures additional social and
legislative requirements that relate more to the concept of "trust" between
people, "how" data is used, and "who" has access to data. This thesis separates
the two concepts by introducing an abstract model that classifies data leaks
into different types of breaches. Each class represents a specific requirement
or goal related to cryptography, e.g. confidentiality or integrity, or related
to privacy, e.g. liability, sensitive data management and more. The thesis
contains cryptographic tools designed to provide privacy guarantees for
different application scenarios.
In more detail, the thesis: (a) defines new encryption schemes that provide
formal privacy guarantees, whether theoretical privacy definitions like
Differential Privacy (DP) or concrete privacy-oriented applications covered by
existing regulations such as the European General Data Protection Regulation
(GDPR); (b) proposes new tools and procedures for providing verifiable-
computation guarantees in concrete scenarios for post-quantum cryptography or
generalisations of signature schemes; (c) proposes a methodology for utilising
Machine Learning (ML) to analyse the effective security and privacy of a
crypto-tool and, dually, proposes a secure primitive that allows computing a
specific ML algorithm in a privacy-preserving way; and (d) provides an
alternative protocol for secure communication between two parties, based on
the idea of communicating in a periodically timed fashion.
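The Differential Privacy mentioned above is most often illustrated with the textbook Laplace mechanism: add noise calibrated to a query's sensitivity and the privacy budget epsilon. The sketch below is a generic illustration, unrelated to the thesis's own constructions; the query and parameter values are made up:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value + Laplace(sensitivity/epsilon) noise, which gives
    epsilon-differential privacy for a query with the stated L1 sensitivity."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
# Counting query: one person joining or leaving changes the count by at most 1,
# so the L1 sensitivity is 1.
exact_count = 1234
noisy_count = laplace_mechanism(exact_count, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy but more noise; here the released count differs from the exact one by a few units on average.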
LIPIcs, Volume 251, ITCS 2023, Complete Volume
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Conditional Gradient Methods
The purpose of this survey is to serve both as a gentle introduction and a
coherent overview of state-of-the-art Frank--Wolfe algorithms, also called
conditional gradient algorithms, for function minimization. These algorithms
are especially useful in convex optimization when linear optimization is
cheaper than projections.
The selection of the material has been guided by the principle of
highlighting crucial ideas as well as presenting new approaches that we believe
might become important in the future, with ample citations even of old works
imperative to the development of newer methods. Yet our selection is sometimes
biased, may not reflect the consensus of the research community, and we have
certainly missed recent important contributions. After all, the research area
of Frank--Wolfe methods is very active, making it a moving target. We
apologize sincerely in advance for any such distortions, and we fully
acknowledge: we stand on the shoulders of giants. Comment: 238 pages with many
figures. The FrankWolfe.jl Julia package
(https://github.com/ZIB-IOL/FrankWolfe.jl) provides state-of-the-art
implementations of many Frank--Wolfe methods.
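The key point of the survey's opening remark is that Frank-Wolfe replaces projections with linear minimization over the feasible set. On the ℓ1 ball, for example, the linear subproblem is solved by inspecting a single coordinate. A minimal sketch with the classic 2/(k+2) step size (illustrative, not taken from the survey; the objective and radius are made up):

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius, n_iter=200):
    """Frank-Wolfe on the l1 ball of given radius: each step solves a
    *linear* problem over the ball (optimum at a vertex +/- radius*e_i),
    which is far cheaper than projecting onto the ball."""
    x = x0.copy()
    for k in range(n_iter):
        g = grad(x)
        i = np.argmax(np.abs(g))              # linear minimization oracle:
        s = np.zeros_like(x)                  # best vertex of the l1 ball
        s[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (k + 2.0)               # classic step-size rule
        x = (1 - gamma) * x + gamma * s
    return x

# Demo: minimize ||x - b||^2 over the l1 ball of radius 1.
b = np.array([2.0, 0.5, -0.2])
grad = lambda x: 2.0 * (x - b)
x_star = frank_wolfe_l1(grad, np.zeros(3), radius=1.0)
```

Every iterate is a convex combination of vertices, so feasibility is maintained for free; here the iterates approach the projection of b onto the ball, which is (1, 0, 0).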
Fault-tolerant feature-based estimation of space debris motion and inertial properties
The exponential growth of society's needs and the parallel development of
space technologies have led to significant use of the low Earth orbits for
placing artificial satellites. The current overpopulation of these orbits has
also increased the interest of the major space agencies in technologies for
removing at least the largest spacecraft that have reached their end of life
or have failed their mission.
One of the key functionalities required in a mission for removing a non-cooperative
spacecraft is the assessment of its kinematics and inertial properties. In a few cases, this
information can be approximated by ground observations. However, a re-assessment
after the rendezvous phase is of critical importance for refining the capture strategies
preventing accidents. The CADET program (CApture and DE-orbiting Technologies),
funded by Regione Piemonte and led by Aviospace s.r.l., involved Politecnico di Torino
in the research for solutions to the above issue.
This dissertation proposes methods and algorithms for estimating the location of
the center of mass, the angular rate, and the moments of inertia of a passive object.
These methods require that the chaser spacecraft be capable of tracking several features
of the target through passive vision sensors. Because of harsh lighting conditions in
the space environment, feature-based methods should tolerate temporary failures in
detecting features. The principal works on this topic do not consider this important
aspect, making it a characteristic trait of the proposed methods. Compared to typical
treatments of the estimation problem, the proposed techniques do not depend solely on
state observers. Instead, methods for recovering missing information, such as
compressive sampling techniques, are used to preprocess the input data and
support the efficient use
of state observers. Simulation results showed accuracy properties that are comparable to
those of the best-known methods already proposed in the literature.
The developed algorithms were tested in CADETLab, the laboratory set up by
Aviospace s.r.l. The results of the experimental tests suggest the practical
applicability of such algorithms for supporting a real active removal mission.
LIPIcs, Volume 261, ICALP 2023, Complete Volume
LIPIcs, Volume 261, ICALP 2023, Complete Volume