Direct lunar descent optimisation by finite elements in time approach
In this paper a direct approach to trajectory optimisation, based on a Finite Elements in Time (FET) discretisation, is presented. Trajectory optimisation is performed by combining the effectiveness and flexibility of Finite Elements in Time in solving complex boundary value problems with a standard nonlinear programming algorithm. To mitigate the low accuracy typical of direct approaches, a mesh adaptivity strategy is implemented which exploits the ability of finite elements to represent both continuous and discontinuous functions. The effectiveness and accuracy of direct transcription by FET are demonstrated on a selection of sample problems. Finally, an optimal landing manoeuvre is presented to show the power of the proposed approach in solving complex and realistic problems.
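The direct-transcription idea can be sketched on a toy problem. The code below solves a hypothetical 1-D soft-landing problem (double integrator under constant lunar gravity, minimum control effort); trapezoidal collocation and SciPy's SLSQP solver stand in for the paper's finite-elements-in-time discretisation and NLP algorithm, and all numbers (gravity, horizon, initial state) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D soft landing: minimise the integral of u^2 subject to
# x' = v, v' = u - g, with fixed initial state and x(T) = v(T) = 0.
# Trapezoidal collocation stands in for the FET discretisation.
N = 20            # number of time elements
T = 10.0          # fixed final time [s] (illustrative)
h = T / N
g = 1.62          # lunar surface gravity [m/s^2]
x0, v0 = 100.0, -5.0   # initial altitude [m] and velocity [m/s]

def unpack(z):
    x = z[:N + 1]
    v = z[N + 1:2 * (N + 1)]
    u = z[2 * (N + 1):]
    return x, v, u

def objective(z):
    _, _, u = unpack(z)
    # trapezoidal quadrature of u^2 over [0, T]
    return h * (0.5 * u[0]**2 + np.sum(u[1:-1]**2) + 0.5 * u[-1]**2)

def defects(z):
    # collocation defects (dynamics) plus boundary conditions
    x, v, u = unpack(z)
    a = u - g
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (a[1:] + a[:-1])
    bc = [x[0] - x0, v[0] - v0, x[-1], v[-1]]
    return np.concatenate([dx, dv, bc])

z0 = np.concatenate([np.linspace(x0, 0, N + 1),
                     np.linspace(v0, 0, N + 1),
                     np.full(N + 1, g)])
res = minimize(objective, z0, constraints={'type': 'eq', 'fun': defects},
               method='SLSQP', options={'maxiter': 500})
x, v, u = unpack(res.x)
```

The mesh-adaptivity and discontinuity handling of the paper's FET scheme are not represented here; this only illustrates how a boundary value problem becomes a finite-dimensional NLP once the trajectory is discretised in time.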
Sparse Vector Autoregressive Modeling
The vector autoregressive (VAR) model has been widely used for modeling
temporal dependence in a multivariate time series. For large (and even
moderate) dimensions, the number of AR coefficients can be prohibitively large,
resulting in noisy estimates, unstable predictions and difficult-to-interpret
temporal dependence. To overcome such drawbacks, we propose a 2-stage approach
for fitting sparse VAR (sVAR) models in which many of the AR coefficients are
zero. The first stage selects non-zero AR coefficients based on an estimate of
the partial spectral coherence (PSC) together with the use of BIC. The PSC is
useful for quantifying the conditional relationship between marginal series in
a multivariate process. A second, refinement stage is then applied to further
reduce the number of parameters. The performance of this 2-stage approach is
illustrated with simulation results. The 2-stage approach is also applied to
two real data examples: the first is the Google Flu Trends data and the second
is a time series of concentration levels of air pollutants.
Comment: 39 pages, 7 figures
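The two-stage structure can be sketched on simulated data. In the stand-in below, a simple magnitude-based screen replaces the paper's PSC-plus-BIC selection (stage 1), and each equation is then re-estimated over the selected support only (stage 2); the VAR(1) system, threshold, and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a K-dimensional VAR(1) with a sparse coefficient matrix A.
K, T = 5, 2000
A = np.zeros((K, K))
A[0, 0], A[1, 0], A[2, 2], A[4, 3] = 0.6, 0.4, 0.5, 0.3   # the only non-zeros
Y = np.zeros((T, K))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A.T + rng.normal(scale=0.5, size=K)

X, Z = Y[:-1], Y[1:]     # lagged predictors and responses

# Stage 1 (screening): full least-squares fit, then keep coefficients whose
# magnitude exceeds a threshold. (The paper screens via partial spectral
# coherence + BIC; this magnitude screen is only a stand-in.)
A_full = np.linalg.lstsq(X, Z, rcond=None)[0].T
support = np.abs(A_full) > 0.1

# Stage 2 (refinement): re-estimate each equation using only the selected
# predictors, fixing the remaining coefficients at zero.
A_hat = np.zeros_like(A_full)
for i in range(K):
    idx = np.flatnonzero(support[i])
    if idx.size:
        A_hat[i, idx] = np.linalg.lstsq(X[:, idx], Z[:, i], rcond=None)[0]
```

The refit in stage 2 removes the shrinkage-free noise carried by the discarded coefficients, which is the point of having a second stage at all.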
Distributed Cooperative Localization in Wireless Sensor Networks without NLOS Identification
In this paper, a 2-stage robust distributed algorithm is proposed for
cooperative sensor network localization using time of arrival (TOA) data
without identification of non-line of sight (NLOS) links. In the first stage,
to overcome the effect of outliers, a convex relaxation of the Huber loss
function is applied so that by using iterative optimization techniques, good
estimates of the true sensor locations can be obtained. In the second stage,
the original (non-relaxed) Huber cost function is further optimized to obtain
refined location estimates based on those obtained in the first stage. In both
stages, a simple gradient descent technique is used to carry out the
optimization. Through simulations and real data analysis, it is shown that the
proposed convex relaxation generally achieves a lower root mean squared error
(RMSE) compared to other convex relaxation techniques in the literature.
Moreover, the second stage improves the position estimates, achieving an RMSE
close to that of other distributed algorithms which know \textit{a priori}
which links are in NLOS.
Comment: Accepted in WPNC 201
Sparse and Non-Negative BSS for Noisy Data
Non-negative blind source separation (BSS) has raised interest in various
fields of research, as testified by the wide literature on the topic of
non-negative matrix factorization (NMF). In this context, it is fundamental
that the sources to be estimated present some diversity in order to be
efficiently retrieved. Sparsity is known to enhance such contrast between the
sources while producing very robust approaches, especially to noise. In this
paper we introduce a new algorithm in order to tackle the blind separation of
non-negative sparse sources from noisy measurements. We first show that
sparsity and non-negativity constraints have to be carefully applied on the
sought-after solution. In fact, improperly constrained solutions are unlikely
to be stable and are therefore sub-optimal. The proposed algorithm, named nGMCA
(non-negative Generalized Morphological Component Analysis), makes use of
proximal calculus techniques to provide properly constrained solutions. The
performance of nGMCA compared to other state-of-the-art algorithms is
demonstrated by numerical experiments encompassing a wide variety of settings,
with negligible parameter tuning. In particular, nGMCA is shown to provide
robustness to noise and performs well on synthetic mixtures of real NMR
spectra.
Comment: 13 pages, 18 figures, to be published in IEEE Transactions on Signal
Processing
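The key ingredient, applying sparsity and non-negativity together through a proximal operator rather than as an afterthought, can be sketched with a crude alternating proximal-gradient stand-in for nGMCA. The proximal operator of the sum of an $\ell_1$ penalty and the non-negativity indicator is non-negative soft thresholding; dimensions, the regularisation weight, and the synthetic mixtures below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def prox_nonneg_l1(S, lam):
    # Proximal operator of lam*||S||_1 + indicator(S >= 0):
    # non-negative soft thresholding (shrink, then clip at zero).
    return np.maximum(S - lam, 0.0)

# Synthetic noisy non-negative mixtures X = A0 @ S0 + noise.
n_obs, n_src, n_samp = 8, 3, 200
A0 = rng.random((n_obs, n_src))
S0 = np.maximum(rng.normal(size=(n_src, n_samp)), 0)   # sparse, non-negative
X = A0 @ S0 + 0.01 * rng.normal(size=(n_obs, n_samp))

# Alternating proximal-gradient updates (a stand-in for nGMCA proper).
A = np.abs(rng.normal(size=(n_obs, n_src)))
S = np.zeros((n_src, n_samp))
lam = 0.05
for _ in range(200):
    # gradient step on S, then prox enforcing sparsity + non-negativity
    L = np.linalg.norm(A.T @ A, 2)                     # Lipschitz constant
    S = prox_nonneg_l1(S - (A.T @ (A @ S - X)) / L, lam / L)
    # gradient step on A, then projection onto the non-negative orthant
    L = np.linalg.norm(S @ S.T, 2)
    A = np.maximum(A - ((A @ S - X) @ S.T) / L, 0.0)
    A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)  # fix scale ambiguity
```

Applying the shrinkage and the non-negativity projection jointly in one proximal step is what "properly constrained" means in the abstract: clipping a soft-thresholded iterate separately, or thresholding a clipped one with the wrong order, changes the fixed points of the iteration.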
FlowNet: Learning Optical Flow with Convolutional Networks
Convolutional neural networks (CNNs) have recently been very successful in a
variety of computer vision tasks, especially on those linked to recognition.
Optical flow estimation has not been among the tasks where CNNs were
successful. In this paper we construct appropriate CNNs which are capable of
solving the optical flow estimation problem as a supervised learning task. We
propose and compare two architectures: a generic architecture and another one
including a layer that correlates feature vectors at different image locations.
Since existing ground truth data sets are not sufficiently large to train a
CNN, we generate a synthetic Flying Chairs dataset. We show that networks
trained on this unrealistic data still generalize very well to existing
datasets such as Sintel and KITTI, achieving competitive accuracy at frame
rates of 5 to 10 fps.
Comment: Added supplementary material
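The distinctive component of the second architecture, the layer that correlates feature vectors at different image locations, can be sketched in NumPy. This assumes unit stride, a square displacement window, and normalisation by channel count, as described for FlowNetC; it is an illustration of the operation, not the paper's CUDA implementation.

```python
import numpy as np

def correlation_layer(f1, f2, max_disp=3):
    """Correlate the per-pixel feature vectors of two maps f1, f2 of shape
    (H, W, C) over a (2*max_disp+1)^2 window of displacements, returning
    an array of shape (H, W, (2*max_disp+1)**2). Out-of-image positions
    in f2 are treated as zero via padding."""
    H, W, C = f1.shape
    d = max_disp
    f2p = np.pad(f2, ((d, d), (d, d), (0, 0)))
    out = np.empty((H, W, (2 * d + 1) ** 2), dtype=f1.dtype)
    k = 0
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            # dot product of each f1 feature vector with the f2 vector
            # displaced by (dy, dx), normalised by the channel count
            shifted = f2p[d + dy:d + dy + H, d + dx:d + dx + W]
            out[..., k] = (f1 * shifted).sum(axis=-1) / C
            k += 1
    return out
```

Each output channel corresponds to one displacement, so the result is a dense cost volume that subsequent convolutional layers can turn into flow estimates.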