1,370 research outputs found
Optimization of the motion estimation for parallel embedded systems in the context of new video standards
15 pages. International audience. The efficiency of video compression methods depends mainly on the motion compensation stage, and the design of efficient motion estimation techniques remains an important issue. A highly accurate motion estimation can significantly reduce the bit-rate, but involves a high computational complexity. This is particularly true for the new generations of video compression standards, MPEG AVC and HEVC, which involve techniques such as multiple reference frames, sub-pixel estimation and variable block sizes. In this context, the design of fast motion estimation solutions is necessary, and concerns two linked aspects: a high-quality algorithm and its efficient implementation. This paper summarizes our main contributions in this domain. In particular, we first present the HME (Hierarchical Motion Estimation) technique. It is based on a multi-level refinement process where the motion vectors are first estimated on a sub-sampled image. The multi-level decomposition provides robust predictions and is particularly suited to variable block size motion estimation. The HME method has been integrated in an AVC encoder, and we propose a parallel implementation of this technique, with the motion estimation at pixel level performed by a DSP processor and the sub-pixel refinement realized in an FPGA. The second technique that we present is called HDS, for Hierarchical Diamond Search. It combines the multi-level refinement of HME with a fast search at pixel accuracy inspired by the EPZS method. This paper also presents its parallel implementation on a multi-DSP platform and its use in the HEVC context.
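The coarse-to-fine idea behind hierarchical motion estimation can be sketched as follows. This is an illustrative two-level toy in plain Python, not the authors' DSP/FPGA implementation; the block size, search ranges and 2x2 sub-sampling are assumptions made for the example.

```python
# Hedged sketch of two-level hierarchical motion estimation: estimate a
# motion vector on sub-sampled frames, then refine it at full resolution.

def sad(ref, cur, bx, by, dx, dy, bs):
    """Sum of absolute differences between the block of `cur` at (bx, by)
    and the block of `ref` displaced by (dx, dy)."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(by, by + bs):
        for x in range(bx, bx + bs):
            ry, rx = y + dy, x + dx
            if not (0 <= ry < h and 0 <= rx < w):
                return float("inf")  # displaced block leaves the frame
            total += abs(cur[y][x] - ref[ry][rx])
    return total

def downsample(frame):
    """2x2 mean sub-sampling: one level of the image pyramid."""
    return [[(frame[2*y][2*x] + frame[2*y][2*x+1] +
              frame[2*y+1][2*x] + frame[2*y+1][2*x+1]) // 4
             for x in range(len(frame[0]) // 2)]
            for y in range(len(frame) // 2)]

def full_search(ref, cur, bx, by, bs, rng, cx=0, cy=0):
    """Exhaustive search in [-rng, rng]^2 around a centre prediction."""
    best = (float("inf"), 0, 0)
    for dy in range(cy - rng, cy + rng + 1):
        for dx in range(cx - rng, cx + rng + 1):
            cost = sad(ref, cur, bx, by, dx, dy, bs)
            if cost < best[0]:
                best = (cost, dx, dy)
    return best

def hme(ref, cur, bx, by, bs):
    # Level 1: cheap estimate on the sub-sampled images.
    _, dx, dy = full_search(downsample(ref), downsample(cur),
                            bx // 2, by // 2, bs // 2, rng=2)
    # Level 0: refine the doubled coarse vector at full resolution.
    return full_search(ref, cur, bx, by, bs, rng=1, cx=2*dx, cy=2*dy)
```

The coarse level only searches a small window, and the fine level only refines by one pixel, which is what makes the hierarchy cheaper than a single large full search.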
ICASE/LaRC Workshop on Adaptive Grid Methods
Solution-adaptive grid techniques are essential to the attainment of practical, user-friendly computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.
Block matching algorithm for motion estimation based on Artificial Bee Colony (ABC)
Block matching (BM) motion estimation plays a very important role in video
coding. In a BM approach, image frames in a video sequence are divided into
blocks. For each block in the current frame, the best matching block is
identified inside a region of the previous frame, aiming to minimize the sum of
absolute differences (SAD). Unfortunately, the SAD evaluation is
computationally expensive and represents the most time-consuming operation in the BM
process. Therefore, BM motion estimation can be approached as an optimization
problem, where the goal is to find the best matching block within a search
space. The simplest available BM method is the full search algorithm (FSA)
which finds the most accurate motion vector through an exhaustive computation
of SAD values for all elements of the search window. Recently, several fast BM
algorithms have been proposed to reduce the number of SAD operations by
calculating only a fixed subset of search locations at the price of poor
accuracy. In this paper, a new algorithm based on Artificial Bee Colony (ABC)
optimization is proposed to reduce the number of search locations in the BM
process. In our algorithm, the computation of search locations is drastically
reduced by considering a fitness calculation strategy which indicates when it
is feasible to calculate or only estimate new search locations. Since the
proposed algorithm does not consider any fixed search pattern or any other
movement assumption as most of other BM approaches do, a high probability for
finding the true minimum (accurate motion vector) is expected. Conducted
simulations show that the proposed method achieves the best balance over other
fast BM algorithms, in terms of both estimation accuracy and computational
cost. Comment: 22 pages. arXiv admin note: substantial text overlap with
arXiv:1405.4721, arXiv:1406.448
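The fitness-calculation strategy described above can be illustrated with a small sketch: compute the true SAD only when no previously evaluated search location is close enough, otherwise reuse a nearby value as a cheap estimate. The distance measure, radius and nearest-neighbour estimator here are illustrative assumptions, not the paper's ABC formulation.

```python
# Hedged sketch of a fitness-approximation strategy for block matching.

def approx_fitness(candidate, evaluated, true_sad, radius=2):
    """Return (fitness, was_calculated). `evaluated` maps already-visited
    (dx, dy) locations to their exact SAD values; `true_sad` is the
    expensive exact evaluation."""
    cx, cy = candidate
    best, bestd = None, None
    for (ex, ey), s in evaluated.items():
        d = abs(ex - cx) + abs(ey - cy)       # L1 distance in search space
        if bestd is None or d < bestd:
            best, bestd = s, d
    if bestd is not None and bestd <= radius:
        # A nearby location was already evaluated: assume the SAD surface
        # varies smoothly and reuse that value (no new SAD computation).
        return best, False
    s = true_sad(cx, cy)                      # exact, expensive evaluation
    evaluated[(cx, cy)] = s
    return s, True
```

Any population-based optimizer can then call `approx_fitness` instead of the raw SAD, so most of its candidate locations cost nothing to score.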
Block matching algorithm based on Harmony Search optimization for motion estimation
Motion estimation is one of the major problems in developing video coding
applications. Among all motion estimation approaches, Block-matching (BM)
algorithms are the most popular methods due to their effectiveness and
simplicity for both software and hardware implementations. A BM approach
assumes that the movement of pixels within a defined region of the current
frame can be modeled as a translation of pixels contained in the previous
frame. In this procedure, the motion vector is obtained by minimizing a certain
matching metric that is produced for the current frame over a determined search
window from the previous frame. Unfortunately, the evaluation of such matching
measurement is computationally expensive and represents the most time-consuming
operation in the BM process. Therefore, BM motion estimation can be viewed as
an optimization problem whose goal is to find the best-matching block within a
search space. The simplest available BM method is the Full Search Algorithm
(FSA) which finds the most accurate motion vector through an exhaustive
computation of all the elements of the search space. Recently, several fast BM
algorithms have been proposed to reduce the search positions by calculating
only a fixed subset of motion vectors, at the cost of lower accuracy. On the
other hand, the Harmony Search (HS) algorithm is a population-based
optimization method that is inspired by the music improvisation process in
which a musician searches for harmony and continues to polish the pitches to
obtain a better harmony. In this paper, a new BM algorithm that combines HS
with a fitness approximation model is proposed. The approach uses motion
vectors belonging to the search window as potential solutions. A fitness
function evaluates the matching quality of each motion vector candidate. Comment: 25 pages. arXiv admin note: substantial text overlap with
arXiv:1405.472
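The music-improvisation metaphor maps onto a concrete loop: keep a small harmony memory of candidate motion vectors, improvise a new vector coordinate-by-coordinate (from memory, with occasional pitch adjustment, or at random), and replace the worst member when the new one is better. This is a generic Harmony Search sketch minimizing an arbitrary cost; the memory size, HMCR/PAR rates, iteration count and the plain cost function (standing in for the paper's SAD-plus-approximation scheme) are all illustrative assumptions.

```python
# Minimal Harmony Search over integer motion vectors in [-rng, rng]^2.
import random

def harmony_search(cost, rng=8, hms=6, hmcr=0.9, par=0.3, iters=200, seed=1):
    random.seed(seed)
    lo, hi = -rng, rng
    # Harmony memory: a small population of candidate motion vectors.
    hm = [(random.randint(lo, hi), random.randint(lo, hi)) for _ in range(hms)]
    hm.sort(key=cost)
    for _ in range(iters):
        new = []
        for d in range(2):                      # improvise dx, then dy
            if random.random() < hmcr:          # draw from memory...
                v = random.choice(hm)[d]
                if random.random() < par:       # ...with pitch adjustment
                    v = max(lo, min(hi, v + random.choice((-1, 1))))
            else:                               # or take a fresh random value
                v = random.randint(lo, hi)
            new.append(v)
        new = tuple(new)
        if cost(new) < cost(hm[-1]):            # replace the worst harmony
            hm[-1] = new
            hm.sort(key=cost)
    return hm[0]                                # best motion vector found
```

In a BM setting, `cost` would be the (possibly approximated) SAD of the block displaced by the candidate vector.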
Prostate cancer biochemical recurrence prediction using bpMRI radiomics, clinical and histopathological data
Integrated master's thesis in Biomedical Engineering and Biophysics (Medical Signals and Images), Universidade de Lisboa, Faculdade de Ciências, 2021. Prostate cancer is the second most frequent oncological disease in men, and is often treated with total surgical removal of the organ, known as radical prostatectomy. Despite advances in diagnosis and the evolution of surgical therapies, 20–35% of radical prostatectomy candidates treated with curative intent suffer biochemical recurrence, a condition that represents failure of the initial treatment and is also the first sign of disease progression. In particular, two-thirds of biochemical recurrence cases occur within a two-year period. When it occurs early, this state implies greater biological aggressiveness of the disease and a worse prognosis, since it may be due to the presence of occult, locally advanced or metastatic disease. Although the prognosis after biochemical recurrence varies, it is generally associated with an increased risk of developing metastatic disease and of prostate-cancer-specific mortality, thus representing an important clinical concern after definitive therapy. However, current predictive models of biochemical recurrence not only fail to explain the variability of post-surgical outcomes, but also cannot intervene early in the treatment decision process, since they depend on information derived from the histopathological evaluation of the prostatectomy specimen or of the biopsy. Currently, the standard examination for diagnosis and staging of prostate cancer is multiparametric magnetic resonance imaging, and features derived from the evaluation of these images have shown potential for tumour characterisation and for prediction of biochemical recurrence. 
"Radiomics", the recent methodology applied to the quantitative analysis of medical images, has shown the ability to objectively quantify the macroscopic heterogeneity of biological tissues such as tumours. This detected heterogeneity has been suggested to be associated with genomic heterogeneity, which in turn has been shown to correlate with treatment resistance and metastatic propensity. However, the potential of radiomic analysis of multiparametric magnetic resonance imaging (MRI) of the prostate for prediction of biochemical recurrence after radical prostatectomy has not yet been fully explored. This dissertation proposed to explore the potential of radiomic analysis applied to pre-surgical biparametric MRI of the prostate for the prediction of biochemical recurrence within two years after radical prostatectomy. This potential was assessed through predictive models based on radiomic data and clinico-histopathological parameters commonly acquired in three clinical phases: pre-biopsy, pre-surgical and post-surgical. 93 patients, out of a total of 250, were eligible for this retrospective study, of whom 20 experienced biochemical recurrence. 33 clinico-histopathological parameters were collected, and 2715 radiomic features based on intensity, shape and texture were extracted from the whole prostate volume imaged in original and filtered biparametric MRI, namely T2-weighted and Diffusion-weighted images and apparent diffusion coefficient (ADC) maps. Although the eligible patients were examined at the same institution, the characteristics of the image set were heterogeneous, and several processing steps had to be applied to enable a fairer comparison. 
Bias field correction and manual segmentation of the T2 images were performed, along with registration both to transpose the volume-of-interest delineations between imaging modalities and to correct for motion, computation of ADC maps, field-of-view regularisation, custom grey-level quantisation, and resampling. As the collected data were high-dimensional (more variables than observations), L1-penalised logistic regression (LASSO) was chosen to solve the classification problem. The use of penalisation together with logistic regression, a simple method commonly used in classification studies, prevents the overfitting that is likely in this high-dimensionality scenario. Besides the popular LASSO, we also used the Priority-LASSO algorithm, a recent method built on LASSO for handling "omics" data. Priority-LASSO is based on defining a hierarchy, or priority, of the study variables by grouping them into sequential blocks. In this work we explored two ways of grouping the variables (clinico-histopathological vs. radiomic, and clinico-histopathological vs. T2 vs. Diffusion vs. ADC). In addition, we examined the impact of the order of these blocks on model performance by testing all possible block permutations (2 and 24, respectively) in each case. A machine-learning framework comprising classification methods, repeated stratified k-fold cross-validation and statistical analyses was thus developed to identify the best classifiers within the set of configurations tested for each of the three simulated clinical scenarios. The LASSO-penalised logistic regression and Priority-LASSO algorithms jointly performed feature selection and model fitting. 
The models were developed to optimise the number of correctly classified biochemical recurrence positive cases by maximising the area under the curve (AUC) and the F-measure (Fmax) derived from receiver operating characteristic (ROC) curve analysis. Besides comparing the Priority-LASSO implementations with the case in which variables were not grouped (that is, LASSO), two image normalisation methods were also compared on the basis of model performance (assessed by Fmax): one took into account the intensity signal from the prostate and its immediately surrounding tissues, the other from the prostate alone. In parallel, the effect on model performance of the SMOTE sampling method, which balances the number of positive and negative cases during the algorithm's learning process, was also assessed. With this method, we generated synthetic cases for the positive (minority) biochemical recurrence class from the existing cases. The Priority-LASSO logistic regression model with the block sequence clinico-histopathological, T2, Diffusion, ADC and with per-block sparsity restriction pmax = (1,7,0,1) was selected as the best configuration in each of the clinical scenarios tested, outperforming the LASSO logistic regression models. During model development, and in all clinical scenarios, the best-performing models achieved good mean Fmax values (minimum–maximum: 0.702–0.754 and 0.910–0.925 for the positive and negative biochemical recurrence classes, respectively). However, in the final validation with an independent data set, the models obtained very low Fmax values for the positive class (0.297–0.400), revealing overfitting despite the use of penalisation methods. Great instability in the selected features was also observed. 
Nevertheless, the models obtained reasonable F-measure (0.779–0.833) and Precision (0.821–0.873) values for the negative biochemical recurrence class during the training and validation phases, so these models may be worth exploring. The pre-biopsy models performed worse in training but suffered less from overfitting. The preoperative classifiers were overly optimistic, and the postoperative models were the best at correctly detecting negative biochemical recurrence cases. Other observed results include the superior performance of the image-based models that used the normalisation method computed from the prostate volume alone, and the unexpected finding that the SMOTE sampling method improved the classification of neither positive nor negative biochemical recurrence cases during model validation. Considering the selected variables and the priority sequence of the best Priority-LASSO models, we conclude that radiomic features derived from texture analysis of T2-weighted MRI, together with baseline prostate-specific antigen levels, may have potential to distinguish patients who will not suffer early biochemical recurrence in a pre-biopsy scenario. The inclusion of pre- or postoperative parameters did not add substantial value to the classification of positive biochemical recurrence cases alongside biparametric MRI radiomic variables. Highly powered studies will be needed to elucidate the role of bpMRI-based radiomic features as predictors of biochemical recurrence. Primary prostate cancer is often treated with radical prostatectomy (RP). Yet, 20–35% of males undergoing RP with curative intent will experience biochemical recurrence (BCR). 
Of those, two-thirds happen within two years, implying a more aggressive disease and poorer prognosis. Current BCR risk stratification tools are bound to biopsy- or surgery-derived histopathological evaluation, having limited ability for early treatment decision-making. Magnetic resonance imaging (MRI) is acquired as part of the diagnostic procedure, and imaging-derived features have shown promise in tumour characterisation and BCR prediction. We investigated the value of imaging features extracted from preoperative biparametric MRI (bpMRI), combined with clinico-histopathological data, to develop models to predict two-year post-prostatectomy BCR in three simulated clinical scenarios: pre-biopsy, pre- and postoperative. In a cohort of 20 BCR positive and 73 BCR negative RP-treated patients examined in the same institution, 33 clinico-histopathological variables were retrospectively collected, and 2715 radiomic features (based on intensity, shape and texture) were extracted from the whole-prostate volume imaged in original and filtered T2-weighted and Diffusion-weighted MRI scans and ADC maps. A systematic machine-learning framework comprised of classification, stratified k-fold cross validation and statistical analyses was developed to identify the top performing BCR classifiers' configurations within three clinical scenarios. LASSO and Priority-LASSO logistic regression algorithms were used for feature selection and model fitting, optimising the number of correctly classified BCR positive cases through AUC and F-score maximisation (Fmax) derived from ROC curve analysis. We also investigated the impact of two image normalisation methods and SMOTE-based minority oversampling on model performance. Priority-LASSO logistic regression with the four-block priority sequence Clinical, T2w, DWI, ADC, with block sparsity restriction pmax = (1,7,0,1), was selected as the best performing model configuration across all clinical scenarios, outperforming LASSO logistic regression models. 
During development and across the simulated clinical scenarios, top models achieved good median Fmax values (range: 0.702–0.754 and 0.910–0.925 for BCR positive and negative classes, respectively); yet, during validation with an independent set, the models obtained very low Fmax for the target BCR positive class (0.297–0.400), revealing model overfitting. We also observed instability in the selected features. However, models attained reasonably good F-score (0.779–0.833) and Precision (0.821–0.873) for the BCR negative class during training and validation phases, making these models worth exploring. Pre-biopsy models had lower performance in training but suffered less from overfitting. Preoperative classifiers were overoptimistic, and postoperative models were the most successful in detecting BCR negative cases. T2w-MRI texture-based radiomic features may have potential to distinguish negative BCR patients together with baseline prostate-specific antigen (PSA) levels in a pre-biopsy scenario. The inclusion of pre- or postoperative variables did not substantially add value to BCR positive case classification with bpMRI radiomic features. Highly powered studies with curated imaging data are needed to elucidate the role of bpMRI radiomic features as predictors of BCR.
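The Fmax metric used throughout the study can be made concrete with a short sketch: the F-score of the positive class, maximised over all classification thresholds swept along the ROC curve. The labels and scores below are synthetic illustrations, not data from the thesis.

```python
# Illustrative computation of Fmax: F-score maximised over thresholds.

def f_score(labels, scores, thr):
    """F-score of the positive class when predicting positive for score >= thr."""
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= thr)
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= thr)
    fn = sum(1 for l, s in zip(labels, scores) if l == 1 and s < thr)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def f_max(labels, scores):
    # Sweep every observed score as a candidate threshold.
    return max(f_score(labels, scores, t) for t in set(scores))
```

Unlike AUC, Fmax reflects performance at the single best operating point, which is why the thesis reports it separately for the positive and negative BCR classes.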
Nonlinear Bayesian filtering based on mixture of orthogonal expansions
This dissertation addresses the problem of parameter and state estimation of nonlinear dynamical systems and its applications to satellites in Low Earth Orbit. The main focus of Bayesian filtering methods is to recursively estimate the a posteriori probability density function of the state conditioned on the available measurements. The exact optimal solution to the nonlinear Bayesian filtering problem is intractable, as it requires knowledge of an infinite number of parameters. The Bayes probability distribution can be approximated by a mixture of orthogonal expansions of the probability density function in terms of higher-order moments of the distribution. In general, better series approximations to the Bayes distribution can be achieved using higher-order moment terms. However, the use of such density functions increases computational complexity, especially for multivariate systems. A mixture of orthogonally expanded probability density functions based on lower-order moment terms is therefore suggested to approximate the Bayes probability density function. The main novelty of this thesis is the development of new Bayes filtering algorithms based on single and mixture series using a Monte Carlo simulation approach. Furthermore, based on an earlier work by Culver [1] on an exact solution to Bayesian filtering based on a Taylor series and a third-order orthogonal expansion of the probability density function, a new filtering algorithm utilizing a mixture of orthogonal expansions for such density functions is derived. In this new extension, methods to compute the parameters of such finite mixture distributions are developed for optimal filtering performance. The results have shown better performance than other filtering methods, such as the Extended Kalman Filter and the Particle Filter, under sparse measurement availability. For qualitative and quantitative performance assessment, the filters have been simulated for orbit determination of a satellite through radar and Global Positioning System measurements, and for optical navigation of a lunar orbiter. 
This provides a new unified view on the use of orthogonally expanded probability density functions for nonlinear Bayesian filtering based on Taylor series and Monte Carlo simulations under sparse measurements. Another contribution of this work is an analysis of the impact of process noise in mathematical models of nonlinear dynamical systems. Analytical solutions of the nonlinear differential equations of motion exhibit different levels of time-varying process noise. An analysis of the process noise for Low Earth Orbit models is carried out using the Gauss-Legendre Differential Correction method. Furthermore, a new parameter estimation algorithm, based on linear least squares, has been developed for the Epicyclic orbit model of Hashida and Palmer [2]. The foremost contribution of this thesis is the concept of nonlinear Bayesian estimation based on a mixture of orthogonal expansions to improve estimation accuracy under sparse measurements. EThOS - Electronic Theses Online Service, United Kingdom
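The recursive predict/update cycle of Bayesian filtering is easiest to see in the bootstrap particle filter, one of the baseline methods the thesis compares against (not the orthogonal-expansion method itself). The scalar dynamics, noise levels and particle count below are illustrative assumptions.

```python
# Minimal 1D bootstrap particle filter: a Monte Carlo approximation of
# the recursive Bayesian estimate of p(state | measurements).
import math, random

def particle_filter(measurements, f, h, q, r, n=500, x0=0.0, seed=0):
    random.seed(seed)
    # Prior: particles spread around the initial guess.
    parts = [x0 + random.gauss(0, 1) for _ in range(n)]
    estimates = []
    for z in measurements:
        # Predict: propagate each particle through the (nonlinear) dynamics.
        parts = [f(x) + random.gauss(0, q) for x in parts]
        # Update: weight each particle by the measurement likelihood p(z | x).
        w = [math.exp(-0.5 * ((z - h(x)) / r) ** 2) for x in parts]
        tot = sum(w) or 1.0
        w = [wi / tot for wi in w]
        # Posterior mean estimate, then multinomial resampling.
        estimates.append(sum(wi * x for wi, x in zip(w, parts)))
        parts = random.choices(parts, weights=w, k=n)
    return estimates
```

An orthogonal-expansion filter replaces the raw particle cloud with a low-order series (or mixture of series) representation of the posterior, which is the compression the thesis studies.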
Optimization Of Zonal Wavefront Estimation And Curvature Measurements
Optical testing in adverse environments, ophthalmology and applications where characterization by curvature is leveraged all have a common goal: accurately estimate wavefront shape. This dissertation investigates wavefront sensing techniques as applied to optical testing based on gradient and curvature measurements. Wavefront sensing involves the ability to accurately estimate shape over any aperture geometry, which requires establishing a sampling grid and estimation scheme, quantifying estimation errors caused by measurement noise propagation, and designing an instrument with sufficient accuracy and sensitivity for the application. Starting with gradient-based wavefront sensing, a zonal least-squares wavefront estimation algorithm for any irregular pupil shape and size is presented, for which the normal matrix equation sets share a pre-defined matrix. A Gerchberg–Saxton iterative method is employed to reduce the deviation errors in the estimated wavefront caused by the pre-defined matrix across discontinuous boundaries. The results show that the RMS deviation error of the estimated wavefront from the original wavefront can be less than λ/130 to λ/150 (for λ = 632.8 nm) after about twelve iterations and less than λ/100 after as few as four iterations. The presented approach to handling irregular pupil shapes applies equally well to wavefront estimation from curvature data. A defining characteristic of a wavefront estimation algorithm is its error propagation behavior. The error propagation coefficient can be formulated as a function of the eigenvalues of the wavefront estimation-related matrices, and such functions are established for each of the basic estimation geometries (i.e. Fried, Hudgin and Southwell) with a serial numbering scheme, where a square sampling grid array is sequentially indexed row by row. 
The results show that with the wavefront piston-value fixed, the odd-number grid sizes yield lower error propagation than the even-number grid sizes for all geometries. The Fried geometry either allows sub-sized wavefront estimations within the testing domain or yields a two-rank deficient estimation matrix over the full aperture; but the latter usually suffers from high error propagation and the waffle mode problem. Hudgin geometry offers an error propagator between those of the Southwell and the Fried geometries. For both wavefront gradient-based and wavefront difference-based estimations, the Southwell geometry is shown to offer the lowest error propagation with the minimum-norm least-squares solution. Noll’s theoretical result, which was extensively used as a reference in the previous literature for error propagation estimate, corresponds to the Southwell geometry with an odd-number grid size. For curvature-based wavefront sensing, a concept for a differential Shack-Hartmann (DSH) curvature sensor is proposed. This curvature sensor is derived from the basic Shack-Hartmann sensor with the collimated beam split into three output channels, along each of which a lenslet array is located. Three Hartmann grid arrays are generated by three lenslet arrays. Two of the lenslets shear in two perpendicular directions relative to the third one. By quantitatively comparing the Shack-Hartmann grid coordinates of the three channels, the differentials of the wavefront slope at each Shack-Hartmann grid point can be obtained, so the Laplacian curvatures and twist terms will be available. The acquisition of the twist terms using a Hartmann-based sensor allows us to uniquely determine the principal curvatures and directions more accurately than prior methods. Measurement of local curvatures as opposed to slopes is unique because curvature is intrinsic to the wavefront under test, and it is an absolute as opposed to a relative measurement. 
A zonal least-squares-based wavefront estimation algorithm was developed to estimate the wavefront shape from the Laplacian curvature data, and validated. An implementation of the DSH curvature sensor is proposed and an experimental system for this implementation was initiated. The DSH curvature sensor shares the important features of both the Shack-Hartmann slope sensor and Roddier’s curvature sensor. It is a two-dimensional parallel curvature sensor. Because it is a curvature sensor, it provides absolute measurements which are thus insensitive to vibrations, tip/tilts, and whole body movements. Because it is a two-dimensional sensor, it does not suffer from other sources of errors, such as scanning noise. Combined with sufficient sampling and a zonal wavefront estimation algorithm, both low and mid frequencies of the wavefront may be recovered. Notice that the DSH curvature sensor operates at the pupil of the system under test, therefore the difficulty associated with operation close to the caustic zone is avoided. Finally, the DSH-curvature-sensor-based wavefront estimation does not suffer from the 2π-ambiguity problem, so potentially both small and large aberrations may be measured
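The Southwell geometry mentioned above links neighbouring phase samples through averaged slopes: w[i][j+1] - w[i][j] = d*(sx[i][j] + sx[i][j+1])/2, and similarly in the vertical direction. The sketch below solves the resulting least-squares normal equations by plain Gauss-Seidel relaxation; the grid size, spacing, iteration count and zero-mean piston fix are illustrative choices, not the dissertation's solver.

```python
# Hedged sketch of zonal wavefront estimation from x/y slope data on a
# Southwell grid (phases and slopes share the same sample points).

def southwell_estimate(sx, sy, d=1.0, iters=2000):
    n = len(sx)
    w = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                vals = []
                if j + 1 < n:   # value implied by the right neighbour
                    vals.append(w[i][j + 1] - d * (sx[i][j] + sx[i][j + 1]) / 2)
                if j - 1 >= 0:  # ...by the left neighbour
                    vals.append(w[i][j - 1] + d * (sx[i][j - 1] + sx[i][j]) / 2)
                if i + 1 < n:   # ...by the lower neighbour
                    vals.append(w[i + 1][j] - d * (sy[i][j] + sy[i + 1][j]) / 2)
                if i - 1 >= 0:  # ...by the upper neighbour
                    vals.append(w[i - 1][j] + d * (sy[i - 1][j] + sy[i][j]) / 2)
                w[i][j] = sum(vals) / len(vals)  # Gauss-Seidel sweep
    # Piston (the constant mode) is unobservable from slopes: zero-mean it.
    mean = sum(sum(row) for row in w) / n ** 2
    return [[v - mean for v in row] for row in w]
```

Curvature-based estimation replaces the slope constraints with Laplacian ones but keeps the same zonal least-squares structure.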
The Global sphere reconstruction (GSR) - Demonstrating an independent implementation of the astrometric core solution for Gaia
Context. The Gaia ESA mission will estimate the astrometric and physical data
of more than one billion objects, providing the largest and most precise
catalog of absolute astrometry in the history of Astronomy. The core of this
process, the so-called global sphere reconstruction, is represented by the
reduction of a subset of these objects which will be used to define the
celestial reference frame. As the Hipparcos mission showed, and as is inherent
to all kinds of absolute measurements, possible errors in the data reduction
can hardly be identified from the catalog, thus potentially introducing
systematic errors in all derived work. Aims. Following up on the lessons
learned from Hipparcos, our aim is thus to develop an independent sphere
reconstruction method that contributes to guarantee the quality of the
astrometric results without fully reproducing the main processing chain.
Methods. Indeed, given the unfeasibility of a complete replica of the data
reduction pipeline, an astrometric verification unit (AVU) was instituted by
the Gaia Data Processing and Analysis Consortium (DPAC). One of its jobs is to
implement and operate an independent global sphere reconstruction (GSR),
parallel to the baseline one (AGIS, namely Astrometric Global Iterative
Solution) but limited to the primary stars and for validation purposes, to
compare the two results, and to report on any significant differences. Results.
Tests performed on simulated data show that GSR is able to reproduce at the
sub-as level the results of the AGIS demonstration run presented in
Lindegren et al. (2012). Conclusions. Further development is ongoing to improve
on the treatment of real data and on the software modules that compare the AGIS
and GSR solutions to identify possible discrepancies above the tolerance level
set by the accuracy of the Gaia catalog. Comment: Accepted for publication in Astronomy & Astrophysics.
Adaptive filtering applications to satellite navigation
PhD. Differential Global Navigation Satellite Systems employ the extended Kalman filter to estimate the reference position error. High accuracy integrated navigation systems have the ability to mix traditional inertial sensor outputs with navigation satellite based position information and can be used to develop high accuracy landing systems for aircraft.
This thesis considers a host of estimation problems associated with aircraft navigation systems that currently rely on the extended Kalman filter and proposes to use a nonlinear estimation algorithm, the unscented Kalman filter (UKF) that does not rely on Jacobian linearisation. The objective is to develop high accuracy positioning algorithms to facilitate the use of GNSS or DGNSS for aircraft landing. Firstly, the position error in a typical satellite navigation problem depends on the accuracy of the orbital ephemeris. The thesis presents results for the prediction of the orbital ephemeris from a customised navigation satellite receiver's data message. The SDP4/SDP8 algorithms and suitable noise models are used to establish the measured data. Secondly, the differential station common mode position error not including the contribution due to errors in the ephemeris is usually estimated by employing an EKF. The thesis then considers the application of the UKF to the mixing problem, so as to facilitate the mixing of measurements made by either a GNSS or a DGNSS and a variety of low cost or high-precision INS sensors.
Precise, adaptive UKFs and a suitable nonlinear propagation method are used to estimate the orbit ephemeris and the differential position and the navigation filter mixing errors. The results indicate the method is particularly suitable for estimating the orbit ephemeris of navigation satellites and the differential position and navigation filter mixing errors, thus facilitating interoperable DGNSS operation for aircraft landing
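The key device that lets the UKF avoid Jacobian linearisation is the unscented transform: a small, deterministically chosen set of sigma points is propagated through the nonlinearity, and the output mean and variance are recovered from weighted sums. The 1D sketch below uses the standard kappa-scaled formulation; the example nonlinearity is an illustrative assumption, not the thesis's navigation model.

```python
# 1D unscented transform: sigma points instead of a Jacobian.
import math

def unscented_transform(mean, var, f, kappa=2.0):
    n = 1                                  # state dimension in this sketch
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)               # centre-point weight
    wi = 1.0 / (2 * (n + kappa))           # symmetric-point weights
    weights = [w0, wi, wi]
    ys = [f(x) for x in sigmas]            # propagate through nonlinearity
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var
```

For a linear map the transform is exact, and for smooth nonlinearities it captures the mean and variance to higher order than a first-order Taylor (EKF-style) linearisation, which is why it suits the orbit propagation and mixing problems described above.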