
    Optimization of the motion estimation for parallel embedded systems in the context of new video standards

    The efficiency of video compression methods mainly depends on the motion compensation stage, and the design of efficient motion estimation techniques is still an important issue. A highly accurate motion estimation can significantly reduce the bit-rate, but involves a high computational complexity. This is particularly true for new generations of video compression standards, MPEG AVC and HEVC, which involve techniques such as multiple reference frames, sub-pixel estimation, and variable block sizes. In this context, the design of fast motion estimation solutions is necessary, and concerns two linked aspects: a high-quality algorithm and its efficient implementation. This paper summarizes our main contributions in this domain. In particular, we first present the HME (Hierarchical Motion Estimation) technique. It is based on a multi-level refinement process where the motion vectors are first estimated on a sub-sampled image. The multi-level decomposition provides robust predictions and is particularly suited to variable block size motion estimation. The HME method has been integrated into an AVC encoder, and we propose a parallel implementation of this technique, with the pixel-level motion estimation performed by a DSP processor and the sub-pixel refinement realized in an FPGA. The second technique that we present is called HDS, for Hierarchical Diamond Search. It combines the multi-level refinement of HME with a fast pixel-accuracy search inspired by the EPZS method. This paper also presents its parallel implementation on a multi-DSP platform and its use in the HEVC context.
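
    A minimal sketch of the coarse-to-fine idea behind HME follows: motion vectors are first found by a small full search on the most sub-sampled image, then scaled by two and locally refined level by level. Function names, block size, and search radius are illustrative assumptions, not the authors' implementation.

        # Sketch of hierarchical (coarse-to-fine) block motion estimation.
        import numpy as np

        def sad(block, cand):
            """Sum of absolute differences between two equally sized blocks."""
            return np.abs(block.astype(np.float64) - cand.astype(np.float64)).sum()

        def downsample(img):
            """2x sub-sampling by averaging 2x2 neighbourhoods."""
            h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
            img = img[:h, :w].astype(np.float64)
            return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

        def search(cur, ref, y, x, bs, vy, vx, radius):
            """Full search in a small window around the predicted vector (vy, vx)."""
            best, best_v = None, (vy, vx)
            block = cur[y:y+bs, x:x+bs]
            for dy in range(vy - radius, vy + radius + 1):
                for dx in range(vx - radius, vx + radius + 1):
                    ry, rx = y + dy, x + dx
                    if 0 <= ry and ry + bs <= ref.shape[0] and 0 <= rx and rx + bs <= ref.shape[1]:
                        cost = sad(block, ref[ry:ry+bs, rx:rx+bs])
                        if best is None or cost < best:
                            best, best_v = cost, (dy, dx)
            return best_v

        def hme(cur, ref, bs=16, levels=3, radius=3):
            """Estimate one motion vector per bs x bs block, coarse to fine."""
            curs, refs = [cur], [ref]
            for _ in range(levels - 1):                  # build image pyramids
                curs.append(downsample(curs[-1]))
                refs.append(downsample(refs[-1]))
            lvl = levels - 1                             # coarsest level: search from (0, 0)
            field = {}
            for y in range(0, curs[lvl].shape[0] - bs + 1, bs):
                for x in range(0, curs[lvl].shape[1] - bs + 1, bs):
                    field[(y // bs, x // bs)] = search(curs[lvl], refs[lvl], y, x, bs, 0, 0, radius)
            for lvl in range(levels - 2, -1, -1):        # refine down the pyramid
                new_field = {}
                for y in range(0, curs[lvl].shape[0] - bs + 1, bs):
                    for x in range(0, curs[lvl].shape[1] - bs + 1, bs):
                        vy, vx = field.get((y // bs // 2, x // bs // 2), (0, 0))
                        new_field[(y // bs, x // bs)] = search(
                            curs[lvl], refs[lvl], y, x, bs, 2 * vy, 2 * vx, radius)
                field = new_field
            return field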

    ICASE/LaRC Workshop on Adaptive Grid Methods

    Solution-adaptive grid techniques are essential to the attainment of practical, user-friendly computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.

    Advances in video motion analysis research for mature and emerging application areas


    Block matching algorithm for motion estimation based on Artificial Bee Colony (ABC)

    Block matching (BM) motion estimation plays a very important role in video coding. In a BM approach, image frames in a video sequence are divided into blocks. For each block in the current frame, the best matching block is identified inside a region of the previous frame, aiming to minimize the sum of absolute differences (SAD). Unfortunately, the SAD evaluation is computationally expensive and represents the most time-consuming operation in the BM process. Therefore, BM motion estimation can be approached as an optimization problem, where the goal is to find the best matching block within a search space. The simplest available BM method is the full search algorithm (FSA), which finds the most accurate motion vector through an exhaustive computation of SAD values for all elements of the search window. Recently, several fast BM algorithms have been proposed that reduce the number of SAD operations by calculating only a fixed subset of search locations, at the price of lower accuracy. In this paper, a new algorithm based on Artificial Bee Colony (ABC) optimization is proposed to reduce the number of search locations in the BM process. In our algorithm, the computation of search locations is drastically reduced by a fitness calculation strategy that indicates when it is feasible to calculate or merely estimate new search locations. Since the proposed algorithm does not assume any fixed search pattern or any other movement model, as most other BM approaches do, a high probability of finding the true minimum (the accurate motion vector) is expected. Simulations show that the proposed method achieves the best balance among fast BM algorithms in terms of both estimation accuracy and computational cost. Comment: 22 pages. arXiv admin note: substantial text overlap with arXiv:1405.4721, arXiv:1406.448
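
    The fitness-approximation strategy described above can be sketched as follows: an exact SAD is computed only when no previously evaluated vector lies nearby; otherwise the stored cost of an already-visited location serves as a cheap estimate. The distance threshold and names are illustrative assumptions, not the paper's exact scheme.

        # Sketch of SAD fitness with approximation for nearby candidates.
        import numpy as np

        def sad(cur, ref, y, x, dy, dx, bs=16):
            """Exact SAD between the current block and the displaced reference block."""
            a = cur[y:y+bs, x:x+bs].astype(np.int32)
            b = ref[y+dy:y+dy+bs, x+dx:x+dx+bs].astype(np.int32)
            return np.abs(a - b).sum()

        def fitness(candidate, evaluated, cur, ref, y, x, near=2):
            """Return (cost, exact?) for a candidate motion vector.

            'evaluated' maps already-visited vectors to their exact SAD. If a
            previously evaluated vector lies within 'near' pixels, its cost is
            reused as a cheap estimate instead of computing a new SAD."""
            dy, dx = candidate
            for (ey, ex), cost in evaluated.items():
                if abs(ey - dy) <= near and abs(ex - dx) <= near:
                    return cost, False          # estimated, no SAD computed
            cost = sad(cur, ref, y, x, dy, dx)  # exact evaluation
            evaluated[candidate] = cost
            return cost, True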

    Block matching algorithm based on Harmony Search optimization for motion estimation

    Motion estimation is one of the major problems in developing video coding applications. Among all motion estimation approaches, block-matching (BM) algorithms are the most popular methods due to their effectiveness and simplicity for both software and hardware implementations. A BM approach assumes that the movement of pixels within a defined region of the current frame can be modeled as a translation of pixels contained in the previous frame. In this procedure, the motion vector is obtained by minimizing a matching metric computed for the current frame over a determined search window from the previous frame. Unfortunately, evaluating this matching measurement is computationally expensive and represents the most time-consuming operation in the BM process. Therefore, BM motion estimation can be viewed as an optimization problem whose goal is to find the best-matching block within a search space. The simplest available BM method is the Full Search Algorithm (FSA), which finds the most accurate motion vector through an exhaustive computation over all the elements of the search space. Recently, several fast BM algorithms have been proposed that reduce the search positions by calculating only a fixed subset of motion vectors, at the cost of lower accuracy. On the other hand, the Harmony Search (HS) algorithm is a population-based optimization method inspired by the music improvisation process, in which a musician searches for harmony and continues to polish the pitches to obtain a better harmony. In this paper, a new BM algorithm that combines HS with a fitness approximation model is proposed. The approach uses motion vectors belonging to the search window as potential solutions. A fitness function evaluates the matching quality of each motion vector candidate. Comment: 25 pages. arXiv admin note: substantial text overlap with arXiv:1405.472
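
    A hedged sketch of the Harmony Search improvisation step over motion-vector candidates follows; the HMCR and PAR values, memory size, and the cost callback (e.g. a SAD evaluation) are illustrative assumptions rather than the paper's exact configuration.

        # Sketch of Harmony Search over integer motion-vector candidates.
        import random

        def improvise(memory, search_range, hmcr=0.9, par=0.3, bw=1):
            """Create one new (dy, dx) candidate from harmony memory."""
            new = []
            for i in range(2):  # dy, then dx
                if random.random() < hmcr:
                    v = random.choice(memory)[i]        # memory consideration
                    if random.random() < par:
                        v += random.randint(-bw, bw)    # pitch adjustment
                else:
                    v = random.randint(-search_range, search_range)  # random pick
                new.append(max(-search_range, min(search_range, v)))
            return tuple(new)

        def harmony_search(cost, search_range=7, hms=8, iters=50):
            """Minimise cost(vector), e.g. a SAD evaluation, with Harmony Search."""
            memory = [(random.randint(-search_range, search_range),
                       random.randint(-search_range, search_range)) for _ in range(hms)]
            scores = {v: cost(v) for v in memory}
            for _ in range(iters):
                cand = improvise(memory, search_range)
                c = scores[cand] if cand in scores else cost(cand)
                worst = max(memory, key=lambda v: scores[v])
                if c < scores[worst]:                   # replace the worst harmony
                    memory[memory.index(worst)] = cand
                    scores[cand] = c
            return min(memory, key=lambda v: scores[v])

        print(harmony_search(lambda v: abs(v[0] - 3) + abs(v[1] + 2)))  # typically (3, -2)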

    Prostate cancer biochemical recurrence prediction using bpMRI radiomics, clinical and histopathological data

    Master's thesis (mestrado integrado) in Biomedical Engineering and Biophysics (Medical Signals and Images), Universidade de Lisboa, Faculdade de Ciências, 2021. Prostate cancer is the second most frequent oncological disease in men and is often treated by total surgical removal of the organ, called radical prostatectomy. Despite advances in diagnosis and the evolution of surgical therapies, 20–35% of candidates for radical prostatectomy with curative intent suffer biochemical recurrence, a condition that represents the failure of the initial treatment and also the first sign of disease progression. In particular, two thirds of biochemical recurrence cases occur within a period of two years. When it occurs early, this state implies greater biological aggressiveness of the disease and a worse prognosis, since it may be due to the presence of occult, locally advanced, or metastatic disease. Although the prognosis after biochemical recurrence varies, it is generally associated with an increased risk of developing metastatic disease and of prostate-cancer-specific mortality, and thus represents an important clinical concern after definitive therapy. However, current predictive models of biochemical recurrence not only fail to explain the variability of post-surgical outcomes, but are also unable to support early treatment decision-making, since they depend on information from the histopathological evaluation of the prostatectomy specimen or the biopsy. Currently, the standard examination for diagnosis and staging of prostate cancer is multiparametric magnetic resonance imaging, and features derived from the evaluation of these images have shown potential for characterising the tumour(s) and for predicting biochemical recurrence. Radiomics, the recent methodology for quantitative analysis of medical images, has shown the capacity to objectively quantify the macroscopic heterogeneity of biological tissues such as tumours. This detected heterogeneity has been suggested to be associated with genomic heterogeneity, which in turn has been correlated with treatment resistance and metastatic propensity. However, the potential of radiomic analysis of multiparametric magnetic resonance imaging (MRI) of the prostate for predicting biochemical recurrence after radical prostatectomy has not yet been fully explored. This dissertation set out to explore the potential of radiomic analysis applied to pre-surgical biparametric MRI of the prostate for predicting biochemical recurrence within two years of radical prostatectomy. This potential was assessed through predictive models based on radiomic data and on clinico-histopathological parameters commonly acquired at three clinical stages: pre-biopsy, pre-surgical, and post-surgical. Of a total of 250 patients, 93 were eligible for this retrospective study, of whom 20 experienced biochemical recurrence. 33 clinico-histopathological parameters were collected, and 2715 radiomic features based on intensity, shape, and texture were extracted from the whole prostate volume in original and filtered biparametric MRI, namely T2-weighted and diffusion-weighted images and apparent diffusion coefficient (ADC) maps.
Although the eligible patients were examined at the same institution, the characteristics of the image set were heterogeneous, and several processing steps were needed to allow a fairer comparison. These included bias field correction and manual segmentation of the T2-weighted images; registration, both to transpose the volume-of-interest delineations between imaging modalities and for motion correction; computation of ADC maps; field-of-view regularisation; custom grey-level quantisation; and resampling. Since the collected data were high-dimensional (more variables than observations), logistic regression with L1 penalisation (LASSO) was chosen to address the classification problem. Combining penalisation with logistic regression, a simple and commonly used method in classification studies, prevents the overfitting that is likely in this high-dimensional scenario. Besides the popular LASSO, we also used the Priority-LASSO algorithm, a recent method built on LASSO for handling omics data. Priority-LASSO is based on defining a hierarchy, or priority, among the study variables by grouping them into sequential blocks. In this work we explored two ways of grouping the variables (clinico-histopathological vs. radiomic, and clinico-histopathological vs. T2 vs. diffusion vs. ADC). In addition, we wanted to understand the impact of the order of these blocks on model performance; to that end, we tested all possible block permutations (2 and 24, respectively) in each case. A machine-learning framework comprising classification methods, repeated stratified k-fold cross-validation, and statistical analyses was thus developed to identify the best classifiers within the set of configurations tested for each of the three simulated clinical scenarios. The LASSO-penalised logistic regression and Priority-LASSO algorithms jointly performed feature selection and model fitting. The models were developed to optimise the number of correctly identified positive biochemical recurrence cases by maximising the area under the curve (AUC) and F-measure (Fmax) metrics derived from receiver operating characteristic (ROC) curve analysis. Besides comparing the Priority-LASSO implementations with the case without variable grouping (i.e. LASSO), two image normalisation methods were also compared on the basis of model performance (assessed by Fmax): one considered the intensity signal from the prostate and immediately surrounding tissues, the other from the prostate only. In parallel, the effect on model performance of the SMOTE sampling method, which balances the number of positive and negative cases during the algorithm's learning process, was also assessed; with this method, we generated synthetic cases of the positive (minority) biochemical recurrence class from the existing cases. The Priority-LASSO logistic regression model with the block sequence clinico-histopathological, T2, diffusion, ADC and with per-block sparsity restriction pmax = (1,7,0,1) was selected as the best configuration in each of the tested clinical scenarios, outperforming the LASSO logistic regression models.
During model development, and in all clinical scenarios, the best-performing models achieved good mean Fmax values (minimum–maximum: 0.702–0.754 and 0.910–0.925 for the positive and negative biochemical recurrence classes, respectively). However, in the final validation with an independent data set, the models obtained very low Fmax values for the positive class (0.297–0.400), revealing overfitting despite the use of penalisation methods. Considerable instability in the selected features was also observed. Nevertheless, the models achieved reasonable F-measure (0.779–0.833) and precision (0.821–0.873) values for the negative biochemical recurrence class during the training and validation phases, so these models may be worth exploring. The pre-biopsy models performed worse in training but suffered less from overfitting. The preoperative classifiers were overly optimistic, and the postoperative models were the best at correctly detecting negative biochemical recurrence cases. Other observed results include the superior performance of the image-based models that used the normalisation method computed from the prostate volume only, and the unexpected finding that the SMOTE sampling method improved the classification of neither positive nor negative biochemical recurrence cases during model validation. Given the selected variables and the priority sequence of the best Priority-LASSO models, we conclude that radiomic features derived from texture analysis of T2-weighted MR images, together with baseline prostate-specific antigen levels, may have potential to distinguish patients who will not suffer early biochemical recurrence in a pre-biopsy scenario. The inclusion of pre- or postoperative parameters did not add substantial value to the classification of positive biochemical recurrence cases in combination with biparametric MRI radiomic features. Highly powered studies will be needed to elucidate the role of bpMRI-based radiomic features as predictors of biochemical recurrence. Primary prostate cancer is often treated with radical prostatectomy (RP). Yet, 20–35% of males undergoing RP with curative intent will experience biochemical recurrence (BCR). Of those, two-thirds happen within two years, implying a more aggressive disease and poorer prognosis. Current BCR risk stratification tools are bound to biopsy- or surgery-derived histopathological evaluation, having limited ability for early treatment decision-making. Magnetic resonance imaging (MRI) is acquired as part of the diagnostic procedure, and imaging-derived features have shown promise in tumour characterisation and BCR prediction. We investigated the value of imaging features extracted from preoperative biparametric MRI (bpMRI), combined with clinico-histopathological data, to develop models that predict two-year post-prostatectomy BCR in three simulated clinical scenarios: pre-biopsy, preoperative, and postoperative. In a cohort of 20 BCR-positive and 73 BCR-negative RP-treated patients examined at the same institution, 33 clinico-histopathological variables were retrospectively collected, and 2715 radiomic features (based on intensity, shape, and texture) were extracted from the whole-prostate volume imaged in original and filtered T2- and diffusion-weighted MRI and ADC map scans.
    A systematic machine-learning framework comprising classification, stratified k-fold cross-validation, and statistical analyses was developed to identify the top-performing BCR classifier configurations within the three clinical scenarios. LASSO and Priority-LASSO logistic regression algorithms were used for feature selection and model fitting, optimising the number of correctly classified BCR-positive cases through AUC and F-score maximisation (Fmax) derived from ROC curve analysis. We also investigated the impact of two image normalisation methods and of SMOTE-based minority oversampling on model performance. Priority-LASSO logistic regression with the four-block priority sequence Clinical, T2w, DWI, ADC and block sparsity restriction pmax = (1,7,0,1) was selected as the best-performing model configuration across all clinical scenarios, outperforming LASSO logistic regression models. During development and across the simulated clinical scenarios, top models achieved good median Fmax values (range: 0.702–0.754 and 0.910–0.925 for the BCR-positive and -negative classes, respectively); yet, during validation with an independent set, the models obtained very low Fmax for the target BCR-positive class (0.297–0.400), revealing model overfitting. We also observed instability in the selected features. However, the models attained reasonably good F-score (0.779–0.833) and precision (0.821–0.873) for the BCR-negative class during the training and validation phases, making them worth exploring. Pre-biopsy models had lower performance in training but suffered less from overfitting. Preoperative classifiers were overoptimistic, and postoperative models were the most successful in detecting BCR-negative cases. T2w-MRI texture-based radiomic features, together with baseline prostate-specific antigen (PSA) levels, may have potential to distinguish BCR-negative patients in a pre-biopsy scenario. The inclusion of pre- or postoperative variables did not substantially add value to the classification of BCR-positive cases with bpMRI radiomic features. Highly powered studies with curated imaging data are needed to elucidate the role of bpMRI radiomic features as predictors of BCR.
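
    A minimal sketch of the kind of pipeline described above, using scikit-learn and imbalanced-learn: L1-penalised (LASSO) logistic regression with SMOTE oversampling inside repeated stratified k-fold cross-validation. The feature matrix and labels are random placeholders matching the cohort's dimensions, and Priority-LASSO itself (an R package) is not reproduced here.

        # Sketch: LASSO logistic regression + SMOTE under repeated stratified CV.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
        from sklearn.preprocessing import StandardScaler
        from imblearn.over_sampling import SMOTE
        from imblearn.pipeline import Pipeline  # applies SMOTE only to training folds

        rng = np.random.default_rng(0)
        X = rng.normal(size=(93, 2748))          # 33 clinical + 2715 radiomic features
        y = rng.permutation(np.r_[np.ones(20), np.zeros(73)]).astype(int)  # 20 BCR+

        model = Pipeline([
            ("scale", StandardScaler()),
            ("smote", SMOTE(random_state=0)),    # oversample the minority (BCR+) class
            ("lasso", LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
        ])

        cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
        f1 = cross_val_score(model, X, y, scoring="f1", cv=cv)
        print(f"F-score for the positive class: {f1.mean():.3f} +/- {f1.std():.3f}")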

    Nonlinear Bayesian filtering based on mixture of orthogonal expansions

    This dissertation addresses the problem of parameter and state estimation of nonlinear dynamical systems and its applications to satellites in Low Earth Orbit. The main focus of Bayesian filtering methods is to recursively estimate the a posteriori probability density function of the state, conditioned on the available measurements. An exact optimal solution to the nonlinear Bayesian filtering problem is intractable, as it requires knowledge of an infinite number of parameters. The Bayes probability distribution can be approximated by a mixture of orthogonal expansions of the probability density function in terms of higher-order moments of the distribution. In general, better series approximations to the Bayes distribution can be achieved using higher-order moment terms; however, the use of such density functions increases computational complexity, especially for multivariate systems. A mixture of orthogonally expanded probability density functions based on lower-order moment terms is suggested to approximate the Bayes probability density function. The main novelty of this thesis is the development of new Bayesian filtering algorithms based on single and mixture series using a Monte Carlo simulation approach. Furthermore, based on earlier work by Culver [1] on an exact solution to Bayesian filtering based on a Taylor series and a third-order orthogonal expansion of the probability density function, a new filtering algorithm utilizing a mixture of orthogonal expansions of such density functions is derived. In this new extension, methods to compute the parameters of such finite mixture distributions are developed for optimal filtering performance. The results have shown better performance than other filtering methods, such as the Extended Kalman Filter and the Particle Filter, under sparse measurement availability. For qualitative and quantitative performance assessment, the filters have been simulated for orbit determination of a satellite through radar and Global Positioning System measurements, and for optical navigation of a lunar orbiter. This provides a new unified view of the use of orthogonally expanded probability density functions for nonlinear Bayesian filtering based on Taylor series and Monte Carlo simulations under sparse measurements. Another contribution of this work is an analysis of the impact of process noise in mathematical models of nonlinear dynamical systems. Analytical solutions of the nonlinear differential equations of motion exhibit different levels of time-varying process noise, and an analysis of the process noise for Low Earth Orbit models is carried out using the Gauss-Legendre Differential Correction method. Furthermore, a new parameter estimation algorithm, based on linear least squares, has been developed for the epicyclic orbits of Hashida and Palmer [2]. The foremost contribution of this thesis is the concept of nonlinear Bayesian estimation based on mixtures of orthogonal expansions to improve estimation accuracy under sparse measurements. EThOS - Electronic Theses Online Service, United Kingdom.
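
    The mixture-of-orthogonal-expansions filters developed in the thesis are not reproduced here; as a point of reference, below is a minimal bootstrap particle filter of the kind the thesis compares against. The scalar dynamics, measurement model, and noise levels are illustrative assumptions.

        # Sketch: bootstrap particle filter (predict, weight, resample).
        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: x + 0.1 * np.sin(x)      # nonlinear state transition
        h = lambda x: x**2 / 20.0              # nonlinear measurement model
        q, r, n = 0.1, 0.5, 1000               # process/measurement std, particle count

        particles = rng.normal(0.0, 1.0, n)    # samples from the initial prior
        for z in [0.05, 0.2, 0.9]:             # a few (sparse) measurements
            particles = f(particles) + rng.normal(0.0, q, n)      # predict
            w = np.exp(-0.5 * ((z - h(particles)) / r) ** 2)      # Gaussian likelihood
            w /= w.sum()
            particles = particles[rng.choice(n, size=n, p=w)]     # resample
            print(f"z={z:.2f}  posterior mean={particles.mean():+.3f}")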

    Optimization Of Zonal Wavefront Estimation And Curvature Measurements

    Optical testing in adverse environments, ophthalmology, and applications where characterization by curvature is leveraged all have a common goal: to accurately estimate wavefront shape. This dissertation investigates wavefront sensing techniques applied to optical testing based on gradient and curvature measurements. Wavefront sensing involves the ability to accurately estimate shape over any aperture geometry, which requires establishing a sampling grid and estimation scheme, quantifying estimation errors caused by measurement noise propagation, and designing an instrument with sufficient accuracy and sensitivity for the application. Starting with gradient-based wavefront sensing, a zonal least-squares wavefront estimation algorithm for any irregular pupil shape and size is presented, in which the normal matrix equation sets share a pre-defined matrix. A Gerchberg–Saxton iterative method is employed to reduce the deviation errors in the estimated wavefront caused by the pre-defined matrix across discontinuous boundaries. The results show that the RMS deviation error of the estimated wavefront from the original wavefront can be less than λ/130–λ/150 (for λ = 632.8 nm) after about twelve iterations, and less than λ/100 after as few as four iterations. The presented approach to handling irregular pupil shapes applies equally well to wavefront estimation from curvature data. A defining characteristic of a wavefront estimation algorithm is its error propagation behavior. The error propagation coefficient can be formulated as a function of the eigenvalues of the wavefront-estimation-related matrices, and such functions are established for each of the basic estimation geometries (i.e. Fried, Hudgin, and Southwell) with a serial numbering scheme, where a square sampling grid array is sequentially indexed row by row. The results show that, with the wavefront piston value fixed, odd-numbered grid sizes yield lower error propagation than even-numbered grid sizes for all geometries. The Fried geometry either allows sub-sized wavefront estimation within the testing domain or yields a two-rank-deficient estimation matrix over the full aperture; the latter usually suffers from high error propagation and the waffle-mode problem. The Hudgin geometry's error propagation lies between those of the Southwell and Fried geometries. For both wavefront gradient-based and wavefront difference-based estimation, the Southwell geometry is shown to offer the lowest error propagation with the minimum-norm least-squares solution. Noll's theoretical result, which was extensively used as a reference for error propagation estimates in the earlier literature, corresponds to the Southwell geometry with an odd-numbered grid size. For curvature-based wavefront sensing, a concept for a differential Shack-Hartmann (DSH) curvature sensor is proposed. This curvature sensor is derived from the basic Shack-Hartmann sensor, with the collimated beam split into three output channels, along each of which a lenslet array is located. Three Hartmann grid arrays are generated by the three lenslet arrays; two of the lenslet arrays are sheared in two perpendicular directions relative to the third. By quantitatively comparing the Shack-Hartmann grid coordinates of the three channels, the differentials of the wavefront slope at each Shack-Hartmann grid point can be obtained, so the Laplacian curvatures and twist terms become available.
    The acquisition of the twist terms using a Hartmann-based sensor allows the principal curvatures and directions to be determined uniquely and more accurately than with prior methods. Measuring local curvatures rather than slopes is attractive because curvature is intrinsic to the wavefront under test, and it is an absolute rather than a relative measurement. A zonal least-squares-based wavefront estimation algorithm was developed to estimate the wavefront shape from the Laplacian curvature data, and validated. An implementation of the DSH curvature sensor is proposed, and an experimental system for this implementation was initiated. The DSH curvature sensor shares the important features of both the Shack-Hartmann slope sensor and Roddier's curvature sensor. It is a two-dimensional parallel curvature sensor. Because it is a curvature sensor, it provides absolute measurements that are insensitive to vibrations, tip/tilt, and whole-body movements. Because it is a two-dimensional sensor, it does not suffer from other sources of error, such as scanning noise. Combined with sufficient sampling and a zonal wavefront estimation algorithm, both low and mid spatial frequencies of the wavefront may be recovered. Note that the DSH curvature sensor operates at the pupil of the system under test, so the difficulty associated with operation close to the caustic zone is avoided. Finally, DSH-curvature-sensor-based wavefront estimation does not suffer from the 2π-ambiguity problem, so potentially both small and large aberrations may be measured.
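
    A minimal sketch of zonal least-squares wavefront estimation in the Southwell geometry discussed above: phase and slopes share grid points, and each pair of neighbours contributes one equation relating their phase difference to the average of their slopes. The minimum-norm least-squares solution resolves the undetermined piston term. Grid size and spacing are illustrative assumptions.

        # Sketch: Southwell zonal least-squares reconstruction from slope data.
        import numpy as np

        def southwell_reconstruct(sx, sy, d=1.0):
            """Estimate wavefront w (n x n) from x/y slopes sampled on the same grid."""
            n = sx.shape[0]
            rows, rhs = [], []
            idx = lambda i, j: i * n + j
            for i in range(n):
                for j in range(n - 1):          # horizontal neighbour equations
                    r = np.zeros(n * n)
                    r[idx(i, j + 1)], r[idx(i, j)] = 1.0, -1.0
                    rows.append(r)
                    rhs.append(d * (sx[i, j] + sx[i, j + 1]) / 2.0)
            for i in range(n - 1):
                for j in range(n):              # vertical neighbour equations
                    r = np.zeros(n * n)
                    r[idx(i + 1, j)], r[idx(i, j)] = 1.0, -1.0
                    rows.append(r)
                    rhs.append(d * (sy[i, j] + sy[i + 1, j]) / 2.0)
            # lstsq returns the minimum-norm solution of the rank-deficient system
            w, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
            return w.reshape(n, n)

        # Self-check on a tilted plane: constant slopes, estimate matches up to piston.
        n = 8
        w_true = 0.3 * np.arange(n)[None, :] + 0.1 * np.arange(n)[:, None]
        sx = np.full((n, n), 0.3); sy = np.full((n, n), 0.1)
        w_est = southwell_reconstruct(sx, sy)
        print(np.allclose(w_est - w_est.mean(), w_true - w_true.mean()))  # True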

    The Global sphere reconstruction (GSR) - Demonstrating an independent implementation of the astrometric core solution for Gaia

    Context. The Gaia ESA mission will estimate the astrometric and physical data of more than one billion objects, providing the largest and most precise catalog of absolute astrometry in the history of astronomy. The core of this process, the so-called global sphere reconstruction, is the reduction of a subset of these objects which will be used to define the celestial reference frame. As the Hipparcos mission showed, and as is inherent to all kinds of absolute measurements, possible errors in the data reduction can hardly be identified from the catalog itself, thus potentially introducing systematic errors in all derived work. Aims. Following up on the lessons learned from Hipparcos, our aim is to develop an independent sphere reconstruction method that helps guarantee the quality of the astrometric results without fully reproducing the main processing chain. Methods. Given the unfeasibility of a complete replica of the data reduction pipeline, an astrometric verification unit (AVU) was instituted by the Gaia Data Processing and Analysis Consortium (DPAC). One of its jobs is to implement and operate an independent Global Sphere Reconstruction (GSR), parallel to the baseline one (AGIS, the Astrometric Global Iterative Solution) but limited to the primary stars and intended for validation purposes, to compare the two results and report on any significant differences. Results. Tests performed on simulated data show that GSR is able to reproduce, at the sub-μas level, the results of the AGIS demonstration run presented in Lindegren et al. (2012). Conclusions. Further development is ongoing to improve the treatment of real data and the software modules that compare the AGIS and GSR solutions to identify possible discrepancies above the tolerance level set by the accuracy of the Gaia catalog. Comment: Accepted for publication in Astronomy & Astrophysics
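
    At its core, the global sphere reconstruction is a very large sparse least-squares problem linking source, attitude, and calibration unknowns to the observations. The toy sketch below poses a miniature problem of that shape and solves it with LSQR, an iterative solver of the family used for such systems; the matrix sizes and sparsity pattern are illustrative only and bear no relation to the actual DPAC software.

        # Toy sketch: iterative solution of a sparse astrometric-style LS system.
        import numpy as np
        from scipy.sparse import random as sprandom
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(0)
        n_obs, n_unknowns = 5000, 500            # real Gaia systems are vastly larger
        A = sprandom(n_obs, n_unknowns, density=0.01, random_state=0, format="csr")
        x_true = rng.normal(size=n_unknowns)
        b = A @ x_true + rng.normal(scale=1e-3, size=n_obs)  # noisy observations

        x_est, istop, itn, r1norm = lsqr(A, b, atol=1e-10, btol=1e-10)[:4]
        print(f"stop flag: {istop}, iterations: {itn}, residual norm: {r1norm:.2e}")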

    Adaptive filtering applications to satellite navigation

    PhD thesis. Differential Global Navigation Satellite Systems employ the extended Kalman filter to estimate the reference position error. High-accuracy integrated navigation systems have the ability to mix traditional inertial sensor outputs with satellite-based position information, and can be used to develop high-accuracy landing systems for aircraft. This thesis considers a host of estimation problems associated with aircraft navigation systems that currently rely on the extended Kalman filter (EKF), and proposes to use a nonlinear estimation algorithm, the unscented Kalman filter (UKF), which does not rely on Jacobian linearisation. The objective is to develop high-accuracy positioning algorithms to facilitate the use of GNSS or DGNSS for aircraft landing. Firstly, the position error in a typical satellite navigation problem depends on the accuracy of the orbital ephemeris. The thesis presents results for the prediction of the orbital ephemeris from a customised navigation satellite receiver's data message. The SDP4/SDP8 algorithms and suitable noise models are used to establish the measured data. Secondly, the differential station common-mode position error, excluding the contribution due to ephemeris errors, is usually estimated by employing an EKF. The thesis then considers the application of the UKF to the mixing problem, so as to facilitate the mixing of measurements made by either a GNSS or a DGNSS and a variety of low-cost or high-precision INS sensors. Precise, adaptive UKFs and a suitable nonlinear propagation method are used to estimate the orbit ephemeris, the differential position, and the navigation filter mixing errors. The results indicate that the method is particularly suitable for estimating the orbit ephemeris of navigation satellites and the differential position and navigation filter mixing errors, thus facilitating interoperable DGNSS operation for aircraft landing.
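
    A minimal sketch of the unscented transform at the heart of a UKF follows: a small set of sigma points propagated through the nonlinearity replaces the Jacobian linearisation used by the EKF. The dynamics function and tuning parameters are illustrative assumptions.

        # Sketch: unscented transform of a Gaussian through a nonlinear function.
        import numpy as np

        def unscented_transform(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
            """Propagate a Gaussian (mean, cov) through nonlinear f via sigma points."""
            n = mean.size
            lam = alpha**2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * cov)            # matrix square root
            sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
            wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))     # mean weights
            wc = wm.copy()                                     # covariance weights
            wm[0] = lam / (n + lam)
            wc[0] = wm[0] + (1 - alpha**2 + beta)
            Y = np.array([f(s) for s in sigma])                # push points through f
            y_mean = wm @ Y
            diff = Y - y_mean
            y_cov = (wc[:, None] * diff).T @ diff              # weighted outer products
            return y_mean, y_cov

        # Example: propagate a two-state oscillator-like model one step.
        f = lambda x: np.array([x[0] + 0.1 * x[1], x[1] - 0.1 * x[0]])
        m, P = unscented_transform(f, np.array([1.0, 0.0]), 0.01 * np.eye(2))
        print(m, P)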