
    Fast quantitative susceptibility mapping with L1-regularization and automatic parameter selection

    Purpose: To enable fast reconstruction of quantitative susceptibility maps with a total variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and fast Fourier transforms. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared with the nonlinear conjugate gradient (CG) solver, the proposed method is 20 times faster. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering, and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 min using MATLAB on a standard workstation, compared with 22 min using the CG solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 min, which would have taken 4 h with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5 times faster than with the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional blood oxygen level-dependent susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion method.
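    The closed-form iteration described above hinges on the soft-thresholding operator, the exact proximal map of the ℓ1 penalty. A minimal sketch (the threshold `lam` stands in for the regularization parameter; values are illustrative, not from the paper):

```python
def soft_threshold(x, lam):
    """Shrink x toward zero by lam; the exact prox of lam * |x|."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Applied elementwise to a gradient-domain estimate: large entries move
# toward zero by lam, and small entries become exactly zero (sparsity).
values = [2.5, -0.3, 0.9, -1.7]
shrunk = [soft_threshold(v, 1.0) for v in values]
```

    In the split formulation, this elementwise shrinkage alternates with an FFT-diagonalized least-squares update, which is what makes each iteration closed-form and fast.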

    Covariate assisted screening and estimation

    Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown but is sparse in the sense that most of its coordinates are 0. The main interest is to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao [Nonlinear Time Series: Nonparametric and Parametric Methods (2003) Springer]) and the change-point problem (Bhattacharya [In Change-Point Problems (South Hadley, MA, 1992) (1994) 28-56 IMS]), we are primarily interested in the case where the Gram matrix G = X'X is nonsparse but sparsifiable by a finite-order linear filter. We focus on the regime where signals are both rare and weak, so that successful variable selection is very challenging but still possible. We approach this problem by a new procedure called covariate assisted screening and estimation (CASE). CASE first uses linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we know where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, a two-stage screen-and-clean [Fan and Song Ann. Statist. 38 (2010) 3567-3604; Wasserman and Roeder Ann. Statist. 37 (2009) 2178-2201] procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. Comment: Published at http://dx.doi.org/10.1214/14-AOS1243 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
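    The first stage of CASE applies a finite-order linear filter to both sides of Y = Xβ + z so that the filtered Gram matrix becomes sparse. A hedged sketch of that filtering step, using a first-order difference as a hypothetical example of such a filter (the paper's filters depend on the correlation structure at hand):

```python
def apply_filter(series, coeffs=(1.0, -1.0)):
    """Convolve a sequence with a short linear filter (valid part only).

    With coeffs = (1, -1) this is a first-order difference, the kind of
    finite-order filter that can turn a long-memory (nonsparse) Gram
    matrix into a banded, sparse one.
    """
    k = len(coeffs)
    return [sum(coeffs[j] * series[i + j] for j in range(k))
            for i in range(len(series) - k + 1)]

y = [3.0, 5.0, 4.0, 7.0]
filtered = apply_filter(y)  # -> [-2.0, 1.0, -3.0]
```

    After filtering, the sparse covariance induces a sparse graph on the coordinates of β, which is what lets the screening stage visit only small, graph-connected submodels instead of all subsets.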

    An Adaptive Semi-Parametric and Context-Based Approach to Unsupervised Change Detection in Multitemporal Remote-Sensing Images

    In this paper, a novel automatic approach to the unsupervised identification of changes in multitemporal remote-sensing images is proposed. This approach, unlike classical ones, is based on the formulation of the unsupervised change-detection problem in terms of Bayesian decision theory. In this context, an adaptive semi-parametric technique for the unsupervised estimation of the statistical terms associated with the gray levels of changed and unchanged pixels in a difference image is presented. Such a technique exploits the effectiveness of two theoretically well-founded estimation procedures: the reduced Parzen estimate (RPE) procedure and the expectation-maximization (EM) algorithm. Then, thanks to the resulting estimates and to a Markov Random Field (MRF) approach used to model the spatial-contextual information contained in the multitemporal images considered, a change detection map is generated. The adaptive semi-parametric nature of the proposed technique allows its application to different kinds of remote-sensing images. Experimental results, obtained on two sets of multitemporal remote-sensing images acquired by two different sensors, confirm the validity of the proposed approach.
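    An illustrative sketch of the EM component, not the paper's RPE+EM hybrid: plain EM for a two-Gaussian mixture on simulated difference-image gray levels, separating an "unchanged" (low-valued) population from a "changed" (high-valued) one. The mixture parameters and data here are hypothetical.

```python
import math
import random

def em_two_gaussians(data, iters=50):
    """Fit a two-component Gaussian mixture to 1-D data by EM."""
    # Crude initialization from the data range.
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: weighted updates of weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, pi

# Simulated difference-image gray levels: unchanged pixels near 0,
# changed pixels near 6.
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(6.0, 1.0) for _ in range(100)])
mu, var, pi = em_two_gaussians(data)
```

    In the paper these per-class densities are then combined with an MRF spatial prior, so each pixel's decision also depends on its neighbors rather than on its gray level alone.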

    Fractional Integration and Business Cycles Features

    We show in this article that fractionally integrated univariate models for GDP may lead to a better replication of business cycle characteristics. We first show that the business cycle features are clearly affected by the degree of integration as well as by the other short-run components of the series. Then, we model the real GDP in France, the UK and the US by means of fractionally integrated ARIMA (ARFIMA) models, and show that the three time series can be specified in terms of this type of model with orders of integration higher than one but smaller than two. Comparing the ARFIMA specifications with those based on ARIMA models, we show via simulations that the former better describe the business cycle features of the data, at least for the cases of the UK and the US.
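    The distinguishing ingredient of an ARFIMA(p, d, q) model is the fractional differencing operator (1 − L)^d with non-integer d, whose binomial expansion gives weights w_0 = 1, w_k = w_{k−1}(k − 1 − d)/k. A small sketch (d = 1.4 is an illustrative value in the 1 < d < 2 range the abstract reports, not an estimate from the paper):

```python
def frac_diff_weights(d, n):
    """First n weights of the fractional differencing operator (1 - L)^d,
    via the recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

# For non-integer d the weights decay hyperbolically instead of cutting
# off after d lags as they do for integer differencing -- the source of
# the long-memory behavior ARFIMA models capture.
w = frac_diff_weights(1.4, 5)
```

    Applying these weights to a series produces the fractionally differenced series that the short-run ARMA(p, q) part is then fitted to.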

    Forecasting Irish inflation using ARIMA models

    This paper outlines the practical steps which need to be undertaken to use autoregressive integrated moving average (ARIMA) time series models for forecasting Irish inflation. A framework for ARIMA forecasting is drawn up. It considers two alternative approaches to the issue of identifying ARIMA models: the Box-Jenkins approach and objective penalty function methods. The emphasis is on forecast performance, which suggests more focus on minimising out-of-sample forecast errors than on maximising in-sample 'goodness of fit'. Thus, the approach followed is unashamedly one of 'model mining' with the aim of optimising forecast performance. Practical issues in ARIMA time series forecasting are illustrated with reference to the harmonised index of consumer prices (HICP) and some of its major sub-components.
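    The out-of-sample emphasis above can be sketched as a rolling-origin evaluation that scores candidate forecasters by post-sample RMSE rather than in-sample fit. The two forecasters and the data below are toy stand-ins for competing ARIMA specifications, not from the paper:

```python
import math

def rolling_rmse(series, forecaster, min_train=4):
    """RMSE of one-step-ahead forecasts made from each expanding sample."""
    errs = []
    for t in range(min_train, len(series)):
        pred = forecaster(series[:t])
        errs.append((series[t] - pred) ** 2)
    return math.sqrt(sum(errs) / len(errs))

naive = lambda hist: hist[-1]                    # random-walk-style forecast
mean_forecast = lambda hist: sum(hist) / len(hist)

# Hypothetical inflation readings with a mild upward drift.
inflation = [2.0, 2.1, 2.3, 2.2, 2.4, 2.5, 2.4, 2.6]
# 'Model mining': keep the specification with the smaller post-sample RMSE.
best = min([naive, mean_forecast],
           key=lambda f: rolling_rmse(inflation, f))
```

    Ranking real ARIMA candidates works the same way: refit each specification on the expanding sample, forecast one step ahead, and select on accumulated out-of-sample error.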

    Modified RTK-GNSS for Challenging Environments

    Real-Time Kinematic Global Navigation Satellite System (RTK-GNSS) is currently the premier technique for achieving centimeter-level accuracy quickly and easily. However, the robustness of RTK-GNSS diminishes in challenging environments due to severe multipath effects and a limited number of available GNSS signals. This is a pressing issue, especially for GNSS users in the navigation industry. This paper proposes and evaluates several methodologies designed to overcome these issues by enhancing the availability and reliability of RTK-GNSS solutions in urban environments. Our novel approach involves the integration of conventional methods with a new technique that leverages surplus satellites, those not initially used for positioning, to more reliably detect incorrect fix solutions. We conducted three tests in densely built-up areas within the Tokyo region. The results demonstrate that our approach not only surpasses the fix rate of the latest commercial receivers and a popular open-source RTK-GNSS program but also improves positional reliability to levels comparable to or exceeding those of the aforementioned commercial technology.
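    A hypothetical sketch of the surplus-satellite idea: satellites left out of the fix act as independent witnesses, and if the candidate fixed position cannot explain their observed ranges, the fix is flagged as likely incorrect. The geometry, units, and threshold below are illustrative only; the paper's actual test statistics and carrier-phase processing are far more involved.

```python
import math

def range_residual(pos, sat, observed_range):
    """Observed minus predicted range from a candidate position to a satellite."""
    predicted = math.dist(pos, sat)
    return observed_range - predicted

def validate_fix(pos, surplus_obs, threshold=0.5):
    """Accept the fix only if every surplus satellite's residual is small."""
    return all(abs(range_residual(pos, sat, r)) < threshold
               for sat, r in surplus_obs)

# Toy scene: a candidate fix at the origin and two surplus satellites
# whose observed ranges nearly match the predicted geometry.
fix = (0.0, 0.0, 0.0)
surplus = [((20000.0, 0.0, 0.0), 20000.1),
           ((0.0, 20000.0, 0.0), 19999.8)]
ok = validate_fix(fix, surplus)  # residuals 0.1 and -0.2 -> accepted
```

    The design point is that a multipath-corrupted but internally consistent fix can pass checks built only from the satellites used in the solution; held-out satellites supply the independent evidence needed to catch it.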