3,502 research outputs found

    A Theoretically Guaranteed Deep Optimization Framework for Robust Compressive Sensing MRI

    Magnetic Resonance Imaging (MRI) is one of the most dynamic and safe imaging techniques available for clinical applications. However, the rather slow speed of MRI acquisitions limits patient throughput and potential indications. Compressive Sensing (CS) has proven to be an efficient technique for accelerating MRI acquisition. The most widely used CS-MRI model, founded on the premise of reconstructing an image from an incompletely filled k-space, leads to an ill-posed inverse problem. In past years, many efforts have been made to optimize the CS-MRI model efficiently. Inspired by deep learning techniques, some preliminary works have tried to incorporate deep architectures into the CS-MRI process. Unfortunately, these deeply trained optimization methods still lack convergence guarantees (due to their experience-based networks) and robustness (i.e., real-world noise modeling). In this work, we develop a new paradigm that integrates designed numerical solvers with data-driven architectures for CS-MRI. By introducing an optimality-condition checking mechanism, we can rigorously prove the convergence of our deep CS-MRI optimization scheme. Furthermore, we explicitly formulate Rician noise distributions within our framework and obtain an extended CS-MRI network that handles the real-world noise arising in the MRI process. Extensive experimental results verify that the proposed paradigm outperforms existing state-of-the-art techniques in reconstruction accuracy and efficiency, as well as in robustness to noise in real scenes.
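    The two difficulties named above can be reproduced in a few lines (a toy sketch of ours, not the paper's method; the phantom, sampling rate and noise level are illustrative): undersampling k-space makes the naive inverse aliased, and magnitude images carry Rician noise, i.e. the modulus of a Gaussian-corrupted complex signal:

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                 # toy square phantom

# ill-posedness: drop ~70% of the k-space rows, then invert naively
k = np.fft.fft2(img)
mask = rng.random(64) < 0.3             # keep roughly 30% of the rows
zero_filled = np.fft.ifft2(k * mask[:, None])   # aliased reconstruction

# Rician noise: magnitude of a complex signal with i.i.d. Gaussian noise
# on the real and imaginary channels (biased even where the image is zero)
sigma = 0.05
noise = sigma * (rng.standard_normal(img.shape)
                 + 1j * rng.standard_normal(img.shape))
rician = np.abs(img + noise)
```

    The zero-filled inverse shows aliasing along the undersampled direction, and the background of the Rician image has a strictly positive mean (about sigma*sqrt(pi/2)), which is the bias an explicit Rician noise model has to account for.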

    Structure-Constrained Basis Pursuit for Compressively Sensing Speech

    Compressed Sensing (CS) exploits the sparsity of many signals to enable sampling below the Nyquist rate. If the original signal is sufficiently sparse, the Basis Pursuit (BP) algorithm will perfectly reconstruct it. Unfortunately, many signals that intuitively appear sparse do not meet the threshold for sufficient sparsity. These signals require so many CS samples for accurate reconstruction that the advantages of CS disappear. This is because Basis Pursuit and Basis Pursuit Denoising model only sparsity. We developed Structure-Constrained Basis Pursuit (SCBP), which models the structure of somewhat-sparse signals as upper and lower bound constraints on the Basis Pursuit Denoising solution. We applied it to speech, which seems sparse but does not compress well with CS, and obtained improved quality over Basis Pursuit Denoising. When a single parameter (i.e., the phone) is encoded, Normalized Mean Squared Error (NMSE) decreases by between 16.2% and 1.00% when sampling with CS at between 1/10 and 1/2 the Nyquist rate, respectively. When bounds are coded as a sum of Gaussians, NMSE decreases by between 28.5% and 21.6% over the same range. SCBP can be applied to any somewhat-sparse signal with a predictable structure to enable improved reconstruction quality with the same number of samples.
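    A rough sketch of the idea, assuming a generic projected-ISTA solver (the function names, bounds and parameters below are ours for illustration, not the paper's implementation): solve basis pursuit denoising and project each iterate onto the structural upper/lower bounds:

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the l1 norm, applied elementwise
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def scbp_ista(A, b, lam, lower, upper, n_iter=2000):
    """Projected ISTA for bound-constrained basis pursuit denoising:
    minimise 0.5*||A x - b||^2 + lam*||x||_1  s.t.  lower <= x <= upper."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, gradient Lipschitz constant
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                # gradient of the data-fit term
        x = soft_threshold(x - step * grad, step * lam)
        x = np.clip(x, lower, upper)            # enforce the structural bounds
    return x

# toy usage: recover a sparse signal known to lie in [0, 1]
# from 30 random projections of a length-60 signal
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, 0.5, 0.8]
b = A @ x_true
x_hat = scbp_ista(A, b, lam=0.01, lower=0.0, upper=1.0)
```

    The extra projection step is the whole point: when the bounds encode real signal structure, it shrinks the feasible set without changing the number of samples.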

    Explaining business model innovation processes: A problem formulation and problem solving perspective

    This study explains business model innovation processes in industrial firms. Drawing on three case studies of leading business-to-business firms shifting from product-based to service-based business models, it introduces problems as a theoretical concept to explain business model innovation processes. We show how formulating and solving problems guides the search for a viable business model, and why some problem formulation and solving activities lead firms to shift between backward-looking and forward-looking searches. The shift to a forward-looking search is triggered by the perceived failure of an established way of working, while the shift to a backward-looking search is driven by the perception of high alternative costs. We contribute to the business model innovation and servitization literature by theorizing the process of business model innovation and providing implications for managers.

    Flexible Multi-layer Sparse Approximations of Matrices and Applications

    The computational cost of many signal processing and machine learning techniques is often dominated by the cost of applying certain linear operators to high-dimensional vectors. This paper introduces an algorithm aimed at reducing the complexity of applying linear operators in high dimension by approximately factorizing the corresponding matrix into a few sparse factors. The approach relies on recent advances in non-convex optimization. It is first explained and analyzed in detail and then demonstrated experimentally on various problems, including dictionary learning for image denoising and the approximation of large matrices arising in inverse problems.
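    A concrete special case of such multi-layer sparse factorizations (a classical exact example, not the paper's learned factorization): the 2^n x 2^n Hadamard matrix factors into n butterfly factors with two nonzeros per row, reducing a matrix-vector product from N^2 to 2*N*n multiplies:

```python
import numpy as np

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])

def butterfly_factors(n):
    # n sparse factors whose product is the 2**n Hadamard matrix;
    # each factor I kron H2 kron I has exactly two nonzeros per row
    return [np.kron(np.kron(np.eye(2**k), H2), np.eye(2**(n - 1 - k)))
            for k in range(n)]

n = 4
N = 2**n
factors = butterfly_factors(n)
H = np.linalg.multi_dot(factors)

# reference: direct Kronecker-product construction of the same matrix
H_ref = H2.copy()
for _ in range(n - 1):
    H_ref = np.kron(H_ref, H2)

# dense mat-vec costs N^2 multiplies; the factorized form costs 2*N per factor
dense_cost = N * N
sparse_cost = sum(int((f != 0).sum()) for f in factors)   # 2*N*n
```

    The paper's contribution is to recover factorizations of this flavour approximately, for general matrices, by non-convex optimization rather than by closed-form construction.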

    IoT for measurements and measurements for IoT

    This thesis is framed within the broad strand of the Internet of Things (IoT) and develops along two parallel paths. On one hand, it identifies operational scenarios in which the IoT paradigm could be innovative and preferable to pre-existing solutions, discussing a couple of applications in detail. On the other hand, it presents methodologies to assess the performance of technologies and related enabling protocols for IoT systems, focusing mainly on metrics and parameters related to the functioning of the physical layer of those systems.

    Enhancing entrepreneurial innovation through industry-led accelerators: corporate-new venture dynamics and organizational redesign in a port maritime ecosystem

    This PhD dissertation studies the management and design of corporate accelerators, in particular industry-led value chain corporate accelerators. I address a multi-faceted research question about the novelty, corporate impact, dynamics and design of industry-led accelerators. Using a longitudinal, inductive, multiple-case embedded research design that analyses the industrial accelerator interface, the relationships between incumbent firms and external new ventures, and the R&D/innovation units of established firms in a port maritime complex, this dissertation addresses that question and makes five core contributions. First, it positions, for the first time, the corporate accelerator phenomenon at the intersection of fundamental management research streams, including organizational design, dynamic capabilities and corporate entrepreneurship. Second, it conducts the first study of the promising industry-led accelerator model by inductively generating a four-step framework of how these accelerators work: i) co-define a broad innovation remit, ii) generate an innovation funnel to attract start-ups and scale-ups, iii) mutually sense fit via flexible matching, and iv) select for scale and investment. Third, it finds striking counter-intuitive evidence that the industry-led accelerator accelerates not only external new ventures but also the corporate partners themselves, by triggering them to internalize the lean start-up method and redesign their R&D/innovation processes and routines. To explain this, I inductively developed a four-phase process model of corporate entrepreneurial capability-building, comprising: a) attracting, b) strategic fit sensing, c) shaping and d) internalizing. Fourth, this dissertation uncovers three novel tensions (internalization, implementation and role) at the incumbent-new venture interface and develops a new ecologically and symbiotically inspired framework for tension identification and mitigation in industrial acceleration contexts. Fifth, and finally, using the frameworks and process models developed, this dissertation proposes a new toolkit (industrial acceleration design canvas and workshops) to orient practitioners when strategizing, designing and sustaining corporate new venture ecosystem acceleration initiatives.

    Saddle-to-Saddle Dynamics in Diagonal Linear Networks

    In this paper we fully describe the trajectory of gradient flow over diagonal linear networks in the limit of vanishing initialisation. We show that the limiting flow successively jumps from one saddle of the training loss to another until reaching the minimum ℓ1-norm solution. This saddle-to-saddle dynamics translates into an incremental learning process, as each saddle corresponds to the minimiser of the loss constrained to an active set outside of which the coordinates must be zero. We explicitly characterise the visited saddles as well as the jumping times through a recursive algorithm reminiscent of the LARS algorithm used for computing the Lasso path. Our proof leverages a convenient arc-length time-reparametrisation which makes it possible to keep track of the heteroclinic transitions between the jumps. Our analysis requires negligible assumptions on the data, applies to both under- and overparametrised settings, and covers complex cases where there is no monotonicity of the number of active coordinates. We provide numerical experiments to support our findings.
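    The implicit bias towards the minimum ℓ1-norm solution can be reproduced on a toy problem (our own example, not one of the paper's experiments): gradient descent on the w_plus**2 - w_minus**2 reparametrisation with a tiny initialisation, as a discrete stand-in for the gradient flow:

```python
import numpy as np

# Toy underdetermined problem: one equation x1 + 2*x2 = 2.  Its minimum
# l1-norm solution is (0, 1), since the larger coefficient should carry
# all the mass.  Parametrise x = w_plus**2 - w_minus**2 (a diagonal
# linear network) and descend from a tiny initialisation alpha.
X = np.array([[1.0, 2.0]])
y = np.array([2.0])

alpha, lr = 1e-3, 1e-3
w_plus = np.full(2, alpha)
w_minus = np.full(2, alpha)
for _ in range(40000):
    x = w_plus**2 - w_minus**2
    g = X.T @ (X @ x - y)                       # gradient of the loss w.r.t. x
    w_plus = w_plus - lr * 2.0 * w_plus * g     # chain rule through w_plus**2
    w_minus = w_minus + lr * 2.0 * w_minus * g  # ... and through -w_minus**2
x = w_plus**2 - w_minus**2
```

    Tracking x over the iterations shows the incremental behaviour described above: the second coordinate activates first and saturates near 1, while the first stays pinned near its saddle value close to zero.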