
    Deployment and on-orbit shape modifications for a large space telescope using magnetostriction

    The Hubble Space Telescope, with its 2.4-m primary mirror, enabled notable scientific progress and discoveries, such as the acceleration of the expansion of the universe. Twenty-six years later, NASA is about to launch the next generation of space telescopes, namely the James Webb Space Telescope, with a diameter of 6.5 m. However, the limited size of primary mirrors reduces the performance, and thus the possible scientific outcome, of space telescope missions, and astronomers' desire for larger apertures will surely outstrip the ability of rocket fairings to accommodate them. In response to the desire for larger mirrors, deployable mirrors are the logical choice. The APERTURE mission presents a feasible approach toward the reality of deployable diffraction-limited ultraviolet-visible (UV-Vis) mirrors of 16-m diameter or larger. APERTURE uses a membrane mirror that will be folded like an umbrella and then deployed in space. Thanks to a magnetic smart material coating and a magnetic write head, post-deployment corrections will be applied to the surface figure. The feasibility of the concept was studied in a NIAC Phase I study resulting from a collaboration between Northwestern University and the University of Illinois at Urbana-Champaign. A video of the concept has been produced for clarity. The folded shape was designed and analyzed to verify that the telescope can be stowed in a Delta IV Heavy rocket fairing. The deployment of the primary mirror was then investigated and two different mechanisms were selected. The feasibility of post-deployment shape corrections was studied, and the impact of key design parameters was computed as a first step toward design optimization. A preliminary design was obtained, drawing also on the work carried out at Northwestern University. Finally, a work plan and test campaign were produced for the potential Phase II of the project.

    Further Development of APERTURE: A Precise Extremely Large Reflective Telescope Using Reconfigurable Elements

    One of the pressing needs for space ultraviolet-visible astronomy is a design that allows larger mirrors than the James Webb Space Telescope primary. The diameter of the rocket fairing limits the mirror diameter, so any future mission calling for a mirror 16 meters in diameter or larger will require a mirror that is deployed post-launch. In response to this deployment requirement, we address the issues of the concept called "A Precise Extremely Large Reflective Telescope Using Reconfigurable Elements" (APERTURE) with both hardware experiments and software simulations. We designed and built several fixtures with O-rings to hold a membrane. We established a coating process to make a membrane coated on one side with Cr and on the other side with Cr-Terfenol-D-NiCo. The Terfenol-D (T-D hereafter) is the magnetic smart material (MSM) we use. We established a procedure for measuring deformation over time and purchased a Shack-Hartmann system from Imagine Optic (https://www.imagine-optic.com). The first substrate we used was DuPont (TM) Kapton polyimide film. Due to material creep, we found the stability over a 48-hour period with a Kapton substrate was not as good as desired (greater than 1 micron). We then switched to CP1 polyimide. We found the CP1 much more resistant to creep, being stable from about 3 hours to 48 hours to within a measurement error below approximately 0.1 micron. We produced a 13-micron maximum deviation on a 50-millimeter-diameter piece of CP1 (25 microns thick). The T-D coating was about 2 microns thick, and the other layers about 10 nanometers. The magnetic field at the base was about 0.1 tesla. We can make the T-D film at least 5 times thicker and the magnetic field at least 5 times stronger, and hence make deformations as much as 25 times larger. We formed a collaboration, initiated at the NIAC (NASA Innovative Advanced Concepts) mid-term review, with Dr. Ron Shiri of Goddard Space Flight Center (GSFC) to explore making controlled deviations on the lambda/14-lambda/20 scales required to bring a surface to the diffraction limit. We carried out only preliminary work on Si using a Coordinate Measuring Machine (CMM), which produced deviations on the 1-micron level. We are still working on a program to bring to GSFC a flat enough (radius of curvature greater than 10 microns) Si piece coated with Cr, T-D, and NiCo. We then plan to carry out tests with an interferometer. Further, we formed a new collaboration with Prof. Rajan Vaidyanathan of the University of Central Florida to replace the CP1 with a shape memory alloy (SMA). Through this collaboration, we acquired new Federal funding outside of NASA to explore the use of SMAs (we use NiTi). Our preliminary results indicate that we can produce deformations greater than 1 micron on NiTi approximately 100 microns thick. Furthermore, we have shown that the NiTi can deploy to within 1 micron of its originally set and trained shape.
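
    As a rough check of the "25 times larger" claim, one can assume the magnetostrictive deformation grows linearly with both the T-D film thickness and the applied magnetic field, so the projected improvement is the product of the two factors. A minimal sketch of this arithmetic (the linear-scaling assumption and the function name are ours, not from the study):

        # Rough scaling estimate for the magnetostrictive deformation,
        # assuming deformation ~ (film thickness) * (applied field).
        # The baseline value is the measurement reported above.

        def projected_deformation(base_deform_um: float,
                                  thickness_factor: float,
                                  field_factor: float) -> float:
            """Scale the measured deformation by the thickness and field factors."""
            return base_deform_um * thickness_factor * field_factor

        base = 13.0  # microns, measured on 50-mm CP1 (2-um T-D coating, 0.1 T)
        print(projected_deformation(base, 5.0, 5.0))  # 325.0 -> "25 times larger"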

    Proximal and Interior Point Optimization Strategies in Image Reconstruction

    Inverse problems in image processing can be solved by diverse techniques, such as classical variational methods, recent deep learning approaches, or Bayesian strategies. Although relying on different principles, these methods all require efficient optimization algorithms. The proximity operator appears as a crucial tool in many iterative solvers for nonsmooth optimization problems. In this thesis, we illustrate the versatility of proximal algorithms by incorporating them within each one of the aforementioned resolution methods. First, we consider a variational formulation including a set of constraints and a composite objective function. We present PIPA, a novel proximal interior point algorithm for solving the considered optimization problem. This algorithm includes variable metrics for acceleration purposes. We derive convergence guarantees for PIPA and show in numerical experiments that it compares favorably with state-of-the-art algorithms in two challenging image processing applications. In a second part, we investigate a neural network architecture called iRestNet, obtained by unfolding a proximal interior point algorithm over a fixed number of iterations. iRestNet requires the expression of the logarithmic barrier proximity operator and of its first derivatives, which we provide for three useful types of constraints. Then, we derive conditions under which this optimization-inspired architecture is robust to an input perturbation. We conduct several image deblurring experiments, in which iRestNet performs well with respect to a variational approach and to state-of-the-art deep learning methods. The last part of this thesis focuses on a stochastic sampling method for solving inverse problems in a Bayesian setting. We present an accelerated proximal unadjusted Langevin algorithm called PP-ULA. This scheme is incorporated into a hybrid Gibbs sampler used to perform joint deconvolution and segmentation of ultrasound images. PP-ULA employs the majorize-minimize principle to address non-log-concave priors. As shown in numerical experiments, PP-ULA leads to a significant time reduction and to very satisfactory deconvolution and segmentation results on both simulated and real ultrasound data.
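
    To illustrate the logarithmic barrier proximity operators on which iRestNet relies, the simplest case is the barrier for a nonnegativity constraint, whose proximity operator has a closed form. A minimal sketch (the nonnegativity case is our illustrative choice; the thesis itself covers three constraint types):

        import numpy as np

        def prox_log_barrier(x: np.ndarray, gamma: float, mu: float) -> np.ndarray:
            """Proximity operator of u -> -mu*log(u) (barrier for u > 0) with
            step size gamma. The minimizer of -gamma*mu*log(u) + 0.5*(u - x)**2
            solves u**2 - x*u - gamma*mu = 0; the positive root is returned
            elementwise."""
            return 0.5 * (x + np.sqrt(x**2 + 4.0 * gamma * mu))

        # Even for negative inputs the output stays strictly positive,
        # which is how the barrier keeps iterates feasible.
        print(prox_log_barrier(np.array([-1.0, 0.0, 2.0]), gamma=0.1, mu=0.5))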

    PIPA: A New Proximal Interior Point Algorithm for Large Scale Convex Optimization

    Interior point methods have been known for decades to be useful for the solution of small to medium-sized constrained optimization problems. These approaches have the benefit of ensuring feasibility of the iterates through a logarithmic barrier. We propose to incorporate a proximal forward-backward step in the solution of the barrier subproblem to account for non-necessarily differentiable terms arising in the objective function. The combination of this scheme with a novel line-search strategy gives rise to the so-called Proximal Interior Point Algorithm (PIPA), suitable for the minimization of the sum of a smooth convex function and a nonsmooth convex one under general convex constraints. The convergence of PIPA is secured under mild assumptions. As demonstrated by numerical experiments carried out on a large-scale hyperspectral image unmixing application, the proposed method outperforms the state of the art.
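
    The core update can be summarized as a forward-backward step applied to the barrier subproblem: a gradient step on the smooth term plus the barrier, followed by the proximity operator of the nonsmooth term. A minimal sketch under our own simplifying assumptions (fixed step size, a nonnegativity barrier, and an l1 term as the nonsmooth part; none of these choices are specified by the abstract, and the line search that guarantees feasibility is omitted):

        import numpy as np

        def pipa_step(x, grad_f, prox_g, mu, gamma):
            """One forward-backward step on the barrier subproblem
            min_x f(x) + g(x) - mu * sum(log(x)): gradient step on the
            smooth part f plus the barrier, then the prox of g."""
            barrier_grad = -mu / x  # gradient of -mu*sum(log(x)), needs x > 0
            return prox_g(x - gamma * (grad_f(x) + barrier_grad), gamma)

        # Example: f(x) = 0.5*||x - b||^2 and g = 0.1*||.||_1
        # (soft-thresholding prox). The positive data keep iterates
        # in the interior here; the full algorithm uses a line search.
        b = np.array([1.0, 0.5, 2.0])
        grad_f = lambda x: x - b
        prox_g = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - 0.1 * g, 0.0)
        x = np.ones_like(b)
        for _ in range(50):
            x = pipa_step(x, grad_f, prox_g, mu=1e-3, gamma=0.5)
        print(x)  # approx. b shrunk by the l1 term, slightly lifted by the barrier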

    A Proximal Interior Point Algorithm with Applications to Image Processing

    In this article, we introduce a new proximal interior point algorithm (PIPA). This algorithm is able to handle convex optimization problems involving various constraints where the objective function is the sum of a Lipschitz-differentiable term and a possibly nonsmooth one. Each iteration of PIPA involves the minimization of a merit function evaluated for decaying values of a logarithmic barrier parameter. This inner minimization is performed thanks to a finite number of subiterations of a variable metric forward-backward method employing a line search strategy. The convergence of this latter step, as well as the convergence of the global method itself, is analyzed. The numerical efficiency of the proposed approach is demonstrated in two image processing applications.
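
    The nested structure described above can be sketched as an outer loop that decreases the barrier parameter and an inner loop running a finite number of forward-backward subiterations on the corresponding merit function. A hypothetical sketch reusing pipa_step from the snippet above (the decay factor and iteration counts are illustrative assumptions, and the variable metric and line search are omitted for brevity):

        def pipa(x0, grad_f, prox_g, mu0=1.0, decay=0.5,
                 n_outer=20, n_inner=10, gamma=0.5):
            """Outer loop: decaying logarithmic barrier parameter mu.
            Inner loop: a finite number of forward-backward subiterations
            on the barrier subproblem for the current mu."""
            x, mu = x0, mu0
            for _ in range(n_outer):
                for _ in range(n_inner):
                    x = pipa_step(x, grad_f, prox_g, mu, gamma)
                mu *= decay  # drive the barrier parameter toward zero
            return x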

    Geometry-Texture Decomposition/Reconstruction Using a Proximal Interior Point Algorithm

    The geometry-texture decomposition of images produced by X-ray computed tomography (CT) is a challenging inverse problem that is usually performed in two steps: reconstruction and decomposition. Decomposition can be used, for instance, to produce an approximate segmentation of the image, but this segmentation can be compromised by artifacts and noise arising from the acquisition and reconstruction processes. We propose a geometry-texture decomposition based on a TV-Laplacian model, well suited for segmentation and edge detection. The corresponding joint reconstruction and decomposition task from CT data is then formulated as a convex constrained minimization problem. We use our recently introduced proximal interior point method to solve this inverse problem in a reliable manner. Numerical experiments on realistic images of material samples illustrate the practical efficiency of the proposed approach. Our algorithm indeed compares favorably with a state-of-the-art method.
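
    The abstract does not give the exact model, but a joint TV-Laplacian reconstruction-decomposition formulation of this kind typically takes the following shape, with u the geometry (cartoon) component, v the texture component, H the CT projection operator, and y the measurements; the weight lambda and the data-fit bound epsilon are illustrative placeholders, not values from the paper:

        \min_{u,\,v}\ \mathrm{TV}(u) + \frac{\lambda}{2}\,\|\Delta v\|_2^2
        \quad \text{subject to} \quad \|H(u + v) - y\|_2 \le \varepsilon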