
    Design of a portable observatory control system

    In this thesis, we present the development of a new concept of operation for small robotic telescopes operated over the Internet. Our design includes a set of improvements to the control algorithms and hardware of several critical subsystems needed to obtain suitable data from a telescope. The principal contributions of this thesis can be summarized as five independent innovations:

    - An advanced closed-loop drive control: we designed an innovative hardware and software solution for controlling a telescope's position with high precision and high robustness.
    - A complete Telescope Control System (TCS): we implemented light, portable software that uses advanced astronomical algorithm libraries to optimally compute the telescope position in real time. The software also provides a new system of multiple simultaneous pointing models, built on state machines, which allows higher pointing precision and longer exposure times with external guiding telescopes (a minimal sketch of such a pointing computation follows this abstract).
    - A distributed software architecture (CoolObs): CoolObs is an implementation on the ZeroC-ICE framework that allows the control, interaction, and communication of all the peripherals present in an astronomical observatory.
    - A patented system for dynamic collimation of optics: SAPACAN is a parallel mechanical arrangement, with associated software, used for active compensation of low-frequency aberration variations in small telescopes.
    - Collimation estimation algorithms: a sensor-less adaptive-optics algorithm is applied to the analysis of images obtained with the field camera. This algorithm can detect the effects of poor collimation, and the measured misalignments can then feed corrections to a device such as SAPACAN.

    Because new technologies appear constantly in astronomy, it was one of the first fields to adopt equipment that was not yet widespread, such as charge-coupled devices, the Internet, adaptive optics, and the remote and robotic control of instruments. However, each time one of these technologies entered the field, a software protocol had to be designed according to the state of the art of the time. Once the same devices became commonplace, years after their protocols were defined, the same communication rules tended to persist in order to keep backward compatibility with old, and progressively unused, devices. Drawing on the accumulated software experience of robotic observing, we can identify several inconsistencies in the commonly used architectures that stem from these historical reasons. This situation is why we propose a new concept that considers an observatory as a single entity rather than a separate list of independent peripherals. We describe the application of this concept to the field of robotic telescopes and implement it in several completely different examples to show its versatility and robustness.

    First, we give a short introduction to the astronomical concepts used throughout the document. Second, we survey the state of the art of the solutions currently used in the different subsystems of an observing facility and explain why they are ill suited to small telescopes. The principal section is dedicated to detailing each of the five innovations enumerated above, and finally we present the fabrication and integration of these solutions.
    We show how the joint use of all of these innovations yielded outstanding results in the robotic operation of a new prototype and in the adaptation of several existing refurbished telescopes. Finally, the last chapter of this thesis is dedicated to summarizing the conclusions of our work.
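
    As a concrete illustration of the real-time pointing computation a TCS performs, the minimal Python sketch below converts a catalogue position into the local horizontal coordinates the mount must be driven to. It uses the astropy library rather than the thesis's own algorithm stack, and the site and target values are illustrative assumptions, not taken from the thesis.

        # A minimal sketch of a TCS demand-position computation, assuming astropy.
        # The site and target below are illustrative, not from the thesis.
        from astropy.coordinates import AltAz, EarthLocation, SkyCoord
        from astropy.time import Time
        import astropy.units as u

        site = EarthLocation(lat=-29.26 * u.deg, lon=-70.73 * u.deg, height=2400 * u.m)
        target = SkyCoord(ra=83.633 * u.deg, dec=-5.391 * u.deg)  # ICRS catalogue position

        # Convert (RA, Dec) to the mount's (Alt, Az) for this instant and site.
        altaz = target.transform_to(AltAz(obstime=Time.now(), location=site))
        print(f"demand position: alt={altaz.alt.deg:.4f} deg, az={altaz.az.deg:.4f} deg")

        # A pointing model would then apply small, mount-specific corrections
        # (flexure, misalignment, encoder offsets) before commanding the drives.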

    Iterative Solvers for Physics-based Simulations and Displays

    Realistic computer-generated images and simulations require complex models to properly capture the many subtle behaviors of each physical phenomenon. The mathematical equations underlying these models are complicated and cannot be solved analytically. Numerical procedures must thus be used to obtain approximate solutions. These procedures are often iterative algorithms, where an initial guess is progressively improved until it converges to a desired solution. Iterative methods are a convenient and efficient way to compute solutions to complex systems, and are at the core of most modern simulation methods. In this thesis by publication, we present three papers where iterative algorithms play a major role in a simulation or rendering method. First, we propose a method to improve the visual quality of fluid simulations. By creating a high-resolution surface representation around an input fluid simulation, stabilized with iterative methods, we introduce additional detail on top of the simulation. Second, we describe a method to compute fluid simulations using model reduction. We design a novel vector-field basis to represent fluid velocity, creating a method specifically tailored to improve all iterative components of the simulation. Finally, we present an algorithm to compute high-quality images for multifocal displays in a virtual-reality context. Displaying images on multiple display layers incurs significant additional cost, but we formulate the image-decomposition problem so as to allow an efficient solution using a simple iterative algorithm.
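
    To make the recurring pattern concrete, the following minimal Python sketch shows the iterative-solver idea the abstract describes: start from an initial guess and refine it until the iterates converge. Jacobi iteration on a small diagonally dominant system stands in for the far more elaborate solvers developed in the thesis; the matrix, right-hand side, and tolerance are illustrative assumptions.

        # Jacobi iteration: a toy instance of "improve an initial guess until it
        # converges". Solves A x = b for a diagonally dominant A.
        import numpy as np

        A = np.array([[4.0, 1.0, 0.0],
                      [1.0, 4.0, 1.0],
                      [0.0, 1.0, 4.0]])  # diagonal dominance guarantees convergence
        b = np.array([1.0, 2.0, 3.0])

        x = np.zeros_like(b)               # initial guess
        D = np.diag(A)                     # diagonal entries
        R = A - np.diagflat(D)             # off-diagonal part

        for k in range(100):
            x_new = (b - R @ x) / D        # one Jacobi sweep
            if np.linalg.norm(x_new - x) < 1e-10:
                break
            x = x_new

        print(f"converged after {k} sweeps; residual {np.linalg.norm(A @ x_new - b):.2e}")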

    Understanding camera trade-offs through a Bayesian analysis of light field projections - A revision

    Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras no longer directly capture a traditional image, but instead require the joint reconstruction of structure and image information. For example, recent coded-aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the trade-offs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the trade-offs of each camera type and analyze their limitations.
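
    The core inference can be written compactly. In a standard linear-Gaussian reading of the abstract (our notation, not necessarily the paper's), the measurements stack into a vector $y$, each row of $A$ is one sensor element's inner product with the discretized 4D light field $x$, and $n$ is sensor noise:

        \[ y = A x + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I) \]

    Estimating the original light field is then maximum a posteriori inference under the light field prior $p(x)$:

        \[ \hat{x} = \arg\max_x \, p(x \mid y) = \arg\max_x \, p(y \mid x)\, p(x) \]

    Different camera designs correspond to different matrices $A$; comparing them amounts to comparing the reconstruction error each $A$ admits under the same prior and noise level.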

    Foundations and Methods for GPU based Image Synthesis

    Effects such as global illumination, caustics, defocus, and motion blur are an integral part of generating images that are perceived as realistic pictures and cannot be distinguished from photographs. In general, two different approaches exist to render images: ray tracing and rasterization. Ray tracing is a widely used technique for production-quality rendering of images, where image quality and physical correctness matter more than rendering time. Generating these effects is a very compute- and memory-intensive process and can take minutes to hours for a single camera shot. Rasterization, on the other hand, is used to render images when real-time constraints have to be met (e.g., computer games). Often, specialized algorithms are used to approximate these complex effects, achieving plausible results while sacrificing image quality for performance.

    This thesis is split into two parts. In the first part we look at algorithms and load-balancing schemes for general-purpose computing on graphics processing units (GPUs). Most ray tracing related algorithms (e.g., KD-tree construction or bidirectional path tracing) have unpredictable memory requirements. Dynamic memory allocation on GPUs suffers from the global synchronization required to track the state of current allocations. We present a method to reduce this overhead on massively parallel hardware architectures. In particular, we merge small parallel allocation requests from different threads that can occur while exploiting SIMD-style parallelism, and we speed up dynamic allocation using a set of constraints that can be applied to a large class of parallel algorithms. To achieve the image quality needed for feature films, GPU clusters are often used to cope with the amount of computation required. We present a framework that employs dynamic load balancing and fair scheduling to minimize the average execution time of spawned computational tasks. Its load-balancing capabilities are demonstrated on irregular workloads: a bidirectional path tracer that renders complex effects at near-interactive frame rates.

    In the second part of the thesis we try to reduce the image-quality gap between production and real-time rendering. We present an adaptive acceleration structure for screen-space ray tracing that represents the scene geometry by planar approximations, yielding a fast method to skip empty space and compute exact intersection points based on the planar approximation. This technique allows simulating complex phenomena, including depth-of-field rendering and ray-traced reflections, at real-time frame rates. To handle motion blur in combination with transparent objects, we present a unified rendering approach that decouples space and time sampling, achieving interactive frame rates by reusing fragments during the sampling step. The scene geometry potentially visible at any point in time during a frame is rendered in a rasterization step and stored in temporally varying fragments. We perform spatial sampling to determine all temporally varying fragments that intersect a specific viewing ray at any point in time. Viewing rays can be sampled according to the lens uv-sampling to incorporate depth of field. In a final temporal sampling step, we evaluate the pre-determined viewing-ray/fragment intersections for one or more points in time. This allows incorporating standard shading effects and results in physically plausible motion and defocus blur for transparent and opaque objects.
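
    The allocation-merging idea above can be illustrated with a small conceptual sketch, written in plain Python standing in for GPU code: instead of every thread atomically bumping the global heap pointer, the threads of one SIMD group combine their requests with a prefix sum, issue a single atomic add for the total, and split the returned base offset among themselves. All names and sizes here are illustrative assumptions, not the thesis's implementation.

        # Conceptual model of warp-aggregated allocation: one atomic add serves
        # all threads of a SIMD group instead of one atomic add per thread.
        import itertools

        heap_top = 0                        # stands in for the global heap pointer

        def atomic_add(amount):
            global heap_top                 # one "atomic" bump per merged request
            old = heap_top
            heap_top += amount
            return old

        def warp_malloc(request_sizes):
            # Exclusive prefix sum gives each thread its offset within the block.
            offsets = [0] + list(itertools.accumulate(request_sizes))[:-1]
            base = atomic_add(sum(request_sizes))  # single global synchronization
            return [base + off for off in offsets]

        # 32 threads of one warp each request a small, differently sized allocation:
        ptrs = warp_malloc([8 * (1 + i % 4) for i in range(32)])
        print(f"32 allocations served by one atomic add; heap_top = {heap_top}")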

    A Bayesian approach for energy-based estimation of acoustic aberrations in high intensity focused ultrasound treatment

    Get PDF
    High intensity focused ultrasound is a non-invasive method for treating diseased tissue that uses a beam of ultrasound to generate heat within a small volume. A common challenge in applying this technique is that heterogeneity of the biological medium can defocus the ultrasound beam. Here we reduce the problem of refocusing the beam to the inverse problem of estimating the acoustic aberration caused by the biological tissue from acoustic radiation force imaging data. We solve this inverse problem in a Bayesian framework with a hierarchical prior, using a Metropolis-within-Gibbs algorithm. The framework is tested on both synthetic and experimental datasets. We demonstrate that our approach can estimate the aberrations from small datasets, as few as 32 sonication tests, which can lead to a significant speedup of the treatment process. Furthermore, our approach is compatible with a wide range of sonication tests and can be applied to other energy-based measurement techniques.
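
    For concreteness, the following minimal Python sketch shows the generic Metropolis-within-Gibbs pattern the abstract names: sweep over the parameters, proposing and accepting or rejecting one coordinate at a time. The toy log-posterior and proposal scales are illustrative assumptions, unrelated to the paper's actual aberration model.

        # Generic Metropolis-within-Gibbs on a toy two-parameter log-posterior.
        import numpy as np

        rng = np.random.default_rng(0)

        def log_post(theta):
            a, b = theta                    # toy correlated-Gaussian posterior
            return -0.5 * (a**2 + (b - 0.5 * a)**2)

        theta = np.zeros(2)
        step = [0.8, 0.8]                   # per-coordinate proposal scales
        samples = []

        for _ in range(5000):
            for i in range(2):              # Gibbs sweep: one coordinate at a time
                prop = theta.copy()
                prop[i] += step[i] * rng.normal()
                # Metropolis accept/reject for this coordinate
                if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                    theta = prop
            samples.append(theta.copy())

        burn = 1000                         # discard warm-up samples
        print("posterior mean estimate:", np.array(samples[burn:]).mean(axis=0))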

    Understanding camera trade-offs through a Bayesian analysis of light field projections

    Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras no longer directly capture a traditional image, but instead require the joint reconstruction of structure and image information. For example, recent coded-aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the trade-offs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the trade-offs of each camera type and analyze their limitations.