
    An evaluation of the GAMA/StarPU frameworks for heterogeneous platforms: the progressive photon mapping algorithm

    Master's dissertation in Informatics Engineering. The recent evolution of high-performance computing has moved towards heterogeneous platforms: multiple devices with different architectures, characteristics and programming models share application workloads. To help the programmer efficiently exploit these heterogeneous platforms, several frameworks have been under development. These dynamically manage the available computing resources through workload scheduling and data distribution, dealing with the inherent difficulties of different programming models and memory accesses. Among others, these frameworks include GAMA and StarPU. The GAMA framework aims to unify the multiple execution and memory models of the different devices in a computer system into a single, hardware-agnostic model. It was designed to efficiently manage resources for both regular and irregular applications, and it currently supports only conventional CPU devices and CUDA-enabled accelerators. StarPU has similar goals and features and a wider user community, but it lacks a single programming model. The main goal of this dissertation was an in-depth evaluation of a heterogeneous framework using a complex application as a case study. GAMA provided the starting vehicle for training, while StarPU was the framework selected for a thorough evaluation. The irregular progressive photon mapping algorithm was the selected case study. The goal of the evaluation was to assess the effectiveness of StarPU with a robust irregular application and to make a high-level comparison with the still-under-development GAMA, in order to provide some guidelines for GAMA's improvement. Results show that two main factors contribute to the performance of applications written with StarPU: the consideration of data transfers in the performance model, and the chosen scheduler. The study also uncovered some caveats in the StarPU API. Although these have no effect on performance, they present a challenge for newcomers to the framework. Both analyses resulted in a better understanding of the framework and enabled a comparative analysis with GAMA, pointing out the aspects in which GAMA could be further improved. Fundação para a Ciência e a Tecnologia (FCT) - UT Austin | Portugal Program
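    The two performance factors identified above can be made concrete with a small StarPU example. The sketch below is illustrative only (the photon_pass kernel, its buffer layout and the iteration count are hypothetical, not code from the dissertation); it registers data with StarPU, attaches a history-based performance model to a codelet, and relies on a transfer-aware scheduler selected at run time (for example STARPU_SCHED=dmda).

        // Minimal StarPU sketch (hypothetical kernel): a history-based performance
        // model plus a data-aware scheduler (STARPU_SCHED=dmda) lets StarPU weigh
        // both expected execution time and data-transfer cost when mapping tasks.
        #include <starpu.h>
        #include <cstdint>
        #include <vector>

        static void photon_pass_cpu(void *buffers[], void * /*cl_arg*/) {
            float *hits = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);
            unsigned n  = STARPU_VECTOR_GET_NX(buffers[0]);
            for (unsigned i = 0; i < n; ++i)
                hits[i] *= 0.5f;                 // stand-in for the real photon pass
        }

        static struct starpu_perfmodel photon_model;  // calibrated from past runs

        int main() {
            if (starpu_init(nullptr) != 0) return 1;

            photon_model.type   = STARPU_HISTORY_BASED;
            photon_model.symbol = "photon_pass";

            struct starpu_codelet cl;
            starpu_codelet_init(&cl);
            cl.cpu_funcs[0] = photon_pass_cpu;   // a cl.cuda_funcs[0] entry would add a GPU version
            cl.nbuffers     = 1;
            cl.modes[0]     = STARPU_RW;
            cl.model        = &photon_model;

            std::vector<float> hitpoints(1 << 20, 1.0f);
            starpu_data_handle_t h;
            starpu_vector_data_register(&h, STARPU_MAIN_RAM,
                                        reinterpret_cast<std::uintptr_t>(hitpoints.data()),
                                        hitpoints.size(), sizeof(float));

            for (int it = 0; it < 16; ++it)      // one task per iteration
                starpu_task_insert(&cl, STARPU_RW, h, 0);

            starpu_task_wait_for_all();
            starpu_data_unregister(h);
            starpu_shutdown();
            return 0;
        }

    With the dm/dmda family of schedulers, the calibrated history-based model together with the measured bus bandwidth lets StarPU account for data-transfer time when choosing a device, which is exactly the factor the evaluation found most influential.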

    Fast algorithm for real-time rings reconstruction

    The GAP project is dedicated to studying the application of GPUs in several contexts in which real-time response is important for taking decisions. The definition of real-time depends on the application under study, ranging from response times of a few μs up to several hours for very compute-intensive tasks. During this conference we presented our work on low-level triggers [1] [2] and high-level triggers [3] in high-energy physics experiments, and on specific applications for nuclear magnetic resonance (NMR) [4] [5] and cone-beam CT [6]. Apart from the study of dedicated solutions to decrease the latency due to data transport and preparation, the computing algorithms play an essential role in any GPU application. In this contribution, we show an original algorithm developed for trigger applications, to accelerate ring reconstruction in RICH detectors when it is not possible to obtain seeds for the reconstruction from external trackers.
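    The abstract does not spell out the algorithm, but the core operation in seedless ring reconstruction is fitting a circle directly to the detector hits. As a self-contained illustration (not the GAP trigger code), the sketch below implements the classic algebraic least-squares (Kåsa) circle fit, which recovers a ring's centre and radius from hit coordinates without any external seed; a real trigger would extend this to multiple rings and noisy hits.

        // Algebraic least-squares (Kåsa) circle fit: solve for (a, b, c) in
        //   x^2 + y^2 = 2 a x + 2 b y + c,   then  r = sqrt(c + a^2 + b^2).
        #include <cmath>
        #include <cstdio>
        #include <vector>

        struct Ring { double cx, cy, r; };

        Ring fit_ring(const std::vector<double> &x, const std::vector<double> &y) {
            // Accumulate the 3x3 normal equations M p = v for p = (a, b, c).
            double M[3][3] = {{0}}, v[3] = {0};
            for (size_t i = 0; i < x.size(); ++i) {
                const double row[3] = {2.0 * x[i], 2.0 * y[i], 1.0};
                const double rhs    = x[i] * x[i] + y[i] * y[i];
                for (int r = 0; r < 3; ++r) {
                    v[r] += row[r] * rhs;
                    for (int c = 0; c < 3; ++c) M[r][c] += row[r] * row[c];
                }
            }
            // Solve the 3x3 system by Cramer's rule.
            auto det3 = [](const double m[3][3]) {
                return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
            };
            const double D = det3(M);
            double p[3];
            for (int k = 0; k < 3; ++k) {
                double Mk[3][3];
                for (int r = 0; r < 3; ++r)
                    for (int c = 0; c < 3; ++c) Mk[r][c] = (c == k) ? v[r] : M[r][c];
                p[k] = det3(Mk) / D;
            }
            return { p[0], p[1], std::sqrt(p[2] + p[0] * p[0] + p[1] * p[1]) };
        }

        int main() {
            // Hits on a ring of radius 3 centred at (1, 2).
            const double kPi = 3.14159265358979323846;
            std::vector<double> xs, ys;
            for (int i = 0; i < 12; ++i) {
                const double t = 2.0 * kPi * i / 12.0;
                xs.push_back(1.0 + 3.0 * std::cos(t));
                ys.push_back(2.0 + 3.0 * std::sin(t));
            }
            const Ring ring = fit_ring(xs, ys);
            std::printf("centre (%.3f, %.3f), radius %.3f\n", ring.cx, ring.cy, ring.r);
        }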

    Enhancing Monte Carlo Particle Transport for Modern Many-Core Architectures

    Since nearly the beginning of electronic computing, Monte Carlo particle transport has been a fundamental approach for solving computational physics problems. Due to the high computational demands and inherently parallel nature of these applications, Monte Carlo transport is often performed in the supercomputing environment. Supercomputers are changing, however: parallelism within each node has increased dramatically, including the routine inclusion of many-core devices. Monte Carlo transport, like all applications that run on supercomputers, will be forced to make significant changes to its design in order to utilize these new architectures effectively. This dissertation presents solutions to central challenges that face Monte Carlo particle transport in this changing environment, specifically in the areas of threading models, tracking algorithms, tally data collection, and heterogeneous load balancing. The dissertation culminates with a study that combines all of the presented techniques in a production application at scale on Lawrence Livermore National Laboratory's RZAnsel supercomputer.
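    One of the threading and tally issues named above, namely scoring from many concurrent histories without contention, is commonly handled by giving each thread a private tally array that is reduced once at the end. The sketch below is a generic illustration of that pattern with toy one-dimensional physics and hypothetical names, not code from the dissertation.

        // Thread-private tallies: each worker scores energy deposition into its
        // own copy, and the copies are reduced once after the threads join. This
        // avoids atomics or locks on the hot path, at the cost of extra memory.
        #include <cstdio>
        #include <random>
        #include <thread>
        #include <vector>

        int main() {
            const int n_bins = 100, n_threads = 8;
            const long histories_per_thread = 100000;

            std::vector<std::vector<double>> local_tally(
                n_threads, std::vector<double>(n_bins, 0.0));

            auto worker = [&](int tid) {
                std::mt19937_64 rng(1234 + tid);                  // independent stream per thread
                std::exponential_distribution<double> step(1.0);  // unit mean free path
                for (long h = 0; h < histories_per_thread; ++h) {
                    double x = 0.0, energy = 1.0;
                    while (energy > 0.0 && x < n_bins) {
                        x += step(rng);                           // fly to the next collision
                        const int bin = static_cast<int>(x);
                        if (bin >= n_bins) break;                 // leaked out of the slab
                        const double dep = 0.5 * energy;          // toy collision: deposit half
                        local_tally[tid][bin] += dep;             // private, no synchronisation
                        energy -= dep;
                        if (energy < 1e-3) energy = 0.0;          // terminate the history
                    }
                }
            };

            std::vector<std::thread> pool;
            for (int t = 0; t < n_threads; ++t) pool.emplace_back(worker, t);
            for (auto &th : pool) th.join();

            std::vector<double> tally(n_bins, 0.0);               // final reduction
            for (int t = 0; t < n_threads; ++t)
                for (int b = 0; b < n_bins; ++b) tally[b] += local_tally[t][b];

            std::printf("deposited in bin 0: %.1f\n", tally[0]);
        }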

    Fred: A GPU-accelerated fast-Monte Carlo code for rapid treatment plan recalculation in ion beam therapy

    Ion beam therapy is a rapidly growing technique for tumor radiation therapy. Ions allow for a high dose deposition in the tumor region while sparing the surrounding healthy tissue. For this reason, the highest possible accuracy in the calculation of dose and its spatial distribution is required in treatment planning. On the one hand, commonly used treatment planning software solutions adopt a simplified beam-body interaction model by remapping pre-calculated dose distributions onto a 3D water-equivalent representation of the patient morphology. On the other hand, Monte Carlo (MC) simulations, which explicitly take into account all the details of the interaction of particles with human tissues, are considered to be the most reliable tool to address the complexity of mixed-field irradiation in a heterogeneous environment. However, full MC calculations are not routinely used in clinical practice because they typically demand substantial computational resources. Therefore, MC simulations are usually only used to check treatment plans for a restricted number of difficult cases. The advent of general-purpose GPU programming prompted the development of trimmed-down MC-based dose engines that can significantly reduce the time needed to recalculate a treatment plan with respect to standard MC codes on CPU hardware. In this work, we report on the development of fred, a new MC simulation platform for treatment planning in ion beam therapy. The code can transport particles through a 3D voxel grid using a class II MC algorithm. Both primary and secondary particles are tracked, and their energy deposition is scored along the trajectory. Effective models for particle-medium interaction have been implemented, balancing accuracy in dose deposition with computational cost. Currently, the most refined module is the transport of proton beams in water: single pencil beam depth-dose distributions obtained with fred agree with those produced by standard MC codes within 1-2% of the Bragg peak in the therapeutic energy range. A comparison with measurements taken at the CNAO treatment center shows that the lateral dose tails are reproduced within 2% in the field size factor test up to 20 cm. The tracing kernel can run on GPU hardware, achieving 10 million primaries on a single card. This performance allows one to recalculate a proton treatment plan with 1% of the total particles in just a few minutes.
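    To make the scoring step concrete: a class II MC engine of the kind described transports particles in discrete steps across a voxel grid and accumulates the energy lost in each voxel traversed. The fragment below is a deliberately simplified CPU-side sketch with a placeholder stopping power; it illustrates only the voxelised energy-deposition bookkeeping, not fred's physics models or its GPU tracing kernel.

        // Toy voxelised energy-deposition scoring: a proton-like particle loses
        // energy in fixed steps along z and the loss is added to its current voxel.
        // The stopping power below is a placeholder, not real physics.
        #include <algorithm>
        #include <cstdio>
        #include <vector>

        int main() {
            const int nx = 64, ny = 64, nz = 200;          // voxel grid
            const double voxel_mm = 1.0, step_mm = 0.1;
            std::vector<double> dose(nx * ny * nz, 0.0);   // energy per voxel (MeV)

            auto voxel_index = [&](double x, double y, double z) {
                const int ix = static_cast<int>(x / voxel_mm);
                const int iy = static_cast<int>(y / voxel_mm);
                const int iz = static_cast<int>(z / voxel_mm);
                if (ix < 0 || iy < 0 || iz < 0 || ix >= nx || iy >= ny || iz >= nz)
                    return -1;                             // left the phantom
                return (iz * ny + iy) * nx + ix;
            };

            // One primary entering the grid centre along +z with 150 MeV.
            double x = 32.0, y = 32.0, z = 0.0, energy = 150.0;
            while (energy > 0.0) {
                // Placeholder stopping power that grows as the particle slows,
                // roughly mimicking the rise toward the Bragg peak.
                const double dEdz = 0.5 + 60.0 / (energy + 5.0);   // MeV per mm
                const double dE = std::min(energy, dEdz * step_mm);
                const int idx = voxel_index(x, y, z);
                if (idx < 0) break;
                dose[idx] += dE;                           // score the deposition
                energy -= dE;
                z += step_mm;                              // straight-line transport
            }
            std::printf("range ~ %.1f mm\n", z);
        }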

    Accelerating Reconfigurable Financial Computing

    This thesis proposes novel approaches to the design, optimisation, and management of reconfigurable computer accelerators for financial computing. There are three contributions. First, we propose novel reconfigurable designs for derivative pricing using both Monte Carlo and quadrature methods. These designs involve exploring techniques such as control variate optimisation for Monte Carlo and multi-dimensional analysis for quadrature methods. Significant speedups and energy savings are achieved using our Field-Programmable Gate Array (FPGA) designs over both Central Processing Unit (CPU) and Graphics Processing Unit (GPU) designs. Second, we propose a framework for distributing computing tasks on multi-accelerator heterogeneous clusters. In this framework, different computational devices, including FPGAs, GPUs and CPUs, work collaboratively on the same financial problem based on a dynamic scheduling policy. The trade-off in speed and energy consumption of different accelerator allocations is investigated. Third, we propose a mixed-precision methodology for optimising Monte Carlo designs, and a reduced-precision methodology for optimising quadrature designs. These methodologies enable us to optimise the throughput of reconfigurable designs by using datapaths with minimised precision, while maintaining the same accuracy of results as in the original designs.
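    As background for the first contribution, the control variate technique mentioned above reduces Monte Carlo variance by subtracting a correlated quantity whose expectation is known. The sketch below prices a European call under Black-Scholes with the terminal asset price as the control variate; it is a plain CPU illustration of the numerical idea, not the FPGA datapath developed in the thesis.

        // Control variate Monte Carlo for a European call under Black-Scholes.
        // The control Y = S_T has known mean E[Y] = S0 * exp(rT); subtracting
        // beta * (Y - E[Y]) from the payoff keeps the estimator unbiased while
        // reducing its variance.
        #include <algorithm>
        #include <cmath>
        #include <cstdio>
        #include <random>
        #include <vector>

        int main() {
            const double S0 = 100, K = 100, r = 0.05, sigma = 0.2, T = 1.0;
            const int n = 200000;

            std::mt19937_64 rng(42);
            std::normal_distribution<double> gauss(0.0, 1.0);

            std::vector<double> payoff(n), control(n);
            for (int i = 0; i < n; ++i) {
                const double z  = gauss(rng);
                const double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T
                                                + sigma * std::sqrt(T) * z);
                payoff[i]  = std::max(ST - K, 0.0);
                control[i] = ST;
            }

            // Estimate beta = Cov(payoff, control) / Var(control) from the samples.
            double mp = 0, mc = 0;
            for (int i = 0; i < n; ++i) { mp += payoff[i]; mc += control[i]; }
            mp /= n; mc /= n;
            double cov = 0, var = 0;
            for (int i = 0; i < n; ++i) {
                cov += (payoff[i] - mp) * (control[i] - mc);
                var += (control[i] - mc) * (control[i] - mc);
            }
            const double beta = cov / var;

            const double EY = S0 * std::exp(r * T);        // known control mean
            double adj = 0;
            for (int i = 0; i < n; ++i)
                adj += payoff[i] - beta * (control[i] - EY);
            const double price = std::exp(-r * T) * adj / n;

            std::printf("control-variate price: %.4f (beta = %.3f)\n", price, beta);
        }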

    Coprocessor integration for real-time event processing in particle physics detectors

    High-energy physics experiments today have higher energies, more accurate sensors, and more flexible means of data collection than ever before. Their rapid progress requires ever more computational power, and massively parallel hardware, such as graphics cards, holds the promise of providing this power at a much lower cost than traditional CPUs. Yet using this hardware requires new algorithms and new approaches to organizing data that can be difficult to integrate with existing software. In this work, I explore the problem of using parallel algorithms within existing CPU-oriented frameworks and propose a compromise between the different trade-offs. The solution is a service that communicates with multiple event-processing pipelines, gathers data into batches, and submits them to hardware-accelerated parallel algorithms. I integrate this service with Gaudi, a framework underlying the software environments of two of the four major experiments at the Large Hadron Collider. I examine the overhead the service adds to parallel algorithms. I perform a case study of using the service to run a parallel track reconstruction algorithm for the LHCb experiment's prospective VELO Pixel subdetector and look at the performance characteristics of different data batch sizes. Finally, I put the findings into perspective within the context of the LHCb trigger's requirements.
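    The service described above reduces to a simple pattern: many per-event pipelines hand their data to a broker, which groups them into batches for a parallel algorithm and returns to each pipeline its own result. The sketch below shows that pattern with standard C++ threading primitives; it is a schematic of the batching idea with hypothetical names, not the Gaudi integration developed in this work.

        // Schematic batching broker: event pipelines submit work and block on a
        // future; one consumer thread drains the queue, runs a single batched call
        // (standing in for a GPU algorithm), and fulfils the per-event promises.
        #include <condition_variable>
        #include <cstdio>
        #include <future>
        #include <mutex>
        #include <thread>
        #include <vector>

        struct Job { int event_data; std::promise<int> result; };

        std::mutex mtx;
        std::condition_variable cv;
        std::vector<Job> queue;
        bool done = false;

        int main() {
            std::thread broker([] {
                for (;;) {
                    std::vector<Job> batch;
                    {
                        std::unique_lock<std::mutex> lk(mtx);
                        cv.wait(lk, [] { return !queue.empty() || done; });
                        if (queue.empty() && done) return;
                        batch.swap(queue);                 // take everything queued so far
                    }
                    // "Batched algorithm": here it just squares each event's datum.
                    for (Job &j : batch)
                        j.result.set_value(j.event_data * j.event_data);
                }
            });

            std::vector<std::thread> pipelines;
            for (int e = 0; e < 8; ++e) {
                pipelines.emplace_back([e] {
                    std::promise<int> p;
                    std::future<int> f = p.get_future();
                    {
                        std::lock_guard<std::mutex> lk(mtx);
                        queue.push_back({e, std::move(p)});
                    }
                    cv.notify_one();
                    std::printf("event %d -> %d\n", e, f.get());  // wait for the batch
                });
            }
            for (auto &t : pipelines) t.join();

            { std::lock_guard<std::mutex> lk(mtx); done = true; }
            cv.notify_one();
            broker.join();
        }

    The trade-off the case study measures, batch size versus latency and throughput, lives entirely in how long the broker waits before draining the queue.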

    Generating renderers

    Most production renderers developed for the film industry are huge pieces of software that are able to render extremely complex scenes. Unfortunately, they are implemented using the currently available programming models, which are not well suited to modern computing hardware like CPUs with vector units or GPUs. Thus, they have to deal with the added complexity of expressing parallelism and using hardware features within those models. Since compilers alone cannot optimize and generate efficient programs for every type of hardware, owing to the large optimization spaces and the complexity of the underlying compiler problems, programmers have to rely on compiler-specific hardware intrinsics or write non-portable code. The consequence of these limitations is that programmers resort to writing the same code twice when they need to port their algorithm to a different architecture, and that the code itself becomes difficult to maintain, as algorithmic details are buried under hardware details. Thankfully, there are solutions to this problem, taking the form of Domain-Specific Languages (DSLs). As their name suggests, these languages are tailored to one domain, and compilers can therefore use domain-specific knowledge to optimize algorithms and choose the best execution policy for a given target hardware. In this thesis, we opt for another way of encoding domain-specific knowledge: we implement a generic, high-level, and declarative rendering and traversal library in a functional language, and later refine it for a target machine by providing partial evaluation annotations. The partial evaluator then specializes the entire renderer according to the available knowledge of the scene: shaders are specialized when their inputs are known, and in general, all redundant computations are eliminated. Our results show that the generated renderers are faster and more portable than renderers written with state-of-the-art competing libraries, and that, in comparison, our rendering library requires less implementation effort. This work was supported by the Federal Ministry of Education and Research (BMBF) as part of the Metacca and ProThOS projects, as well as by the Intel Visual Computing Institute (IVCI) and the Cluster of Excellence on Multimodal Computing and Interaction (MMCI) at Saarland University. Parts of it were also co-funded by the European Union (EU) as part of the Dreamspace project.
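    The core idea, specialising the renderer once parts of its input are known so that redundant work disappears at compile time, can be illustrated outside the thesis's functional setting. The C++ fragment below uses templates and if constexpr as a stand-in for partial evaluation annotations; it is only an analogy showing what specialisation buys, not the library or the partial evaluator actually used in the thesis.

        // Analogy for partial evaluation: when the shader's inputs are known at
        // compile time, the specialised instantiation contains no texture fetch
        // and no branch; the generic path keeps the full logic.
        #include <array>
        #include <cstdio>

        struct Texture { std::array<float, 4> texels{0.2f, 0.4f, 0.6f, 0.8f}; };

        // Generic shader: albedo may come from a texture or be a known constant.
        template <bool HasTexture>
        float shade(float u, float n_dot_l, const Texture *tex) {
            float albedo;
            if constexpr (HasTexture) {
                // Only present in the HasTexture instantiation.
                albedo = tex->texels[static_cast<int>(u * 3.99f)];
            } else {
                albedo = 0.5f;               // known constant: folded by the compiler
            }
            return albedo * n_dot_l;
        }

        int main() {
            Texture tex;
            // Specialised "renderer": the scene says this material has no texture,
            // so the texture path is compiled away entirely.
            std::printf("constant material: %.3f\n", shade<false>(0.1f, 0.9f, nullptr));
            std::printf("textured material: %.3f\n", shade<true>(0.1f, 0.9f, &tex));
        }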

    4-D Tomographic Inference: Application to SPECT and MR-driven PET

    Emission tomographic imaging is framed in a Bayesian and information-theoretic setting. The first part of the thesis is inspired by the new possibilities offered by PET-MR systems, formulating models and algorithms for 4-D tomography and for the integration of information from multiple imaging modalities. The second part of the thesis extends the models described in the first part, focusing on the imaging hardware. Three key aspects of the design of new imaging systems are investigated: criteria and efficient algorithms for the optimisation and real-time adaptation of the parameters of the imaging hardware; learning the characteristics of the imaging hardware; and exploiting the rich information provided by depth-of-interaction (DOI) and energy-resolving devices. The document concludes with a description of the NiftyRec software toolkit, developed to enable 4-D multi-modal tomographic inference.
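    For context, the standard building block of statistical emission reconstruction is the MLEM update shown below in generic notation, where y_i are the measured counts in detector bin i, a_ij is the system matrix (probability that an emission in voxel j is detected in bin i), and lambda_j^(k) is the current activity estimate. The thesis's 4-D and multi-modal models extend this kind of iteration; the exact algorithms are those described in the thesis, not this textbook form.

        \lambda_j^{(k+1)} \;=\; \frac{\lambda_j^{(k)}}{\sum_i a_{ij}}
                                \sum_i a_{ij}\,
                                \frac{y_i}{\sum_{j'} a_{ij'}\,\lambda_{j'}^{(k)}}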

    Contributions to the improvement of image quality in CBCT and CBμCT and application in the development of a CBμCT system

    During the last years, cone-beam x-ray CT (CBCT) has become established as a widespread imaging technique and a feasible alternative to conventional CT for dedicated imaging tasks in which the limited flexibility of conventional CT motivates the development of dedicated designs. CBCT systems are starting to be routinely used in image-guided radiotherapy; image-guided surgery using C-arms; scanning of body parts such as the sinuses, the breast or the extremities; and, especially, preclinical small-animal imaging, often coupled to molecular imaging systems. Despite the research effort devoted to the advancement of CBCT, the challenges introduced by the use of large cone angles and two-dimensional detectors remain a field of vigorous research towards the improvement of CBCT image quality. Moreover, systems for small-animal imaging add to the challenges posed by clinical CBCT the need for higher resolution to obtain equivalent image quality in much smaller subjects. This thesis contributes to the progress of CBCT imaging by addressing a variety of issues affecting image quality in CBCT in general and in CBCT for small-animal imaging (CBμCT) in particular. As part of this work we have assessed and optimized the performance of CBμCT systems for different imaging tasks. To this end, we have developed a new CBμCT system with variable geometry and all the required software tools for acquisition, calibration and reconstruction. The system served as a tool for the optimization of the imaging process and for the study of image degradation effects in CBμCT, as well as a platform for biological research using small animals. The set of tools for the accurate study of CBCT was completed by developing a fast Monte Carlo simulation engine based on GPUs, specifically devoted to the realistic estimation of scatter and its effects on image quality in arbitrary CBCT configurations, with arbitrary spectra, detector response, and antiscatter grids. This new Monte Carlo engine outperformed current simulation platforms by more than an order of magnitude. Due to the limited options for simulating the spectra of microfocus x-ray sources used in CBμCT, this thesis contributes a new spectra generation model, based on an empirical model for conventional radiology and mammography sources and modified in accordance with experimental data. The new spectral model showed good agreement with experimental exposure and attenuation data for different materials. The developed tools for CBμCT research were used to study detector performance in terms of dynamic range. The dynamic range of the detector was characterized together with its effect on image quality. As a result, a new, simple method for extending the dynamic range of flat-panel detectors was proposed and evaluated. The method is based on a modified acquisition process and a mathematical treatment of the acquired data. Scatter is usually identified as one of the major causes of image quality degradation in CBCT. For this reason, the developed Monte Carlo engine was applied to an in-depth study of the effects of scatter for a representative range of CBCT embodiments used in clinical and preclinical practice. We estimated the amount and spatial distribution of the total scatter fluence and its individual components. The effect of antiscatter grids on image quality and noise was also evaluated. We found a close relation between scatter and the air gap of the system, in line with previous results in the literature.
    We also observed a non-negligible contribution of forward-directed scatter, which is responsible to a great extent for streak artifacts in CBCT. The spatial distribution of scatter was significantly affected by forward scatter, somewhat challenging the usual assumption that the scatter distribution contains mostly low frequencies. Antiscatter grids proved effective for the reduction of cupping, but they performed much worse against streaks and shifted the scatter distribution toward higher frequencies.
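    The kind of exposure and attenuation check used to validate a spectral model can be written compactly: given a candidate spectrum, the expected detector signal behind an attenuator of thickness t follows Beer-Lambert attenuation integrated over energy. The sketch below shows that computation with made-up spectrum and attenuation tables; the real model, materials and detector response are those of the thesis, not these placeholder numbers.

        // Beer-Lambert transmission of a polychromatic spectrum through an absorber:
        //   signal(t) = sum_E phi(E) * E * exp(-mu(E) * t)   (energy-integrating detector)
        // The spectrum phi and attenuation mu below are placeholder values.
        #include <cmath>
        #include <cstdio>
        #include <vector>

        int main() {
            // Energy grid (keV), fluence per bin, and linear attenuation (1/mm)
            // of a hypothetical absorber; real tables would come from measurements
            // or from a reference database such as NIST.
            const std::vector<double> energy = {20, 30, 40, 50, 60, 70, 80};
            const std::vector<double> phi    = {0.05, 0.15, 0.25, 0.25, 0.18, 0.09, 0.03};
            const std::vector<double> mu     = {2.3, 1.1, 0.6, 0.4, 0.3, 0.25, 0.22};

            auto signal = [&](double t_mm) {
                double s = 0.0;
                for (size_t i = 0; i < energy.size(); ++i)
                    s += phi[i] * energy[i] * std::exp(-mu[i] * t_mm);
                return s;
            };

            const double open_field = signal(0.0);
            for (double t = 0.0; t <= 5.0; t += 1.0)
                std::printf("t = %.0f mm  transmission = %.3f\n", t, signal(t) / open_field);
        }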