66 research outputs found

    Toward robust and efficient physically-based rendering

    Get PDF
    Le rendu fondĂ© sur la physique est utilisĂ© pour le design, l'illustration ou l'animation par ordinateur. Ce type de rendu produit des images photo-rĂ©alistes en rĂ©solvant les Ă©quations qui dĂ©crivent le transport de la lumiĂšre dans une scĂšne. Bien que ces Ă©quations soient connues depuis longtemps, et qu'un grand nombre d'algorithmes aient Ă©tĂ© dĂ©veloppĂ©s pour les rĂ©soudre, il n'en existe pas qui puisse gĂ©rer de maniĂšre efficace toutes les scĂšnes possibles. PlutĂŽt qu'essayer de dĂ©velopper un nouvel algorithme de simulation d'Ă©clairage, nous proposons d'amĂ©liorer la robustesse de la plupart des mĂ©thodes utilisĂ©es Ă  ce jour et/ou qui sont amenĂ©es Ă  ĂȘtre dĂ©veloppĂ©es dans les annĂ©es Ă  venir. Nous faisons cela en commençant par identifier les sources de non-robustesse dans un moteur de rendu basĂ© sur la physique, puis en dĂ©veloppant des mĂ©thodes permettant de minimiser leur impact. Le rĂ©sultat de ce travail est un ensemble de mĂ©thodes utilisant diffĂ©rents outils mathĂ©matiques et algorithmiques, chacune de ces mĂ©thodes visant Ă  amĂ©liorer une partie spĂ©cifique d'un moteur de rendu. Nous examinons aussi comment les architectures matĂ©rielles actuelles peuvent ĂȘtre utilisĂ©es Ă  leur maximum afin d'obtenir des algorithmes plus rapides, sans ajouter d'approximations. Bien que les contributions prĂ©sentĂ©es dans cette thĂšse aient vocation Ă  ĂȘtre combinĂ©es, chacune d'entre elles peut ĂȘtre utilisĂ©e seule : elles sont techniquement indĂ©pendantes les unes des autres.Physically-based rendering is used for design, illustration or computer animation. It consists in producing photorealistic images by solving the equations which describe how light travels in a scene. Although these equations have been known for a long time and many algorithms for light simulation have been developed, no algorithm exists to solve them efficiently for any scene. Instead of trying to develop a new algorithm devoted to light simulation, we propose to enhance the robustness of most methods used nowadays and/or which can be developed in the years to come. We do this by first identifying the sources of non-robustness in a physically-based rendering engine, and then addressing them by specific algorithms. The result is a set of methods based on different mathematical or algorithmic methods, each aiming at improving a different part of a rendering engine. We also investigate how the current hardware architectures can be used at their maximum to produce more efficient algorithms, without adding approximations. Although the contributions presented in this dissertation are meant to be combined, each of them can be used in a standalone way: they have been designed to be internally independent of each other

    Globally Adaptive Control Variate for Robust Numerical Integration

    Get PDF
    Many methods in computer graphics require the integration of functions over low-to-middle-dimensional spaces. However, no available method can handle all possible integrands accurately and rapidly. This paper presents a robust numerical integration method, able to handle arbitrary non-singular scalar or vector-valued functions defined on low-to-middle-dimensional spaces. Our method combines control variates, globally adaptive subdivision and Monte Carlo estimation to achieve fast and accurate computation of any non-singular integral. The runtime is linear with respect to the standard deviation, while standard Monte Carlo methods are quadratic. We additionally show through numerical tests that our method is extremely stable in terms of computation time and memory footprint, assessing its robustness. We demonstrate our method on a participating-media voxelization application, which requires the computation of several million integrals for complex media.
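    As an illustration of the variance-reduction principle the paper builds on, here is a minimal control-variate Monte Carlo sketch (a generic textbook construction, not the paper's algorithm; the function names and the choice of approximation g are our own assumptions):

```python
import math
import random

def control_variate_estimate(f, g, G, n_samples=10_000):
    """Estimate the integral of f over [0, 1] using g as a control variate.

    g is a cheap approximation of f whose integral G over [0, 1] is known exactly.
    Monte Carlo integrates only the residual f - g, whose variance is small when
    g tracks f well; the exact integral G is then added back.
    """
    acc = 0.0
    for _ in range(n_samples):
        x = random.random()
        acc += f(x) - g(x)
    return G + acc / n_samples

if __name__ == "__main__":
    f = lambda x: math.exp(-x * x)   # example integrand
    g = lambda x: 1.0 - x * x        # cheap approximation used as control variate
    G = 2.0 / 3.0                    # exact integral of g over [0, 1]
    print(control_variate_estimate(f, g, G))   # close to 0.7468
```

    The paper's method additionally builds the control variate automatically and drives a globally adaptive subdivision of the domain; the sketch above only shows the basic variance-reduction step.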

    Development of a Humanized HLA-A2.1/DP4 Transgenic Mouse Model and the Use of This Model to Map HLA-DP4-Restricted Epitopes of HBV Envelope Protein

    Get PDF
    A new homozygous humanized transgenic mouse strain, HLA-A2.1+/+HLA-DP4+/+ hCD4+/+mCD4−/−IAÎČ−/−ÎČ2m−/− (HLA-A2/DP4), was obtained by crossing the previously characterized HLA-A2+/+ÎČ2m−/− (A2) mouse and our previously created HLA-DP4+/+ hCD4+/+mCD4−/−IAÎČ−/− (DP4) mouse. We confirmed that the transgenes (HLA-A2, HLA-DP4, hCD4) inherited from the parental A2 and DP4 mice are functional in the HLA-A2/DP4 mice. After immunizing HLA-A2/DP4 mice with a hepatitis B DNA vaccine, hepatitis B virus-specific antibodies and HLA-A2-restricted and HLA-DP4-restricted responses were observed, similar to those in naturally infected humans. The present study therefore demonstrates that HLA-A2/DP4 transgenic mice can faithfully mimic human cellular responses. Furthermore, we report four new HLA-DP4-restricted epitopes derived from HBsAg that were identified in both vaccinated HLA-A2/DP4 mice and HLA-DP4-positive human individuals. The HLA-A2/DP4 mouse model is a promising preclinical animal model carrying alleles present in more than a quarter of the human population. This model should facilitate the identification of novel HLA-A2- and HLA-DP4-restricted epitopes and vaccine development, as well as the characterization of HLA-DP4-restricted responses against infection in humans.

    Detection chain and electronic readout of the QUBIC instrument

    Get PDF
    The Q and U Bolometric Interferometer for Cosmology (QUBIC) Technical Demonstrator (TD) aims to show the feasibility of combining interferometry and bolometric detection. The electronic readout system is based on an array of 128 NbSi Transition Edge Sensors cooled to 350 mK, read out by 128 SQUIDs at 1 K, which are controlled and amplified by an Application-Specific Integrated Circuit at 40 K. This readout design allows 128:1 Time Domain Multiplexing. We report the design and the performance of the detection chain in this paper. The Technical Demonstrator underwent a campaign of tests in the lab. Evaluation of the QUBIC bolometers and readout electronics includes the measurement of I-V curves, time constants and the Noise Equivalent Power. Currently, the mean Noise Equivalent Power is ~2 × 10⁻Âč⁶ W/√Hz.
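    As a rough illustration of what 128:1 time-domain multiplexing means for the data stream, the sketch below splits a single multiplexed sample stream back into per-detector time lines (the slot ordering, frame structure and sample counts are arbitrary assumptions, not QUBIC's actual scheme):

```python
import numpy as np

N_DET = 128  # one amplifier chain serves 128 detectors in successive time slots

def demultiplex(raw_stream: np.ndarray) -> np.ndarray:
    """Split a 1-D multiplexed stream into per-detector time lines.

    Detector i only appears in every 128th sample of the raw stream, so the
    stream is cut into frames of 128 samples and transposed: row i of the
    result holds the successive samples of detector i.
    """
    n_frames = raw_stream.size // N_DET
    frames = raw_stream[: n_frames * N_DET].reshape(n_frames, N_DET)
    return frames.T

if __name__ == "__main__":
    raw = np.random.default_rng(0).normal(size=N_DET * 1000)  # fake raw data
    per_detector = demultiplex(raw)
    print(per_detector.shape)  # (128, 1000)
```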

    Planck intermediate results: XVI. Profile likelihoods for cosmological parameters

    Get PDF
    We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously-released results, and the construction of frequentist confidence intervals using profile likelihoods. The agreement with the cosmological results from the Bayesian framework is excellent, demonstrating the robustness of the Planck results to the statistical methodology. We investigate the inclusion of neutrino masses, where more significant differences may appear due to the non-Gaussian nature of the posterior mass distribution. By applying the Feldman-Cousins prescription, we again obtain results very similar to those of the Bayesian methodology. However, the profile-likelihood analysis of the cosmic microwave background (CMB) combination (Planck+WP+highL) reveals a minimum well within the unphysical negative-mass region. We show that inclusion of the Planck CMB-lensing information regularizes this issue, and provide a robust frequentist upper limit ∑ mÎœ ≀ 0.26 eV (95% confidence) from the CMB+lensing+BAO data combination. Reproduced with permission from Astronomy & Astrophysics, © ESO 201
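    As a reminder of what profiling a likelihood means in practice, here is a minimal sketch: for each fixed value of the parameter of interest, all other parameters are re-minimised and the resulting Δχ2 curve is read off against the global minimum. The two-parameter toy χ2 and the parameter names below are our own assumptions, not the Planck likelihood or pipeline:

```python
import numpy as np
from scipy.optimize import minimize

def chi2(theta, nu):
    # Toy chi2 with one parameter of interest (theta) and one nuisance (nu).
    return (theta - 1.0) ** 2 / 0.04 + (nu - 0.5) ** 2 / 0.25 + 0.8 * theta * nu

def profile(theta_grid):
    """Profile likelihood: minimise over the nuisance parameter at each theta."""
    prof = []
    for theta in theta_grid:
        res = minimize(lambda nu: chi2(theta, nu[0]), x0=[0.0])
        prof.append(res.fun)
    prof = np.array(prof)
    return prof - prof.min()   # Delta chi2; Delta chi2 <= 1 gives a 68% interval

if __name__ == "__main__":
    grid = np.linspace(0.0, 2.0, 41)
    print(profile(grid).round(2))
```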

    Planck intermediate results. XIX. An overview of the polarized thermal emission from Galactic dust

    Get PDF
    This paper presents an overview of the polarized sky as seen by Planck HFI at 353 GHz, which is the most sensitive Planck channel for dust polarization. We construct and analyse maps of dust polarization fraction and polarization angle at 1° resolution, taking into account noise bias and possible systematic effects. The sensitivity of the Planck HFI polarization measurements allows for the first time a mapping of Galactic dust polarized emission on large scales, including low column density regions. We find that the maximum observed dust polarization fraction is high (p_max = 19.8%), in particular in some regions of moderate hydrogen column density (N_H < 2 × 10ÂČÂč cm⁻ÂČ). The polarization fraction displays a large scatter at N_H below a few 10ÂČÂč cm⁻ÂČ. There is a general decrease in the dust polarization fraction with increasing column density above N_H ≃ 1 × 10ÂČÂč cm⁻ÂČ and in particular a sharp drop above N_H ≃ 1.5 × 10ÂČÂČ cm⁻ÂČ. We characterize the spatial structure of the polarization angle using the angle dispersion function. We find that the polarization angle is ordered over extended areas of several square degrees, separated by filamentary structures of high angle dispersion function. These appear as interfaces where the sky projection of the magnetic field changes abruptly without variations in the column density. The polarization fraction is found to be anti-correlated with the dispersion of polarization angles. These results suggest that, at the resolution of 1°, depolarization is due mainly to fluctuations in the magnetic field orientation along the line of sight, rather than to the loss of grain alignment in shielded regions. We also compare the polarization of thermal dust emission with that of synchrotron measured with Planck, low-frequency radio data, and Faraday rotation measurements toward extragalactic sources. These components bear resemblance along the Galactic plane and in some regions such as the Fan and North Polar Spur regions. The poor match observed in other regions shows, however, that dust, cosmic-ray electrons, and thermal electrons generally sample different parts of the line of sight. Reproduced with permission, © ESO, 201
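    For reference, the quantities mapped here derive from the Stokes parameters in the standard way; the sketch below uses the textbook formulae (the angle sign convention, IAU vs. HEALPix, and the crude debiasing step are our assumptions, not the paper's estimators, which account for the full noise covariance):

```python
import numpy as np

def polarization_maps(I, Q, U, sigma_p=None):
    """Naive polarization fraction and angle from Stokes I, Q, U maps."""
    p = np.sqrt(Q**2 + U**2) / I       # polarization fraction (biased high by noise)
    psi = 0.5 * np.arctan2(U, Q)       # polarization angle, radians
    if sigma_p is not None:
        # crude noise-bias correction where the signal-to-noise on p is low
        p = np.sqrt(np.clip(p**2 - sigma_p**2, 0.0, None))
    return p, psi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    I = np.full((4, 4), 10.0)
    Q, U = rng.normal(0.5, 0.1, (4, 4)), rng.normal(0.2, 0.1, (4, 4))
    p, psi = polarization_maps(I, Q, U, sigma_p=0.01)
    print(p.mean(), np.degrees(psi).mean())
```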

    The Athena X-ray Integral Field Unit: a consolidated design for the system requirement review of the preliminary definition phase

    Full text link
    The Athena X-ray Integral Field Unit (X-IFU) is the high resolution X-ray spectrometer, studied since 2015 for flying in the mid-30s on the Athena space X-ray Observatory, a versatile observatory designed to address the Hot and Energetic Universe science theme, selected in November 2013 by the Survey Science Committee. Based on a large format array of Transition Edge Sensors (TES), it aims to provide spatially resolved X-ray spectroscopy, with a spectral resolution of 2.5 eV (up to 7 keV) over a hexagonal field of view of 5 arc minutes (equivalent diameter). The X-IFU entered its System Requirement Review (SRR) in June 2022, at about the same time that ESA called for an overall X-IFU redesign (including the X-IFU cryostat and the cooling chain), due to an unanticipated cost overrun of Athena. In this paper, after illustrating the breakthrough capabilities of the X-IFU, we describe the instrument as presented at its SRR, browsing through all the subsystems and associated requirements. We then show the instrument budgets, with a particular emphasis on the anticipated budgets of some of its key performance parameters. Finally, we briefly discuss the ongoing key technology demonstration activities, the calibration and the activities foreseen in the X-IFU Instrument Science Center, and touch on communication and outreach activities, the consortium organisation, and the life cycle assessment of X-IFU aiming at minimising the environmental footprint associated with the development of the instrument. Thanks to the studies conducted so far on X-IFU, it is expected that along the design-to-cost exercise requested by ESA, the X-IFU will maintain flagship capabilities in spatially resolved high resolution X-ray spectroscopy, enabling most of the original X-IFU related scientific objectives of the Athena mission to be retained. (abridged). Comment: 48 pages, 29 figures, accepted for publication in Experimental Astronomy with minor editing.

    The Athena X-ray Integral Field Unit: a consolidated design for the system requirement review of the preliminary definition phase

    Get PDF
    The Athena X-ray Integral Field Unit (X-IFU) is the high resolution X-ray spectrometer studied since 2015 for flying in the mid-30s on the Athena space X-ray Observatory. Athena is a versatile observatory designed to address the Hot and Energetic Universe science theme, as selected in November 2013 by the Survey Science Committee. Based on a large format array of Transition Edge Sensors (TES), X-IFU aims to provide spatially resolved X-ray spectroscopy, with a spectral resolution of 2.5 eV (up to 7 keV) over a hexagonal field of view of 5 arc minutes (equivalent diameter). The X-IFU entered its System Requirement Review (SRR) in June 2022, at about the same time that ESA called for an overall X-IFU redesign (including the X-IFU cryostat and the cooling chain), due to an unanticipated cost overrun of Athena. In this paper, after illustrating the breakthrough capabilities of the X-IFU, we describe the instrument as presented at its SRR (i.e. in the course of its preliminary definition phase, so-called B1), browsing through all the subsystems and associated requirements. We then show the instrument budgets, with a particular emphasis on the anticipated budgets of some of its key performance parameters, such as the instrument efficiency, spectral resolution, energy scale knowledge, count rate capability, non X-ray background and target of opportunity efficiency. Finally, we briefly discuss the ongoing key technology demonstration activities, the calibration and the activities foreseen in the X-IFU Instrument Science Center, touch on communication and outreach activities, the consortium organisation and the life cycle assessment of X-IFU aiming at minimising the environmental footprint associated with the development of the instrument. Thanks to the studies conducted so far on X-IFU, it is expected that along the design-to-cost exercise requested by ESA, the X-IFU will maintain flagship capabilities in spatially resolved high resolution X-ray spectroscopy, enabling most of the original X-IFU related scientific objectives of the Athena mission to be retained. The X-IFU will be provided by an international consortium led by France, The Netherlands and Italy, with ESA member state contributions from Belgium, Czech Republic, Finland, Germany, Poland, Spain and Switzerland, and with additional contributions from the United States and Japan. The French contribution to X-IFU is funded by CNES, CNRS and CEA. This work has also been supported by ASI (Italian Space Agency) through the Contract 2019-27-HH.0, and by the ESA (European Space Agency) Core Technology Program (CTP) Contract No. 4000114932/15/NL/BW and the AREMBES - ESA CTP No. 4000116655/16/NL/BW. This publication is part of grant RTI2018-096686-B-C21 funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”. This publication is part of grants RTI2018-096686-B-C21 and PID2020-115325GB-C31 funded by MCIN/AEI/10.13039/501100011033.
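    For scale, the quoted spectral resolution corresponds to a resolving power of a few thousand at the top of the band; the figure below is our own back-of-the-envelope arithmetic, not a number quoted by the paper:

```latex
% Resolving power implied by a 2.5 eV resolution at 7 keV (our own arithmetic).
R = \frac{E}{\Delta E} = \frac{7000~\mathrm{eV}}{2.5~\mathrm{eV}} = 2800
```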
    • 

    corecore