    ISO continuum observations of quasars at z=1-4 I. Spectral energy distributions of quasars from the UV to far-infrared

    Eight luminous quasars with -30 < M_B < -27 at z = 1.4 - 3.7 have been observed in the mid- and far-infrared using ISO. All the quasars were detected in the mid-infrared bands of ISOCAM, while no far-infrared detections were made with ISOPHOT. Supplementing the ISO observations with optical and near-infrared photometry obtained from the ground, mostly within 17 months of the ISO observations, spectral energy distributions (SEDs) from the UV to the far-infrared have been constructed. The SEDs are compared with the mean spectral energy distribution (MED) of low-redshift quasars with -27 < M_B < -22. It is shown that our far-infrared observations were limited by confusion noise due to crowded sources. Comment: 9 pages, 3 figures; accepted for publication in Astronomy and Astrophysics
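
    As an aside on how such broadband SEDs are assembled, the sketch below converts multi-band photometry into rest-frame nu*F_nu points. It is a minimal illustration only: the bands, magnitudes, and zero points are placeholders, not values from the paper.

    ```python
    # Minimal sketch: assembling a UV-to-far-infrared SED from multi-band photometry.
    # All bands, magnitudes, and zero points below are illustrative placeholders.

    def sed_points(bands, z):
        """Convert observed magnitudes to rest-frame (nu, nu*F_nu) points."""
        out = []
        for name, lam_um, mag, zeropoint_jy in bands:
            f_nu = zeropoint_jy * 10 ** (-0.4 * mag)      # flux density [Jy]
            nu_obs = 3e14 / lam_um                        # c / lambda [Hz], lambda in microns
            nu_rest = nu_obs * (1 + z)                    # shift to the quasar rest frame
            out.append((nu_rest, nu_obs * f_nu * 1e-26))  # nu*F_nu [W m^-2]; 1 Jy = 1e-26 W m^-2 Hz^-1
        return sorted(out)

    # Illustrative photometry: (band, wavelength [micron], magnitude, zero point [Jy])
    bands = [("B", 0.44, 17.2, 4260.0), ("K", 2.2, 14.8, 667.0), ("LW10", 12.0, 11.5, 28.3)]
    for nu, nufnu in sed_points(bands, z=2.0):
        print(f"nu_rest = {nu:.3e} Hz, nu*F_nu = {nufnu:.3e} W/m^2")
    ```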

    An application of Six Sigma to reduce waste

    Six Sigma has been considered a powerful business strategy that employs a well-structured continuous improvement methodology to reduce process variability and drive out waste within business processes through the effective application of statistical tools and techniques. Although Six Sigma enjoys wide acceptance in many organizations today, there appears to be virtually no in-depth case study of Six Sigma in the existing literature covering how the Six Sigma methodology has been used, how its tools and techniques have been applied, and how the benefits have been generated. This paper presents a case study illustrating the effective use of Six Sigma to reduce waste in a coating process. It describes in detail how the project was selected and how the Six Sigma methodology was applied. It also shows how various tools and techniques within the Six Sigma methodology were employed to achieve substantial financial benefits.
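
    To make the "reduce process variability" goal concrete, here is a minimal sketch of the two standard Six Sigma process metrics, DPMO and sigma level; the defect counts are illustrative, not figures from the case study.

    ```python
    # Minimal sketch of the standard Six Sigma process metrics: DPMO and sigma level.
    # The defect counts below are illustrative, not figures from the case study.
    from statistics import NormalDist

    def dpmo(defects, units, opportunities_per_unit):
        """Defects per million opportunities."""
        return defects / (units * opportunities_per_unit) * 1_000_000

    def sigma_level(dpmo_value, shift=1.5):
        """Short-term sigma level, using the conventional 1.5-sigma long-term shift."""
        return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

    d = dpmo(defects=387, units=10_000, opportunities_per_unit=5)
    print(f"DPMO = {d:.0f}, sigma level = {sigma_level(d):.2f}")
    ```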

    Per-Pixel Extrusion Mapping with Correct Silhouette

    Per-pixel extrusion mapping consists of creating virtual geometry stored in a texture over a polygon model without increasing its density. There are four types of extrusion mapping, namely basic extrusion, outward extrusion, beveled extrusion, and chamfered extrusion. These techniques produce satisfactory results on planar surfaces, but on curved surfaces the silhouette is not visible at the edges of the extruded forms on the 3D surface geometry, because the techniques do not take the curvature of the 3D mesh into account. In this paper, we present an improvement that uses curved ray tracing to correct the silhouette problem by combining the per-pixel extrusion mapping techniques with a quadratic approximation computed at each vertex of the 3D mesh.
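
    A minimal sketch of the idea, under illustrative assumptions (the extrusion field and quadratic coefficients below are placeholders, not the paper's implementation): the ray marched above the texture is bent by a quadratic curvature term, so grazing rays can miss an extrusion and produce the correct silhouette.

    ```python
    # Minimal sketch of curved-ray extrusion mapping in tangent space (illustrative).
    # The quadratic coefficients a, b stand in for the per-vertex quadratic
    # approximation of the curved 3D surface; the extrusion field is a toy example.
    import numpy as np

    def extrusion_height(u, v):
        """Toy extrusion field: a disc of height 0.5 raised above the polygon."""
        return 0.5 if (u - 0.5) ** 2 + (v - 0.5) ** 2 < 0.04 else 0.0

    def curved_ray_march(origin, direction, a=0.8, b=0.8, steps=256, max_t=2.0):
        """March a ray bent by the quadratic surface term until it hits the extrusion."""
        for i in range(steps):
            p = origin + (max_t * i / steps) * direction
            # The quadratic term lifts the ray away from a convex surface, so
            # grazing rays miss extrusions that a straight (flat-surface) ray hits.
            if p[2] + a * p[0] ** 2 + b * p[1] ** 2 <= extrusion_height(p[0], p[1]):
                return p             # hit: shade this texel
        return None                  # miss: background shows through (silhouette)

    o, d = np.array([0.5, 0.0, 1.0]), np.array([0.0, 0.7, -0.7])
    print("flat ray:", curved_ray_march(o, d, a=0.0, b=0.0))  # hits the extrusion
    print("bent ray:", curved_ray_march(o, d))                # misses: correct silhouette
    ```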

    Computational Particle Physics for Event Generators and Data Analysis

    High-energy physics data analysis relies heavily on the comparison between experimental and simulated data, as stressed recently by the Higgs search at the LHC and the identification of a Higgs-like new boson. The first link in the full simulation chain is event generation, both for backgrounds and for expected signals. Nowadays event generators are based on the automatic computation of the matrix element or amplitude for each process of interest. Moreover, recent analysis techniques based on the matrix element likelihood method assign to every event probabilities of belonging to each of a given set of possible processes. This method, originally used for the top-quark mass measurement, although computationally intensive, has shown its power at the LHC in extracting the new boson signal from the background. Serving both needs, the automatic calculation of matrix elements is therefore more than ever of prime importance for particle physics. Initiated in the eighties, the techniques have matured for lowest-order (tree-level) calculations, but become complex and CPU-time consuming when higher-order calculations involving loop diagrams are necessary, as for QCD processes at the LHC. New calculation techniques for next-to-leading order (NLO) have surfaced, making possible the generation of processes with many final-state particles (up to 6). While NLO calculations are in many cases under control, although not yet fully automatic, even higher-precision calculations involving processes at two loops or more remain a big challenge. After a short introduction to particle physics and the related theoretical framework, we review some of the computing techniques that have been developed to make these calculations automatic. The main available packages and some of the most important applications for simulation and data analysis, in particular at the LHC, are also summarized. Comment: 19 pages, 11 figures, Proceedings of CCP (Conference on Computational Physics) Oct. 2012, Osaka (Japan), in IOP Journal of Physics: Conference Series
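
    The matrix element likelihood method can be summarized in a short sketch: each event is assigned a probability of belonging to each process from its normalized squared matrix element via Bayes' rule. The toy matrix elements and priors below are illustrative stand-ins for the real automatic computations.

    ```python
    # Minimal sketch of the matrix-element likelihood method: each event x gets a
    # probability of belonging to process i from its (normalized) squared matrix
    # element. The toy |M|^2 functions below are placeholders, not real amplitudes.
    import numpy as np

    rng = np.random.default_rng(0)

    def me_signal(x):      # toy |M|^2 for the signal hypothesis (peaked at x = 125)
        return np.exp(-0.5 * ((x - 125.0) / 2.0) ** 2)

    def me_background(x):  # toy |M|^2 for the background hypothesis (falling spectrum)
        return np.exp(-x / 50.0)

    def event_probabilities(x, processes, priors, lo=100.0, hi=180.0, n=100_000):
        """P(i | x) via Bayes' rule, normalizing each |M_i|^2 over phase space by Monte Carlo."""
        grid = rng.uniform(lo, hi, n)
        likelihoods = []
        for me in processes:
            norm = (hi - lo) * np.mean(me(grid))   # integral of |M_i|^2 over the observable
            likelihoods.append(me(x) / norm)
        post = np.array(priors) * np.array(likelihoods)
        return post / post.sum()

    p_sig, p_bkg = event_probabilities(124.2, [me_signal, me_background], priors=[0.1, 0.9])
    print(f"P(signal|x) = {p_sig:.3f}, P(background|x) = {p_bkg:.3f}")
    ```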

    Is the evidence for dark energy secure?

    Several kinds of astronomical observations, interpreted in the framework of the standard Friedmann-Robertson-Walker cosmology, have indicated that our universe is dominated by a Cosmological Constant. The dimming of distant Type Ia supernovae suggests that the expansion rate is accelerating, as if driven by vacuum energy, and this has been indirectly substantiated through studies of angular anisotropies in the cosmic microwave background (CMB) and of spatial correlations in the large-scale structure (LSS) of galaxies. However, there is no compelling direct evidence yet for (the dynamical effects of) dark energy. The precision CMB data can be equally well fitted without dark energy if the spectrum of primordial density fluctuations is not quite scale-free and if the Hubble constant is lower globally than its locally measured value. The LSS data can also be satisfactorily fitted if there is a small component of hot dark matter, as would be provided by neutrinos of mass ~0.5 eV. Although such an Einstein-de Sitter model cannot explain the SNe Ia Hubble diagram or the position of the `baryon acoustic oscillation' peak in the autocorrelation function of galaxies, it may be possible to do so, e.g., in an inhomogeneous Lemaitre-Tolman-Bondi cosmology where we are located in a void which is expanding faster than the average. Such alternatives may seem contrived, but this must be weighed against our lack of any fundamental understanding of the inferred tiny energy scale of the dark energy. It may well be an artifact of an oversimplified cosmological model, rather than having physical reality. Comment: 12 pages, 5 figures; to appear in a special issue of General Relativity and Gravitation, eds. G.F.R. Ellis et al.; Changes: references reformatted in journal style - text unchanged
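
    The supernova dimming argument can be made quantitative with the standard textbook luminosity-distance formulas; the sketch below compares the distance modulus at z = 0.5 in an Einstein-de Sitter universe with that in flat LambdaCDM (the parameter values are illustrative round numbers).

    ```python
    # Minimal sketch of the SNe Ia dimming argument: luminosity distances in an
    # Einstein-de Sitter universe versus flat LambdaCDM (standard textbook formulas;
    # H0 and the density parameters are illustrative round numbers).
    import numpy as np
    from scipy.integrate import quad

    C_KM_S, H0 = 299792.458, 70.0  # speed of light [km/s], Hubble constant [km/s/Mpc]

    def lum_dist(z, omega_m, omega_l):
        """Luminosity distance in Mpc for a flat universe."""
        integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp) ** 3 + omega_l)
        comoving, _ = quad(integrand, 0.0, z)
        return (1 + z) * (C_KM_S / H0) * comoving

    def dist_mod(d_mpc):
        return 5.0 * np.log10(d_mpc) + 25.0

    z = 0.5
    d_eds = lum_dist(z, 1.0, 0.0)    # Einstein-de Sitter: matter only
    d_lcdm = lum_dist(z, 0.3, 0.7)   # concordance LambdaCDM
    print(f"Delta mu at z={z}: {dist_mod(d_lcdm) - dist_mod(d_eds):.2f} mag fainter in LCDM")
    ```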

    Historical Costume Simulation

    The aim of this study is to produce accurate reproductions of digital clothing from historical sources and to investigate the implications of developing them for online museum exhibits. To achieve this, the study proceeds in several stages. First, the theoretical background of the main issues is established through a review of published work on 3D apparel CAD, drape, and digital curation. Next, using a 3D apparel CAD system, the study attempts a realistic visualization of the costumes based on the establishment of a valid simulation reference. This paper reports the pilot exercise carried out to scope the requirements for going forward.

    Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)

    Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve the robustness, efficiency, and accuracy of computational fluid dynamics simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for turbulent flows is assessed on four configurations: a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds-averaged Navier-Stokes (RANS) solutions have been obtained using the Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions, using line- and point-implicit preconditioners, have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method, which broadly represents the baseline solver technology, have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases, with progressively increasing benefits on finer grids.
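
    For readers unfamiliar with the distinction, the sketch below contrasts a line-implicit preconditioner, which inverts the full tridiagonal coupling along a grid line with the Thomas algorithm, against a point-implicit one, which keeps only the diagonal, on a 1D model problem. This is a generic illustration, not the USM3D implementation, whose multi-colored implicit lines run through an unstructured mesh.

    ```python
    # Minimal sketch of line- versus point-implicit preconditioning for a 1D model
    # problem (illustrative only; not the USM3D multi-color implementation).
    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
        n = len(d)
        cp, dp = np.zeros(n), np.zeros(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.zeros(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    n = 50
    a, b, c = -np.ones(n), 2.5 * np.ones(n), -np.ones(n)   # model stiff line coupling
    r = np.random.default_rng(1).standard_normal(n)        # residual to precondition

    z_line = thomas(a, b, c, r)    # line-implicit: inverts the whole line coupling
    z_point = r / b                # point-implicit: keeps only the diagonal
    print(np.linalg.norm(z_line - z_point))  # the two preconditioned residuals differ
    ```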

    Two-Dimensional Matter: Order, Curvature and Defects

    Many systems in nature and the synthetic world involve ordered arrangements of units on two-dimensional surfaces. We review here the fundamental role played by both the topology of the underlying surface and its detailed curvature. Topology dictates certain broad features of the defect structure of the ground state, but curvature-driven energetics controls the detailed structure of ordered phases. Among the surprises is the appearance in the ground state of structures that would normally be thermal excitations and thus prohibited at zero temperature. Examples include excess dislocations in the form of grain boundary scars for spherical crystals above a minimal system size, dislocation unbinding for toroidal hexatics, interstitial fractionalization in spherical crystals, and the appearance of well-separated disclinations for toroidal crystals. Much of the analysis leads to universal predictions that do not depend on the details of the microscopic interactions that lead to order in the first place. These predictions are subject to test by the many experimental soft and hard matter systems that lead to curved ordered structures, such as colloidal particles self-assembling on droplets of one liquid in a second liquid. The defects themselves may be functionalized to create ligands with directional bonding. Thus nano- to meso-scale superatoms may be designed with specific valency for use in building supermolecules and novel bulk materials. Parameters such as particle number, geometrical aspect ratios, and anisotropy of elastic moduli permit the tuning of the precise architecture of the superatoms and associated supermolecules. Thus the field has tremendous potential from both a fundamental and a materials science/supramolecular chemistry viewpoint. Comment: Review article, 102 pages, 59 figures, submitted to Advances in Physics
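
    The topological constraint mentioned above can be stated as a one-line formula: for sixfold (triangular-lattice) order on a closed surface of genus g, the disclination charges q_i = 6 - c_i (with c_i the coordination number) must sum to 6*chi = 6(2 - 2g). A quick check is sketched below.

    ```python
    # Minimal sketch of the topological constraint on defect charges: for p-fold
    # order on a closed surface, the disclination charges must sum to p * chi,
    # where chi = 2 - 2g is the Euler characteristic of a genus-g surface.
    def required_total_charge(genus, order=6):
        chi = 2 - 2 * genus
        return order * chi

    print(required_total_charge(0))  # sphere: +12, e.g. twelve 5-fold disclinations
    print(required_total_charge(1))  # torus:   0, defects only in +/- pairs (or none)
    ```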

    Active compensation of aperture discontinuities for WFIRST-AFTA: analytical and numerical comparison of propagation methods and preliminary results with a WFIRST-AFTA-like pupil

    The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffractive artifacts introduced at the science camera by apertures of increasing complexity. The coronagraph for the WFIRST/AFTA mission will be the first such instrument in space with a two-deformable-mirror wavefront control system. Regardless of the control algorithm for these multiple deformable mirrors, it will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present an analytical description of the different approximations used to simulate these propagation effects. In Annex A, we prove analytically that, in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects, and we provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the Active Compensation of Aperture Discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the High-contrast imager for Complex Aperture Telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current deformable mirrors, we are able to obtain, in numerical simulations, a dark hole with an AFTA-like pupil. Our numerical simulations show that we can obtain contrast better than 2×10^{-9} in monochromatic light and better than 3×10^{-8} with a 10% bandwidth, between 5 and 14 lambda/D. Comment: 16 pages, 5 figures, accepted for publication (Oct. 23, 2015) in Journal of Astronomical Telescopes, Instruments, and Systems, special WFIRST-AFTA coronagraph
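
    As a generic illustration of the kind of out-of-pupil propagation being simulated, here is a minimal sketch of Fresnel propagation with the transfer-function method; the pupil, sampling, and wavelength are illustrative, and this is not the WFIRST-AFTA code.

    ```python
    # Minimal sketch of Fresnel propagation of a pupil field using the
    # transfer-function method; geometry and wavelength are illustrative.
    import numpy as np

    def fresnel_propagate(field, dx, wavelength, z):
        """Propagate a sampled complex field a distance z (Fresnel approximation)."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies [1/m]
        fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
        h = np.exp(-1j * np.pi * wavelength * z * fx2)  # Fresnel transfer function
        return np.fft.ifft2(np.fft.fft2(field) * h)

    # Illustrative circular pupil, 10 mm across, propagated 0.5 m at 600 nm.
    n, dx = 512, 40e-6
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
    pupil = (np.hypot(x, y) < 5e-3).astype(complex)
    out = fresnel_propagate(pupil, dx, 600e-9, 0.5)
    print("peak intensity after propagation:", np.abs(out).max() ** 2)
    ```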