
    The Multi-Object, Fiber-Fed Spectrographs for SDSS and the Baryon Oscillation Spectroscopic Survey

    We present the design and performance of the multi-object fiber spectrographs for the Sloan Digital Sky Survey (SDSS) and their upgrade for the Baryon Oscillation Spectroscopic Survey (BOSS). Originally commissioned in Fall 1999 on the 2.5-m aperture Sloan Telescope at Apache Point Observatory, the spectrographs produced more than 1.5 million spectra for the SDSS and SDSS-II surveys, enabling a wide variety of Galactic and extra-galactic science including the first observation of baryon acoustic oscillations in 2005. The spectrographs were upgraded in 2009 and are currently in use for BOSS, the flagship survey of the third-generation SDSS-III project. BOSS will measure redshifts of 1.35 million massive galaxies to redshift 0.7 and Lyman-alpha absorption of 160,000 high redshift quasars over 10,000 square degrees of sky, making percent-level measurements of the absolute cosmic distance scale of the Universe and placing tight constraints on the equation of state of dark energy. The twin multi-object fiber spectrographs utilize a simple optical layout with reflective collimators, gratings, all-refractive cameras, and state-of-the-art CCD detectors to produce hundreds of spectra simultaneously in two channels over a bandpass covering the near ultraviolet to the near infrared, with a resolving power R = \lambda/FWHM ~ 2000. Building on proven heritage, the spectrographs were upgraded for BOSS with volume-phase holographic gratings and modern CCD detectors, improving the peak throughput by nearly a factor of two, extending the bandpass to cover 360 < \lambda < 1000 nm, and increasing the number of fibers from 640 to 1000 per exposure. In this paper we describe the original SDSS spectrograph design and the upgrades implemented for BOSS, and document the predicted and measured performances. Comment: 43 pages, 42 figures, revised according to referee report and accepted by AJ. Provides background for the instrument responsible for SDSS and BOSS spectra. 4th in a series of survey technical papers released in Summer 2012, including arXiv:1207.7137 (DR9), arXiv:1207.7326 (Spectral Classification), and arXiv:1208.0022 (BOSS Overview).
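
    As a quick back-of-the-envelope illustration (not a figure taken from the paper), the quoted resolving power R = \lambda/FWHM ~ 2000 corresponds to a resolution element of

        \mathrm{FWHM} = \frac{\lambda}{R} \approx \frac{600\ \mathrm{nm}}{2000} = 0.3\ \mathrm{nm}

    at \lambda = 600 nm, ranging from roughly 0.18 nm at the blue end (360 nm) to 0.5 nm at the red end (1000 nm) of the upgraded BOSS bandpass.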

    Efficient and High-Quality Rendering of Higher-Order Geometric Data Representations

    In computer-aided design (CAD), industrial products are designed using a virtual 3D model. A CAD model typically consists of curves and surfaces in a parametric representation, in most cases non-uniform rational B-splines (NURBS). The same representation is also used for the analysis, optimization and presentation of the model. In each phase of this process, different visualizations are required to provide appropriate user feedback. Designers work with illustrative and realistic renderings, engineers need a comprehensible visualization of the simulation results, and usability studies or product presentations benefit from using a 3D display. However, the interactive visualization of NURBS models and corresponding physical simulations is a challenging task because of the computational complexity and the limited graphics hardware support. This thesis proposes four novel rendering approaches that improve the interactive visualization of CAD models and their analysis. The presented algorithms exploit the latest graphics hardware capabilities to advance the state of the art in terms of quality, efficiency and performance. In particular, two approaches describe the direct rendering of the parametric representation without precomputed approximations and time-consuming pre-processing steps. New data structures and algorithms are presented for the efficient partition, classification, tessellation, and rendering of trimmed NURBS surfaces, as well as the first direct isosurface ray-casting approach for NURBS-based isogeometric analysis. The other two approaches introduce the versatile concept of programmable order-independent semi-transparency for the illustrative and comprehensible visualization of depth-complex CAD models, and a novel method for the hybrid reprojection of opaque and semi-transparent image information to accelerate stereoscopic rendering. Both approaches are also applicable to standard polygonal geometry, which makes them relevant to the computer graphics and virtual reality research communities. The evaluation is based on real-world NURBS-based models and simulation data. The results show that rendering can be performed directly on the underlying parametric representation with interactive frame rates and subpixel-precise image results. The introduction of programmable semi-transparency additionally enables collaborative 3D interaction techniques for exploring the models in virtual environments. The computational costs of additional visualization effects, such as semi-transparency and stereoscopic rendering, are reduced to maintain interactive frame rates. The benefit of this performance gain was confirmed by quantitative measurements and a pilot user study.
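
    The direct-rendering approaches above operate on the parametric NURBS representation itself. As a point of reference for what that representation involves, the following is a minimal CPU-side sketch of rational B-spline curve evaluation via the Cox-de Boor recursion; it is illustrative only (all names are chosen here) and is not the thesis's GPU algorithm.

        import numpy as np

        def bspline_basis(i, p, u, knots):
            """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
            if p == 0:
                return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
            left = right = 0.0
            den = knots[i + p] - knots[i]
            if den > 0.0:
                left = (u - knots[i]) / den * bspline_basis(i, p - 1, u, knots)
            den = knots[i + p + 1] - knots[i + 1]
            if den > 0.0:
                right = (knots[i + p + 1] - u) / den * bspline_basis(i + 1, p - 1, u, knots)
            return left + right

        def nurbs_curve_point(u, p, knots, ctrl_pts, weights):
            """Evaluate a NURBS curve point as the weighted rational combination
            of its control points (u strictly inside the knot range)."""
            num, den = np.zeros(3), 0.0
            for i, (P, w) in enumerate(zip(ctrl_pts, weights)):
                Nw = bspline_basis(i, p, u, knots) * w
                num += Nw * np.asarray(P, dtype=float)
                den += Nw
            return num / den

    Trimmed-surface rendering additionally classifies parameter-domain samples against the trim curves before evaluation; the thesis's contribution is performing this partition, classification and tessellation efficiently on the GPU.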

    Novel effects of strains in graphene and other two dimensional materials

    The analysis of the electronic properties of strained or lattice-deformed graphene combines ideas from classical condensed matter physics, soft matter, and geometrical aspects of quantum field theory (QFT) in curved spaces. Recent theoretical and experimental work shows the influence of strains on many properties of graphene not considered before, such as electronic transport, spin-orbit coupling, the formation of Moiré patterns, and optics. There is also significant evidence of anharmonic effects, which can modify the structural properties of graphene. These phenomena are not restricted to graphene, and they are being intensively studied in other two dimensional materials, such as the metallic dichalcogenides. We review here recent developments related to the role of strains in the structural and electronic properties of graphene and other two dimensional compounds. Comment: 75 pages, 15 figures, review article
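
    One frequently quoted example of such strain effects is the pseudo-gauge field generated by a smooth strain field u_{ij}; in one common convention (prefactors, signs and valley labels vary between references, and this expression is added here only as an illustration),

        A_x \propto \frac{\beta}{a}\,\left(u_{xx} - u_{yy}\right), \qquad
        A_y \propto -\frac{2\beta}{a}\, u_{xy}, \qquad
        B_s = \partial_x A_y - \partial_y A_x,

    where a is the lattice constant, \beta = -\partial \ln t / \partial \ln a measures how the hopping t changes with bond length, and B_s is the resulting pseudo-magnetic field, which has opposite signs in the two valleys.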

    Digital predictions of complex cylinder packed columns

    A digital computational approach has been developed to simulate realistic structures of packed beds. The underlying principle of the method is digitisation of the particles and packing space, enabling the generation of realistic structures. Previous publications [Caulkin, R., Fairweather, M., Jia, X., Gopinathan, N., & Williams, R.A. (2006). An investigation of packed columns using a digital packing algorithm. Computers & Chemical Engineering, 30, 1178–1188; Caulkin, R., Ahmad, A., Fairweather, M., Jia, X., & Williams, R. A. (2007). An investigation of sphere packed shell-side columns using a digital packing algorithm. Computers & Chemical Engineering, 31, 1715–1724] have demonstrated the ability of the code in predicting the packing of spheres. For cylindrical particles, however, the original, random walk-based code proved less effective at predicting bed structure. In response to this, the algorithm has been modified to make use of collisions to guide particle movement in a way which does not sacrifice the advantage of simulation speed. Results of both the original and modified code are presented, with bulk and local voidage values compared with data derived by experimental methods. The results demonstrate that collisions and their impact on packing structure cannot be disregarded if realistic packing structures are to be obtained.
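
    As a hedged illustration of what digitisation of the particles and the packing space means in practice, the following toy sketch (not the published code; the grid and particle representations are assumptions made here) tests a trial particle move against an occupancy grid:

        import numpy as np

        def try_move(occupancy, particle_voxels, offset):
            """Attempt to move a digitised particle, given as an (N, 3) array of
            voxel indices, by an integer offset on a 3-D occupancy grid.
            Returns the moved voxel indices if the move is collision-free,
            otherwise None (the move is rejected)."""
            moved = particle_voxels + np.asarray(offset)
            # reject moves that leave the packing domain
            if (moved < 0).any() or (moved >= np.array(occupancy.shape)).any():
                return None
            # collision with voxels already occupied by settled particles
            if occupancy[moved[:, 0], moved[:, 1], moved[:, 2]].any():
                return None
            return moved

    In a random-walk packing loop, rejected (colliding) trial moves can then be used to bias a particle's subsequent translations and rotations, which is the kind of collision-guided movement the modified algorithm relies on.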

    A Unified Surface Geometric Framework for Feature-Aware Denoising, Hole Filling and Context-Aware Completion

    Technologies for 3D data acquisition and 3D printing have developed enormously in the past few years, and, consequently, the demand for 3D virtual twins of the original scanned objects has increased. In this context, feature-aware denoising, hole filling and context-aware completion are three essential (but far from trivial) tasks. In this work, they are integrated within a geometric framework and realized through a unified variational model aiming at recovering triangulated surfaces from scanned, damaged and possibly incomplete noisy observations. The underlying non-convex optimization problem incorporates two regularisation terms: a discrete approximation of the Willmore energy, forcing local sphericity and suited to the recovery of rounded features, and an approximation of the \ell_0 pseudo-norm penalty, favouring sparsity in the normal variation. The proposed numerical method solving the model is parameterization-free, avoids expensive implicit volume-based computations and is based on the efficient use of the Alternating Direction Method of Multipliers. Experiments show how the proposed framework can provide a robust and elegant solution suited for accurate restorations even in the presence of severe random noise and large damaged areas.
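
    Schematically, and with symbols chosen here purely for illustration (the paper's exact discretisation, weights and constraints differ), the kind of non-convex model described combines a fidelity term with the two regularisers

        \min_{V}\; \tfrac{1}{2}\,\|V - V^{0}\|_{2}^{2} \;+\; \alpha\, W(V) \;+\; \beta\, \|\nabla N(V)\|_{0},

    where V are the vertex positions of the triangulated surface, V^{0} the noisy observations, W(V) a discrete Willmore (squared mean curvature) energy, and \|\nabla N(V)\|_{0} an \ell_0-type penalty on the variation of the face normals; ADMM then splits the smooth and non-smooth terms into alternating, easier subproblems.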

    A FLEXIBLE METHODOLOGY FOR OUTDOOR/INDOOR BUILDING RECONSTRUCTION FROM OCCLUDED POINT CLOUDS

    Terrestrial Laser Scanning data are increasingly used in building surveys, not only in the cultural heritage domain but also for as-built modelling of large and medium size civil structures. However, raw point clouds derived from laser scanning are generally not directly ready for the generation of such models; a time-consuming manual modelling phase has to be taken into account. In addition, the large presence of occlusions and clutter may result in low-quality building models when state-of-the-art automatic modelling procedures are applied. This paper presents an automated procedure to convert raw point clouds into semantically-enriched building models. The developed method mainly focuses on the geometrical complexity typical of modern buildings, with a clear prevalence of planar features. A characteristic of this methodology is the possibility to work with both outdoor and indoor building environments. In order to operate under severe occlusions and clutter, a pair of completion algorithms were designed to generate a plausible and reliable model. Finally, some examples of the developed modelling procedure are presented and discussed.
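
    Planar-feature extraction of this kind is commonly bootstrapped with a RANSAC plane fit; the following minimal sketch is given under that assumption and is not the paper's pipeline (all parameter values are illustrative):

        import numpy as np

        def ransac_plane(points, n_iter=500, inlier_tol=0.02, seed=None):
            """Fit a dominant plane n.x + d = 0 to an (N, 3) point cloud with RANSAC.
            Returns (normal, d, inlier_mask); inlier_tol is in the cloud's units."""
            rng = np.random.default_rng(seed)
            best_n, best_d = None, None
            best_in = np.zeros(len(points), dtype=bool)
            for _ in range(n_iter):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(n)
                if norm < 1e-12:          # degenerate (collinear) sample, skip
                    continue
                n = n / norm
                d = -float(n @ p0)
                inliers = np.abs(points @ n + d) < inlier_tol
                if inliers.sum() > best_in.sum():
                    best_n, best_d, best_in = n, d, inliers
            return best_n, best_d, best_in

    Repeating such a fit on the points left over after each extraction yields the planar segments (walls, floors, ceilings) from which a semantically-enriched indoor/outdoor model can then be assembled.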

    Numerical modeling of black holes as sources of gravitational waves in a nutshell

    These notes summarize basic concepts underlying numerical relativity and in particular the numerical modeling of black hole dynamics as a source of gravitational waves. Main topics are the 3+1 decomposition of general relativity, the concept of a well-posed initial value problem, the construction of initial data for general relativity, trapped surfaces and gravitational waves. Also, a brief summary is given of recent progress regarding the numerical evolution of black hole binary systems. Comment: 28 pages, lectures given at the winter school 'Conceptual and Numerical Challenges in Femto- and Peta-Scale Physics' in Schladming, Austria, 200
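
    The 3+1 decomposition mentioned here rewrites the spacetime metric in terms of a lapse \alpha, a shift vector \beta^i and a spatial metric \gamma_{ij}; the standard (ADM) form of the line element is

        ds^{2} = -\alpha^{2}\,dt^{2} + \gamma_{ij}\,\bigl(dx^{i} + \beta^{i}\,dt\bigr)\bigl(dx^{j} + \beta^{j}\,dt\bigr),

    so that Einstein's equations split into constraint equations on each spatial slice and evolution equations for \gamma_{ij} and its extrinsic curvature K_{ij}.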

    Fundus Imaging using Aplanat

    Retinal photography requires a complex optical system called a fundus camera, capable of illuminating and imaging simultaneously. Because of the restriction imposed by the aperture stop (the pupil), available fundus imaging systems suffer from a limited field of view; hence the peripheral area of the retina remains unimaged in the traditional approach. Such systems are also prone to common aberrations, which forces a compromise in image quality. In this thesis we propose a system that uses an aberration-free reflector, called an aplanat, instead of the conventional lens system for fundus imaging. The imaging optics are therefore based on reflection, unlike the conventional refractive lens system, resulting in negligible aberrations. The small reflector size also makes it a hand-held system with minimal complexity and power loss for illumination. Operating at the thermodynamic limit with a high numerical aperture makes it possible to inject the maximum amount of light into the eye at wide angles, which removes the need for mydriasis. The optical system was designed in Zemax Optical Studio 15.5 in mixed sequential mode by inserting the aplanat as a non-sequential CAD object into a sequential system comprising the eye and the imaging sensor. The CAD object was designed in Solid Edge ST8 from Cartesian data points created in MATLAB, and this solid object was then imported into Zemax through its CAD import capability. The proposed system uses three phases to image the retina completely: a narrow-throat aplanat for the region close to the optical axis, leaving a small hole at the optical axis; a wide-throat aplanat to image the peripheral area; and a normal lens system that covers the central hole. The conjugate plane of the retina was found to be a curved surface lying inside the aplanat, which prevents the use of Zemax for the imaging step, since it cannot place multiple off-axis sensors at the desired locations, aligned with the conjugate of the retina, to sense the ray bundles. Smoothness and accuracy of the reflector surface are also lost when importing the CAD object. Therefore, 3D image reconstruction of the retina was performed in a tool developed by a project partner. Together the three phases cover an almost 200° wide field of the retina, which accounts for 87% of the retinal surface. By exploiting the overlapping parts of the images from all three systems, stitching can be used to obtain the final complete image.

    Consistent depth video segmentation using adaptive surface models

    We propose a new approach for the segmentation of 3-D point clouds into geometric surfaces using adaptive surface models. Starting from an initial configuration, the algorithm converges to a stable segmentation through a new iterative split-and-merge procedure, which includes an adaptive mechanism for the creation and removal of segments. This allows the segmentation to adjust to changing input data throughout the video, leading to stable, temporally coherent, and traceable segments. We tested the method on a large variety of data acquired with different range imaging devices, including a structured-light sensor and a time-of-flight camera, and successfully segmented the videos into surface segments. We further demonstrated the feasibility of the approach using quantitative evaluations based on ground-truth data. This research is partially funded by the EU project IntellAct (FP7-269959), the Grup consolidat 2009 SGR155, the project PAU+ (DPI2011-27510), and the CSIC project CINNOVA (201150E088). B. Dellen acknowledges support from the Spanish Ministry of Science and Innovation through a Ramon y Cajal program. Peer Reviewed
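
    As a hedged illustration of the split-and-merge idea, the toy sketch below uses planar surface models only (the paper's adaptive surface models, thresholds and temporal handling are richer than this, and the function names are chosen here for illustration):

        import numpy as np

        def plane_rms(points):
            """Least-squares plane through a segment's 3-D points; returns the RMS
            orthogonal residual, i.e. how well a planar model explains the segment."""
            c = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - c, full_matrices=False)
            normal = vt[-1]               # direction of smallest variance
            return float(np.sqrt(np.mean(((points - c) @ normal) ** 2)))

        def split_or_merge(seg_a, seg_b, tol=0.01):
            """Decide whether two neighbouring segments should be merged, split or kept."""
            if plane_rms(np.vstack([seg_a, seg_b])) < tol:
                return "merge"            # one surface model explains both segments
            if plane_rms(seg_a) > tol or plane_rms(seg_b) > tol:
                return "split"            # a segment is poorly modelled: subdivide it
            return "keep"

    Applying such decisions iteratively from an initial configuration, and re-estimating the surface models as new depth frames arrive, is what yields the temporally coherent, traceable segments described above.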

    DIGITAL INPAINTING ALGORITHMS AND EVALUATION

    Digital inpainting is the technique of filling in the missing regions of an image or a video using information from the surrounding area. This technique has found widespread use in applications such as restoration, error recovery, multimedia editing, and video privacy protection. This dissertation addresses three significant challenges associated with existing and emerging inpainting algorithms and applications. The three key areas of impact are 1) structure completion for image inpainting algorithms, 2) a fast and efficient object-based video inpainting framework and 3) perceptual evaluation of large-area image inpainting algorithms. One of the main approaches existing image inpainting algorithms take to complete the missing information is a two-stage process: a structure completion step, to complete the boundaries of regions in the hole area, followed by a texture completion step using advanced texture synthesis methods. While the texture synthesis stage is important, it can be argued that the structure completion aspect is a vital component in improving perceptual inpainting quality. To this end, we introduce a global structure completion algorithm for the completion of missing boundaries using symmetry as the key feature. While existing methods for symmetry completion require a priori information, our method takes a non-parametric approach by utilizing the invariant nature of curvature to complete missing boundaries. Turning our attention from image to video inpainting, we readily observe that existing video inpainting techniques have evolved as an extension of image inpainting techniques. As a result, they suffer from various shortcomings including, among others, an inability to handle large missing spatio-temporal regions, significantly slow execution times making them impractical for interactive use, and the presence of temporal and spatial artifacts. To address these major challenges, we propose a fundamentally different method based on an object-based framework for improving the performance of video inpainting algorithms. We introduce a modular inpainting scheme in which we first segment the video into constituent objects using acquired background models, followed by inpainting of static background regions and dynamic foreground regions. For static background region inpainting, we use simple background replacement and occasional image inpainting. To inpaint dynamic moving foreground regions, we introduce a novel sliding-window based dissimilarity measure in a dynamic programming framework. This technique can effectively inpaint large regions of occlusion, inpaint objects that are completely missing for several frames or that change in size and pose, and has minimal blurring and motion artifacts. Finally, we direct our focus to experimental studies related to the perceptual quality evaluation of large-area image inpainting algorithms. The perceptual quality of large-area inpainting is inherently subjective, and yet no previous research has taken into account the subjective nature of the Human Visual System (HVS). We perform subjective experiments using an eye-tracking device involving 24 subjects to analyze the effect of inpainting on human gaze. We experimentally show that the presence of inpainting artifacts directly impacts the gaze of an unbiased observer, and this in turn has a direct bearing on the subjective rating of the observer. Specifically, we show that the gaze energy in the hole regions of an inpainted image shows marked deviations from normal behavior when the inpainting artifacts are readily apparent.
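
    As a hedged sketch of the sliding-window dissimilarity idea used for the dynamic foreground (illustrative only; the dissertation's actual dissimilarity measure, window shape and transition costs are not reproduced here):

        import numpy as np

        def window_dissimilarity(query, candidate, known_mask):
            """Mean squared difference over the known pixels of a spatio-temporal
            window; query and candidate have shape (T, H, W), known_mask is True
            where the query window is not occluded."""
            diff = (query - candidate) ** 2
            return float(diff[known_mask].sum() / max(int(known_mask.sum()), 1))

        def best_source_windows(cost, jump_penalty=1.0):
            """Dynamic programming over cost[i, j] = dissimilarity of filling missing
            frame i with candidate source window j; the transition term discourages
            temporally incoherent jumps between consecutive source windows."""
            n_missing, n_sources = cost.shape
            idx = np.arange(n_sources)
            dp = cost[0].copy()
            back = np.zeros((n_missing, n_sources), dtype=int)
            for i in range(1, n_missing):
                new_dp = np.empty(n_sources)
                for j in range(n_sources):
                    # prefer a source window that continues smoothly from the previous one
                    trans = dp + jump_penalty * np.abs(idx - (j - 1))
                    back[i, j] = int(np.argmin(trans))
                    new_dp[j] = cost[i, j] + trans[back[i, j]]
                dp = new_dp
            path = [int(np.argmin(dp))]   # backtrack the minimum-cost assignment
            for i in range(n_missing - 1, 0, -1):
                path.append(int(back[i, path[-1]]))
            return path[::-1]

    In the object-based framework described above, such a path would only be computed for the segmented moving foreground, while the static background is handled by background replacement.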