
    Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    Full text link
    The quality of modern astronomical data, the power of modern computers and the agility of current image-processing software enable the creation of high-quality images in a purely digital form. The combination of these technological advancements has created a new ability to make color astronomical images and, in many ways, has led to a new philosophy of how to create them. A practical guide is presented on how to generate astronomical images from research data with powerful image-processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. A philosophy is also presented on how to use color and composition to create images that simultaneously highlight scientific detail and are aesthetically appealing. This philosophy is necessary because most datasets do not correspond to the wavelength range of sensitivity of the human eye. The use of visual grammar, defined as the elements which affect the interpretation of an image, can maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage viewers and keep them interested for a longer period of time. The use of these techniques can result in a striking image that effectively conveys the science within the image, to scientists and to the public. Comment: 104 pages, 38 figures, submitted to A
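The layering metaphor described above, in which each dataset is assigned a color and the tinted layers are combined into one composite, can be sketched in a few lines of NumPy. This is a minimal illustration, not the workflow of any particular image-processing program; the arcsinh stretch and the narrowband-to-RGB color scheme are assumptions chosen for the example.

```python
import numpy as np

def stretch(data, scale=5.0):
    """Arcsinh stretch: keeps faint detail visible while compressing bright features."""
    d = (data - data.min()) / (data.max() - data.min() + 1e-12)
    return np.arcsinh(scale * d) / np.arcsinh(scale)

def composite(layers):
    """Combine (dataset, rgb_tint) pairs into one RGB image by additive layering."""
    h, w = layers[0][0].shape
    out = np.zeros((h, w, 3))
    for data, tint in layers:
        out += stretch(data)[..., None] * np.asarray(tint, float)
    return np.clip(out, 0.0, 1.0)

# Hypothetical stand-ins for three narrowband exposures; any number of
# datasets and any color assignment explores the same parameter space.
rng = np.random.default_rng(0)
h_alpha = rng.random((64, 64))
o_iii   = rng.random((64, 64))
s_ii    = rng.random((64, 64))
rgb = composite([(s_ii, (1, 0, 0)), (h_alpha, (0, 1, 0)), (o_iii, (0, 0, 1))])
print(rgb.shape)  # (64, 64, 3)
```

Because each layer's tint and stretch can be varied independently, iterating over these choices is what opens up the "immense parameter space" the abstract refers to.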

    3D reconstruction of motor pathways from tract tracing in the rhesus monkey

    Full text link
    Magnetic resonance imaging (MRI) has transformed the world of non-invasive imaging for diagnostic purposes. Modern techniques such as diffusion weighted imaging (DWI), diffusion tensor imaging (DTI), and diffusion spectrum imaging (DSI) have been used to reconstruct fiber pathways of the brain - providing a graphical picture of the so-called "connectome." However, there exists controversy in the literature as to the accuracy of the diffusion tractography reconstruction. Although various attempts at histological validation have been made, there is still no 3D histological pathway validation of the fiber bundle trajectories seen in diffusion MRI. Such a validation is necessary in order to show the viability of current DSI tractography techniques toward the ultimate goal of clinical diagnostic application. This project developed methods to provide this 3D histological validation using the rhesus monkey motor pathway as a model system. By injecting biotinylated dextran amine (BDA) tract tracer into the hand area of primary motor cortex, brain section images were reconstructed to create 3D fiber pathways labeled at the axonal level. Using serial coronal brain sections, the BDA label was digitized with a high-resolution digital camera to create image montages of the fiber pathway, with individual sections spaced at 1200-micron intervals through the brain. An MRI analysis system, OsiriX, was then used to reconstruct these sections into a 3D volume. This is an important technical step toward merging the BDA fiber tract histology with diffusion MRI tractography of the same brain, enabling identification of the valid and inaccurate aspects of diffusion fiber reconstruction. This will ultimately facilitate the use of diffusion MRI to quantify tractography, non-invasively and in vivo, in the human brain
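The core reconstruction step above - stacking aligned 2D section montages, spaced 1200 microns apart, into a 3D volume - can be sketched as follows. The 1200-micron section spacing comes from the abstract; the in-plane pixel size is an assumed placeholder, and the code is a generic illustration rather than the project's actual pipeline.

```python
import numpy as np

SECTION_SPACING_UM = 1200  # inter-section interval stated in the abstract
PIXEL_SIZE_UM = 50         # assumed in-plane resolution of the digitized montages

def stack_sections(sections, spacing_um=SECTION_SPACING_UM, pixel_um=PIXEL_SIZE_UM):
    """Stack aligned 2D section images into an anisotropic 3D volume.

    Returns the voxel array plus the (z, y, x) voxel size in microns, which a
    3D viewer needs in order to render the volume at correct proportions
    (the z spacing here is much coarser than the in-plane resolution).
    """
    volume = np.stack(sections, axis=0)  # shape: (n_sections, H, W)
    voxel_size_um = (spacing_um, pixel_um, pixel_um)
    return volume, voxel_size_um

# Hypothetical stand-ins for digitized BDA-labeled coronal sections
sections = [np.zeros((128, 128), dtype=np.uint8) for _ in range(40)]
vol, vox = stack_sections(sections)
print(vol.shape, vox)  # (40, 128, 128) (1200, 50, 50)
```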

    Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts

    No full text
    This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies

    NASA Tech Briefs, September 2012

    Get PDF
    Topics covered include: Beat-to-Beat Blood Pressure Monitor; Measurement Techniques for Clock Jitter; Lightweight, Miniature Inertial Measurement System; Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts; Fuel Cell/Electrochemical Cell Voltage Monitor; Anomaly Detection Techniques with Real Test Data from a Spinning Turbine Engine-Like Rotor; Measuring Air Leaks into the Vacuum Space of Large Liquid Hydrogen Tanks; Antenna Calibration and Measurement Equipment; Glass Solder Approach for Robust, Low-Loss, Fiber-to-Waveguide Coupling; Lightweight Metal Matrix Composite Segmented for Manufacturing High-Precision Mirrors; Plasma Treatment to Remove Carbon from Indium UV Filters; Telerobotics Workstation (TRWS) for Deep Space Habitats; Single-Pole Double-Throw MMIC Switches for a Microwave Radiometer; On Shaft Data Acquisition System (OSDAS); ASIC Readout Circuit Architecture for Large Geiger Photodiode Arrays; Flexible Architecture for FPGAs in Embedded Systems; Polyurea-Based Aerogel Monoliths and Composites; Resin-Impregnated Carbon Ablator: A New Ablative Material for Hyperbolic Entry Speeds; Self-Cleaning Particulate Prefilter Media; Modular, Rapid Propellant Loading System/Cryogenic Testbed; Compact, Low-Force, Low-Noise Linear Actuator; Loop Heat Pipe with Thermal Control Valve as a Variable Thermal Link; Process for Measuring Over-Center Distances; Hands-Free Transcranial Color Doppler Probe; Improving Balance Function Using Low Levels of Electrical Stimulation of the Balance Organs; Developing Physiologic Models for Emergency Medical Procedures Under Microgravity; PMA-Linked Fluorescence for Rapid Detection of Viable Bacterial Endospores; Portable Intravenous Fluid Production Device for Ground Use; Adaptation of a Filter Assembly to Assess Microbial Bioburden of Pressurant Within a Propulsion System; Multiplexed Force and Deflection Sensing Shell Membranes for Robotic Manipulators; Whispering Gallery Mode Optomechanical Resonator; Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles; Self-Sealing Wet Chemistry Cell for Field Analysis; General MACOS Interface for Modeling and Analysis for Controlled Optical Systems; Mars Technology Rover with Arm-Mounted Percussive Coring Tool, Microimager, and Sample-Handling Encapsulation Containerization Subsystem; Fault-Tolerant, Real-Time, Multi-Core Computer System; Water Detection Based on Object Reflections; SATPLOT for Analysis of SECCHI Heliospheric Imager Data; Plug-in Plan Tool v3.0.3.1; Frequency Correction for MIRO Chirp Transformation Spectroscopy Spectrum; Nonlinear Estimation Approach to Real-Time Georegistration from Aerial Images; Optimal Force Control of Vibro-Impact Systems for Autonomous Drilling Applications; Low-Cost Telemetry System for Small/Micro Satellites; Operator Interface and Control Software for the Reconfigurable Surface System Tri-ATHLETE; and Algorithms for Determining Physical Responses of Structures Under Load

    Mars 2020 Perseverance Rover Mast Camera Zoom (Mastcam-Z) Multispectral, Stereoscopic Imaging Investigation

    Get PDF
    Mastcam-Z is a multispectral, stereoscopic imaging investigation on the Mars 2020 mission’s Perseverance rover. Mastcam-Z consists of a pair of focusable, 4:1 zoomable cameras that provide broadband red/green/blue and narrowband 400-1000 nm color imaging with fields of view from 25.6° × 19.2° (26 mm focal length at 283 μrad/pixel) to 6.2° × 4.6° (110 mm focal length at 67.4 μrad/pixel). The cameras can resolve (≥ 5 pixels) ∼0.7 mm features at 2 m and ∼3.3 cm features at 100 m distance. Mastcam-Z shares significant heritage with the Mastcam instruments on the Mars Science Laboratory Curiosity rover. Each Mastcam-Z camera consists of zoom, focus, and filter wheel mechanisms and a 1648 × 1214 pixel charge-coupled device detector and electronics. The two Mastcam-Z cameras are mounted with a 24.4 cm stereo baseline and 2.3° total toe-in on a camera plate ∼2 m above the surface on the rover’s Remote Sensing Mast, which provides azimuth and elevation actuation. A separate digital electronics assembly inside the rover provides power, data processing and storage, and the interface to the rover computer. Primary and secondary Mastcam-Z calibration targets mounted on the rover top deck enable tactical reflectance calibration. Mastcam-Z multispectral, stereo, and panoramic images will be used to provide detailed morphology, topography, and geologic context along the rover’s traverse; constrain mineralogic, photometric, and physical properties of surface materials; monitor and characterize atmospheric and astronomical phenomena; and document the rover’s sample extraction and caching locations. Mastcam-Z images will also provide key engineering information to support sample selection and other rover driving and tool/instrument operations decisions
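The quoted resolving power follows directly from the pixel scale via the small-angle approximation: a feature must span at least 5 pixels, and each pixel subtends the pixel scale times the distance. A short check of the abstract's numbers (67.4 μrad/pixel at full zoom):

```python
def resolvable_feature_m(pixel_scale_urad, distance_m, min_pixels=5):
    """Smallest resolvable feature (in meters) at a given distance.

    Small-angle approximation: one pixel covers pixel_scale * distance
    on the target, and a feature needs `min_pixels` pixels to be resolved.
    """
    return min_pixels * pixel_scale_urad * 1e-6 * distance_m

# Full-zoom (110 mm focal length) pixel scale quoted in the text
print(resolvable_feature_m(67.4, 2))    # ~0.000674 m, i.e. ~0.7 mm at 2 m
print(resolvable_feature_m(67.4, 100))  # ~0.0337 m, i.e. ~3.3 cm at 100 m
```

Both values reproduce the ∼0.7 mm at 2 m and ∼3.3 cm at 100 m figures given in the abstract.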

    High Accuracy Tracking of Space-Borne Non-Cooperative Targets

    Get PDF

    TADA! Text to Animatable Digital Avatars

    Full text link
    We introduce TADA, a simple-yet-effective approach that takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures that can be animated and rendered with traditional graphics pipelines. Existing text-based character generation methods are limited in terms of geometry and texture quality, and cannot be realistically animated due to inconsistent alignment between the geometry and the texture, particularly in the face region. To overcome these limitations, TADA leverages the synergy of a 2D diffusion model and an animatable parametric body model. Specifically, we derive an optimizable high-resolution body model from SMPL-X with 3D displacements and a texture map, and use hierarchical rendering with score distillation sampling (SDS) to create high-quality, detailed, holistic 3D avatars from text. To ensure alignment between the geometry and texture, we render normals and RGB images of the generated character and exploit their latent embeddings in the SDS training process. We further introduce various expression parameters to deform the generated character during training, ensuring that the semantics of our generated character remain consistent with the original SMPL-X model, resulting in an animatable character. Comprehensive evaluations demonstrate that TADA significantly surpasses existing approaches on both qualitative and quantitative measures. TADA enables the creation of large-scale digital character assets that are ready for animation and rendering, while also being easily editable through natural language. The code will be public for research purposes
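The score distillation sampling (SDS) step mentioned above can be sketched in general form: the rendered image is noised, a text-conditioned diffusion model predicts that noise, and the residual between prediction and the injected noise acts as a gradient pushed back through the renderer. This is a generic SDS sketch under standard DDPM noising assumptions, not TADA's actual implementation; `noise_pred_fn` and the weighting choice are placeholders for a real pretrained diffusion model.

```python
import numpy as np

def sds_gradient(rendered, noise_pred_fn, t, alphas_cumprod, rng):
    """One SDS step on a rendered image (generic sketch).

    `noise_pred_fn(noisy, t)` stands in for a text-conditioned 2D diffusion
    model; in TADA the gradient flows back to SMPL-X displacement and
    texture parameters through a differentiable renderer (omitted here).
    """
    a_t = alphas_cumprod[t]
    noise = rng.standard_normal(rendered.shape)
    # Standard DDPM forward noising of the render at timestep t
    noisy = np.sqrt(a_t) * rendered + np.sqrt(1.0 - a_t) * noise
    w_t = 1.0 - a_t  # a common timestep weighting choice (assumption)
    return w_t * (noise_pred_fn(noisy, t) - noise)

# Toy check with a zero predictor (real use: a pretrained diffusion UNet)
rng = np.random.default_rng(0)
alphas = np.linspace(0.999, 0.01, 1000)
img = rng.standard_normal((8, 8, 3))
grad = sds_gradient(img, lambda x, t: np.zeros_like(x), 500, alphas, rng)
print(grad.shape)  # (8, 8, 3)
```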