12 research outputs found

    Zipper-type electrical connectors

    An electrical connector of zipper-like configuration is described, in which sequentially interlocking tines serve as electrical contacts or insulators.

    Method of welding joint in closed vessel improves quality of seam

    To facilitate welding of closed vessels, a metal backup strip is used at the junction inside the vessel. After welding from the outside, this strip is dissolved by a chemically reactive solvent poured through a filler hole into the vessel.

    Java Library for Input and Output of Image Data and Metadata

    A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level, direct-access layer, enabling an application subprogram to read and write specific image files, lines, or pixels, and to manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the specification of the JAI codec to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for the VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.
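The format-independent access the abstract describes is the standard Java Image I/O plug-in mechanism: once a reader plug-in is registered, `ImageIO.read` dispatches on file contents with no format-specific code in the caller. A minimal sketch of that pattern, using only the JDK's built-in PNG plug-in as a stand-in (the class and method names here are illustrative, not part of the library described):

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class ImageIoPluginDemo {
    // Format-independent round trip: ImageIO picks a reader plug-in from
    // the file contents, so the caller needs no knowledge of the format.
    // With VICAR/PDS plug-ins on the classpath, the same read call would
    // handle those formats as well.
    static BufferedImage roundTrip(BufferedImage img) throws Exception {
        File tmp = File.createTempFile("demo", ".png");
        tmp.deleteOnExit();
        ImageIO.write(img, "png", tmp);
        return ImageIO.read(tmp);
    }

    public static void main(String[] args) throws Exception {
        // List every format the Image I/O registry currently knows about;
        // registered plug-ins appear here alongside the built-in ones.
        for (String name : ImageIO.getReaderFormatNames()) {
            System.out.println(name);
        }
        BufferedImage back =
            roundTrip(new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB));
        System.out.println(back.getWidth() + "x" + back.getHeight());
    }
}
```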

    Java Image I/O for VICAR, PDS, and ISIS

    This library, written in Java, supports input and output of images and metadata (labels) in the VICAR, PDS image, and ISIS-2 and ISIS-3 file formats. Three levels of access exist. The first level comprises low-level, direct access to the file. This allows an application to read and write specific image tiles, lines, or pixels and to manipulate the label data directly. This layer is analogous to the C-language "VICAR Run-Time Library" (RTL), which is the image I/O library for the (C/C++/Fortran) VICAR image processing system from JPL MIPL (Multimission Image Processing Lab). This low-level library can also be used to read and write labeled, uncompressed images stored in formats similar to VICAR, such as ISIS-2 and -3, and a subset of PDS (image format). The second level of access involves two codecs based on Java Advanced Imaging (JAI) that provide access to VICAR and PDS images in a file-format-independent manner. JAI is supplied by Sun Microsystems as an extension to desktop Java, and has a number of codecs for formats such as GIF, TIFF, JPEG, etc. Although Sun has deprecated the codec mechanism (replaced by IIO), it is still used in many places. The VICAR and PDS codecs allow any program written using the JAI codec spec to use VICAR or PDS images automatically, with no specific knowledge of the VICAR or PDS formats. Support for metadata (labels) is included, but is format-dependent. The PDS codec, when processing PDS images with an embedded VICAR label ("dual-labeled images," such as those used for MER), presents the VICAR label in a new way that is compatible with the VICAR codec. The third level of access involves VICAR, PDS, and ISIS Image I/O plug-ins. The Java core includes an "Image I/O" (IIO) package that is similar in concept to the JAI codec, but is newer and more capable. Applications written to the IIO specification can use any image format for which a plug-in exists, with no specific knowledge of the format itself.
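The first access level, reading and writing specific tiles, lines, or pixels, corresponds to raster-level sample access in standard Java. A minimal sketch of that idea using the JDK's `WritableRaster` on an in-memory stand-in image (the real library would open a VICAR, PDS, or ISIS file; the class and method names here are illustrative):

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class LowLevelAccessSketch {
    // Pixel-level write then read on a raster, analogous to the library's
    // direct-access layer. Arguments to set/getSample are x, y, band.
    static int writeThenReadPixel(int x, int y, int value) {
        BufferedImage img =
            new BufferedImage(8, 8, BufferedImage.TYPE_BYTE_GRAY);
        WritableRaster raster = img.getRaster();
        raster.setSample(x, y, 0, value);
        return raster.getSample(x, y, 0);
    }

    public static void main(String[] args) {
        System.out.println("pixel(3,5) = " + writeThenReadPixel(3, 5, 42));

        // Line-level read: pull one full image line into an int buffer,
        // the raster analogue of the library's line access.
        BufferedImage img =
            new BufferedImage(8, 8, BufferedImage.TYPE_BYTE_GRAY);
        img.getRaster().setSample(3, 5, 0, 42);
        int[] line = new int[img.getWidth()];
        img.getRaster().getSamples(0, 5, img.getWidth(), 1, 0, line);
        System.out.println("line 5, col 3 = " + line[3]);
    }
}
```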

    Automating planetary mission operations

    No full text
    In this paper, we describe the elements of a semi-autonomous system designed to provide instrument health, pointing, and data in a cost-effective fashion.

    Factors influencing adverse skin responses in rats receiving repeated subcutaneous injections and potential impact on neurobehavior.

    Repeated subcutaneous (s.c.) injection is a common route of administration in chronic studies of neuroactive compounds. However, in a pilot study we noted a significant incidence of skin abnormalities in adult male Long-Evans rats receiving daily s.c. injections of peanut oil (1.0 ml/kg) in the subscapular region for 21 d. Histopathological analyses of the lesions were consistent with a foreign body reaction. Subsequent studies were conducted to determine factors that influenced the incidence or severity of skin abnormalities, and whether these adverse skin reactions influenced a specific neurobehavioral outcome. Rats injected daily for 21 d with food-grade peanut oil had an earlier onset and greater incidence of skin abnormalities relative to rats receiving an equal volume (1.0 ml/kg/d) of reagent-grade peanut oil or triglyceride of coconut oil. Skin abnormalities in animals injected daily with peanut oil were increased in animals housed on corncob versus paper bedding. Comparison of animals obtained from different barrier facilities exposed to the same injection paradigm (reagent-grade peanut oil, 1.0 ml/kg/d s.c.) revealed significant differences in the severity of skin abnormalities. However, animals from different barrier facilities did not perform differently in a Pavlovian fear conditioning task. Collectively, these data suggest that environmental factors influence the incidence and severity of skin abnormalities following repeated s.c. injections, but that these adverse skin responses do not significantly influence performance in at least one test of learning and memory.

    The Benefits of Virtual Presence in Space (VPS) to Deep Space Missions

    No full text
    Understanding our place in the Universe is one of mankind's greatest scientific and technological challenges and achievements. The invention of the telescope, the Copernican Revolution, the development of Newtonian mechanics, and the Space Age exploration of our solar system provided us with a deeper understanding of our place in the Universe, based on better observations and models. As we approach the end of the first decade of the new millennium, the same quest, to understand our place in the Universe, remains a great challenge. New technologies will enable us to construct and interact with a "Virtual Universe" based on remote and in situ observations of other worlds. As we continue the exploration that began in the last century, we will experience a "Virtual Presence in Space (VPS)" in this century. This paper describes VPS technology, the mechanisms for VPS product distribution and display, the benefits of this technology, and future plans. Deep space mission stereo observations and frames from stereo High Definition Television (HDTV) mission animations are used to illustrate the effectiveness of VPS technology.

    Visualizing the Operations of the Phoenix Mars Lander

    No full text
    With the successful landing of the Phoenix Mars Lander comes the task of visualizing the spacecraft, its operations, and the surrounding environment. The JPL Solar System Visualization team has brought together a wide range of talents and software to provide a suite of visualizations that shed light on the operations of this visitor to another world. The core set of tools ranges from web-based production tracking (Image Products Release Website), to custom 3D transformation software, through to studio-quality 2D and 3D video production. We will demonstrate several of the key technologies that bring together these visualizations. Putting the scientific results of Phoenix in context requires managing the classic powers-of-10 problem. Everything from the location of polar dust storms down to the Atomic Force Microscope must be brought together in a context that communicates to both scientific and public audiences. We used Lightwave to blend 2D and 3D visualizations into a continuous series of zooms using both simulations and actual data. Beyond the high-powered, industrial-strength solutions, we have strived to bring as much power as possible to the average computer user's standard view of the computer: the web browser. The Zooming and Interactive Mosaics (ZIM) tool is a JavaScript web tool for displaying high-resolution panoramas in a spacecraft-centric view. This tool allows the user to pan and zoom through the mosaic, identifying feature and target names, all the while maintaining a contextual frame of reference. Google Earth presents the possibility of taking hyperlinked web-browser interaction into the 3D geo-browser modality. Until Google releases a Mars mode for Google Earth, we are forced to wrap the Earth in a Mars texture. However, this can still provide a suitable background for exploring interactive visualizations. These models range over both regional and local scales, with the lander positioned on Mars and the local environment mapped into pseudo-Street View modes. Many visualizations succeed by altering the interaction metaphor. Therefore, we have attempted to completely overload the Google Earth interface, turning a traditional planetary globe into a mosaic viewer by mapping the Phoenix mosaics onto the sphere and using geographic latitude and longitude coordinates as the camera pointing coordinates of a Phoenix mosaic. This presentation focuses on the data management and visualization aspects of the mission. For scientific results, please see the special section U13, The Phoenix Mission.
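The overloading trick described above, reusing a globe viewer's latitude/longitude camera as mosaic pointing angles, amounts to a simple coordinate relabeling. A minimal sketch under assumed conventions (azimuth 0–360° wrapped to longitude −180–180°, elevation carried directly as latitude); the method names and this particular linear mapping are illustrative assumptions, not the team's actual transformation:

```java
public class MosaicCameraMapping {
    // Map a mosaic pointing direction (azimuth 0..360 deg, elevation
    // -90..90 deg) onto the geographic camera coordinates of a globe
    // viewer: longitude carries azimuth, latitude carries elevation.
    static double[] toLatLon(double azimuthDeg, double elevationDeg) {
        // Wrap azimuth into the viewer's signed longitude range.
        double lon = azimuthDeg > 180.0 ? azimuthDeg - 360.0 : azimuthDeg;
        double lat = elevationDeg;  // elevation maps directly to latitude
        return new double[] { lat, lon };
    }

    public static void main(String[] args) {
        // Pointing west (az 270) and slightly down (el -15).
        double[] p = toLatLon(270.0, -15.0);
        System.out.printf("lat=%.1f lon=%.1f%n", p[0], p[1]);
    }
}
```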