137 research outputs found

    AOIPS 3 user's guide. Volume 2: Program descriptions

    The Atmospheric and Oceanographic Information Processing System (AOIPS) 3 is the version of the AOIPS software as of April 1989. The AOIPS software was developed jointly by the Goddard Space Flight Center and General Sciences Corporation. A detailed description of every AOIPS program is presented. It is intended to serve as a reference for such items as program functionality, program operational instructions, and input/output variable descriptions. Program descriptions are derived from the on-line help information. Each program description is divided into two sections. The functional description section describes the purpose of the program and contains any pertinent operational information. The program description section lists the program variables as they appear on-line and describes them in detail.

    Testing the current paradigm for space weather prediction with heliospheric imagers

    Predictions of the arrival of four coronal mass ejections (CMEs) in geospace are produced through the use of three CME geometric models combined with CME drag modeling, constraining these models with the available coronagraph and Heliospheric Imager data. The efficacy of these predictions is assessed by comparison with the Space Weather Prediction Center (SWPC) numerical MHD forecasts of the same events. It is found that such a prediction technique cannot outperform the standard SWPC forecast at a statistically meaningful level. We test the Harmonic Mean, Self-Similar Expansion, and Ellipse Evolution geometric models and find that, for these events at least, the differences between the models are smaller than the observational errors. We present a new method of characterizing CME fronts in the Heliospheric Imager field of view, utilizing the analysis of citizen scientists working with the Solar Stormwatch project, and we demonstrate that this provides a more accurate representation of the CME front than is obtained by experts analyzing elongation time maps for the studied events. Comparison of the CME kinematics estimated independently from the STEREO-A and STEREO-B Heliospheric Imager data reveals inconsistencies that cannot be explained within the observational errors and model assumptions. We argue that these observations imply that the assumptions of the CME geometric models are routinely invalidated, and we question their utility in a space weather forecasting context. These results argue for the continued development of more advanced techniques to better exploit the Heliospheric Imager observations for space weather forecasting.
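    For context, CME drag modeling of the kind paired with these geometric fits is commonly expressed as a drag-based model in which the CME speed relaxes toward the ambient solar wind speed. The sketch below is a generic illustration of that idea, not the authors' code; the drag parameter gamma, wind speed w, and launch conditions are illustrative assumptions.

```python
import numpy as np

def drag_based_arrival(r0_km, v0_kms, gamma=1e-7, w_kms=400.0,
                       target_km=1.496e8, dt_s=600.0):
    """Integrate dv/dt = -gamma * (v - w) * |v - w| from r0 until the
    CME front reaches target_km (1 AU by default). Returns the transit
    time in hours and the arrival speed in km/s."""
    r, v, t = r0_km, v0_kms, 0.0
    while r < target_km:
        dv = v - w_kms
        v += -gamma * dv * abs(dv) * dt_s  # drag decelerates fast CMEs, accelerates slow ones
        r += v * dt_s
        t += dt_s
    return t / 3600.0, v

# Example: a 1000 km/s CME launched at 20 solar radii (~1.39e7 km).
hours, v_arr = drag_based_arrival(r0_km=20 * 6.957e5, v0_kms=1000.0)
print(f"transit ~{hours:.1f} h, arrival speed ~{v_arr:.0f} km/s")
```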

    Automatic Understanding and Mapping of Regions in Cities Using Google Street View Images

    The use of semantic representations to achieve place understanding has been widely studied using indoor information. This kind of data can then be used for navigation, localization, and place identification using mobile devices. Nevertheless, applying this approach to outdoor data involves certain non-trivial procedures, such as gathering the information. This problem can be solved by using map APIs that provide access to the street-level images captured to build the map of a city. In this paper, we seek to leverage such APIs, which collect images of city streets, to generate a semantic representation of the city, built using a clustering algorithm and semantic descriptors. The main contribution of this work is a new approach to generating a map with semantic information for each area of the city. The proposed method can automatically assign a semantic label to each cluster on the map. This method can be useful in smart cities and autonomous driving due to the categorization of the zones in a city. The results show the robustness of the proposed pipeline and the advantages of using Google Street View images, semantic descriptors, and machine learning algorithms to generate semantic maps of outdoor places. These maps properly encode the zones existing in the selected city and are able to reveal new zones between current ones.

    This work has been supported by the Spanish Grant PID2019-104818RB-I00 funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”. José Carlos Rangel and Edmanuel Cruz were supported by the Sistema Nacional de Investigación (SNI) of SENACYT, Panama.
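    As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below clusters per-image semantic descriptors with k-means and names each cluster after its dominant category. The descriptors, categories, and coordinates are random placeholders standing in for Street View imagery and a real scene classifier.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder inputs: one semantic descriptor per street-level image
# (e.g. class-probability vectors from a scene classifier) plus the
# normalized map position where each image was taken. All hypothetical.
CATEGORIES = ["residential", "commercial", "park", "industrial"]
descriptors = rng.random((200, len(CATEGORIES)))
coords = rng.uniform(0.0, 1.0, size=(200, 2))

# Cluster the images into candidate zones, then name each zone after
# the dominant semantic category among its members.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(descriptors)
for zone in range(5):
    mask = kmeans.labels_ == zone
    dominant = Counter(descriptors[mask].argmax(axis=1)).most_common(1)[0][0]
    center = coords[mask].mean(axis=0)
    print(f"zone {zone}: {mask.sum()} images near {center.round(2)} "
          f"-> '{CATEGORIES[dominant]}'")
```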

    GeoAI-enhanced Techniques to Support Geographical Knowledge Discovery from Big Geospatial Data

    Big data that contain geo-referenced attributes have significantly reshaped the way that geospatial data are processed and analyzed. Compared with the expected benefits of the data-rich environment, more data have not always contributed to more accurate analysis. “Big but valueless” has become a critical concern in the GIScience and data-driven geography community. As a highly utilized GeoAI technique, deep learning models designed for processing geospatial data integrate powerful computing hardware and deep neural networks into various dimensions of geography to effectively discover the representation of data. However, limitations of these deep learning models have also been reported, as practitioners may have to spend considerable time preparing training data before implementing a deep learning model. The objective of this dissertation research is to promote state-of-the-art deep learning models in discovering the representation, value, and hidden knowledge of GIS and remote sensing data through three research approaches. The first methodological framework aims to unify multifarious shadow shapes into a limited number of representative shadow patterns using convolutional neural network (CNN)-powered shape classification, for efficient shadow-based building height estimation. The second research focus integrates semantic analysis into a framework of various state-of-the-art CNNs to support human-level understanding of map content. The final research approach of this dissertation focuses on normalizing geospatial domain knowledge to promote the transferability of a CNN model to land-use/land-cover classification. This research reports a method designed to discover detailed land-use/land-cover types that might be challenging for a state-of-the-art CNN model that previously performed well on land-cover classification only.
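    To make the transfer idea in the final approach concrete, here is a minimal, hypothetical sketch of fine-tuning only the classification head of a pretrained CNN for new land-use/land-cover classes, assuming PyTorch and torchvision. The eight classes and the dummy batch are placeholders, and this is not the dissertation's actual model.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a CNN pretrained on a generic task, freeze its feature
# extractor, and retrain only the classification head on the new
# land-use/land-cover classes (8 hypothetical classes here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 8)  # new LULC head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of image tiles.
x = torch.randn(4, 3, 224, 224)   # 4 RGB tiles
y = torch.randint(0, 8, (4,))     # their (random) LULC labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"step loss: {loss.item():.3f}")
```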

    Examining the impacts of convective environments on storms using observations and numerical models

    Convective clouds are significant contributors to both weather and climate. While the basic environments supporting convective clouds are broadly known, there is currently no unifying theory on how joint variations in different environmental properties impact convective cloud properties. The overarching goal of this research is to assess the response of convective clouds to changes in the dynamic, thermodynamic, and aerosol properties of the local environment. To achieve this goal, two tools for examining convective cloud properties and their environments are first described, developed, and enhanced. This is followed by an examination of the response of convective clouds to changes in the dynamic, thermodynamic, and aerosol properties using these enhanced tools.

    In the first study comprising this dissertation, we assess the performance of small temperature, pressure, and humidity sensors onboard drones used to sample convective environments and convective cloud outflows, comparing them to measurements made from a tethersonde platform suspended at the same height. Using 82 total drone flights, including nine at night, the following determinations about sensor accuracy are made. First, the nighttime temperature errors are found to have a smaller range than the daytime temperature errors, indicating that much of the daytime error arises from exposure to solar radiation. The pressure errors demonstrate a strong dependence on horizontal wind speed, with all of the error distributions being multimodal in high-wind conditions. Finally, dewpoint temperature errors are found to be larger than temperature errors. We conclude that measurements in field campaigns are more accurate when sensors are placed away from the drone's main body and associated propeller wash and are sufficiently aspirated and shielded from incoming solar radiation.

    The Tracking and Object-Based Analysis of Clouds (tobac) package is a commonly used tracking tool in atmospheric science that allows for tracking of atmospheric phenomena on any variable and on any grid. We have enhanced tobac to enable it to be used on more atmospheric phenomena, with a wider variety of atmospheric data, and across more diverse platforms than before. New scientific improvements (three spatial dimensions and an internal spectral filtering tool) and procedural improvements (enhanced computational efficiency, internal re-gridding of data, and treatments for periodic boundary conditions) comprising this new version of tobac (v1.5) are described in the second study of this dissertation. These improvements have made tobac one of the most robust, powerful, and flexible identification and tracking tools in our field and expanded its potential use in other fields.

    In the third study of this dissertation, we examine the relationship between the thermodynamic and dynamic environmental properties and deep convective clouds forming in the tropical atmosphere. To elucidate this relationship, we employ a high-resolution, long-duration, large-area numerical model simulation alongside tobac to build a database of convective clouds and their environments. With this database, we examine differences in the initial environment associated with individual storm strength, organization, and morphology.
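    As a small illustration of the sensor-evaluation step in the first study above (not the study's analysis code), the sketch below computes the bias and RMSE of drone readings against collocated tethersonde references, with synthetic day and night samples standing in for the 82 flights.

```python
import numpy as np

def bias_rmse(drone, reference):
    """Bias and RMSE of drone sensor readings against collocated
    tethersonde reference measurements."""
    err = np.asarray(drone) - np.asarray(reference)
    return err.mean(), np.sqrt((err ** 2).mean())

# Placeholder paired samples (deg C); real data would come from the
# drone flights and tethersonde platform described above.
rng = np.random.default_rng(1)
reference = rng.uniform(15, 30, 50)
day = reference + rng.normal(0.8, 0.6, 50)    # daytime: solar heating bias
night = reference + rng.normal(0.1, 0.2, 50)  # nighttime: smaller errors

for label, obs in [("day", day), ("night", night)]:
    b, r = bias_rmse(obs, reference)
    print(f"{label}: bias={b:+.2f} C, RMSE={r:.2f} C")
```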
    We find that storm strength, defined here as maximum midlevel updraft velocity, is controlled primarily by Convective Available Potential Energy (CAPE) and Precipitable Water (PW); high CAPE (>2500 J kg^-1) and high PW (approximately 63 mm) are both required for midlevel CCC updraft velocities to reach at least 10 m s^-1. Of the CCCs with the most vigorous updrafts, 80.9% are in the upper tercile of precipitation rates, with the strongest precipitation rates requiring even higher PW. Furthermore, vertical wind shear is the primary differentiator between organized and isolated convective storms. Within the set of organized storms, we also find that linearly-oriented CCC systems have significantly weaker vertical wind shear than nonlinear CCCs at low (0-1 km, 0-3 km) and mid-levels (0-5 km, 2-7 km). Overall, these results provide new insights into the joint environmental conditions determining CCC properties in the tropical atmosphere.

    Finally, in the fourth study of this dissertation, we build upon the third study by examining the relationship between the aerosol environment and convective precipitation, using the same simulations and tracking approaches as in the third study. As the environmental aerosol concentrations are increased, the total domain-wide precipitation decreases (-3.4%). Despite the overall decrease in precipitation, the number of tracked terminal congestus clouds increases (+8%), while the number of tracked cumulonimbus clouds decreases (-1.26%). This increase in the number of congestus clouds is accompanied by an overall weakening of their rainfall as aerosol concentration increases, with a decrease in overall rain rates and an increase in the number of clouds that do not precipitate (+10.7%). As aerosol concentrations increase, cloud droplets become smaller, suppressing the initial generation of rain and leading to clouds evaporating through entrainment before they are able to precipitate.
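    The headline thresholds from the third study can be encoded as a toy rule, shown below; this is an illustration of the stated result, not the dissertation's analysis.

```python
def strong_updraft_expected(cape_j_kg: float, pw_mm: float) -> bool:
    """Toy rule reflecting the reported result: midlevel updrafts of
    at least 10 m/s required both high CAPE (>2500 J/kg) and high PW
    (around 63 mm or more)."""
    return cape_j_kg > 2500.0 and pw_mm >= 63.0

for cape, pw in [(3000, 65), (3000, 50), (1800, 70)]:
    print(f"CAPE={cape} J/kg, PW={pw} mm -> {strong_updraft_expected(cape, pw)}")
```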

    Preserving Command Line Workflow for a Package Management System Using ASCII DAG Visualization

    Package managers provide ease of access to applications by removing the time-consuming and sometimes completely prohibitive barrier of successfully building, installing, and maintaining the software for a system. A package dependency graph encodes the dependencies among all packages required to build and run the target software. Package management system developers, package maintainers, and users may consult the dependency graph when a simple listing is insufficient for their analyses. However, users working in a remote command line environment must disrupt their workflow to visualize dependency graphs in graphical programs, possibly needing to move files between devices or incur forwarding lag. Such is the case for users of Spack, an open source package management system originally developed to ease the complex builds required by supercomputing environments. To preserve the command line workflow of Spack, we develop an interactive ASCII visualization for its dependency graphs. Through interviews with Spack maintainers, we identify user goals and corresponding visual tasks for dependency graphs. We evaluate the use of our visualization through a command line-centered study, comparing it to the system's two existing approaches. We observe that, despite the limitations of the ASCII representation, our visualization is preferred by participants when approached from a command line interface workflow.

    This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory [DE-AC52-07NA27344, LLNL-JRNL-746358].
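    The following is a minimal, hypothetical sketch of rendering a dependency DAG as indented ASCII in topological (build) order, using only the Python standard library; it is not Spack's actual renderer, which draws explicit edges interactively.

```python
from graphlib import TopologicalSorter

# A toy dependency DAG: package -> set of packages it depends on.
deps = {
    "app": {"libfoo", "libbar"},
    "libfoo": {"zlib"},
    "libbar": {"zlib", "openssl"},
    "zlib": set(),
    "openssl": set(),
}

# Emit packages in build order (dependencies first), indented by their
# depth in the DAG -- a crude stand-in for a renderer that draws edges.
order = list(TopologicalSorter(deps).static_order())
depth = {}
for pkg in order:
    depth[pkg] = 1 + max((depth[d] for d in deps[pkg]), default=-1)
for pkg in order:
    edge = " <- " + ", ".join(sorted(deps[pkg])) if deps[pkg] else ""
    print("  " * depth[pkg] + pkg + edge)
```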

    LSST Science Book, Version 2.0

    A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with a field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r~27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects both at low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring properties of supernovae to z~1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy.
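    As a quick sanity check on the scale implied by the parameters quoted above, the sketch below is straightforward arithmetic using only those stated numbers; the derived figures (etendue, pointing count, open-shutter time) are approximate.

```python
import math

aperture_m = 6.7           # effective aperture (stated)
fov_deg2 = 9.6             # camera field of view (stated)
area_deg2 = 20_000         # survey footprint (stated)
visits_per_pointing = 2000 # exposures per pointing (stated)
exposure_s = 15            # exposure length (stated)

# Etendue: collecting area times field of view.
collecting_area_m2 = math.pi * (aperture_m / 2) ** 2
print(f"etendue ~ {collecting_area_m2 * fov_deg2:.0f} m^2 deg^2")

# Rough totals over the footprint.
pointings = area_deg2 / fov_deg2
total_exposures = pointings * visits_per_pointing
years = total_exposures * exposure_s / 3600 / 24 / 365
print(f"~{pointings:.0f} pointings, ~{total_exposures:.2e} exposures,")
print(f"~{years:.1f} years of cumulative open-shutter time")
```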

    Characterizing the Impacts of the Invasive Hemlock Woolly Adelgid on the Forest Structure of New England

    Climate change is raising winter temperatures in the Northeastern United States, both expanding the range of an invasive pest, the hemlock woolly adelgid (HWA; Adelges tsugae), and threatening the survival of its host species, eastern hemlock (Tsuga canadensis). As a foundation species, hemlock trees underlie a distinct network of ecological, biogeochemical, and structural systems that will likely disappear as the HWA infestation spreads northward. Remote sensing can offer new perspectives on this regional transition, recording the progressive loss of an ecological foundation species and the transition of evergreen hemlock forest to mixed deciduous forest over the course of the infestation. Lidar remote sensing, unlike other remote sensing tools, has the potential to penetrate dense hemlock canopies and record HWA's distinct impacts on lower canopy structure. Working with a series of lidar datasets from the Harvard Forest experimental site, these studies identify the unique signals of HWA impacts on vertical canopy structure and use them to predict forest condition. Methods for detecting the initial impacts of HWA are explored, and a workflow for monitoring changes in forest structure at the regional scale is outlined. Finally, by applying terrestrial, airborne, and spaceborne lidar data to characterize the structural variation and dynamics of a disturbed forest ecosystem, this research illustrates the potential of lidar as a tool for forest management and ecological research.
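    A minimal sketch of the kind of vertical canopy profile such analyses start from: binning lidar return heights and comparing the fraction of lower-canopy returns, where HWA impacts would be expected to appear. The point clouds here are synthetic placeholders, not Harvard Forest data.

```python
import numpy as np

def canopy_profile(return_heights_m, bin_m=1.0, max_h=30.0):
    """Fraction of lidar returns per height bin -- a simple vertical
    canopy structure profile."""
    bins = np.arange(0.0, max_h + bin_m, bin_m)
    counts, _ = np.histogram(return_heights_m, bins=bins)
    return bins[:-1], counts / max(counts.sum(), 1)

# Synthetic point clouds: a dense hemlock canopy vs. one thinned in
# the lower canopy, as HWA damage might appear in lidar returns.
rng = np.random.default_rng(2)
healthy = rng.normal(12, 6, 5000).clip(0, 30)
infested = healthy[(healthy > 8) | (rng.random(5000) < 0.3)]

for name, pts in [("healthy", healthy), ("infested", infested)]:
    z, frac = canopy_profile(pts)
    low = frac[z < 8].sum()
    print(f"{name}: {low:.0%} of returns below 8 m")
```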

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and associated applications. The imaging data available to radiologists continue to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists to increase throughput while reducing human error and bias, without compromising the outcome of screening, diagnosis, or disease assessment. More intelligent, yet simple, consistent, and less time-consuming methods will become more widespread, reducing user variability while also revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enhancing and enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training, and, despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their footprint often hampers their clinical use. Currently, the main challenge seems to be not the lack of tools and techniques for medical image processing, analysis, and computing, but rather the lack of clinically feasible solutions that leverage the tools and techniques already developed, as well as a demonstration of the potential clinical impact of such tools. Recently, more and more effort has been dedicated to devising new algorithms for localization, segmentation, or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic, and developmental novelty that yields only incremental improvements over existing algorithms. In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that ensure the development of image processing tools --- localization, segmentation, and registration --- and illustrate their use across several medical imaging modalities --- X-ray, computed tomography, ultrasound, and magnetic resonance imaging --- and several clinical applications: (1) lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images; (2) automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for the assessment of long limb mechanical axis and knee misalignment; and (3) left and right ventricle localization, segmentation, reconstruction, and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment. When devising and evaluating the developed tools, we use clinical patient data to illustrate the inherent challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that help achieve sufficiently reliable solutions, which not only have the potential to address the clinical needs but are also sufficiently streamlined to be translated into eventual clinical tools, provided proper implementation.
    G1: Reduce the number of degrees of freedom (DOF) of the designed tool; a plausible example is avoiding inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and aims at reducing complexity and the number of degrees of freedom.

    G2: Use shape-based features to represent the image content most efficiently, employing edges instead of, or in addition to, intensities and motion where useful. Edges capture the most useful information in the image and can be used to identify the most important image features. As a result, this guideline ensures more robust performance when key image information is missing.

    G3: Implement methods efficiently. This guideline focuses on minimizing the number of required steps and avoiding the recalculation of terms that need to be computed only once in an iterative process. An efficient implementation leads to reduced computational effort and improved performance.

    G4: Commence the workflow by establishing an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in consistent ways; it avoids convergence to local minima while gradually ensuring convergence to the global minimum solution.

    These guidelines lead to the development of interactive, semi-automated, or fully-automated approaches that still enable clinicians to perform final refinements, while reducing overall inter- and intra-observer variability, reducing ambiguity, increasing accuracy and precision, and having the potential to yield mechanisms that will aid in providing a more consistent diagnosis in a timely fashion.
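    A toy example of G3 and G4 together (hypothetical, not one of the thesis tools): a 1-D translation registration that precomputes loop-invariant terms once (G3) and initializes a fine search from a coarse one rather than searching exhaustively (G4).

```python
import numpy as np

def register_shift(fixed, moving, max_shift=20):
    """Recover the integer shift aligning `moving` to `fixed` by
    maximizing normalized correlation."""
    # G3: normalize the fixed profile once, outside the candidate loop.
    f = (fixed - fixed.mean()) / fixed.std()

    def score(shift):
        m = np.roll(moving, shift)
        m = (m - m.mean()) / m.std()
        return float((f * m).mean())  # normalized correlation

    # G4: coarse search first, then refine around the best candidate.
    coarse = max(range(-max_shift, max_shift + 1, 5), key=score)
    return max(range(coarse - 4, coarse + 5), key=score)

x = np.linspace(0, 6 * np.pi, 200)
fixed = np.sin(x) + 0.1 * np.random.default_rng(3).normal(size=200)
moving = np.roll(fixed, -7)  # misaligned copy of the fixed profile
print("recovered shift:", register_shift(fixed, moving))
```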