13 research outputs found

    NASA's aircraft icing technology program

    NASA's Aircraft Icing Technology program is aimed at developing innovative technologies for safe and efficient flight into forecast icing conditions. The program addresses the needs of all aircraft classes and supports both commercial and military applications. It is guided by three key strategic objectives: (1) numerically simulate an aircraft's response to an in-flight icing encounter, (2) provide improved experimental icing simulation facilities and testing techniques, and (3) offer innovative approaches to ice protection. Our research focuses on topics that directly support stated industry needs, and we work closely with industry to ensure a rapid and smooth transfer of technology. This paper presents selected results that illustrate progress toward the three strategic objectives, and it provides a comprehensive list of references on the NASA icing program.

    The NASA aircraft icing research program

    The objective of the NASA aircraft icing research program is to develop, and make available to industry, icing technology to support the needs and requirements of all-weather aircraft designs. Research is being done for both fixed-wing and rotary-wing applications. The NASA program emphasizes technology development in two areas: advanced ice protection concepts and icing simulation. Reviewed here are the computer code development/validation, icing wind tunnel testing, and icing flight testing efforts.

    Icing: Accretion, Detection, Protection

    The global aircraft industry and its regulatory agencies are currently involved in three major icing efforts: ground icing; advanced technologies for in-flight icing; and tailplane icing. These three major icing topics correspondingly support the three major segments of any aircraft flight profile: takeoff; cruise and hold; and approach and land. This lecture addresses these three topics in the same sequence as they appear in flight, starting with ground deicing, followed by advanced technologies for in-flight ice protection, and ending with tailplane icing.

    NASA's rotorcraft icing research program

    The objective of the NASA aircraft icing research program is to develop and make available icing technology to support the needs and requirements of industry for all-weather aircraft designs. While a majority of the technology being developed is viewed to be generic (i.e., appropriate to all vehicle classes), vehicle-specific emphasis is being placed on the helicopter due to its unique icing problems. In particular, some of the considerations for rotorcraft icing are indicated. The NASA icing research program emphasizes technology development in two key areas: ice protection concepts and icing simulation (analytical and experimental). The NASA research efforts related to rotorcraft icing in these two technology areas will be reviewed.

    The NASA aircraft icing research program

    The objective of the NASA aircraft icing research program is to develop, and make available to industry, icing technology to support the needs and requirements of all-weather aircraft designs. Research is being done for both fixed- and rotary-wing applications. The NASA program emphasizes technology development in two key areas: advanced ice protection concepts and icing simulation (analytical and experimental). The computer code development/validation, icing wind tunnel testing, and icing flight testing efforts which were conducted to support the icing technology development are reviewed.

    NASA's program on icing research and technology

    NASA's program in aircraft icing research and technology is reviewed. The program relies heavily on computer codes and modern applied physics technology in seeking icing solutions on a finer scale than those offered in earlier programs. Three major goals of this program are to offer new approaches to ice protection, to improve our ability to model the response of an aircraft to an icing encounter, and to provide improved techniques and facilities for ground and flight testing. This paper reviews the following program elements: (1) new approaches to ice protection; (2) numerical codes for deicer analysis; (3) measurement and prediction of ice accretion and its effect on aircraft and aircraft components; (4) special wind tunnel test techniques for rotorcraft icing; (5) improvements of icing wind tunnels and research aircraft; (6) ground deicing fluids used in winter operations; (7) fundamental studies in icing; and (8) droplet sizing instruments for icing clouds.

    Aerodynamic effects of deicing and anti-icing fluids

    No full text

    Accounting for training data error in machine learning applied to Earth Observations

    Remote sensing, or Earth Observation (EO), is increasingly used to understand Earth system dynamics and create continuous and categorical maps of biophysical properties and land cover, especially based on recent advances in machine learning (ML). ML models typically require large, spatially explicit training datasets to make accurate predictions. Training data (TD) are typically generated by digitizing polygons on high spatial-resolution imagery, by collecting in situ data, or by using pre-existing datasets. TD are often assumed to accurately represent the truth, but in practice almost always have error, stemming from (1) sample design, and (2) sample collection errors. The latter is particularly relevant for image-interpreted TD, an increasingly commonly used method due to its practicality and the increasing training sample size requirements of modern ML algorithms. TD errors can cause substantial errors in the maps created using ML algorithms, which may impact map use and interpretation. Despite these potential errors and their real-world consequences for map-based decisions, TD error is often not accounted for or reported in EO research. Here we review the current practices for collecting and handling TD. We identify the sources of TD error, and illustrate their impacts using several case studies representing different EO applications (infrastructure mapping, global surface flux estimates, and agricultural monitoring), and provide guidelines for minimizing and accounting for TD errors. To harmonize terminology, we distinguish TD from three other classes of data that should be used to create and assess ML models: training reference data, used to assess the quality of TD during data generation; validation data, used to iteratively improve models; and map reference data, used only for final accuracy assessment. 
We focus primarily on TD, but our advice is generally applicable to all four classes, and we ground our review in the established best-practices literature for map accuracy assessment. EO researchers should start by determining the tolerable levels of map error and appropriate error metrics. Next, TD error should be minimized during sample design by choosing a representative spatio-temporal collection strategy, by using spatially and temporally relevant imagery and ancillary data sources during TD creation, and by selecting a set of legend definitions supported by the data. Furthermore, TD error can be minimized during the collection of individual samples by using consensus-based collection strategies, by directly comparing interpreted training observations against expert-generated training reference data to derive TD error metrics, and by providing image interpreters with thorough application-specific training. We strongly advise that TD error is incorporated in model outputs, either directly in bias and variance estimates or, at a minimum, by documenting the sources and implications of error. TD should be fully documented and made available via an open TD repository, allowing others to replicate and assess its use. To guide researchers in this process, we propose three tiers of TD error accounting standards. Finally, we advise researchers to clearly communicate the magnitude and impacts of TD error on map outputs, with specific consideration given to the likely map audience.
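    The collection and error-accounting workflow the abstract describes can be sketched in a few lines: a consensus TD label is derived from multiple interpreters by majority vote, then the resulting TD is compared against expert-generated training reference data to obtain an overall agreement rate and per-class error rates. This is a minimal illustration under assumed inputs, not the authors' implementation; all class names and labels below are hypothetical.

```python
from collections import Counter

def consensus_label(interpreter_labels):
    """Majority-vote consensus across several interpreters' labels for one sample.

    Returns the winning label and the fraction of interpreters who agreed,
    a simple per-sample confidence that consensus-based collection provides.
    """
    counts = Counter(interpreter_labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(interpreter_labels)

def td_error_metrics(td_labels, reference_labels):
    """Compare TD labels against expert training reference data.

    Returns overall agreement and a per-class error rate, the kind of
    TD error metric the abstract recommends deriving and reporting.
    """
    assert len(td_labels) == len(reference_labels)
    n = len(td_labels)
    correct = sum(t == r for t, r in zip(td_labels, reference_labels))
    per_class_error = {}
    for cls in set(reference_labels):
        idx = [i for i, r in enumerate(reference_labels) if r == cls]
        errors = sum(td_labels[i] != cls for i in idx)
        per_class_error[cls] = errors / len(idx)
    return correct / n, per_class_error

# Hypothetical example: three interpreters label three samples,
# and experts provide training reference data for the same samples.
interpretations = [
    ["crop", "crop", "forest"],    # sample 0
    ["water", "water", "water"],   # sample 1
    ["crop", "forest", "forest"],  # sample 2
]
td = [consensus_label(labels)[0] for labels in interpretations]
reference = ["crop", "water", "crop"]
overall, by_class = td_error_metrics(td, reference)
```

    In this sketch the consensus TD is ["crop", "water", "forest"], so overall agreement with the reference is 2/3 and the "crop" class carries all of the error; in practice the same comparison would feed the bias/variance estimates or error documentation the authors advise including with model outputs.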
