
    Mitigations to Reduce the Law of Unintended Consequences for Autonomy and Other Technological Advances

    The United Nations states that Earth's population is expected to reach just under 10 billion people (9.7 billion) by the year 2050. To meet the demands of 10 billion people, governments, multinational corporations and global leaders are relying on autonomy and technological advances to augment and/or accommodate human efforts to meet the required needs of daily living. Genetically modified organisms (GMOs), Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) gene-edited plants and cloning will be utilized to expand the human food supply. Biomimetic implants are expected to improve life expectancy with 3D-printed body parts. Human functioning will be extended with wearables and cybernetic implants, continuing humanity's path toward transhumanism. Families will be strengthened with three-parent households. Disease will surely be eradicated using the CRISPR-Cas9 genetic engineering revolution to design out undesirable human traits and to design in new capabilities. With autonomous cars, trucks and buses on our roads, on-demand autonomous aircraft delivering pizzas, medical prescriptions and groceries in the air, and multi-planet vehicles traversing space, utopia will finally arrive! Or will it? All of these powerful, man-made technological systems will experience unintended consequences with certainty. Instead of over-reacting with hysteria and fear, we should be seeking answers to the following questions: What skills are required to architect socially healthy technological systems for 2050? What mindsets should we embody to ameliorate hubris syndrome and to build our future technological systems with deliberation, soberness and social responsibility?

    Revealing the ISO/IEC 9126-1 Clique Tree for COTS Software Evaluation

    Previous research has shown that acyclic dependency models, if they exist, can be extracted from software quality standards and that these models can be used to assess software safety and product quality. In the case of commercial off-the-shelf (COTS) software, the extracted dependency model can be used in a probabilistic Bayesian network context for COTS software evaluation. Furthermore, while experts typically employ Bayesian networks to encode domain knowledge, secondary structures (clique trees) derived from Bayesian network graphs can be used to determine the probability distribution of any software variable (attribute) using any clique that contains that variable. Secondary structures, therefore, provide insight into the fundamental nature of graphical networks. This paper will apply secondary-structure calculations to reveal the clique tree of the acyclic dependency model extracted from the ISO/IEC 9126-1 software quality standard. Suggestions will be provided to describe how the clique tree may be exploited to aid efficient transformation of an evaluation model.
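
    As a rough, hypothetical illustration of the secondary-structure idea (not the ISO/IEC 9126-1 dependency model analyzed in the paper), the sketch below moralizes and triangulates a small invented dependency DAG, enumerates its maximal cliques, and joins them with a maximum-weight spanning tree over separator sizes to obtain a clique tree. The attribute names, edges, and the use of the networkx library are assumptions made for illustration only.

        # Hypothetical sketch: derive a clique tree (junction tree) from a small
        # acyclic dependency model. The attributes and edges below are invented;
        # they are NOT the ISO/IEC 9126-1 model extracted in the paper.
        import itertools
        import networkx as nx

        # Toy acyclic dependency model: sub-characteristics feed characteristics.
        dag = nx.DiGraph([
            ("suitability", "functionality"),
            ("accuracy", "functionality"),
            ("maturity", "reliability"),
            ("functionality", "quality_in_use"),
            ("reliability", "quality_in_use"),
        ])

        # 1. Moralize: connect parents of a common child, drop edge directions.
        moral = nx.moral_graph(dag)

        # 2. Triangulate so every cycle of length >= 4 has a chord.
        chordal, _ = nx.complete_to_chordal_graph(moral)

        # 3. Maximal cliques of the chordal graph become the clique-tree nodes.
        cliques = list(nx.chordal_graph_cliques(chordal))

        # 4. Weight candidate edges by separator size and keep a maximum-weight
        #    spanning tree; the result is a valid clique tree.
        jt = nx.Graph()
        jt.add_nodes_from(cliques)
        for a, b in itertools.combinations(cliques, 2):
            sep = len(a & b)
            if sep:
                jt.add_edge(a, b, weight=sep)
        clique_tree = nx.maximum_spanning_tree(jt)

        for u, v in clique_tree.edges:
            print(sorted(u), "--", sorted(v), "separator:", sorted(u & v))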

    How Arts Integration Supports Student Learning: Students Shed Light on the Connections (Full report)

    Learning in and with the arts has been linked with increased student achievement, but the means by which the arts may support cognitive growth in students remain relatively undocumented. Thirty students across ten classes in veteran teacher-artist partnerships were selected to help explore the processes and outcomes associated with arts-integrated learning units and to compare them with the learning processes and outcomes in comparable non-arts units.

    Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    This paper examines various sources of error in MIT's improved model of top-oil temperature rise over ambient temperature and in its estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, yielded the best error covariance, produced consistent parameter estimates, and provided valid and sensible parameter values. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
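
    As a hedged sketch of the distinction at issue (not MIT's actual improved model, the paper's estimation code, or the in-service transformer data), the following compares an equation-error least-squares fit with an output-error fit on a generic first-order top-oil temperature rise model driven by per-unit load. All parameter values and data are synthetic, and numpy/scipy are assumed to be available.

        # Sketch only: generic first-order top-oil rise model
        #     d(theta)/dt = (theta_fl * K(t)**2 - theta) / tau
        # with synthetic data; not the paper's model or transformer records.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        dt, n = 60.0, 600                      # 60 s samples, 10 hours
        t = np.arange(n) * dt
        K = 0.6 + 0.4 * (t > t[n // 2])        # per-unit load step at mid-record
        tau_true, theta_fl_true = 3 * 3600.0, 45.0

        def simulate(tau, theta_fl):
            """Forward-Euler simulation of the top-oil rise model."""
            theta = np.zeros(n)
            for k in range(n - 1):
                theta[k + 1] = theta[k] + dt * (theta_fl * K[k] ** 2 - theta[k]) / tau
            return theta

        theta_meas = simulate(tau_true, theta_fl_true) + rng.normal(0, 0.5, n)

        # Equation-error (linear least squares): regress the discretized equation
        # theta[k+1] = a*theta[k] + b*K[k]**2. Measurement noise enters the
        # regressors, which is what can bias this estimator.
        A = np.column_stack([theta_meas[:-1], K[:-1] ** 2])
        a, b = np.linalg.lstsq(A, theta_meas[1:], rcond=None)[0]
        tau_ls, theta_fl_ls = dt / (1.0 - a), b / (1.0 - a)

        # Output-error: simulate the model and minimize the output residuals.
        res = least_squares(
            lambda p: simulate(p[0], p[1]) - theta_meas,
            x0=[3600.0, 30.0],
            bounds=([60.0, 1.0], [1e6, 200.0]),
        )
        tau_oe, theta_fl_oe = res.x

        print(f"least squares : tau={tau_ls/3600:.2f} h, theta_fl={theta_fl_ls:.1f} C")
        print(f"output error  : tau={tau_oe/3600:.2f} h, theta_fl={theta_fl_oe:.1f} C")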

    Comparison of System Identification Techniques for the Hydraulic Manipulator Test Bed (HMTB)

    In this thesis, linear, dynamic, multivariable state-space models for three joints of the ground-based Hydraulic Manipulator Test Bed (HMTB) are identified. HMTB, housed at the NASA Langley Research Center, is a ground-based version of the Dexterous Orbital Servicing System (DOSS), a representative space station manipulator. The dynamic models of the HMTB manipulator will first be estimated by applying nonparametric identification methods to determine each joint's response characteristics using various input excitations. These excitations include sum-of-sinusoids, pseudorandom binary sequence (PRBS), bipolar ramping pulse, and chirp input signals. Next, two different parametric system identification techniques will be applied to identify the best dynamical description of the joints. The manipulator is localized about a representative space station orbital replacement unit (ORU) task, allowing the use of linear system identification methods. Comparisons, observations, and results of both parametric system identification techniques are discussed. The thesis concludes by proposing a model reference control system to aid in astronaut ground tests. This approach would allow the identified models to mimic the on-orbit dynamic characteristics of the actual flight manipulator, thus providing astronauts with realistic on-orbit responses when performing space station tasks in a ground-based environment.
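
    The sketch below illustrates, in a hedged way, the general workflow described above: excite a joint with a PRBS-like input and fit a discrete-time parametric (ARX) model by least squares. The simulated second-order "joint", the model orders, and the noise level are invented; this is not the HMTB hardware, its data, or the thesis's actual identification code, and numpy is assumed to be available.

        # Hypothetical sketch: PRBS excitation plus an ARX least-squares fit.
        # The simulated joint below is invented, not the HMTB manipulator.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000

        # PRBS-like excitation: +/-1 levels, each held for 5 samples.
        u = np.repeat(rng.choice([-1.0, 1.0], size=n // 5), 5)

        # Simulated joint: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + noise
        a1_true, a2_true, b1_true = 1.6, -0.64, 0.05
        y = np.zeros(n)
        for k in range(2, n):
            y[k] = (a1_true * y[k - 1] + a2_true * y[k - 2]
                    + b1_true * u[k - 1] + rng.normal(0, 1e-3))

        # ARX(2,1) fit: stack lagged outputs/inputs, solve in a least-squares sense.
        Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
        a1, a2, b1 = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
        print(f"estimated: a1={a1:.3f}, a2={a2:.3f}, b1={b1:.3f}")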

    Thermal enclosure system functional simulation user's manual

    A form and function simulation of the thermal enclosure system (TES) for a microgravity protein crystal growth experiment has been developed as part of an investigation of the benefits and limitations of intravehicular telerobotics to aid in microgravity science and production. A user can specify the time, temperature, and sample rate profile for a given experiment, and menu options and status are presented on an LCD display. This report describes the features and operational procedures for the functional simulation
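
    Purely as an illustration of the kind of user-specified profile described above (not the actual TES software, its menus, or its interfaces), a time/temperature/sample-rate profile might be represented and stepped through as follows; all names and values are hypothetical.

        # Hypothetical sketch of a time/temperature/sample-rate profile and a
        # playback loop. Field names and segment values are invented; this is
        # not the TES functional simulation itself.
        from dataclasses import dataclass

        @dataclass
        class Segment:
            duration_s: float        # how long this segment lasts
            setpoint_c: float        # commanded enclosure temperature, deg C
            sample_period_s: float   # how often temperature is logged

        profile = [
            Segment(duration_s=600.0, setpoint_c=22.0, sample_period_s=10.0),
            Segment(duration_s=3600.0, setpoint_c=4.0, sample_period_s=60.0),
        ]

        def run_profile(profile):
            """Yield (time, setpoint) log entries for each segment in turn."""
            t = 0.0
            for seg in profile:
                t_end = t + seg.duration_s
                while t < t_end:
                    yield t, seg.setpoint_c   # a real TES would also read sensors
                    t += seg.sample_period_s
                t = t_end

        log = list(run_profile(profile))
        print(f"{len(log)} samples commanded, final setpoint {log[-1][1]} C")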

    Nature Versus Nurture: Luminous Blue Variable Nebulae in and near Massive Stellar Clusters at the Galactic Center

    Three Luminous Blue Variables (LBVs) are located in and near the Quintuplet Cluster at the Galactic Center: the Pistol Star, G0.120-0.048, and qF362. We present imaging at 19, 25, 31, and 37 µm of the region containing these three LBVs, obtained with SOFIA using FORCAST. We argue that the Pistol and G0.120-0.048 are identical "twins" that exhibit contrasting nebulae due to the external influence of their different environments. Our images reveal the asymmetric, compressed shell of hot dust surrounding the Pistol Star and provide the first detection of the thermal emission from the symmetric, hot dust envelope surrounding G0.120-0.048. Dust and gas composing the Pistol nebula are primarily heated and ionized by the nearby Quintuplet Cluster stars. The northern region of the Pistol nebula is decelerated due to the interaction with the high-velocity (2000 km/s) winds from adjacent Wolf-Rayet carbon (WC) stars. With the DustEM code we determine that the Pistol nebula is composed of a distribution of very small, transiently heated grains (10 to ~35 Å) and that it exhibits a gradient of decreasing grain size from the south to the north due to differential sputtering by the winds from the WC stars. Dust in the G0.120-0.048 nebula is primarily heated by the central star; however, the nebular gas is ionized externally by the Arches Cluster. Unlike the Pistol nebula, the G0.120-0.048 nebula is freely expanding into the surrounding medium. Given independent dust and gas mass estimates, we find that the Pistol and G0.120-0.048 nebulae exhibit similar gas-to-dust mass ratios of ~310 and ~290, respectively. Both nebulae share identical size scales (~0.7 pc), which suggests that they have similar dynamical timescales of ~10^4 yr, assuming a shell expansion velocity of v_exp ~ 60 km/s.
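
    The quoted dynamical timescale is simply the shell size scale divided by the expansion velocity; as a quick order-of-magnitude check using only the values given in the abstract (a sketch, treating the ~0.7 pc size scale as the relevant length):

        \[
          t_{\mathrm{dyn}} \approx \frac{R}{v_{\mathrm{exp}}}
          = \frac{0.7\ \mathrm{pc}}{60\ \mathrm{km\ s^{-1}}}
          \approx \frac{2.2 \times 10^{13}\ \mathrm{km}}{60\ \mathrm{km\ s^{-1}}}
          \approx 3.6 \times 10^{11}\ \mathrm{s}
          \approx 1 \times 10^{4}\ \mathrm{yr}
        \]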

    Risk Acceptance Personality Paradigm: How We View What We Don't Know We Don't Know

    The purpose of integrated hazard analyses, probabilistic risk assessments, failure modes and effects analyses, fault trees and many other similar tools is to give the managers of a program some idea of the risks associated with their program. All risk tools establish a set of undesired events and then try to evaluate the risk to the program by assessing the severity of each undesired event and the likelihood of that event occurring. Some tools provide qualitative results, some provide quantitative results and some do both. However, in the end the program manager and his/her team must decide which risks are acceptable and which are not. Even with a wide array of analysis tools available, risk acceptance is often a controversial and difficult decision-making process. And yet, today's space exploration programs are moving toward more risk-based design approaches. Thus, risk identification and good risk assessment are becoming even more vital to the engineering development process. This paper explores how known and unknown information influences risk-based decisions by looking at how the various parts of our personalities are affected by what they know and what they don't know. This paper then offers some criteria for consideration when making risk-based decisions.
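
    The common core of the tools listed above is a severity-by-likelihood judgment for each undesired event. The following is a minimal, hypothetical sketch of that scoring step; the 5x5 scale, thresholds, and example events are invented and do not reproduce any particular NASA risk matrix.

        # Hypothetical sketch of severity-by-likelihood risk scoring. The scale,
        # thresholds, and events are invented, not any program's actual matrix.
        def risk_rating(severity, likelihood):
            """Return a qualitative rating for 1-5 severity/likelihood scores."""
            score = severity * likelihood
            if score >= 15:
                return "high (unacceptable without mitigation)"
            if score >= 6:
                return "medium (requires program-manager acceptance)"
            return "low (acceptable, monitor)"

        undesired_events = {
            "loss of vehicle attitude control": (5, 2),
            "ground data outage during test":   (2, 4),
        }
        for event, (sev, lik) in undesired_events.items():
            print(f"{event}: {risk_rating(sev, lik)}")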

    On the Transition and Migration of Flight Functions in the Airspace System

    From roughly 400 BC, when man first replicated flying behavior with kites, to the turn of the 20th century, when the Wright brothers performed the first successful powered human flight, flight functions became available to man through significant support from man-made structures and devices. Over the past 100 years or so, technology has enabled several flight functions to migrate to automation and/or decision support systems. This migration continues with the United States' NextGen and Europe's Single European Sky (a.k.a. SESAR) initiatives. These overhauls of the airspace system will be accomplished by accommodating the functional capabilities, benefits, and limitations of technology and automation together with the unique and sometimes overlapping functional capabilities, benefits, and limitations of humans. This paper will discuss how a safe and effective migration of any flight function must consider several interrelated issues, including, for example, shared situation awareness and automation addiction, i.e., over-reliance on automation. A long-term philosophical perspective is presented that considers all of these issues by asking two primary questions: How does one find an acceptable level of risk tolerance when allocating functions to automation versus humans? How does one measure or predict, with confidence, what the risks will be? These questions and others will be considered from the two most-discussed paradigms involving the use of increasingly complex systems in the future: humans as operators and humans as monitors.

    The Integrated Hazard Analysis Integrator

    Hazard analysis addresses hazards that arise in the design, development, manufacturing, construction, facilities, transportation, operations and disposal activities associated with hardware, software, maintenance, operations and environments. An integrated hazard is an event or condition that is caused by or controlled by multiple systems, elements, or subsystems. Integrated hazard analysis (IHA) is especially daunting and ambitious for large, complex systems such as NASA's Constellation program, which incorporates program, system and element components that impact others (the International Space Station, the public, International Partners, etc.). An appropriate IHA should identify all hazards, causes, controls and verifications used to mitigate the risk of catastrophic loss of crew, vehicle and/or mission. Unfortunately, in the current age of increased technology dependence, there is a tendency to overlook the necessary and sufficient qualifications of the integrator, that is, the person or team that identifies the parts, analyzes the architectural structure, aligns the analysis with the program plan and then communicates and coordinates with large and small components, each contributing necessary hardware, software and/or information to prevent catastrophic loss. As seen in both the Challenger and Columbia accidents, lack of appropriate communication, management errors and lack of resources dedicated to safety were cited as major contributors to those fatalities. From the accident reports, it would appear that the organizational impact of managers, integrators and safety personnel contributes more significantly to mission success and mission failure than purely technological components. If this is so, then organizations that sincerely desire mission success must put as much effort into selecting managers and integrators as they do into designing the hardware, writing the software code and analyzing competitive proposals. This paper will discuss the necessary and sufficient requirements of one of the significant contributors to mission success, the IHA integrator. Discussions will be provided to describe both the mindset required and the deleterious assumptions and behaviors to avoid when integrating within a large-scale system.
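
    As a minimal, hypothetical sketch of the bookkeeping an IHA integrator must keep consistent across organizational boundaries (not a NASA or Constellation data model), an integrated hazard record might link each cause to its owning element and to the controls and verifications that close it; the field names and example entries below are invented.

        # Hypothetical integrated-hazard record: one hazard, causes owned by
        # different elements, and the controls/verifications that close them.
        # Field names and entries are invented; this is not a NASA schema.
        from dataclasses import dataclass, field

        @dataclass
        class Cause:
            description: str
            owning_element: str
            controls: list = field(default_factory=list)
            verifications: list = field(default_factory=list)

        @dataclass
        class IntegratedHazard:
            title: str
            severity: str
            causes: list = field(default_factory=list)

            def open_causes(self):
                """Causes lacking a control or a verification -- the items an
                integrator must chase across element boundaries."""
                return [c for c in self.causes
                        if not (c.controls and c.verifications)]

        hazard = IntegratedHazard(
            title="Inadvertent release during mated operations",
            severity="catastrophic",
            causes=[
                Cause("Spurious separation command", "avionics element",
                      controls=["two-fault-tolerant inhibit chain"],
                      verifications=["inhibit circuit test"]),
                Cause("Structural latch failure", "structures element"),  # open
            ],
        )
        print([c.description for c in hazard.open_causes()])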