
    Research and technology, 1986

    The mission of the NASA Langley Research Center is to increase the knowledge and capability of the United States in a full range of aeronautics disciplines and in selected space disciplines. This mission will be accomplished by: performing innovative research relevant to national needs and Agency goals; transferring technology to users in a timely manner; and providing development support to other United States Government agencies, industry, and the NASA centers. This report contains highlights of the major accomplishments and applications made during the past year. The highlights illustrate both the broad range of the research and technology activities at the NASA Langley Research Center and the contributions of this work toward maintaining United States leadership in aeronautics and space research.

    Biomedical applications of belief networks

    Biomedicine is an area in which computers have long been expected to play a significant role. Although many of the early claims have proved unrealistic, computers are gradually becoming accepted in the biomedical, clinical and research environment. Within these application areas, expert systems appear to have met with the most resistance, especially when applied to image interpretation. In order to improve the acceptance of computerised decision support systems it is necessary to provide the information needed to make rational judgements concerning the inferences the system has made. This entails an explanation of what inferences were made, how the inferences were made and how the results of the inference are to be interpreted. Furthermore, there must be a consistent approach to the combining of information from low level computational processes through to high level expert analyses. Until recently, ad hoc formalisms were seen as the only tractable approach to reasoning under uncertainty. A review of some of these formalisms suggests that they are less than ideal for the purposes of decision making. Belief networks provide a tractable way of utilising probability theory as an inference formalism by combining the theoretical consistency of probability for inference and decision making with the ability to use the knowledge of domain experts. The potential of belief networks in biomedical applications has already been recognised, and there has been substantial research into the use of belief networks for medical diagnosis and methods for handling large, interconnected networks. In this thesis the use of belief networks is extended to include detailed image model matching to show how, in principle, feature measurement can be undertaken in a fully probabilistic way. The belief networks employed are usually cyclic and have strong influences between adjacent nodes, so new techniques for probabilistic updating based on a model of the matching process have been developed. An object-orientated inference shell called FLAPNet has been implemented and used to apply the belief network formalism to two application domains. The first application is model-based matching in fetal ultrasound images. The imaging modality and biological variation in the subject make model matching a highly uncertain process. A dynamic, deformable model, similar to active contour models, is used. A belief network combines constraints derived from local evidence in the image with global constraints derived from trained models to control the iterative refinement of an initial model cue. In the second application a belief network is used for the incremental aggregation of evidence occurring during the classification of objects on a cervical smear slide as part of an automated pre-screening system. A belief network provides both an explicit domain model and a mechanism for the incremental aggregation of evidence, two attributes important in pre-screening systems. Overall, it is argued that belief networks combine the necessary quantitative features required of a decision support system with desirable qualitative features that will lead to improved acceptability of expert systems in the biomedical domain.
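
    To make the probabilistic updating that belief networks rely on concrete, the following is a minimal sketch of exact inference by enumeration in a toy two-child network. The structure (a single hidden "disease" node with two observable children) and all probability values are invented for illustration; they are not taken from the thesis or its FLAPNet shell.

    # Minimal sketch: exact inference by enumeration in a tiny discrete
    # belief network. Structure and numbers are hypothetical placeholders.

    # Prior over the hidden node.
    p_disease = {True: 0.01, False: 0.99}

    # Conditional probability tables for two observable children.
    p_feature_given_disease = {True: 0.85, False: 0.10}  # P(feature present | disease state)
    p_test_given_disease = {True: 0.90, False: 0.05}     # P(test positive | disease state)

    def posterior(feature_present, test_positive):
        """P(disease | evidence), by enumerating the single hidden variable."""
        unnorm = {}
        for d in (True, False):
            like_f = p_feature_given_disease[d] if feature_present else 1 - p_feature_given_disease[d]
            like_t = p_test_given_disease[d] if test_positive else 1 - p_test_given_disease[d]
            unnorm[d] = p_disease[d] * like_f * like_t
        z = sum(unnorm.values())
        return {d: v / z for d, v in unnorm.items()}

    if __name__ == "__main__":
        print(posterior(feature_present=True, test_positive=True))   # belief in disease rises sharply
        print(posterior(feature_present=True, test_positive=False))  # conflicting evidence leaves belief near the prior

    Inference in the cyclic, densely connected networks used in the thesis requires the specialised updating schemes it develops; enumeration is shown only because it makes the role of the conditional probability tables explicit.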

    Stochastic Real-time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization

    A method is presented to couple and solve the optimal control and the optimal estimation problems simultaneously, allowing systems with bearing-only sensors to maneuver to obtain observability for relative navigation without unnecessarily detracting from a primary mission. A fundamentally new approach to trajectory optimization and the dual control problem is developed, constraining polynomial approximations of the Fisher Information Matrix to provide an information gradient and allow prescription of the level of future estimation certainty required for mission accomplishment. Disturbances, modeling deficiencies, and corrupted measurements are addressed with recursive updating of the target estimate with an Unscented Kalman Filter and the optimal path with Radau pseudospectral collocation methods and sequential quadratic programming. The basic real-time optimal control (RTOC) structure is investigated, specifically addressing limitations of current techniques in this area that lose error integration. The resulting guidance method can be applied to any bearing-only system, such as submarines using passive sonar, anti-radiation missiles, or small UAVs seeking to land on power lines for energy harvesting. Methods and tools required for implementation are developed, including variable calculation timing and tip-tail blending for potential discontinuities. Validation is accomplished with simulation and flight test, autonomously landing a quadrotor helicopter on a wire.
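
    The observability reasoning behind bearing-only maneuvering can be sketched in a few lines: accumulate the Fisher Information Matrix contributed by each bearing measurement along a candidate observer path and score the path by the log-determinant of that matrix. This is only a hedged illustration of the information metric; the dissertation constrains polynomial approximations of the FIM inside a Radau pseudospectral solver, which is not reproduced here, and the geometry, noise level, and candidate paths below are assumed values.

    # Sketch: bearing-only Fisher information accumulated along a candidate
    # observer path, scored by log-det. All numbers are illustrative assumptions.
    import numpy as np

    SIGMA = np.deg2rad(2.0)  # assumed 1-sigma bearing noise, radians

    def bearing_fim(observer_path, target):
        """Sum H^T R^-1 H over the path for measurements z = atan2(dy, dx)."""
        J = np.zeros((2, 2))
        for ox, oy in observer_path:
            dx, dy = target[0] - ox, target[1] - oy
            r2 = dx * dx + dy * dy
            H = np.array([[-dy / r2, dx / r2]])  # Jacobian of the bearing w.r.t. target position
            J += H.T @ H / SIGMA**2
        return J

    target = np.array([1000.0, 500.0])
    straight = [(x, 0.0) for x in np.linspace(0.0, 400.0, 40)]                      # non-maneuvering leg
    weaving = [(x, 150.0 * np.sin(x / 60.0)) for x in np.linspace(0.0, 400.0, 40)]  # maneuvering leg

    for name, path in [("straight", straight), ("weaving", weaving)]:
        sign, logdet = np.linalg.slogdet(bearing_fim(path, target))
        print(name, "log det FIM =", round(logdet, 2))

    A trajectory optimizer of the kind described above would treat such an information measure (or a polynomial approximation of the FIM) as a constraint or objective term while also penalizing deviation from the primary mission path.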

    Recent Experiences in Multidisciplinary Analysis and Optimization, part 2

    The papers presented at the NASA Symposium on Recent Experiences in Multidisciplinary Analysis and Optimization, held at NASA Langley Research Center, Hampton, Virginia, April 24 to 26, 1984, are given. The purposes of the symposium were to exchange information about the status of the application of optimization and the associated analyses in industry or research laboratories to real life problems and to examine the directions of future developments.

    Model-Based Design of Cancer Chemotherapy Treatment Schedules

    Cancer is the name given to a class of diseases characterized by an imbalance in cell proliferation and apoptosis, or programmed cell death. Once cancer has reached detectable sizes (10^6 cells or 1 mm^3), it is assumed to have spread throughout the body, and a systemic form of treatment is needed. Chemotherapy treatment is commonly used, and it affects both healthy and diseased tissue. This creates a dichotomy for clinicians, who need to develop treatment schedules which balance toxic side effects with treatment efficacy. Nominally, the optimal treatment schedule, where schedule is defined as the amount and frequency of drug delivered, is the one found to be the most efficacious from the set evaluated during clinical trials. In this work, a model-based approach for developing drug treatment schedules was developed. Cancer chemotherapy modeling is typically segregated into drug pharmacokinetics (PK), describing drug distribution throughout an organism, and pharmacodynamics (PD), which delineates cellular proliferation and drug effects on the organism. This work considers two case studies: (i) a preclinical study of the oral administration of the antitumor agent 9-nitrocamptothecin (9NC) to severe combined immunodeficient (SCID) mice bearing subcutaneously implanted HT29 human colon xenografts; and (ii) a theoretical study of intravenous chemotherapy from the engineering literature. Metabolism of 9NC yields the active metabolite 9-aminocamptothecin (9AC). Both 9NC and 9AC exist in active lactone and inactive carboxylate forms. Four different PK model structures are presented to describe the plasma disposition of 9NC and 9AC: three linear models at a single dose level (0.67 mg/kg 9NC), and a nonlinear model for the dosing range 0.44-1.0 mg/kg 9NC. Untreated tumor growth was modeled using two approaches: (i) exponential growth; and (ii) a switched exponential model transitioning between two different rates of exponential growth at a critical size. All of the PK/PD models considered here have bilinear kill terms which decrease tumor sizes at rates proportional to the effective drug concentration and the current tumor size. The PK/PD model combining the best linear PK model with exponential tumor growth accurately characterized tumor responses in ten experimental mice administered 0.67 mg/kg of 9NC on a Monday-to-Friday schedule for two weeks, repeated every four weeks. The nonlinear PK model of 9NC coupled to the switched exponential PD model accurately captured the tumor response data at multiple dose levels. Each dosing problem was formulated as a mixed-integer linear programming (MILP) problem, which guarantees globally optimal solutions. When minimizing the tumor volume at a specified final time, the MILP algorithm delivered as much drug as possible at the end of the treatment window (up to the cumulative toxicity constraint). While numerically optimal, it was found that an exponentially growing tumor, with bilinear kill driven by linear PK, would experience the same decrease in tumor volume at a final time regardless of when the drug was administered, as long as the same amount was administered. An alternate objective function was selected to minimize tumor volume along a trajectory. This is more clinically relevant in that it better represents the objective of the clinician (eliminate the diseased tissue as rapidly as possible). This resulted in a treatment schedule which eliminated the tumor burden more rapidly, and this schedule can be evaluated recursively at the end of each cycle for efficacy and toxicity, as per current clinical practice. The second case study consists of an intravenously administered drug with first-order elimination treating a tumor under Gompertzian growth. This system was also formulated as a MILP, and the two different objectives above were considered. The first objective was minimizing the tumor volume at a final time, the objective the original authors considered. The MILP solution was qualitatively similar to the solutions originally found using control vector parameterization techniques. This solution also attempted to administer as much drug as possible at the end of the treatment interval. The problem was then posed as a receding horizon trajectory tracking problem. Once again, a more clinically relevant objective returned promising results; the tumor burden was rapidly eliminated.
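
    The scheduling observation above, that an exponentially growing tumor with a bilinear kill term driven by linear PK ends up at essentially the same volume for any schedule delivering the same total dose, can be checked with a short simulation. This is a hedged sketch with invented rate constants and schedules; it is not the fitted 9NC/HT29 model or the MILP formulation from the thesis.

    # Sketch: one-compartment PK with first-order elimination, exponential tumor
    # growth with a bilinear kill term, compared for an early and a late schedule
    # that deliver the same total dose. All parameter values are assumed.
    import numpy as np

    k_g, k_kill, k_el = 0.05, 0.8, 1.5  # growth, kill, elimination rates (1/day), assumed
    dt, T = 0.01, 28.0                  # Euler step and treatment window (days)
    t = np.arange(0.0, T, dt)

    def simulate(dose_days, dose_per_admin, n0=1.0):
        """Euler-integrate drug concentration C and tumor burden N under bolus dosing."""
        C, N = 0.0, n0
        doses = dict.fromkeys(dose_days, dose_per_admin)
        for ti in t:
            key = round(ti, 2)
            if key in doses:
                C += doses.pop(key)              # bolus at the start of each dosing day
            C -= k_el * C * dt                   # first-order elimination
            N += (k_g - k_kill * C) * N * dt     # exponential growth with bilinear kill
        return N

    early = simulate(dose_days=[0.0, 1.0, 2.0, 3.0, 4.0], dose_per_admin=1.0)
    late = simulate(dose_days=[14.0, 15.0, 16.0, 17.0, 18.0], dose_per_admin=1.0)
    print(f"final burden, early schedule: {early:.3f}")
    print(f"final burden, late schedule:  {late:.3f}")  # nearly identical: same total dose, same exposure

    Minimizing the tumor trajectory rather than only its final value breaks this indifference, which is why the trajectory-tracking objective described above yields schedules that clear the burden earlier.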

    Adaptive estimation and change detection of correlation and quantiles for evolving data streams

    Streaming data processing is increasingly playing a central role in enterprise data architectures due to an abundance of available measurement data from a wide variety of sources and advances in data capture and infrastructure technology. Data streams arrive, with high frequency, as never-ending sequences of events, where the underlying data generating process always has the potential to evolve. Business operations often demand real-time processing of data streams for keeping models up-to-date and timely decision-making. For example in cybersecurity contexts, analysing streams of network data can aid the detection of potentially malicious behaviour. Many tools for statistical inference cannot meet the challenging demands of streaming data, where the computational cost of updates to models must be constant to ensure continuous processing as data scales. Moreover, these tools are often not capable of adapting to changes, or drift, in the data. Thus, new tools for modelling data streams with efficient data processing and model updating capabilities, referred to as streaming analytics, are required. Regular intervention for control parameter configuration is prohibitive to the truly continuous processing constraints of streaming data. There is a notable absence of such tools designed with both temporal-adaptivity to accommodate drift and the autonomy to not rely on control parameter tuning. Streaming analytics with these properties can be developed using an Adaptive Forgetting (AF) framework, with roots in adaptive filtering. The fundamental contributions of this thesis are to extend the streaming toolkit by using the AF framework to develop autonomous and temporally-adaptive streaming analytics. The first contribution uses the AF framework to demonstrate the development of a model, and validation procedure, for estimating time-varying parameters of bivariate data streams from cyber-physical systems. This is accompanied by a novel continuous monitoring change detection system that compares adaptive and non-adaptive estimates. The second contribution is the development of a streaming analytic for the correlation coefficient and an associated change detector to monitor changes to correlation structures across streams. This is demonstrated on cybersecurity network data. The third contribution is a procedure for estimating time-varying binomial data with thorough exploration of the nuanced behaviour of this estimator. The final contribution is a framework to enhance extant streaming quantile estimators with autonomous, temporally-adaptive properties. In addition, a novel streaming quantile procedure is developed and demonstrated, in an extensive simulation study, to show appealing performance.
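
    A minimal sketch of the constant-time update structure that such streaming analytics build on is shown below: an exponentially weighted correlation estimate with a fixed forgetting factor. The thesis's Adaptive Forgetting framework goes further by tuning the forgetting factor itself from the stream, so this fixed-lambda version only illustrates the update pattern, on invented data.

    # Sketch: O(1)-per-observation streaming correlation with a fixed forgetting
    # factor lambda. The adaptive tuning of lambda described in the thesis is omitted.
    import random

    class ForgettingCorrelation:
        def __init__(self, lam=0.99):
            self.lam = lam                         # forgetting factor in (0, 1]
            self.w = 0.0                           # effective sample weight
            self.mx = self.my = 0.0                # exponentially weighted means
            self.sxx = self.syy = self.sxy = 0.0   # weighted (co)variance accumulators

        def update(self, x, y):
            """Constant-time update of the weighted moments for one (x, y) pair."""
            self.w = self.lam * self.w + 1.0
            a = 1.0 / self.w
            dx, dy = x - self.mx, y - self.my
            self.mx += a * dx
            self.my += a * dy
            self.sxx = self.lam * self.sxx + dx * (x - self.mx)
            self.syy = self.lam * self.syy + dy * (y - self.my)
            self.sxy = self.lam * self.sxy + dx * (y - self.my)

        def correlation(self):
            denom = (self.sxx * self.syy) ** 0.5
            return self.sxy / denom if denom > 0.0 else 0.0

    if __name__ == "__main__":
        random.seed(0)
        est = ForgettingCorrelation(lam=0.98)
        for i in range(2000):
            x = random.gauss(0.0, 1.0)
            y = x + random.gauss(0.0, 0.3) if i < 1000 else -x + random.gauss(0.0, 0.3)
            est.update(x, y)                # correlation structure flips sign mid-stream
        print(round(est.correlation(), 3))  # tracks the post-change, negative correlation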

    Statistical and image analysis methods and applications


    Optimization strategies through explicit modeling of the different actors influencing the design process

    The commercial success or failure of engineered systems has always been significantly affected by their interactions with competing designs, end users, and regulatory bodies. Designs which deliver too little performance, have too high a cost, or are deemed unsafe or harmful will inevitably be overcome by competing designs which better meet the needs of customers and society as a whole. Recent efforts to address these issues have led to techniques such as design for customers or design for market systems. In this dissertation, we seek to utilize a game theory framework in order to directly incorporate the effect of these interactions into a design optimization problem which seeks to maximize designer profitability. This approach allows designers to consider the effects of uncertainty from traditional design variabilities as well as from uncertain future market conditions, and the effect of customers and competitors acting as dynamic decision makers. Additionally, we develop techniques for modeling and understanding the nature of these complex interactions from observed data by utilizing causal models. Finally, we examine the complex effects of safety on design through the history of federal regulation of the transportation industry. These efforts lead to several key findings. First, by considering the effect of interactions, designers may choose vastly different design concepts than would otherwise be considered. This is demonstrated through several case studies with applications to the design of commercial transport aircraft. Secondly, we develop a novel method for selecting causal models which allows designers to gauge the level of confidence in their understanding of stakeholder interactions, including uncertainty in the impact of potential design changes. Finally, we demonstrate through our review of regulations and other safety improvements that the demand for safety improvement is not simply related to the ratio of dollars spent to lives saved; instead, the level of personal responsibility and the nature and scale of potential safety concerns are found to have a causal influence on the demand for increased safety in the form of new regulations.
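
    As a concrete, if highly simplified, illustration of folding competitor behaviour into a profit-maximizing design decision, the sketch below lets two competing designs set prices against a logit market-share model and iterates best responses toward an approximate equilibrium. The share model, quality levels, costs, and coefficients are invented for illustration and do not correspond to the dissertation's commercial transport aircraft case studies.

    # Sketch: two firms with fixed design "quality" levels choose prices to
    # maximize profit under a logit share model; simultaneous best responses
    # are iterated to an approximate Nash equilibrium. All values are assumed.
    import math

    MARKET = 1000.0                   # total units demanded (assumed)
    QUALITY = {"A": 1.0, "B": 1.2}    # fixed design performance levels (assumed)
    COST = {"A": 5.0, "B": 6.0}       # unit costs (assumed)
    ALPHA, BETA = 2.0, 0.4            # customer taste for quality, price sensitivity (assumed)

    def share(firm, prices):
        """Logit market share given both prices and the fixed quality levels."""
        u = {f: ALPHA * QUALITY[f] - BETA * prices[f] for f in prices}
        z = sum(math.exp(v) for v in u.values())
        return math.exp(u[firm]) / z

    def profit(firm, prices):
        return (prices[firm] - COST[firm]) * MARKET * share(firm, prices)

    def best_response(firm, prices):
        """Grid-search the price that maximizes this firm's profit, rival held fixed."""
        grid = [COST[firm] + 0.1 * k for k in range(1, 200)]
        return max(grid, key=lambda p: profit(firm, {**prices, firm: p}))

    prices = {"A": 8.0, "B": 8.0}
    for _ in range(50):               # iterate simultaneous best responses
        new = {f: best_response(f, prices) for f in prices}
        if all(abs(new[f] - prices[f]) < 1e-9 for f in prices):
            break
        prices = new
    print(prices, {f: round(profit(f, prices)) for f in prices})

    In the dissertation's framework the decision variables are design choices rather than a single price, and uncertainty in market conditions and competitor actions is modeled explicitly; the toy above only shows the basic coupling between one actor's optimum and the other actors' decisions.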