    Development of design allowable data for Celion 6000/LARC-160, graphite/polyimide composite laminates

    A design allowables test program was conducted on Celion 6000/LARC-160 graphite/polyimide composite to establish material performance over a 116 K (-250 °F) to 589 K (600 °F) temperature range. Tension, compression, in-plane shear and short-beam shear properties were determined for uniaxial, quasi-isotropic and ±45° laminates. Effects of thermal aging and moisture saturation on mechanical properties were also evaluated. Celion 6000/LARC-160 graphite/polyimide can be considered an acceptable material system for structural applications up to 589 K (600 °F).

    Technical note: Complexity–uncertainty curve (c-u-curve) – a method to analyse, classify and compare dynamical systems

    We propose and provide a proof of concept of a method to analyse, classify and compare dynamical systems of arbitrary dimension by two key features: uncertainty and complexity. It starts by subdividing the system's time trajectory into a number of time slices. For all values in a time slice, the Shannon information entropy is calculated, measuring within-slice variability. System uncertainty is then expressed by the mean entropy of all time slices. We define system complexity as “uncertainty about uncertainty” and express it by the entropy of the entropies of all time slices. Calculating and plotting uncertainty “u” and complexity “c” for many different numbers of time slices yields the c-u-curve. Systems can be analysed, compared and classified by the c-u-curve in terms of (i) its overall shape, (ii) mean and maximum uncertainty, (iii) mean and maximum complexity and (iv) characteristic timescale, expressed by the width of the time slice for which maximum complexity occurs. We demonstrate the method with both synthetic and real-world time series (constant, random noise, Lorenz attractor, precipitation and streamflow) and show that the shape and properties of the respective c-u-curve clearly reflect the particular characteristics of each time series. For the hydrological time series, we also show that the c-u-curve characteristics are in accordance with hydrological system understanding. We conclude that the c-u-curve method can be used to analyse, classify and compare dynamical systems. In particular, it can be used to classify hydrological systems into similar groups, a precondition for regionalization, and it can serve as a diagnostic measure and as an objective function in hydrological model calibration. Distinctive features of the method are (i) that it is based on unit-free probabilities, thus permitting application to any kind of data, (ii) that it is bounded, (iii) that it naturally expands from single-variate to multivariate systems, and (iv) that it is applicable to both deterministic and probabilistic value representations, permitting e.g. application to ensemble model predictions.
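    As a rough illustration of the procedure described above, the sketch below computes one (u, c) point per choice of slice count and collects them into a curve. It is a minimal reading of the abstract, not the authors' implementation; the bin count, the synthetic series and the set of slice counts are placeholder assumptions.

```python
# Hedged sketch of the c-u-curve idea: uncertainty u as the mean within-slice
# entropy, complexity c as the entropy of the slice entropies, both in bit.
import numpy as np

def shannon_entropy(values, bins):
    """Shannon entropy (in bit) of a 1-D sample, estimated by histogram binning."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def c_u_point(series, n_slices, bins=10):
    """One (u, c) point for a given number of time slices."""
    slices = np.array_split(np.asarray(series), n_slices)
    slice_entropies = [shannon_entropy(s, bins) for s in slices]
    u = np.mean(slice_entropies)                  # mean within-slice uncertainty
    c = shannon_entropy(slice_entropies, bins)    # "uncertainty about uncertainty"
    return u, c

# Example: trace out a c-u-curve for a noisy periodic series by varying the slice count.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 10_000)) + 0.1 * rng.standard_normal(10_000)
curve = [c_u_point(series, n) for n in (2, 5, 10, 20, 50, 100)]
```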

    Technical note: “Bit by bit”: a practical and general approach for evaluating model computational complexity vs. model performance

    One of the main objectives of the scientific enterprise is the development of well-performing yet parsimonious models for all natural phenomena and systems. In the 21st century, scientists usually represent their models, hypotheses, and experimental observations using digital computers. Measuring performance and parsimony of computer models is therefore a key theoretical and practical challenge for 21st century science. “Performance” here refers to a model’s ability to reduce predictive uncertainty about an object of interest. “Parsimony” (or complexity) comprises two aspects: descriptive complexity – the size of the model itself, which can be measured by the disk space it occupies – and computational complexity – the model’s effort to provide output. Descriptive complexity is related to inference quality and generality; computational complexity is often a practical and economic concern for limited computing resources. In this context, this paper has two distinct but related goals. The first is to propose a practical method of measuring computational complexity with the utility software “Strace”, which counts the total number of memory visits while running a model on a computer. The second goal is to propose the “bit by bit” method, which combines measuring computational complexity by “Strace” and measuring model performance by information loss relative to observations, both in bit. For demonstration, we apply the “bit by bit” method to watershed models representing a wide diversity of modelling strategies (artificial neural network, auto-regressive, process-based, and others). We demonstrate that computational complexity as measured by “Strace” is sensitive to all aspects of a model, such as the size of the model itself, the input data it reads, its numerical scheme, and time stepping. We further demonstrate that for each model, the bit counts for computational complexity exceed those for performance by several orders of magnitude and that the differences among the models for both computational complexity and performance can be explained by their setup and are in accordance with expectations. We conclude that measuring computational complexity by “Strace” is practical, and it is also general in the sense that it can be applied to any model that can be run on a digital computer. We further conclude that the “bit by bit” approach is general in the sense that it measures two key aspects of a model in the single unit of bit. We suggest that it can be enhanced by additionally measuring a model’s descriptive complexity – also in bit.
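    The performance half of the “bit by bit” idea can be sketched as an information loss in bit, here taken as the conditional entropy of binned observations given binned model output. This is one plausible reading of the abstract, not the authors' code; the bin count and synthetic data are placeholders. The complexity half would pair this number with a count from the tracing step; note that the standard Linux strace utility tallies system calls, which serves below only as a stand-in for the memory-visit count named in the text.

```python
# Hedged sketch: model performance as information loss H(obs | sim) in bit,
# estimated from a 2-D histogram of binned observations and simulations.
import numpy as np

def conditional_entropy_bits(obs, sim, bins=20):
    """H(obs | sim) in bit, via H(obs, sim) - H(sim) on binned values."""
    joint, _, _ = np.histogram2d(obs, sim, bins=bins)
    p_joint = joint / joint.sum()
    p_sim = p_joint.sum(axis=0)                      # marginal of the predictions
    nz = p_joint > 0
    h_joint = -np.sum(p_joint[nz] * np.log2(p_joint[nz]))
    h_sim = -np.sum(p_sim[p_sim > 0] * np.log2(p_sim[p_sim > 0]))
    return h_joint - h_sim

# A crude complexity proxy could come from running the model under `strace -c`
# and summing the reported call counts (system calls, not memory visits).
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 1.0, 5_000)                     # synthetic "observations"
sim = obs + 0.3 * rng.standard_normal(5_000)         # synthetic "model output"
loss_bits = conditional_entropy_bits(obs, sim)       # remaining uncertainty in bit
```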

    Gaming with eutrophication: Contribution to integrating water quantity and quality management at catchment level

    The Metropolitan Region of São Paulo (MRSP) hosts 18 million inhabitants. A complex system of 23 interconnected reservoirs was built to ensure its water supply. Half of the potable water produced for the MRSP's population (35 m³/s) is imported from a neighbouring catchment; the other half is produced within the Alto Tietê catchment, where 99% of the population lives. Perimeters of land-use restriction were defined to contain uncontrolled urbanization, as domestic effluents were causing increasing eutrophication of some of these reservoirs. In the 1990s, catchment committees and sub-committees were created to promote discussion between stakeholders and develop catchment plans. The committees are well structured "on paper"; however, they are not well organised and lack experience. The objective of this work was to design tools that would strengthen their discussion capacities. The specific objective of the AguAloca process was to integrate the water-quality issue, and its relation to catchment management as a whole, into these discussions. The work was developed in the Alto Tietê Cabeceiras sub-catchment, one of the five sub-catchments of the Alto Tietê. It contains five interconnected dams and presents competing uses such as water supply, industry, effluent dilution and irrigated agriculture. A role-playing game (RPG) was designed following a companion modelling approach (Etienne et al., 2003). It comprises a user-friendly game board, a set of individual and collective rules and a computerized biophysical model. The biophysical model is used to simulate water allocation and quality processes at catchment level. It articulates three modules: a simplified nutrient-discharge model estimates nutrient export by land use; an arc-node model simulates water flows and the associated nutrient loads from one point of the hydrographical network to another; and the Vollenweider model is used to simulate reservoir-specific dynamics. The RPG allows players to make individual and collective decisions related to water allocation and the management of its quality. Impacts of these decisions are then simulated using the biophysical model. Specific indicators of the game are then updated and may influence players' behaviour (actions) in subsequent rounds. To introduce discussion of water-quality management at catchment level, an issue that is rarely dealt with explicitly, four game sessions were implemented involving representatives of basin committees and water and sanitation engineers. During the game sessions, the participants took advantage of the water-quality output of the biophysical model to test management alternatives such as rural sewage collection or effluent dilution. The biophysical model accelerated calculations of flows and eutrophication rates, which were then returned to the game board as explicit indicators of quantity and quality. Players could easily test decisions affecting qualitative water processes and visualize the simulation results directly on the game board, which represented a simplified, virtual catchment. The AguAloca game proved its ability to make complex water processes understandable to a non-specialist public. This experience contributed to a better understanding of multiple-use water management and of the joint management of water quality and quantity.
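    Of the three modules attributed to the biophysical model, the reservoir component is named explicitly as a Vollenweider model. The sketch below shows a common Vollenweider-type steady-state phosphorus formulation as a minimal stand-in; the functional form and the example numbers are illustrative assumptions, not taken from AguAloca.

```python
# Hedged sketch of a Vollenweider-type steady-state reservoir module.
import math

def vollenweider_phosphorus(areal_load_mg_m2_yr, mean_depth_m, residence_time_yr):
    """Steady-state in-lake phosphorus concentration (mg m^-3) from the classic
    Vollenweider/OECD relationship P = L / (q_s * (1 + sqrt(tau)))."""
    q_s = mean_depth_m / residence_time_yr          # hydraulic load (m yr^-1)
    return areal_load_mg_m2_yr / (q_s * (1.0 + math.sqrt(residence_time_yr)))

# Illustrative reservoir: 500 mg P m^-2 yr^-1 load, 10 m mean depth, 0.5 yr residence time.
p_lake = vollenweider_phosphorus(500.0, 10.0, 0.5)  # ~14.6 mg m^-3 for this example
```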

    Identifying rainfall-runoff events in discharge time series: a data-driven method based on information theory

    In this study, we propose a data-driven approach for automatically identifying rainfall-runoff events in discharge time series. The core of the concept is to construct and apply discrete multivariate probability distributions to obtain probabilistic predictions of each time step that is part of an event. The approach permits any data to serve as predictors, and it is non-parametric in the sense that it can handle any kind of relation between the predictor(s) and the target. Each choice of a particular predictor data set is equivalent to formulating a model hypothesis. Among competing models, the best is found by comparing their predictive power in a training data set with user-classified events. For evaluation, we use measures from information theory such as Shannon entropy and conditional entropy to select the best predictors and models and, additionally, measure the risk of overfitting via cross entropy and Kullback–Leibler divergence. As all these measures are expressed in “bit”, we can combine them to identify models with the best trade-off between predictive power and robustness given the available data. We applied the method to data from the Dornbirner Ach catchment in Austria, distinguishing three different model types: models relying on discharge data, models using both discharge and precipitation data, and recursive models, i.e., models using their own predictions of a previous time step as an additional predictor. In the case study, the additional use of precipitation reduced predictive uncertainty only by a small amount, likely because the information provided by precipitation is already contained in the discharge data. More generally, we found that the robustness of a model quickly dropped with the increase in the number of predictors used (an effect well known as the curse of dimensionality), such that, in the end, the best model was a recursive one applying four predictors (three standard and one recursive): discharge from two distinct time steps, the relative magnitude of discharge compared with all discharge values in a surrounding 65 h time window, and event predictions from the previous time step. Applying the model reduced the uncertainty in event classification by 77.8 %, decreasing conditional entropy from 0.516 to 0.114 bits. To assess the quality of the proposed method, its results were binarized and validated through a holdout method and then compared to a physically based approach. The comparison showed similar behavior of both models (both with accuracy near 90 %), and the cross-validation reinforced the quality of the proposed model. Given enough data to build data-driven models, their potential lies in the way they learn and exploit relations between data, unconstrained by functional or parametric assumptions and choices. Beyond that, the use of these models to reproduce a hydrologist's way of identifying rainfall-runoff events is just one of many potential applications.
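    A minimal sketch of the core mechanism, a discrete multivariate probability model scored in bit, is given below. The choice of predictors, bin edges and synthetic data are placeholder assumptions; the study's own predictor sets, training data and cross-entropy/KL safeguards are not reproduced here.

```python
# Hedged sketch: binned predictors -> discrete joint distribution with a binary
# event flag, scored by the remaining conditional entropy H(event | predictors).
import numpy as np
from collections import Counter

def conditional_entropy_bits(keys, target):
    """H(target | key) in bit, where `keys` are tuples of binned predictor values."""
    joint = Counter(zip(keys, target))
    marginal = Counter(keys)
    n = len(target)
    h = 0.0
    for (key, _t), c in joint.items():
        h -= (c / n) * np.log2(c / marginal[key])   # -p(k,t) * log2 p(t|k)
    return h

# Synthetic example: two binned discharge-based predictors and a binary event flag.
rng = np.random.default_rng(2)
q = rng.gamma(2.0, 1.0, 5_000)                       # synthetic discharge
rise = np.r_[0.0, np.diff(q)]                        # crude "discharge change" predictor
keys = list(zip(np.digitize(q, np.quantile(q, [0.25, 0.5, 0.75])),
                np.digitize(rise, [0.0])))
events = (q > np.quantile(q, 0.8)).astype(int)       # placeholder "event" labels
h_cond = conditional_entropy_bits(keys, events)      # remaining uncertainty in bit
```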

    The prediction of future from the past: an old problem from a modern perspective

    The idea of predicting the future from knowledge of the past is quite natural when dealing with systems whose equations of motion are not known. This long-standing issue is revisited in the light of the modern ergodic theory of dynamical systems and becomes particularly interesting from a pedagogical perspective due to its close link with Poincaré's recurrence. Using this connection, a very general result of ergodic theory - Kac's lemma - can be used to establish the intrinsic limitations on the possibility of predicting the future from the past. Contrary to naive expectation, predictability turns out to be hindered by the effective number of degrees of freedom of a system rather than by the presence of chaos. If the effective number of degrees of freedom becomes large enough, predictions turn out to be practically impossible regardless of whether the system is regular or chaotic. The discussion of these issues is illustrated with the numerical study of simple models.
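    Kac's lemma, the ergodic-theory result invoked above, states that for an ergodic system the mean recurrence time to a set A equals 1/μ(A). The toy computation below checks this numerically on the chaotic logistic map; the map and the reference set are illustrative choices, not necessarily the models studied in the paper.

```python
# Hedged numerical illustration of Kac's lemma: mean recurrence time ~ 1/mu(A).
import numpy as np

def logistic_orbit(x0, n):
    """Orbit of the fully chaotic logistic map x -> 4 x (1 - x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        xs[i] = x
        x = 4.0 * x * (1.0 - x)
    return xs

orbit = logistic_orbit(0.1234, 200_000)
in_A = (orbit > 0.2) & (orbit < 0.3)                 # a small reference set A

mu_A = in_A.mean()                                   # empirical measure of A
visits = np.flatnonzero(in_A)
mean_return = np.diff(visits).mean()                 # mean recurrence time to A

# Kac's lemma predicts mean_return ~= 1 / mu_A; the smaller the set (or the
# higher the effective number of degrees of freedom), the longer the wait.
```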

    Axion-like particle effects on the polarization of cosmic high-energy gamma sources

    Various satellite-borne missions are being planned whose goal is to measure the polarization of a large number of gamma-ray bursts (GRBs). We show that the polarization pattern predicted by current models of GRB emission can be drastically modified by the existence of very light axion-like particles (ALPs), which are present in many extensions of the Standard Model of particle physics. Basically, the propagation of photons emitted by a GRB through cosmic magnetic fields with a domain-like structure induces photon-ALP mixing, which is expected to produce a strong modification of the original photon polarization. Because of the random orientation of the magnetic field in each domain, this effect strongly depends on the orientation of the photon line of sight. As a consequence, photon-ALP conversion considerably broadens the original polarization distribution. Searching for such a peculiar feature through future high-statistics polarimetric measurements is therefore a new opportunity to discover very light ALPs.
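    The broadening mechanism can be caricatured with a toy calculation: in each magnetic domain the ALP mixes only with the photon polarization component along that domain's randomly oriented transverse field, so different sight lines accumulate different conversion histories. The sketch below is deliberately schematic (real analyses propagate the full photon-ALP density matrix with energy-dependent phases); the domain number, per-domain mixing angle and initial polarization are illustrative assumptions.

```python
# Schematic toy of polarization broadening from photon-ALP mixing in random domains.
import numpy as np

def final_polarization_degree(n_domains, mixing_angle, rng):
    ax, ay, a = 1.0, 0.0, 0.0                        # fully polarized along x, no ALP
    for _ in range(n_domains):
        phi = rng.uniform(0.0, np.pi)                # random transverse-field orientation
        c, s = np.cos(phi), np.sin(phi)
        a_par, a_perp = c * ax + s * ay, -s * ax + c * ay
        cm, sm = np.cos(mixing_angle), np.sin(mixing_angle)
        a_par, a = cm * a_par - sm * a, sm * a_par + cm * a   # photon <-> ALP transfer
        ax, ay = c * a_par - s * a_perp, s * a_par + c * a_perp
    i_x, i_y = ax**2, ay**2
    return abs(i_x - i_y) / (i_x + i_y)              # linear polarization degree proxy

rng = np.random.default_rng(3)
degrees = [final_polarization_degree(200, 0.05, rng) for _ in range(2_000)]
# Across random sight lines the polarization degree spreads out, illustrating the
# broadening of the distribution relative to the unmixed case (degree = 1).
```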

    Axion-like-particle search with high-intensity lasers

    We study ALP-photon conversion within strong inhomogeneous electromagnetic fields as provided by contemporary high-intensity laser systems. We observe that probe photons traversing the focal spot of a superposition of Gaussian beams of a single high-intensity laser at fundamental and frequency-doubled mode can experience a frequency shift due to their intermittent propagation as axion-like particles. This process is strongly peaked for resonant masses on the order of the involved laser frequencies. Purely laser-based experiments in optical setups are sensitive to ALPs in the eV mass range and can thus complement ALP searches at dipole magnets.
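    A quick numeric check of the scale claim above: optical and frequency-doubled laser photons carry energies of order an eV, which is why the resonance condition singles out ALP masses in the eV range. The wavelengths used below are typical values for such systems, not figures taken from the paper.

```python
# Photon energy E [eV] = h*c / lambda, with h*c ~ 1239.84 eV*nm.
PLANCK_HC_EV_NM = 1239.841984

def photon_energy_ev(wavelength_nm):
    return PLANCK_HC_EV_NM / wavelength_nm

fundamental = photon_energy_ev(1030.0)   # ~1.20 eV for a typical near-infrared fundamental
doubled = photon_energy_ev(515.0)        # ~2.41 eV for the frequency-doubled mode
```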