    Origins of elastic properties in ordered nanocomposites

    We predict that a diblock copolymer melt in the lamellar phase with added spherical nanoparticles, which have an affinity for one block, has a lower tensile modulus than the pure diblock copolymer system. This weakening is due to the swelling of the lamellar domain by nanoparticles and the displacement of polymer by elastically inert fillers. Despite the overall decrease in the tensile modulus of a polydomain sample, the shear modulus of a single domain increases dramatically.

    Elastic moduli of multiblock copolymers in the lamellar phase

    Copyright (2004) AIP Publishing. This article may be downloaded for personal use only. Any other use requires prior permission of the author and AIP Publishing. The following article appeared in Journal of Chemical Physics 120 and may be found at http://dx.doi.org.proxy.lib.uwaterloo.ca/10.1063/1.1643899. We study the linear elastic response of multiblock copolymer melts in the lamellar phase, where the molecules are composed of tethered symmetric AB diblock copolymers. We use a self-consistent field theory method and introduce a real-space approach to calculate the tensile and shear moduli as a function of block number. The former is found to be in qualitative agreement with experiment. We find that, although the bridging fraction increases with block number along with the modulus, it is not responsible for the increase in modulus. Instead, the change in modulus is due to an increase in mixing of repulsive A and B monomers. Under extension, this increase originates from a widening of the interface and from more molecules being pulled free of the interface. Under compression, only the second of these two processes acts to increase the modulus. Work at the Los Alamos National Laboratory was performed under the auspices (Contract No. W-7405-ENG-36) of the U.S. Department of Energy.
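
    As a rough guide to how such moduli can be extracted from a self-consistent field calculation (a generic sketch, not necessarily the authors' exact procedure), one can impose a small strain ε on the equilibrium lamellar structure, reconverge the SCFT equations, and read the modulus off the curvature of the free energy density f(ε):

    \[
    C \;=\; \left.\frac{\partial^{2} f}{\partial \epsilon^{2}}\right|_{\epsilon=0}
    \;\approx\; \frac{f(\epsilon) - 2 f(0) + f(-\epsilon)}{\epsilon^{2}} ,
    \]

    with the strain applied along the lamellar normal for the tensile modulus and as a layer shear for the shear modulus, and the second derivative evaluated by finite differences of converged SCFT free energies.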

    Revue bibliographique des méthodes de prévision des débits (A review of streamflow forecasting methods)

    In the field of streamflow forecasting, a wide variety of methods is available: stochastic and conceptual models, but also more novel approaches such as artificial neural networks, fuzzy rule-based models, the k-nearest neighbor method, fuzzy regression and regression splines. After a detailed review of these methods and of their recent applications, we propose a classification that highlights the differences, but also the similarities, between these approaches. The approaches are then compared for the distinct problems of short-, medium- and long-term forecasting. Our recommendations also vary with the level of prior information. For example, when long stationary time series are available, we recommend the nonparametric k-nearest neighbor method for short- and medium-term forecasts. Conversely, for longer-term forecasting from a limited number of observations, we suggest a conceptual model coupled with a meteorological model based on the historical record. Although the emphasis is on streamflow forecasting, a large part of this review, mainly the part dealing with empirical models, is also relevant to the forecasting of other variables.

    A large number of models are available for streamflow forecasting. In this paper we classify and compare nine types of models for short-, medium- and long-term flow forecasting, according to six criteria: 1. validity of underlying hypotheses, 2. difficulties encountered when building and calibrating the model, 3. difficulties in computing the forecasts, 4. uncertainty modeling, 5. information required by each type of model, and 6. parameter updating. We first distinguish between empirical and conceptual models, the difference being that conceptual models correspond to simplified representations of the watershed, while empirical models only try to capture the structural relationships between inputs to the watershed and outputs, such as streamflow. Amongst empirical models, we distinguish between stochastic models, i.e. models based on the theory of probability, and non-stochastic models.

    Three types of stochastic models are presented: statistical regression models, Box-Jenkins models, and the nonparametric k-nearest neighbor method. Statistical linear regression is only applicable to long-term forecasting (monthly flows, for example), since it requires independent and identically distributed observations. It is a simple method of forecasting, and its hypotheses can be validated a posteriori if sufficient data are available. Box-Jenkins models include linear autoregressive models (AR), linear moving average models (MA), linear autoregressive moving average models (ARMA), periodic ARMA models (PARMA) and ARMA models with auxiliary inputs (ARMAX). They are better suited to weekly or daily flow forecasting, since they allow for the explicit modeling of time dependence. Efficient methods are available for designing the model and updating the parameters as more data become available. For both statistical linear regression and Box-Jenkins models, the inputs must be uncorrelated and linearly related to the output. Furthermore, the process must be stationary. When it is suspected that the inputs are correlated or have a nonlinear effect on the output, the k-nearest neighbor method may be considered. This data-based nonparametric approach simply consists in looking, among past observations of the process, for the k events which are most similar to the present situation. A forecast is then built from the flows which were observed for these k events (a minimal sketch of this procedure is given after this abstract). Obviously, this approach requires a large database and a stationary process. Furthermore, the time required to calibrate the model and compute the forecasts increases rapidly with the size of the database. A clear advantage of stochastic models is that forecast uncertainty may be quantified by constructing a confidence interval.

    Three types of non-stochastic empirical models are also discussed: artificial neural networks (ANN), fuzzy linear regression and multivariate adaptive regression splines (MARS). ANNs were originally designed as simple conceptual models of the brain. However, for forecasting purposes, these models can be thought of simply as a subset of nonlinear empirical models. In fact, the ANN model most commonly used in forecasting, the multi-layer feed-forward network, corresponds to a nonlinear autoregressive model (NAR). To capture the moving-average components of a time series, it is necessary to use recurrent architectures. ANNs are difficult to design and calibrate, and the computation of forecasts is also complex. Fuzzy linear regression makes it possible to extract linear relationships from small data sets, with fewer hypotheses than statistical linear regression. It does not require the observations to be uncorrelated, nor does it require the error variance to be homogeneous. However, the model is very sensitive to outliers. Furthermore, a posteriori validation of the hypothesis of linearity is not possible for small data sets. MARS models are based on the hypothesis that time series are chaotic rather than stochastic. The main advantage of the method is its ability to model non-stationary processes. The approach is nonparametric, and therefore requires a large data set.

    Amongst conceptual models, we distinguish between physical models, hydraulic machines, and fuzzy rule-based systems. Most conceptual hydrologic models are hydraulic machines, in which the watershed is considered to behave like a network of reservoirs. Physical modeling of a watershed would imply using fundamental physical equations at a small scale, such as the law of conservation of mass. Given the complexity of a watershed, this can be done in practice only for water routing. Consequently, only short-term flow forecasts can be obtained from a physical model, since the effects of precipitation, infiltration and evaporation must be negligible. Fuzzy rule-based systems make it possible to model the water cycle using fuzzy IF-THEN rules, such as: IF it rains a lot in a short period of time, THEN there will be a large flow increase following the concentration time. Each fuzzy quantifier is modeled using a fuzzy number to take into account the uncertainty surrounding it. When sufficient data are available, the fuzzy quantifiers can be constructed from the data. In general, conceptual models require more effort to develop than empirical models. However, for exceptional events, conceptual models can often provide more realistic forecasts, since empirical models are not well suited for extrapolation. A fruitful approach is to combine conceptual and empirical models. One way of doing this, called extended streamflow prediction (ESP), is to combine a stochastic model for generating meteorological scenarios with a conceptual model of the watershed.

    Based on this review of flow forecasting models, we recommend for short-term forecasting (hourly and daily flows) the use of the k-nearest neighbor method, Box-Jenkins models, water routing models or hydraulic machines. For medium-term forecasting (weekly flows, for example), we recommend the k-nearest neighbor method and Box-Jenkins models, as well as fuzzy rule-based and ESP models. For long-term forecasting (monthly flows), we recommend statistical and fuzzy regression, Box-Jenkins, MARS and ESP models. It is important to choose a type of model which is appropriate for the problem at hand and for which the available information is sufficient. Since each type of model has its advantages, it can be more efficient to combine different approaches when forecasting streamflow.
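
    As a concrete illustration of the k-nearest neighbor forecasting idea described above, here is a minimal sketch in Python. It assumes a single stationary series of past flows, Euclidean similarity over the last d observations, and a plain average of the k analogue outcomes; the function name, the similarity measure and the averaging rule are illustrative choices, not prescriptions from the review.

        # Minimal k-nearest neighbor streamflow forecast (illustrative only).
        import numpy as np

        def knn_forecast(history, query, k=5, d=3):
            """Forecast the next flow from the k past situations most similar to `query`.

            history : 1-D array of past flows, oldest first
            query   : the last d observed flows (the present situation)
            """
            history = np.asarray(history, dtype=float)
            query = np.asarray(query, dtype=float)
            # Library of past "situations" (d consecutive flows) and the flow that followed each.
            X = np.array([history[i:i + d] for i in range(len(history) - d)])
            y = history[d:]
            # Euclidean distance between the present situation and every past situation.
            dist = np.linalg.norm(X - query, axis=1)
            nearest = np.argsort(dist)[:k]
            # Forecast: average of the flows that followed the k most similar situations.
            return y[nearest].mean()

        # Example with synthetic data: forecast the next value from the last 3 observations.
        rng = np.random.default_rng(0)
        flows = 100 + 10 * np.sin(np.arange(400) / 20) + rng.normal(0, 2, size=400)
        print(knn_forecast(flows[:-3], flows[-3:], k=7))

    In practice the feature vector would typically also include precipitation or other exogenous inputs, and k and d would be chosen by cross-validation or a similar data-driven rule.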

    Utilisation de l'information historique en analyse hydrologique fréquentielle (Use of historical information in hydrologic frequency analysis)

    The use of historical information in a frequency analysis makes better use of the information that is actually available and should therefore improve the estimation of quantiles with large return periods. By historical information we mean information on large floods that occurred before the beginning of the measurement period (the period of systematic gauging) of lake and river levels and discharges. In general, the use of historical information reduces the impact of outliers in the systematic records and reduces the standard deviation of the estimates. In this article we present the statistical methods that allow historical information to be modeled.

    Use of information about historical floods, i.e. extreme floods that occurred prior to systematic gauging, can often substantially improve the precision of flood quantile estimates. Such information can be retrieved from archives, newspapers, interviews with local residents, or by use of paleohydrologic and dendrohydrologic traces. Various statistical techniques for incorporating historical information into frequency analyses are discussed in this review paper. The basic hypothesis in the statistical modeling of historical information is that a certain perception water level exists and that, during a given historical period preceding the period of gauging, all exceedances of this level have been recorded, be it in newspapers, in people's memory, or through traces in the catchment such as sediment deposits or traces on trees. No information is available on floods that did not exceed the perception threshold. It is further assumed that a period of systematic gauging is available. Figure 1 illustrates this situation.

    The U.S. Water Resources Council (1982) recommended the use of the method of adjusted moments for fitting the log Pearson type III distribution. A weighting factor is applied to the data below the threshold observed during the gauged period to account for the missing data below the threshold in the historical period. Several studies have pointed out that the method of adjusted moments is inefficient. Maximum likelihood estimators based on partially censored data have been shown to be much more efficient and to provide a practical framework for incorporating imprecise and categorical data (a generic form of such a likelihood is sketched after this abstract). Unfortunately, for some of the most common 3-parameter distributions used in hydrology, the maximum likelihood method poses numerical problems. Recently, some authors have proposed use of the method of expected moments, a variant of the method of adjusted moments which gives less weight to observations below the threshold. According to preliminary studies, estimators based on expected moments are almost as efficient as maximum likelihood estimators, but have the advantage of avoiding the numerical problems related to the maximization of likelihood functions.

    Several studies have emphasized the potential gain in estimation accuracy from the use of historical information. Because historical floods are by definition large, their introduction in a flood frequency analysis can have a major impact on estimates of rare floods. This is particularly true when 3-parameter distributions are considered. Moreover, the use of historical information is a means to increase the representativeness of an outlier in the systematic data. For example, an extreme outlier will not get the same weight in the analysis if one can state with certainty that it is the largest flood in, say, 200 years, rather than only the largest flood in, say, 20 years of systematic gauging. Historical data are generally imprecise, and their inaccuracy should be properly accounted for in the analysis. However, even with substantial uncertainty in the data, the use of historical information is a viable means to improve estimates of rare floods.
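
    To make the censored-data likelihood concrete, here is the generic form it takes under the perception-threshold assumptions described above (an illustrative textbook formulation; the review does not prescribe a particular distribution). With s systematically gauged annual maxima x_1, ..., x_s, and a historical period of h years during which k floods y_1, ..., y_k exceeded the perception threshold x_0 while the remaining h - k annual maxima are only known to have stayed below it, the likelihood of the distribution parameters \theta is

    \[
    L(\theta) \;=\; \prod_{i=1}^{s} f(x_i \mid \theta)
    \;\times\; \binom{h}{k}\, \bigl[ F(x_0 \mid \theta) \bigr]^{\,h-k}
    \prod_{j=1}^{k} f(y_j \mid \theta) ,
    \]

    where f and F are the probability density and distribution function of annual maximum floods. The factor [F(x_0 | \theta)]^{h-k} carries the information from the unrecorded historical years, which is what makes the historical period informative even though only its largest floods are known.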

    Lengthscales and Cooperativity in DNA Bubble Formation

    It appears that thermally activated DNA bubbles of different sizes play central roles in important genetic processes. Here we show that the probability for the formation of such bubbles is regulated by the number of soft AT pairs in specific regions with lengths which, at physiological temperatures, are of the order of (but not equal to) the size of the bubble. The analysis is based on the Peyrard-Bishop-Dauxois model, whose equilibrium statistical properties have been accurately calculated here with a transfer integral approach.
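
    For context, the Peyrard-Bishop-Dauxois model mentioned above describes the base-pair stretching coordinates y_n with a Hamiltonian of the commonly quoted form (parameter values vary between studies and are not taken from this paper):

    \[
    H \;=\; \sum_{n} \left[ \frac{p_n^{2}}{2m}
    \;+\; D_n \bigl( e^{-a_n y_n} - 1 \bigr)^{2}
    \;+\; \frac{k}{2} \bigl( 1 + \rho\, e^{-\alpha (y_n + y_{n-1})} \bigr) (y_n - y_{n-1})^{2} \right] ,
    \]

    where the Morse depth D_n is smaller for the soft AT pairs than for GC pairs and the last term is the anharmonic stacking interaction between neighboring base pairs. The transfer-integral calculation referred to in the abstract evaluates equilibrium averages, such as the bubble-opening probability, for this Hamiltonian.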

    Superconductivity-enhanced bias spectroscopy in carbon nanotube quantum dots

    We study low-temperature transport through carbon nanotube quantum dots in the Coulomb blockade regime coupled to niobium-based superconducting leads. We observe pronounced conductance peaks at finite source-drain bias, which we ascribe to elastic and inelastic cotunneling processes enhanced by the coherence peaks in the density of states of the superconducting leads. The inelastic cotunneling lines display a marked dependence on the applied gate voltage, which we relate to different tunneling renormalizations of the two subbands in the nanotube. Finally, we discuss the origin of an especially pronounced sub-gap structure observed in every fourth Coulomb diamond.
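
    As background for where such cotunneling lines are expected to sit in bias (standard reasoning for BCS leads, not a value quoted from the abstract): with two superconducting leads of gap \Delta, the quasiparticle coherence peaks align so that elastic cotunneling produces conductance peaks near |eV| \approx 2\Delta, while an inelastic cotunneling process that leaves the dot in an excited state of energy \delta sets in near

    \[
    |eV| \;\approx\; 2\Delta + \delta .
    \]

    The gate-voltage dependence of the inelastic lines then reflects how the excitation energy \delta, including its renormalization by tunneling, varies across the Coulomb diamonds.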

    Modeling all alternative solutions for highly renewable energy systems

    As the world is transitioning towards highly renewable energy systems, advanced tools are needed to analyze such complex networks. Energy system design is, however, challenged by real-world objective functions consisting of a blurry mix of technical and socioeconomic agendas, with limitations that cannot always be clearly stated. As a result, it is highly likely that solutions which are techno-economically suboptimal will be preferable. Here, we present a method capable of determining the continuum containing all techno-economically near-optimal solutions, moving the field of energy system modeling from discrete solutions to a new era where continuous solution ranges are available. The presented method is applied to study a range of technical and socioeconomic metrics on a model of the European electricity system. The near-optimal region is found to be relatively flat, allowing for solutions that are slightly more expensive than the optimum but better in terms of equality, land use, and implementation time. Comment: 25 pages, 7 figures, also available as preprint at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=368204
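
    The search over near-optimal designs that the abstract describes can be written generically as an optimization over the cost-constrained feasible space (a common formulation in this literature, given here as an assumption rather than as the authors' exact method):

    \[
    \min_{x} \;\text{or}\; \max_{x} \; g(x)
    \quad \text{subject to} \quad
    x \in \mathcal{F}, \qquad c(x) \,\le\, (1+\varepsilon)\, c^{*} ,
    \]

    where c^{*} is the minimum total system cost, \varepsilon is the accepted cost slack (for example a few percent), \mathcal{F} is the set of technically feasible system designs, and g is a technical or socioeconomic metric of interest such as land use or total installed capacity of a given technology. Sweeping over metrics g and slack values \varepsilon traces out the continuum of near-optimal solutions referred to above.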

    Khovanov homology is an unknot-detector

    We prove that a knot is the unknot if and only if its reduced Khovanov cohomology has rank 1. The proof has two steps. We show first that there is a spectral sequence beginning with the reduced Khovanov cohomology and abutting to a knot homology defined using singular instantons. We then show that the latter homology is isomorphic to the instanton Floer homology of the sutured knot complement: an invariant that is already known to detect the unknot. Comment: 124 pages, 13 figures
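
    Schematically (with grading and mirror conventions that vary between accounts, so this is only an orientation), the two steps combine as a spectral sequence

    \[
    E_2 \;\cong\; \widetilde{Kh}(K) \;\Longrightarrow\; I^{\natural}(K) ,
    \]

    whose E_2 page is the reduced Khovanov cohomology and whose abutment is the singular-instanton knot homology. Since the total rank can only decrease from the E_2 page to the abutment, and the sutured-complement invariant isomorphic to I^{\natural}(K) has rank 1 only for the unknot, a reduced Khovanov cohomology of rank 1 forces the knot to be the unknot.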