
    Reinforcing Economic Incentives for Carbon Credits for Forests

    Afforestation is a cost-effective way for some countries to meet part of their commitments under the Kyoto Protocol and its eventual extensions. Credits for carbon sequestration can be mediated through markets for emissions permits. Both new and old forests are subject to pestilence and fire, events that could release substantial, discrete quantities of carbon at irregular intervals. Permit markets, the use of green accounting, and insurance markets for sudden emissions could increase the efficiency of the scheme and its attractiveness to potential participants. Keywords: carbon credit, forest, insurance, green accounting, accidental loss.
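
    The interplay between sequestration credits and insurance against sudden carbon release lends itself to a back-of-the-envelope calculation. The following Monte Carlo sketch is purely illustrative and not a model from the paper: it assumes a hypothetical permit price, Poisson-distributed fire or pest events, and a fixed fractional carbon loss per event, then estimates the expected credit value and an actuarially fair insurance premium.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical parameters -- illustrative only, not taken from the paper.
        price = 20.0        # permit price per tonne of CO2
        stock = 1_000.0     # tonnes of CO2 sequestered by the stand
        years = 30          # crediting horizon in years
        event_rate = 0.02   # expected fires/pest outbreaks per year (Poisson)
        loss_frac = 0.5     # fraction of the carbon stock released per event

        n_sims = 100_000
        events = rng.poisson(event_rate * years, size=n_sims)
        surviving = stock * (1 - loss_frac) ** events   # carbon left after events
        credit_value = price * surviving

        # An actuarially fair premium equals the expected insured loss.
        expected_loss = price * (stock - surviving.mean())
        print(f"expected credit value: {credit_value.mean():.0f}")
        print(f"fair annual insurance premium: {expected_loss / years:.2f}")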

    First Double-Chooz Results and the Reactor Antineutrino Anomaly

    We investigate the possible effects of the short-baseline $\bar{\nu}_e$ disappearance implied by the reactor antineutrino anomaly on the Double-Chooz determination of $\theta_{13}$ through the normalization of the initial antineutrino flux with the Bugey-4 measurement. We show that the effects are negligible, and the value of $\theta_{13}$ obtained by the Double-Chooz collaboration is accurate, only if $\Delta m^2_{41}$ is larger than about 3 eV$^2$. For smaller values of $\Delta m^2_{41}$ the short-baseline oscillations are not fully averaged at Bugey-4, and the uncertainties due to the reactor antineutrino anomaly can be of the same order of magnitude as the intrinsic Double-Chooz uncertainties. Comment: 4 pages
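
    The averaging argument can be made concrete with the standard two-flavour survival probability $P(\bar{\nu}_e \to \bar{\nu}_e) = 1 - \sin^2 2\theta\,\sin^2(1.27\,\Delta m^2 L/E)$, with $L$ in metres and $E$ in MeV. The sketch below is a generic illustration, not the collaboration's analysis code; the mixing amplitude and flat spectrum are placeholders. For $\Delta m^2_{41}$ of a few eV$^2$ and above, the oscillatory term averages to 1/2 over the reactor spectrum at the Bugey-4 baseline of roughly 15 m.

        import numpy as np

        def survival(E_MeV, dm2_eV2, sin2_2theta=0.1, L_m=15.0):
            """Two-flavour antineutrino survival probability (L in m, E in MeV)."""
            phase = 1.27 * dm2_eV2 * L_m / E_MeV
            return 1.0 - sin2_2theta * np.sin(phase) ** 2

        # Crude stand-in for the reactor spectrum: uniform over 2-8 MeV.
        E = np.linspace(2.0, 8.0, 2000)
        for dm2 in (0.5, 1.0, 3.0, 10.0):   # eV^2
            print(f"dm2 = {dm2:5.1f} eV^2  ->  <P> = {survival(E, dm2).mean():.3f}")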

    Fitting Photometry of Blended Microlensing Events

    We reexamine the usefulness of fitting blended lightcurve models to microlensing photometric data. We find agreement with previous workers (e.g. Wozniak & Paczynski) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the lightcurve (the peak region and the wings) of high magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate the microlensing optical depth. Comment: 18 pages, 9 figures, submitted to Ap
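
    For context, the blended model combines the standard point-lens magnification $A(u) = (u^2+2)/(u\sqrt{u^2+4})$ with a blend fraction $f$, so the observed flux is $F(t) = F_0\,[f\,A(u(t)) + (1-f)]$. The sketch below uses these textbook formulas (not the authors' fitting code) to show how a heavily blended event can closely mimic an unblended one with a different impact parameter and timescale, which is the degeneracy at issue.

        import numpy as np

        def magnification(u):
            """Paczynski point-source, point-lens magnification."""
            return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

        def blended_flux(t, t0, tE, u0, f, F0=1.0):
            """Observed flux for blend fraction f (f = 1 means unblended)."""
            u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
            return F0 * (f * magnification(u) + (1 - f))

        t = np.linspace(-40, 40, 9)
        heavy_blend = blended_flux(t, t0=0, tE=20, u0=0.1, f=0.3)
        no_blend = blended_flux(t, t0=0, tE=14, u0=0.3, f=1.0)
        print(np.round(heavy_blend, 3))   # the two curves nearly coincide
        print(np.round(no_blend, 3))      # everywhere except near the peak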

    The Moment Problem for Continuous Positive Semidefinite Linear Functionals

    Let $\tau$ be a locally convex topology on the countable-dimensional polynomial $\mathbb{R}$-algebra $\mathbb{R}[X_1,\dots,X_n]$. Let $K$ be a closed subset of $\mathbb{R}^n$, and let $M := M_{\{g_1,\dots,g_s\}}$ be a finitely generated quadratic module in $\mathbb{R}[X_1,\dots,X_n]$. We investigate the following question: when is the cone $\mathrm{Pos}(K)$ (of polynomials nonnegative on $K$) included in the closure of $M$? We give an interpretation of this inclusion with respect to representing continuous linear functionals by measures. We discuss several examples; we compute the closure of the cone of sums of squares $M = \sum \mathbb{R}[X_1,\dots,X_n]^2$ with respect to weighted norm-$p$ topologies. We show that this closure coincides with the cone $\mathrm{Pos}(K)$ where $K$ is a certain convex compact polyhedron. Comment: 14 pages
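
    For readers unfamiliar with the notation, the finitely generated quadratic module above has the standard explicit description (a textbook definition from real algebraic geometry, not specific to this paper):

    $$M_{\{g_1,\dots,g_s\}} = \left\{ \sigma_0 + \sigma_1 g_1 + \cdots + \sigma_s g_s \;:\; \sigma_0,\dots,\sigma_s \in \sum \mathbb{R}[X_1,\dots,X_n]^2 \right\},$$

    where each $\sigma_i$ is a sum of squares of polynomials. When $K$ is contained in the set where all $g_i$ are nonnegative, every element of $M$ is nonnegative on $K$, so $M \subseteq \mathrm{Pos}(K)$; the question in the abstract is when the reverse inclusion holds after closing $M$ in the topology $\tau$.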

    Predicting the outcome of renal transplantation

    Objective: Renal transplantation has dramatically improved the survival rate of hemodialysis patients. However, with a growing proportion of marginal organs and improved immunosuppression, it is necessary to verify that the established allocation system, mostly based on human leukocyte antigen matching, still meets today's needs. The authors turn to machine-learning techniques to predict, from donor-recipient data, the estimated glomerular filtration rate (eGFR) of the recipient 1 year after transplantation.
    Design: The patient's eGFR was predicted using donor-recipient characteristics available at the time of transplantation. Donors' data were obtained from Eurotransplant's database, while recipients' details were retrieved from Charite Campus Virchow-Klinikum's database. A total of 707 renal transplantations from cadaveric donors were included.
    Measurements: Two separate datasets were created, taking features with <10% missing values for one and <50% missing values for the other. Four established regressors were run on both datasets, with and without feature selection.
    Results: The authors obtained a Pearson correlation coefficient between predicted and real eGFR (COR) of 0.48. The best model was a Gaussian support vector machine with recursive feature elimination on the more inclusive dataset. All results are available at http://transplant.molgen.mpg.de/.
    Limitations: For now, missing values in the data must be predicted and filled in. The performance is not as high as hoped, but the dataset seems to be the main cause.
    Conclusions: Predicting the outcome is possible with the dataset at hand (COR=0.48). Valuable features include age and creatinine levels of the donor, as well as sex and weight of the recipient.
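
    As a rough illustration of the winning pipeline (a Gaussian-kernel support vector machine with recursive feature elimination), the sketch below runs scikit-learn on synthetic stand-in data, since the donor-recipient records are not public. One caveat: RFE needs linear coefficients to rank features, so a linear SVR serves as the ranking surrogate here; the paper's exact feature-selection procedure may differ.

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_predict
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR, LinearSVR

        rng = np.random.default_rng(0)
        X = rng.normal(size=(707, 40))   # 707 transplants, 40 candidate features
        y = 50 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=10, size=707)  # synthetic eGFR

        model = Pipeline([
            ("scale", StandardScaler()),
            ("select", RFE(LinearSVR(max_iter=10_000), n_features_to_select=10)),
            ("svr", SVR(kernel="rbf", C=10.0)),   # "Gaussian SVM" = RBF kernel
        ])
        pred = cross_val_predict(model, X, y, cv=5)
        print(f"COR = {pearsonr(pred, y)[0]:.2f}")   # cross-validated Pearson r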

    The Ecosystem Approach to Fisheries: Issues, Terminology, Principles, Institutional Foundations, Implementation and Outlook

    Ecosystems are complex and dynamic natural units that produce goods and services beyond those of benefit to fisheries. Because fisheries have a direct impact on the ecosystem, which is also affected by other human activities, they need to be managed in an ecosystem context. The meanings of the terms 'ecosystem management', 'ecosystem-based management', 'ecosystem approach to fisheries' (EAF), etc., are still not universally agreed and are progressively evolving. The justification for EAF is evident in the characteristics of an exploited ecosystem and the impacts resulting from fisheries and other activities. The rich set of international agreements of relevance to EAF contains a large number of principles and conceptual objectives. Both provide fundamental guidance and a significant challenge for the implementation of EAF. The available international instruments also provide the institutional foundations for EAF. The FAO Code of Conduct for Responsible Fisheries is particularly important in this respect and contains provisions for practically all aspects of the approach. One major difficulty in defining EAF lies precisely in turning the available concepts and principles into operational objectives from which an EAF management plan could more easily be developed. The paper discusses these together with the types of action needed to achieve them. Experience in EAF implementation is still limited, but some issues are already apparent, e.g. added complexity, insufficient capacity, slow implementation, and the need for a pragmatic approach. It is argued, in conclusion, that the future of EAF and of fisheries depends on the way in which the two fundamental concepts of fisheries management and ecosystem management, and their respective stakeholders, will join efforts or collide.

    Finding largest small polygons with GloptiPoly

    A small polygon is a convex polygon of unit diameter. We are interested in small polygons which have the largest area for a given number of vertices $n$. Many instances are already solved in the literature, namely for all odd $n$, and for $n = 4, 6$ and $8$. Thus, for even $n \geq 10$, instances of this problem remain open. Finding those largest small polygons can be formulated as nonconvex quadratic programming problems which can challenge state-of-the-art global optimization algorithms. We show that a recently developed technique for global polynomial optimization, based on a semidefinite programming approach to the generalized problem of moments and implemented in the public-domain Matlab package GloptiPoly, can successfully find the largest small polygons for $n = 10$ and $n = 12$, significantly improving on existing results in the domain. When coupled with accurate convex conic solvers, GloptiPoly can provide numerical guarantees of global optimality, as well as rigorous guarantees relying on interval arithmetic.
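
    The nonconvex quadratic program in question can be written down directly. A simplified formulation (using the shoelace area and pairwise unit-diameter constraints; the paper's exact parametrization may differ) over vertices $(x_i, y_i)$, $i = 1,\dots,n$, with indices taken modulo $n$, is

    $$\max_{x,\,y}\; \frac{1}{2}\sum_{i=1}^{n}\left(x_i y_{i+1} - x_{i+1} y_i\right) \quad\text{s.t.}\quad (x_i - x_j)^2 + (y_i - y_j)^2 \le 1,\; 1 \le i < j \le n.$$

    The feasible set is convex, but maximizing the indefinite quadratic area form is what makes the problem nonconvex and hence a natural target for moment-SDP relaxations.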

    Spatial organisation of fish assemblages in the Thalassia seagrass beds of the Grand Cul-de-Sac Marin, Guadeloupe

    The Thalassia testudinum seagrass beds of the Grand Cul-de-Sac Marin in Guadeloupe were studied for spatial variability of the fish fauna and of several environmental parameters (leaf density, biomass and length, water movement, temperature, dissolved oxygen, pH and salinity). Fish samples were collected with a beach seine at 11 stations distributed from the coast (mangrove) to the fringing coral-reef barrier, over 12 consecutive months (August 1987 to July 1988). Stations that did not differ significantly (Kruskal-Wallis test) in temperature or pH did, however, differ significantly in Thalassia leaf density, biomass and length, in dissolved oxygen and salinity, and in fish species richness, evenness and biomass.
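
    The station comparison rests on the Kruskal-Wallis rank test, which asks whether several groups of observations share a common distribution without assuming normality. A minimal sketch with made-up readings (scipy; the values are hypothetical, not the survey's measurements):

        import numpy as np
        from scipy.stats import kruskal

        rng = np.random.default_rng(1)
        # Hypothetical monthly salinity readings at three of the 11 stations.
        station_a = rng.normal(35.0, 0.5, size=12)
        station_b = rng.normal(35.2, 0.5, size=12)
        station_c = rng.normal(33.5, 0.5, size=12)   # fresher, mangrove-side station

        H, p = kruskal(station_a, station_b, station_c)
        print(f"H = {H:.2f}, p = {p:.4f}")   # small p: stations differ in salinity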

    Positivity and optimization for semi-algebraic functions

    We describe algebraic certificates of positivity for functions belonging to a finitely generated algebra of Borel measurable functions, with particular emphasis on algebras generated by semi-algebraic functions. In that case, the standard global optimization problem with constraints given by elements of the same algebra is reduced, via a natural change of variables, to the better understood case of polynomial optimization. A collection of simple examples and numerical experiments complements the theoretical parts of the article. Comment: 20 pages
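
    The change of variables can be illustrated on the simplest semi-algebraic function, $|x|$ (a textbook-style example, not one taken from the paper): introduce a fresh variable $y$ standing for $|x|$ and encode its graph with polynomial constraints,

    $$\min_{x}\; |x| + x \quad\Longrightarrow\quad \min_{x,\,y}\; y + x \quad\text{s.t.}\quad y^2 = x^2,\; y \ge 0,$$

    so that a problem involving the non-polynomial function $|x|$ becomes a polynomial optimization problem in $(x, y)$ at the cost of one extra variable and two constraints.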

    A Proper Motion Survey for White Dwarfs with the Wide Field Planetary Camera 2

    We have performed a search for halo white dwarfs as high proper motion objects in a second-epoch WFPC2 image of the Groth-Westphal strip. We identify 24 high proper motion objects with mu > 0.014 ''/yr. Five of these are identified as strong white dwarf candidates on the basis of their position in a reduced proper motion diagram. We create a model of the Milky Way thin disk, thick disk and stellar halo and find that this sample of white dwarfs is clearly in excess of the < 2 detections expected from these known stellar populations. The origin of the excess signal is less clear; it possibly cannot be explained without invoking a fourth Galactic component: a white dwarf dark halo. We present a statistical separation of our sample into the four components and estimate the corresponding local white dwarf densities using only the directly observable variables V, V-I, and mu. For all Galactic models explored, our sample separates into about 3 disk white dwarfs and 2 halo white dwarfs. However, the further subdivision into the thin and thick disk and the stellar and dark halo, and the subsequent calculation of the local densities, are sensitive to the input parameters of our model for each Galactic component. Using the lowest mean-mass model for the dark halo, we find a 7% white dwarf halo and six times the canonical value for the thin disk white dwarf density (at marginal statistical significance), but possible systematic errors due to uncertainty in the model parameters likely dominate these statistical error bars. The white dwarf halo can be reduced to around 1.5% of the halo dark matter by changing the initial mass function slightly. The local thin disk white dwarf density in our solution can be made consistent with the canonical value by assuming a larger thin disk scale height of 500 pc. Comment: revised version, accepted by ApJ, results unchanged, discussion expanded
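
    The candidate selection hinges on the reduced proper motion, $H_V = V + 5\log_{10}\mu + 5$ (with $\mu$ in ''/yr), which folds the observables into a distance-free proxy for absolute magnitude; white dwarfs separate from subdwarfs and disk dwarfs in the $H_V$ versus $V-I$ plane. A minimal sketch with made-up photometry (the selection cut below is a hypothetical placeholder, not the paper's boundary):

        import numpy as np

        def reduced_proper_motion(V, mu_arcsec_yr):
            """H_V = V + 5 log10(mu) + 5, a distance-free luminosity proxy."""
            return V + 5 * np.log10(mu_arcsec_yr) + 5

        # Made-up candidates: (V magnitude, V-I colour, proper motion in ''/yr).
        V = np.array([24.1, 25.3, 23.8])
        VI = np.array([0.2, 1.0, 2.5])
        mu = np.array([0.020, 0.015, 0.030])

        HV = reduced_proper_motion(V, mu)
        # Hypothetical cut: white dwarfs are subluminous, so they sit at
        # large H_V for their colour.
        is_wd_candidate = HV > 15.0 + 2.5 * VI
        print(np.round(HV, 2), is_wd_candidate)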