202 research outputs found

    Above- and Belowground Development of a Fast-Growing Willow Planted in Acid-Generating Mine Technosol

    Surface metal mining produces large volumes of waste rocks. If they contain sulfide minerals, these rocks can generate a flow of acidic water from the mining site, known as acid mine drainage (AMD), which increases trace metal availability for plant roots. Adequate root development is crucial to decreasing planting stress and improving phytoremediation with woody species. However, techniques to improve revegetation success rarely take root development into account. An experiment was conducted at a gold mine in Quebec, Canada, to evaluate the establishment ability over 3 yr of a fast-growing willow (Salix miyabeana Sx64) planted in acid-generating waste rocks. The main objective was to study root development in the soil profile and trace element accumulation in leaves among substrates varying in thickness (0, 20, and 40 cm of soil) and composition (organic carbon [OC] and alkaline AMD treatment sludge). Trees directly planted in waste rocks survived well (69%) but had the lowest productivity (lowest growth in height and diameter, aerial biomass, total leaf area, and root-system size). By contrast, the treatment richer in OC showed the greatest aerial biomass and total leaf area the first year; the thicker treatment resulted in the greatest growth in height and diameter, aboveground biomass, and root-system size in both the first and third years. Willow root development was restricted to soil layers during the first year, but this restriction was overcome in the third year after planting. Willow accumulation factors in leaves were below one for all investigated trace metals except for zinc (Zn), cadmium (Cd), and strontium. For Cd and Zn, concentrations increased with time in willow foliage, decreasing the potential of this willow species for phytostabilization, despite its ability to rapidly develop extensive root systems in the mine Technosol.
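The accumulation factor used here is simply the ratio of a metal's concentration in foliage to its concentration in the substrate; values above one indicate foliar accumulation. A minimal sketch with made-up concentrations (illustrative only, not the study's data):

```python
def accumulation_factor(leaf_conc, substrate_conc):
    """Leaf-to-substrate concentration ratio; > 1 means the metal
    accumulates in foliage relative to the growing substrate."""
    return leaf_conc / substrate_conc

# Hypothetical concentrations in mg/kg (invented for illustration)
leaf = {"Zn": 350.0, "Cd": 2.4, "Cu": 8.0}
soil = {"Zn": 120.0, "Cd": 0.9, "Cu": 40.0}
af = {metal: accumulation_factor(leaf[metal], soil[metal]) for metal in leaf}
```

With these example numbers, Zn and Cd come out above one (accumulators) while Cu stays below one, mirroring the qualitative pattern the abstract reports.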

    Fast-growing willow development on acidic mining wastes for rapid greening purposes

    Metal mining generates large volumes of wastes, which can contain sulphide minerals that generate acid when exposed to atmospheric conditions, providing unfavourable conditions for plant establishment. In particular, mining waste rocks are piled up to heights of tens of meters and remain devoid of vegetation, creating a desolate anthropogenic landscape. The use of adapted plants able to grow quickly on waste rocks can help improve their aesthetic aspect. An experiment was conducted at the Westwood mine in Quebec to evaluate the establishment ability of a fast-growing willow (Salix miyabeana Sx64) on acid-generating waste rocks. The main objective was to identify the substrate thickness and composition that maximized willow productivity while limiting water stress exposure and trace metal accumulation. A randomized complete block design was established in June 2014 with five treatments: (1) direct planting in waste rocks, (2) and (3) 20 or 40 cm of moraine amended with 20% organic matter (OM) by volume, (4) 20 cm of moraine with 40% OM, and (5) 20 cm of moraine with 20% OM over 20 cm of lime sludge from water treatment. Trees directly planted in waste rocks survived well (75%) but had the lowest aerial productivity, with the lowest height and diameter growth, aerial biomass, and total leaf area, while the treatment richer in OM showed the greatest aerial biomass and total leaf area, and the thicker treatment the greatest height and diameter growth. Willow root development was restricted to the cover soils during the first year after planting, and foliar δ13C values decreased in the thicker soil (40 cm) compared to the thin soil (20 cm). Willow accumulation factors in leaves were below one for all investigated trace metals except Zn.

    Tree-Substrate Water Relations and Root Development in Tree Plantations Used for Mine Tailings Reclamation

    Tree water uptake relies on well-developed root systems. However, mine wastes can restrict root growth, in particular metalliferous mill tailings, which consist of the finely crushed ore that remains after valuable metals are removed. Thus, water stress could limit plantation success in reclaimed mine lands. This study evaluates the effect of substrates varying in quality (topsoil, overburden, compost and tailings mixture, and tailings alone) and quantity (50- or 20-cm-thick topsoil layer vs. 1-m2 plantation holes) on root development and water stress exposure of trees planted in low-sulfide mine tailings under boreal conditions. A field experiment was conducted over 2 yr with two tree species: basket willow (Salix viminalis L.) and hybrid poplar (Populus canadensis Moench × Populus maximowiczii A. Henry). Trees developed roots in the tailings underlying the soil treatments despite the tailings' low macroporosity. However, almost no root development occurred in tailings underlying a compost and tailings mixture. Because root development and the associated water uptake were not limited to the soil, soil volume influenced neither short-term (water potential and instantaneous transpiration) nor long-term (δ13C) water stress exposure in trees. However, trees were larger and had greater total leaf area when grown in thicker topsoil. Despite a volumetric water content that always remained above the permanent wilting point in the tailings colonized by tree roots, the foliar water potentials measured at midday were lower than the drought thresholds reported for both tested tree species.

    Development of the Jungfraujoch multiwavelength lidar system for continuous observations of the aerosol optical properties in the free troposphere

    Climate change and global warming are generally associated with the enhanced greenhouse effect, but aerosols can induce a cooling effect and thus regionally mask this warming. Unfortunately, the strong variability of aerosols in both space and time, and hence the difficulty of characterizing their basic global properties, induce large uncertainties in the predictions of numerical models. Those uncertainties are as high as the absolute level of the enhanced greenhouse forcing. To solve this problem it is necessary to improve the set of well-calibrated instruments (both in situ and remote sensing) able to measure changes in stratospheric and tropospheric aerosol amounts and their radiative properties, changes in atmospheric water vapor and temperature distributions, and changes in cloud cover and cloud radiative properties. The quantity used to assess the contribution of a given compound (greenhouse gases, aerosols) to variations in the radiative budget of the Earth is the radiative forcing. One of these forcings is the direct aerosol radiative forcing, which depends on the optical depth and the upscatter fraction of the aerosols. These two parameters in turn depend on the chemical composition (through the refractive index) and the size distribution of the aerosols, which are thus the key parameters of this forcing. This thesis deals with the design and implementation of a multi-wavelength lidar system at the Jungfraujoch Alpine Research Station (3580 m a.s.l.). This lidar system combines a standard backscatter lidar and a Raman lidar. Its design has been supported by a ray-tracing analysis of the receiver. The laser transmitter is based on a tripled Nd:YAG laser, and the backscattered light is collected by a Newtonian telescope for the tropospheric measurements and by a Cassegrain telescope for the future stratospheric measurements.
The received wavelengths for each telescope include three elastically scattered wavelengths (355, 532 and 1064 nm), two spontaneous Raman signals from nitrogen (387 and 607 nm) and one spontaneous Raman signal from water vapor (408 nm). The optical signals received by each telescope are separated spectrally by two filter polychromators, built around a set of beamsplitters and custom-designed narrow band-pass filters with high out-of-band rejection. On the visible channel, a Wollaston prism separates the parallel-polarized backscattered signal (532p nm) from the perpendicularly polarized one (532c nm). Photomultiplier tubes detect the signals at the UV and visible wavelengths, and a Si avalanche photodiode detects the near-infrared signal. The acquisition of the signals is performed by seven transient recorders in analog and photon-counting modes. Within the framework of EARLINET (European Aerosol Research Lidar Network), hardware and software intercomparisons have been carried out. The software intercomparison was divided into the validation of the elastic algorithm and of the Raman algorithm; these intercomparisons of the lidar-signal inversions were performed using synthetic data for a number of situations of different complexity. The hardware intercomparison was carried out with the mobile micro-lidar of the Observatoire Cantonal de Neuchâtel. The present lidar system provides independent aerosol extinction and backscatter profiles, depolarization ratio and water vapor mixing ratio up to the tropopause. Their uncertainties can be smaller than 20%, which makes possible the retrieval of microphysical aerosol parameters such as the volume concentration distribution and the mean and integral parameters of the particle size distribution (effective radius, total surface-area concentration, total volume concentration and number concentration of particles).
This retrieval is performed by an algorithm of the Institute of Mathematics of the University of Potsdam based on the hybrid regularization method. The first retrievals of the volume concentration distribution from three backscatter (355, 532 and 1064 nm) and one extinction (355 nm) profiles have shown promising results. Future upgrades of the system will add ozone concentration and temperature profiles up to the stratopause.
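The nitrogen Raman channels are what make the extinction retrieval nearly model-free: aerosol extinction at the emitted wavelength follows from the derivative of the log-ratio of the molecular number density to the range-corrected Raman signal. A minimal numerical sketch on synthetic profiles (all constants, the molecular extinction values and the Ångström exponent are illustrative assumptions; the operational, EARLINET-validated algorithms add careful smoothing and error propagation):

```python
import numpy as np

# Illustrative constants (not instrument values)
LAM0, LAMR = 355e-9, 387e-9   # emitted and N2 Raman wavelengths (m)
ANGSTROM = 1.0                # assumed Angstrom exponent for aerosol extinction
H = 8000.0                    # molecular scale height (m)

def raman_extinction(z, p_raman, n_mol, alpha_mol0, alpha_molR, k=ANGSTROM):
    """Aerosol extinction at the emitted wavelength from an N2 Raman signal:
    differentiate ln(N(z) / (P(z) z^2)), subtract the molecular extinction
    at both wavelengths, then share the result between the two wavelengths
    using the assumed Angstrom exponent."""
    term = np.log(n_mol / (p_raman * z**2))
    dterm = np.gradient(term, z)          # numerical derivative d/dz
    return (dterm - alpha_mol0 - alpha_molR) / (1.0 + (LAM0 / LAMR) ** k)

# Synthetic test profile with a known, constant aerosol extinction
z = np.linspace(500.0, 5000.0, 400)       # range gates (m)
alpha_aer0 = 1e-4                         # "true" aerosol extinction at 355 nm
alpha_aerR = alpha_aer0 * (LAM0 / LAMR) ** ANGSTROM
alpha_mol0 = alpha_molR = 1.2e-5          # crude constant molecular extinction
n_mol = 2.5e25 * np.exp(-z / H)           # molecular number density
tau = (alpha_aer0 + alpha_aerR + alpha_mol0 + alpha_molR) * z
p_raman = n_mol / z**2 * np.exp(-tau)     # simulated range-corrected signal
alpha_ret = raman_extinction(z, p_raman, n_mol, alpha_mol0, alpha_molR)
```

On this synthetic profile the retrieval recovers the prescribed extinction essentially exactly, because the two-way optical depth is linear in range; on real, noisy signals the derivative step is the dominant error source, which is why the thesis quotes uncertainties of up to 20%.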

    Three-dimensional separation in shock/boundary layer interaction

    A large-eddy simulation of a shock impinging on a turbulent boundary layer is carried out and shows good agreement with the experiments. Special emphasis is put on the analysis of the three-dimensional modulation of the flow, in order to clarify the origin of the mean vortices located in the separation region highlighted by the experiments.

    On the search path length of random binary skip graphs

    In this paper we consider the skip graph data structure, a load-balancing alternative to skip lists designed to perform better in a distributed environment. We extend previous results of Devroye on skip lists, and prove that the maximum length of a search path in a random binary skip graph of size n is of order log n with high probability.
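The O(log n) search-path behavior can be observed empirically on the closely related skip-list structure whose analysis the paper extends. Below is a minimal skip-list sketch that counts pointer traversals per search (illustrative only; an actual skip graph distributes these levels across peers using random membership vectors rather than a single linked structure):

```python
import math
import random

class Node:
    __slots__ = ("key", "forward")
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level  # one successor pointer per level

class SkipList:
    MAX_LEVEL = 32

    def __init__(self, p=0.5):
        self.p = p                     # geometric level-promotion probability
        self.level = 1
        self.head = Node(None, self.MAX_LEVEL)

    def _random_level(self):
        lvl = 1
        while random.random() < self.p and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        update = [self.head] * self.MAX_LEVEL
        x = self.head
        for i in range(self.level - 1, -1, -1):
            while x.forward[i] is not None and x.forward[i].key < key:
                x = x.forward[i]
            update[i] = x
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        node = Node(key, lvl)
        for i in range(lvl):
            node.forward[i] = update[i].forward[i]
            update[i].forward[i] = node

    def search_path_length(self, key):
        """Pointer traversals needed to locate key's position."""
        steps = 0
        x = self.head
        for i in range(self.level - 1, -1, -1):
            while x.forward[i] is not None and x.forward[i].key < key:
                x = x.forward[i]
                steps += 1
            steps += 1  # dropping down a level also costs a step
        return steps

# Empirical check: search paths stay within a small multiple of log2(n)
random.seed(42)
sl = SkipList()
n = 1000
for key in random.sample(range(10 * n), n):
    sl.insert(key)
paths = [sl.search_path_length(k) for k in range(0, 10 * n, 97)]
```

With p = 1/2 the expected search path is roughly 2 log2 n, so for n = 1000 the measured maximum stays far below, say, 10 log2 n, consistent with the high-probability bound the paper proves for skip graphs.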

    Optimizing Resource allocation while handling SLA violations in Cloud Computing platforms

    In this paper we study a resource allocation problem in the context of Cloud Computing, where a set of Virtual Machines (VMs) has to be placed on a set of Physical Machines (PMs). Each VM has a given demand (e.g. CPU demand), and each PM has a capacity. However, each VM only uses a fraction of its demand. The aim is to exploit the difference between a VM's demand and its real utilization of the resources, so as to use the capacities of the PMs as fully as possible. Moreover, the real consumption of a VM can change over time (while staying under its original demand), sometimes implying expensive "SLA violations", corresponding to VM consumptions that are not satisfied because of overloaded PMs. Thus, while optimizing the global resource utilization of the PMs, it is necessary to ensure that whenever a VM's needs evolve, a small number of migrations (moving a VM from one PM to another) suffices to find a new configuration in which all the VMs' consumptions are satisfied. We model this problem using a fully dynamic bin packing approach and present an algorithm guaranteeing a global resource utilization of 66%. Moreover, each time a PM is overloaded, at most one migration is necessary to fall back to a configuration with no overloaded PM, and only 3 different PMs are involved in the migrations required to keep the global resource utilization correct. This makes the platform highly resilient to a great number of changes.
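The packing side of the problem can be pictured with a plain first-fit heuristic that places each VM's declared demand on the first PM with enough spare capacity. This is only a toy sketch of the static placement step, not the paper's fully dynamic algorithm with its 66% utilization and single-migration guarantees:

```python
def first_fit(demands, capacity):
    """Toy first-fit placement of VM demands onto identical PMs.

    demands: list of per-VM resource demands (e.g. CPU fractions)
    capacity: capacity of every PM
    Returns (bins, loads): the demand groups per PM and each PM's load.
    """
    bins, loads = [], []
    for d in demands:
        for i, load in enumerate(loads):
            if load + d <= capacity:       # fits on an existing PM
                bins[i].append(d)
                loads[i] += d
                break
        else:                              # no PM fits: open a new one
            bins.append([d])
            loads.append(d)
    return bins, loads
```

For example, `first_fit([0.5, 0.4, 0.3, 0.2], 1.0)` packs the four demands onto two PMs. The dynamic problem the paper studies is harder: loads change over time, so the allocation must also bound how many migrations are needed to repair an overload.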

    Modeling and Practical Evaluation of a Service Location Problem in Large Scale Networks

    We consider a generalization of a classical optimization problem related to server and replica location problems in networks. More precisely, we suppose that a set of users distributed over a network wishes to access a particular service offered by a set of providers. The aim is then to identify a set of service providers able to offer a sufficient amount of resources to satisfy the requests of the clients. Moreover, a quality of service meeting some requirements in terms of latency is desirable. A judicious distribution of the servers in the network may also ensure good fault-tolerance properties. We model this problem as a variant of Bin Packing, namely Bin Packing under Distance Constraint (BPDC), where the goal is to build a minimal number of bins (i.e. to choose a minimal number of servers) so that (i) each client is associated to exactly one server, (ii) the capacity of the server is large enough to satisfy the requests of its clients, and (iii) the distance between two clients associated to the same server is bounded. We prove that this problem is hard to approximate even when using resource augmentation techniques: we compare the number of bins obtained by polynomial-time algorithms allowed to build bins of diameter at most b*dmax, for b>1, to the optimal number of bins of diameter at most dmax. On the one hand, we prove that (i) if b=(2-e), BPDC is hard to approximate within any constant approximation ratio, for any e>0; and that (ii) BPDC is hard to approximate at a ratio lower than 3/2 even if resource augmentation is used. On the other hand, if b=2, we propose a polynomial-time approximation algorithm for BPDC with approximation ratio 7/3 in the general case. We show how to turn an approximation algorithm for BPDC into an approximation algorithm for the non-uniform capacitated K-center problem and vice versa.
Then, we present a comparison of the quality of results for BPDC in the context of several Internet latency embedding tools, such as Sequoia and Vivaldi, using datasets based on PlanetLab latency measurements.
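The b=2 relaxation has a simple geometric intuition: if every client in a group lies within dmax of a common center, the group's diameter is at most 2*dmax by the triangle inequality. The greedy sketch below builds groups that way and then splits them by capacity; it is illustrative only, not the paper's 7/3-approximation algorithm:

```python
def greedy_bpdc(points, requests, dist, dmax, capacity):
    """Hypothetical greedy sketch for diameter-constrained packing.

    Repeatedly pick an unassigned client as a center, gather all unassigned
    clients within dmax of it (group diameter <= 2*dmax by the triangle
    inequality), then split that ball by capacity using first-fit.
    Returns a list of bins, each a list of client indices.
    """
    unassigned = set(range(len(points)))
    bins = []
    while unassigned:
        center = min(unassigned)
        ball = [i for i in unassigned if dist(points[center], points[i]) <= dmax]
        loads, groups = [], []
        for i in ball:                      # first-fit split by capacity
            for g, load in enumerate(loads):
                if load + requests[i] <= capacity:
                    groups[g].append(i)
                    loads[g] += requests[i]
                    break
            else:
                groups.append([i])
                loads.append(requests[i])
        bins.extend(groups)
        unassigned -= set(ball)
    return bins
```

For instance, with clients at positions 0, 1, 2, 10, 11 on a line, unit requests, capacity 2 and dmax = 2, the sketch opens three bins, each with diameter at most 2*dmax.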

    Reliable Service Allocation in Clouds

    We consider several reliability problems that arise when allocating applications to processing resources in a Cloud computing platform. More specifically, we assume on the one hand that each computing resource is associated with a capacity constraint and a probability of failure. On the other hand, we assume that each service runs as a set of independent instances of identical Virtual Machines, and that the Service Level Agreement between the Cloud provider and the client states that a minimal number of instances of the service should run with a given probability. In this context, given the capacities and failure probabilities of the machines, and the capacity and reliability demands of the services, the question for the Cloud provider is to find an allocation of the instances of the services (possibly using replication) onto machines satisfying all types of constraints during a given time period. In this paper, our goal is to assess the impact of the reliability constraint on the complexity of resource allocation problems. We consider several variants of this problem, depending on the number of services and whether their reliability demand is individual or global. We prove several fundamental complexity results (#P- and NP-completeness results) and provide several optimal and approximation algorithms. In particular, we prove that a basic randomized allocation algorithm, which is easy to implement, provides optimal or quasi-optimal results in several contexts, and we show through simulations that it also achieves very good results in more general settings.
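A basic randomized allocation of the kind the paper analyzes can be sketched as follows. The placement rule (uniform choice among machines with spare capacity, unit-size instances) and the Monte Carlo reliability check are illustrative simplifications, not the paper's exact algorithm:

```python
import random

def random_allocation(n_instances, machines, rng):
    """Place each instance on a machine chosen uniformly at random among
    machines with remaining capacity.

    machines: dict name -> (capacity, failure_probability)
    Returns the list of machine names, one per instance.
    """
    free = {m: cap for m, (cap, _pf) in machines.items()}
    alloc = []
    for _ in range(n_instances):
        candidates = [m for m, c in free.items() if c >= 1]
        if not candidates:
            raise ValueError("not enough capacity for all instances")
        m = rng.choice(candidates)
        free[m] -= 1
        alloc.append(m)
    return alloc

def survival_probability(alloc, machines, k, trials, rng):
    """Monte Carlo estimate of P(at least k instances survive) when each
    machine fails independently with its failure probability."""
    ok = 0
    for _ in range(trials):
        failed = {m for m, (_c, pf) in machines.items() if rng.random() < pf}
        alive = sum(1 for m in alloc if m not in failed)
        ok += alive >= k
    return ok / trials
```

An SLA of the form "at least k instances up with probability q" can then be checked by comparing `survival_probability(...)` against q; replication across machines with low failure probabilities is what drives the estimate up.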