
    Integrating marine megafauna into ecosystem-based management: a multidisciplinary approach applied to southern European waters.

    305 p. Marine ecosystems are threatened by human activities that affect their sustainability and resilience, reducing marine biodiversity and impairing ecosystem functioning. The objective of this thesis was to assess the impacts of anthropogenic activities on marine megafauna by integrating their spatial ecology into the ecosystem-based approach. Focusing on the seabirds and cetaceans inhabiting the Bay of Biscay (BoB), this thesis develops an integrative ecological framework based on multidisciplinary approaches to identify threats, develop environmental indicators, establish reference values, obtain spatio-temporal abundance estimates, evaluate the coherence of the network of Marine Protected Areas (MPAs), and examine the value of time-series data for robust MPA designation. Chapter 1 assesses the impact of threats on the megafauna of the BoB; Chapter 2 develops an integrative methodological approach to advance ecosystem-level monitoring; Chapter 3 identifies essential ocean variables and areas of high biodiversity value for the megafauna community of the northern and north-western Iberian coast; Chapter 4 addresses the difficulty of protecting highly mobile species by locating their critical areas and evaluating the protection that the current BoB MPA network affords them; and Chapter 5 assesses the value of time-series data by exploring whether priority areas for megafauna conservation remain consistent regardless of the time period considered in the prioritization process. Azti Tecnali

    Quantifying Standing Dead Tree Volume and Structural Loss with Voxelized Terrestrial Lidar Data

    Standing dead trees (SDTs) are an important forest component and impact a variety of ecosystem processes, yet the carbon pool dynamics of SDTs are poorly constrained in terrestrial carbon cycling models. The ability to model wood decay and carbon cycling in relation to detectable changes in tree structure and volume over time would greatly improve such models. The overall objective of this study was to provide automated aboveground volume estimates of SDTs and automated procedures to detect, quantify, and characterize structural losses over time with terrestrial lidar data. The specific objectives of this study were: 1) develop an automated SDT volume estimation algorithm providing accurate volume estimates for trees scanned in dense forests; 2) develop an automated change detection methodology to accurately detect and quantify SDT structural loss between subsequent terrestrial lidar observations; and 3) characterize the structural loss rates of pine and oak SDTs in southeastern Texas. A voxel-based volume estimation algorithm, “TreeVolX”, was developed and incorporates several methods designed to robustly process point clouds of varying quality levels. The algorithm operates on horizontal voxel slices by segmenting the slice into distinct branch or stem sections then applying an adaptive contour interpolation and interior filling process to create solid reconstructed tree models (RTMs). TreeVolX estimated large and small branch volume with an RMSE of 7.3% and 13.8%, respectively. A voxel-based change detection methodology was developed to accurately detect and quantify structural losses and incorporated several methods to mitigate the challenges presented by shifting tree and branch positions as SDT decay progresses. The volume and structural loss of 29 SDTs, composed of Pinus taeda and Quercus stellata, were successfully estimated using multitemporal terrestrial lidar observations over elapsed times ranging from 71 – 753 days. 
Pine and oak structural loss rates were characterized by estimating the volumetric loss occurring in 20 equal-interval height bins of each SDT. Results showed that the large pine snags exhibited more rapid structural loss than the medium-sized oak snags in this study.
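The basic voxel-based volume idea can be sketched in a few lines: snap the points to a regular grid and sum the occupied cells. This is a minimal stand-in (the `voxel_volume` function and all sizes are illustrative), not TreeVolX itself, which additionally segments each horizontal slice and fills branch interiors to form solid reconstructed tree models.

```python
import numpy as np

def voxel_volume(points, voxel_size):
    """Estimate the solid volume of a point cloud by counting occupied
    voxels and multiplying by the voxel volume.  A toy stand-in for the
    slice-wise segmentation and interior filling performed by TreeVolX;
    the function name and interface are hypothetical."""
    idx = np.floor(points / voxel_size).astype(int)
    occupied = {tuple(i) for i in idx}          # unique occupied grid cells
    return len(occupied) * voxel_size ** 3

# Dense samples from a 1 m x 1 m x 1 m cube should yield roughly 1 m^3.
rng = np.random.default_rng(0)
pts = rng.random((200_000, 3))
vol = voxel_volume(pts, 0.05)                   # 5 cm voxels
```

For real tree point clouds, the interior-filling step matters: surface-only scans leave stem interiors empty, so naive occupancy counting underestimates volume.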

    Credit default swaps and CreditGrades: Evidence from the Nordic markets

    This study examines and compares theoretical CDS spreads generated by a structural framework with empirical CDS spreads. The model employed is the CreditGrades model, based on Merton's 1974 framework, which calculates default probabilities and credit spreads from balance-sheet and equity data. The aim is to measure how well the model can explain observed CDS spreads and whether it has any predictive ability. The model is tested for 22 companies in the Nordic market. Regression analysis is used to measure the explanatory power of the model. It is tested for the period between 2005 and 2009 and for two subperiods, 2005-2007 and 2007-2009. The model was found to have limited explanatory power, with R-squared values ranging from 0 to 21 percent. Even though the explanatory power is low, the CDS spreads obtained through CreditGrades are significant for 19 companies during 2005-2009 and 21 companies during 2007-2009. The predictive ability of the model is inconclusive, with about a third of the companies yielding significant results for the one-day-lagged model and a third of the companies' CDS spreads significantly autocorrelated with their lagged values. The residuals were found to be highly cross-correlated. Principal component analysis reveals that 20-50% of the variation in the residuals can be explained by a systematic component not related to company-specific information. We propose the use of a counterparty risk index. With the inclusion of the index, the R-squared value improves. The index is significant for 21 companies during the entire sample period and for all of the companies during the second half of the sample.
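The core regression used to measure explanatory power can be sketched as follows. The data and coefficients below are synthetic placeholders, not values from the study; the OLS setup is a generic stand-in for regressing one company's observed CDS spreads on its CreditGrades model spreads.

```python
import numpy as np

# Synthetic stand-ins for one company's empirical and model CDS spreads
# (in basis points); the generating coefficients are illustrative only.
rng = np.random.default_rng(1)
model = rng.uniform(50.0, 300.0, 250)                 # CreditGrades-style spreads
empirical = 80.0 + 0.3 * model + rng.normal(0.0, 40.0, 250)

# OLS regression of empirical on model spreads; R^2 measures how much of
# the observed spread variation the structural model explains.
X = np.column_stack([np.ones_like(model), model])
beta, *_ = np.linalg.lstsq(X, empirical, rcond=None)
resid = empirical - X @ beta
r2 = 1.0 - resid.var() / empirical.var()
```

The study's residual analysis would then stack `resid` across companies and apply principal component analysis to look for the shared systematic component.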

    Point-set manifold processing for computational mechanics: thin shells, reduced order modeling, cell motility and molecular conformations

    In many applications, one would like to perform calculations on smooth manifolds of dimension d embedded in a high-dimensional space of dimension D. Often, a continuous description of such a manifold is not known; instead, it is sampled by a set of scattered points in high dimensions, which poses a serious challenge. In this thesis, we approximate the point-set manifold as an overlapping set of smooth parametric descriptions, whose geometric structure is revealed by statistical learning methods and then parametrized by meshfree methods. This approach avoids any global parameterization and hence is applicable to manifolds of any genus and complex geometry. It combines four ingredients: (1) partitioning of the point set into subregions of trivial topology; (2) automatic detection of the local geometric structure of the manifold by nonlinear dimensionality reduction techniques; (3) local parameterization of the manifold using smooth meshfree (here local maximum-entropy) approximants; and (4) patching together the local representations by means of a partition of unity. In this thesis we show the generality, flexibility, and accuracy of the method in four different problems. First, we exercise it in the context of Kirchhoff-Love thin shells (d=2, D=3). We test our methodology against classical linear and nonlinear benchmarks in thin-shell analysis and highlight its ability to handle point-set surfaces of complex topology and geometry. We then tackle problems of much higher dimensionality. We perform reduced order modeling in the context of finite deformation elastodynamics, considering a nonlinear reduced configuration space, in contrast with classical linear approaches based on Principal Component Analysis (d=2, D=10000's). We further quantitatively unveil the geometric structure of the motility strategy of a family of micro-organisms called Euglenids from experimental videos (d=1, D~30000's). Finally, in the context of enhanced sampling in molecular dynamics, we automatically construct collective variables for the molecular conformational dynamics (d=1...6, D~30,1000's).
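Ingredient (2), detecting the local geometric structure of a sampled manifold, can be illustrated with a linear toy version: sample a one-dimensional manifold (a circle) embedded in D = 50 dimensions and examine the singular values of a small neighbourhood. The thesis uses nonlinear dimensionality reduction; this local-PCA sketch and all sizes in it are illustrative.

```python
import numpy as np

# Sample a d = 1 manifold (a circle) and embed it linearly in D = 50.
rng = np.random.default_rng(2)
t = rng.uniform(0.0, 2.0 * np.pi, 500)
circle = np.column_stack([np.cos(t), np.sin(t)])
embedded = circle @ rng.standard_normal((2, 50))   # scattered points in high D

# Local structure at one point: SVD of a centered 30-point neighbourhood.
p = embedded[0]
nearest = embedded[np.argsort(np.linalg.norm(embedded - p, axis=1))[:30]]
centered = nearest - nearest.mean(axis=0)
svals = np.linalg.svd(centered, compute_uv=False)
# One dominant singular value: the point set is locally one-dimensional.
```

A nonlinear method plays the same role globally, recovering the low-dimensional parameterization that the meshfree approximants are then built on.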

    Harm caused by Marine Litter

    Marine litter is a global concern with a range of problems associated with it, as recognised by the Marine Strategy Framework Directive (MSFD). Marine litter can impact organisms at different levels of biological organization, and habitats, in a number of ways: through entanglement in, or ingestion of, litter items by individuals, resulting in death and/or severe suffering; through chemical and microbial transfer; as a vector for the transport of biota; and by altering or modifying assemblages of species. Marine litter is a threat not only to marine species and ecosystems but also carries a risk to human health and has significant implications for human welfare, negatively impacting vital economic sectors such as tourism, fisheries, aquaculture and energy supply, and bringing economic losses to individuals, enterprises and communities. This technical report aims to provide clear insight into the major negative impacts of marine litter by describing the mechanisms of harm. Further, it offers reflections on the evidence for harm from marine litter to biota, including the underlying aspect of animal welfare, while also considering the socioeconomic effects, including the influence of marine litter on ecosystem services. The general conclusions highlight that understanding the risks and uncertainties regarding the harm caused by marine litter is closely associated with the precautionary principle. The evidence collected in this report can be regarded as a supporting step towards defining harm and as an evidence base for the various actions that decision-makers need to implement. This improved knowledge about the scale of the harmful effects of marine litter will further support EU Member States (MSs) and Regional Seas Conventions (RSCs) in implementing their programmes of measures, regional action plans and assessments. JRC.D.2-Water and Marine Resource

    Motion Segmentation Aided Super Resolution Image Reconstruction

    This dissertation addresses Super Resolution (SR) Image Reconstruction with a focus on motion segmentation. The main thrust is Information Complexity guided Gaussian Mixture Models (GMMs) for Statistical Background Modeling. In developing our framework we also focus on two other topics: motion trajectory estimation for global and local scene change detection, and image reconstruction to obtain high resolution (HR) representations of the moving regions. Such a framework is used for dynamic scene understanding and for the recognition of individuals and threats in image sequences recorded with either stationary or non-stationary camera systems. We introduce a new technique called Information Complexity guided Statistical Background Modeling, successfully employing GMMs that are optimal with respect to information complexity criteria. Moving objects are segmented out through background subtraction, which utilizes the computed background model. This technique produces superior results to competing background modeling strategies. State-of-the-art SR Image Reconstruction studies combine the information from a set of only slightly different low resolution (LR) images of a static scene to construct an HR representation. The crucial challenge not handled in these studies is accumulating the corresponding information from highly displaced moving objects. To this end, a framework for SR Image Reconstruction of moving objects with such high levels of displacement is developed. Our assumption is that LR images differ from each other due to the local motion of the objects and the global motion of the scene imposed by a non-stationary imaging system. Contrary to traditional SR approaches, we employ several steps: suppression of the global motion; motion segmentation accompanied by background subtraction to extract moving objects; suppression of the local motion of the segmented regions; and super-resolving the accumulated information coming from the moving objects rather than the whole scene. This results in a reliable offline SR Image Reconstruction tool that handles several types of dynamic scene change, compensates for the effects of the camera system, and provides data redundancy by removing the background. The framework proved superior to state-of-the-art algorithms that put no significant effort toward dynamic scene representation with non-stationary camera systems.
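The background-subtraction step can be illustrated with a deliberately simplified per-pixel model. This single-Gaussian sketch (all names, sizes, and parameters are hypothetical) stands in for the information-complexity-guided GMMs the dissertation develops; a real GMM model keeps several Gaussians per pixel.

```python
import numpy as np

def segment_foreground(bg_mean, bg_var, frame, lr=0.05, k=3.0):
    """Per-pixel single-Gaussian background model.  Pixels more than k
    standard deviations from the background mean are foreground; the
    background statistics are updated with an exponential moving average
    only where the pixel currently looks like background."""
    fg = np.abs(frame - bg_mean) > k * np.sqrt(bg_var)
    new_mean = np.where(fg, bg_mean, (1 - lr) * bg_mean + lr * frame)
    new_var = np.where(fg, bg_var, (1 - lr) * bg_var + lr * (frame - new_mean) ** 2)
    return new_mean, new_var, fg

# Static noisy 32x32 scene; a bright 8x8 "object" enters at frame 60.
rng = np.random.default_rng(3)
bg_mean = np.full((32, 32), 100.0)
bg_var = np.full((32, 32), 4.0)
for i in range(80):
    frame = 100.0 + rng.normal(0.0, 2.0, (32, 32))
    if i >= 60:
        frame[8:16, 8:16] += 50.0
    bg_mean, bg_var, fg = segment_foreground(bg_mean, bg_var, frame)
```

Updating only background-labelled pixels is what keeps a slow-moving object from being absorbed into the background model, which is the failure mode a naive running average exhibits.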

    The 11th International Conference and Workshop on Lobster Biology and Management

    As large, charismatic species, lobsters of all stripes often find themselves at the center of scientific research and in the media spotlight. Lobster fisheries are important economic drivers of coastal communities around the world. Indeed, lobsters are poster children for a marine environment increasingly under the pressure of human exploitation and environmental change. The 200+ abstracts in this program reflect the activity of a vibrant international community of researchers and industry members striving to understand what makes lobsters tick and to keep their fisheries sustainable.

    Towards Distributed Task-based Visualization and Data Analysis

    To support scientific work with large and complex data, the field of scientific visualization emerged in computer science; it produces images through computational analysis of the data. Frameworks for combining different analysis and visualization modules allow the user to create flexible pipelines for this purpose and set the standard for the interactive scientific visualization used by domain scientists. Existing frameworks employ a thread-parallel, message-passing approach to parallel and distributed scalability, leaving scientific visualization in high performance computing to specialized ad-hoc implementations. The task-parallel programming paradigm is promising for improving scalability and portability in high performance computing, and thus this thesis works towards a framework for distributed, task-based visualization modules and pipelines. The major contribution of the thesis is the establishment of modules for Merge Tree construction and (based on the former) topological simplification. Such modules already form a necessary first step for most visualization pipelines and can be expected to increase in importance for the larger and more complex data produced and/or analysed by high performance computing. To create a task-parallel, distributed Merge Tree construction module, the construction process has to be completely revised. We derive a novel property of Merge Tree saddles and introduce a novel task-parallel, distributed Merge Tree construction method that offers both good performance and scalability. This forms the basis for a module for topological simplification, which we extend by introducing novel alternative simplification parameters that aim to reduce the reliance on prior domain knowledge and thereby increase flexibility in typical high performance computing scenarios.
Both modules lay the groundwork for continuative analysis and visualization steps and form a fundamental step towards an extensive task-parallel visualization pipeline framework for high performance computing.

    Scientific visualization is a discipline of computer science that produces images from datasets through computational analysis in order to support scientific work with large and complex data. Software systems that allow the user to combine different analysis and visualization modules into a flexible pipeline set the standard for interactive scientific visualization. Existing systems of this kind rely on thread parallelism with explicit communication, so scientific visualization on high performance computers is mostly left to specialized ad-hoc solutions. Here, task parallelism appears promising for improving the scalability and portability of solutions on high performance computers. This thesis therefore aims at implementing a software system for distributed, task-parallel visualization modules and pipelines. Its central contribution is the introduction of two modules, for Merge Tree construction and topological simplification. Such modules already constitute a necessary first step for most visualization pipelines and are expected to become even more important for the larger and more complex datasets produced or analysed in high performance computing. To develop a task-parallel, distributable construction method for Merge Trees, the established algorithm had to be fundamentally revised. In this work we derive a new property of Merge Tree saddles and develop a novel construction algorithm with good performance and scalability. Building on this, we develop a module for topological simplification, which we extend with new, alternative simplification parameters in order to increase flexibility for use on high performance computers. Both modules enable continuative analysis and visualization and lay a cornerstone for the development of a comprehensive task-parallel software system for visualization pipelines on high performance computers.
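The classical sequential sweep that such a construction revises can be illustrated on a 1-D scalar field: visit vertices from low to high and union adjacent, already-visited vertices; each union of two distinct components records a merge (saddle) event. This minimal union-find sketch is not the thesis's task-parallel, distributed algorithm, only the sequential baseline.

```python
def merge_saddles(values):
    """Join-tree sweep over a 1-D scalar field.  Returns the field values
    at which two components merge -- a minimal sequential sketch of Merge
    Tree construction."""
    parent = {}

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    saddles = []
    for v in sorted(range(len(values)), key=lambda i: values[i]):
        parent[v] = v
        roots = {find(nb) for nb in (v - 1, v + 1) if nb in parent}
        if len(roots) == 2:           # two components meet: a saddle
            saddles.append(values[v])
        for r in roots:
            parent[r] = v
    return saddles

# Two side minima (values 1 and 2) join the global minimum 0 at the
# saddles valued 3 and 4.
print(merge_saddles([0, 3, 1, 4, 2, 5]))
```

The global sort over all vertices is exactly what makes this baseline hard to distribute, which motivates revising the construction around local properties of the saddles.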

    Fehlerkaschierte Bildbasierte Darstellungsverfahren (Error-Concealing Image-Based Rendering Methods)

    Creating photo-realistic images has been one of the major goals of computer graphics since its early days. Instead of modeling the complexity of nature with standard modeling tools, image-based approaches aim to exploit real-world footage directly, as it is photo-realistic by definition. A drawback of these approaches has always been that the composition or combination of different sources is a non-trivial task, often resulting in annoying visible artifacts. In this thesis we focus on different techniques to diminish visible artifacts when combining multiple images in a common image domain. The results are either novel images, for the task of compositing multiple images, or novel video sequences rendered in real time, when dealing with video footage from multiple cameras.

    Photorealism has always been one of the great goals of computer graphics. Instead of rebuilding the complexity of nature with standardized modeling tools, image-based approaches take the opposite route and use real photographs for modeling, since these are photo-realistic by definition. A drawback of this variant, however, is that the composition or combination of several source images is a non-trivial task and frequently produces unpleasantly noticeable artifacts in the resulting image. This dissertation pursues several approaches to prevent or attenuate the artifacts that arise when compositing or combining several images in a common image domain. As a result, the presented methods deliver new images or new views of an image collection or video sequence, depending on whether the task at hand is the composition of several images or the combination of several videos from different cameras.
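A classic way to diminish the visible seam when combining two sources is feathered blending: weight the two images with a smooth ramp across their overlap instead of cutting hard from one to the other. The sketch below uses illustrative gray levels for two overlapping strips and is a generic example, not one of the thesis's specific techniques.

```python
import numpy as np

# Two constant-intensity image strips that disagree in their overlap
# region (a hard cut between them would leave a step-shaped seam).
left = np.full((4, 100), 100.0)                 # source A over the overlap
right = np.full((4, 100), 140.0)                # source B over the overlap

# Linear weight ramp across the overlap: full weight for A on the left
# edge, full weight for B on the right edge.
w = np.linspace(1.0, 0.0, 100)
composite = w * left + (1.0 - w) * right        # smooth transition, no seam
```

More elaborate schemes (multi-band or gradient-domain blending) follow the same principle but distribute the transition differently across spatial frequencies.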