    Vegetation dynamics in northern South America on different time scales

    The overarching goal of this doctoral thesis was to understand the dynamics of vegetation activity across time scales, both globally and in a regional context. To achieve this, I took advantage of open data sets, novel mathematical approaches for time series analysis, and state-of-the-art technology to effectively manipulate and analyze time series data. Specifically, I disentangled the longest records of vegetation greenness (>30 years) in tandem with climate variables at 0.05° resolution for a global-scale analysis (Chapter 3). Later, I focused the analysis on a particular region, northern South America (NSA), to evaluate vegetation activity at seasonal (Chapter 4) and interannual (Chapter 5) scales using moderate spatial resolution (0.0083°). Two main approaches were used in this research: time series decomposition through the Fast Fourier Transform (FFT), and dimensionality reduction through Principal Component Analysis (PCA). Overall, assessing vegetation-climate dynamics at different temporal scales facilitates the observation and understanding of processes that are often obscured by one or a few dominant processes. On the one hand, the global analysis showed the dominant seasonality of vegetation and temperature at northern latitudes in comparison with the heterogeneous patterns of the tropics, and the remarkable longer-term oscillations in the southern hemisphere. On the other hand, the regional analysis showed the complex and diverse land-atmosphere interactions in NSA when assessing the seasonality and ENSO-associated interannual variability of vegetation activity. In conclusion, disentangling these processes and assessing them separately allows one to formulate new hypotheses about mechanisms of ecosystem functioning, reveal hidden patterns of climate-vegetation interactions, and inform about vegetation dynamics relevant to ecosystem conservation and management.
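
    As a minimal sketch of the two approaches named above, the snippet below decomposes a synthetic monthly greenness series with the FFT into a seasonal and a longer-term component and then applies PCA across pixels; the synthetic data, variable names, and band limits are illustrative assumptions, not the thesis code.

```python
# Minimal sketch (not the thesis code): FFT decomposition of a monthly
# greenness series into seasonal and longer-term components, then PCA
# across pixels. The synthetic NDVI-like data are illustrative only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_years, n_pixels = 30, 100
t = np.arange(n_years * 12)                      # monthly time axis, >30 years

# Synthetic series: annual cycle + slow oscillation + noise, per pixel
ndvi = (0.5
        + 0.2 * np.sin(2 * np.pi * t / 12)[None, :]          # 12-month cycle
        + 0.05 * np.sin(2 * np.pi * t / (12 * 5))[None, :]   # ~5-year oscillation
        + 0.02 * rng.standard_normal((n_pixels, t.size)))

# FFT-based decomposition of one pixel: annual band vs. low frequencies
freqs = np.fft.rfftfreq(t.size, d=1.0)           # cycles per month
spec = np.fft.rfft(ndvi[0] - ndvi[0].mean())

annual_band = np.isclose(freqs, 1 / 12, atol=1e-3)
seasonal = np.fft.irfft(np.where(annual_band, spec, 0), n=t.size)
longer_term = np.fft.irfft(np.where(freqs < 1 / 24, spec, 0), n=t.size)

# PCA across pixels to find dominant modes of greenness variability
pca = PCA(n_components=3)
scores = pca.fit_transform(ndvi - ndvi.mean(axis=1, keepdims=True))
print(pca.explained_variance_ratio_)
```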

    Current Options for Visualization of Local Deformation in Modern Shape Analysis Applied to Paleobiological Case Studies

    In modern shape analysis, deformation is quantified in different ways depending on the algorithms used and on the scale at which it is evaluated. While global affine and non-affine deformation components can be decoupled and computed using a variety of methods, the very local deformation can be considered, infinitesimally, as an affine deformation. The deformation gradient tensor F can be computed locally, either directly by exploiting triangulation or tetrahedralization structures, or by locally evaluating the first derivative of an appropriate interpolation function that maps the global deformation from the undeformed to the deformed state. A suitable function is the thin plate spline (TPS), which separates affine from non-affine deformation components. F, also known as the Jacobian matrix, encodes both the locally affine deformation and the local rotation. This implies that it should be used for visualizing primary strain directions (PSDs) and deformation ellipses and ellipsoids on the target configuration. Using C = FᵀF instead allows one to compute PSDs and to visualize them on the source configuration. Moreover, C allows the computation of the strain energy, which can be evaluated and mapped locally at any point of a body using an interpolation function. In addition, it is possible, by exploiting the second-order Jacobian, to calculate the amount of non-affine deformation in the neighborhood of the evaluation point by computing the body bending energy density encoded in the deformation. In this contribution, we present (i) the main computational methods for evaluating local deformation metrics, (ii) a number of different strategies to visualize them on both undeformed and deformed configurations, and (iii) the potential pitfalls of ignoring the actual three-dimensional nature of F when it is evaluated along a surface identified by a triangulation in three dimensions.
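
    As a small numerical illustration of the quantities named above (not tied to any particular software), the sketch below computes F directly for a single 2D triangle from its undeformed and deformed vertices, forms C = FᵀF, and extracts principal strain directions and stretches; the triangle coordinates are made-up assumptions.

```python
# Illustrative sketch: the local deformation gradient F of one triangle,
# computed directly from its source and target vertices, and the right
# Cauchy-Green tensor C = F^T F used to get principal strain directions.
import numpy as np

# Source and target vertices of one triangle (2D case, made-up coordinates)
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # undeformed
x = np.array([[0.0, 0.0], [1.2, 0.1], [-0.1, 0.9]])  # deformed

# Edge-vector matrices (columns are the two edges from vertex 0)
DX = np.column_stack([X[1] - X[0], X[2] - X[0]])
Dx = np.column_stack([x[1] - x[0], x[2] - x[0]])

# Deformation gradient: maps undeformed edges onto deformed edges, Dx = F @ DX
F = Dx @ np.linalg.inv(DX)

# C = F^T F removes the local rotation contained in F
C = F.T @ F

# Principal strain directions (eigenvectors of C, in the source configuration)
# and principal stretches (square roots of the eigenvalues)
eigvals, eigvecs = np.linalg.eigh(C)
stretches = np.sqrt(eigvals)

# Pushing the directions forward with F gives the axes of the deformation
# ellipse drawn on the target configuration
target_axes = F @ eigvecs
print("principal stretches:", stretches)
```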

    Refficientlib: an efficient load-rebalanced adaptive mesh refinement algorithm for high-performance computational physics meshes

    In this paper we present a novel algorithm for adaptive mesh refinement in computational physics meshes in a distributed memory parallel setting. The proposed method is developed for nodally based parallel domain partitions, where the nodes of the mesh belong to a single processor whereas the elements can belong to multiple processors. Some of the main features of the algorithm presented in this paper are its capability of handling multiple types of elements in two and three dimensions (triangular, quadrilateral, tetrahedral, and hexahedral), the small amount of memory required per processor, and its parallel scalability up to thousands of processors. The presented algorithm is also capable of dealing with nonbalanced hierarchical refinement, where multirefinement level jumps are possible between neighbor elements. An algorithm for load rebalancing is also presented, which allows the hierarchical data structure to be moved between processors so that load imbalance is kept below an acceptable level at all times during the simulation. A particular feature of the proposed algorithm is that arbitrary renumbering algorithms can be used in the load rebalancing step, including both graph partitioning and space-filling curve renumbering algorithms. The presented algorithm is packed in the Fortran 2003 object-oriented library RefficientLib, whose interface calls, which allow it to be used from any computational physics code, are summarized. Finally, numerical experiments illustrating the performance and scalability of the algorithm are presented.
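
    As an illustrative sketch of the space-filling-curve idea mentioned above (not the RefficientLib API, which is Fortran 2003), the snippet below renumbers 2D element centroids along a Morton (Z-order) curve and cuts the ordering into contiguous chunks of roughly equal weight, one per processor; the grid resolution, weights, and function names are assumptions for illustration.

```python
# Illustrative sketch only: Morton (Z-order) renumbering of element centroids
# and a contiguous, weight-balanced split of the ordering across processors,
# the basic idea behind space-filling-curve load rebalancing.
import numpy as np

def morton_key_2d(ix: int, iy: int, bits: int = 16) -> int:
    """Interleave the bits of integer grid coordinates (ix, iy)."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def rebalance(centroids: np.ndarray, weights: np.ndarray, n_procs: int):
    """Return an owner processor for each element, balancing total weight."""
    # Quantize centroids to an integer grid and sort along the Morton curve
    grid = np.floor(65535 * (centroids - centroids.min(0)) /
                    np.ptp(centroids, axis=0)).astype(int)
    keys = np.array([morton_key_2d(ix, iy) for ix, iy in grid])
    order = np.argsort(keys)

    # Walk the curve and cut it into n_procs pieces of ~equal accumulated weight
    target = weights.sum() / n_procs
    owner = np.empty(len(weights), dtype=int)
    acc, proc = 0.0, 0
    for e in order:
        if acc >= target * (proc + 1) and proc < n_procs - 1:
            proc += 1
        owner[e] = proc
        acc += weights[e]
    return owner

# Example: 1000 random element centroids with unit weights over 8 processors
rng = np.random.default_rng(1)
pts = rng.random((1000, 2))
print(np.bincount(rebalance(pts, np.ones(1000), 8)))
```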

    Securing Microservices

    Microservices have drawn significant interest in recent years and are now successfully finding their way into different areas, from enterprise IT to the Internet of Things and even critical applications. This article discusses how microservices can be secured at different levels and stages, considering a common software development lifecycle.

    Proceedings, MSVSCC 2015

    The Virginia Modeling, Analysis and Simulation Center (VMASC) of Old Dominion University hosted the 2015 Modeling, Simulation & Visualization Student Capstone Conference on April 16th. The Capstone Conference features undergraduate and graduate students in Modeling and Simulation and related fields from many colleges and universities. Students present their research to an audience of fellow students, faculty, judges, and other distinguished guests. For the students, these presentations afford them the opportunity to impart their innovative research to members of the M&S community from academic, industry, and government backgrounds. Also participating in the conference are faculty and judges who have volunteered their time to provide direct support to their students' research, facilitate the various conference tracks, serve as judges for each of the tracks, and provide overall assistance to this conference. 2015 marks the ninth year of the VMASC Capstone Conference for Modeling, Simulation and Visualization. This year our conference attracted a number of fine student-written papers and presentations, resulting in a total of 51 research works being presented. This year's conference had record attendance thanks to support from various departments at Old Dominion University, other local universities, and the United States Military Academy at West Point. We greatly appreciated all of the work and energy that went into this year's conference; it truly was a highly collaborative effort that resulted in a very successful symposium for the M&S community and all of those involved. Below you will find a brief summary of the best papers and best presentations, along with some simple statistics on the overall conference contributions. That is followed by a table of contents, broken down by conference track category, with a copy of each included body of work. Thank you again for your time and your contributions, as this conference is designed to continuously evolve and adapt to better suit the authors and M&S supporters. Dr. Yuzhong Shen, Graduate Program Director, MSVE, Capstone Conference Chair; John Shull, Graduate Student, MSVE, Capstone Conference Student Chair.

    Big Earth Data and Machine Learning for Sustainable and Resilient Agriculture

    Big streams of Earth images from satellites or other platforms (e.g., drones and mobile phones) are becoming increasingly available at low or no cost and with enhanced spatial and temporal resolution. This thesis recognizes the unprecedented opportunities offered by the high-quality, open-access Earth observation data of our times and introduces novel machine learning and big data methods to properly exploit them towards developing applications for sustainable and resilient agriculture. The thesis addresses three distinct thematic areas: the monitoring of the Common Agricultural Policy (CAP), the monitoring of food security, and applications for smart and resilient agriculture. The methodological innovations related to these three thematic areas address the following issues: i) the processing of big Earth Observation (EO) data, ii) the scarcity of annotated data for machine learning model training, and iii) the gap between machine learning outputs and actionable advice. This thesis demonstrates how big data technologies such as data cubes, distributed learning, linked open data, and semantic enrichment can be used to exploit the data deluge and extract knowledge to address real user needs. Furthermore, this thesis argues for the importance of semi-supervised and unsupervised machine learning models that circumvent the ever-present challenge of scarce annotations and thus allow for model generalization in space and time. Specifically, it is shown how only a few ground-truth data points are needed to generate high-quality crop type maps and crop phenology estimations. Finally, this thesis argues that there is considerable distance in value between model inferences and decision making in real-world scenarios, and thereby showcases the power of causal and interpretable machine learning in bridging this gap.
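
    As a minimal sketch of the few-labels idea described above (not the thesis pipeline), the snippet below trains a crop-type classifier from a handful of labelled samples by letting scikit-learn's self-training wrapper pseudo-label the unlabelled majority; the synthetic features, class count, and label budget are illustrative assumptions.

```python
# Minimal sketch of semi-supervised crop-type classification with very few
# labels. The synthetic "pixel time series" features are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.datasets import make_classification

# Pretend features: per-pixel time-series statistics for 3 crop types
X, y_true = make_classification(n_samples=2000, n_features=12, n_informative=8,
                                n_classes=3, random_state=0)

# Keep labels for only 30 "ground truth" pixels; mark the rest unlabelled (-1)
rng = np.random.default_rng(0)
y = np.full(len(y_true), -1)
labelled = rng.choice(len(y_true), size=30, replace=False)
y[labelled] = y_true[labelled]

# Self-training: the base classifier iteratively adds its most confident
# predictions on unlabelled pixels to the training set
model = SelfTrainingClassifier(RandomForestClassifier(random_state=0),
                               threshold=0.8)
model.fit(X, y)

# Accuracy against the held-back labels of the unlabelled pixels
mask = y == -1
print("accuracy on unlabelled pixels:",
      (model.predict(X[mask]) == y_true[mask]).mean())
```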