
    Refining Greenland geothermal heat flux through stable isotope analysis

    Geothermal heat flux is an important control on the dynamics of glaciers and ice sheets. In Greenland, however, only a few direct observations of geothermal heat flux exist, so the exact spatial distribution and magnitude of heat flux in Greenland are largely unknown. Many studies have attempted to constrain heat flux in Greenland indirectly, either by modelling it from other observable variables, such as the seismic and magnetic structure of the Greenland lithosphere, or through techniques that extrapolate the existing measurements onto models of the Greenland lithology. Various estimates of Greenland heat flux have been produced this way; however, many do not agree well with each other and show large inter-estimate variability in both the magnitude and the spatial distribution of estimated heat flux values. The stable isotope composition of basal meltwater has not previously been considered in efforts to constrain Greenland geothermal heat flux. The ice layers in the Greenland ice sheet show large differences in δ18O values resulting from changes in climate throughout their depositional history. If different ice layers are in contact with the bed, then spatial differences in geothermal heat flux will affect the local melt rates these layers experience at the ice sheet base and hence modulate the amount of meltwater each layer contributes to the subglacial drainage system. If the δ18O values of the melting ice layers are sufficiently different, the isotopic composition of the mixed meltwater that flows through the subglacial hydrological system will differ for different spatial distributions of geothermal heat flux. By simulating basal meltwater production in Greenland based on different published estimates of Greenland geothermal heat flux, I show in this thesis that different heat fluxes result in differences in the age distribution of the basal ice. In particular, the presence and extent of Eemian ice in central northern Greenland differs substantially between heat flux estimates. As Eemian ice, being interglacial ice, shows higher δ18O values than ice from the last glacial period, the modelled differences in Eemian extent result in detectable differences in the isotopic composition of the basal meltwater in North-east Greenland on the order of a few permille. The stable isotope composition of basal meltwater might thus have the potential to contribute to the discussion about a heat flux hotspot in central northern Greenland.
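
    The core of the argument is a simple isotopic mass balance: the δ18O of the mixed subglacial meltwater is the melt-rate-weighted mean of the δ18O of the ice layers melting at the bed. A minimal sketch of that calculation, with purely illustrative melt rates and δ18O values rather than the thesis's modelled numbers:

```python
def mixed_delta18o(melt_rates, delta18o_values):
    """Melt-rate-weighted mean delta-18O of the combined basal meltwater."""
    total_melt = sum(melt_rates)
    return sum(m * d for m, d in zip(melt_rates, delta18o_values)) / total_melt

# Two hypothetical heat-flux scenarios over the same two basal ice layers:
# depleted glacial ice and less-depleted Eemian interglacial ice.
glacial, eemian = -42.0, -32.0  # permille, illustrative values only

# Low heat flux: the Eemian layer barely melts.
print(mixed_delta18o([1.0, 0.1], [glacial, eemian]))  # about -41.1 permille
# Hotspot: the Eemian layer melts at a comparable rate.
print(mixed_delta18o([1.0, 0.8], [glacial, eemian]))  # about -37.6 permille
```

    The roughly 3.5 permille contrast between the two illustrative scenarios matches the order of magnitude the abstract reports for North-east Greenland.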

    A complete design path for the layout of flexible macros

    XIV+172 pp.; 24 cm

    Adaptive Routing Approaches for Networked Many-Core Systems

    Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high-performance designs. The Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges of highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes provide the possibility of designing a high-performance system in a limited chip area; the major advantages of 3D NoCs are considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support. We address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption, so we propose three different approaches for alleviating congestion in the network. The first approach is based on measuring the congestion information in different regions of the network, distributing that information over the network, and utilizing it when making routing decisions. The second approach employs a learning method to dynamically find less congested routes under the prevailing traffic. The third approach is based on a fuzzy-logic technique that makes better routing decisions when traffic information for different routes is available. Faults also degrade performance significantly, as packets must take longer paths to be routed around them, which in turn increases congestion around the faulty regions. We propose four methods to tolerate faults at the link and switch level by using only shortest paths as long as such a path exists. The unique characteristic of these methods is that they tolerate faults while maintaining the performance of the NoC. To the best of our knowledge, these algorithms are the first approaches that bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication cause a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be routed efficiently within the network; while providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with analytical models of their latency. This approach is discussed in the context of 3D mesh networks.
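
    As a concrete illustration of the first congestion-aware approach, consider minimal adaptive routing in a 2D mesh: among the output ports that keep a packet on a shortest path to its destination, the router picks the one with the lowest measured congestion. This is a generic sketch of the idea, not the thesis's implementation; the port names and the congestion metric are assumptions:

```python
def minimal_ports(cur, dst):
    """Output ports that keep the packet on a shortest (minimal) path."""
    (cx, cy), (dx, dy) = cur, dst
    ports = []
    if dx != cx:
        ports.append('E' if dx > cx else 'W')
    if dy != cy:
        ports.append('N' if dy > cy else 'S')
    return ports

def route(cur, dst, congestion):
    """congestion: dict mapping port -> a congestion measure for the
    neighbour in that direction (e.g., gathered from propagated regional
    congestion information)."""
    candidates = minimal_ports(cur, dst)
    if not candidates:
        return None  # packet has arrived
    return min(candidates, key=lambda p: congestion[p])

# Two minimal ports exist; the east neighbour is more congested than north.
print(route((1, 1), (3, 3), {'E': 7, 'N': 2, 'W': 0, 'S': 5}))  # -> 'N'
```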

    Visualized Algorithm Engineering on Two Graph Partitioning Problems

    Concepts of graph theory are frequently used by computer scientists as abstractions when modeling a problem. Partitioning a graph (or a network) into smaller parts is one of the fundamental algorithmic operations and plays a key role in classifying and clustering. Since the early 1970s, graph partitioning has rapidly expanded into a wide range of application areas, in engineering as well as in research. Current technology generates massive data (“Big Data”) from business interactions and social exchanges, so high-performance graph-partitioning algorithms are a critical need. This dissertation presents engineering models for two graph partitioning problems arising from completely different applications: computer networks and arithmetic. The design, analysis, implementation, optimization, and experimental evaluation of these models employ visualization in all aspects. Visualization indicates the performance of each implementation and also helps to analyze and explore new algorithms for solving the problems. We term this research method “Visualized Algorithm Engineering” (VAE) to emphasize the contribution of the visualizations to these works. The techniques discussed here apply to a broad range of problems: computer networks, social networks, arithmetic, computer graphics, and software engineering. Common terminology accepted across these disciplines is used throughout this dissertation so that practitioners from all fields can understand the concepts we introduce.
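
    For readers outside the field, the basic operation being engineered can be stated compactly: split the vertex set in two and minimise the number of edges crossing the cut. The sketch below is a generic, simplified Kernighan-Lin-style swap pass, not either of the dissertation's two specific problems:

```python
def cut_size(edges, part):
    """Number of edges crossing the two parts; part maps vertex -> 0 or 1."""
    return sum(1 for u, v in edges if part[u] != part[v])

def swap_refine(edges, part):
    """One greedy pass of KL-style pair swaps: swap a cross-part vertex pair
    whenever doing so reduces the cut (swaps keep the parts balanced)."""
    verts = list(part)
    for u in verts:
        for v in verts:
            if part[u] == part[v]:
                continue
            before = cut_size(edges, part)
            part[u], part[v] = part[v], part[u]
            if cut_size(edges, part) >= before:
                part[u], part[v] = part[v], part[u]  # undo: no improvement

# Two triangles joined by a bridge; a poor initial bisection cuts 5 edges.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
swap_refine(edges, part)
print(cut_size(edges, part))  # 1: only the bridge edge (2, 3) remains cut
```

    Production partitioners operate on massive graphs with far more sophisticated heuristics; the sketch only fixes the vocabulary of cuts and refinement that the dissertation builds on.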

    Adaptive remote visualization system with optimized network performance for large scale scientific data

    This dissertation discusses algorithmic and implementation aspects of an automatically configurable remote visualization system, which optimally decomposes and adaptively maps the visualization pipeline to a wide-area network. The first node typically serves as a data server that generates or stores raw data sets, and a remote client resides on the last node, equipped with a display device ranging from a personal desktop to a powerwall. Intermediate nodes can be located anywhere on the network and often include workstations, clusters, or custom rendering engines. We employ a regression-model-based network daemon to estimate the effective bandwidth and minimal delay of a transport path using active traffic measurement. Data processing time is predicted for various visualization algorithms using block partitioning and statistical techniques. Based on the link measurements, node characteristics, and module properties, we strategically organize visualization pipeline modules such as filtering, geometry generation, rendering, and display into groups, and dynamically assign them to appropriate network nodes to achieve minimal total delay for post-processing or maximal frame rate for streaming applications. We propose polynomial-time algorithms using dynamic programming to compute optimal solutions to the problems of pipeline decomposition and network mapping under different constraints. A parallel remote visualization system, comprising a logical group of autonomous nodes that cooperate to enable sharing, selection, and aggregation of various types of resources distributed over a network, is implemented and deployed at geographically distributed nodes for experimental testing. Our system is capable of handling a complete spectrum of remote visualization tasks, including post-processing, computational steering, and wireless sensor network monitoring. Visualization functionalities such as isosurface extraction, ray casting, streamlines, and line integral convolution (LIC) are supported in our system. The proposed decomposition and mapping scheme is generic and can be applied to other network-oriented computing applications whose components form a linear arrangement.
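
    The pipeline-decomposition problem lends itself to a compact dynamic program. The sketch below is a simplified formulation of that idea, assuming a linear chain of nodes with modules grouped onto nodes in order; the function name, cost model, and all numbers are illustrative, not the dissertation's exact algorithm:

```python
def min_total_delay(compute, out_size, bandwidth):
    """compute[i][j]: predicted time of module i on node j; out_size[i]:
    output volume of module i; bandwidth[l]: measured effective bandwidth
    of the link between nodes l and l+1. Data only moves forward."""
    m, n = len(compute), len(compute[0])
    INF = float('inf')
    # dp[j]: best total delay with the current module placed on node j.
    dp = [compute[0][0] if j == 0 else INF for j in range(n)]  # module 0 on the data server
    for i in range(1, m):
        new = [INF] * n
        for j in range(n):
            best = dp[j]                    # stay on module i-1's node
            hop = 0.0
            for k in range(j - 1, -1, -1):  # or forward data from node k < j
                hop += out_size[i - 1] / bandwidth[k]
                best = min(best, dp[k] + hop)
            new[j] = best + compute[i][j]
        dp = new
    return dp[n - 1]  # the last module must run on the client (display) node

# Three modules (filter, render, display) over a server-client pair.
compute = [[1.0, 1.0], [4.0, 2.0], [3.0, 0.5]]  # seconds, illustrative
out_size = [10.0, 2.0, 1.0]                     # MB emitted by each module
bandwidth = [5.0]                               # MB/s on the single link
print(min_total_delay(compute, out_size, bandwidth))  # 5.5: render on the client
```

    Each of the m modules scans O(n^2) node pairs, giving the polynomial-time behaviour the abstract refers to; the dissertation's algorithms additionally handle the different constraints mentioned above.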

    Debris-flow erosion and deposition dynamics

    Debris flows are a major natural hazard in mountain regions worldwide because of their destructive potential. Prediction of their occurrence, magnitude, and travel distance is still a scientific challenge, and thus research into the mechanics of debris flows is still needed. Poor understanding of the processes of erosion and deposition is partly responsible for the difficulties in predicting debris-flow magnitude and travel distance. Even less is known about the long-term evolution of debris-flow fans, because the sequential effects of debris-flow erosion and deposition over thousands of flows are poorly documented, and hence models to simulate debris-flow fans do not exist. Here I address the dynamics of erosion and deposition in single flows and over multiple flows on debris-flow fans by terrain analysis, channel monitoring, and fan evolution modeling. I documented erosion and deposition dynamics of debris flows at fan scale using the Illgraben debris-flow fan, Switzerland, as an example. Debris-flow activity over the past three millennia in the Illgraben catchment in south-western Switzerland was documented by geomorphic mapping, radiocarbon dating of wood, and cosmogenic exposure dating of deposits. In this specific case I also documented the disturbance induced by two rock avalanches in the catchment, which resulted in distinct patterns of deposition on the fan surface. Implications of human intervention and the significance of autogenic forcing of the fan system are also discussed. Quantifying and understanding erosion and deposition dynamics in debris flows at channel scale hinges on the ability to detect surface change, a fundamental task in geomorphology in general. Terrestrial laser scanners are increasingly used to monitor surface change down to the centimeter scale for a variety of geomorphic processes, as they allow the rapid generation of high-resolution digital elevation models. In this thesis, procedures were developed to measure surface change in complex topography such as a debris-flow channel, and high-resolution digital elevation models were generated from these data. Laser scanning data, however, contain ambiguous elevation information originating from point-cloud matching, surface roughness, and erroneous measurements. This affects the ability to detect change and results in spatially variable uncertainties. I therefore developed techniques to visualize and quantify these uncertainties for the specific application of change detection, and demonstrated that the use of data filters (e.g. a minimum-height filter) on laser scanner data introduces systematic bias into change detection. Measurement of debris-flow erosion and deposition in single events was performed at Illgraben, where multiple debris flows are recorded every year. I applied terrestrial laser scanning and flow hydrograph analysis to quantify erosion and deposition in a series of debris flows. Flow depth was identified as an important control on the pattern and magnitude of erosion, whereas deposition is governed more by the geometry of flow margins. The relationship between flow depth and erosion is visible both at the reach scale and at the scale of the entire fan. Maximum flow depth is a function of debris-flow front discharge and pre-flow channel cross-section geometry, and this dual control gives rise to complex interactions with implications for long-term channel stability, the use of fan stratigraphy for reconstructing past debris-flow regimes, and the predictability of debris-flow hazards. Debris-flow fan evolution on time scales of decades to tens of thousands of years is poorly understood, because the cumulative effects of erosion and deposition in subsequent events are rarely well documented and suitable numerical models are lacking. Improving this understanding is crucial for assessing the role of autogenic (internal) and allogenic (external) forcing mechanisms in building debris-flow fans over long time scales; on short time scales, understanding fan evolution is important for debris-flow hazard assessment. I propose a 2D reduced-complexity model of debris-flow fan evolution. The model is built on a broad range of qualitative and empirical observations of debris-flow behaviour as well as on monitoring data acquired at Illgraben as part of this thesis. I have formulated a framework of rules that govern debris-flow behaviour and that allows efficient implementation in a numerical simulation. The model is shown to replicate the general behaviour of alluvial fans in nature and in flume experiments. Three applications demonstrate how fan evolution modeling may improve understanding of inundation patterns, surface age distribution, and surface morphology.
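
    The uncertainty-aware change detection described above reduces to a small, generic computation: difference the two digital elevation models and keep only the cells where the change exceeds the propagated, spatially variable uncertainty of the two surveys. A minimal sketch with synthetic numbers; the thesis's actual procedures for point-cloud matching and roughness handling are more involved:

```python
import numpy as np

def detect_change(dem_before, dem_after, sigma_before, sigma_after, k=1.96):
    """DEM of difference with cells below the spatially variable detection
    limit masked out; k = 1.96 gives a ~95% confidence threshold."""
    dod = dem_after - dem_before
    # Propagate the per-cell uncertainties of the two surveys in quadrature.
    limit = k * np.sqrt(sigma_before**2 + sigma_after**2)
    return np.where(np.abs(dod) > limit, dod, np.nan)

# Tiny synthetic example: 20 cm of erosion is detected where uncertainty is
# low, and masked where roughness/matching error makes the surveys uncertain.
before = np.array([[10.0, 10.0], [10.0, 10.0]])
after  = np.array([[ 9.8, 10.0], [ 9.8, 10.0]])
sig_b  = np.array([[0.02, 0.02], [0.15, 0.15]])
sig_a  = np.array([[0.02, 0.02], [0.15, 0.15]])
print(detect_change(before, after, sig_b, sig_a))
```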

    Simulation Of Multi-core Systems And Interconnections And Evaluation Of Fat-Mesh Networks

    Simulators are very important in computer architecture research, as they enable the exploration of new architectures and detailed performance evaluation without building costly physical hardware. Simulation is even more critical for studying future many-core architectures, as it provides the opportunity to assess computer systems that do not yet exist. In this thesis, a multiprocessor simulator is presented based on SESC, a cycle-accurate architecture simulator. The shared L2 cache system is extended into a distributed shared cache (DSC) with a directory-based cache coherency protocol. A mesh network module is extended and integrated into SESC to replace the bus for scalable inter-processor communication. With this extended multiprocessor simulation infrastructure in place, two interconnection enhancements are proposed and evaluated. First, a novel non-uniform fat-mesh network structure, similar in spirit to the fat-tree, is proposed. This non-uniform mesh network exploits the average traffic pattern, typically all-to-all in a DSC, to dedicate additional links to connections with heavy traffic (e.g., near the center) and fewer links to lighter traffic (e.g., near the periphery). Two fat-mesh schemes are implemented based on different routing algorithms. Analytical fat-mesh models are constructed by deriving expressions for the traffic requirements of personalized all-to-all traffic. Simulation results demonstrate performance improvements over the uniform mesh. Second, a hybrid network consisting of one packet-switching plane and multiple circuit-switching planes is constructed. The circuit-switching planes provide fast paths between neighbors with heavy communication traffic. A compiler technique that abstracts symbolic expressions of benchmarks' communication patterns can be used to facilitate circuit establishment.
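
    The analytical intuition behind the fat mesh can be shown with a back-of-the-envelope link-load count (a simplified model, not the thesis's expressions): for personalized all-to-all traffic under XY dimension-order routing in a k x k mesh, the load on a horizontal link between columns x and x+1 grows toward the center of the network:

```python
def horizontal_link_load(k, x):
    """Eastward flows on one X-link between columns x and x+1 of a k x k
    mesh under XY routing with personalized all-to-all traffic: there are
    (x+1)*(k-1-x)*k**2 source-destination pairs crossing this column cut,
    and each crosses in its source row, splitting evenly over the k links."""
    return (x + 1) * (k - 1 - x) * k

k = 8
print([horizontal_link_load(k, x) for x in range(k - 1)])
# [56, 96, 120, 128, 120, 96, 56]: central links carry ~2.3x the edge load,
# which is the case for extra (fat) links near the center and fewer at the
# periphery.
```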