38 research outputs found

    Optimal (Randomized) Parallel Algorithms in the Binary-Forking Model

    Full text link
    In this paper we develop optimal algorithms in the binary-forking model for a variety of fundamental problems, including sorting, semisorting, list ranking, tree contraction, range minima, and ordered set union, intersection, and difference. In the binary-forking model, tasks can only fork into two child tasks, but can do so recursively and asynchronously. The tasks share memory, supporting reads, writes, and test-and-sets. Costs are measured in terms of work (total number of instructions) and span (longest dependence chain). The binary-forking model is meant to capture both algorithm performance and algorithm-design considerations on many existing multithreaded languages, which are also asynchronous and rely on binary forks either explicitly or under the covers. In contrast to the widely studied PRAM model, it assumes neither arbitrary-way forks nor synchronous operations, both of which are hard to implement on modern hardware. While optimal PRAM algorithms are known for the problems studied herein, it turns out that arbitrary-way forking and strict synchronization are powerful, if unrealistic, capabilities. Natural simulations of these PRAM algorithms in the binary-forking model (i.e., implementations in existing parallel languages) incur an Ω(log n) overhead in span. This paper explores techniques for designing optimal algorithms when limited to binary forking and assuming asynchrony. All algorithms described in this paper are the first algorithms with optimal work and span in the binary-forking model. Most of the algorithms are simple. Many are randomized.
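    As a minimal illustrative sketch (not taken from the paper) of the fork/join structure the abstract describes, each task below forks into at most two child tasks, recursively and asynchronously; the example is a parallel array sum with O(n) work and O(log n) span. In CPython the GIL prevents real speedup, so this only shows the control structure, not performance.

```python
# Binary-forking sketch: every task forks into at most two children.
import threading

GRAIN = 1 << 12  # below this size, fall back to a sequential loop

def binary_fork_sum(a, lo, hi):
    """Sum a[lo:hi] by recursively forking into two child tasks."""
    if hi - lo <= GRAIN:
        return sum(a[lo:hi])
    mid = (lo + hi) // 2
    left_result = {}

    def left_task():
        left_result["value"] = binary_fork_sum(a, lo, mid)

    # Fork: the left half runs in a child task, the right half in this task.
    child = threading.Thread(target=left_task)
    child.start()
    right = binary_fork_sum(a, mid, hi)
    child.join()  # join waits for the forked child to finish
    return left_result["value"] + right

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(binary_fork_sum(data, 0, len(data)))  # 499999500000
```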

    Gap-filling eddy covariance methane fluxes: Comparison of machine learning model predictions and uncertainties at FLUXNET-CH4 wetlands

    Get PDF
    Time series of wetland methane fluxes measured by eddy covariance require gap-filling to estimate daily, seasonal, and annual emissions. Gap-filling methane fluxes is challenging because of high variability and complex responses to multiple drivers. To date, there is no widely established gap-filling standard for wetland methane fluxes, with regard both to the best model algorithms and to predictors. This study synthesizes results of different gap-filling methods systematically applied at 17 wetland sites spanning boreal to tropical regions and including all major wetland classes and two rice paddies. Procedures are proposed for: 1) creating realistic artificial gap scenarios, 2) training and evaluating gap-filling models without overstating performance, and 3) predicting half-hourly methane fluxes and annual emissions with realistic uncertainty estimates. Performance is compared between a conventional method (marginal distribution sampling) and four machine learning algorithms. The conventional method achieved median performance similar to that of the machine learning models, but was worse than the best machine learning models and relatively insensitive to predictor choices. Of the machine learning models, decision tree algorithms performed the best in cross-validation experiments, even with a baseline predictor set, and artificial neural networks showed comparable performance when using all predictors. Soil temperature was frequently the most important predictor, whilst water table depth was important at sites with substantial water table fluctuations, highlighting the value of data on wetland soil conditions. Raw gap-filling uncertainties from the machine learning models were underestimated, and we propose a method to calibrate uncertainties to observations. The Python code for model development, evaluation, and uncertainty estimation is publicly available. This study outlines a modular and robust machine learning workflow and makes recommendations for, and evaluates an improved baseline of, methane gap-filling models that can be implemented in multi-site syntheses or standardized products from regional and global flux networks (e.g., FLUXNET).
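    A rough sketch of the kind of decision-tree gap-filling the study compares is shown below. This is not the study's released code; the column names (FCH4 for the methane flux, TS for soil temperature, WTD for water table depth) are illustrative assumptions, and predictor columns are assumed to be gap-free.

```python
# Illustrative CH4 gap-filling sketch; not the published FLUXNET-CH4 workflow.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def gap_fill_ch4(df, target="FCH4", predictors=("TS", "WTD")):
    """Fill gaps in a half-hourly flux column using a decision-tree ensemble."""
    predictors = list(predictors)
    observed = df[target].notna()  # rows with measured fluxes
    model = RandomForestRegressor(n_estimators=300, min_samples_leaf=5,
                                  random_state=0)
    model.fit(df.loc[observed, predictors], df.loc[observed, target])
    filled = df[target].copy()
    gaps = ~observed
    filled[gaps] = model.predict(df.loc[gaps, predictors])  # predict the gaps
    return filled

# Usage (hypothetical file): df = pd.read_csv("site_halfhourly.csv")
#                            df["FCH4_filled"] = gap_fill_ch4(df)
```

    An evaluation in the spirit of the abstract would additionally withhold artificial gaps from the training data and compare the model's predictions against those held-out observations before trusting the filled series.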

    What is the Oxygen Isotope Composition of Venus? The Scientific Case for Sample Return from Earth’s “Sister” Planet

    Get PDF
    Venus is Earth’s closest planetary neighbour and both bodies are of similar size and mass. As a consequence, Venus is often described as Earth’s sister planet. But the two worlds have followed very different evolutionary paths, with Earth having benign surface conditions, whereas Venus has a surface temperature of 464 °C and a surface pressure of 92 bar. These inhospitable surface conditions may partially explain why there has been such a dearth of space missions to Venus in recent years.

    The oxygen isotope composition of Venus is currently unknown. However, this single measurement (Δ17O) would have first-order implications for our understanding of how large terrestrial planets are built. Recent isotopic studies indicate that the Solar System is bimodal in composition, divided into a carbonaceous chondrite (CC) group and a non-carbonaceous (NC) group. The CC group probably originated in the outer Solar System and the NC group in the inner Solar System. Venus comprises 41% by mass of the inner Solar System, compared to 50% for Earth and only 5% for Mars. Models for building large terrestrial planets, such as Earth and Venus, would be significantly improved by a determination of the Δ17O composition of a returned sample from Venus. This measurement would help constrain the extent of early inner Solar System isotopic homogenisation and help to identify whether the feeding zones of the terrestrial planets were narrow or wide.

    Determining the Δ17O composition of Venus would also have significant implications for our understanding of how the Moon formed. Recent lunar formation models invoke a high-energy impact between the proto-Earth and an inner Solar System-derived impactor body, Theia. The close isotopic similarity between the Earth and Moon is explained by these models as being a consequence of high-temperature, post-impact mixing. However, if Earth and Venus proved to be isotopic clones with respect to Δ17O, this would favour the classic, lower-energy, giant impact scenario.

    We review the surface geology of Venus with the aim of identifying potential terrains that could be targeted by a robotic sample return mission. While the potentially ancient tessera terrains would be of great scientific interest, the need to minimise the influence of venusian weathering favours the sampling of young basaltic plains. In terms of a nominal sample mass, 10 g would be sufficient to undertake a full range of geochemical, isotopic and dating studies. However, it is important that additional material is collected as a legacy sample. As a consequence, a returned sample mass of at least 100 g should be recovered.

    Two scenarios for robotic sample return missions from Venus are presented, based on previous mission proposals. The most cost-effective approach involves a “Grab and Go” strategy, either using a lander and separate orbiter, or possibly just a stand-alone lander. Sample return could also be achieved as part of a more ambitious, extended mission to study the venusian atmosphere. In both scenarios it is critical to obtain a surface atmospheric sample to define the extent of atmosphere-lithosphere oxygen isotopic disequilibrium. Surface sampling would be carried out by multiple techniques (drill, scoop, “vacuum-cleaner” device) to ensure success. Surface operations would take no longer than one hour.

    Analysis of returned samples would provide a firm basis for assessing similarities and differences between the evolution of Venus, Earth, Mars and smaller bodies such as Vesta. The Solar System provides an important case study in how two almost identical bodies, Earth and Venus, could have had such a divergent evolution. Finally, Venus, with its runaway greenhouse atmosphere, may provide data relevant to the understanding of similar, less extreme processes on Earth. Venus is Earth’s planetary twin and deserves to be better studied and understood. In a wider context, analysis of returned samples from Venus would provide data relevant to the study of exoplanetary systems.
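    For readers unfamiliar with the notation, Δ17O is conventionally defined as the deviation of a sample's oxygen isotope ratios from the terrestrial mass-dependent fractionation line. A commonly used linearised form is sketched below; the exact reference slope varies slightly between studies (roughly 0.52 to 0.528 depending on the definition adopted).

```latex
% Conventional (linearised) definition of the oxygen isotope anomaly.
% \delta^{17}O and \delta^{18}O are the per-mil deviations of the 17O/16O and
% 18O/16O ratios from a reference standard; \lambda is the slope of the
% terrestrial fractionation line (approx. 0.52--0.528, definition-dependent).
\Delta^{17}\mathrm{O} \;=\; \delta^{17}\mathrm{O} \;-\; \lambda\,\delta^{18}\mathrm{O},
\qquad \lambda \approx 0.52
```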

    A Hierarchy of Perceptual Training in Low Vision

    No full text
    A growing concern in low vision care is whether people with a visual impairment can adapt to their condition and relearn lost functional abilities. A purely sensory-physiological approach to this issue is restricted because 1) low vision patients often have less than what is normally assumed to be the basic sensory input necessary for many functional tasks (e.g. reading), and 2) in many cases, such an approach assumes a lack of plasticity past a critical period of acquisition. An alternative approach is that there is some useful plasticity, or ability to relearn, at all ages, even though these may differ quantitatively and/or qualitatively.

    Effect of magnification and field of view on reading speed using a CCTV

    No full text
    Reading speeds were measured in 18 subjects with normal vision and 10 with low vision for each of 20 experimental conditions with different magnifications and field sizes on the Closed Circuit Television System (CCTV). There was a significant difference between the results for the two groups of subjects. These results suggested that, for low-vision patients with faster reading speeds, advising the minimum magnification that gives the maximum field size on the CCTV would be valid. For low-vision patients who read more slowly, reading speed may improve at higher magnifications despite the reduced field size.