The calibration and evaluation of speed-dependent automatic zooming interfaces.
Speed-Dependent Automatic Zooming (SDAZ) is an exciting new navigation technique that couples the user's rate of motion through an information space with the zoom level. The faster a user scrolls in the document, the 'higher' they fly above the work surface. At present, there are few guidelines for the calibration of SDAZ. Previous work by Igarashi & Hinckley (2000) and Cockburn & Savage (2003) fails to give values for predefined constants governing their automatic zooming behaviour. The absence of formal guidelines means that SDAZ implementers are forced to adjust the properties of the automatic zooming by trial and error.
This thesis aids calibration by identifying the low-level components of SDAZ. Base calibration settings for these components are then established using a formal evaluation recording participants' comfortable scrolling rates at different magnification levels.
To ease our experiments with SDAZ calibration, we implemented a new system that provides a comprehensive graphical user interface for customising SDAZ behaviour. The system was designed to simplify future extensions: new components, such as interaction techniques and methods for rendering information, can be added with little modification to existing code. This system was used to configure three SDAZ interfaces: a text document browser, a flat map browser and a multi-scale globe browser.
The three calibrated SDAZ interfaces were evaluated against three equivalent interfaces with rate-based scrolling and manual zooming. The evaluation showed that SDAZ is 10% faster than rate-based scrolling with manual zooming for acquiring targets in a map, and 4% faster for acquiring targets in a text document. Participants also preferred automatic zooming over manual zooming. For the globe browser, no difference was found in either acquisition time or preference. However, across all interfaces participants commented that automatic zooming was less physically and mentally draining than manual zooming.
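As a rough illustration of the speed-zoom coupling at the core of SDAZ, the sketch below maps scroll speed to a zoom level clamped between two extremes. All names and constants here are illustrative assumptions, not the calibrated values the thesis establishes.

```python
# Minimal sketch of an SDAZ speed-zoom coupling.
# v_min, v_max and zoom_max are assumed placeholder constants.

def sdaz_zoom(scroll_speed, v_min=100.0, v_max=2000.0, zoom_max=8.0):
    """Map scroll speed (document pixels/second) to a zoom-out factor.

    Below v_min the view stays at 1:1; between v_min and v_max the view
    zooms out linearly with speed; above v_max zoom is clamped at zoom_max.
    """
    if scroll_speed <= v_min:
        return 1.0
    if scroll_speed >= v_max:
        return zoom_max
    t = (scroll_speed - v_min) / (v_max - v_min)
    return 1.0 + t * (zoom_max - 1.0)

# Example: halfway between the speed limits, the view is zoomed out 4.5x.
print(sdaz_zoom(1050.0))  # 4.5
```

Calibration in this framing amounts to choosing the constants (and possibly a non-linear mapping), which is precisely what the thesis derives from participants' comfortable scrolling rates.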
Megaphylogeny resolves global patterns of mushroom evolution
Mushroom-forming fungi (Agaricomycetes) have the greatest morphological diversity and complexity of any group of fungi. They have radiated into most niches and fulfil diverse ecosystem roles, including as wood decomposers, pathogens and mycorrhizal mutualists. Despite the importance of mushroom-forming fungi, large-scale patterns of their evolutionary history are poorly known, in part due to the lack of a comprehensive and dated molecular phylogeny. Here, using multigene and genome-based data, we assemble a 5,284-species phylogenetic tree and infer ages and broad patterns of speciation/extinction and morphological innovation in mushroom-forming fungi. Agaricomycetes started a rapid class-wide radiation in the Jurassic, coinciding with the spread of (sub)tropical coniferous forests and a warming climate. A possible mass extinction, several clade-specific adaptive radiations and morphological diversification of fruiting bodies followed during the Cretaceous and the Paleogene, convergently giving rise to the classic toadstool morphology, with a cap, stalk and gills (pileate-stipitate morphology). This morphology is associated with increased rates of lineage diversification, suggesting that it represents a key innovation in the evolution of mushroom-forming fungi. The increase in mushroom diversity started during the Mesozoic-Cenozoic radiation event, an era of humid climate when terrestrial communities dominated by gymnosperms and reptiles were also expanding.
Neocortical Axon Arbors Trade-off Material and Conduction Delay Conservation
The brain contains a complex network of axons rapidly communicating information between billions of synaptically connected neurons. The morphology of individual axons, therefore, defines the course of information flow within the brain. More than a century ago, Ramón y Cajal proposed that conservation laws to save material (wire) length and limit conduction delay regulate the design of individual axon arbors in cerebral cortex. Yet the spatial and temporal communication costs of single neocortical axons remain undefined. Here, using reconstructions of in vivo labelled excitatory spiny cell and inhibitory basket cell intracortical axons combined with a variety of graph optimization algorithms, we empirically investigated Cajal's conservation laws in cerebral cortex for whole three-dimensional (3D) axon arbors, to our knowledge the first study of its kind. We found intracortical axons were significantly longer than optimal. The temporal cost of cortical axons was also suboptimal, though far superior to that of wire-minimized arbors. We discovered that cortical axon branching appears to promote a low temporal dispersion of axonal latencies and a tight relationship between cortical distance and axonal latency. In addition, inhibitory basket cell axonal latencies may occur within a much narrower temporal window than excitatory spiny cell axons, which may help boost signal detection. Thus, to optimize neuronal network communication, we find that a modest excess of axonal wire is traded off to enhance arbor temporal economy and precision. Our results offer insight into the principles of brain organization and communication in, and development of, grey matter, where temporal precision is a crucial prerequisite for coincidence detection, synchronization and rapid network oscillations.
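A hedged sketch of the kind of wire-optimality comparison the abstract describes: the total cable length of an arbor is compared against the minimum spanning tree over the same branch points, i.e. a wire-minimized arbor. The coordinates and the chain topology below are synthetic stand-ins, not the reconstructed axons used in the study.

```python
# Compare an arbor's total wire length to a wire-minimized (MST) arbor.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(20, 3))  # synthetic 3D branch points (um)

# "Actual" arbor: approximated here as a chain visiting the points in order,
# standing in for a reconstructed axon's branch topology.
chain_length = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

# Wire-minimized arbor: minimum spanning tree of the complete Euclidean graph.
dense = squareform(pdist(points))
mst_length = minimum_spanning_tree(dense).sum()

print(f"arbor wire: {chain_length:.1f} um, MST wire: {mst_length:.1f} um, "
      f"excess: {100 * (chain_length / mst_length - 1):.0f}%")
```

The paper's finding that real axons carry a modest wire excess corresponds to the "excess" figure here being consistently above zero, with the surplus buying lower and tighter conduction delays than an MST-like arbor would allow.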
Capturing Parallel Performance Dynamics
Supercomputers play a key role in countless areas of science and engineering, enabling the development of new insights and technological advances never possible before. The strategic importance and ever-growing complexity of using supercomputing resources efficiently make application performance analysis invaluable for the development of parallel codes. Runtime call-path profiling is a conventional, well-known method for collecting summary statistics of an execution, such as the time spent in different call paths of the code. However, such measurements only give the user a summary overview of the entire execution, without regard to changes in performance behavior over time. The possible causes of temporal changes are numerous, ranging from adaptive workload balancing through periodically executed extra work or distinct computational phases to system noise. As present-day scientific applications tend to be run for extended periods of time, understanding the patterns and trends in the performance data along the time axis becomes crucial.

A straightforward approach is profiling every iteration of the main loop separately. As shown by our analysis of a representative set of scientific codes, such measurements provide a wealth of new data that often leads to invaluable new insights. However, the introduction of the time dimension makes the amount of data collected proportional to the number of iterations, and memory usage and file sizes grow considerably. To counter this problem, a low-overhead online compression algorithm was developed that requires only a fraction of the memory and file sizes needed for an uncompressed measurement. By exploiting similarities between different iterations, the lossy compression algorithm allows all the relevant temporal patterns of the performance behavior to be reconstructed.

While standard direct instrumentation, which is assumed by the initial version of the compression algorithm, results in fairly low overhead with many scientific codes, in some cases the high frequency of events (e.g., tiny C++ member function calls) makes such measurements impractical. To overcome this problem, a sampling-based methodology can be used instead, where the measurement overhead becomes a function of the sampling frequency, independent of the function-call frequency. However, sampling alone is insufficient for our purposes, as it does not provide access to the communication metrics the compression algorithm heavily depends on. Therefore, a hybrid solution was developed that seamlessly integrates both types of measurement techniques in a single unified measurement, using direct instrumentation for message-passing constructs while sampling the rest of the code. Finally, the compression algorithm was adapted to the hybrid profiling approach, avoiding the overhead of pure direct instrumentation.

Evaluation of the above methodologies shows that our semantics-based compression algorithm provides a very good approximation of the original data with very little measurement dilation, while the hybrid combination of sampling and direct instrumentation fulfills its purpose by showing the expected reduction of measurement dilation in cases unsuitable for direct instrumentation. Beyond testing with standardized benchmark suites, the usefulness of these techniques was demonstrated by their key role in gaining important new insights into the performance characteristics of real-world applications.
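To make the compression idea concrete, here is a minimal sketch (not the thesis implementation) of an online, lossy merge of per-iteration profiles: iterations whose profiles look alike share one representative, so storage grows with the number of distinct behaviours rather than with the iteration count. The class, threshold, and similarity measure are all assumptions for illustration.

```python
# Online lossy compression of per-iteration call-path profiles (sketch).
import numpy as np

class IterationCompressor:
    def __init__(self, threshold=0.05):
        self.threshold = threshold   # max relative distance to merge
        self.representatives = []    # list of (mean profile, #iterations merged)

    def add(self, profile):
        """profile: vector of per-call-path metrics for one iteration."""
        profile = np.asarray(profile, dtype=float)
        for i, (rep, count) in enumerate(self.representatives):
            dist = np.linalg.norm(profile - rep) / (np.linalg.norm(rep) + 1e-12)
            if dist < self.threshold:
                # Lossy merge: keep a running mean of similar iterations.
                new_rep = (rep * count + profile) / (count + 1)
                self.representatives[i] = (new_rep, count + 1)
                return
        self.representatives.append((profile, 1))

comp = IterationCompressor()
for it in range(1000):
    # Mostly steady behaviour, with extra work every 100th iteration.
    base = np.array([10.0, 5.0, 1.0]) * (2.0 if it % 100 == 0 else 1.0)
    comp.add(base + np.random.rand(3) * 0.1)
print(len(comp.representatives), "representatives for 1000 iterations")
```

A real implementation additionally has to preserve enough per-iteration metadata (and the communication metrics mentioned above) to reconstruct temporal patterns, which is why pure sampling without instrumented message-passing events is insufficient for it.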
A workflow for holistic performance system analysis
This document describes the performance-analysis workflow defined in Task 3.2 of Work Package 3 of the EU FP7 project HOPSA. The HOPSA project (HOlistic Performance System Analysis) sets out for the first time to develop an integrated diagnostic infrastructure for combined application and system tuning. The document guides application developers in the process of tuning and optimising their codes for performance. It describes which tools should be used in which order to accomplish common performance-analysis tasks. Since the document addresses primarily the user's perspective, it follows the style of a user guide. It does not, however, replace the user guides of the individual performance-analysis tools developed in HOPSA, but rather connects them by showing how to use the tools in a complementary way. At the centre of this document is the so-called lightweight measurement module (LWM2). Being responsible for the first step in the workflow, the system-wide mandatory collection of basic performance data, the module is covered in greater detail. Special emphasis is given to the interpretation of the job digest created with the help of LWM2. The metrics listed in this compact report indicate whether an application suffers from an inherent performance problem or whether application interference may have been at the root of dissatisfactory behaviour. They also provide a first assessment regarding the nature of a potential performance problem and help to decide on further diagnostic steps using any of the more powerful performance-analysis tools. For each of those tools, a short summary is given with information on the most important questions it can help to answer. Moreover, the document covers Score-P, a common measurement infrastructure shared by some of the tools. The performance data types supported by Score-P form a natural refinement hierarchy that can be followed to track down and represent even complex bottleneck situations at increasing levels of granularity. Finally, a brief excursion on system tuning explains how system providers can leverage the data collected by LWM2 to identify a suboptimal system configuration or faulty components.
Results of the BAMM analysis
We used BAMM 2.5.0 (Bayesian Analysis of Macroevolutionary Mixtures) to examine rate heterogeneity across lineages and detect shifts in diversification rates. We analyzed 10 chronograms and ran MCMC analyses for 100 million generations, using four independent chains per analysis with 50 million generations as burn-in. Prior parameters were optimized using the setBAMMpriors function in BAMMtools 2.1.6, except for the prior on the expected number of shifts, which was set to 270 based on preliminary runs.
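For orientation, below is a sketch of how these settings might map onto a BAMM control file. The tree file name, the write frequency, and the reading of "four independent chains" as BAMM's Metropolis-coupled numberOfChains setting are assumptions, not taken from the text.

```
# Sketch of the relevant divcontrol entries for a run like the one above.
treefile = chronogram_1.tre          # assumed file name
numberOfGenerations = 100000000      # 100 million generations
numberOfChains = 4                   # assumed mapping of "four chains"
expectedNumberOfShifts = 270         # prior set from preliminary runs
mcmcWriteFreq = 10000                # assumed value
# Remaining rate priors (e.g., lambdaInitPrior, lambdaShiftPrior,
# muInitPrior) were taken from the output of BAMMtools' setBAMMpriors().
```

The 50-million-generation burn-in is typically applied afterwards when the event data are read back for analysis, rather than in the control file itself.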
Input files and the results of the molecular clock analysis (PhyloBayes & FastDate) of the 5,284 taxa data set
There are three sub-directories: PhyloBayes_input, PhyloBayes_output and FastDate_analysis. The whole analysis started with PhyloBayes.

The PhyloBayes input files are the following:
fb_align_*.phy: 10 alignments, each containing 543 taxa after randomly deleting ~90% of the tips from the 5284taxa dataset.
fb_calib_*.cal: 10 files containing the species pairs that define the constraints on the MRCA.
fb_tree_*.tre: 10 phylogenies, each containing 543 taxa after randomly deleting ~90% of the tips from the 5284taxa dataset.

PhyloBayes analyses were run using the 10% subsampled dataset, a birth-death prior on divergence times, an uncorrelated gamma multiplier relaxed clock model and a CAT-Poisson substitution model with a gamma distribution on the rates across sites. A uniformly distributed prior was applied to the fossil calibration times. All analyses were run until convergence, typically 15,000 cycles. Convergence of the chains was assessed by visually inspecting the likelihood values of the trees and the tree height parameter. We sampled every tree from the posterior and, after discarding the first 7,000 samples as burn-in, summarized the posterior estimates using the readdiv function of PhyloBayes. The results can be found in the PhyloBayes_output directory.

The directory FastDate_analysis contains the following input files:
calib_final_tree_*_.cal: 10 files containing the species pairs that define the constraints on the MRCA. FastDate was run on the complete trees (5,284 species) with the node ages constrained to the values of the 95% highest posterior densities of the ages inferred by PhyloBayes.
tree_original_*.tree2: 10 phylogenies containing 5,284 taxa; these trees came from the 5284taxa ML analysis.

FastDate analyses were run with time discretized into 1,000 intervals and the ratio of sampled extant individuals set to 0.14.

The output files are the following:
fastdate_kronogram_*.tree: 10 chronograms inferred by the FastDate analysis.
transform_to_ultrametric_script.R: an R script that transforms the trees to ultrametric. Because of rounding issues, a negligible length was added to some of the tips to achieve ultrametric trees.
fastdate_kronogram_*.tree2: the 10 chronograms used in further analyses.
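The R script itself is not reproduced in this description; as a rough stand-in for the tip-padding it is described as performing, here is a minimal Python sketch. The parent-array tree encoding and all names are assumptions for illustration only.

```python
# Pad terminal branches so every tip reaches the same depth (ultrametric).

def make_ultrametric(parent, edge_len):
    """parent[i] is the parent of node i (-1 for the root);
    edge_len[i] is the length of the edge above node i.
    Nodes are assumed to be numbered so that parents precede children."""
    n = len(parent)
    depth = [0.0] * n
    for i in range(n):
        depth[i] = edge_len[i] + (depth[parent[i]] if parent[i] >= 0 else 0.0)
    tips = [i for i in range(n) if i not in set(parent)]
    max_depth = max(depth[t] for t in tips)
    for t in tips:
        # Add the (negligible) rounding shortfall to each terminal edge.
        edge_len[t] += max_depth - depth[t]
    return edge_len

# Tiny example: root (node 0) with two tips; tip 2 is 1e-6 too shallow.
parent  = [-1, 0, 0]
lengths = [0.0, 1.0, 1.0 - 1e-6]
print(make_ultrametric(parent, lengths))  # both tip edges now reach depth 1.0
```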