Simulations of hydrodynamic interactions among immersed particles in Stokes flow using a massively parallel computer
In this paper, a massively parallel implementation of the boundary element method to study particle transport in Stokes flow is discussed. The numerical algorithm couples the quasistatic Stokes equations for the fluid with kinematic and equilibrium equations for the particles. The formation and assembly of the discretized boundary element equations are based on the torus-wrap mapping, as opposed to the more traditional row- or column-wrap mappings. The equation set is solved using a block Jacobi iteration method. Results are shown for an example application problem, which requires solving a dense system of 6240 equations more than 1200 times.
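As an illustration of the solver named in this abstract, the following is a minimal serial sketch of a block Jacobi iteration for a dense linear system in Python with NumPy. It is not the paper's massively parallel boundary element code; the block size, test matrix, and tolerance are assumptions chosen only for demonstration.

```python
import numpy as np

def block_jacobi(A, b, block_size, tol=1e-8, max_iter=500):
    """Solve A x = b with a block Jacobi iteration.

    The matrix is partitioned into square diagonal blocks; each sweep
    solves the local block system using the off-block contributions
    evaluated at the previous iterate.
    """
    n = len(b)
    blocks = [slice(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = np.empty_like(x)
        for blk in blocks:
            # b_i minus contributions of all other blocks at the old iterate.
            rhs = b[blk] - A[blk, :] @ x + A[blk, blk] @ x[blk]
            x_new[blk] = np.linalg.solve(A[blk, blk], rhs)
        if np.linalg.norm(x_new - x) < tol * np.linalg.norm(b):
            return x_new
        x = x_new
    return x

# Small diagonally dominant toy system (the paper's systems are dense
# boundary-element matrices of order ~6240; this is only a sanity check).
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 60)) + 60 * np.eye(60)
b = rng.standard_normal(60)
x = block_jacobi(A, b, block_size=6)
print(np.allclose(A @ x, b, atol=1e-6))
```

In a parallel setting, each block solve is independent within a sweep, which is what makes the iteration attractive for distributing a dense system across many processors.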
NNSA ASC Exascale Environment Planning, Applications Working Group, Report February 2011
The scope of the Apps WG covers three areas of interest: Physics and Engineering Models (PEM), multi-physics Integrated Codes (IC), and Verification and Validation (V&V). Each places different demands on the exascale environment, and the exascale challenge will be to provide environments that optimize all three. PEM serve as a test bed both for model development and for 'best practices' in IC code development, as well as being used as standalone codes to improve scientific understanding. Rapidly achieving reasonable performance for a small team is the key to maintaining PEM innovation; thus, the environment must provide the ability to develop portable code at a higher level of abstraction, which can then be tuned as needed. PEM concentrate their computational footprint in one or a few kernels that must perform efficiently. Their comparative simplicity permits extreme optimization, so the environment must provide the ability to exercise significant control over the lower software and hardware levels. IC serve as the underlying software tools employed for most ASC problems of interest. Often coupling dozens of physics models into very large, very complex applications, ICs are usually the product of hundreds of staff-years of development, with lifetimes measured in decades. Thus, emphasis is placed on portability, maintainability, and overall performance, with optimization done on the whole rather than on individual parts. The exascale environment must provide a high-level standardized programming model with effective tools and mechanisms for fault detection and remediation. Finally, V&V addresses the infrastructure and methods to facilitate the assessment of code and model suitability for applications, together with uncertainty quantification (UQ) methods for the assessment and quantification of margins of uncertainty (QMU). V&V employs both PEM and IC, with somewhat differing goals, i.e., parameter studies and error assessments to determine both the quality of the calculation and the expected deviations of simulations from experiments. The exascale environment must provide a performance envelope suitable both for capacity calculations (high throughput) and for full-system capability runs (high performance). Analysis of the results places shared demands on both the I/O and the visualization subsystems.
Seahawk: moving beyond HTML in Web-based bioinformatics analysis
Background: Traditional HTML interfaces for input to and output from bioinformatics analyses on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analyses. Results: We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric, approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology or relevant services is required. In stark contrast to other MOBY-S clients, in Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements, which import existing user data into the MOBY-S format. Conclusion: As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer.
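The abstract describes Seahawk's data engine as built on XSLT style sheets, regular expressions, and XPath. The Java applet itself is not shown here; below is a minimal, hypothetical Python sketch using lxml that illustrates the general idea of pulling structured records out of user-supplied XML with XPath. The element names and record layout are invented for illustration and are not the actual MOBY-S schema.

```python
from lxml import etree

# Hypothetical sequence-record XML; the real Seahawk engine maps user data
# (Web pages, text files) into MOBY-S objects, which is not replicated here.
RECORDS_XML = """
<records>
  <record accession="NP_12345"><sequence>MKTAYIAKQR</sequence></record>
  <record accession="NP_67890"><sequence>MADEEKLPPG</sequence></record>
</records>
"""

def extract_sequences(xml_text):
    """Pull (accession, sequence) pairs out of an XML document with XPath."""
    root = etree.fromstring(xml_text.strip())
    results = []
    for rec in root.xpath("//record"):
        accession = rec.get("accession")
        sequence = rec.xpath("string(./sequence)").strip()
        results.append((accession, sequence))
    return results

for accession, sequence in extract_sequences(RECORDS_XML):
    print(accession, sequence)
```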
Quantification and analysis of icebergs in a tidewater glacier fjord using an object-based approach
Tidewater glaciers are glaciers that terminate in, and calve icebergs into, the ocean. In addition to the influence that tidewater glaciers have on physical and chemical oceanography, floating icebergs serve as habitat for marine animals such as harbor seals (Phoca vitulina richardii). The availability and spatial distribution of glacier ice in the fjords is likely a key environmental variable that influences the abundance and distribution of selected marine mammals; however, the amount of ice and the fine-scale characteristics of ice in fjords have not been systematically quantified. Given the predicted changes in glacier habitat, there is a need for the development of methods that could be broadly applied to quantify changes in available ice habitat in tidewater glacier fjords. We present a case study describing a novel method that uses object-based image analysis (OBIA) to classify floating glacier ice in a tidewater glacier fjord from high-resolution aerial digital imagery. Our objectives were to (i) develop workflows and rule sets to classify high-spatial-resolution airborne imagery of floating glacier ice; (ii) quantify the amount and fine-scale characteristics of floating glacier ice; and (iii) develop processes for automating the object-based analysis of floating glacier ice for a large number of images from a representative survey day during June 2007 in Johns Hopkins Inlet (JHI), a tidewater glacier fjord in Glacier Bay National Park, southeastern Alaska. On 18 June 2007, JHI was composed of brash ice (mean = 45.2%, SD = 41.5%), water (mean = 52.7%, SD = 42.3%), and icebergs (mean = 2.1%, SD = 1.4%). Average iceberg size per scene was 5.7 m² (SD = 2.6 m²). We estimate the total area (± uncertainty) of iceberg habitat in the fjord to be 455,400 ± 123,000 m². The method works well for classifying icebergs across scenes (classification accuracy of 75.6%); the largest classification errors occur in areas with densely packed ice, low contrast between neighboring ice cover, or dark or sediment-covered ice, where icebergs may be misclassified as brash ice about 20% of the time. OBIA is a powerful image classification tool, and the method we present could be adapted and applied to other ice habitats, such as sea ice, to assess changes in ice characteristics and availability.
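The per-class percent cover and total-area figures reported above are the kind of summary an OBIA workflow yields once a scene has been classified. As a rough sketch, assuming a classified raster with hypothetical class codes and pixel size (the study itself used object-based rule sets on high-resolution aerial imagery, not this code), a per-scene summary could look like the following.

```python
import numpy as np

# Hypothetical class codes and pixel footprint for a classified scene raster.
WATER, BRASH_ICE, ICEBERG = 0, 1, 2
PIXEL_AREA_M2 = 0.04  # assumed 0.2 m ground sample distance

def summarize_scene(classified, pixel_area=PIXEL_AREA_M2):
    """Percent cover and total area per class for one classified scene."""
    total = classified.size
    summary = {}
    for name, code in [("water", WATER), ("brash ice", BRASH_ICE), ("iceberg", ICEBERG)]:
        count = int(np.count_nonzero(classified == code))
        summary[name] = {
            "percent_cover": 100.0 * count / total,
            "area_m2": count * pixel_area,
        }
    return summary

# Toy 100 x 100 scene: mostly water, a patch of brash ice, one iceberg.
scene = np.zeros((100, 100), dtype=np.uint8)
scene[20:60, 10:70] = BRASH_ICE
scene[75:85, 75:90] = ICEBERG
print(summarize_scene(scene))
```

Aggregating such per-scene summaries across all scenes from a survey day, with an error term from the classification accuracy assessment, gives fjord-wide totals of the kind quoted in the abstract.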
Public attitudes to inequality in water distribution: Insights from preferences for water reallocation from irrigators to Aboriginal Australians
Water allocation regimes that adjudicate between competing uses are in many countries under pressure to adapt to increasing demands, climate‐driven shortages, expectations for equity of access, as well as societal changes in values and priorities. International authorities expound standards for national allocation regimes that include robust processes for addressing the needs of 'new entrants' and for varying existing entitlements within sustainable limits. The claims of Indigenous peoples to water represent a newly recognised set of rights and interests that will test the ability of allocation regimes to address the global water governance goal of equity. No study has sought to identify public attitudes or willingness to pay for a fairer allocation of water rights between Indigenous and non‐Indigenous people. We surveyed households from the jurisdictions of Australia's Murray‐Darling Basin, a region undergoing a historic government‐led recovery of water, and found that 69.2% of respondents support the principle of reallocating a small amount of water from irrigators to Aboriginal people via the water market. Using contingent valuation, we estimated that households are willing to pay A$74.5 million, which is almost double a recent government commitment to fund the acquisition of entitlements for Aboriginal nations of this basin. Results varied by state of residency and affinity with environmental groups. An information treatment that presented narrative accounts from Aboriginal people influenced the results. Insights from this study can inform water reallocation processes.
Automating Genomic Data Mining via a Sequence-based Matrix Format and Associative Rule Set
There is an enormous amount of information encoded in each genome – enough to create living, responsive and adaptive organisms. Raw sequence data alone is not enough to understand function, mechanisms or interactions. Changes in a single base pair can lead to disease, such as sickle-cell anemia, while some large megabase deletions have no apparent phenotypic effect. Genomic features are varied in their data types, and annotation of these features is spread across multiple databases. Herein, we develop a method to automate exploration of genomes by iteratively exploring sequence data for correlations and building upon them. First, to integrate and compare different annotation sources, a sequence matrix (SM) is developed to contain position-dependent information. Second, a classification tree is developed for matrix row types, specifying how each data type is to be treated with respect to other data types for analysis purposes. Third, correlative analyses are developed to analyze features of each matrix row in terms of the other rows, guided by the classification tree as to which analyses are appropriate. A prototype was developed and was successful in detecting coinciding genomic features among genes, exons, repetitive elements and CpG islands.
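The paper's sequence matrix (SM) format and classification tree are not spelled out in the abstract; the following is a minimal sketch, assuming a position-indexed boolean matrix with one row per annotation track and a simple pairwise co-occurrence count. The track names and intervals are hypothetical.

```python
import numpy as np

def build_sequence_matrix(seq_length, annotations):
    """Position-indexed annotation matrix: one boolean row per feature track.

    `annotations` maps a track name (e.g. "exon") to a list of
    (start, end) intervals, 0-based and end-exclusive.
    """
    tracks = sorted(annotations)
    matrix = np.zeros((len(tracks), seq_length), dtype=bool)
    for row, name in enumerate(tracks):
        for start, end in annotations[name]:
            matrix[row, start:end] = True
    return tracks, matrix

def cooccurrence(tracks, matrix):
    """Count positions at which each pair of tracks coincides."""
    counts = {}
    for i in range(len(tracks)):
        for j in range(i + 1, len(tracks)):
            counts[(tracks[i], tracks[j])] = int(np.sum(matrix[i] & matrix[j]))
    return counts

# Hypothetical 1 kb region with a gene, two exons, a repeat, and a CpG island.
tracks, sm = build_sequence_matrix(1000, {
    "gene": [(100, 900)],
    "exon": [(100, 250), (600, 900)],
    "repeat": [(300, 450)],
    "cpg_island": [(80, 220)],
})
print(cooccurrence(tracks, sm))
```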
Photometric and Spectroscopic Observations of SN 1990E in NGC 1035: Observational Constraints for Models of Type II Supernovae
We present 126 photometric and 30 spectral observations of SN 1990E spanning from 12 days before B maximum to 600 days past discovery. These observations show that SN 1990E was of type II-P, displaying hydrogen in its spectrum and the characteristic plateau in its light curve. SN 1990E is one of the few SNe II which has been well observed before maximum light, and we present evidence that this SN was discovered very soon after its explosion. In the earliest spectra we identify, for the first time, several N II lines. We present a new technique for measuring extinction to SNe II based on the evolution of absorption lines, and use this method to estimate the extinction to SN 1990E, A_V = 1.5 ± 0.3 mag. From our photometric data we have constructed a bolometric light curve for SN 1990E and show that, even at the earliest times, the bolometric luminosity was falling rapidly. We use the late-time bolometric light curve to show that SN 1990E trapped a majority of the gamma rays produced by the radioactive decay of 56Co, and estimate that SN 1990E ejected 0.073 M⊙ of 56Ni, an amount virtually identical to that of SN 1987A. [excerpt]
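The 56Ni mass estimate rests on comparing the late-time bolometric luminosity with the energy released by the 56Ni → 56Co → 56Fe decay chain under full gamma-ray trapping. A rough sketch of that inversion is below; the rate coefficients and e-folding times are standard textbook values quoted from memory and should be treated as assumptions, and the input luminosity is illustrative rather than the paper's measurement.

```python
import math

def decay_luminosity(t_days, m_ni_solar):
    """Approximate radioactive-decay luminosity (erg/s) for a given 56Ni mass.

    Assumes full gamma-ray trapping. The coefficients and e-folding times
    (~8.8 d for 56Ni, ~111.3 d for 56Co) are commonly used approximations,
    not values taken from the paper.
    """
    l_ni = 6.45e43 * math.exp(-t_days / 8.8)
    l_co = 1.45e43 * math.exp(-t_days / 111.3)
    return m_ni_solar * (l_ni + l_co)

def nickel_mass(l_bol, t_days):
    """Invert the late-time tail for the 56Ni mass from a measured luminosity."""
    return l_bol / decay_luminosity(t_days, 1.0)

# Illustrative epoch and bolometric luminosity (not SN 1990E's actual data).
print(f"{nickel_mass(4.0e40, 300):.3f} solar masses of 56Ni")
```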
Evaluation of a commercial web-based weight loss and weight loss maintenance program in overweight and obese adults: a randomized controlled trial
Background: Obesity rates in adults continue to rise, and effective treatment programs with a broad reach are urgently required. This paper describes the study protocol for a web-based randomized controlled trial (RCT) of a commercially available program for overweight and obese adult males and females. The aim of this RCT was to determine and compare the efficacy of two web-based interventions for weight loss and maintenance of lost weight. Methods/Design: Overweight and obese adult males and females were stratified by gender and BMI and randomly assigned to one of three groups for 12 weeks: waitlist control, basic online weight loss, or enhanced online weight loss. Control participants were re-randomized to the two weight loss groups at the end of the 12-week period. The basic and enhanced group participants had the option to continue or repeat the 12-week program. Participants were re-randomized to one of two online maintenance programs (maintenance basic or maintenance enhanced) either when the weight loss goal was achieved at the end of 12 weeks or otherwise on completion of 24 weeks of weight loss, and remained in the maintenance program until 18 months from commencing the weight loss program. Assessments took place at baseline and at three, six, and 18 months after commencing the initial weight loss intervention, with control participants repeating the initial assessment after three months of waiting. The primary outcome is body mass index (BMI). Other outcomes include weight, waist circumference, blood pressure, plasma markers of cardiovascular disease risk, dietary intake, eating behaviours, physical activity and quality of life. Both the weight loss and maintenance of lost weight programs were based on social cognitive theory, with participants advised to set goals and to self-monitor weight, dietary intake and physical activity levels. The enhanced weight loss and maintenance programs provided additional personalized, system-generated feedback on progress and use of the program. Details of the methodological aspects of recruitment, inclusion criteria, randomization, intervention programs, assessments and statistical analyses are described. Discussion: Importantly, this paper describes how an RCT of a currently available commercial online program in Australia addresses some of the shortfalls in the current literature pertaining to the efficacy of web-based weight loss programs. Trial registration: Australian New Zealand Clinical Trials Registry (ANZCTR) number ACTRN12610000197033.
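The protocol stratifies participants by gender and BMI before random assignment to the three 12-week arms. A minimal sketch of one way to do stratified, block-balanced allocation is shown below; the participant records are hypothetical and this is not the trial's actual randomization procedure.

```python
import random

ARMS = ["waitlist_control", "basic_online", "enhanced_online"]

def stratified_allocation(participants, seed=42):
    """Assign participants to arms within gender x BMI-category strata.

    Uses permuted blocks of size 3 (one of each arm) inside each stratum so
    the arms stay balanced; the trial's actual procedure may differ.
    """
    rng = random.Random(seed)
    strata = {}
    for p in participants:
        bmi_cat = "obese" if p["bmi"] >= 30 else "overweight"
        strata.setdefault((p["gender"], bmi_cat), []).append(p)
    allocation = {}
    for members in strata.values():
        rng.shuffle(members)
        block = []
        for person in members:
            if not block:          # start a fresh shuffled block of all arms
                block = ARMS[:]
                rng.shuffle(block)
            allocation[person["id"]] = block.pop()
    return allocation

# Hypothetical participant records.
people = [
    {"id": 1, "gender": "F", "bmi": 27.0},
    {"id": 2, "gender": "F", "bmi": 31.5},
    {"id": 3, "gender": "M", "bmi": 29.0},
    {"id": 4, "gender": "M", "bmi": 34.2},
    {"id": 5, "gender": "F", "bmi": 33.0},
    {"id": 6, "gender": "M", "bmi": 26.5},
]
print(stratified_allocation(people))
```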