161 research outputs found
Detection of a z=0.0515, 0.0522 absorption system in the QSO S4 0248+430 due to an intervening galaxy
In some of the few cases where the line of sight to a Quasi-Stellar Object (QSO) passes near a galaxy, the galaxy redshift is almost identical to an absorption redshift in the spectrum of the QSO. Although these relatively low redshift QSO-galaxy pairs may not be typical of the majority of the narrow heavy-element QSO absorption systems, they provide a direct measure of column densities in the outer parts of galaxies and some limits on the relative abundances of the gas. Observations are presented here of the QSO S4 0248+430 and a nearby anonymous galaxy (Kuhr 1977). The 14 arcsecond separation of the line of sight to the QSO (z_e = 1.316) and the z = 0.052 spiral galaxy (a projected separation of 20 kpc for H_0 = 50 km/s/Mpc and q_0 = 0) makes this a particularly suitable pair for probing the extent and content of gas in the galaxy. Low resolution (6 Å full width at half maximum), long-slit charge-coupled device (CCD) spectra show strong Ca II H and K lines in absorption at the redshift of the galaxy (Junkkarinen 1987). Higher resolution spectra, showing both Ca II H and K and Na I D1 and D2 in absorption, and direct images are reported here.
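The quoted 20 kpc projected separation follows directly from the 14 arcsecond offset at z = 0.052 under the stated cosmology. The sketch below (not from the paper) reproduces that arithmetic, assuming the q_0 = 0 Mattig relation for the angular diameter distance and the small-angle approximation:

```python
# Projected separation of the absorber from a 14 arcsec offset at z = 0.052,
# assuming H0 = 50 km/s/Mpc and q0 = 0 (Mattig relation); illustrative only.
C_KM_S = 2.998e5          # speed of light, km/s
H0 = 50.0                 # Hubble constant, km/s/Mpc
ARCSEC_PER_RAD = 206265.0

def angular_diameter_distance_q0_zero(z):
    """D_A for q0 = 0: D_L = (c/H0) * z * (1 + z/2), and D_A = D_L / (1+z)^2 (Mpc)."""
    d_lum = (C_KM_S / H0) * z * (1.0 + z / 2.0)
    return d_lum / (1.0 + z) ** 2

z_gal = 0.052
theta_rad = 14.0 / ARCSEC_PER_RAD                   # 14 arcsec in radians
d_a = angular_diameter_distance_q0_zero(z_gal)      # ~289 Mpc
sep_kpc = d_a * theta_rad * 1000.0                  # small-angle projected separation
print(f"projected separation ~ {sep_kpc:.0f} kpc")  # ~20 kpc, as quoted
```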
Massively parallel I/O: Building an infrastructure for parallel computing
The solution of Grand Challenge Problems will require computations that are too large to fit in the memories of even the largest machines. Inevitably, new designs of I/O systems will be necessary to support them. This report describes work investigating I/O subsystems for massively parallel computers. Specifically, the authors investigate out-of-core algorithms for common scientific calculations and present several theoretical results. They also describe several approaches to parallel I/O, including partitioned secondary storage and choreographed I/O, and the implications of each for massively parallel computing.
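As a rough illustration of the out-of-core pattern discussed above (not code from the report), the sketch below streams a matrix too large for memory through fixed-size blocks held on disk; the file layout, block size, and function name are assumptions for the example.

```python
# Illustrative out-of-core reduction: sum the columns of a large matrix stored
# row-major on disk, reading one block of rows at a time so memory stays bounded.
import numpy as np

def out_of_core_column_sums(path, n_rows, n_cols, rows_per_block=1024,
                            dtype=np.float64):
    """Accumulate column sums of an (n_rows x n_cols) matrix in fixed-size blocks."""
    matrix = np.memmap(path, dtype=dtype, mode="r", shape=(n_rows, n_cols))
    totals = np.zeros(n_cols, dtype=dtype)
    for start in range(0, n_rows, rows_per_block):
        block = np.array(matrix[start:start + rows_per_block])  # copy only this block into RAM
        totals += block.sum(axis=0)
    return totals
```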
3D seismic imaging on massively parallel computers
The ability to image complex geologies, such as salt domes in the Gulf of Mexico and thrusts in mountainous regions, is key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D finite-difference prestack depth migration, remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D prestack depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable seismic imaging code.
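To give a flavor of the finite-difference kernels that dominate this kind of workload, here is a schematic acoustic wavefield update of the sort that underlies prestack depth migration; it is purely illustrative (no absorbing boundaries, source terms, or imaging condition) and is not the project's migration code.

```python
# Schematic second-order (in time and space) acoustic finite-difference step
# on a uniform 3D grid; periodic boundaries via np.roll, for illustration only.
import numpy as np

def fd_step(p_prev, p_curr, velocity, dt, dx):
    """Advance an acoustic pressure field one time step."""
    lap = (-6.0 * p_curr
           + np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0)
           + np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1)
           + np.roll(p_curr, 1, 2) + np.roll(p_curr, -1, 2)) / dx**2
    return 2.0 * p_curr - p_prev + (velocity * dt) ** 2 * lap
```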
Simulations of hydrodynamic interactions among immersed particles in Stokes flow using a massively parallel computer
In this paper, a massively parallel implementation of the boundary element method for studying particle transport in Stokes flow is discussed. The numerical algorithm couples the quasistatic Stokes equations for the fluid with kinematic and equilibrium equations for the particles. The formation and assembly of the discretized boundary element equations are based on the torus-wrap mapping, as opposed to the more traditional row- or column-wrap mappings. The equation set is solved using a block Jacobi iteration method. Results are shown for an example application problem, which requires solving a dense system of 6240 equations more than 1200 times.
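A minimal serial block Jacobi iteration in the spirit of the solver described above is sketched below; the parallel torus-wrap distribution and BEM assembly are omitted, and the block size of 6 is chosen here only on the assumption of six rigid-body unknowns per particle.

```python
# Minimal (serial) block Jacobi iteration for a dense system A x = b;
# a sketch of the solver strategy, not the paper's parallel implementation.
import numpy as np

def block_jacobi(A, b, block_size=6, tol=1e-8, max_iter=500):
    n = len(b)
    x = np.zeros(n)
    blocks = [slice(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    # Pre-factor the diagonal blocks once; they are reused every sweep.
    diag_inv = [np.linalg.inv(A[s, s]) for s in blocks]
    for _ in range(max_iter):
        x_new = x.copy()
        for s, Dinv in zip(blocks, diag_inv):
            # b[s] minus the off-block contributions, using the previous iterate.
            residual = b[s] - A[s, :] @ x + A[s, s] @ x[s]
            x_new[s] = Dinv @ residual
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
            return x_new
        x = x_new
    return x
```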
NNSA ASC Exascale Environment Planning, Applications Working Group, Report February 2011
The scope of the Apps WG covers three areas of interest: Physics and Engineering Models (PEM), multi-physics Integrated Codes (IC), and Verification and Validation (V&V). Each places different demands on the exascale environment, and the exascale challenge will be to provide environments that optimize all three. PEM serve as a test bed both for model development and for 'best practices' in IC code development, and are also used as standalone codes to improve scientific understanding. Rapidly achieving reasonable performance with a small team is the key to maintaining PEM innovation; thus, the environment must provide the ability to develop portable code at a higher level of abstraction, which can then be tuned as needed. PEM concentrate their computational footprint in one or a few kernels that must perform efficiently. Their comparative simplicity permits extreme optimization, so the environment must provide the ability to exercise significant control over the lower software and hardware levels. IC serve as the underlying software tools employed for most ASC problems of interest. Often coupling dozens of physics models into very large, very complex applications, ICs are usually the product of hundreds of staff-years of development, with lifetimes measured in decades. Thus, emphasis is placed on portability, maintainability and overall performance, with optimization done on the whole rather than on individual parts. The exascale environment must provide a high-level standardized programming model with effective tools and mechanisms for fault detection and remediation. Finally, V&V addresses the infrastructure and methods to facilitate the assessment of code and model suitability for applications, and uncertainty quantification (UQ) methods for the quantification of margins and uncertainties (QMU). V&V employs both PEM and IC, with somewhat differing goals, i.e., parameter studies and error assessments to determine both the quality of the calculation and the expected deviations of simulations from experiments. The exascale environment must provide a performance envelope suitable both for capacity calculations (high throughput) and for full-system capability runs (high performance). Analysis of the results places shared demands on both the I/O and the visualization subsystems.
Seahawk: moving beyond HTML in Web-based bioinformatics analysis
Background: Traditional HTML interfaces for input to and output from bioinformatics analyses on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analyses.

Results: We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric, approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology or relevant services is required. In stark contrast to other MOBY-S clients, Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements, which import existing user data into the MOBY-S format.

Conclusion: As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer.
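The data-import mechanism described above (XSLT, regular expressions and XPath mapping user documents into MOBY-S XML) can be pictured with a toy transform; the stylesheet and element names below are invented for illustration and are not Seahawk's actual rules.

```python
# Toy version of an XSLT/XPath mapping from a user document to a MOBY-like payload;
# stylesheet and target element names are illustrative only.
from lxml import etree

STYLESHEET = etree.XML(b"""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <mobyData>
      <xsl:for-each select=".//sequence">
        <Simple><xsl:value-of select="normalize-space(.)"/></Simple>
      </xsl:for-each>
    </mobyData>
  </xsl:template>
</xsl:stylesheet>
""")

def to_moby_like(xml_fragment: bytes) -> bytes:
    """Transform a user-supplied XML/XHTML fragment into a MOBY-style payload."""
    transform = etree.XSLT(STYLESHEET)
    doc = etree.XML(xml_fragment)
    return etree.tostring(transform(doc), pretty_print=True)

print(to_moby_like(b"<page><sequence> MKVLAT </sequence></page>").decode())
```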
Automating Genomic Data Mining via a Sequence-based Matrix Format and Associative Rule Set
There is an enormous amount of information encoded in each genome – enough to create living, responsive and adaptive organisms. Raw sequence data alone is not enough to understand function, mechanisms or interactions. Changes in a single base pair can lead to disease, such as sickle-cell anemia, while some large megabase deletions have no apparent phenotypic effect. Genomic features are varied in their data types, and annotation of these features is spread across multiple databases. Herein, we develop a method to automate exploration of genomes by iteratively exploring sequence data for correlations and building upon them. First, to integrate and compare different annotation sources, a sequence matrix (SM) is developed to contain position-dependent information. Second, a classification tree is developed for matrix row types, specifying how each data type is to be treated with respect to other data types for analysis purposes. Third, correlative analyses are developed to analyze features of each matrix row in terms of the other rows, guided by the classification tree as to which analyses are appropriate. A prototype was developed and proved successful in detecting coinciding genomic features among genes, exons, repetitive elements and CpG islands.
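As a rough illustration of the sequence matrix (SM) idea, the sketch below stores one annotation track per row over a window of genomic positions and runs a simple coincidence analysis; the track names, encodings and window size are assumptions for the example, not the paper's schema.

```python
# Toy sequence matrix: one row per annotation track, one column per base position.
import numpy as np

positions = 1000                      # length of the genomic window
tracks = ["gene", "exon", "repeat", "cpg_island"]
sm = np.zeros((len(tracks), positions), dtype=np.int8)   # 1 = feature present

sm[tracks.index("gene"), 100:600] = 1
sm[tracks.index("exon"), 120:180] = 1
sm[tracks.index("cpg_island"), 90:140] = 1

# A simple correlative analysis: how much of each CpG island is covered by each track?
cpg = sm[tracks.index("cpg_island")].astype(bool)
for name, row in zip(tracks, sm):
    overlap = np.logical_and(row.astype(bool), cpg).sum() / max(cpg.sum(), 1)
    print(f"{name:12s} fraction of CpG-island bases covered: {overlap:.2f}")
```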
Evaluation of a commercial web-based weight loss and weight loss maintenance program in overweight and obese adults: a randomized controlled trial
Background: Obesity rates in adults continue to rise, and effective treatment programs with a broad reach are urgently required. This paper describes the study protocol for a web-based randomized controlled trial (RCT) of a commercially available program for overweight and obese adult males and females. The aim of this RCT was to determine and compare the efficacy of two web-based interventions for weight loss and maintenance of lost weight.

Methods/Design: Overweight and obese adult males and females were stratified by gender and BMI and randomly assigned to one of three groups for 12 weeks: waitlist control, basic online weight loss, or enhanced online weight loss. Control participants were re-randomized to the two weight loss groups at the end of the 12-week period. Participants in the basic and enhanced groups had the option to continue or repeat the 12-week program. Once the weight loss goal was achieved at the end of 12 weeks, or otherwise on completion of 24 weeks of weight loss, participants were re-randomized to one of two online maintenance programs (maintenance basic or maintenance enhanced) until 18 months from commencement of the weight loss program. Assessments took place at baseline and at three, six, and 18 months after commencing the initial weight loss intervention, with control participants repeating the initial assessment after three months of waiting. The primary outcome is body mass index (BMI). Other outcomes include weight, waist circumference, blood pressure, plasma markers of cardiovascular disease risk, dietary intake, eating behaviours, physical activity and quality of life. Both the weight loss and weight maintenance programs were based on social cognitive theory, with participants advised to set goals and to self-monitor weight, dietary intake and physical activity levels. The enhanced weight loss and maintenance programs provided additional personalized, system-generated feedback on progress and use of the program. Details of the methodological aspects of recruitment, inclusion criteria, randomization, intervention programs, assessments and statistical analyses are described.

Discussion: Importantly, this paper describes how an RCT of a currently available commercial online program in Australia addresses some of the shortfalls in the current literature pertaining to the efficacy of web-based weight loss programs.

Australian New Zealand Clinical Trials Registry (ANZCTR) number: ACTRN12610000197033
Refining trait resilience: identifying engineering, ecological, and adaptive facets from extant measures of resilience
The current paper presents a new measure of trait resilience derived from three common mechanisms identified in ecological theory: Engineering, Ecological and Adaptive (EEA) resilience. Exploratory and confirmatory factor analyses of five existing resilience scales suggest that the three trait resilience facets emerge and can be reduced to a 12-item scale. The conceptualization and value of EEA resilience within wider trait and well-being psychology is illustrated in terms of differing relationships with adaptive expressions of the traits of the five-factor personality model and its contribution to well-being after controlling for personality and coping, or over time. The current findings suggest that EEA resilience is a useful and parsimonious model and measure of trait resilience that can readily be placed within wider trait psychology and that is found to contribute to individual well-being.
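To make the factor-analytic step concrete, here is a minimal sketch of extracting three latent factors from item-level responses with scikit-learn; the data are random placeholders, and this does not reproduce the paper's own EFA/CFA procedure or items.

```python
# Minimal illustration of a three-factor solution over 12 items; data are placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(300, 12)).astype(float)  # 300 respondents x 12 items

# rotation="varimax" requires a recent scikit-learn release (>= 0.24).
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
scores = fa.fit_transform(responses)        # per-respondent factor scores
loadings = fa.components_.T                 # item-by-factor loading matrix (12 x 3)
print(loadings.round(2))
```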
The Internet for weight control in an obese sample: results of a randomised controlled trial
Rising levels of obesity, coupled with the limited success of currently available weight control methods, highlight the need for investigation of novel approaches to obesity treatment. This study aims to determine the effectiveness and cost-effectiveness of an Internet-based resource for obesity management.