A Sounding Rocket Mission Concept to Acquire High-Resolution Radiometric Spectra Spanning the 9 nm - 31 nm Wavelength Range
When studying Solar Extreme Ultraviolet (EUV) emissions, both single-wavelength, two-dimensional (2D) spectroheliograms and multi-wavelength, one-dimensional (1D) line spectra are important, especially for a thorough understanding of the complex processes in the solar magnetized plasma from the base of the chromosphere through the corona. 2D image data are required for a detailed study of spatial structures, whereas radiometric (i.e., spectral) data provide information on relevant atomic excitation/ionization state densities (and thus temperature). Using both imaging and radiometric techniques, several satellite missions presently study solar dynamics in the EUV, including the Solar Dynamics Observatory (SDO), Hinode, and the Solar Terrestrial Relations Observatory (STEREO). The EUV wavelengths of interest typically span 9 nm to 31 nm, with the shorter wavelengths associated with the hottest features (e.g., intense flares and bright points) and the longer wavelengths associated with cooler features (e.g., coronal holes and filaments). Because the optical components of satellite instruments degrade over time, it is not uncommon to conduct sounding rocket underflights for calibration purposes. The authors have designed a radiometric sounding rocket payload that could serve as both a calibration underflight for and a complementary scientific mission to the upcoming Solar Ultraviolet Imager (SUVI) mission aboard the GOES-R satellite (scheduled for a 2015 launch). The principal design challenge, providing quality radiometric line spectra over the full 9-31 nm range covered by SUVI, arises from the multilayer coatings required to make the optical components, including mirrors and gratings, reflective over the entire range. Typically, these multilayers provide useful EUV reflectances over bandwidths of only a few nm. Our solution to this problem was to employ a three-telescope system in which the optical components were coated with multilayers spanning three wavelength ranges to cover the three pairs of SUVI bands. The complete system was designed to fit within the Black Brant IX 22-inch-diameter payload skin envelope. The basic optical path is that of a simple parabolic telescope in which EUV light is focused onto a slit and shutter assembly and imaged onto a normal-incidence diffraction grating, which then disperses the light onto a 2048 x 2048 CCD sensor. The CCD thus records 1D spatial information along one axis and spectral information along the other. The slit spans 40 arc-minutes in length, thus covering the solar disk out to +/- 1.3 solar radii. Our operations concept includes imaging at three distinct positions: the north-south meridian, the northeast-southwest diagonal, and real-time pointing at an active region. Six 10-second images will be obtained at each position. Fine pointing is provided by the SPARCS-VII attitude control system typically employed on Black Brant solar missions. Both before and after launch, all three telescopes will be calibrated with the EUV line emission source and monochromator system at NASA's Stray Light Facility at Marshall Space Flight Center. Details of the payload design, operations concept, and data application will be presented.
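The quoted slit coverage follows from simple angular arithmetic. The sketch below reproduces it, assuming a typical apparent solar radius of about 16 arc-minutes; that value is an assumption for illustration, not a number taken from the mission design.

```python
# Back-of-the-envelope check of the slit coverage quoted above.  The apparent
# solar radius (~16 arc-minutes) is an assumed typical value, not a number
# taken from the mission design.
SLIT_LENGTH_ARCMIN = 40.0          # slit length from the abstract
SOLAR_RADIUS_ARCMIN = 16.0         # assumed apparent solar radius

coverage = (SLIT_LENGTH_ARCMIN / 2.0) / SOLAR_RADIUS_ARCMIN
print(f"Slit reaches +/- {coverage:.2f} solar radii from disk center")
# prints +/- 1.25, i.e. roughly the +/- 1.3 solar radii quoted in the text
```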
Desensitizing Inflation from the Planck Scale
A new mechanism to control Planck-scale corrections to the inflationary eta parameter is proposed. A common approach to the eta problem is to impose a shift symmetry on the inflaton field. However, this symmetry has to remain unbroken by Planck-scale effects, which is a rather strong requirement on possible ultraviolet completions of the theory. In this paper, we show that the breaking of the shift symmetry by Planck-scale corrections can be systematically suppressed if the inflaton field interacts with a conformal sector. The inflaton then receives an anomalous dimension in the conformal field theory, which leads to sequestering of all dangerous high-energy corrections. We analyze a number of models where the mechanism can be seen in action. In our most detailed example we compute the exact anomalous dimensions via a-maximization and show that the eta problem can be solved using only weakly-coupled physics.
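For context, the eta parameter referred to above is the standard slow-roll parameter, and the danger posed by Planck-suppressed operators can be stated in two lines. The block below is standard textbook background under that standard definition, not a result or model from this paper.

```latex
% Standard slow-roll definition of eta and the generic Planck-scale correction
% (textbook background, not the specific models analyzed in the paper).
\begin{align}
  \eta \;&\equiv\; M_{\mathrm{Pl}}^{2}\,\frac{V''(\phi)}{V(\phi)}\,,
  \qquad |\eta| \ll 1 \ \text{required for prolonged slow-roll inflation},\\[4pt]
  \Delta V \;&\sim\; c\,V(\phi)\,\frac{\phi^{2}}{M_{\mathrm{Pl}}^{2}}
  \;\;\Longrightarrow\;\;
  \Delta\eta \;\sim\; 2c \;=\; \mathcal{O}(1)\,,
\end{align}
```

so unless the coefficient c is suppressed, a generic dimension-six correction spoils slow roll; keeping c small is precisely what the shift symmetry, or the conformal sequestering mechanism proposed here, is meant to achieve.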
Quantifying trends in disease impact to produce a consistent and reproducible definition of an emerging infectious disease.
The proper allocation of public health resources for research and control requires quantification of both a disease's current burden and the trend in its impact. Infectious diseases that have been labeled as "emerging infectious diseases" (EIDs) have received heightened scientific and public attention and resources. However, the label "emerging" is rarely backed by quantitative analysis and is often used subjectively. This can lead to over-allocation of resources to diseases that are incorrectly labeled "emerging," and insufficient allocation of resources to diseases for which evidence of an increasing or high sustained impact is strong. We suggest a simple quantitative approach, segmented regression, to characterize the trends and emergence of diseases. Segmented regression identifies one or more trends in a time series and determines the most statistically parsimonious split(s), or joinpoints, between them. These joinpoints indicate time points when a change in trend occurred and may identify periods in which the drivers of disease impact change. We illustrate the method by analyzing temporal patterns in incidence data for twelve diseases. This approach provides a way to classify a disease as currently emerging, re-emerging, receding, or stable based on temporal trends, as well as to pinpoint the time when the change in these trends happened. We argue that quantitative approaches to defining emergence based on the trend in impact of a disease can, with appropriate context, be used to prioritize resources for research and control. Implementing this more rigorous definition of an EID will require buy-in and enforcement from scientists, policy makers, peer reviewers, and journal editors, but has the potential to improve resource allocation for global health.
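As a minimal illustration of the segmented-regression idea, a single-joinpoint fit can be found by scanning candidate breakpoints and keeping the split with the lowest residual error. This is a sketch only; the authors' analysis would typically rely on dedicated joinpoint software with permutation-based model selection, and the toy data below are invented for the example.

```python
import numpy as np

def two_segment_fit(t, y):
    """Scan candidate joinpoints, fit an ordinary least-squares line to each
    side, and keep the split with the lowest residual sum of squares
    (the most parsimonious single change in trend)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    best_sse, best_joinpoint = np.inf, None
    for k in range(2, len(t) - 2):                  # leave >= 2 points per segment
        sse = 0.0
        for seg_t, seg_y in ((t[:k], y[:k]), (t[k:], y[k:])):
            A = np.column_stack([seg_t, np.ones_like(seg_t)])
            coef, *_ = np.linalg.lstsq(A, seg_y, rcond=None)
            sse += float(np.sum((seg_y - A @ coef) ** 2))
        if sse < best_sse:
            best_sse, best_joinpoint = sse, t[k]
    return best_joinpoint, best_sse

# Toy usage: incidence flat for a decade, then rising (an "emerging" pattern).
years = np.arange(1990, 2010)
cases = np.r_[np.full(10, 50.0), 50 + 12 * np.arange(10)]
cases += np.random.default_rng(0).normal(0, 3, 20)
print(two_segment_fit(years, cases))    # estimated joinpoint near the year 2000
```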
Gene identification and protein classification in microbial metagenomic sequence data via incremental clustering
Background: The identification and study of proteins from metagenomic datasets can shed light on the roles and interactions of the source organisms in their communities. However, metagenomic datasets are characterized by the presence of organisms with varying GC composition, codon usage biases, etc., and consequently gene identification is challenging. The vast amount of sequence data also requires faster protein family classification tools.
Results: We present a computational improvement to a sequence clustering approach that we developed previously to identify and classify protein-coding genes in large microbial metagenomic datasets. The clustering approach can be used to identify protein-coding genes in prokaryotes, viruses, and intron-less eukaryotes. The computational improvement is based on an incremental clustering method that does not require the expensive all-against-all computation demanded by the original approach, while still preserving its remote homology detection capabilities. We present evaluations of the clustering approach in protein-coding gene identification and classification, and also present the results of updating the protein clusters from our previous work with recent genomic and metagenomic sequences. The clustering results are available via CAMERA (http://camera.calit2.net).
Conclusion: The clustering paradigm is shown to be a very useful tool in the analysis of microbial metagenomic data. The incremental clustering method is shown to be much faster than the original approach in identifying genes, grouping sequences into existing protein families, and identifying novel families that have multiple members in a metagenomic dataset. These clusters provide a basis for further studies of protein families.
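The incremental idea can be pictured as a greedy, representative-based clustering loop: each new sequence is compared only against cluster representatives rather than against everything already clustered. The sketch below is an illustrative simplification, not the authors' method, which relies on more sensitive comparisons for remote homology detection; `similarity` is a placeholder for whatever sequence-scoring function is used.

```python
def incremental_cluster(sequences, similarity, threshold=0.6):
    """Greedy incremental clustering: each new sequence is compared only to the
    representatives of existing clusters, avoiding the quadratic
    all-against-all comparison."""
    clusters = []                                   # [{"rep": seq, "members": [seqs]}]
    for seq in sequences:
        for cluster in clusters:
            if similarity(seq, cluster["rep"]) >= threshold:
                cluster["members"].append(seq)      # joins an existing family
                break
        else:
            clusters.append({"rep": seq, "members": [seq]})  # seeds a new family
    return clusters

# Toy usage with a crude identity-based similarity (placeholder scoring only).
def toy_similarity(a, b):
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

families = incremental_cluster(["MKLV", "MKLI", "GGGS", "MKLV"], toy_similarity)
print(len(families))    # 2 clusters with this toy data and threshold
```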
The Subsystems Approach to Genome Annotation and its Use in the Project to Annotate 1000 Genomes
The release of the 1000th complete microbial genome will occur in the next two to three years. In anticipation of this milestone, the Fellowship for Interpretation of Genomes (FIG) launched the Project to Annotate 1000 Genomes. The project is built around the principle that the key to improved accuracy in high-throughput annotation technology is to have experts annotate single subsystems over the complete collection of genomes, rather than having an annotation expert attempt to annotate all of the genes in a single genome. Using the subsystems approach, all of the genes implementing the subsystem are analyzed by an expert in that subsystem. An annotation environment was created where populated subsystems are curated and projected to new genomes. A portable notion of a populated subsystem was defined, and tools were developed for exchanging and curating these objects. Tools were also developed to resolve conflicts between populated subsystems. The SEED is the first annotation environment that supports this model of annotation. Here, we describe the subsystems approach and offer the first release of our growing library of populated subsystems. The initial release of data includes 180,177 distinct proteins with 2133 distinct functional roles. This data comes from 173 subsystems and 383 different organisms.
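Conceptually, a populated subsystem is a spreadsheet whose columns are functional roles and whose rows are genomes, with each cell holding the genes asserted to implement that role in that genome. The sketch below is a toy rendering of that idea; all names and gene identifiers are illustrative placeholders, not entries from the SEED release.

```python
# Toy sketch of a populated subsystem as a spreadsheet-like structure.
# All names and gene identifiers are illustrative placeholders.
subsystem = {
    "name": "Example biosynthetic pathway",
    "roles": ["Role A", "Role B", "Role C"],
    "spreadsheet": {
        "Genome 1": {"Role A": ["gene_0001"], "Role B": ["gene_0002"], "Role C": ["gene_0003"]},
        "Genome 2": {"Role A": ["gene_1101"], "Role B": [], "Role C": ["gene_1103"]},
    },
}

def missing_roles(subsystem, genome):
    """Roles with no asserted gene in a genome: candidate annotation gaps that
    a subsystem curator would inspect when projecting to a new genome."""
    row = subsystem["spreadsheet"].get(genome, {})
    return [role for role in subsystem["roles"] if not row.get(role)]

print(missing_roles(subsystem, "Genome 2"))   # ['Role B']
```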
Probing the Links between Political Economy and Non-Traditional Security: Themes, Approaches, and Instruments
This is a pre-print of an article published in International Politics. The definitive publisher-authenticated version, Hameiri, Shahar, and Lee Jones, "Probing the links between political economy and non-traditional security: Themes, approaches and instruments," International Politics (2015), is available online at http://dx.doi.org/10.1057/ip.2015.1.
In recent decades, the security agenda for states and international organisations has expanded dramatically to include a range of 'non-traditional', transnational security issues. It is often suggested that globalisation has been a key driver of the emergence or intensification of these problems, but, surprisingly, little sustained scholarly effort has been made to examine the link between responses to the new security agenda and the changing political economy. This curious neglect largely reflects the mutual blind-spots of the sub-disciplines of International Security Studies and International Political Economy, coupled with the dominance of approaches that tend to neglect economic factors. This special issue, which this article introduces, aims to overcome this significant gap. In particular, it focuses on three key themes: the broad relationship between security and the political economy; what is being secured in the name of security, and how this has changed; and how things are being secured, that is, what modes of governance have emerged to manage security problems. In all of these areas, the contributions point to the crucial role of the state in translating shifting state-economy relations into new security definitions and practices.
Swarming Behavior in Plant Roots
Interactions between individuals that are guided by simple rules can generate swarming behavior. Swarming behavior has been observed in many groups of organisms, including humans, and recent research has revealed that plants also demonstrate social behavior based on mutual interaction with other individuals. However, this behavior has not previously been analyzed in the context of swarming. Here, we show that roots can be influenced by their neighbors to induce a tendency to align the directions of their growth. In the apparently noisy patterns formed by growing roots, episodic alignments are observed as the roots grow close to each other. These events are incompatible with the statistics of purely random growth. We present experimental results and a theoretical model that describes the growth of maize roots in terms of swarming
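A common way to formalize this kind of neighbour-induced alignment is a Vicsek-style update in which each growing tip turns partway toward the mean heading of nearby tips. The sketch below is a generic illustration of that class of model with assumed parameter names and values; it is not the specific model developed for maize roots in the paper.

```python
import numpy as np

def alignment_step(angles, positions, radius=1.0, coupling=0.5, noise=0.1, rng=None):
    """One update of a generic alignment ("swarming") model: each tip turns
    part of the way toward the mean growth direction of its neighbours within
    `radius`, plus a small random perturbation.  Parameters are illustrative."""
    rng = rng or np.random.default_rng()
    updated = angles.copy()
    for i, p in enumerate(positions):
        near = np.linalg.norm(positions - p, axis=1) < radius       # includes self
        mean_dir = np.arctan2(np.sin(angles[near]).mean(), np.cos(angles[near]).mean())
        turn = np.angle(np.exp(1j * (mean_dir - angles[i])))        # wrapped difference
        updated[i] = angles[i] + coupling * turn + noise * rng.standard_normal()
    return updated

# Toy usage: ten tips with random headings gradually align over repeated steps.
rng = np.random.default_rng(1)
positions = rng.uniform(0, 2, size=(10, 2))
angles = rng.uniform(-np.pi, np.pi, size=10)
for _ in range(50):
    angles = alignment_step(angles, positions, rng=rng)
print(abs(np.exp(1j * angles).mean()))   # order parameter (close to 1 when aligned)
```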
Rebooting the human mitochondrial phylogeny: an automated and scalable methodology with expert knowledge
Background: Mitochondrial DNA is an ideal source of information for evolutionary and phylogenetic studies due to its extraordinary properties and abundance. Many insights can be gained from such studies, including but not limited to screening genetic variation to identify potentially deleterious mutations. However, such advances require efficient solutions to very difficult computational problems, a need that is hampered by the sheer abundance of data that gives the analysis its strength.
Results: We develop a systematic, automated methodology to overcome these difficulties, building from readily available, public sequence databases to high-quality alignments and phylogenetic trees. Within each stage of an autonomous workflow, outputs are carefully evaluated and outlier detection rules are defined to integrate expert knowledge with automated curation, hence avoiding the manual bottleneck found in past approaches to the problem. Using these techniques, we have performed exhaustive updates to the human mitochondrial phylogeny, illustrating the power and computational scalability of our approach, and we have conducted some initial analyses on the resulting phylogenies.
Conclusions: The problem at hand demands careful definition of inputs and adequate algorithmic treatment for its solutions to be realistic and useful. It is possible to define formal rules to address the former requirement by refining inputs directly and through their combination as outputs, and the latter are also of help in ascertaining the performance of chosen algorithms. Rules can exploit known or inferred properties of datasets to simplify inputs through partitioning, therefore cutting computational costs and affording work on rapidly growing, otherwise intractable datasets. Although expert guidance may be necessary to assist the learning process, low-risk results can be fully automated and have proved themselves convenient and valuable.
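One way such outlier-detection rules can be encoded is as simple automated filters that route suspicious inputs to expert review instead of letting them flow silently into the next pipeline stage. The example below is a minimal illustrative rule of that kind; the threshold and criterion are assumptions for the sketch, not the rules used in the paper (the ~16.6 kb reference length of the complete human mitochondrial genome is, however, a standard figure).

```python
import statistics

def flag_outlier_sequences(seq_lengths, max_rel_dev=0.05):
    """A toy automated-curation rule: flag sequences whose length deviates from
    the median by more than a chosen fraction, so they are routed to expert
    review instead of silently entering the alignment stage."""
    median_len = statistics.median(seq_lengths.values())
    return [seq_id for seq_id, length in seq_lengths.items()
            if abs(length - median_len) / median_len > max_rel_dev]

# Toy usage: complete human mtDNA sequences are ~16.6 kb, so a 12 kb record
# would be flagged for inspection rather than aligned automatically.
lengths = {"seq1": 16569, "seq2": 16571, "seq3": 12040}
print(flag_outlier_sequences(lengths))   # ['seq3']
```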