Ecology of Rhizopodea and Ostracoda of southern Pamlico Sound region, North Carolina
112 p., including 21 pl., 19 fig. http://paleo.ku.edu/contributions.htm
Optimization Of Detergent-Mediated Reconstitution Of Influenza A M2 Protein Into Proteoliposomes
We report the optimization of detergent-mediated reconstitution of an integral membrane-bound protein, full-length influenza M2 protein, by direct insertion into detergent-saturated liposomes. Detergent-mediated reconstitution is an important method for preparing proteoliposomes for studying membrane proteins, and must be optimized for each combination of protein and membrane constituents used. The purpose of the reconstitution was to prepare samples for site-directed spin-labeling electron paramagnetic resonance (SDSL-EPR) studies. Our goals in optimizing the protocol were to minimize the amount of detergent used, reduce overall proteoliposome preparation time, and confirm the removal of all detergent. The liposomes were composed of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and 1-palmitoyl-2-oleoyl-sn-glycero-3-[phospho-rac-(1-glycerol)] (POPG), and the detergent octylglucoside (OG) was used for reconstitution. Rigorous physical characterization was applied to optimize each step of the reconstitution process. We used dynamic light scattering (DLS) to determine the amount of OG needed to saturate the preformed liposomes. During detergent removal by adsorption onto Bio-Beads, we quantified the detergent concentration by means of a colorimetric assay, thereby determining the number of Bio-Bead additions needed to remove all detergent from the final proteoliposomes. We found that the overnight Bio-Bead incubation used in previously published protocols can be omitted, reducing the time needed for reconstitution. We also monitored the size distribution of the proteoliposomes with DLS, confirming that the size distribution remains essentially constant throughout the reconstitution process.
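The bookkeeping behind the detergent-removal step can be sketched as follows: after each Bio-Bead addition the colorimetric assay gives an OG concentration, and additions continue until OG falls below the assay's detection limit. All numbers here (detection limit, per-addition removal fraction) are illustrative assumptions, not the measured values from this work.

```python
# Hypothetical sketch of counting Bio-Bead additions needed to remove OG.
# The removal fraction per addition and the assay detection limit are
# assumed values for illustration only.

DETECTION_LIMIT_MM = 0.2   # assumed colorimetric-assay detection limit (mM)
REMOVAL_FRACTION = 0.8     # assumed fraction of OG adsorbed per addition

def bead_additions_needed(initial_og_mm, removal_fraction=REMOVAL_FRACTION,
                          limit_mm=DETECTION_LIMIT_MM):
    """Count Bio-Bead additions until OG is below the detection limit."""
    additions = 0
    og = initial_og_mm
    while og >= limit_mm:
        og *= (1.0 - removal_fraction)  # each addition adsorbs a fixed fraction
        additions += 1
    return additions

print(bead_additions_needed(20.0))  # e.g. starting from 20 mM OG
```

In practice the removal fraction would itself be determined empirically from successive assay readings rather than assumed constant.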
Toward a Dependability Case Language and Workflow for a Radiation Therapy System
We present a near-future research agenda for bringing a suite of modern programming-languages verification tools, specifically interactive theorem proving, solver-aided languages, and formally defined domain-specific languages, to the development of a specific safety-critical system: a radiotherapy medical device. We sketch how we believe recent programming-languages research advances can merge with existing best practices for safety-critical systems to increase system assurance and developer productivity. We motivate two hypotheses central to our agenda: that we should start with a single specific system, and that we need to integrate a variety of complementary verification and synthesis tools into system development.
SensorWeb Evolution Using the Earth Observing One (EO-1) Satellite as a Test Platform
The Earth Observing One (EO-1) satellite was launched in November 2000 as a one-year technology demonstration mission for a variety of space technologies. After the first year, in addition to collecting science data from its instruments, the EO-1 mission has been used as a testbed for a variety of technologies which provide various automation capabilities and which have served as a pathfinder for the creation of SensorWebs. A SensorWeb is the integration of a variety of space, airborne and ground sensors into a loosely coupled collaborative sensor system that automatically provides useful data products. Typically, a SensorWeb is composed of heterogeneous sensors tied together with a messaging architecture and web services. This paper provides an overview of the various technologies that were tested and eventually folded into normal operations. As these technologies were folded in, the nature of operations transformed. The SensorWeb software enables easy connectivity for collaboration with sensors, and a side benefit has been improved EO-1 operational efficiency. This paper presents the various phases of EO-1 operation over the past 12 years and also presents metrics demonstrating the operational efficiency gains.
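The SensorWeb pattern described above (heterogeneous sensors loosely coupled through messaging, with handlers producing data products) can be sketched minimally as a publish/subscribe bus. The message fields and sensor names below are illustrative assumptions, not the actual EO-1 interfaces.

```python
# Minimal sketch of a loosely coupled SensorWeb message bus.
# Sensor names, message kinds, and payload fields are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Observation:
    sensor: str        # e.g. "EO-1/ALI", "ground-gauge-7" (illustrative)
    kind: str          # e.g. "flood-extent", "water-level"
    payload: dict

class SensorWebBus:
    """Routes observations from heterogeneous sensors to product handlers."""
    def __init__(self):
        self.handlers: Dict[str, List[Callable]] = {}

    def subscribe(self, kind, handler):
        self.handlers.setdefault(kind, []).append(handler)

    def publish(self, obs: Observation):
        # Senders need not know who consumes an observation: loose coupling.
        for handler in self.handlers.get(obs.kind, []):
            handler(obs)

bus = SensorWebBus()
products = []
bus.subscribe("flood-extent", lambda o: products.append(o.sensor))
bus.publish(Observation("EO-1/ALI", "flood-extent", {"area_km2": 12.5}))
print(products)
```

In a real SensorWeb the bus would be a networked messaging layer fronted by web services rather than an in-process dispatcher, but the decoupling of producers from consumers is the same.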
A gravitational lensing explanation for the excess of strong Mg-II absorbers in GRB afterglow spectra
GRB afterglows offer a probe of the intergalactic medium out to high redshift which complements observations along more abundant quasar lines-of-sight. Although both quasars and GRB afterglows should provide a priori random sight-lines through the intervening IGM, it has been observed that strong Mg-II absorbers are twice as likely to be found along sight-lines toward GRBs. Several proposals to reconcile this discrepancy have been put forward, but none has been found sufficient to explain the magnitude of the effect. In this paper we estimate the effect of gravitational lensing by galaxies and their surrounding mass distributions on the statistics of Mg-II absorption. We find that the multi-band magnification bias could be very strong in the spectroscopic GRB afterglow population and that gravitational lensing can explain the discrepancy in the density of absorbers, for plausibly steep luminosity functions. The model makes the prediction that approximately 20%-60% of the spectroscopic afterglow sample (i.e. ~5-15 of 26 sources) would have been multiply imaged, and hence result in repeating bursts. We show that despite this large lensing fraction it is likely that none would yet have been identified by chance, owing to the finite sky coverage of GRB searches. We predict that continued optical monitoring of the bright GRB afterglow locations in the months and years following the initial decay would lead to identification of lensed GRB afterglows. A confirmation of the lensing hypothesis would allow us to constrain the GRB luminosity function down to otherwise inaccessibly faint levels, with potential consequences for GRB models.
Comment: 8 pages, 3 figures. Submitted to MNRAS
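The quoted lensing fraction maps onto the spectroscopic sample of 26 afterglows by simple arithmetic, which is worth making explicit:

```python
# Arithmetic check of the predicted number of multiply imaged afterglows:
# 20%-60% of a 26-source spectroscopic sample.

sample_size = 26
predicted = [int(fraction * sample_size) for fraction in (0.20, 0.60)]
print(predicted)  # lower and upper bounds on multiply imaged sources
```

This recovers the ~5-15 source range quoted in the abstract.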
The Namibia Early Flood Warning System, A CEOS Pilot Project
Over the past few years, an international collaboration has developed a pilot project under the auspices of the Committee on Earth Observation Satellites (CEOS) Disasters team. The overall team consists of civilian satellite agencies. For this pilot effort, the development team consists of NASA, the Canadian Space Agency, the Univ. of Maryland, the Univ. of Colorado, the Univ. of Oklahoma, the Ukraine Space Research Institute and the Joint Research Centre (JRC) of the European Commission. This development team collaborates with regional, national and international agencies to deliver end-to-end disaster coverage. In particular, the team is collaborating on this effort with the Namibia Department of Hydrology, beginning in Namibia. The ultimate goal, however, is to expand the functionality to provide early warning over the southern Africa region. The initial collaboration was initiated by the United Nations Office for Outer Space Affairs and the CEOS Working Group on Information Systems and Services (WGISS). The initial driver was to demonstrate international interoperability using various space agency sensors and models along with regional in-situ ground sensors. In 2010, the team created a preliminary semi-manual system to demonstrate moving and combining key data streams and delivering the data to the Namibia Department of Hydrology during their flood season, which typically runs from January through April. In this pilot, a variety of moderate-resolution and high-resolution satellite flood imagery was rapidly delivered and used in conjunction with flood predictive models in Namibia. This was collected in conjunction with ground measurements and was used to examine how to create a customized flood early warning system. During the first year, the team made use of SensorWeb technology to gather various sensor data, which was used to monitor flood waves traveling down basins originating in Angola but eventually flooding villages in Namibia.
The team made use of standardized interfaces, such as those articulated under the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) set of web services [1][2]. However, it was discovered that in order to make a system like this functional, many performance issues had to be addressed. Data sets were large and located in a variety of locations behind firewalls, and had to be accessed across open networks, so security was an issue. Furthermore, network access acted as a bottleneck when transferring map products to where they were needed. Finally, during disasters, many users and computer processes act in parallel, and thus it was very easy to overload the single string of computers stitched together in the virtual system that was initially developed. To address some of these performance issues, the team partnered with the Open Cloud Consortium (OCC), which supplied a computation Cloud located at the University of Illinois at Chicago and some manpower to administer it. The Flood SensorWeb [3] system was interfaced to the Cloud to provide a high-performance user interface and product development engine. Figure 1 shows the functional diagram of the Flood SensorWeb. Figure 2 shows some of the functionality of the Computation Cloud that was integrated. A significant portion of the original system was ported to the Cloud, and during the past year technical issues were resolved, including web access to the Cloud, security over the open Internet, initial experiments on handling surge capacity by using the virtual machines in the Cloud in parallel, tiling techniques to render large data sets as layers on a map, interfaces allowing users to customize the data processing/product chain, and other performance-enhancing techniques. The conclusion reached from this effort, and from this presentation, is that defining the interoperability standards is only a small fraction of the work.
For example, once open web service standards were defined, many users could not make use of the standards due to security restrictions. Furthermore, once an interoperable system is functional, a surge of users can render it unusable, especially in the disaster domain.
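One concrete form of the tiling technique mentioned above is cutting a large georeferenced layer into standard web-map (XYZ) tiles, so that only the tiles in view are rendered. The formula below is the standard Web Mercator tile index; its use here for the Flood SensorWeb specifically is an illustrative assumption.

```python
# Sketch of standard XYZ (slippy-map) tile indexing in Web Mercator,
# as commonly used to serve large raster layers tile by tile.

import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Return the (x, y) tile index containing a lon/lat point at a zoom level."""
    n = 2 ** zoom  # tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Tile covering a point in northern Namibia at zoom 10 (coordinates illustrative):
print(lonlat_to_tile(17.9, -17.8, 10))
```

A tile server only needs to compute and cache the tiles actually requested, which is what keeps large data sets responsive for many parallel users.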
Medically unexplained symptoms and attachment theory: The BodyMind Approach
© 2019 Payne and Brooks. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).
This article discusses how The BodyMind Approach® (TBMA) addresses insecure attachment styles in medically unexplained symptoms (MUS). Insecure attachment styles are associated with adverse childhood experiences (ACEs) and MUS (Adshead and Guthrie, 2015) and affect sufferers' capacity to self-manage. The article goes on to propose a new hypothesis to account for TBMA's effectiveness (Payne and Brooks, 2017): that it addresses the insecure attachment styles which may be present in some MUS sufferers, thereby improving their capacity to self-manage. Three insecure attachment styles (dismissive, preoccupied and fearful) associated with MUS are discussed. TBMA is described, and explanations are provided of how it has been specifically designed to support people with insecure attachment styles. Three key concepts supporting insecure attachment styles in the content of TBMA are identified and debated: (a) emotional regulation; (b) safety; and (c) bodymindfulness. A rationale is given for the design of TBMA as distinct from psychological interventions for this population. The programme's structure, facilitation and content take account of the three insecure attachment styles above, and examples of how TBMA works with their specific characteristics are presented. TBMA has been tested and found to be effective during delivery in the United Kingdom National Health Service (NHS). Improved self-management has the potential to reduce costs for the NHS in General Practitioner time and resources.
Peer reviewed