Effects of phosphatidylserine on membrane incorporation and surface protection properties of exchangeable poly(ethylene glycol)-conjugated lipids
Liposomes containing the acidic phospholipid phosphatidylserine (PS) have been shown to interact avidly with proteins involved in blood coagulation and complement activation. Membranes with PS were therefore used to assess the shielding properties of poly(ethylene glycol 2000)-derivatized phosphatidylethanolamine (PE-PEG2000) with various acyl chain lengths on membranes containing reactive lipids. The desorption of PE-PEG2000 from PS-containing liposomes was studied using an in vitro assay based on the transfer of PE-PEG2000 into multilamellar vesicles, and the reactivity of PS-containing liposomes was monitored by quantifying interactions with blood coagulation proteins. The percent inhibition of the clotting activity of PS liposomes depended on the PE-PEG2000 content. 1,2-Distearoyl-sn-glycero-3-phosphoethanolamine (DSPE)-PEG2000, which transferred slowly out of PS liposomes, abolished >80% of the clotting activity of PS liposomes at 15 mol%. This level of DSPE-PEG2000 also extended the mean residence time of PS liposomes from 0.2 h to 14 h. However, PE-PEG2000 with shorter acyl chains, such as 1,2-dimyristoyl-sn-glycero-3-phosphoethanolamine-PEG2000, transferred rapidly out of PS liposomes, which resulted in a 73% decrease in clotting-activity inhibition; 45% of intravenously administered liposomes were removed from the blood within 15 min of injection. Thus, PS facilitates the desorption of PE-PEG2000 from PS-containing liposomes, thereby providing additional control over PEG release rates from membrane surfaces. These results suggest that membrane reactivity can be selectively regulated by surface-grafted PEGs coupled to phosphatidylethanolamine of an appropriate acyl chain length.
An Alumni survey of the School of Social Work, Portland State University
The alumni survey conducted at the Portland State University School of Social Work by second-year students had two purposes. One was to fulfill the research practicum requirements of the Master of Social Work degree by providing experience in applied survey research. The other was to provide a database for future alumni research at the school.
Using XDAQ in Application Scenarios of the CMS Experiment
XDAQ is a generic data acquisition software environment that emerged from a
rich set of use cases encountered in the CMS experiment. These cover the
deployment for multiple sub-detectors, the operation of different processing
and networking equipment, and distributed collaboration among users with
different needs. The use of the software in various application scenarios has
demonstrated the viability of the approach. We discuss two applications: the
tracker local DAQ system for front-end commissioning and the muon chamber
validation system. The description is completed by a brief overview of XDAQ.
Comment: Conference CHEP 2003 (Computing in High Energy and Nuclear Physics),
La Jolla, CA
The CMS Event Builder
The data acquisition system of the CMS experiment at the Large Hadron
Collider will employ an event builder which will combine data from about 500
data sources into full events at an aggregate throughput of 100 GByte/s.
Several architectures and switch technologies have been evaluated for the DAQ
Technical Design Report by measurements with test benches and by simulation.
This paper describes studies of an EVB test-bench based on 64 PCs acting as
data sources and data consumers and employing both Gigabit Ethernet and Myrinet
technologies as the interconnect. In the case of Ethernet, protocols based on
Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies,
including measurements of throughput and scaling, are presented.
The architecture of the baseline CMS event builder will be outlined. The
event builder is organised into two stages with intelligent buffers in between.
The first stage contains 64 switches performing a first level of data
concentration by building super-fragments from fragments of 8 data sources. The
second stage combines the 64 super-fragments into full events. This
architecture allows installation of the second stage of the event builder in
steps, with the overall throughput scaling linearly with the number of switches
in the second stage. Possible implementations of the components of the event
builder are discussed and the expected performance of the full event builder is
outlined.
Comment: Conference CHEP0
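The two-stage flow described in this abstract can be sketched in a few lines: fragments from 8 data sources are first concatenated into a super-fragment, and 64 super-fragments are then merged into a full event. The function names, group sizes (64 switches of 8 sources, i.e. 512 sources, consistent with "about 500"), and byte-string payloads below are illustrative assumptions, not CMS/XDAQ code.

```python
# Illustrative sketch of the two-stage event-builder data flow described in
# the abstract above; all names here are invented for illustration.

def build_super_fragment(fragments):
    """Stage 1: concatenate the fragments of 8 data sources into one super-fragment."""
    assert len(fragments) == 8
    return b"".join(fragments)

def build_event(super_fragments):
    """Stage 2: combine 64 super-fragments into a full event."""
    assert len(super_fragments) == 64
    return b"".join(super_fragments)

# 512 data sources -> 64 groups of 8 -> one full event
sources = [bytes([i % 256]) * 4 for i in range(512)]  # 4-byte dummy fragments
supers = [build_super_fragment(sources[i * 8:(i + 1) * 8]) for i in range(64)]
event = build_event(supers)
```

Because the second stage only consumes super-fragments, switches can be added to it incrementally, which is the linear-scaling property the abstract describes.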
The CMS event builder demonstrator based on Myrinet
The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large, high-performance event-building network. Several switch technologies are currently being evaluated in order to compare different architectures for the event builder. One candidate is Myrinet. This paper describes the demonstrator that has been set up to study a small-scale (8×8) event builder based on a Myrinet switch. Measurements of throughput, overhead, and scaling are presented for various traffic conditions. Results are shown for event building with a push architecture.
A software approach for readout and data acquisition in CMS
Traditional systems dominated by performance constraints tend to neglect other qualities such as maintainability and configurability. Object-orientation allows one to encapsulate the technology differences in communication sub-systems and to provide a uniform view of the data transport layer to the systems engineer. We applied this paradigm to the design and implementation of intelligent data servers in the Compact Muon Solenoid (CMS) data acquisition system at CERN in order to easily exploit the physical communication resources of the available equipment. CMS is a high-energy physics experiment under study that incorporates a highly distributed data acquisition system. This paper outlines the architecture of one part, the so-called Readout Unit, and shows how the object advantage can be exploited in systems with specific data rate requirements. A C++ streams communication layer with zero-copy functionality has been established for UDP, TCP, DLPI, and specific Myrinet and VME bus communication on the VxWorks real-time operating system. This software provides performance close to that of the hardware channel and hides communication details from the application programmers.
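The abstract combines two ideas: a uniform transport interface that hides the underlying protocol (UDP, TCP, DLPI, Myrinet, VME), and zero-copy delivery of data into application buffers. The real layer is C++ on VxWorks; the Python sketch below only illustrates the shape of such an interface, using `recvfrom_into` to write datagrams directly into a caller-supplied buffer. All class names are hypothetical and unrelated to the actual CMS software.

```python
import socket

# Hypothetical uniform transport interface in the spirit of the C++ streams
# layer described above; subclasses encapsulate the protocol differences.
class Transport:
    def send(self, buf):
        raise NotImplementedError
    def recv_into(self, buf):
        raise NotImplementedError

class UdpTransport(Transport):
    """UDP-backed transport; other subclasses could wrap TCP, Myrinet, etc."""
    def __init__(self, sock, peer=None):
        self.sock = sock
        self.peer = peer
    def send(self, buf):
        return self.sock.sendto(buf, self.peer)
    def recv_into(self, buf):
        # recvfrom_into writes directly into the caller-supplied buffer,
        # avoiding the extra copy that recv() plus bytes construction incurs
        nbytes, _addr = self.sock.recvfrom_into(buf)
        return nbytes

# Loopback round trip: the application sees only Transport, not UDP details.
rx_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx_sock.bind(("127.0.0.1", 0))
tx = UdpTransport(socket.socket(socket.AF_INET, socket.SOCK_DGRAM),
                  peer=rx_sock.getsockname())
rx = UdpTransport(rx_sock)

tx.send(b"event-fragment")
buf = bytearray(64)
n = rx.recv_into(memoryview(buf))
```

Swapping `UdpTransport` for a TCP-backed subclass would leave the application code unchanged, which is the portability benefit the paper attributes to this design.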
A Call to Standardize the Definition and Method of Assessing Women for Vaginal Discharge Syndrome in Pregnancy.