17 research outputs found

    Selective cytotoxicity of a novel immunotoxin based on pulchellin A chain for cells expressing HIV envelope.

    Immunotoxins (ITs), which consist of antibodies conjugated to toxins, have been proposed as a treatment for cancer and chronic infections. To develop and improve ITs, different toxins, such as ricin, have been used in pursuit of higher efficacy against target cells. The toxin pulchellin, isolated from the plant Abrus pulchellus, has a structure and function similar to those of ricin. Here we compared two plant toxins, the recombinant A chains from ricin (RAC) and pulchellin (PAC), for their ability to kill HIV Env-expressing cells. In this study, RAC and PAC were produced in E. coli, purified chromatographically, and then chemically conjugated to one of two anti-HIV monoclonal antibodies (MAbs): anti-gp120 MAb 924 or anti-gp41 MAb 7B2. These conjugates were characterized biochemically and immunologically. Cell internalization was studied by flow cytometry and confocal microscopy. Results showed that PAC can function within an effective IT. The ITs demonstrated specific binding against native antigens on persistently HIV-infected cells and recombinant antigens on Env-transfected cells. PAC cytotoxicity appears somewhat lower than that of RAC, the standard for comparison. This is the first report that PAC may have utility for the design and construction of therapeutic ITs, highlighting its potential role in specific cell targeting.

    Magnetic nanoparticles containing labeling reagents for cell surface mapping

    Cell surface proteins play an important role in cell-cell communication, cell signaling pathways, cell division, and the molecular pathogenesis of various diseases. Commonly used biotinylation reagents for cell surface mapping have potential drawbacks, such as crossing the cell membrane, difficult recovery of biotinylated proteins from streptavidin/avidin beads, interference from endogenous biotin, and the nonspecific nature of streptavidin. With the aim of solving these problems, we introduced sulfo-N-hydroxysuccinimidyl (NHS) ester functionalized magnetic nanoparticles containing cleavable groups to label the solvent-exposed primary amine groups of proteins. Silica-coated iron oxide magnetic nanoparticles (Fe3O4@SiO2 MNPs) were linked to NHS ester groups via a cleavable disulfide bond. Additionally, the superparamagnetic properties of Fe3O4@SiO2 MNPs facilitate efficient separation of the labeled peptides and removal of the detergent without any extra purification step. In the last step, the disulfide bond between the labeled peptides and the MNPs was cleaved to release the labeled peptides. The disulfide-linked, NHS ester modified Fe3O4@SiO2 MNPs were tested using a small peptide and a model protein (bovine serum albumin), followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis of the labeled peptides. Next, the disulfide-linked, NHS ester modified Fe3O4@SiO2 MNPs (150 nm) successfully labeled the solvent-exposed cell surface peptides of Saccharomyces cerevisiae. Electron microscopic analysis confirmed the cell surface binding of the NHS ester modified Fe3O4@SiO2 MNPs. Mass spectrometric analysis revealed the presence of 30 unique proteins containing 56 peptides. Another MNP-based labeling reagent was developed to target the solvent-exposed carboxylic acid residues of peptides and proteins. The surface of the Fe3O4@SiO2 MNPs was modified with free amine groups via a disulfide bond. Solvent-exposed carboxyl groups of ACTH 4-11 and BSA were labeled using 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) chemistry. Upon cleaving the disulfide bond, the labeled peptides were analyzed by LC-MS/MS. The MNP-based labeling reagents offer specific labeling under physiological conditions and rapid magnetic separation of labeled peptides prior to mass spectrometric analysis. The ability of large Fe3O4@SiO2 MNPs to attach specifically to the cell surface makes them potential candidates for studying the surfaces of a variety of cell types and complex proteins surrounded by a lipid bilayer.

    Scaling Hierarchical N-body Simulations on GPU Clusters

    Abstract — This paper focuses on the use of GPGPU-based clusters for hierarchical N-body simulations. Whereas the behavior of these hierarchical methods has been studied in the past on CPU-based architectures, we investigate key performance issues in the context of clusters of GPUs. These include kernel organization and efficiency, the balance between tree traversal and force computation work, grain size selection through the tuning of offloaded work request sizes, and the reduction of sequential bottlenecks. The effects of various application parameters are studied, and experiments are conducted to quantify gains in performance. Our studies are carried out in the context of a production-quality parallel cosmological simulator called ChaNGa. We highlight the re-engineering of the application to make it more suitable for GPU-based environments. Finally, we present performance results from experiments on the NCSA Lincoln GPU cluster, including a note on GPU use in multistepped simulations.
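    The abstract's split between tree traversal and force computation can be illustrated with a minimal Barnes-Hut-style sketch. This is an assumed toy version, not ChaNGa's production code: traversal decides which cells to open, and force computation evaluates the accepted cell-body interactions.

    ```python
    import math

    THETA = 0.5   # opening-angle parameter: larger values trade accuracy for speed
    G = 1.0       # gravitational constant in arbitrary units
    EPS = 1e-9    # softening term to avoid division by zero

    class Cell:
        """A square region holding either a single body or up to four sub-cells."""
        def __init__(self, cx, cy, size):
            self.cx, self.cy, self.size = cx, cy, size
            self.mass = 0.0
            self.mx = self.my = 0.0   # mass-weighted coordinate sums
            self.children = None      # None marks a leaf
            self.body = None          # (x, y, m) for an occupied leaf

    def build(bodies, cx, cy, size):
        """Recursively partition bodies into a quadtree rooted at (cx, cy)."""
        cell = Cell(cx, cy, size)
        for x, y, m in bodies:
            cell.mass += m
            cell.mx += m * x
            cell.my += m * y
        if len(bodies) == 1:
            cell.body = bodies[0]
            return cell
        half = size / 2.0
        quadrants = {}
        for b in bodies:
            quadrants.setdefault((b[0] >= cx, b[1] >= cy), []).append(b)
        cell.children = [
            build(bs,
                  cx + (half / 2.0 if east else -half / 2.0),
                  cy + (half / 2.0 if north else -half / 2.0),
                  half)
            for (east, north), bs in quadrants.items()
        ]
        return cell

    def force(cell, x, y):
        """Tree traversal: distant cells are approximated by their center of mass."""
        if cell.mass == 0.0:
            return 0.0, 0.0
        dx = cell.mx / cell.mass - x
        dy = cell.my / cell.mass - y
        r = math.sqrt(dx * dx + dy * dy) + EPS
        if cell.children is None or cell.size / r < THETA:
            if cell.body is not None and r < 1e-6:
                return 0.0, 0.0           # skip the body's interaction with itself
            f = G * cell.mass / r ** 3    # force computation on an accepted cell
            return f * dx, f * dy
        fx = fy = 0.0
        for child in cell.children:       # open the cell: traverse its children
            cfx, cfy = force(child, x, y)
            fx += cfx
            fy += cfy
        return fx, fy

    bodies = [(0.1, 0.2, 1.0), (0.8, 0.9, 2.0), (0.5, 0.4, 1.5)]
    root = build(bodies, 0.5, 0.5, 1.0)
    print(force(root, 0.1, 0.2))   # net force on the body at (0.1, 0.2)
    ```

    On a GPU cluster, the traversal (branchy, irregular) and the force evaluations (regular, data-parallel) have very different characters, which is why the paper treats their balance and the size of offloaded work requests as tuning parameters.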

    Integrating multiple clusters for compute-intensive applications

    Multicluster grids provide one promising solution to satisfying the growing computational demands of compute-intensive applications. However, it is challenging to seamlessly integrate all participating clusters in different domains into a single virtual computational platform. In order to fully utilize the capabilities of multicluster grids, computer scientists need to address the issue of joining the participating autonomic systems together practically and efficiently to execute grid-enabled applications. Driven by several compute-intensive applications, this thesis develops a multicluster grid management toolkit called Pelecanus to bridge the gap between users' needs and the system's heterogeneity. Application scientists are thus able to conduct very large-scale executions across multiclusters with transparent QoS assurance. A novel model called DA-TC (Dynamic Assignment with Task Containers) is developed and integrated into Pelecanus. This model uses the concept of a task container, which decouples resource allocation from resource binding. It employs static load balancing for task container distribution and dynamic load balancing for task assignment. In this manner, the slowest resources become useful rather than becoming bottlenecks. A cluster abstraction is implemented, which not only provides various cluster information for the DA-TC execution model but can also be used as a standalone toolkit to monitor and evaluate the clusters' functionality and performance. The performance of the proposed DA-TC model is evaluated both theoretically and experimentally. Results demonstrate the importance of reducing queuing time in decreasing the total turnaround time of an application. Experiments were conducted to understand the performance of various aspects of the DA-TC model; they showed that the model can significantly reduce turnaround time and increase resource utilization for the targeted application scenarios. Four applications are implemented as case studies to determine the applicability of the DA-TC model. In each case the turnaround time is greatly reduced, which demonstrates that the DA-TC model is efficient in assisting application scientists in conducting their research. In addition, virtual resources were integrated into the DA-TC model for application execution. Experiments show that the proposed execution model can work seamlessly with multiple hybrid grid/cloud resources to achieve reduced turnaround time.
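    The core DA-TC idea can be sketched in a few lines. The names and numbers below are illustrative, not from the thesis: task containers are distributed statically (a fixed number per cluster), while tasks are assigned to containers dynamically from a shared queue, so a slow cluster simply pulls fewer tasks instead of becoming a bottleneck.

    ```python
    import queue
    import threading
    import time

    def run(clusters, tasks):
        """clusters: {name: (n_containers, relative_speed)}; tasks: durations."""
        pending = queue.Queue()
        for t in tasks:
            pending.put(t)
        done, lock = [], threading.Lock()

        def container(name, speed):
            while True:
                try:
                    task = pending.get_nowait()    # dynamic task assignment
                except queue.Empty:
                    return
                time.sleep(task / speed / 1000.0)  # simulated execution time
                with lock:
                    done.append((name, task))

        threads = [
            threading.Thread(target=container, args=(f"{cname}-{i}", speed))
            for cname, (n_containers, speed) in clusters.items()
            for i in range(n_containers)           # static container distribution
        ]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return done

    # Two containers on a fast cluster, one on a slow cluster.
    completions = run({"fast": (2, 10.0), "slow": (1, 1.0)}, list(range(1, 11)))
    print(len(completions), "tasks completed")
    ```

    Because binding happens only when a container pulls a task, the fast cluster naturally absorbs most of the work, which is the mechanism behind the reduced turnaround times reported above.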

    Polycation-based gene delivery


    Constraint Programming-based Job Dispatching for Modern HPC Applications

    A High-Performance Computing (HPC) job dispatcher is a critical software component that assigns finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time they require cannot exceed a threshold without affecting normal system functioning. In addition, a job dispatcher must deal with considerable uncertainty: submission times, the number of requested resources, and the duration of jobs. Heuristic-based techniques have been broadly used in HPC systems; they obtain solutions in a short time, at the cost of (sub-)optimality. Moreover, their scheduling and resource allocation components are separated, yielding decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems run modern applications, such as big data analytics and predictive model building, that in general employ many short jobs. This information is unknown at dispatching time, however, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to satisfy the challenges of on-line dispatching, such as generating dispatching decisions within a short time and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are more suitable for HPC systems running modern applications, generating on-line dispatching decisions in a timely manner and making effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
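    The constraint at the heart of such a dispatching model is the cumulative resource constraint. As a toy illustration (the data and objective are assumed, not from the thesis), each job's start time can be treated as a decision variable over a small horizon, with assignments kept only if core usage never exceeds capacity; a real CP solver would propagate this constraint instead of enumerating.

    ```python
    import itertools

    CORES, HORIZON = 4, 8
    jobs = [(3, 2), (2, 2), (2, 3), (1, 1)]   # (duration, requested cores)

    def feasible(starts):
        """Cumulative resource constraint: core usage never exceeds capacity."""
        if any(s + d > HORIZON for (d, _), s in zip(jobs, starts)):
            return False
        for t in range(HORIZON):
            used = sum(c for (d, c), s in zip(jobs, starts) if s <= t < s + d)
            if used > CORES:
                return False
        return True

    # Enumerate all start-time assignments and keep the feasible one that
    # minimizes total waiting time (sum of start times).
    best = min(
        (s for s in itertools.product(range(HORIZON), repeat=len(jobs))
         if feasible(s)),
        key=sum,
    )
    print(best, sum(best))
    ```

    Brute force is exponential in the number of jobs, which is exactly why the on-line setting needs either fast heuristics or a CP solver whose propagation prunes most of this search space within the dispatching time budget.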

    Dynamic task scheduling and binding for many-core systems through stream rewriting

    This thesis proposes a novel model of computation, called stream rewriting, for the specification and implementation of highly concurrent applications. Basically, the active tasks of an application and their dependencies are encoded as a token stream, which is iteratively modified at runtime by a set of rewriting rules, i.e., by repeated search and replace. In order to estimate the performance and scalability of stream rewriting, a large number of experiments were evaluated on many-core systems, and the task management was implemented in both software and hardware.
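    The token encoding is not given in the abstract, so the following is an assumed toy version of the idea: the "stream" is a list of task tokens, and each pass rewrites every token that matches a rule into the tokens of its successor tasks, until only terminal tokens remain.

    ```python
    # Hypothetical rule set: an active task token expands into subtasks.
    rules = {
        "sum4": ["add", "sum2", "sum2"],
        "sum2": ["add", "1", "1"],
    }

    def step(stream):
        """One rewriting pass (search and replace) over the token stream."""
        out = []
        for token in stream:
            out.extend(rules.get(token, [token]))   # rewrite if a rule matches
        return out

    def run(stream):
        """Iterate until no active task tokens remain."""
        while any(token in rules for token in stream):
            stream = step(stream)
        return stream

    print(run(["sum4"]))   # the fully expanded stream of terminal tokens
    ```

    Each pass touches only local windows of the stream, which is what makes the model amenable to the parallel software and hardware task managers evaluated in the thesis.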

    Fifth Biennial Report : June 1999 - August 2001


    Design and evaluation of a Thread-Level Speculation runtime library

    In the coming years, it is more than likely that machines with hundreds or even thousands of processors will be commonplace. To take advantage of these machines, and given the difficulty of parallel programming, it would be desirable to have compilation or runtime systems that extract all the available parallelism from existing applications. Accordingly, many parallelization techniques have recently been proposed. However, most of them focus on simple codes, that is, codes without dependences among their instructions. Speculative parallelization emerges as a solution for complex codes, enabling the execution of any kind of code, with or without dependences. This technique optimistically assumes that the parallel execution of any kind of code will not lead to errors and therefore requires a mechanism to detect any collision. To this end, it relies on a monitor that constantly checks that the execution is not erroneous, ensuring that the results obtained in parallel match those of any sequential execution. If the execution turns out to be erroneous, the threads are stopped and restarted to ensure that execution follows sequential semantics. 
    Our contributions in this field include (1) a new, easy-to-use speculative runtime library; (2) new proposals that significantly reduce the number of accesses required by speculative operations, along with guidance for reducing memory usage; (3) proposals to improve scheduling methods, focused on the dynamic management of the iteration blocks used in speculative executions; (4) a hybrid solution that uses transactional memory to implement the critical sections of a speculative parallelization library; and (5) an analysis of speculative techniques on one of today's most advanced devices, the Intel Xeon Phi coprocessor. As we have seen, speculative parallelization is an active research field. Our results show that this technique delivers performance improvements in a large number of applications. We hope this work contributes to easing the adoption of speculative solutions in commercial compilers and/or shared-memory parallel programming models. Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos)
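    The monitor-and-restart mechanism described above can be sketched as follows. This is an illustrative sequential simulation, not the thesis library: iterations run optimistically against a snapshot, the monitor records each iteration's read and write sets, and commit proceeds in sequential order, re-executing any iteration that read a location written by an earlier iteration.

    ```python
    def speculative_run(n, body, shared):
        # Phase 1: optimistic execution of all iterations against a snapshot
        # (stands in for the parallel threads of a real speculative runtime).
        snapshot = dict(shared)
        results = {}
        for i in range(n):
            reads, writes = set(), {}
            body(i, snapshot, reads, writes)
            results[i] = (reads, writes)
        # Phase 2: in-order commit with dependence checking.
        committed = set()   # locations written by committed iterations
        restarts = 0
        for i in range(n):
            reads, writes = results[i]
            if reads & committed:                # dependence violation detected
                restarts += 1
                reads, writes = set(), {}
                body(i, shared, reads, writes)   # re-execute on committed state
            shared.update(writes)
            committed.update(writes)
        return restarts

    # A loop with a true cross-iteration dependence: a[i] = a[i-1] + 1.
    def body(i, mem, reads, writes):
        reads.add(i - 1)
        writes[i] = mem.get(i - 1, 0) + 1

    shared = {-1: 0}
    restarts = speculative_run(4, body, shared)
    print(shared, restarts)
    ```

    With this fully dependent loop every iteration after the first is squashed and re-executed, reproducing the sequential result; a loop with no cross-iteration reads would commit all iterations without restarts, which is where the speculative approach wins.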