
    Syntheses of some silicate mineral structures containing Mn³⁺ ; Groundwater requiring protective landscaping: model trials at Dalrymple, North Queensland ; Natural supply of phosphorus from basalt controlled by rejuvenated landscapes along the Burdekin River, Queensland

    Contents
    1. Syntheses of some silicate mineral structures containing Mn³⁺ -- D.J. DRYSDALE, p. 1-5
    2. Groundwater requiring protective landscaping: model trials at Dalrymple, North Queensland -- E.J. HEIDECKER, p. 6-18
    3. Natural supply of phosphorus from basalt controlled by rejuvenated landscapes along the Burdekin River, Queensland -- E.J. HEIDECKER, p. 19-2

    Provisioning of data locality for HEP analysis workflows

    The heavily increasing amount of data produced by current experiments in high energy particle physics challenges both end users and providers of computing resources. The boosted data rates and the complexity of analyses require huge datasets to be processed in short turnaround cycles. Usually, data storage and computing farms are deployed by different providers, which leads to data delocalization and a strong dependence on interconnection transfer rates. The CMS collaboration at KIT has developed a prototype enabling data locality for HEP analysis processing via two concepts. A coordinated and distributed caching approach that reduces the limiting factor of data transfers by joining local high-performance devices with large background storages was tested. Thereby, a throughput optimization was reached by selecting and allocating critical data within user workflows. A highly performant setup using these caching solutions enables fast processing of throughput-dependent analysis workflows.
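The joining of a fast local device with a large background storage described in the abstract can be pictured as a read-through cache. The following is a minimal illustrative sketch, not the actual KIT prototype; all names and the LRU policy are assumptions for demonstration.

```python
class ReadThroughCache:
    """Serve files from a fast local store when cached; otherwise fetch
    from the large background storage and cache the result (LRU eviction)."""

    def __init__(self, remote_fetch, capacity):
        self.remote_fetch = remote_fetch  # callable: filename -> bytes (simulated WAN read)
        self.capacity = capacity          # max number of files on the local device
        self.store = {}                   # simulated local high-performance device
        self.order = []                   # least-recently-used bookkeeping

    def read(self, filename):
        if filename in self.store:        # cache hit: no transfer over the interconnection
            self.order.remove(filename)
            self.order.append(filename)
            return self.store[filename]
        data = self.remote_fetch(filename)    # cache miss: pull from background storage
        if len(self.store) >= self.capacity:  # evict the least recently used file
            evicted = self.order.pop(0)
            del self.store[evicted]
        self.store[filename] = data
        self.order.append(filename)
        return data
```

Repeated reads of the same "critical" files then run at local-device speed, which is the throughput optimization the abstract refers to.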

    Boosting Performance of Data-intensive Analysis Workflows with Distributed Coordinated Caching

    Data-intensive end-user analyses in high energy physics require high data throughput to reach short turnaround cycles. This leads to enormous challenges for storage and network infrastructure, especially when facing the tremendously increasing amount of data to be processed during High-Luminosity LHC runs. Including opportunistic resources with volatile storage systems in the traditional HEP computing facilities makes this situation more complex. Bringing data close to the computing units is a promising approach to solving throughput limitations and improving overall performance. We focus on coordinated distributed caching, steering workflows to the hosts most suitable in terms of cached files. This allows optimizing the overall processing efficiency of data-intensive workflows and using the limited cache volume efficiently by reducing the replication of data across distributed caches. We developed the NaviX coordination service at KIT, which realizes coordinated distributed caching using an XRootD cache proxy server infrastructure and the HTCondor batch system. In this paper, we present the experience gained in operating coordinated distributed caches on cloud and HPC resources. Furthermore, we show benchmarks of a dedicated high-throughput cluster, the Throughput-Optimized Analysis-System (TOpAS), which is based on the above-mentioned concept.
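The core coordination idea above, matching each workflow to the host that already caches most of its input files, can be sketched in a few lines. This is an illustration of the cache-affinity principle only, not the NaviX implementation; the function and data layout are assumptions.

```python
def best_host(workflow_inputs, host_caches):
    """Pick the host whose cache overlaps most with the workflow's input
    files; ties resolve to the first host listed. Routing by overlap keeps
    file replication across the distributed caches low.

    workflow_inputs: list of input file names for one workflow
    host_caches:     dict mapping host name -> list of cached file names
    """
    needed = set(workflow_inputs)
    scores = {host: len(needed & set(cached))
              for host, cached in host_caches.items()}
    return max(scores, key=scores.get)
```

A workflow whose inputs are already cached on one host is scheduled there, so other hosts never need to pull (and re-cache) the same files.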

    Dynamic Resource Extension for Data Intensive Computing with Specialized Software Environments on HPC Systems

    Modern High Energy Physics (HEP) requires large-scale processing of extensive amounts of scientific data. The needed computing resources are currently provided statically by HEP-specific computing centers. To increase the number of available resources, for example to cover peak loads, the HEP computing development team at KIT concentrates on the dynamic integration of additional computing resources into the HEP infrastructure. Therefore, we developed ROCED, a tool to dynamically request and integrate computing resources, including resources at HPC centers and commercial cloud providers. Since these resources usually do not support HEP software natively, we rely on virtualization and container technologies, which allow us to run HEP workflows on these so-called opportunistic resources. Additionally, we study the efficient processing of huge amounts of data on a distributed infrastructure, where the data is usually stored at HEP-specific data centers and is accessed remotely over WAN. To optimize the overall data throughput and to increase the CPU efficiency, we are currently developing an automated caching system for frequently used data that is transparently integrated into the distributed HEP computing infrastructure.
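The dynamic requesting of resources to cover peak loads, as described above, amounts to a simple demand calculation. The sketch below is a hedged illustration of that idea, not ROCED's actual logic; the function name, inputs, and thresholds are assumptions.

```python
def slots_to_request(queued_jobs, free_static_slots,
                     max_opportunistic, active_opportunistic):
    """Number of additional opportunistic slots to request from cloud or
    HPC providers: the demand the static resources cannot absorb, capped
    by the configured provider limit."""
    demand = max(0, queued_jobs - free_static_slots)      # jobs with no static slot
    headroom = max_opportunistic - active_opportunistic   # provider budget left
    return min(demand, headroom)
```

When the queue drains, demand falls to zero and no further opportunistic resources are requested, so the dynamically integrated slots can be released again.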

    Federation of compute resources available to the German CMS community

    The German CMS community (DCMS) as a whole can benefit from the various compute resources available to its different institutes. While Grid-enabled and National Analysis Facility resources are usually shared within the community, local and recently enabled opportunistic resources like HPC centers and cloud resources are not. Furthermore, there is no shared submission infrastructure available. Via HTCondor's [1] mechanisms to connect resource pools, several remote pools can be connected transparently to the users and therefore used more efficiently by a multitude of user groups. In addition to the statically provisioned resources, dynamically allocated resources from external cloud providers as well as HPC centers can also be integrated. However, the usage of such dynamically allocated resources gives rise to additional complexity. Constraints on access policies of the resources, as well as workflow necessities, have to be taken care of. To maintain a well-defined and reliable runtime environment on each resource, virtualization and containerization technologies such as virtual machines, Docker, and Singularity are used.

    Mastering Opportunistic Computing Resources for HEP

    As a result of the excellent LHC performance in 2016, more data than expected has been recorded, leading to a higher demand for computing resources. It is already foreseeable that for the current and upcoming run periods a flat computing budget and the expected technology advance will not be sufficient to meet the future requirements. This results in a growing gap between supplied and demanded resources. One option to reduce the emerging lack of computing resources is the utilization of opportunistic resources such as local university clusters, public and commercial cloud providers, HPC centers, and volunteer computing. However, to use opportunistic resources, additional challenges have to be tackled. At the Karlsruhe Institute of Technology (KIT), an infrastructure to dynamically use opportunistic resources is being built up. In this paper, tools, experiences, future plans, and possible improvements are discussed.

    Design of E. coli expressed stalk domain immunogens of H1N1 HA that protect mice from lethal challenge

    The hemagglutinin protein (HA) on the surface of influenza virus is essential for viral entry into the host cells. The HA1 subunit of HA is also the primary target for neutralizing antibodies. The HA2 subunit is less exposed on the virion surface and more conserved than HA1. We have previously designed an HA2-based immunogen derived from the sequence of the H3N2 A/HK/68 virus. In the present study we report the design of an HA2-based immunogen from the H1N1 subtype (PR/8/34). This immunogen (H1HA0HA6) and its circular permutant (H1HA6) were well folded and provided complete protection against homologous viral challenge. Antisera of immunized mice showed cross-reactivity with HA proteins of different strains and subtypes. Although no neutralization was observable in a conventional neutralization assay, sera of immunized guinea pigs competed with the broadly neutralizing antibody CR6261 for binding to recombinant Viet/04 HA protein, suggesting that CR6261-like antibodies were elicited by the immunogens. Stem domain immunogens from a seasonal H1N1 strain (A/NC/20/99) and a recent pandemic strain (A/Cal/07/09) provided cross-protection against A/PR/8/34 viral challenge. HA2-containing stem domain immunogens therefore have the potential to provide subtype-specific protection.

    HEP Analyses on Dynamically Allocated Opportunistic Computing Resources

    The current experiments in high energy physics (HEP) produce a huge data rate. To process the measured data, an enormous number of computing resources is needed, and this need will further increase with upgraded and newer experiments. To fulfill the ever-growing demand, the allocation of additional, potentially only temporarily available, non-HEP-dedicated resources is important. These so-called opportunistic resources can not only be used for analyses in general but are also well suited to cover the typical unpredictable peak demands for computing resources. For both use cases, the temporary availability of the opportunistic resources requires dynamic allocation, integration, and management, while their heterogeneity requires optimization to maintain high resource utilization by allocating the best-matching resources. Finding the best-matching resources to allocate is challenging due to the unpredictable submission behavior as well as an ever-changing mixture of workflows with different requirements. Instead of predicting the best-matching resource, we base our decisions on the utilization of resources. For this reason, we are developing the resource manager TARDIS (Transparent Adaptive Resource Dynamic Integration System), which manages and dynamically requests or releases resources. The decision of how many resources TARDIS has to request is implemented in COBalD (the Opportunistic Balancing Daemon) to ensure further allocation of well-used resources while reducing the amount of insufficiently used ones. TARDIS allocates and manages resources from various resource providers such as HPC centers or commercial and public clouds while ensuring a dynamic allocation and efficient utilization of these heterogeneous opportunistic resources. Furthermore, TARDIS integrates the allocated opportunistic resources into one overlay batch system, which provides a single point of entry for all users.
    In order to provide the dedicated HEP software environment, we use virtualization and container technologies. In this contribution, we give an overview of the dynamic integration of opportunistic resources via TARDIS/COBalD in our HEP institute as well as how user analyses benefit from these additional resources.
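The utilization-driven decision described above, keep requesting well-used resources and release insufficiently used ones, can be sketched as a small feedback rule. This is an illustrative simplification under assumed names and thresholds, not COBalD's actual API or policy.

```python
def balance(resource_utilization, low=0.2, high=0.8):
    """Given per-resource utilization in [0, 1], return the resources to
    release (insufficiently used) and how many additional slots to
    request (one per well-used resource). Thresholds are assumptions."""
    to_release = [name for name, util in resource_utilization.items()
                  if util < low]                      # poorly used: give back
    extra_requests = sum(1 for util in resource_utilization.values()
                         if util > high)              # busy: ask for more like it
    return to_release, extra_requests
```

Run periodically, such a rule converges toward a pool in which only resources that the current workflow mixture actually keeps busy remain allocated, without ever predicting future demand.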

    Exploring the Anticancer Activity of Tamoxifen-Based Metal Complexes Targeting Mitochondria

    Two new 'hybrid' metallodrugs of Au(III) (AuTAML) and Cu(II) (CuTAML) were designed featuring a tamoxifen-derived pharmacophore to ideally synergize the anticancer activity of both the metal center and the organic ligand. The compounds have antiproliferative effects against human MCF-7 and MDA-MB-231 breast cancer cells. Molecular dynamics studies suggest that the compounds retain the binding activity to the estrogen receptor (ERα). In vitro and in silico studies showed that the Au(III) derivative is an inhibitor of the seleno-enzyme thioredoxin reductase, while the Cu(II) complex may act as an oxidant of different intracellular thiols. In breast cancer cells treated with the compounds, a redox imbalance characterized by a decrease in total thiols and increased reactive oxygen species production was detected. Despite their different reactivities and cytotoxic potencies, a great capacity of the metal complexes to induce mitochondrial damage was observed, as shown by their effects on mitochondrial respiration, membrane potential, and morphology.