
    Modelling of Amperometric Biosensor Used for Synergistic Substrates Determination

    In this paper the operation of an amperometric biosensor producing a chemically amplified signal is modelled numerically. The chemical amplification is achieved by using synergistic substrates. The model is based on non-stationary reaction-diffusion equations and involves three layers (compartments): a layer of enzyme solution entrapped on the electrode surface, a dialysis membrane covering the enzyme layer, and an outer diffusion layer modelled by the Nernst approach. The equation system is solved numerically using the finite difference technique. The biosensor response and sensitivity are investigated by altering the model parameters that influence the enzyme kinetics as well as the mass transport by diffusion. The biosensor action is analysed with special emphasis on the effect of the chemical amplification. The simulation results qualitatively explain and confirm the experimentally observed effect of the synergistic substrate conversion on the biosensor response.
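
    As a rough illustration of the numerical approach described above, the sketch below solves a single-layer reaction-diffusion problem with an explicit finite-difference scheme. It assumes plain Michaelis-Menten kinetics in one enzyme layer with illustrative parameter values; the paper's actual model has three compartments and synergistic-substrate kinetics, so this is only a minimal, simplified analogue.

        import numpy as np

        # Illustrative parameters -- not values from the paper.
        D_S, D_P = 3e-10, 3e-10      # diffusion coefficients, m^2/s
        V_max = 1e-4                 # maximal enzymatic rate, mol/(m^3 s)
        K_M = 0.1                    # Michaelis constant, mol/m^3
        S_bulk = 0.5                 # bulk substrate concentration, mol/m^3
        d = 1e-4                     # enzyme-layer thickness, m
        N = 100                      # spatial intervals
        dx = d / N
        dt = 0.4 * dx**2 / max(D_S, D_P)   # explicit-scheme stability limit
        F, n_e = 96485.0, 2          # Faraday constant; electrons per product molecule

        S = np.zeros(N + 1)          # substrate concentration profile
        P = np.zeros(N + 1)          # product concentration profile
        S[-1] = S_bulk               # outer boundary held at the bulk value

        def step(S, P):
            # One explicit time step of dS/dt = D_S*S'' - v, dP/dt = D_P*P'' + v.
            v = V_max * S[1:-1] / (K_M + S[1:-1])            # Michaelis-Menten rate
            lap = lambda C: (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
            S_new, P_new = S.copy(), P.copy()
            S_new[1:-1] += dt * (D_S * lap(S) - v)
            P_new[1:-1] += dt * (D_P * lap(P) + v)
            S_new[0] = S_new[1]      # zero substrate flux at the electrode (x = 0)
            P_new[0] = 0.0           # product consumed electrochemically at x = 0
            S_new[-1], P_new[-1] = S_bulk, 0.0               # well-stirred bulk
            return S_new, P_new

        for _ in range(200_000):     # march to (near) steady state
            S, P = step(S, P)

        # Biosensor current density from the product gradient at the electrode.
        i = n_e * F * D_P * (P[1] - P[0]) / dx
        print(f"steady-state current density: {i:.3e} A/m^2")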

    Inhibited enzymatic reaction of crosslinked lactate oxidase through a pH-dependent mechanism

    Lactate oxidase (LOx), known to selectively catalyse lactate oxidation in complex matrices, has been highlighted as a preferred biorecognition element for the development of lactate biosensors. In a previous work, we demonstrated that crosslinking LOx on a modified screen-printed electrode yields a dual-range lactate biosensor, with one of the linear analysis ranges (4 to 50 mM) compatible with lactate levels in sweat. It was proposed that such behaviour results from an atypical substrate-inhibition process. To understand this inhibition phenomenon, this work studies the LOx structure when subjected to increasing substrate concentrations. Fluorescence spectroscopy and dynamic light scattering of LOx solutions evidenced conformational changes of the enzyme in the presence of inhibitory substrate concentrations. The inhibition behaviour found at the biosensor is therefore an outcome of LOx structural alterations arising from a pH-dependent mechanism promoted at high substrate concentrations. Funding: Spanish Ministry of Science and Innovation (MICINN), Ministry of Economy and Competitiveness (MINECO) and the European Regional Development Fund (FEDER) (TEC2013-40561-P and MUSSEL RTC-2015-4077-2). Hugo Cunha-Silva acknowledges funding from the Spanish Ministry of Economy (BES-2014-068214).
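
    For intuition about how substrate inhibition can produce a dual-range response, the sketch below evaluates the standard uncompetitive substrate-inhibition rate law v = Vmax*S/(KM + S + S^2/Ki). The parameter values are purely illustrative and are not fitted to LOx, and the paper attributes the inhibition to a pH-dependent conformational mechanism rather than to this textbook form.

        import numpy as np

        # Textbook substrate-inhibition model; illustrative constants only.
        V_max = 1.0    # arbitrary units
        K_M = 0.7      # mM
        K_i = 25.0     # mM -- inhibition constant; high S suppresses the rate

        def rate(S_mM):
            S = np.asarray(S_mM, dtype=float)
            return V_max * S / (K_M + S + S**2 / K_i)

        S = np.array([0.5, 2.0, 4.0, 10.0, 25.0, 50.0])
        for s, v in zip(S, rate(S)):
            print(f"[lactate] = {s:5.1f} mM -> v = {v:.3f}")
        # The rate peaks near S = sqrt(K_M * K_i) (about 4 mM here) and falls
        # beyond it, which is how a substrate-inhibited electrode can show two
        # distinct quasi-linear response ranges.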

    A scalable online monitoring system based on Elasticsearch for distributed data acquisition in CMS

    The part of the CMS data acquisition (DAQ) system responsible for data readout and event building is a complex network of interdependent distributed applications. To ensure successful data taking, these programs have to be constantly monitored so that any deviation from specified behaviour can be corrected in a timely manner. A large number of diverse monitoring data samples are periodically collected from multiple sources across the network. Monitoring data are kept in memory for online operations and optionally stored on disk for post-mortem analysis. We present a generic, reusable solution based on an open-source NoSQL database, Elasticsearch, which is fully compatible and non-intrusive with respect to the existing system. The motivation is to benefit from off-the-shelf software and thereby reduce development, maintenance and support effort. Elasticsearch provides failover and data-redundancy capabilities as well as a programming-language-independent JSON-over-HTTP interface. Its capacity for horizontal scaling matches the requirements of a DAQ monitoring system. The data load from all sources is balanced by redistribution over an Elasticsearch cluster that can be hosted on a computing cloud. In order to achieve the necessary robustness and to validate the scalability of the approach, this monitoring solution currently runs in parallel with the existing in-house-developed DAQ monitoring system.
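
    Because Elasticsearch exposes a language-independent JSON-over-HTTP interface, any DAQ application can push monitoring samples with plain HTTP requests. The sketch below is a minimal illustration in Python; the endpoint, index name, host name and document schema are assumptions made for the example, not the actual CMS monitoring layout.

        import json
        import time
        import requests

        ES = "http://localhost:9200"   # Elasticsearch endpoint (assumed local)
        INDEX = "daq-monitoring"       # hypothetical index name

        # Index one monitoring sample: plain JSON over HTTP, so any language in
        # the DAQ system can produce documents the same way. The _doc endpoint
        # is the modern form; older versions used an explicit mapping type.
        doc = {
            "timestamp": int(time.time() * 1000),
            "host": "ru-builder-042",  # hypothetical event-builder node
            "metric": "event_rate_hz",
            "value": 101300.0,
        }
        r = requests.post(f"{ES}/{INDEX}/_doc", json=doc)
        r.raise_for_status()

        # Retrieve the most recent samples for one host with a search query.
        query = {
            "query": {"term": {"host": "ru-builder-042"}},
            "sort": [{"timestamp": "desc"}],
            "size": 10,
        }
        hits = requests.post(f"{ES}/{INDEX}/_search", json=query).json()["hits"]["hits"]
        print(json.dumps([h["_source"] for h in hits], indent=2))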

    Catalytic and Inhibitory Kinetic Behavior of Horseradish Peroxidase on the Electrode Surface

    Enzymatic biosensors are often used to detect trace levels of specific substances. An alternative methodology is applied to enzymatic assays, in which the electrocatalytic kinetic behaviour of enzymes is monitored by measuring the faradaic current over a range of substrate and inhibitor concentrations. Here we examine the steady-state and pre-steady-state reduction of H2O2 at a horseradish peroxidase electrode. The results indicate that the substrate-concentration dependence of the steady-state current strictly obeys Michaelis-Menten kinetics, whereas the inhibitor-concentration dependence of the steady-state current shows a discontinuity at moderate inhibitor concentrations. In the pre-steady-state phase, both catalysis and inhibition show an abrupt change in the output current. These anomalous phenomena appear to be universal, and there may be an underlying biochemical or electrochemical rationale.
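
    The substrate branch of such a measurement is commonly summarised by fitting the steady-state current to the Michaelis-Menten form i = i_max*S/(K_M + S). The sketch below does this with a nonlinear least-squares fit; the readings are invented for illustration and are not data from the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def mm_current(S, i_max, K_M):
            # Steady-state amperometric current under Michaelis-Menten kinetics.
            return i_max * S / (K_M + S)

        # Hypothetical steady-state readings (substrate in mM, current in uA).
        S = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0])
        i = np.array([0.9, 1.6, 3.0, 4.2, 5.5, 6.4, 7.2])

        (i_max, K_M), _ = curve_fit(mm_current, S, i, p0=[8.0, 0.5])
        print(f"i_max = {i_max:.2f} uA, apparent K_M = {K_M:.2f} mM")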

    Opportunistic usage of the CMS online cluster using a cloud overlay

    After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three-year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TB of data per day, which are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. Some 3000 CMS physicists can access and process these data and are always seeking more computing power. The backbone of the CMS Online cluster is composed of 16000 cores, which provide as much computing power as all CMS WLCG Tier1 sites combined (a 352K HEP-SPEC-06 score in the CMS cluster versus 300K across the CMS Tier1 sites). This computing power can significantly speed up data processing, so an effort has been made to allocate the resources of the CMS Online cluster to the grid when it is not used to its full capacity for data acquisition. This occurs during the maintenance periods when the LHC is non-operational, which amounted to 117 days in 2015. During 2016, the aim is to increase the availability of the CMS Online cluster for data processing by making the cluster accessible during the time between two physics collisions, while the LHC and beams are being prepared. This is usually the case for a few hours every day, which would vastly increase the computing power available for data processing. Work has already been undertaken to provide this functionality: an OpenStack cloud layer has been deployed as a minimal overlay that leaves the primary role of the cluster untouched and abstracts the different hardware and networks the cluster is composed of. Operating the cloud (starting and stopping the virtual machines) is another challenge that has been overcome, since the cluster has only a few hours spare during the aforementioned beam preparation. By improving the virtual-image deployment and integrating the OpenStack services with the core services of the Data Acquisition on the CMS Online cluster, it is now possible to start a thousand virtual machines within 10 minutes and to turn them off within seconds. This document explains the architectural choices made to reach a fully redundant and scalable cloud with minimal impact on the running cluster configuration while maintaining maximal segregation between the services. It also presents how to cold-start 1000 virtual machines 25 times faster, using tools commonly utilised in all data centres.
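
    As a rough sketch of how such batch start/stop of virtual machines can be driven through the OpenStack compute API, the snippet below uses the openstacksdk Python client. The cloud name, image/flavor/network identifiers and naming scheme are placeholders, not the CMS production tooling; the real system additionally integrates this with the DAQ core services and optimised image deployment.

        import openstack

        # Connect using a clouds.yaml entry; "cms-online" is a placeholder name.
        conn = openstack.connect(cloud="cms-online")

        IMAGE_ID = "IMAGE-UUID-PLACEHOLDER"      # pre-built worker image
        FLAVOR_ID = "FLAVOR-UUID-PLACEHOLDER"
        NETWORK_ID = "NETWORK-UUID-PLACEHOLDER"

        def start_batch(n):
            # Issue all create requests first (the API call is asynchronous),
            # then wait for the servers to become ACTIVE in a second pass.
            servers = [
                conn.compute.create_server(
                    name=f"grid-worker-{k:04d}",
                    image_id=IMAGE_ID,
                    flavor_id=FLAVOR_ID,
                    networks=[{"uuid": NETWORK_ID}],
                )
                for k in range(n)
            ]
            return [conn.compute.wait_for_server(s) for s in servers]

        def stop_batch():
            # Deletion is asynchronous on the API side, so teardown is fast.
            for s in conn.compute.servers():
                if s.name.startswith("grid-worker-"):
                    conn.compute.delete_server(s)

        workers = start_batch(1000)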

    The CMS Timing Control and Distribution System

    The Compact Muon Solenoid (CMS) experiment operating at the CERN (European Organization for Nuclear Research) Large Hadron Collider (LHC) is in the process of upgrading several of its detector systems. Adding more individual detector components brings the need to test and commission those components separately from existing ones so as not to compromise physics data-taking. The CMS Trigger, Timing and Control (TTC) system had reached its limit on the number of separate elements (partitions) that could be supported. A new Timing and Control Distribution System (TCDS) has been designed, built and commissioned to overcome this limit; it also brings additional functionality to facilitate parallel commissioning of new detector elements. We describe the new TCDS and its components and show results from the first operational experience with it in CMS.
