
    EVOTECH® endoscope cleaner and reprocessor (ECR) simulated-use and clinical-use evaluation of cleaning efficacy

    Background: The objective of this study was to perform simulated-use testing as well as a clinical study to assess the cleaning efficacy of the EVOTECH® Endoscope Cleaner and Reprocessor (ECR) for flexible colonoscopes, duodenoscopes, gastroscopes and bronchoscopes. The main aim was to determine whether the cleaning achieved using the ECR was at least equivalent to that achieved using optimal manual cleaning.

    Methods: Simulated-use testing consisted of inoculating all scope channels and two surface sites with Artificial Test Soil (ATS) containing 10⁸ cfu/mL of Enterococcus faecalis, Pseudomonas aeruginosa and Candida albicans. Duodenoscopes, colonoscopes and bronchoscopes (all Olympus endoscopes) were included in the simulated-use testing. Each endoscope type was tested in triplicate, and all channels and two surface sites were sampled for each scope. The clinical study evaluated patient-used duodenoscopes, bronchoscopes, colonoscopes and gastroscopes (scopes used for emergency procedures were excluded) that had received only a bedside flush prior to being processed in the ECR (i.e. no manual cleaning). There were 10 to 15 endoscopes evaluated post-cleaning, and, to ensure the entire ECR cycle was effective, 5 endoscopes were evaluated post-cleaning and post-high-level disinfection. All channels and two external surface locations were sampled to evaluate the residual organic and microbial load. Effective cleaning of endoscope surfaces and channels was deemed to have been achieved if there was < 6.4 μg/cm² of residual protein, < 1.8 μg/cm² of residual hemoglobin and < 4 Log₁₀ viable bacteria/cm². Published data indicate that routine manual cleaning can achieve these endpoints, so the ECR cleaning efficacy must meet or exceed them to establish that the ECR cleaning cycle could replace manual cleaning.

    Results: In the clinical study, 75 patient-used scopes were evaluated post-cleaning, and 98.8% of surfaces and 99.7% of lumens met or surpassed the cleaning endpoints set for protein, hemoglobin and bioburden residuals. In the simulated-use study, 100% of the Olympus colonoscopes, duodenoscopes and bronchoscopes evaluated met or surpassed the cleaning endpoints set for protein and bioburden residuals (hemoglobin was not evaluated).

    Conclusions: The ECR cleaning cycle provides an effective automated approach that ensures surfaces and channels of flexible endoscopes are adequately cleaned after having only a bedside flush but no manual cleaning. It is crucial to note that endoscopes used for emergency procedures, or for which reprocessing is delayed by more than one hour, MUST still be manually cleaned prior to being placed in the ECR.
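    As an illustration of the pass/fail logic implied by these endpoints, a minimal sketch (not from the study; the function name and input format are assumptions):

```python
# Minimal sketch (illustrative, not from the study): checks one sampled
# endoscope site against the cleaning endpoints quoted in the abstract.

# Endpoints from the abstract (per cm^2 of sampled surface or channel).
MAX_PROTEIN_UG = 6.4       # residual protein, ug/cm^2
MAX_HEMOGLOBIN_UG = 1.8    # residual hemoglobin, ug/cm^2
MAX_BIOBURDEN_LOG10 = 4.0  # viable bacteria, log10 cfu/cm^2

def site_meets_endpoints(protein_ug, hemoglobin_ug, bioburden_log10):
    """Return True if a single sampled site meets all three cleaning endpoints."""
    return (protein_ug < MAX_PROTEIN_UG
            and hemoglobin_ug < MAX_HEMOGLOBIN_UG
            and bioburden_log10 < MAX_BIOBURDEN_LOG10)

# Example with hypothetical channel-sample values.
print(site_meets_endpoints(protein_ug=2.1, hemoglobin_ug=0.4, bioburden_log10=2.7))  # True
```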

    Pulsar population synthesis using PALFA detections and Pulsar Search Collaboratory discoveries including a wide DNS system and a nearby MSP

    Using the ensemble of detections from pulsar surveys, we can learn about the sizes and characteristics of underlying populations. In this thesis, I analyze results from the Pulsar Arecibo L-band Feed Array (PALFA) precursor and Green Bank Telescope (GBT) 350 MHz Drift Scan surveys; I examine survey sensitivity to see how detections can inform pulsar population models, I look at new ways of including young scientists (high school students) in the discovery process, and I present timing solutions for students' discoveries, including a nearby millisecond pulsar and a pulsar in a wide-orbit double neutron star system.

    The PALFA survey is ongoing and uses the ALFA 7-beam receiver at 1400 MHz to search both inner and outer Galactic sectors visible from Arecibo (32° ≤ ℓ ≤ 77° and 168° ≤ ℓ ≤ 214°) close to the Galactic plane (|b| ≤ 5°) for pulsars. The PALFA precursor survey observed a subset of this region (|b| ≤ 1°) and detected 45 pulsars, including one known millisecond pulsar (MSP) and 11 previously unknown, long-period (normal) pulsars. I assess the sensitivity of the PALFA precursor survey and use the number of normal pulsar and MSP detections to infer the size of each underlying Galactic population. Based on 44 normal pulsar detections and one MSP, we constrain the respective population sizes to 107,000 (+36,000/−25,000) and 15,000 (+85,000/−6,000) with 95% confidence. Based on these constraints, we predict yields for the full PALFA survey and find a deficiency in normal pulsar detections, possibly due to radio frequency interference and/or scintillation, neither of which is currently accounted for in population simulations.

    The GBT 350 MHz Drift Scan survey collected data in the summer of 2007 while the GBT was stationary, undergoing track replacement. Results discussed here come from ~20% of the survey data, which were processed and donated to the Pulsar Search Collaboratory (PSC). The PSC is a joint outreach program between WVU and NRAO that involves high school students in the pulsar discovery process (hands-on, cutting-edge research) to foster their interest in pursuing Science, Technology, Engineering and Mathematics (STEM) related career paths. The PSC began in 2008; since then, over 100 teachers and 2,500 students from 18 states have participated and discovered seven pulsars. Of these seven, J1400−1431, a bright, nearby MSP, shows promising characteristics for inclusion in pulsar timing arrays, which aim to detect gravitational waves by precisely timing an array of MSPs. Two others, J1821+0155 (a disrupted recycled pulsar) and J1930−1852, show interesting properties due to interactions with binary companions. PSR J1930−1852 is a partially recycled, first-to-evolve pulsar in a double neutron star (DNS) system with a high-eccentricity, 45-day orbit. Its spin period and orbital period are factors of 2 and 3 higher, respectively, than those of any previously known primary DNS pulsar. We measure the relativistic advance of periastron, ω̇ = 0.00078(4)° yr⁻¹, implying a total system mass Mtot = 2.59(4) M⊙, which is consistent with other DNS systems. PSR J1930−1852's spin and orbital parameters, however, challenge current DNS evolution models, making it an important system for further investigation.
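    For context on how the total mass follows from the measured periastron advance, the standard leading-order post-Keplerian relation (a textbook general-relativity result, not a derivation reproduced from the thesis):

```latex
% Periastron advance (GR, leading order) and the implied total mass.
% P_b: orbital period, e: eccentricity, T_sun = G M_sun / c^3 ~ 4.9255 us.
\dot{\omega} = 3 \left( \frac{P_b}{2\pi} \right)^{-5/3}
               \frac{\left( T_\odot M_{\mathrm{tot}} \right)^{2/3}}{1 - e^2}
\quad \Longrightarrow \quad
M_{\mathrm{tot}} = \frac{1}{T_\odot}
                   \left[ \frac{\dot{\omega} \left( 1 - e^2 \right)}{3} \right]^{3/2}
                   \left( \frac{P_b}{2\pi} \right)^{5/2}
```

    With ω̇ in rad s⁻¹ and Pb in seconds, Mtot comes out in solar masses; the measured ω̇, eccentricity and 45-day orbit yield the quoted Mtot = 2.59(4) M⊙.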

    Wireless industrial intelligent controller for a non-linear system

    Modern neural network (NN) based control schemes have surmounted many of the limitations found in traditional control approaches. Nevertheless, these modern control techniques have only recently been introduced for use on high-specification Programmable Logic Controllers (PLCs), and usually at a very high cost in terms of the required software and hardware. 'Intelligent' control in the industrial automation sector, specifically on standard PLCs, thus remains an area of study that is open to further research and development. The research documented in this thesis examined the effectiveness of linear traditional control schemes, such as Proportional Integral Derivative (PID), Lead and Lead-Lag control, in comparison to non-linear NN-based control schemes when applied to a strongly non-linear platform. To this end, a mechatronic-type balancing system, namely the Ball-on-Wheel (BOW) system, was designed, constructed and modelled. Thereafter, various traditional and intelligent controllers were implemented in order to control the system. The BOW platform may be taken to represent any single-input, single-output (SISO) non-linear system in use in the real world. The system makes use of current industrial technology, including a standard PLC as the digital computational platform, a servo drive and wireless access for remote control. The results gathered from the research revealed that NN-based control schemes (i.e. pure NN and NN-PID), although comparatively slower in response, have advantages over traditional controllers in that they are able to adapt to external system changes as well as system non-linearity through a process of learning. These controllers also reduce the guesswork usually involved with traditional control approaches, where cumbersome modelling, linearization or manual tuning is required. Furthermore, the research showed that online-learning adaptive controllers such as the NN-PID controller, which combines the best of both the intelligent and traditional approaches, may be implemented easily and at minimum expense on standard PLCs.
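    As a rough illustration of the online-learning idea behind an NN-PID controller, a minimal sketch assuming a gradient-style gain-adaptation rule (the thesis's actual controller structure, gains and learning rule are not reproduced here):

```python
# Minimal sketch (illustrative assumption, not the thesis controller):
# a discrete PID whose gains are nudged each sample period by a
# gradient-descent-style update on the squared tracking error.

class AdaptivePID:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, lr=1e-4, dt=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lr = lr            # learning rate for online gain adaptation
        self.dt = dt            # sample period, s
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        """Compute the control output and adapt the gains online."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt

        u = self.kp * error + self.ki * self.integral + self.kd * derivative

        # MIT-rule-style update: each gain moves proportionally to the error
        # times its own input term (assumes positive plant gain).
        self.kp += self.lr * error * error
        self.ki += self.lr * error * self.integral
        self.kd += self.lr * error * derivative

        self.prev_error = error
        return u

# Example: one control step toward a setpoint of 1.0.
controller = AdaptivePID()
print(controller.step(setpoint=1.0, measurement=0.0))
```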

    Efficient and semi-transparent perovskite solar cells using a room-temperature processed MoOx/ITO/Ag/ITO electrode

    To achieve semi-transparency in perovskite solar cells, the electrode materials must be as transparent as possible. In this work, MoOx/ITO/Ag/ITO (MoOx/IAI) thin films with a high average transmittance of 79.90% between 400 nm and 900 nm were introduced as the top transparent electrode to explore their influence on the optoelectronic properties of the fabricated perovskite solar cells. MoOx has previously been demonstrated to protect against sputtering damage when a conventional ITO top electrode is used; here it is shown to provide the same protection for a sputtered IAI film, which offers superior transparency and conductivity and is deposited under more favourable low-temperature processing conditions. MoOx and Ag were thermally evaporated, and ITO was radio-frequency magnetron sputtered at room temperature. The resulting semi-transparent solar cells showed a power conversion efficiency of 12.85% (steady-state efficiency of 11.3%) along with a much-reduced degradation rate compared to the reference device with only an Ag top electrode. With such a combination of performance and transparency, this work shows great promise for the application of perovskite solar cells in window glazing products for building-integrated photovoltaics (BIPV), in powering the Internet of Things (IoT), and in tandem solar cells combined with industrially mature photovoltaic technologies such as silicon and copper indium gallium diselenide (CIGS).
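    For clarity on the 79.90% figure of merit, a minimal sketch of how a wavelength-averaged transmittance over 400 nm to 900 nm might be computed from a measured spectrum (illustrative only; not from the paper):

```python
# Minimal sketch (illustrative, not from the paper): wavelength-averaged
# transmittance of an electrode stack over a band, via trapezoidal integration.
import numpy as np

def average_transmittance(wavelength_nm, transmittance, lo=400.0, hi=900.0):
    """Average of transmittance over [lo, hi] nm, weighted by wavelength interval."""
    mask = (wavelength_nm >= lo) & (wavelength_nm <= hi)
    wl, t = wavelength_nm[mask], transmittance[mask]
    dw = np.diff(wl)
    return np.sum(0.5 * (t[:-1] + t[1:]) * dw) / (wl[-1] - wl[0])

# Example with synthetic data: a flat 80% spectrum averages to ~0.80.
wl = np.linspace(350, 950, 601)
t = np.full_like(wl, 0.80)
print(average_transmittance(wl, t))
```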

    Reading the news through its structure: new hybrid connectivity based approaches

    In this thesis a solution for the problem of identifying the structure of news published by online newspapers is presented. This problem requires new approaches and algorithms capable of dealing with the massive number of online publications in existence (a number that will only grow in the future). The fact that news documents present a high degree of interconnection makes this an interesting and hard problem to solve. The identification of the structure of the news is accomplished both by descriptive methods that expose the dimensionality of the relations between different news items, and by clustering the news into topic groups. To carry out this analysis, the news system was studied as an integrated whole, from different perspectives and with different approaches. After a preparatory data-collection phase, in which several online newspapers from different parts of the globe were collected, two newspapers were chosen in particular: the Portuguese daily newspaper Público and the British newspaper The Guardian. The choice of newspapers in different languages reflects the aim of finding analysis strategies that are independent of prior knowledge about these systems. In the first case, it was shown how information theory (namely variation of information) combined with adaptive networks was able to identify topic clusters in the news published by the Portuguese online newspaper Público. In the second case, the structure of news published by the British newspaper The Guardian is revealed through the construction of time series of news clustered by a k-means process. After this approach, an unsupervised algorithm was developed that filters out irrelevant news published online by taking into consideration the connectivity of the news labels entered by the journalists. This novel hybrid technique is based on Q-analysis for the construction of the filtered network, followed by a clustering technique to identify the topical clusters. Presently this work uses a modularity-optimisation clustering technique, but this step is general enough that other hybrid approaches can be used without losing generality. A novel second-order swarm intelligence algorithm based on Ant Colony Systems was developed for the travelling salesman problem that is consistently better than the traditional benchmarks. This algorithm is used to construct Hamiltonian paths over the published news, using the eccentricity of the different documents as a measure of distance. This approach allows for easy navigation between published stories that depends on the connectivity of the underlying structure. The results presented in this work show the importance of treating topic detection in large corpora as a multitude of relations and connectivities that are not in a static state. They also influence the way of looking at multi-dimensional ensembles, by showing that the inclusion of the high-dimensional connectivities gives better results for solving a particular problem, as was the case in the clustering of the news published online.
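    For reference, a minimal sketch of the variation-of-information measure mentioned above (the standard definition, not the thesis implementation):

```python
# Minimal sketch (standard definition, not the thesis code): variation of
# information between two clusterings of the same n items,
# VI(X;Y) = H(X) + H(Y) - 2 I(X;Y). Lower VI means more similar clusterings.
from collections import Counter
from math import log

def variation_of_information(x_labels, y_labels):
    n = len(x_labels)
    assert n == len(y_labels) and n > 0
    px = Counter(x_labels)                   # cluster sizes in X
    py = Counter(y_labels)                   # cluster sizes in Y
    pxy = Counter(zip(x_labels, y_labels))   # joint co-occurrence counts

    hx = -sum((c / n) * log(c / n) for c in px.values())
    hy = -sum((c / n) * log(c / n) for c in py.values())
    ixy = sum((c / n) * log((c / n) / ((px[a] / n) * (py[b] / n)))
              for (a, b), c in pxy.items())
    return hx + hy - 2.0 * ixy

# Identical clusterings (up to label names) have VI = 0.
print(variation_of_information([0, 0, 1, 1], ['a', 'a', 'b', 'b']))  # 0.0
```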

    Digital control of a renewable energy resource interfacing the distribution grid

    This project focuses on a power electronics unit, composed of a boost converter and a three-phase inverter, for use in photovoltaic generation in grid-connected applications. The main aim is to build a setup, with its software controller, for educational purposes. To achieve this goal, a Texas Instruments Digital Signal Controller (DSC) is used, which is in charge of the control. The controller is written in C code; it generates the signals that drive the converters by measuring key electrical variables and processing them according to the application objectives. First, a complete theoretical analysis is carried out and a mathematical model is derived for a system in which the voltage of a PV source is boosted by a DC-DC boost converter and the output is connected to the grid through a three-phase inverter. This model is employed to design a suitable controller, which is implemented on the DSC. To implement it, digital control and power electronics concepts are applied. Once the digital controller is coded, the complete circuit is simulated with Typhoon HIL software, and simulations are carried out in a safe environment to ensure that the controller works as desired. The goal of this activity is to check the suitability of the design in a safe environment, and to study the system response, with its similarities to and differences from the modelled system. Finally, the setup is prepared in the laboratory and experimental tests are carried out. The aim of the experimental tests is to show that the controller works as desired and can be used for educational purposes. A study of the results is also done to evaluate the performance of the controller. Each control loop is tested by applying steps to the reference variables and comparing the results with the theoretical values. These tests are carried out using different sampling and switching frequencies. The MPPT algorithm is also tested to check whether the system can operate permanently at the MPP under different irradiance values.
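    As an illustration of one common MPPT scheme, a minimal perturb-and-observe sketch (the abstract does not name the algorithm used, so P&O and all names here are assumptions):

```python
# Minimal sketch of a perturb-and-observe MPPT step (a common choice; the
# abstract does not state which MPPT algorithm the thesis uses). Called once
# per control period with fresh PV voltage/current measurements, it returns
# the next PV voltage reference for the boost converter's control loop.

def mppt_perturb_observe(v_meas, i_meas, state, step_v=0.5):
    """state = (prev_power, prev_voltage, v_ref); returns the updated state."""
    power = v_meas * i_meas
    prev_power, prev_voltage, v_ref = state

    # If the last perturbation increased power, keep moving the same way;
    # otherwise reverse direction.
    if (power - prev_power) * (v_meas - prev_voltage) > 0:
        v_ref += step_v
    else:
        v_ref -= step_v

    return (power, v_meas, v_ref)

# Example: initialise near the expected operating voltage and iterate.
state = (0.0, 0.0, 30.0)
state = mppt_perturb_observe(v_meas=30.0, i_meas=5.0, state=state)
print(state[2])  # next PV voltage reference
```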

    Design of a digital ride quality augmentation system for commuter aircraft

    Commuter aircraft typically have low wing loadings and fly at low altitudes, so they are susceptible to undesirable accelerations caused by random atmospheric turbulence. Larger commercial aircraft typically have higher wing loadings and fly at altitudes where the turbulence level is lower, so they provide smoother rides. This project was initiated with the goal of making the ride of a commuter aircraft as smooth as that experienced on major commercial airliners. The objectives of this project were to design a digital, longitudinal-mode ride quality augmentation system (RQAS) for a commuter aircraft, and to investigate the effect of selected parameters on those designs.