
    Impact of Cultivation Parameters on Cell Physiology of Limosilactobacillus reuteri

    Optimisation of cultivation conditions in the industrial production of probiotics is crucial for reaching a high-quality product with high probiotic functionality. The production process includes a fermentation step to produce biomass, followed by centrifugation to concentrate the cells. Subsequently, the cells are treated with stabilising solutions (lyoprotectants) before they are subjected to freezing. The frozen cell pellets can then be freeze-dried to yield a dried final product. The probiotic product needs to withstand adverse environmental conditions both during production and after consumption (in the gastrointestinal tract). The objective of this study was to elucidate the cellular response to various production process parameters and evaluate their influence on freeze-drying tolerance. In addition, the stability and probiotic activity of the freeze-dried product were studied. The parameters in focus were temperature, pH, oxygen and media components during fermentation, as well as the pre-formulation hold time prior to freeze-drying. Furthermore, flow cytometry-based descriptors of bacterial morphology were evaluated for their potential correlation with process-relevant output parameters and physiological fitness during cultivation, in order to avoid suboptimal growth. Additionally, a pipeline was developed for online flow cytometry combined with automated data processing using the k-means clustering algorithm, a promising process analytical technology tool. The effects of temperature, initial pH and oxygen levels on cell growth and cell size distributions of Limosilactobacillus reuteri DSM 17938 were investigated using multivariate flow cytometry. Morphological heterogeneities were observed under non-optimal growth conditions, with low temperature, high initial pH and high oxygen levels triggering changes in morphology towards cell chain formation. 
A high-growth pattern characterised by smaller cell sizes and decreased population heterogeneity was observed using the pulse width distribution parameter. This parameter can be used to distinguish larger cells from smaller cells and to separate singlets from doublets (i.e., single cells from aggregated cells). Although oxygen is known to inhibit growth in L. reuteri, a controlled oxygen supply had a noticeable effect on cell metabolism, resulting in a higher degree of unsaturated fatty acids in the cells and improved freeze-drying stress tolerance. Another important component examined was the addition of an exogenous fatty acid source in the form of Tween 80. A chemically defined minimal medium was developed, with 14 amino acids identified as essential for growth. The addition of Tween 80 to the medium improved biomass yield and growth rate and shortened cultivation time. L. reuteri DSM 17938 may not be able to efficiently synthesise unsaturated fatty acids without an exogenous fatty acid source, but this requires further investigation. Lastly, the pre-formulation hold time during the manufacture of probiotics was found to significantly affect long-term stability, with directly frozen samples showing better freeze-drying stability than those held for 3 h at room temperature before freezing. These findings suggest that an optimised production process and formulation can lead to the successful production of high-quality probiotics with excellent stability.
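The k-means step of the online flow cytometry pipeline can be illustrated with a minimal sketch. The data below are synthetic, and the two-parameter layout (pulse width, pulse area) is an assumption for illustration rather than the thesis's actual gating strategy; scikit-learn's KMeans stands in for the automated data processing.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic flow-cytometry events (hypothetical units):
# singlets have a narrow pulse width; doublets show roughly double
# the width and area, as two cells pass the laser together.
singlets = rng.normal(loc=[1.0, 2.0], scale=0.10, size=(500, 2))
doublets = rng.normal(loc=[2.0, 4.0], scale=0.15, size=(100, 2))
events = np.vstack([singlets, doublets])  # columns: pulse width, pulse area

# Two clusters, one per subpopulation.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(events)

# The cluster with the larger mean pulse width is taken as the doublet gate.
doublet_label = int(np.argmax(km.cluster_centers_[:, 0]))
doublet_fraction = float(np.mean(km.labels_ == doublet_label))
print(f"estimated doublet fraction: {doublet_fraction:.2f}")  # ~0.17 here
```

In an online setting, the same fitted model could be applied to each incoming batch of events, with the doublet fraction serving as a simple process-state indicator.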

    Dynamic segmentation techniques applied to load profiles of electric energy consumption from domestic users

    The electricity sector is currently undergoing a process of liberalization and separation of roles, implemented under the regulatory auspices of each Member State of the European Union and, therefore, at different speeds and with different perspectives and objectives that must converge on a common horizon, where Europe will benefit from an interconnected energy market in which producers and consumers can participate in free competition. This process entails one major consequence, from which a second, immediate consequence is derived as a necessity. The main consequence is the increased complexity of managing and supervising the electrical system, which is increasingly interconnected and participatory, with distributed energy sources, many of them renewable, connected at different voltage levels and with different generation capacities at any point in the network. From this situation the second consequence follows: the need to communicate information between agents reliably, safely and quickly, and to analyze this information as effectively as possible, so that it feeds the decision-making processes that improve the observability and controllability of a system growing in complexity and in the number of agents involved. With the evolution of Information and Communication Technologies (ICT), and with investments both in improving the existing measurement and communications infrastructure and in extending measurement and actuation capacity to a greater number of points in medium- and low-voltage networks, the data available on the state of the network are increasingly abundant and complete. All these systems are part of the so-called Smart Grids, the intelligent networks of a future that is not so distant. 
One such source of information is the energy consumption of customers, measured on a regular basis (every hour, half-hour or quarter-hour) and sent to the Distribution System Operators from Smart Meters by means of the Advanced Metering Infrastructure (AMI). In this way, an increasing amount of information on the energy consumption of customers is being stored in Big Data systems. This growing source of information demands specialized techniques that can take advantage of it, extracting useful, summarized knowledge. This thesis deals with the use of this Smart Meter consumption information, in particular with the application of data mining techniques to obtain temporal patterns that characterize the users of electrical energy, grouping them according to these patterns into a small number of groups or clusters. These clusters make it possible to evaluate how users consume energy, both over the day and over a sequence of days, and to assess trends and predict future scenarios. To this end, the current techniques are studied and, upon finding that existing works do not cover this objective, dynamic segmentation (clustering) techniques applied to load profiles of electric energy consumption from domestic users are developed. These techniques are tested and validated on a database of hourly energy consumption values from a sample of residential customers in Spain during the years 2008 and 2009. 
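The kind of segmentation described above can be sketched with a toy example: daily load profiles are normalised to unit energy so that clustering captures the shape of consumption rather than its magnitude, and k-means groups customers with similar patterns. The two synthetic profile shapes below are invented for illustration and are not drawn from the thesis's Spanish data set.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
hours = np.arange(24)

# Two hypothetical daily patterns (kWh per hour):
# a morning/evening "double peak" and a nearly flat profile.
peaky = 0.5 + 0.4 * np.exp(-(hours - 8) ** 2 / 4) \
            + 0.6 * np.exp(-(hours - 20) ** 2 / 6)
flat = 0.8 + 0.05 * np.sin(hours / 24 * 2 * np.pi)
profiles = np.vstack([
    peaky + rng.normal(0, 0.05, (200, 24)),  # 200 "peaky" customers
    flat + rng.normal(0, 0.05, (100, 24)),   # 100 "flat" customers
])

# Normalise each profile to unit energy so clusters capture *shape*,
# not absolute consumption level.
shapes = profiles / profiles.sum(axis=1, keepdims=True)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(shapes)
sizes = np.bincount(km.labels_)
print(sorted(int(s) for s in sizes))  # two groups of 100 and 200 customers
```

Tracking each customer's cluster assignment day after day is one simple way to study the evolution of consumption patterns over time.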
The results show both the characterization of the different types of residential energy consumers by their consumption patterns and the evolution of these patterns over time, and make it possible to assess, for example, how the regulatory changes that occurred in the Spanish electricity sector during those years influenced the temporal patterns of energy consumption.

Benítez Sánchez, IJ. (2015). Dynamic segmentation techniques applied to load profiles of electric energy consumption from domestic users [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59236

    Methods and Systems for Fault Diagnosis in Nuclear Power Plants

    This research mainly deals with fault diagnosis in nuclear power plants (NPPs), based on a framework that integrates contributions from fault scope identification, optimal sensor placement, sensor validation, equipment condition monitoring, and diagnostic reasoning based on pattern analysis. The research has a particular focus on applications where the data collected from the existing SCADA (supervisory control and data acquisition) system are not sufficient for the fault diagnosis system. Specifically, the following methods and systems are developed. A sensor placement model is developed to guide the optimal placement of sensors in NPPs. The model includes 1) a method to extract a quantitative fault-sensor incidence matrix for a system; 2) a fault diagnosability criterion based on the degree of singularity of the incidence matrix; and 3) procedures to place additional sensors to meet the diagnosability criterion. The usefulness of the proposed method is demonstrated on a nuclear power plant process control test facility (NPCTF). Experimental results show that three pairs of undiagnosable faults can be effectively distinguished with three additional sensors selected by the proposed model. A wireless sensor network (WSN) is designed and a prototype is implemented on the NPCTF. A WSN is an effective tool for collecting data for fault diagnosis, especially for systems where additional measurements are needed. The WSN performs distributed data processing and information fusion for fault diagnosis. Experimental results on the NPCTF show that the WSN system can be used to diagnose all six fault scenarios considered for the system. A fault diagnosis method based on semi-supervised pattern classification is developed, which requires significantly less training data than existing fault diagnosis models typically do. 
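The diagnosability idea behind the sensor placement model can be sketched with a simplified binary fault-sensor incidence matrix (the thesis uses a quantitative matrix and a singularity-based criterion; the matrix and the added sensor below are hypothetical): two faults are indistinguishable when their sensor signatures coincide, and sensors are added until all signatures differ.

```python
import numpy as np
from itertools import combinations

def undiagnosable_pairs(incidence):
    """Return pairs of faults whose sensor signatures (rows) coincide."""
    n_faults = incidence.shape[0]
    return [(i, j) for i, j in combinations(range(n_faults), 2)
            if np.array_equal(incidence[i], incidence[j])]

# Hypothetical incidence matrix: rows = faults f0..f3, columns = existing
# sensors; a 1 means the fault affects that sensor's reading.
F = np.array([[1, 0, 1],
              [1, 0, 1],   # f1 mimics f0 on the current sensors
              [0, 1, 0],
              [0, 1, 0]])  # f3 mimics f2
print(undiagnosable_pairs(F))  # [(0, 1), (2, 3)]

# Adding one sensor that responds to f0 and f2 only breaks both ties.
F_aug = np.hstack([F, np.array([[1], [0], [1], [0]])])
print(undiagnosable_pairs(F_aug))  # []
```

In the quantitative setting, the same check becomes a rank/singularity condition on the incidence matrix rather than an exact row comparison.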
It is a promising tool for applications in NPPs, where it is usually difficult to obtain training data under fault conditions for a conventional fault diagnosis model. The proposed method successfully diagnosed nine types of faults physically simulated on the NPCTF. For equipment condition monitoring, a modified S-transform (MST) algorithm is developed by using shaping functions, particularly sigmoid functions, to modify the window width of the standard S-transform. The MST can achieve superior time-frequency resolution for applications that involve non-stationary multi-modal signals, where classical methods may fail. The effectiveness of the proposed algorithm is demonstrated using a vibration test system, as well as in applications to detect a collapsed pipe support in the NPCTF. The experimental results show that, by observing changes in the time-frequency characteristics of vibration signals, one can effectively detect faults occurring in components of an industrial system. To ensure that a fault diagnosis system does not suffer from erroneous data, a fault detection and isolation (FDI) method based on kernel principal component analysis (KPCA) is extended for sensor validation, where sensor faults are detected and isolated from the reconstruction errors of a KPCA model. The method is validated using measurement data from a physical NPP. The NPCTF was designed and constructed in this research for experimental validation of fault diagnosis methods and systems. Faults can be physically simulated on the NPCTF. In addition, the NPCTF is designed to support systems based on different instrumentation and control technologies, such as WSN and distributed control systems. The NPCTF has been successfully utilized to validate the algorithms and the WSN system developed in this research. In a real-world application, it is seldom the case that one single fault diagnostic scheme can meet all the requirements of a fault diagnostic system in a nuclear power plant. 
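The KPCA-based sensor validation can be sketched as follows: a kernel PCA model is fitted on normal operating data, and a sensor fault shows up as a large per-channel reconstruction residual. The synthetic data, kernel settings and bias-fault magnitude below are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(2)

# Normal operating data: four correlated "sensor" channels driven by one
# latent process variable t (entirely synthetic).
t = rng.normal(size=(500, 1))
X = np.hstack([t, 0.8 * t, -0.5 * t, 0.3 * t]) + rng.normal(0, 0.05, (500, 4))

# Fit KPCA on normal data; fit_inverse_transform=True learns a mapping
# back to sensor space so reconstruction errors can be computed.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5,
                 fit_inverse_transform=True, alpha=1e-3).fit(X)

def spe(sample):
    """Per-sensor squared prediction error (reconstruction residual)."""
    recon = kpca.inverse_transform(kpca.transform(sample[None, :]))[0]
    return (sample - recon) ** 2

normal = np.array([0.5, 0.4, -0.25, 0.15])  # consistent with the model
faulty = normal.copy()
faulty[2] += 3.0  # bias fault injected on sensor 2

print(int(np.argmax(spe(faulty))))  # the channel with the largest residual
```

The channel with the dominant residual is isolated as the faulty sensor; in practice a detection threshold on the total SPE would be calibrated from the normal-data residual distribution first.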
In fact, the value and performance of the diagnosis system can potentially be enhanced if some of the methods developed in this thesis are integrated into a suite of diagnostic tools. In such an integrated system, WSN nodes can be used to collect additional data deemed necessary by sensor placement models. These data can be integrated with those from existing SCADA systems for more comprehensive fault diagnosis. An online performance monitoring system monitors the condition of the equipment and provides key information for condition-based maintenance tasks. When a fault is detected, the measured data are acquired and analyzed by pattern classification models to identify the nature of the fault. By analyzing the symptoms of the fault, its root causes can eventually be identified.

    Single-cell optical fingerprinting for microbial community characterization


    Computational intelligence techniques for HVAC systems: a review

    Buildings are responsible for 40% of global energy use and contribute towards 30% of total CO2 emissions. The drive to reduce energy use and the associated greenhouse gas emissions from buildings has acted as a catalyst in the development of advanced computational methods for the energy-efficient design, management and control of buildings and systems. Heating, ventilation and air conditioning (HVAC) systems are the major source of energy consumption in buildings and an ideal candidate for substantial reductions in energy demand. Significant advances have been made in the past decades in the application of computational intelligence (CI) techniques for HVAC design, control, management, optimization, and fault detection and diagnosis. This article presents a comprehensive and critical review of the theory and applications of CI techniques for the prediction, optimization, control and diagnosis of HVAC systems. The analysis of trends reveals that the minimization of energy consumption was the key optimization objective in the reviewed research, closely followed by the optimization of thermal comfort, indoor air quality and occupant preferences. Hard-coded MATLAB programs were the most widely used simulation tool, followed by TRNSYS, EnergyPlus, DOE-2, HVACSim+ and ESP-r. Metaheuristic algorithms were the preferred CI method for solving HVAC-related problems, and genetic algorithms in particular were applied in most of the studies. Despite the low number of studies focussing on multi-agent systems (MAS) compared with the other CI techniques, interest in the technique is increasing due to its ability to divide and conquer an HVAC optimization problem with enhanced overall performance. The paper also identifies prospective future advancements and research directions.
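As an illustration of the metaheuristic approach the review highlights, the sketch below uses a minimal genetic algorithm to choose a cooling setpoint trading off energy use against thermal comfort. The cost model and every parameter are invented for illustration and do not come from any of the reviewed studies.

```python
import random

random.seed(0)

# Toy objective (hypothetical): energy cost grows as the cooling setpoint
# drops below the outdoor temperature; discomfort grows quadratically as
# the setpoint drifts from a 22 degC comfort target.
def cost(setpoint, outdoor=30.0):
    energy = max(0.0, outdoor - setpoint) * 1.5
    discomfort = (setpoint - 22.0) ** 2
    return energy + discomfort

def ga(generations=60, pop_size=30, lo=18.0, hi=30.0):
    # Each individual is a candidate setpoint in degC.
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[:pop_size // 2]          # selection: keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = (a + b) / 2              # crossover: blend two parents
            child += random.gauss(0, 0.3)    # mutation: Gaussian jitter
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return min(pop, key=cost)

best = ga()
print(round(best, 1))  # converges near the analytic optimum of 22.75 degC
```

For this convex toy cost the optimum can be found by hand (d/ds of 1.5(30 - s) + (s - 22)^2 gives s = 22.75), which makes it easy to check that the GA machinery works before applying it to a real building simulation.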

    Demand Response in Smart Grids

    The Special Issue “Demand Response in Smart Grids” includes 11 papers on a variety of topics. The success of this Special Issue demonstrates the relevance of demand response programs and events in the operation of power and energy systems, both at the distribution level and at the wide power system level. This reprint addresses the design, implementation and operation of demand response programs, with a focus on methods and techniques to achieve an optimized operation as well as on the electricity consumer.

    New Understanding of Gas Hydrate Phenomena and Natural Inhibitors in Crude Oil Systems through Mass Spectrometry and Machine Learning

    Gas hydrates represent one of the main flow assurance issues in the oil and gas industry, as they can cause complete blockage of pipelines and process equipment, forcing shutdowns. Previous studies have shown that some crude oils form hydrates that do not agglomerate or deposit but remain as transportable dispersions. This is commonly believed to be due to naturally occurring components present in the crude oil; however, despite decades of research, their exact structures have not yet been determined. Some studies have suggested that these components are present in the acid fractions of the oils or are related to the asphaltene content of the oils. Crude oils are among the world's most complex organic mixtures and can contain up to 100 000 different constituents, making them difficult to characterise using traditional mass spectrometers. The high mass accuracy of Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) yields a resolution greater than that of traditional techniques, enabling FT-ICR MS to characterise crude oils to a greater extent and possibly identify hydrate-active components. FT-ICR MS spectra usually contain tens of thousands of peaks, and data treatment methods able to find underlying relationships in big data sets are required. Machine learning and multivariate statistics include many methods suitable for big data. A literature review identified a number of promising methods, and the current status of the use of machine learning for the analysis of gas hydrates and of FT-ICR MS data was assessed. The literature study revealed that although many studies have used machine learning to predict thermodynamic properties of gas hydrates, very little work has been done on analysing gas hydrate-related samples measured by FT-ICR MS. In order to aid their identification, a successive accumulation procedure for increasing the concentrations of hydrate-active components was developed by SINTEF. 
Comparison of the mass spectra from spiked and unspiked samples revealed some peaks whose intensity increased with the spiking level. Several classification methods were used in combination with variable selection, and peaks related to hydrate formation were identified. The corresponding molecular formulas were determined, and the peaks were assumed to be related to asphaltenes, naphthenes and polyethylene glycol. To aid the characterisation of the oils, infrared spectroscopy (both Fourier transform infrared and near infrared) was combined with FT-ICR MS in a multiblock analysis to predict the density of crude oils. Two different strategies for data fusion were attempted, and sequential fusion of the blocks achieved the highest prediction accuracy both before and after reducing the dimensions of the data sets by variable selection. As crude oils have such complex matrices, samples are often very different, and many methods cannot handle high degrees of variation or non-linearity between samples. Hierarchical cluster-based partial least squares regression (HC-PLSR) clusters the data and builds local models within each cluster. HC-PLSR can thus handle non-linearities between clusters, but as PLSR is a linear model, the data are still required to be locally linear. HC-PLSR was therefore extended with deep learning (HC-CNN and HC-RNN) and support vector regression (HC-SVR). The deep learning-based models outperformed HC-PLSR on a data set predicting average molecular weights from hydrolysed raw materials. The analysis of the FT-ICR MS spectra revealed that the large amount of information contained in the data (due to the high resolution) can disturb the predictive models, but the use of variable selection counteracts this effect. 
Several methods from machine learning and multivariate statistics proved valuable for predicting various parameters from FT-ICR MS data, using both classification and regression approaches.