11,913 research outputs found

    Statistical evaluation of research performance of young university scholars: A case study

    The research performance of a small group of 49 young scholars, such as doctoral students, postdoctoral and junior researchers working in different technical and scientific fields, was evaluated based on 11 types of research outputs. The scholars worked at a technical university in the fields of Civil Engineering, Ecology, Economics, Informatics, Materials Engineering, Mechanical Engineering, and Safety Engineering. Principal component analysis was used to statistically analyze the research outputs, and its results were compared with factor and cluster analysis. The metrics of research productivity describing the types of research outputs included the number of papers, books and book chapters published, the number of patents, utility models and functional samples, and the number of research projects conducted. The metrics of citation impact included the number of citations and the h-index. From these metrics (the variables), the principal component analysis extracted four main principal components. The first principal component characterized cited publications in high-impact journals indexed by the Web of Science. The second principal component represented the outputs of applied research, and the third and fourth principal components represented other kinds of publications. The results of the principal component analysis were compared with hierarchical clustering using Ward's method. Scatter plots of the principal components and Mahalanobis distances calculated from the four main principal component scores allowed us to statistically evaluate the research performance of individual scholars. Using analysis of variance, no influence of the field of research on the overall research performance was found. Unlike the statistical analysis of individual research metrics, the approach based on principal component analysis can provide a comprehensive view of the research system.
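    The pipeline this abstract describes, standardizing the output metrics, extracting a few principal components, and scoring each scholar by Mahalanobis distance in component space, is straightforward to reproduce. Below is a minimal sketch assuming a hypothetical `scholar_metrics.csv` with one row per scholar and one column per metric; the four-component choice follows the study, everything else is illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

metrics = pd.read_csv("scholar_metrics.csv")        # hypothetical input file
X = StandardScaler().fit_transform(metrics.values)  # z-score each metric

pca = PCA(n_components=4)                           # four main components, as in the study
scores = pca.fit_transform(X)                       # per-scholar component scores

# In PCA space the components are uncorrelated, so the covariance of the
# scores is diagonal and the Mahalanobis distance reduces to a weighted norm.
d2 = (scores ** 2 / pca.explained_variance_).sum(axis=1)
metrics["mahalanobis"] = np.sqrt(d2)

# Larger distances mark scholars whose output profile departs most from the mean.
print(metrics.sort_values("mahalanobis", ascending=False).head())
```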

    An investigation on automatic systems for fault diagnosis in chemical processes

    Plant safety is the most important concern of chemical industries. Process faults can cause economic losses as well as human and environmental damage. Most operational faults are normally considered in the process design phase by applying methodologies such as Hazard and Operability Analysis (HAZOP). However, it must be expected that failures will occur in an operating plant. For this reason, it is of paramount importance that plant operators can promptly detect and diagnose such faults in order to take the appropriate corrective actions. In addition, preventive maintenance needs to be considered in order to increase plant safety. Fault diagnosis has been approached with both analytic and data-based models, using several techniques and algorithms. However, there is not yet a general fault diagnosis framework that joins detection and diagnosis of faults, whether or not they are registered in historical records. Moreover, few efforts have focused on automating the reported approaches and implementing them in real practice. Against this background, this thesis proposes a general framework for data-driven Fault Detection and Diagnosis (FDD), applicable in any industrial scenario and amenable to automation, in order to maintain plant safety. The main requirement for constructing this system is the existence of historical process data. In this sense, promising methods imported from the Machine Learning field are introduced as fault diagnosis methods. The learning algorithms, used as diagnosis methods, have proved capable of diagnosing not only the modeled faults but also novel faults. Furthermore, Risk-Based Maintenance (RBM) techniques, widely used in the petrochemical industry, are proposed as part of preventive maintenance in all industry sectors. The proposed FDD system, together with an appropriate preventive maintenance program, would represent a potential plant safety program to be implemented. Chapter one presents a general introduction to the thesis topic, as well as its motivation and scope. Chapter two reviews the state of the art of the related fields: fault detection and diagnosis methods found in the literature are surveyed, and a taxonomy that joins the Artificial Intelligence (AI) and Process Systems Engineering (PSE) classifications is proposed. The assessment of fault diagnosis with performance indices is also reviewed, together with the state of the art of Risk Analysis (RA) as a tool for taking corrective actions against faults and of Maintenance Management for preventive actions. Finally, the benchmark case studies against which FDD research is commonly validated are examined in this chapter. The second part of the thesis, comprising chapters three to six, addresses the methods applied during the research work: chapter three deals with data pre-processing, chapter four with the feature processing stage, and chapter five with the diagnosis algorithms, while chapter six introduces the Risk-Based Maintenance techniques for addressing preventive plant maintenance. The third part includes chapter seven, which constitutes the core of the thesis. In this chapter the proposed general FDD system is outlined, divided into three steps: diagnosis model construction, model validation, and on-line application. This scheme includes a fault detection module and an Anomaly Detection (AD) methodology for the detection of novel faults.
Furthermore, several approaches are derived from this general scheme for continuous and batch processes. The fourth part of the thesis presents the validation of the approaches: chapter eight presents the validation of the proposed approaches in continuous processes, and chapter nine the validation of the batch process approaches. Chapter ten applies the AD methodology to real batch processes at scale: first to a lab-scale heat exchanger and then to a Photo-Fenton pilot plant, which corroborates its potential and success in real practice. Finally, the fifth part, comprising chapter eleven, is dedicated to the final conclusions and the main contributions of the thesis; the scientific production achieved during the research period is also listed, and prospects for further work are outlined.
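    The combination the thesis argues for, a supervised model for faults present in the historical records plus an anomaly detector for novel ones, can be illustrated with off-the-shelf components. The sketch below is not the thesis's implementation; the choice of a random forest and an isolation forest, the variable names, and the mock data are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(500, 8))        # stand-in for historical sensor data
y_hist = rng.integers(0, 3, size=500)     # stand-in labels: 0 = normal, 1-2 = known faults

diagnoser = RandomForestClassifier(random_state=0).fit(X_hist, y_hist)
novelty = IsolationForest(random_state=0).fit(X_hist)

def fdd_step(sample):
    """Diagnose one new observation; flag it as novel if it is anomalous."""
    sample = np.asarray(sample).reshape(1, -1)
    if novelty.predict(sample)[0] == -1:  # falls outside the historical data
        return "novel fault: not represented in the records"
    return f"known condition, class {diagnoser.predict(sample)[0]}"

print(fdd_step(rng.normal(size=8)))       # near the training data: classified
print(fdd_step(np.full(8, 8.0)))          # far from the data: flagged as novel
```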

    Interoperability and computational framework for simulating open channel hydraulics: application to sensitivity analysis and calibration of Gironde Estuary model

    Water resource management is of crucial societal and economic importance, requiring a strong capacity for anticipating environmental change. Progress in physical process knowledge, numerical methods and computational power allows us to address hydro-environmental problems of growing complexity, and the modeling of river and marine flows is no exception. With the increase in IT resources, environmental modeling is evolving to meet the challenges of complex real-world problems. This paper presents a new distributed Application Programming Interface (API) of the open-source TELEMAC-MASCARET system for running hydro-environmental simulations built on the concept of interoperability. Use of the API encourages and facilitates the combination of worldwide reference environmental libraries with the hydro-informatic system. The objective of the paper is thus to promote the interoperability concept for studies dealing with such issues as uncertainty propagation, global sensitivity analysis, optimization, multi-physics or multi-dimensional coupling. To illustrate the capability of the API, an operational problem concerning the navigation capacity of the Gironde Estuary is presented. The API's potential is demonstrated in a re-calibration context: it is used for a multivariate sensitivity analysis to quickly reveal the most influential parameters, which can then be optimally calibrated with the help of a data assimilation technique.
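    The workflow such an API enables, wrapping a full simulation as a callable and handing it to an external sensitivity analysis library, can be sketched generically. The example below uses the SALib library for variance-based (Sobol) sensitivity analysis; `run_gironde_model`, the parameter names, and the bounds are hypothetical stand-ins, not part of the TELEMAC-MASCARET API.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Illustrative calibration parameters (e.g., friction coefficients per zone).
problem = {
    "num_vars": 3,
    "names": ["friction_zone_1", "friction_zone_2", "friction_zone_3"],
    "bounds": [[20.0, 60.0], [20.0, 60.0], [20.0, 60.0]],
}

def run_gironde_model(params):
    """Hypothetical wrapper: set parameters through the solver API, run one
    simulation, and return a scalar output (e.g., water level at a gauge)."""
    return float(np.sum(params))              # placeholder physics

X = saltelli.sample(problem, 256)             # Saltelli design for Sobol indices
Y = np.array([run_gironde_model(x) for x in X])
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])                     # first-order and total-order indices
```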

    Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy

    Purpose: To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Methods: Given a set of volumetric images of a patient at N breathing phases as training data, we perform deformable image registration between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, we can generate new DVFs, which, when applied to the reference image, lead to new volumetric images. We can then reconstruct a volumetric image from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF to its position in the reference image. Our algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. We generated the training data using a realistic and dynamic mathematical phantom with 10 breathing phases. The testing data were 360 cone-beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. Results: The average relative image intensity error of the reconstructed volumetric images is 6.9% ± 2.4%. The average 3D tumor localization error is 0.8 mm ± 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 seconds (range: 0.17 to 0.35 seconds). Conclusions: We have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
    Comment: 8 pages, 3 figures, submitted to Medical Physics Letters
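    The core of the Methods section is an optimization over a handful of PCA coefficients: a candidate DVF is synthesized from the eigenvectors, applied to the reference volume, forward-projected, and compared with the measured projection. A toy sketch follows; `warp` and `project` are deliberately simplified stand-ins (the paper uses full deformable warping and GPU cone-beam projection), and all array sizes are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_vox, n_modes, n_pix = 1000, 3, 50
ref = rng.random(n_vox)                      # flattened reference volume
mean_dvf = np.zeros(n_vox)
eigvecs = rng.normal(size=(n_modes, n_vox))  # PCA eigenvectors of training DVFs
P = rng.random((n_pix, n_vox)) / n_vox       # toy linear forward projector

def warp(image, dvf):
    # Stand-in for deformable warping: a first-order intensity perturbation.
    return image * (1.0 + dvf)

def project(volume):
    return P @ volume                        # toy x-ray projection

# Synthetic "measured" projection generated from a known coefficient vector.
measured = project(warp(ref, mean_dvf + 0.05 * eigvecs[0]))

def objective(w):
    dvf = mean_dvf + w @ eigvecs             # DVF from the PCA coefficients
    return np.sum((project(warp(ref, dvf)) - measured) ** 2)

w_opt = minimize(objective, np.zeros(n_modes)).x
dvf_opt = mean_dvf + w_opt @ eigvecs         # invert and apply to localize the tumor
```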

    Probabilistic Intra-Retinal Layer Segmentation in 3-D OCT Images Using Global Shape Regularization

    With the introduction of spectral-domain optical coherence tomography (OCT) and the resulting significant increase in acquisition speed, fast and accurate segmentation of 3-D OCT scans has become ever more important. This paper presents a novel probabilistic approach that models the appearance of retinal layers as well as the global shape variations of layer boundaries. Given an OCT scan, the full posterior distribution over segmentations is approximately inferred using a variational method, enabling efficient probabilistic inference in terms of computationally tractable model components: segmenting a full 3-D volume takes around a minute. Accurate segmentations demonstrate the benefit of using global shape regularization: we segmented 35 fovea-centered 3-D volumes with an average unsigned error of 2.46 ± 0.22 µm, as well as 80 normal and 66 glaucomatous 2-D circular scans with errors of 2.92 ± 0.53 µm and 4.09 ± 0.98 µm respectively. Furthermore, we utilized the inferred posterior distribution to rate the quality of the segmentation, point out potentially erroneous regions, and discriminate normal from pathological scans. No pre- or postprocessing was required, and we used the same set of parameters for all data sets, underlining the robustness and out-of-the-box nature of our approach.
    Comment: Accepted for publication in Medical Image Analysis (MIA), Elsevier
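    One practical payoff the authors highlight is that a posterior distribution, unlike a point estimate, lets you rate segmentation quality and flag suspect regions. The sketch below mocks posterior samples of a single layer boundary to show the idea; it is not the paper's variational model, and the two-times-median flagging threshold is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_ascans = 200, 512
sigma = 0.5 + 2.5 * (rng.random(n_ascans) < 0.05)   # a few genuinely uncertain columns
posterior = 100.0 + rng.normal(0.0, sigma, (n_samples, n_ascans))  # mock posterior samples

boundary = posterior.mean(axis=0)                   # point estimate per A-scan
spread = posterior.std(axis=0)                      # posterior uncertainty per A-scan

# High-spread columns are candidates for manual review or error reporting.
suspicious = np.flatnonzero(spread > 2.0 * np.median(spread))
print(f"{suspicious.size} A-scans flagged for review")
```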

    Towards a better understanding of clogged steam generators: a sensitivity analysis of dynamic thermohydraulic model output

    Communication available online at http://hans.wackernagel.free.fr/article_ICONE_final_140311.pdf
    Tube support plate clogging of steam generators affects their operation and requires frequent maintenance. A diagnosis method based on dynamic behaviour analysis is under development at EDF to provide means of optimising maintenance strategies. Previous work showed that the dynamic response of the wide-range level measurement to a power transient contains information about the clogging state of steam generators. The diagnosis method consists of comparing the measured dynamic response with simulations of a one-dimensional dynamic steam generator model for various input clogging configurations. In order to assess the potential of this method, a sensitivity analysis has been conducted through a quasi-Monte Carlo scheme to compute sensitivity indices for each half tube support plate's clogging ratio. Sensitivity indices are usually defined for scalar model outputs; principal component analysis was therefore used to determine a small subset of variables that condense the information about the shape of the response curves. Finally, estimation variability was assessed by constructing bootstrap confidence intervals. The results showed that half of the preselected input variables have negligible influence and allowed us to rank the most important ones. Interactions of input variables were estimated to exert only a small influence on the output. The effects of clogging on the steam generator dynamics have been characterised qualitatively and quantitatively.
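    The statistical recipe here, compressing the functional output with PCA, computing sensitivity indices on the retained component scores, and bootstrapping the estimates, can be sketched end to end. The toy response model, the binning-based first-order index estimator, and all sizes below are assumptions; the study itself used a quasi-Monte Carlo design on a one-dimensional steam generator simulator.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_runs, n_plates, n_times = 512, 8, 100
clogging = rng.random((n_runs, n_plates))           # inputs (quasi-MC in the study)
t = np.linspace(0.0, 1.0, n_times)

# Toy functional output: each run yields a response curve shaped by two inputs.
curves = clogging[:, [0]] * np.sin(2 * np.pi * t) + clogging[:, [1]] * t

scores = PCA(n_components=2).fit_transform(curves)  # condense the curve shapes

def first_order_index(x, y, bins=16):
    """Crude first-order Sobol estimate: Var(E[y|x]) / Var(y) via binning."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    groups = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    total = 0.0
    for g in range(bins):
        members = y[groups == g]
        if members.size:
            total += (members.size / y.size) * (members.mean() - y.mean()) ** 2
    return total / y.var()

# Bootstrap confidence interval for plate 0's influence on the first component.
boot = [first_order_index(clogging[idx, 0], scores[idx, 0])
        for idx in (rng.integers(0, n_runs, n_runs) for _ in range(200))]
print("S1 for plate 0, 95% CI:", np.percentile(boot, [2.5, 97.5]))
```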

    Geo-information identification for exploring non-stationary relationships between volcanic sedimentary Fe mineralization and controlling factors in an area with overburden in eastern Tianshan region, China

    GIS-based spatial analysis has become common practice in mineral exploration, by which mineral potentials can be delineated to support subsequent stages of exploration. Mineral potential mapping generally consists of geo-information extraction and integration. Geological anomalies frequently indicate mineralization, and volcanic sedimentary Fe deposits in the eastern Tianshan mineral district, China, provide an example of such an indication. However, mineral exploration in this area has been impeded by desert coverage, and geo-anomalies indicative of mineralization are often weak and may not be efficiently identified by traditional exploration methods. Furthermore, geological guidance regarding spatially non-stationary relationships between Fe mineralization and its controlling factors was not sufficiently considered in former studies, which limited the application of proper statistics in mineral exploration. In this dissertation, geochemical distributions associated with controlling factors of the Fe mineralization are characterized by various GIS-based spatial analysis methods. The singularity index mapping technique is applied to separate geochemical anomalies from the background, especially in the desert-covered areas. Principal component analysis is then used to integrate the geochemical anomalies and identify geo-information on geological bodies or geological activities associated with Fe mineralization. To delineate mineral potentials, spatially weighted principal component analysis, which incorporates more geological guidance, is used to integrate these identified controlling factors. Finally, a geographically weighted regression method is introduced to mineral exploration for the first time to investigate the spatially non-stationary interrelationships present across the study area. Based on the results, the superimposition of these controlling factors can be summarized qualitatively and quantitatively, providing constructive geo-information for Fe mineral exploration in this area. From the practice in this dissertation, GIS-based mineral exploration is not only efficient in mapping mineral potentials but also supports strategy making for subsequent exploration. These experiences can inform future mineral exploration in other regions.
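    Of the techniques listed, singularity index mapping is the most self-contained: for each grid cell, the mean concentration is computed in windows of growing size, and the slope of the log-log relation between window size and mean value gives the local singularity index α (in 2-D, α < 2 flags enrichment). A sketch under that definition follows, on a mock geochemical grid; the grid values and window sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
grid = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64))   # mock Fe geochemistry grid

def singularity_index(grid, i, j, half_sizes=(1, 2, 3, 4)):
    """Estimate alpha from mean(window) ~ size^(alpha - 2) in 2-D."""
    sizes, means = [], []
    for r in half_sizes:
        win = grid[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
        sizes.append(2 * r + 1)
        means.append(win.mean())
    slope = np.polyfit(np.log(sizes), np.log(means), 1)[0]
    return slope + 2.0                                     # slope = alpha - 2

alpha_map = np.array([[singularity_index(grid, i, j)
                       for j in range(grid.shape[1])] for i in range(grid.shape[0])])
anomalies = alpha_map < 2.0           # positive anomalies stand out from background
```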

    Composite indicators for measuring well-being of Italian municipalities

    Well-being is a complex phenomenon, and multidimensionality is recognized in the literature as its main feature. The phenomenon is in some respects elusive and difficult to monitor: its definition combines heterogeneous components, which assume different meanings in different contexts. A universally accepted definition of well-being does not (yet) exist: each country or area attributes importance to dimensions that may not be as relevant for others, consistent with its culture and social dynamics. Accurate measurement of well-being is a prerequisite for the implementation of effective welfare policies which, through targeted actions in the most critical areas, are geared to the progressive improvement of living conditions. Until some time ago, this plurality of components was poorly valued, in the belief that the income dimension alone could exhaustively represent such a complex reality. For many years, GDP (Gross Domestic Product) was an indisputable landmark for states all over the world, playing the key role in defining, implementing and evaluating the effects of government action. Recently, the international debate has questioned the supremacy of GDP, and initiatives have been launched which, through the involvement of a growing number of countries, aim to develop alternative ways of measuring well-being that assign the same value to its economic, social and environmental components. Since well-being is a multidimensional phenomenon, it cannot be measured by a single descriptive indicator and should instead be represented by multiple dimensions. Measuring it requires the “combination” of different dimensions, considered together as components of the phenomenon (Mazziotta and Pareto, 2013). This combination can be obtained by applying methodologies known as composite indicators (Salzman, 2003; Mazziotta and Pareto, 2011; Diamantopoulos et al., 2008). In this ever-evolving scenario, the Italian experience is represented by the BES (Equitable and Sustainable Well-Being) project, now considered globally as the most advanced experience of study and analysis. It consists of a dashboard of 134 individual indicators distributed over 12 domains. In the last three BES reports, published in December 2015, 2016 and 2017 by Istat (the Italian Institute of Statistics) (Istat, 2015; Istat, 2016; Istat, 2017), composite indicators at the regional level and over time were calculated for the 9 outcome domains, creating a unique precedent in official statistics at the international level. Recently, the debate has moved from a scientific to a policy scope: parliamentarians and local administrators are affirming the necessity of linking the Istat well-being indicators to interventions and actions in the socio-economic field, thus constructing an even stronger connection between official statistics and policy evaluation. In fact, on 28 July 2016 the Italian Parliament approved the reform of the Budget Law, under which the BES indicators, selected by an ad hoc Committee, are included in the Document of Economics and Finance (DEF). The new regulations also provide that by February 15th of each year Parliament receives from the Minister of Economy a report on the evolution of the BES indicators.
    A Committee for equitable and sustainable well-being indicators has been established, chaired by the Minister of Economics and composed of the President of Istat, the Governor of the Bank of Italy and two experts from universities or research institutions (Mazziotta, 2017). The project is becoming local as well as national, and several local authorities, although under no legislative obligation, are already studying the well-being indicators of their territory. With these assumptions, it seems necessary to calculate well-being measures for all Italian municipalities, so that administrators and citizens can use them to understand their territory and decide on better policies. Since current statistical surveys do not provide socio-economic indicators disaggregated at the municipal level (the Census is the only source, taken every ten years, and it does not collect all the information contained in the BES), it is necessary to use administrative sources, preferably collected in information systems. The thesis presents an experimental statistical exercise conducted on all the municipalities of Italy, in which nine BES domains are selected (Population, Health, Education, Labour, Economic well-being, Environment, Economy on the territory, Research and Innovation, Infrastructure and Mobility) and twenty individual indicators are chosen so that they can represent the phenomenon at the municipal level. The individual indicators are calculated from administrative sources, and composite indicators are then computed in order to obtain a unidimensional measure. The theoretical framework adopted is therefore the conceptual and methodological one developed by Istat and CNEL (the National Council of Economy and Labour) for the BES project (Istat, 2015). The structure of the domains and the selection of indicators are derived from the national BES. In each of the domains, individual indicators are selected so that the starting matrix has 7,998 rows (the municipalities) and a variable number of columns (the indicators). A composite indicator is calculated for each domain, and then a unique composite indicator that synthesizes all the domain composites is computed. Different composite indicators are calculated in order to assess the robustness of the methodologies. The results offer interesting reflections, also from the perspective of economic planning. The aim of the thesis is therefore to provide socio-economic indicators for measuring well-being at the municipal level. To achieve this goal it is necessary to define a theoretical framework, build the indicators matrix at the municipal level, and calculate composite indicators in order to obtain a simpler reading and interpretation of the data. The four chapters of the thesis are designed to answer these research questions. The thesis is divided into two parts. The first, “Theories and Methods”, comprises two chapters: “Theoretical framework: GDP versus well-being”, in which recent well-being theories are presented in relation to GDP; and “Composite indicators: theories and methods”, in which the techniques for constructing composite indicators are presented in order to understand how to synthesize data and measure multidimensional socio-economic phenomena.
    The second part, “Application to administrative data”, comprises two chapters: “Administrative data sources”, in which the ARCHIMEDE database is described; and “Well-being of Italian municipalities”, in which a robust composite indicator is applied to the domains and individual indicators in order to obtain a measure of well-being for all Italian municipalities. The analysis of the results leads to original conclusions, in which the application of particular data classification methodologies contributes to the discussion concerning the use of databases from administrative sources for local economic planning based on well-being.
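    For concreteness, one composite-indicator method associated with the BES framework is the Mazziotta-Pareto Index, in which indicators are standardized to mean 100 and standard deviation 10 and each unit's mean score is penalized by the variability across its own indicators. The sketch below assumes a mock 7,998 × 20 matrix in place of the administrative data; the uniform polarity vector and the "higher is better" orientation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((7998, 20))                   # municipalities x indicators (mock data)

def mpi(X, polarity=None):
    """Mazziotta-Pareto Index with a negative penalty (higher = better)."""
    polarity = np.ones(X.shape[1]) if polarity is None else np.asarray(polarity)
    # Standardize each indicator to mean 100, std 10, flipping negative-polarity ones.
    Z = 100.0 + 10.0 * polarity * (X - X.mean(axis=0)) / X.std(axis=0)
    M, S = Z.mean(axis=1), Z.std(axis=1)
    return M - S * (S / M)                   # penalize uneven indicator profiles

scores = mpi(X)
ranking = np.argsort(-scores)                # best-performing municipalities first
```

The penalty term rewards balanced profiles: two municipalities with the same mean score differ if one achieves it evenly across indicators and the other through extremes.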