500 research outputs found

    Ingenium Research Group, Universidad de Castilla-La Mancha, Edificio Politécnica, 13071 Ciudad Real, Spain. [email protected]

    Renewable energy has become one of the main options for covering energy demand under current environmental restrictions. Among renewable sources, solar energy stands out, and concentrated solar power is the technology currently gaining the most importance. The correct performance of the solar receiver is crucial because its failure can result in significant costs and reduced availability of the energy service. Non-destructive testing is broadly used in structural health monitoring systems to detect and diagnose faults and failures. The aim of this paper is to present a fault detection and diagnosis approach based on long-range ultrasonic technology, together with novel analytical procedures for the signal processing of high-frequency guided waves (Lamb waves). These waves are excited in the material by piezoelectric transducers, which are also employed as sensors. A fault can be detected and diagnosed through the changes it induces in the signal. The experimental platform consists of: i) a data logger able to generate and read voltage signals at high frequency; ii) a sensing system based on piezoelectric transducers placed on the solar collector. A novel method of analyzing the data generated by the platform by means of time series is employed.
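
    As a rough illustration of the kind of baseline-comparison signal analysis described above, the sketch below computes a simple residual-energy damage index between a healthy Lamb-wave record and a new measurement. It is a minimal sketch under stated assumptions: the damage index, the 5% threshold, and the synthetic tone burst are illustrative choices, not the authors' procedure.

```python
import numpy as np

def damage_index(baseline: np.ndarray, measured: np.ndarray) -> float:
    """Normalised residual energy between a baseline Lamb-wave record
    and a new measurement; 0 means the signals are identical."""
    residual = measured - baseline
    return float(np.sum(residual**2) / np.sum(baseline**2))

def detect_fault(baseline, measured, threshold=0.05):
    """Flag a fault when the residual energy exceeds the threshold.
    The 5% threshold is illustrative; in practice it would be
    calibrated on healthy-state recordings of the receiver."""
    di = damage_index(np.asarray(baseline, float), np.asarray(measured, float))
    return di > threshold, di

# Example: a synthetic windowed tone burst and an attenuated copy
t = np.linspace(0, 1e-4, 1000)
healthy = np.sin(2 * np.pi * 200e3 * t) * np.hanning(t.size)
faulty = 0.7 * healthy  # amplitude drop as a crude stand-in for a defect
print(detect_fault(healthy, faulty))
```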

    Acoustic emission and signal processing for fault detection and location in composite materials

    The renewable energy industry is continuously improving in order to remain competitive and seize evolving opportunities. One notable source of competitive advantage is maintenance management, covering operating and maintenance costs, availability, reliability, safety, lifetime, etc. This paper focuses on the blades of a wind turbine. A structural health monitoring study is presented that starts with the collection and analysis of data from different non-destructive tests. Signals from acoustic emissions are studied with a novel signal processing approach to detect cracks on the surface of the blades. The case study proposes a new localization method using macro-fibre composite sensors and actuators. The monitoring system uses three sensors strategically located on the blade section. Among the main difficulties involved in this first approach, the modal separation of the wave must be taken into account because of its importance when drawing conclusions about the crack; this effect results from the blade breakdown, which produces different signals at multiple frequencies. Another drawback is associated with the direction of the fibres in the composite material, characterised by the slowness profile, a directional function of the propagation speed. The main novelty of the presented approach is that it is able to predict the failure. In addition, the analysis can be considered accurate because the solution is always a single point obtained from a graphical method, i.e. the location of the crack can be detected with precision. The results are also checked quantitatively using nonlinear equations.
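
    The localization step with three sensors can be illustrated by a time-difference-of-arrival grid search. The sketch below assumes an isotropic propagation speed, whereas the paper notes that the slowness profile of the composite makes the speed direction-dependent; the sensor coordinates, speed value, and timings are hypothetical.

```python
import numpy as np

# Hypothetical sensor coordinates on the blade section (metres)
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
c = 1500.0  # assumed isotropic wave speed (m/s); real composites are anisotropic

def locate(arrival_times, xlim=(0, 1), ylim=(0, 1), n=400):
    """Grid search for the source point whose predicted arrival-time
    differences best match the measured ones (reference: sensor 0)."""
    xs = np.linspace(*xlim, n)
    ys = np.linspace(*ylim, n)
    X, Y = np.meshgrid(xs, ys)
    # Distance from every grid point to each sensor
    d = np.stack([np.hypot(X - sx, Y - sy) for sx, sy in sensors])
    predicted = (d - d[0]) / c            # predicted TDOA vs sensor 0
    measured = arrival_times - arrival_times[0]
    cost = np.sum((predicted - measured[:, None, None])**2, axis=0)
    iy, ix = np.unravel_index(np.argmin(cost), cost.shape)
    return xs[ix], ys[iy]

# Synthetic check: emit from a known point and recover it
source = np.array([0.3, 0.4])
times = np.hypot(*(sensors - source).T) / c
print(locate(times))  # approximately (0.3, 0.4)
```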

    Deep Risk Prediction and Embedding of Patient Data: Application to Acute Gastrointestinal Bleeding

    Acute gastrointestinal bleeding is a common and costly condition, accounting for over 2.2 million hospital days and 19.2 billion dollars of medical charges annually. Risk stratification is a critical part of the initial assessment of patients with acute gastrointestinal bleeding. Although all national and international guidelines recommend the use of risk-assessment scoring systems, they are not commonly used in practice, have sub-optimal performance, may be applied incorrectly, and are not easily updated. With the advent of widespread electronic health record adoption, longitudinal clinical data captured during the clinical encounter is now available. However, this data is often noisy, sparse, and heterogeneous. Unsupervised machine learning algorithms may be able to identify structure within electronic health record data while accounting for key issues with the data generation process: measurements missing not at random and information captured in unstructured clinical note text. Deep learning tools can create electronic health record-based models that perform better than clinical risk scores for gastrointestinal bleeding and are well suited to learning from new data. Furthermore, these models can be used to predict risk trajectories over time, leveraging the longitudinal nature of the electronic health record. The foundation of creating relevant tools is the definition of a relevant outcome measure; in acute gastrointestinal bleeding, a composite outcome of red blood cell transfusion, hemostatic intervention, and all-cause 30-day mortality is a relevant, actionable outcome that reflects the need for hospital-based intervention. However, epidemiological trends may affect the relevance and effectiveness of the outcome measure when applied across multiple settings and patient populations. Understanding the trends in practice, potential areas of disparities, and the value proposition of risk stratification for patients presenting to the Emergency Department with acute gastrointestinal bleeding is important for understanding how to best implement a robust, generalizable risk stratification tool. Key findings include a decrease in the rate of red blood cell transfusion since 2014 and disparities in access to upper endoscopy for patients with upper gastrointestinal bleeding by race/ethnicity across urban and rural hospitals. Projected accumulated savings from consistent implementation of risk stratification tools for upper gastrointestinal bleeding total approximately $1 billion five years after implementation. Most current risk scores were designed for use based on the location of the bleeding source: upper or lower gastrointestinal tract. However, the location of the bleeding source is not always clear at presentation. I develop and validate electronic health record-based deep learning and machine learning tools for patients presenting with symptoms of acute gastrointestinal bleeding (e.g., hematemesis, melena, hematochezia), which is more relevant and useful in clinical practice. I show that they outperform the leading clinical risk scores for upper and lower gastrointestinal bleeding, the Glasgow Blatchford Score and the Oakland score. While the best-performing gradient boosted decision tree model has overall performance equivalent to the fully connected feedforward neural network model, at the very-low-risk threshold of 99% sensitivity the deep learning model identifies more very-low-risk patients.
Using another deep learning model that captures longitudinal risk, the long short-term memory recurrent neural network, the need for red blood cell transfusion can be predicted at every 4-hour interval in the first 24 hours of intensive care unit stay for high-risk patients with acute gastrointestinal bleeding. Finally, for implementation it is important to find patients with symptoms of acute gastrointestinal bleeding in real time and to characterize patients by risk using available data in the electronic health record. A decision rule-based electronic health record phenotype has performance, as measured by positive predictive value, equivalent to deep learning and natural language processing-based models, and after live implementation it appears to have increased use of the Acute Gastrointestinal Bleeding Clinical Care pathway. Patients with acute gastrointestinal bleeding and patients with other groups of disease concepts can be differentiated by directly mapping unstructured clinical text to a common ontology and treating the vector of concepts as signals on a knowledge graph; these patients can be differentiated using unbalanced diffusion earth mover's distances on the graph. For electronic health record data with values missing not at random, MURAL, an unsupervised random forest-based method, handles data with missing values and generates visualizations that characterize patients with gastrointestinal bleeding. This thesis forms a basis for understanding the potential of machine learning and deep learning tools to characterize risk for patients with acute gastrointestinal bleeding. In the future, these tools may be critical in implementing integrated risk assessment to keep low-risk patients out of the hospital and to guide resuscitation and timely endoscopic procedures for patients at higher risk of clinical decompensation.
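
    The very-low-risk cutoff mentioned above corresponds to choosing a probability threshold on validation data such that sensitivity stays at or above 99%. Below is a minimal sketch of that step with scikit-learn on synthetic placeholder data; the function name, toy labels, and scores are illustrative assumptions, not the thesis models or cohort.

```python
import numpy as np
from sklearn.metrics import roc_curve

def threshold_at_sensitivity(y_val, p_val, target_tpr=0.99):
    """Pick the largest threshold whose sensitivity (TPR) on the
    validation set is at least target_tpr, so patients scored
    below it can be triaged as very low risk."""
    fpr, tpr, thresholds = roc_curve(y_val, p_val)
    ok = tpr >= target_tpr
    return thresholds[ok][0]  # roc_curve returns thresholds in descending order

# Toy illustration with synthetic labels and scores
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
p = np.clip(y * 0.6 + rng.normal(0.3, 0.2, 1000), 0, 1)
thr = threshold_at_sensitivity(y, p)
print(f"very-low-risk cutoff: {thr:.3f}")
```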

    The AI Revolution: Opportunities and Challenges for the Finance Sector

    This report examines Artificial Intelligence (AI) in the financial sector, outlining its potential to revolutionise the industry and identifying its challenges. It underscores the criticality of a well-rounded understanding of AI, its capabilities, and its implications to effectively leverage its potential while mitigating associated risks. The potential of AI extends from augmenting existing operations to paving the way for novel applications in the finance sector. The application of AI in the financial sector is transforming the industry. Its use spans areas from customer service enhancements, fraud detection, and risk management to credit assessments and high-frequency trading. However, along with these benefits, AI also presents several challenges, including issues related to transparency, interpretability, fairness, accountability, and trustworthiness. The use of AI in the financial sector further raises critical questions about data privacy and security. A further issue identified in this report is the systemic risk that AI can introduce to the financial sector. Being prone to errors, AI can exacerbate existing systemic risks, potentially leading to financial crises. Regulation is crucial to harnessing the benefits of AI while mitigating its potential risks. Despite the global recognition of this need, there remains a lack of clear guidelines or legislation for AI use in finance. This report discusses key principles that could guide the formation of effective AI regulation in the financial sector, including the need for a risk-based approach, the inclusion of ethical considerations, and the importance of maintaining a balance between innovation and consumer protection. The report provides recommendations for academia, the finance industry, and regulators.

    Hybrid Twin in Complex System Settings

    The benefits of a deep knowledge of the technological and industrial processes of our world are unquestionable. Optimization, inverse analysis and simulation-based control are some of the procedures that can be carried out once this knowledge is transformed into value for companies. The result is better technologies that ultimately benefit society enormously. Consider an activity that is routine for many people today, such as taking a plane: all of the above procedures are involved in the design of the aircraft, in on-board control and in maintenance, culminating in a technologically resource-efficient product. This high added value is driving Simulation Based Engineering Science (SBES) to introduce major improvements in these procedures, which has led to important advances in a wide variety of sectors such as healthcare, telecommunications and engineering.

    However, SBES currently faces several difficulties in providing accurate results in complex industrial scenarios. One of them is the high computational cost associated with many industrial problems, which seriously limits or even disables the key processes described above. Another problem is that, in other applications, the most accurate models (which in turn are the most computationally expensive) are unable to account for all the details governing the physical system under study, with observed deviations that seem to escape our knowledge.

    In this context, novel strategies and numerical techniques are proposed throughout this manuscript to face the challenges confronting SBES, and different industrial applications are analysed to that end.

    This landscape, together with the extensive development of Data Science, also provides a perfect opportunity for the so-called Dynamic Data Driven Application Systems (DDDAS), whose main objective is to merge classical simulation algorithms with data from experimental measurements. In this scenario, data and simulations are no longer decoupled; they form a symbiotic relationship that reaches milestones inconceivable until now. More specifically, data is no longer understood as a static calibration of a given constitutive model; instead, the model is corrected dynamically as soon as experimental data and simulations tend to diverge.

    For this reason, this thesis places special emphasis on model order reduction techniques, since they are not only a tool to reduce computational complexity but also a key element for meeting the real-time constraints arising from the DDDAS framework.

    In addition, this thesis presents new data-driven methodologies to enrich the so-called Hybrid Twin paradigm, whose motivation lies in its ability to enable DDDAS. How? By combining parametric solutions and model order reduction techniques with dynamic corrections generated "on the fly" from the experimental data collected at each instant.
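
    As a loose illustration of the Hybrid Twin idea of a dynamic correction generated on the fly, the sketch below pairs a placeholder parametric model with a linear data-driven correction that is refitted as measurements arrive. The model, the correction form, and the update rule are illustrative assumptions only, not the methodology of the thesis.

```python
import numpy as np

def model(t, mu=1.0):
    """Placeholder parametric (reduced-order) model of the system."""
    return np.exp(-mu * t)

class HybridTwin:
    """Physics-based prediction plus a data-driven correction that is
    refitted on the fly as model and measurements diverge."""
    def __init__(self, mu=1.0):
        self.mu = mu
        self.t_hist, self.r_hist = [], []
        self.coeffs = np.zeros(2)  # linear correction a*t + b

    def assimilate(self, t, measurement):
        # Store the model/data residual and refit the correction
        self.t_hist.append(t)
        self.r_hist.append(measurement - model(t, self.mu))
        if len(self.t_hist) >= 2:
            self.coeffs = np.polyfit(self.t_hist, self.r_hist, 1)

    def predict(self, t):
        return model(t, self.mu) + np.polyval(self.coeffs, t)

# The "real" system deviates from the model by an unmodelled drift
twin = HybridTwin()
for t in np.linspace(0, 2, 20):
    twin.assimilate(t, np.exp(-t) + 0.05 * t)  # synthetic measurements
print(twin.predict(2.5))  # close to exp(-2.5) + 0.125
```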

    Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations

    Automated methods of real-time, unobtrusive human ambulation, activity, and wellness monitoring, and the analysis of the resulting data using various algorithmic techniques, have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have produced a large body of literature. This paper presents a holistic articulation of the research and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device frameworks and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and comprehensive review of the literature in the area in order to identify research gaps and prioritize future research directions.

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (multicore, multiprocessor, GPU, and cloud computing). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
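
    For reference, the canonical global-best PSO of Kennedy and Eberhart fits in a few lines. The sketch below uses common default values for the inertia and acceleration coefficients; these are conventional choices, not recommendations from the survey.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical (global-best) PSO minimising f over box bounds.
    w is the inertia weight, c1/c2 the cognitive and social coefficients."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]                     # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Example: minimise the 2-D sphere function
print(pso(lambda z: float(np.sum(z**2)), [(-5, 5), (-5, 5)]))
```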

    Remaining Useful Life Prediction for Railway Switch Engines Using Classification Techniques

    A highly available infrastructure is a prerequisite for reliable, high-quality railway operation, and maintenance is necessary to keep railway infrastructure elements available. Railway switches are especially critical because they connect different tracks and allow a train to change direction without stopping. Their inspection, maintenance and repair have long been identified as a cost driver, and switch failures in particular are responsible for a comparably high number of failures and delay minutes. Reducing these failures would not only save maintenance costs but also let more trains arrive on time, increasing the attractiveness of rail transport. Upcoming failures therefore need to be revealed early enough to allow effective planning and execution of failure-preventing maintenance activities, and research is exploring ways to predict the remaining useful life of switches. This paper presents an approach to predict the remaining useful life (RUL) of railway switch engines. The development is based on measurement data of the electrical power consumption of switch engines: a two-year time series from 29 switches of Deutsche Bahn, recorded by a commercial switch diagnostic system, yielding roughly 250 000 measurement tuples. Since earlier research showed that the electrical data alone is not sufficient, additional data is integrated to account for the dependency of the switch condition data on climatic conditions and on properties of the switch construction type. Predicting an RUL is challenging in many PHM applications. To avoid common problems with uncertainty in the measurement data and to strengthen end-user acceptance, the approach uses a long prediction horizon (months) composed of small time units (hours) and transforms the RUL prediction problem into a multi-class classification problem. It then uses two supervised classification techniques, Artificial Neural Networks (ANN) and Support Vector Machines (SVM), to predict the RUL in the form of classes. However, as the no-free-lunch theorem of classification states, there is no universally best-performing technique: success depends on the problem and data structure as well as on the parametrisation of the technique or the selected algorithm. ANN and SVM in particular have a high number of possible parametrisations and, depending heavily on that parametrisation, can fail the task or perform very well. Hence, an important aspect of this paper is to share how the different parameters affect the RUL prediction and which parameters yield maximum performance. To compare performance, two metrics are chosen: the Matthews Correlation Coefficient (MCC) as a single-value metric and a visualisation of the confusion matrix as a more comprehensible metric. Finally, deriving the parameters that maximise the RUL prediction results enables one of the two classification techniques to reveal upcoming failures of the switch engine early enough to prevent them.
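
    The class-transformation and evaluation steps described above can be sketched as follows: continuous RUL values are binned into ordinal classes, a classifier is trained, and performance is reported via the MCC and the confusion matrix. The class edges, the synthetic features, and the SVM parameters are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef, confusion_matrix
from sklearn.svm import SVC

# Hypothetical class edges (hours): RUL is binned into classes such as
# "fails within a day", "within a week", "within a month", "healthy"
edges = [24, 168, 720]

def rul_to_class(rul_hours):
    """Transform a continuous RUL target into ordinal class labels."""
    return np.digitize(rul_hours, edges)

# Synthetic stand-in for power-consumption features and RUL labels
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
rul = np.abs(X[:, 0]) * 400 + rng.uniform(0, 100, 500)
y = rul_to_class(rul)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X[:400], y[:400])
pred = clf.predict(X[400:])
print("MCC:", matthews_corrcoef(y[400:], pred))
print(confusion_matrix(y[400:], pred))
```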

    A fault detection method for railway point systems

    Failures of railway point systems (RPSs) often lead to service delays or hazardous situations. A condition monitoring system can be used by railway infrastructure operators to detect early signs of deteriorated RPS condition and thereby prevent failures. This paper presents a methodology for early detection of changes in the measurement of the current drawn by the motor of the point operating equipment (POE) of an RPS, which can be used to warn of a possible failure in the system. The proposed methodology uses the one-class support vector machine classification method with the edit distance with real penalty (ERP) as the similarity measure. The technique has been developed taking into account specific features of data from in-field RPSs and is therefore able to detect changes in the measurements of the POE current with greater accuracy than the commonly used threshold-based technique. Data from in-field RPSs relating to incipient failures were used after deficiencies in the data labelling had been removed using expert knowledge. In addition, possible improvements to the proposed methodology were identified so that it can be used as an automatic online condition monitoring system.
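
    A minimal sketch of the core ingredients named above: an edit distance with real penalty (ERP) between current traces, turned into a similarity for a one-class SVM via a distance-substitution kernel. The gap value, kernel width, and synthetic traces are illustrative assumptions, and the distance-substitution kernel is a common trick that is not guaranteed positive semi-definite, not the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def erp(x, y, g=0.0):
    """Edit distance with real penalty between two 1-D current traces;
    g is the gap reference value."""
    n, m = len(x), len(y)
    D = np.zeros((n + 1, m + 1))
    D[1:, 0] = np.cumsum(np.abs(np.asarray(x) - g))
    D[0, 1:] = np.cumsum(np.abs(np.asarray(y) - g))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i-1, j-1] + abs(x[i-1] - y[j-1]),
                          D[i-1, j] + abs(x[i-1] - g),
                          D[i, j-1] + abs(y[j-1] - g))
    return D[n, m]

def gram(series_a, series_b, sigma=1.0):
    """Distance-substitution kernel K = exp(-ERP/sigma)."""
    return np.exp(-np.array([[erp(a, b) for b in series_b]
                             for a in series_a]) / sigma)

# Train on healthy motor-current traces only (synthetic stand-ins)
rng = np.random.default_rng(2)
healthy = [np.sin(np.linspace(0, 6, 50)) + rng.normal(0, 0.05, 50)
           for _ in range(20)]
oc = OneClassSVM(kernel="precomputed", nu=0.1).fit(gram(healthy, healthy))

test = [healthy[0], healthy[0] + 0.8]  # second trace drifts upward
print(oc.predict(gram(test, healthy)))  # +1 = normal, -1 = change detected
```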