A tribute to the historian Vicent Llombart
This tribute shows how the initial training that Vicent Llombart received in the Seminar on the history of economic thought at the University of Valencia was decisive, but not sufficient, for understanding a career distinguished by its contributions. His determined sense of purpose, his firm personal discipline, and his constant intellectual openness help us to better understand a great historian.
On the design of Neutral Scanning Helium Atom Microscopes (SHeM): Optimal configurations and evaluation of experimental findings
Scanning Helium Microscopes (SHeMs) are novel microscopy tools that use neutral helium atoms as the imaging probe. Helium atoms have several advantages compared to other probes such as electrons or helium ions. Helium atoms are neutral and inert, and compared to electrons their higher mass leads to a smaller de Broglie wavelength for a given energy. Furthermore, helium atoms are strictly surface sensitive, scattering off the electron density distribution of the surface. These combined properties allow for non-destructive mapping of the surface of virtually any vacuum-compatible solid sample. Helium ions have a similar mass, but they interact more strongly with the sample because they are not inert, and they require much higher energies to achieve electrostatic focusing. Charge neutrality makes helium a great imaging corpuscle, but it also means that designing SHeMs is very difficult. Neutral helium atoms are very hard to manipulate, as electromagnetic fields cannot be used to focus and redirect the beam; instead, one needs to use diffraction optics and apertures. They are also hard to detect, because helium has the highest ionisation potential of all atoms, which hinders the task of ionisation-based detectors. Therefore, to have a functioning microscope, one needs to form a highly intense atom beam. This thesis presents the work done over the last years to optimise the intensity of SHeMs and, more generally, their atom-optics configuration. Amongst the papers included here are the first to show that SHeM optics have well-defined intensity maxima that give optimal designs. These papers show that existing designs were suboptimal and that the intensity could be increased by several orders of magnitude. This thesis also features the first paper to present a design for a 3D imaging SHeM: a true nano-scale stereo microscope based on heliometric stereo, a technique adapted from light microscopy.
Besides these theoretical papers, two papers are included that focus on understanding the helium beam using experimental data. These papers are important as they provide the experimental foundations for the theoretical models used. Amongst other findings, the papers explore the importance of the Knudsen number at the skimmer, the validity of different intensity models, and the top-hat profile of the beam. The research presented here happened in parallel to a two-order-of-magnitude improvement in detector efficiency. I believe that we are now in a position to build high-resolution SHeMs that have the potential to become an important tool for science and industry.
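The mass advantage mentioned above can be made concrete: for a given kinetic energy, the de Broglie wavelength scales as 1/√m, so a helium atom's wavelength is about √(m_He/m_e) ≈ 85 times shorter than an electron's. A minimal sketch (the 64 meV beam energy is a typical thermal-beam value, assumed here for illustration):

```python
import math

H = 6.62607015e-34      # Planck constant, J s
EV = 1.602176634e-19    # 1 eV in joules
M_E = 9.1093837e-31     # electron mass, kg
M_HE = 6.6464731e-27    # helium-4 atom mass, kg

def de_broglie(mass_kg, energy_ev):
    """lambda = h / sqrt(2 m E) for a non-relativistic particle."""
    return H / math.sqrt(2.0 * mass_kg * energy_ev * EV)

# Typical thermal helium beam energy, ~64 meV.
lam_he = de_broglie(M_HE, 0.064)   # ~0.057 nm: sub-atomic resolution limit
lam_e = de_broglie(M_E, 0.064)     # ~4.8 nm at the same (very low) energy
print(f"helium:   {lam_he * 1e9:.3f} nm")
print(f"electron: {lam_e * 1e9:.3f} nm")
```

This is why a gentle, low-energy helium beam can in principle resolve features that an equally low-energy electron beam cannot.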
Distributed Collaborative Prognostics
Managing large fleets of machines in a cost-effective way is becoming more important as corporations own increasingly large numbers of assets. The steady improvement in the cost and reliability of sensors, processors and communication devices has helped the spread of a new paradigm: the Internet of Things. This paradigm allows for real-time monitoring of countless physical objects, obtaining data that can be fed to machine learning algorithms to predict their future state and support managerial decisions.
Despite rapid technological change, industries have been slow to react, and it has been only recently that many have transitioned towards a new business model: servitisation. Servitisation is based on selling the services that assets provide, instead of the assets themselves. Although more companies are adopting this business model, there is a lack of solutions aimed at maximising its economic value. This thesis presents one such solution, capable of predicting failures in real time and thus reducing a crucial cost contribution to asset ownership: unexpected failures. This new approach, Distributed Collaborative Prognostics, consists of providing each machine with its own agent, which enables it to communicate with other similar machines in order to improve its failure predictions.
This thesis implements Distributed Collaborative Prognostics in three different scenarios: (i) using a multi-agent simulation framework, (ii) using synthetic data from a well-established prognostics data set, and (iii) using real data from a fleet of industrial gas turbines. Each of these scenarios is used to study different elements of the prognostics problem. Multi-agent simulations allow for the calculation of the cost of predictive maintenance coupled with Distributed Collaborative Prognostics, and for the estimation of the cost of agent failures in different architectures. Synthetic data is used as a test bench and to study assets operating in dynamic situations. Real industrial data from the Siemens industrial gas turbine fleet serves to test the applicability of the tool in a real scenario.
This thesis concludes that Distributed Collaborative Prognostics is the adequate solution for large and heterogeneous fleets of assets operating dynamically. Its cost-effectiveness depends on the value of the assets; in general, highly valued assets are more conducive to Distributed Collaborative Prognostics, as the savings from improved failure predictions compensate for the cost of enabling them with Internet of Things technologies. This PhD thesis has been supported by a "la Caixa" Fellowship (ID 100010434), with code LCF/BQ/EU17/11590049.
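The core idea — one agent per machine, pooling degradation data with similar peers to sharpen its own failure prediction — can be sketched in a few lines. This is an illustrative toy, not the thesis's implementation: the linear health index, the rate-based similarity rule, and all thresholds are assumptions made for the example.

```python
import statistics

class Agent:
    """One agent per machine; pools data with similar peers (illustrative sketch)."""

    def __init__(self, machine_id, health_history):
        self.machine_id = machine_id
        self.health = health_history  # health index per time step, 1.0 = new

    def degradation_rate(self):
        # Average health lost per time step over the observed history.
        return (self.health[0] - self.health[-1]) / (len(self.health) - 1)

    def similar_peers(self, fleet, tol=0.2):
        # "Similar" here means a degradation rate within 20% of our own (assumed rule).
        mine = self.degradation_rate()
        return [a for a in fleet
                if a is not self
                and abs(a.degradation_rate() - mine) <= tol * mine]

    def predicted_rul(self, fleet, failure_threshold=0.2):
        # Pool degradation rates across similar peers to stabilise the estimate,
        # then extrapolate remaining useful life (RUL) to the failure threshold.
        rates = [self.degradation_rate()]
        rates += [a.degradation_rate() for a in self.similar_peers(fleet)]
        rate = statistics.mean(rates)
        return max(0.0, (self.health[-1] - failure_threshold) / rate)

# Four machines: three degrade at ~1%/step, one outlier degrades at 5%/step.
fleet = [Agent(i, [1.0 - r * t for t in range(10)])
         for i, r in enumerate([0.010, 0.011, 0.009, 0.050])]
rul = fleet[0].predicted_rul(fleet)  # outlier is excluded from the pooled estimate
```

Because the dissimilar machine is filtered out before pooling, a single anomalous peer does not corrupt the fleet-wide estimate — the property that makes collaboration useful for heterogeneous fleets.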
Multi-agent system architectures for collaborative prognostics
This paper provides a methodology to assess the optimal Multi-Agent architecture for collaborative prognostics in modern fleets of assets. The use of Multi-Agent Systems has been shown to improve the ability to predict equipment failures by enabling machines with communication and collaborative learning capabilities. Different architectures have been postulated for industrial Multi-Agent Systems in general. A rigorous analysis of the implications of their implementation for collaborative prognostics is essential to guide industrial deployment. In this paper, we investigate the cost and reliability implications of using different Multi-Agent Systems architectures for collaborative failure prediction and maintenance optimization in large fleets of industrial assets. Results show that purely distributed architectures are optimal for high-value assets, while hierarchical architectures optimize communication costs for low-value assets. This enables asset managers to design and implement Multi-Agent Systems for predictive maintenance that significantly decrease the whole-life cost of their assets. The project that has generated these results has been supported by a "la Caixa" Fellowship (ID 100010434), with code LCF/BQ/EU17/11590049. This research was partly supported by Siemens Industrial Turbomachinery UK. This research was also partly supported by the Next Generation Converged Digital Infrastructure project (EP/R004935/1) funded by the Engineering and Physical Sciences Research Council and BT. The server used to perform the experiments in this paper was funded by the Centre for Digital Built Britain.
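The architecture trade-off reported above can be illustrated with a toy whole-life cost model. All constants below are placeholders chosen for the example, not the paper's calibrated values; the sketch only reproduces the qualitative crossover: peer-to-peer communication scales quadratically with fleet size, but avoiding a central single point of failure pays off once each asset is valuable enough.

```python
# Illustrative constants (assumed for this sketch, not taken from the paper).
LINK_COST = 10.0        # cost per communication link over the asset's life
EDGE_NODE_COST = 500.0  # full agent hardware on each asset
SENSOR_COST = 100.0     # thin-client hardware on each asset
SERVER_COST = 20_000.0  # central supervisor for the hierarchical case

def whole_life_cost(n_assets, asset_value, architecture):
    """Toy whole-life cost: hardware + communication + expected downtime loss."""
    if architecture == "distributed":
        comm = LINK_COST * n_assets * (n_assets - 1) / 2  # peer-to-peer links
        hardware = EDGE_NODE_COST * n_assets
        downtime_risk = 0.01  # no single point of failure
    else:  # "hierarchical"
        comm = LINK_COST * n_assets                       # one uplink per asset
        hardware = SENSOR_COST * n_assets + SERVER_COST
        downtime_risk = 0.03  # a supervisor outage affects the whole fleet
    expected_downtime = downtime_risk * n_assets * asset_value
    return comm + hardware + expected_downtime

# High-value assets favour the distributed architecture, low-value the hierarchical.
best_high = min(("distributed", "hierarchical"),
                key=lambda a: whole_life_cost(100, 1_000_000, a))
best_low = min(("distributed", "hierarchical"),
               key=lambda a: whole_life_cost(100, 1_000, a))
```

Under these assumed numbers the crossover sits where the extra downtime risk of a central supervisor outweighs the quadratic peer-to-peer communication bill, mirroring the paper's high-value/low-value split.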
Exploiting traffic data to improve asset management and citizen quality of life
The main goal of this project was to demonstrate how large data sources such as Google Maps can be used to inform transportation-related asset management decisions. Specifically, we investigated how the interdependence between infrastructures and assets can be studied using transportation data and heat maps. This involves linking the effect of disruptions in lower-order assets to travel accessibility to private and public infrastructure. To demonstrate the viability of our approach, we conducted five case studies, three public and two private. On the public side, we collaborated with two county councils in the United Kingdom, Cambridgeshire and Hertfordshire, and offered solutions to existing infrastructure-related problems proposed by them. For Cambridgeshire, we analysed the accessibility of Cambridge University's new research centres and the criticality of roads leading to Addenbrooke's Hospital in Cambridge. Similarly, for Hertfordshire, the accessibility of different critical assets in the county was examined with the aim of supporting planning decisions. In addition, to highlight how our approach can bring benefits to private citizens, we solved two examples of commuting-related problems posed by students at the Institute for Manufacturing (IfM). We conclude that heat maps generated using the Google Maps API are powerful and efficient tools for use in infrastructure asset management. Our approach appears to be more cost-efficient and offers a higher quality of visualisation and presentation than other available tools. Furthermore, there exists the potential for a commercial spin-off: our approach can be employed by local, regional and national administrations to inform infrastructure-related decision-making, and can be used by commercial parties to improve employees' commutes, parking, et cetera. This project was supported by the Centre for Digital Built Britain.
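One way such a travel-time heat map can be assembled is by querying the Google Maps Distance Matrix API from a grid of origin points to a critical asset; each cell's travel time becomes one heat-map value. The endpoint below is the real Distance Matrix URL, but the grid construction, the approximate hospital coordinates, and the `YOUR_KEY` placeholder are assumptions for this sketch — the report's actual pipeline may differ.

```python
from urllib.parse import urlencode

API_URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

def origin_grid(centre_lat, centre_lng, half_span_deg, n):
    """n x n grid of origin points around a centre, one heat-map cell each."""
    step = 2 * half_span_deg / (n - 1)
    return [(centre_lat - half_span_deg + i * step,
             centre_lng - half_span_deg + j * step)
            for i in range(n) for j in range(n)]

def distance_matrix_url(origins, destination, api_key):
    """Build a Distance Matrix request for driving times to one destination."""
    params = {
        "origins": "|".join(f"{lat:.5f},{lng:.5f}" for lat, lng in origins),
        "destinations": f"{destination[0]:.5f},{destination[1]:.5f}",
        "mode": "driving",
        "key": api_key,
    }
    return f"{API_URL}?{urlencode(params)}"

# Addenbrooke's Hospital, Cambridge (approximate coordinates, for illustration).
hospital = (52.1751, 0.1402)
grid = origin_grid(52.20, 0.12, half_span_deg=0.05, n=5)  # 25 heat-map cells
url = distance_matrix_url(grid[:10], hospital, api_key="YOUR_KEY")
```

Fetching each URL and colouring grid cells by the returned `duration` fields yields the accessibility heat map; dropping a road link and re-querying exposes its criticality as the change in travel times.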