
    A Bayes Linear Approach to Making Inferences from X-rays

    X-ray images are often used to make inferences about physical phenomena, and the entities about which inferences are made are complex. The Bayes linear approach is a generalisation of subjective Bayesian analysis suited to uncertainty quantification for complex systems, and is therefore an appropriate tool for making inferences from X-ray images. In this thesis, I propose methodology for making inferences about quantities which may be organised as multivariate random fields. A number of problems are addressed: anomaly detection, emulation, inverse problem solving and transferable databases. Anomaly detection is deciding whether a new observation belongs to the same population as a reference population; emulation is the task of building a statistical model of a complex computer model; inverse problem solving is the task of making inferences about system values given an observation of system behaviour; and transferable databases is the task of using a data-set created with a simulator to make inferences about physical phenomena. The methods used to address these problems are exemplified with applications from the X-ray industry: anomaly detection is used to identify plastic contaminants in chocolate bars, emulation is used to efficiently predict the scatter present in an X-ray image, inverse problem solving is used to infer an entity's composition from an X-ray image, and transferable databases are used to improve image quality and return diagnostic measures from clinical X-ray images. The Bayes linear approach to making inferences from an X-ray image enables improvements over state-of-the-art approaches to high-impact problems.
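
    As a concrete illustration of the Bayes linear machinery named above, the sketch below implements the standard Bayes linear adjusted expectation and variance, E_D(X) = E(X) + Cov(X,D) Var(D)^{-1} (d - E(D)) and Var_D(X) = Var(X) - Cov(X,D) Var(D)^{-1} Cov(D,X). The prior specifications and observed data in the example are hypothetical placeholders, not quantities from the thesis.

```python
# Minimal sketch of the Bayes linear adjustment (not the thesis's full methodology).
import numpy as np

def bayes_linear_adjust(E_X, E_D, Var_X, Var_D, Cov_XD, d):
    """Return the adjusted expectation and variance of X given observed data d."""
    solve = np.linalg.solve                        # applies Var(D)^{-1} via a linear solve
    E_adj = E_X + Cov_XD @ solve(Var_D, d - E_D)
    Var_adj = Var_X - Cov_XD @ solve(Var_D, Cov_XD.T)
    return E_adj, Var_adj

# Toy example with hypothetical prior specifications (illustrative numbers only).
E_X = np.array([0.0])                              # prior expectation of the system quantity
E_D = np.array([1.0, 1.0])                         # prior expectation of the observable data
Var_X = np.array([[4.0]])
Var_D = np.array([[2.0, 0.5], [0.5, 2.0]])
Cov_XD = np.array([[1.5, 1.0]])
d = np.array([2.0, 0.5])                           # observed data

E_adj, Var_adj = bayes_linear_adjust(E_X, E_D, Var_X, Var_D, Cov_XD, d)
print(E_adj, Var_adj)
```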

    Maintenance Management of Wind Turbines

    “Maintenance Management of Wind Turbines” considers the main concepts and the state of the art, as well as advances and case studies, on this topic. Maintenance is a critical variable for industrial competitiveness and, together with operations, the most important variable in the wind energy industry. Correct management of corrective, predictive and preventive maintenance policies for any wind turbine is therefore required. The content also includes original research works complementary to other sub-disciplines, such as economics, finance, marketing, decision and risk analysis, and engineering, in the maintenance management of wind turbines. The book focuses on real case studies concerning topics such as failure detection and diagnosis, fault trees and related techniques (e.g., FMECA and FMEA). Most of them link these topics with financial, scheduling, resource and downtime considerations in order to increase productivity, profitability, maintainability, reliability, safety and availability, and to reduce costs and downtime in a wind turbine. Advances in mathematics, models, computational techniques and dynamic analysis are employed in maintenance-management analytics throughout the book. Finally, the book considers computational techniques, dynamic analysis, probabilistic methods, and mathematical optimization techniques that are blended to support the analysis of multi-criteria decision-making problems with defined constraints and requirements.

    Deep Learning Based Novelty Detection

    Given a set of image instances from known classes, the goal of novelty detection is to determine whether an observed image during inference belongs to one of the known classes. In this thesis, deep learning-based approaches to novelty detection are studied under four different settings. In the first two settings, the availability of out-of-distribution (OOD) data is assumed. With this assumption, novelty detection can be studied separately for cases with multiple known classes and with a single known class. The thesis further explores the problem in a more constrained setting where only data from the known classes are available for training. Finally, we study a practical application of novelty detection in mobile Active Authentication (AA), where latency and efficiency are as important as detection accuracy.
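
    For orientation, the sketch below shows a common novelty-detection baseline (thresholding the maximum softmax probability of a classifier trained on the known classes). It is illustrative only and is not the deep learning approach developed in the thesis; the model, batch and threshold are assumed placeholders.

```python
# A minimal baseline: low classifier confidence on the known classes => likely novel.
import torch
import torch.nn.functional as F

def novelty_scores(model, images):
    """Higher score => more likely the image comes from an unknown (novel) class."""
    with torch.no_grad():
        logits = model(images)                              # shape: (batch, num_known_classes)
        max_prob = F.softmax(logits, dim=1).max(dim=1).values
    return 1.0 - max_prob                                   # invert confidence to get a novelty score

# Usage (assuming `model` is a trained classifier and `x` a batch of images):
# scores = novelty_scores(model, x)
# is_novel = scores > 0.5                                   # threshold chosen on validation data
```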

    Semi-supervised and unsupervised kernel-based novelty detection with application to remote sensing images

    The main challenge of new information technologies is to retrieve intelligible information from the large volume of digital data gathered every day. Among the variety of existing data sources, the satellites continuously observing the surface of the Earth are key to the monitoring of our environment. The new generation of satellite sensors is tremendously increasing the possibilities for applications, but also the need for efficient processing methodologies to extract information relevant to users' needs in an automatic or semi-automatic way. This is where machine learning comes into play, transforming complex data into simplified products such as maps of land-cover changes or classes by learning from data examples annotated by experts. These annotations, also called labels, may be difficult or costly to obtain since they are established on the basis of ground surveys. As an example, it is extremely difficult to access a region recently flooded or affected by wildfires. In these situations, the detection of changes has to be done with only annotations from unaffected regions. In a similar way, it is difficult to have information on all the land-cover classes present in an image while being interested in the detection of a single class of interest. These challenging situations are called novelty detection or one-class classification in machine learning. Here the learning phase has to rely on only a very limited set of annotations, but it can exploit the large set of unlabeled pixels available in the images. This setting, called semi-supervised learning, allows significant improvements in detection. In this thesis we address the development of methods for novelty detection and one-class classification with few or no labels. The proposed methodologies build upon kernel methods, which provide a principled yet flexible framework for learning from data with potentially non-linear feature relations. The thesis is divided into two parts, each making a different assumption on the data structure and both addressing unsupervised (automatic) and semi-supervised (semi-automatic) learning settings. The first part assumes the data to be formed by arbitrarily shaped and overlapping clusters and studies the use of kernel machines such as Support Vector Machines and Gaussian Processes. An emphasis is put on robustness to noise and outliers and on the automatic retrieval of parameters. Experiments on multi-temporal multispectral images for change detection are carried out using only information from unchanged regions, or none at all. The second part assumes high-dimensional data to lie on multiple low-dimensional structures, called manifolds. We propose a method seeking a sparse and low-rank representation of the data mapped into a non-linear feature space. This representation allows us to build a graph, which is cut into several groups using spectral clustering. For the semi-supervised case, where a few labels from one class of interest are available, we study several approaches incorporating the graph information: the class labels can be propagated on the graph, used to constrain spectral clustering, or used to train a one-class classifier regularized by the given graph. Experiments on the unsupervised and one-class classification of hyperspectral images demonstrate the effectiveness of the proposed approaches.
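
    For readers unfamiliar with kernel-based one-class classification, the snippet below illustrates the idea with scikit-learn's OneClassSVM on synthetic feature vectors. The data, features and parameter values are hypothetical, and the thesis's semi-supervised and graph-based extensions are not reproduced here.

```python
# One-class classification with a kernel machine: learn the support of the "normal"
# population from unchanged samples only, then flag deviating samples as novelties.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical "unchanged" pixels: feature vectors drawn from a reference region.
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
# New pixels to test: a mix of unchanged samples and anomalous (changed) ones.
X_test = np.vstack([rng.normal(0.0, 1.0, size=(50, 4)),
                    rng.normal(4.0, 1.0, size=(50, 4))])

# The RBF kernel captures non-linear feature relations; nu bounds the outlier fraction.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)
pred = clf.predict(X_test)          # +1 = same population as training data, -1 = novelty
print("fraction flagged as novel:", (pred == -1).mean())
```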

    Machine learning for advanced characterisation of silicon solar cells

    Improving the efficiency, reliability, and durability of photovoltaic cells and modules is key to accelerating the transition towards a carbon-free society. With tens of millions of solar cells manufactured every day, this thesis aims to leverage the available characterisation data to identify defects in solar cells using powerful machine learning techniques. Firstly, it explores temperature- and injection-dependent lifetime data to characterise bulk defects in silicon solar cells. Machine learning algorithms were trained to model the inverse function of the recombination statistics and predict the defect parameters. The proposed image representation of lifetime data, combined with access to powerful deep learning techniques, surpasses traditional defect parameter extraction techniques and enables the extraction of temperature-dependent defect parameters. Secondly, it makes use of end-of-line current-voltage measurements and luminescence images to demonstrate how luminescence imaging can satisfy the needs of end-of-line binning. A deep learning framework correlates cell efficiency with the luminescence image and shows that luminescence-based binning does not impact the mismatch losses of the fabricated modules while having a greater capability of detecting defects in solar cells. The framework is demonstrated in multiple transfer learning and fine-tuning applications such as half-cut and shingled cells. The method is then extended to automated efficiency-loss analysis, where a new deep learning framework identifies the defective regions in the luminescence image and their impact on the overall cell efficiency. Finally, it presents a machine learning algorithm to model the relationship between input process parameters and output efficiency, identifying the recipe for the highest solar cell efficiency with the help of a genetic algorithm optimiser. The development of machine learning-powered characterisation unlocks new insight and brings the photovoltaic industry to the next level, making the most of the available data to accelerate the rate of improvement of solar cell and module efficiency while identifying the potential defects impacting their reliability and durability.
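
    The surrogate-plus-genetic-algorithm idea in the final part of the abstract can be sketched as follows: fit a regression model from process parameters to efficiency, then evolve candidate recipes against the surrogate. All data, parameter bounds and the simple GA below are hypothetical illustrations, not the thesis's actual models.

```python
# Sketch: random-forest surrogate of the process -> efficiency relationship,
# searched with a simple genetic algorithm to propose a high-efficiency recipe.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(1000, 5))                 # recorded process parameters (scaled)
y = 20 + X @ rng.normal(size=5) - (X**2).sum(1)       # stand-in for measured efficiency (%)
surrogate = RandomForestRegressor(n_estimators=200).fit(X, y)

def genetic_search(model, n_params=5, pop=50, gens=40, mut=0.1):
    """Evolve a population of candidate recipes towards higher predicted efficiency."""
    P = rng.uniform(0, 1, size=(pop, n_params))
    for _ in range(gens):
        fitness = model.predict(P)
        parents = P[np.argsort(fitness)[-pop // 2:]]                   # keep the best half
        children = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = np.clip(children + rng.normal(0, mut, children.shape), 0, 1)
        P = np.vstack([parents, children])
    return P[np.argmax(model.predict(P))]

best_recipe = genetic_search(surrogate)
print(best_recipe, surrogate.predict(best_recipe[None])[0])
```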

    Studies of Space Weather Phenomena

    Not available.

    Towards A Computational Intelligence Framework in Steel Product Quality and Cost Control

    Steel is a fundamental raw material for all industries. It is widely used in various fields, including construction, bridges, ships, containers, medical devices and cars. However, the production process of iron and steel is very complex, consisting of four stages: ironmaking, steelmaking, continuous casting and rolling. It is also extremely complicated to control the quality of steel throughout the full manufacturing process, so quality control is considered a huge challenge for the whole steel industry. This thesis studies quality control, taking the case of Nanjing Iron and Steel Group, and provides new approaches for quality analysis, management and control in the industry. At present, Nanjing Iron and Steel Group has established a quality management and control system which oversees many of the systems involved in steel manufacturing. It places high statistical demands on business professionals, resulting in limited use of the system. A large amount of quality data has been collected in each system, but all systems mainly focus on processing and analysing data after the manufacturing process, and product quality problems are mainly detected by sampling-based experimental methods. This approach cannot detect product quality issues, or predict hidden quality issues, in a timely manner. In the quality control system, the responsibilities and functions of the different information systems involved are intricate. Each information system is merely responsible for storing the data of its corresponding functions, so the data in each information system is relatively isolated, forming data islands. Iron and steel production is a process industry, and the data in the multiple information systems can be combined to analyse and predict product quality in depth and to provide early warning alerts. It is therefore necessary to introduce new product quality control methods in the steel industry. With the waves of Industry 4.0 and intelligent manufacturing, intelligent technology has also been introduced in the field of quality control to improve the competitiveness of iron and steel enterprises. Applying intelligent technology can generate accurate quality analysis and optimal prediction results based on data distributed across the factory and determine online adjustments of the production process. This not only improves product quality control but also helps reduce product costs. Inspired by this, this thesis provides an in-depth discussion in three chapters: (1) chapter 3 studies how to use artificial intelligence algorithms to evaluate the quality grade of scrap steel used as raw material; (2) chapter 4 studies the probability that longitudinal cracks occur on the surface of continuous casting slabs; (3) chapter 5 studies the prediction of the mechanical properties of finished steel plate. All three chapters serve as technical support for quality control in iron and steel production.
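
    A minimal sketch of the kind of model suggested by the chapter 5 task (predicting mechanical properties of finished steel plate) is given below, using a gradient-boosted regressor on hypothetical process features. It is not the Nanjing Iron and Steel Group data or the exact method used in the thesis.

```python
# Sketch: regress a mechanical property on process variables with gradient boosting.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
# Hypothetical process features: carbon content, rolling temperature, cooling rate, thickness.
X = rng.uniform([0.05, 800, 5, 10], [0.25, 950, 30, 50], size=(2000, 4))
# Stand-in target: tensile strength (MPa) with measurement noise.
y = 300 + 900 * X[:, 0] + 0.2 * (950 - X[:, 1]) + 3 * X[:, 2] + rng.normal(0, 10, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)
print("MAE (MPa):", mean_absolute_error(y_te, model.predict(X_te)))
```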

    Investigation of electronic and magnetic responses in topological semimetals

    Numerous advancements and benefits of the digital age have been made possible by the advent of computers, the result of countless efforts by researchers. The rate at which tasks are completed has picked up significantly, while at the same time the size of these devices continues to shrink. When it became clear that even the silicon industry would soon reach saturation, the research community became aware of the need to look for alternative solutions. Beyond boosting the speed of computers and reducing the storage space they occupy, there is another significant obstacle to overcome: the conservation of energy. With the world in the midst of a significant energy crisis, now is the best time for researchers to look into ways of reducing energy consumption. We are optimistic that the investigation of topological materials can contribute to the solution of a good deal of these issues (Figure 1.1). It is anticipated that up to 24 % of all materials may have some topological features [2]. As a consequence, the range of possible applications can be increased owing to the wide variety of materials available. Over the last decade, the growth of this class of materials has driven a sea change in condensed matter physics [3]. Topological materials have the potential to bring scientists one step closer to practical applications of unusual phases, including the potential to revolutionize electronics and catalysis, and they provide additional hope of addressing the energy crisis. Before applications can be developed, however, it is necessary to identify suitable materials and to study the physical phenomena associated with them. A variety of topological materials are currently being re-examined for use in improved thermoelectric devices, improved catalytic processes and various spintronic devices, and researchers are also looking for new materials for technical applications in these fields. With this motivation, in this PhD thesis several topological semimetals were synthesized to investigate their electronic and magnetic responses, and a search for new topological materials with intriguing physical properties was also carried out.

    Sensitivity study and first prototype tests for the CHIPS neutrino detector R&D program

    CHIPS (CHerenkov detectors In mine PitS) is an R&D project aiming to develop novel cost-effective detectors for long-baseline neutrino oscillation experiments. Water Cherenkov detector modules will be submerged in an existing lake in the path of an accelerator neutrino beam, eliminating the need for expensive excavation. In a staged approach, the first detectors will be deployed in a flooded mine pit in northern Minnesota, 7 mrad off-axis from the existing NuMI beam. A small proof-of-principle model (CHIPS-M) has already been tested, and the first stage of a fully functional 10 kt module (CHIPS-10) is planned for 2018. The main physics aim is to measure the CP-violating neutrino mixing phase (δCP). A sensitivity study was performed with the GLoBES package, using results from a dedicated detector simulation and a preliminary reconstruction algorithm. The predicted physics reach of CHIPS-10 and of potential bigger modules is presented and compared with currently running experiments and future projects. One of the instruments submerged on board CHIPS-M in autumn 2015 was a prototype detection unit constructed at Nikhef. The unit contains hardware borrowed from the KM3NeT experiment, including sixteen 3-inch photomultiplier tubes and readout electronics. In addition to testing the mechanical design and data acquisition, the detector was used to record a large sample of cosmic-ray muon events. A preliminary analysis of the collected data was performed in order to measure the cosmic background interaction rates and validate the Monte Carlo simulation used to optimise future designs. The first in situ measurement of the cosmic muon rate at the bottom of the Wentworth Pit is presented, and extrapolated values for CHIPS-10 show that the dead time due to muons is below 0.3 %.
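
    The dead-time figure quoted above is, in essence, the product of the muon rate and the per-event dead-time window. The sketch below shows that back-of-the-envelope calculation with hypothetical numbers, not the measured Wentworth Pit rate or the actual CHIPS readout parameters.

```python
# Back-of-the-envelope dead-time estimate: fraction of live time lost to cosmic muons.
muon_rate_hz = 250.0          # assumed cosmic muon rate at the detector (Hz) - placeholder value
veto_window_s = 10e-6         # assumed dead time per tagged muon (seconds) - placeholder value

dead_time_fraction = muon_rate_hz * veto_window_s
print(f"dead time ≈ {dead_time_fraction:.2%}")   # ≈ 0.25 % with these assumed values
```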