Tissue clearing
Tissue clearing of gross anatomical samples was first described more than a century ago but has only recently found widespread use in the field of microscopy. This renaissance has been driven by the application of modern knowledge of optical physics and chemical engineering to the development of robust and reproducible clearing techniques, the arrival of new microscopes that can image large samples at cellular resolution, and computing infrastructure able to store and analyse large volumes of data. Many biological relationships between structure and function require investigation in three dimensions, and tissue clearing therefore has the potential to enable broad discoveries in the biological sciences. Unfortunately, the current literature is complex and could confuse researchers looking to begin a clearing project. The goal of this Primer is to outline a modular approach to tissue clearing that allows a novice researcher to develop a customized clearing pipeline tailored to their tissue of interest. Furthermore, the Primer outlines the imaging and computational infrastructure needed to perform tissue clearing at scale, gives an overview of current applications, discusses limitations and provides an outlook on future advances in the field.
Development of Mining Sector Applications for Emerging Remote Sensing and Deep Learning Technologies
This thesis uses neural networks and deep learning to address practical, real-world problems in the mining sector. The main focus is on developing novel applications in the area of object detection from remotely sensed data. This area has many potential mining applications and is an important part of moving towards data-driven strategic decision making across the mining sector. The scientific contributions of this research are twofold: firstly, each of the three case studies demonstrates new applications which couple remote sensing and neural network-based technologies for improved data-driven decision making; secondly, the thesis presents a framework to guide the implementation of these technologies in the mining sector, providing a guide for researchers and professionals undertaking further studies of this type. The first case study builds a fully connected neural network method to locate supporting rock bolts in 3D laser scan data. This method combines input features from the remote sensing and mobile robotics research communities, generating accuracy scores up to 22% higher than those found using either feature set in isolation. The neural network approach is also compared to the widely used random forest classifier and is shown to outperform it on the test datasets. Additionally, the algorithm's performance is enhanced by adding a confusion class to the training data and by grouping the output predictions using density-based spatial clustering. The method is tested on two datasets, gathered using different laser scanners, in different types of underground mines with different rock bolting patterns. In both cases the method is found to be highly capable of detecting the rock bolts, with recall scores of 0.87-0.96. The second case study investigates modern deep learning for LiDAR data. Here, multiple transfer learning strategies and LiDAR data representations are examined for the task of identifying historic mining remains.
A transfer learning approach based on a Lunar crater detection model is used, due to the similarities between both the underlying data structures and the geometries of the objects to be detected. The relationship between dataset resolution and detection accuracy is also examined, with the results showing that the approach is capable of detecting pits and shafts to a high degree of accuracy, with precision and recall scores between 0.80 and 0.92, provided the input data is of sufficient quality and resolution. Alongside resolution, different LiDAR data representations are explored, showing that the precision-recall balance varies depending on the input LiDAR data representation. The third case study creates a deep convolutional neural network model to detect artisanal-scale mining from multispectral satellite data. This model is trained from initialisation without transfer learning and demonstrates that accurate multispectral models can be built from a smaller training dataset when appropriate design and data augmentation strategies are adopted. Alongside the deep learning model, novel mosaicing algorithms are developed both to improve cloud cover penetration and to decrease noise in the final prediction maps. When applied to the study area, the results from this model provide valuable information about the expansion, migration and forest encroachment of artisanal-scale mining in southwestern Ghana over the last four years. Finally, this thesis presents an implementation framework for these neural network-based object detection models, to generalise the findings from this research to new mining sector deep learning tasks. This framework can be used to identify applications which would benefit from neural network approaches, to build the models, and to apply these algorithms in a real-world environment.
The case study chapters confirm that the neural network models are capable of interpreting remotely sensed data to a high degree of accuracy on real-world mining problems, while the framework guides the development of new models to solve a wide range of related challenges.
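The first case study ends by grouping per-point network predictions into individual bolt detections using density-based spatial clustering. A minimal pure-Python sketch of that grouping step, assuming a DBSCAN-style density rule; the function name and the `eps`/`min_cluster_size` values are illustrative assumptions, not the thesis's code:

```python
from collections import deque

def cluster_points(points, eps=0.3, min_cluster_size=3):
    """Group 3D points predicted as 'bolt' into distinct detections.

    Points closer than eps are connected; connected groups with at least
    min_cluster_size members become one detection (its centroid); smaller
    groups are discarded as noise, as in density-based spatial clustering.
    """
    def close(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) <= eps ** 2

    unvisited = set(range(len(points)))
    centroids = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = deque([seed]), [seed]
        while queue:  # breadth-first expansion over eps-neighbours
            i = queue.popleft()
            neighbours = [j for j in unvisited if close(points[i], points[j])]
            for j in neighbours:
                unvisited.discard(j)
            queue.extend(neighbours)
            members.extend(neighbours)
        if len(members) >= min_cluster_size:
            dims = len(points[0])
            centroids.append(tuple(
                sum(points[i][d] for i in members) / len(members)
                for d in range(dims)))
    return centroids

# Two tight synthetic 'bolt' clusters plus one isolated noise point.
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0),
       (5.0, 5.0, 5.0), (5.1, 5.0, 5.0), (5.0, 5.1, 5.0),
       (10.0, 10.0, 10.0)]
print(len(cluster_points(pts)))  # 2 detections; the lone point is noise
```

A production pipeline would typically use an existing DBSCAN implementation with a spatial index rather than this quadratic neighbour search.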
Learning the Language of Chemical Reactions – Atom by Atom. Linguistics-Inspired Machine Learning Methods for Chemical Reaction Tasks
Over the last hundred years, little has changed in how organic chemistry is conducted. In most laboratories, the current state is still trial-and-error experimentation guided by human expertise acquired over decades. What if, given all the knowledge published, we could develop an artificial intelligence-based assistant to accelerate the discovery of novel molecules? Although many approaches have recently been developed to generate novel molecules in silico, only a few studies complete the full design-make-test cycle, including the synthesis and the experimental assessment. One reason is that the synthesis part can be tedious and time-consuming, and requires years of experience to perform successfully. Hence, synthesis is one of the critical limiting factors in molecular discovery.
In this thesis, I take advantage of similarities between human language and organic chemistry to apply linguistic methods to chemical reactions and develop artificial intelligence-based tools for accelerating chemical synthesis. First, I investigate reaction prediction models, focusing on small data sets of challenging stereo- and regioselective carbohydrate reactions. Second, I develop a multi-step synthesis planning tool predicting reactants and suitable reagents (e.g. catalysts and solvents). Both the forward prediction and retrosynthesis approaches use black-box models. Hence, I then study methods to provide more information about the models' predictions. I develop a reaction classification model that labels chemical reactions and facilitates the communication of reaction concepts. As a by-product of the classification models, I obtain reaction fingerprints that enable efficient similarity searches in chemical reaction space. Moreover, I study approaches for predicting reaction yields. Lastly, after approaching all chemical reaction tasks with atom-mapping-independent models, I demonstrate the generation of accurate atom-mappings from the patterns my models have learned during self-supervised training on chemical reactions.
My PhD thesis's leitmotif is the application of the attention-based Transformer architecture to molecules and reactions represented with a text notation. It is as if atoms were my letters, molecules my words, and reactions my sentences. With this analogy, I teach my neural network models the language of chemical reactions - atom by atom. While exploring the link between organic chemistry and language, I make an essential step towards the automation of chemical synthesis, which could significantly reduce the costs and time required to discover and create new molecules and materials.
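The "atoms as letters" analogy rests on splitting a text-notation (SMILES) reaction string into atom-level tokens before it is fed to a Transformer. A minimal sketch of such a tokenizer, assuming a regex in the spirit of those commonly used for reaction language models; the exact pattern and function name are illustrative, not the thesis's code:

```python
import re

# Illustrative atom-level SMILES token pattern: bracket atoms, two-letter
# elements, stereo markers, the reaction arrow, then single-character tokens.
# Alternation order matters: 'Br' must be tried before bare 'B', '>>' as one token.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|@@?|>>|[BCNOSPFI]|[bcnosp]|[()=#\-+./\\:~*$%]|[0-9])"
)

def tokenize(smiles: str):
    """Split a SMILES (or reaction SMILES) string into atom-level tokens."""
    tokens = SMILES_TOKEN.findall(smiles)
    # Losslessness check: joining the tokens must reproduce the input.
    assert "".join(tokens) == smiles, "tokenization must be lossless"
    return tokens

# Esterification of acetic acid with methanol, written as reaction SMILES.
print(tokenize("CC(=O)O.CO>>CC(=O)OC"))
```

Each token then plays the role of one "letter" in the Transformer's vocabulary; bracket atoms such as `[nH]` stay intact as single tokens.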
Two and three dimensional segmentation of multimodal imagery
The role of segmentation in the realms of image understanding/analysis, computer vision, pattern recognition, remote sensing and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this effect, using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels with higher gradient densities are included by the dynamic generation of segments as the algorithm progresses to generate an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture and intensity information, along with the aforementioned initial partition map, is used to perform a multivariate refinement procedure that fuses groups with similar characteristics, yielding the final output segmentation. Experimental results obtained in comparison to published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery demonstrate the advantages of the proposed method. Furthermore, for the purpose of achieving improved computational efficiency, we propose an extension of this methodology in a multi-resolution framework, demonstrated on color images.
Finally, this research also encompasses a 3-D extension of the aforementioned algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
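The edge-seeded initial partitioning described above (labeling connected low-gradient pixels while leaving high-gradient edge pixels for later assignment) can be sketched as follows. The central-difference gradient estimate, threshold and function name are illustrative assumptions, not the published algorithm:

```python
def initial_region_map(image, grad_thresh=1.0):
    """Build an initial partition of a 2-D grayscale image.

    Connected pixels whose gradient magnitude is below grad_thresh are
    flood-filled (4-connectivity) into numbered regions; edge pixels keep
    label 0, to be absorbed by later refinement stages.
    """
    h, w = len(image), len(image[0])

    def grad(y, x):  # central-difference gradient magnitude, edge-clamped
        gy = image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]
        gx = image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]
        return (gx * gx + gy * gy) ** 0.5

    labels = [[0] * w for _ in range(h)]
    next_label = 1
    for y in range(h):
        for x in range(w):
            if labels[y][x] or grad(y, x) > grad_thresh:
                continue
            stack = [(y, x)]  # flood fill one low-gradient region
            labels[y][x] = next_label
            while stack:
                cy, cx = stack.pop()
                for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                               (cy, cx + 1), (cy, cx - 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny][nx]
                            and grad(ny, nx) <= grad_thresh):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels

# Two flat halves separated by an intensity step: two regions, edge cols at 0.
img = [[0, 0, 0, 10, 10, 10] for _ in range(4)]
print(sorted({v for row in labels for v in row if v}
             ) if (labels := initial_region_map(img)) else [])  # [1, 2]
```

A multispectral version would replace the scalar difference with a vector gradient over all bands; the 3-D extension adds the through-plane neighbours to the flood fill.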
Digital Wine: How Platforms and Algorithms Will Reshape the Wine Industry
The thesis aims to analyze how digitalization and data-driven approaches, in particular those that leverage artificial intelligence, are impacting the wine industry and generating new business models. The latter aspect will be explored through two case studies of digital platforms which, through different approaches, are helping to generate a virtuous digital ecosystem, with potential benefits for the entire value chain at the industry level.
Criteria-based patent mapping for assessing potential conflicts between patent claims
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Evaluating claim conflicts between patents is a crucial issue in patent applications and validity allegations. Existing patent informatics tools do not relate well to the legal requirements of identifying claim conflicts; innovation theory does not address patent evaluations; and the current legal approach has weaknesses in repeatability between cases. Therefore, a need emerges to design a scientific method for evaluating conflicts between patent claims. This thesis presents research on the topic of identifying, evaluating, and visualising patent conflicts. ‘Conflict’ is used here with the same meaning as ‘obviousness’, an essential legal term under the UK Patents Act 1977. Building on existing methods, this research provides a novel method, called Criteria-Based Patent Mapping, for assessing claim conflicts between patents. ‘Criteria-based’ means that the assessment uses evaluation criteria that clarify the inventive step of the patent. The source of these criteria is the well-known Theory of Inventive Problem Solving (TRIZ), which is incorporated into a statistical ‘Patent Mapping’ method for evaluating and visualising differences between patent claims. The application of the new method to four case studies shows that there are differences in judging standards between the legal authorities, and also shows an average value of 52% agreement in predicting potential conflicts between patent claims. Based upon these results, the original 39 TRIZ parameters can usually be refined to about 12 criteria. The scope of this method is restricted to patents in mechanical engineering due to the relevancy of the TRIZ parameters. This research transforms difficult claim-to-claim evaluations into simpler claim-to-criteria comparisons that lead to more efficient and transparent patent evaluations.
Such improvements will be useful for better decision-making in patent strategy.
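The claim-to-criteria comparison described above can be illustrated with a toy scoring function. Representing each claim as the set of TRIZ-derived criteria it touches and measuring set overlap is an illustrative assumption for the sketch, not the thesis's actual statistical method, and the criteria names are invented:

```python
def conflict_score(claim_a, claim_b, criteria):
    """Toy claim-to-criteria conflict measure.

    Each claim is given as the set of evaluation criteria it invokes;
    the score is the Jaccard overlap of the two criteria sets, restricted
    to the agreed criteria list, in [0, 1] (1 = identical coverage).
    """
    a = set(claim_a) & set(criteria)
    b = set(claim_b) & set(criteria)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical refined criteria list and two claims to compare.
criteria = {"weight", "speed", "strength", "temperature"}
print(conflict_score({"weight", "speed"}, {"speed", "strength"}, criteria))
```

The point of the sketch is the shape of the computation: once claims are mapped onto a fixed, small criteria set (about 12 in the thesis), pairwise comparison becomes a simple, repeatable set operation rather than a free-form legal judgment.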