Neuro-Visualizer: An Auto-encoder-based Loss Landscape Visualization Method
In recent years, there has been a growing interest in visualizing the loss
landscape of neural networks. Linear landscape visualization methods, such as
principal component analysis, have become widely used as they intuitively help
researchers study neural networks and their training process. However, these
linear methods suffer from limitations and drawbacks due to their lack of
flexibility and low fidelity at representing the high dimensional landscape. In
this paper, we present a novel auto-encoder-based non-linear landscape
visualization method called Neuro-Visualizer that addresses these shortcomings
and provides useful insights about neural network loss landscapes. To
demonstrate its potential, we run experiments on a variety of problems in two
separate applications of knowledge-guided machine learning (KGML). Our findings
show that Neuro-Visualizer outperforms other linear and non-linear baselines
and helps corroborate, and sometimes challenge, claims proposed by the machine
learning community. All code and data used in the experiments of this paper are
available at an anonymous link
https://anonymous.4open.science/r/NeuroVisualizer-FDD
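As a hedged sketch of the general idea (not the paper's actual implementation), one can train a small autoencoder on flattened parameter checkpoints from a training trajectory and then decode a 2-D grid back into parameter space; every size, name, and the random stand-in trajectory below are illustrative assumptions:

```python
# Illustrative sketch of auto-encoder-based loss landscape visualization.
# `trajectory` stands in for flattened model-parameter checkpoints.
import torch
import torch.nn as nn

torch.manual_seed(0)

D = 50   # dimensionality of flattened model parameters (assumed)
N = 30   # number of checkpoints along the training trajectory (assumed)

trajectory = torch.randn(N, D)

# A small autoencoder mapping R^D -> R^2 -> R^D.
encoder = nn.Sequential(nn.Linear(D, 16), nn.ReLU(), nn.Linear(16, 2))
decoder = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, D))

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2
)
for _ in range(200):
    opt.zero_grad()
    recon = decoder(encoder(trajectory))
    loss = nn.functional.mse_loss(recon, trajectory)
    loss.backward()
    opt.step()

# Decode a 2-D grid back to parameter space; evaluating the true model loss
# at each decoded point would yield a non-linear slice of the landscape.
xs = torch.linspace(-2, 2, 25)
grid = torch.cartesian_prod(xs, xs)   # (625, 2) grid of latent coordinates
with torch.no_grad():
    decoded = decoder(grid)           # (625, D) candidate parameter vectors
print(decoded.shape)                  # torch.Size([625, 50])
```

Unlike a fixed linear projection (e.g. PCA), the decoder here is a learned non-linear map, which is the flexibility the abstract refers to.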
Making microscopy count: quantitative light microscopy of dynamic processes in living plants
First published: April 2016. This is the author accepted manuscript; the final version is available from the publisher via the DOI in this record.
Cell theory has officially reached 350 years of age, as the first use of the word 'cell' in a biological context can be traced to a description of plant material by Robert Hooke in his historic publication "Micrographia: or some Physiological Descriptions of Minute Bodies". The 2015 Royal Microscopical Society Botanical Microscopy meeting was a celebration of the streams of investigation initiated by Hooke to understand, at the sub-cellular scale, how plant cell function and form arise. Much of the work presented, and the Honorary Fellowships awarded, reflected the advanced application of bioimaging informatics to extract quantitative data from micrographs that reveal the dynamic molecular processes driving cell growth and physiology. The field has progressed from collecting many pixels in multiple modes to associating these measurements with objects or features that are biologically meaningful. The additional complexity involves object identification, which draws on expertise from computer science and statistics that is often impenetrable to biologists. There are many useful tools and approaches being developed, but we now need more inter-disciplinary exchange to use them effectively. In this review we show how this quiet revolution has provided tools available to any personal computer user. We also discuss the oft-neglected issue of quantifying algorithm robustness and the exciting possibilities offered through the integration of physiological information generated by biosensors with object detection and tracking.
Neuro-Visualizer: An Auto-encoder-based Loss Landscape Visualization Method
This is the data needed to run the code found in:
https://github.com/elhamod/NeuroVisualizer

- Input trajectory model data: trajectories_v3.zip
- Trained Neuro-Visualizers and results: saved_models_v3.zip
- Other important auxiliary data: data_v3.zip
Understanding The Effects of Incorporating Scientific Knowledge on Neural Network Outputs and Loss Landscapes
While machine learning (ML) methods have achieved considerable success on several mainstream problems in vision and language modeling, they are still challenged by their lack of interpretable decision-making that is consistent with scientific knowledge, limiting their applicability for scientific discovery applications. Recently, a new field of machine learning that infuses domain knowledge into data-driven ML approaches, termed Knowledge-Guided Machine Learning (KGML), has gained traction to address the challenges of traditional ML. Nonetheless, the inner workings of KGML models and algorithms are still not fully understood, and a better comprehension of its advantages and pitfalls over a suite of scientific applications is yet to be realized.
In this thesis, I first tackle the task of understanding the role KGML plays in shaping the outputs of a neural network, including its latent space, and how such influence could be harnessed to achieve desirable properties, including robustness, generalizability beyond training data, and capturing knowledge priors that are of importance to experts.
Second, I use and further develop loss landscape visualization tools to better understand ML model optimization at the network-parameter level. Such an understanding has proven effective at evaluating and diagnosing different model architectures and loss functions in the field of KGML, with potential applications to a broad class of ML problems.
Doctor of Philosophy
My research aims to address some of the major shortcomings of machine learning, namely its opaque decision-making process and the inadequate understanding of its inner workings when applied to scientific problems. In this thesis, I address some of these shortcomings by investigating the effect of supplementing the traditionally data-centric method with human knowledge. This includes developing visualization tools that make understanding such practice, and further advancing it, easier. Conducting this research is critical to achieving wider adoption of machine learning in scientific fields, as it builds up the community's confidence not only in the accuracy of the framework's results, but also in its ability to provide a satisfactory rationale.
Automated Real-Time Detection of Potentially Suspicious Behavior in Public Transport Areas
CoPhy-PGNN: Learning Physics-guided Neural Networks with Competing Loss Functions for Solving Eigenvalue Problems
Physics-guided Neural Networks (PGNNs) represent an emerging class of neural networks that are trained using physics-guided (PG) loss functions (capturing violations in network outputs with known physics), along with the supervision contained in data. Existing work in PGNNs has demonstrated the efficacy of adding single PG loss functions in the neural network objectives, using constant tradeoff parameters, to ensure better generalizability. However, in the presence of multiple PG functions with competing gradient directions, there is a need to adaptively tune the contribution of different PG loss functions during the course of training to arrive at generalizable solutions. We demonstrate the presence of competing PG losses in the generic neural network problem of solving for the lowest (or highest) eigenvector of a physics-based eigenvalue equation, which is commonly encountered in many scientific problems. We present a novel approach to handle competing PG losses and demonstrate its efficacy in learning generalizable solutions in two motivating applications of quantum mechanics and electromagnetic propagation. All the code and data used in this work are available at https://github.com/jayroxis/Cophy-PGNN.
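The setting above can be sketched minimally: a data loss and a PG loss built from the eigen-equation residual A v = λ v, combined with a weight that is ramped in during training. The cold-start ramp `w`, the toy symmetric operator `A`, and the random data are illustrative assumptions, not the paper's exact CoPhy-PGNN algorithm:

```python
# Illustrative sketch of combining a data loss with a physics-guided (PG)
# eigen-residual loss whose weight is tuned over the course of training.
import torch
import torch.nn as nn

torch.manual_seed(0)

A = torch.randn(4, 4)
A = (A + A.T) / 2          # symmetric stand-in "physics" operator (assumed)
x = torch.randn(32, 4)     # toy inputs
y = torch.randn(32, 4)     # toy supervised eigenvector targets

net = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 4))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):
    opt.zero_grad()
    v = net(x)             # predicted eigenvector candidates, one per row
    Av = v @ A             # A is symmetric, so this is (A v) per row
    # Rayleigh quotient per sample, then eigen-equation residual A v - lam v.
    lam = (v * Av).sum(1, keepdim=True) / (v * v).sum(1, keepdim=True).clamp_min(1e-8)
    pg_loss = (Av - lam * v).pow(2).mean()
    data_loss = nn.functional.mse_loss(v, y)
    w = min(1.0, step / 50)   # simple ramp: phase the PG term in gradually
    loss = data_loss + w * pg_loss
    loss.backward()
    opt.step()
```

A fixed ramp is only one naive schedule; the point of the paper is that the relative contribution of competing PG losses needs to be adapted during training rather than held constant.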
Physics-Informed Machine Learning for Optical Modes in Composites
We demonstrate that embedding physics-driven constraints into the machine learning process can dramatically improve the accuracy and generalizability of the resulting model. Physics-informed learning is illustrated on the example of the analysis of optical modes propagating through a spatially periodic composite. The approach presented can be readily utilized in other situations mapped onto an eigenvalue problem, a known bottleneck of computational electrodynamics. Physics-informed learning can be used to improve machine-learning-driven design, optimization, and characterization, in particular in situations where exact solutions are scarce or slow to come up with.
