Interpretable Deep Learning applied to Plant Stress Phenotyping
Explainable deep learning models that can be applied to practical, real-world
scenarios and, in turn, consistently, rapidly, and accurately identify specific
and minute traits in applicable fields of the biological sciences remain
scarce. Here we consider one such real-world example: the accurate
identification, classification, and quantification of biotic and abiotic
stresses in crop research and production. Until now, this has predominantly
been done manually by visual inspection, which requires specialized
training. However, such techniques are hindered by subjectivity resulting from
inter- and intra-rater cognitive variability. Here, we demonstrate the ability
of a machine learning framework to identify and classify a diverse set of
foliar stresses in the soybean plant with remarkable accuracy. We also present
an explanation mechanism using gradient-weighted class activation mapping that
isolates the visual symptoms used by the model to make predictions. This
unsupervised identification of unique visual symptoms for each stress provides
a quantitative measure of stress severity, allowing for identification,
classification, and quantification in one framework. The learnt model appears
to be species-agnostic, making good predictions for other (non-soybean)
species and demonstrating a capacity for transfer learning.
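The gradient-weighted class activation mapping (Grad-CAM) mechanism the abstract refers to can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature maps and gradients are synthetic stand-ins for a trained convolutional network's activations, and the function name is chosen here for clarity.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Gradient-weighted class activation map (Grad-CAM).

    feature_maps: (K, H, W) activations from the last conv layer.
    gradients:    (K, H, W) gradients of the class score w.r.t. those maps.
    Returns an (H, W) heatmap highlighting class-relevant image regions.
    """
    # Per-channel importance weight: global-average-pool the gradients
    alphas = gradients.mean(axis=(1, 2))                          # shape (K,)
    # Weighted sum of feature maps; ReLU keeps only positive evidence
    cam = np.maximum((alphas[:, None, None] * feature_maps).sum(axis=0), 0)
    # Normalise to [0, 1] for visualisation as a heatmap
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with synthetic activations and gradients
rng = np.random.default_rng(0)
A = rng.random((8, 7, 7))             # 8 channels of 7x7 conv features
dYdA = rng.standard_normal((8, 7, 7)) # gradient of class score w.r.t. A
heatmap = grad_cam(A, dYdA)
print(heatmap.shape)                  # (7, 7)
```

In the plant-stress setting, high values in the resulting heatmap would correspond to the visual symptoms (e.g. lesions on a leaflet) that drove the stress classification, which is what allows severity to be quantified from the same map.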
Uncovering strata: an investigation into the graphic innovations of geologist Henry T. De la Beche
An historical investigation into the types of illustrations in the Golden Age of Geology (1788-1840) revealed the nature and progression of graphic representation at the dawning of geology as a science. Exhaustive sampling of geology texts published in the period of focus proceeded until saturation was achieved. Qualitative analysis and evaluation of early illustrations were accomplished with Edward R. Tufte's theory of graphic design. Hypothesis testing around a correlation coefficient revealed significance at the 99% confidence level for relationships between publication year and number of included graphics, and publication year and the graphic density of texts. Henry T. De la Beche emerged as an important geologist who made numerous innovative graphic contributions in the Golden Age of Geology. De la Beche promoted colliding theory graphics, or the accurate portrayal of the earth's sections and scenes, that would remain valuable for future generations of geologists. He was apparently the first geologist to utilize the small multiple format. De la Beche also designed and drew scientific caricatures that encapsulated the theoretical debates of the day, as well as the social, cultural, and historical influences on the emerging theories of geology. These scientific caricatures have emerged as instructional graphics with significant classroom potential for teaching the nature of science. De la Beche also drew the first portrayal of a scene from deep time, Duria antiquior, which became the first innovative classroom geology teaching graphic. Through his introduction and development of several important genres of visual explanation, De la Beche emerged as the Father of Visual Geology Education.
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles. In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolution network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of controller and explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong- and weak-alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match training data. 
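The abstract does not spell out the alignment objective, but one common way to realise strong attention alignment is a divergence penalty between the two spatial attention distributions. A minimal sketch under that assumption follows; the KL-divergence choice and the function names are illustrative, not taken from the thesis.

```python
import numpy as np

def attention_alignment_loss(controller_attn, explainer_attn, eps=1e-8):
    """KL divergence between two spatial attention maps (strong alignment).

    Both inputs are non-negative (H, W) arrays. Each is normalised to a
    probability distribution over image locations before comparison, so the
    loss penalises the explanation model for attending to regions the
    controller did not use.
    """
    p = controller_attn / (controller_attn.sum() + eps)
    q = explainer_attn / (explainer_attn.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Identical maps incur zero loss; mismatched maps incur a positive loss.
rng = np.random.default_rng(1)
a = rng.random((10, 10))
loss_self = attention_alignment_loss(a, a)
loss_diff = attention_alignment_loss(a, rng.random((10, 10)))
print(loss_self, loss_diff > loss_self)
```

Minimising a term like this during training would ground the textual explanation in the same scene regions that influenced the steering output, which is the property the chapter argues distinguishes introspective explanations from post-hoc rationalizations.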
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.