Early error detection predicted by reduced pre-response control process: an ERP topographic mapping study
Advanced ERP topographic mapping techniques were used to study error-monitoring functions in adult human participants and to test whether proactive attentional effects during the pre-response period could later influence early error-detection mechanisms (as measured by the ERN component). Participants performed a speeded go/nogo task and made a substantial number of false alarms that did not differ from correct hits in behavioral speed or actual motor response. While errors clearly elicited an ERN component generated within the dACC following the onset of these incorrect responses, I also found that correct hits were associated with a different sequence of topographic events during the pre-response baseline period, relative to errors. A main topographic transition from occipital to posterior parietal regions (including primarily the precuneus) was evidenced for correct hits ~170–150 ms before the response, whereas this topographic change was markedly reduced for errors. The same topographic transition was found for correct hits that were eventually performed more slowly than either errors or fast (correct) hits, confirming the involvement of this distinctive posterior parietal activity in top-down attentional control rather than motor preparation. Control analyses further ensured that this pre-response topographic effect was not related to differences in stimulus processing. Furthermore, I found a reliable association between the magnitude of the ERN following errors and the duration of this differential precuneus activity during the pre-response baseline, suggesting a functional link between an anticipatory attentional control component subserved by the precuneus and early error-detection mechanisms within the dACC. These results suggest reciprocal links between proactive attentional control and decision-making processes during error monitoring.
Characterizing driving behavior using automatic visual analysis
In this work, we present a rash-driving detection algorithm using a single
wide-angle camera sensor, particularly useful in the Indian context. To our
knowledge, the rash-driving problem has not been addressed using
image-processing techniques (existing works use other sensors such as
accelerometers). The car image-processing literature, though rich and mature,
does not address the rash-driving problem. In this work-in-progress paper, we
present the need to address this problem, our approach, and our future plans
to build a rash-driving detector. Comment: 4 pages, 7 figures, IBM-ICARE201
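Since the paper itself is work-in-progress, here is only a minimal sketch of the kind of vision-only cue such a detector might build on; the frame-difference heuristic, the `motion_energy` function, and the threshold value are my assumptions, not the authors' method:

```python
import numpy as np

def motion_energy(prev_frame, frame):
    """Mean absolute intensity change between consecutive grayscale frames."""
    return float(np.mean(np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))))

def flag_rash_segments(frames, threshold=20.0):
    """Flag frame indices where motion energy jumps above a threshold.

    A sudden spike in apparent motion is a crude proxy for abrupt
    maneuvers (hard braking, sharp swerves) as seen by a fixed camera.
    """
    flags = []
    for i in range(1, len(frames)):
        if motion_energy(frames[i - 1], frames[i]) > threshold:
            flags.append(i)
    return flags

# Synthetic demo: constant frames, then an abrupt brightness change at frame 5.
frames = [np.full((4, 4), 100, dtype=np.uint8) for _ in range(5)]
frames += [np.full((4, 4), 200, dtype=np.uint8) for _ in range(3)]
print(flag_rash_segments(frames))  # -> [5]
```

A real system would of course use dense optical flow or learned features rather than raw frame differences, but the flag-on-spike structure is the same.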
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles. In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolution network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of controller and explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong- and weak-alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match training data. 
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
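The two-stage idea described above (attention proposes candidate regions; occlusion-based causal testing prunes the spurious ones) can be sketched in a toy setting. Everything here is illustrative: the `steering_model` stand-in, the pixel-level occlusion, and the `top_k` candidate selection are my simplifications, not the dissertation's actual networks:

```python
import numpy as np

def steering_model(image):
    """Toy stand-in for the steering network: responds only to the
    left half of the image (the 'true' causal region)."""
    return float(image[:, : image.shape[1] // 2].sum())

def causal_filter(image, attention, model, top_k=3, eps=1e-6):
    """Keep only attended locations whose occlusion actually changes the output.

    Attention proposes candidate regions; the occlusion test removes
    those that merely correlate with, but do not influence, the output.
    """
    base = model(image)
    # Candidate locations: the top_k most-attended pixels.
    rows, cols = np.unravel_index(
        np.argsort(attention, axis=None)[::-1][:top_k], attention.shape)
    causal = []
    for r, c in zip(rows, cols):
        occluded = image.copy()
        occluded[r, c] = 0.0          # mask one attended location
        if abs(model(occluded) - base) > eps:
            causal.append((int(r), int(c)))
    return causal

image = np.ones((2, 4))
attention = np.array([[0.9, 0.1, 0.8, 0.0],
                      [0.0, 0.0, 0.7, 0.0]])
# Attention proposes (0,0), (0,2), (1,2); only (0,0) lies in the causal left half.
print(causal_filter(image, attention, steering_model))  # -> [(0, 0)]
```

In the actual work the occlusion operates on image regions and the model is a convolutional controller, but the filtering logic is the same: a region counts as an explanation only if removing it changes the steering output.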
Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar
16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps
and performance engineering have been growing trends over the past one to two
years, in no small part due to the rising importance of identifying
performance anomalies in the operations (Ops) of cloud and big data systems
and feeding these back to development (Dev). However, so far the research
community has treated software engineering, performance engineering, and
cloud computing mostly as individual research areas. We aimed to identify
opportunities for cross-community collaboration and to set the path for
long-lasting collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD
students in a later stage of their PhD, as well as PostDocs or Junior
Professors) in the areas of (i) software engineering, (ii) performance
engineering, and (iii) cloud computing and big data to present their current
research projects, to exchange experience and expertise, to discuss research
challenges, and to develop ideas for future collaborations.