How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV
This work explores the feasibility of steering a drone with a (recurrent)
neural network, based on input from a forward-looking camera, in the context of
a high-level navigation task. We set up a generic framework for training a
network to perform navigation tasks based on imitation learning. It can be
applied to both aerial and land vehicles. As a proof of concept we apply it to
a UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a
room containing a number of obstacles. So far only feedforward neural networks
(FNNs) have been used to train UAV control. To cope with more complex tasks, we
propose the use of recurrent neural networks (RNNs) instead and successfully
train an LSTM (Long Short-Term Memory) network for controlling UAVs. Vision
based control is a sequential prediction problem, known for its highly
correlated input data. The correlation makes training a network hard,
especially an RNN. To overcome this issue, we investigate an alternative
sampling method during training, namely window-wise truncated backpropagation
through time (WW-TBPTT). Further, end-to-end training requires a lot of data
which often is not available. Therefore, we compare the performance of
retraining only the Fully Connected (FC) and LSTM control layers with networks
which are trained end-to-end. Performing the relatively simple task of crossing
a room already reveals important guidelines and good practices for training
neural control networks. Different visualizations help to explain the behavior
learned.
Comment: 12 pages, 30 figures
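The window-wise sampling idea can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the helper name, window size, stride, and shuffling policy are all assumptions. The point it demonstrates is that gradients are truncated to fixed-size windows and the windows are visited in random order, so consecutive minibatches are not drawn from adjacent, highly correlated camera frames.

```python
import random

def ww_tbptt_windows(seq_len, window, stride=None, shuffle=True, seed=0):
    """Enumerate training windows for window-wise truncated BPTT.

    Instead of unrolling the RNN over a full flight sequence,
    backpropagation is truncated to fixed-size windows; shuffling the
    window order breaks up the strong temporal correlation between
    consecutive frames. A stride smaller than the window length gives
    overlapping windows.
    """
    stride = stride or window
    starts = list(range(0, seq_len - window + 1, stride))
    if shuffle:
        random.Random(seed).shuffle(starts)
    return [(s, s + window) for s in starts]

# Example: a 12-step sequence split into windows of 4 with stride 2.
wins = ww_tbptt_windows(12, window=4, stride=2, shuffle=False)
# wins == [(0, 4), (2, 6), (4, 8), (6, 10), (8, 12)]
```

Within each window the LSTM state would be initialized (or carried over and detached), and only the window's steps contribute to the gradient.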
A sub-mW IoT-endnode for always-on visual monitoring and smart triggering
This work presents a fully-programmable Internet of Things (IoT) visual
sensing node that targets sub-mW power consumption in always-on monitoring
scenarios. The system features a spatial-contrast binary
pixel imager with focal-plane processing. The sensor, when working at its
lowest power mode (at 10 fps), provides as output the number of
changed pixels. Based on this information, a dedicated camera interface,
implemented on a low-power FPGA, wakes up an ultra-low-power parallel
processing unit to extract context-aware visual information. We evaluate the
smart sensor on three always-on visual triggering application scenarios.
Triggering accuracy comparable to RGB image sensors is achieved at nominal
lighting conditions, while the average power consumed depends on context
activity. The digital sub-system is extremely
flexible, thanks to a fully-programmable digital signal processing engine, but
still achieves 19x lower power consumption compared to MCU-based cameras with
significantly lower on-board computing capabilities.
Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
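The wake-up path described above, where the imager reports only a changed-pixel count and the camera interface decides whether to wake the parallel processor, can be illustrated with a small sketch. The function name, threshold fraction, and pixel counts below are hypothetical, not the paper's values.

```python
def should_wake(changed_pixels, total_pixels, threshold_frac=0.01):
    """Camera-interface wake-up rule (hypothetical threshold).

    The spatial-contrast binary imager outputs, per frame, the number
    of pixels whose value changed; the low-power camera interface
    wakes the parallel processing unit only when the fraction of
    changed pixels exceeds a context-activity threshold, keeping the
    rest of the system asleep during quiet periods.
    """
    return changed_pixels / total_pixels >= threshold_frac

# Hypothetical 128x64 binary frame: 100 changed pixels trips the
# trigger at a 1% threshold, 50 does not.
wake = should_wake(100, 128 * 64)
stay_asleep = not should_wake(50, 128 * 64)
```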
Pixel Detectors
Pixel detectors for precise particle tracking in high energy physics have
been developed to a level of maturity during the past decade. Three of the LHC
detectors will use vertex detectors close to the interaction point based on the
hybrid pixel technology which can be considered the state of the art in this
field of instrumentation. A development period of almost 10 years has resulted
in pixel detector modules which can withstand the extreme rate and timing
requirements as well as the very harsh radiation environment at the LHC without
severe compromises in performance. From these developments a number of
different applications have spun off, most notably for biomedical imaging.
Beyond hybrid pixels, a number of monolithic or semi-monolithic developments,
which do not require complicated hybridization but come as single sensor/IC
entities, have appeared and are currently developed to greater maturity. Most
advanced in terms of maturity are the so-called CMOS active pixels and DEPFET
pixels. The present state in the construction of the hybrid pixel detectors for
the LHC experiments together with some hybrid pixel detector spin-off is
reviewed. In addition, new developments in monolithic or semi-monolithic pixel
devices are summarized.
Comment: 14 pages, 38 drawings/photographs in 21 figures
Isolating contour information from arbitrary images
Aspects of natural vision (physiological and perceptual) serve as a basis for attempting the development of a general processing scheme for contour extraction. Contour information is assumed to be central to visual recognition skills. While the scheme must be regarded as highly preliminary, initial results do compare favorably with the visual perception of structure. The scheme pays special attention to the construction of a smallest-scale circular difference-of-Gaussian (DOG) convolution, calibration of multiscale edge detection thresholds with the visual perception of grayscale boundaries, and contour/texture discrimination methods derived from fundamental assumptions of connectivity and the characteristics of printed text. Contour information is required to fall between a minimum connectivity limit and a maximum regional spatial density limit at each scale. Results support the idea that contour information, in images possessing good image quality, is carried by channels centered at about 10 cyc/deg and 30 cyc/deg. Further, lower spatial frequency channels appear to play a major role only in contour extraction from images with serious global image defects.
Belle II Technical Design Report
The Belle detector at the KEKB electron-positron collider has collected
almost 1 billion Y(4S) events in its decade of operation. Super-KEKB, an
upgrade of KEKB, is under construction to increase the luminosity by two orders
of magnitude during a three-year shutdown, with an ultimate goal of 8E35 /cm^2
/s luminosity. To exploit the increased luminosity, an upgrade of the Belle
detector has been proposed. A new international collaboration, Belle II, is
being formed. The Technical Design Report presents physics motivation, basic
methods of the accelerator upgrade, as well as key improvements of the
detector.
Comment: Edited by Z. Doležal and S. Uno
A mosaic of eyes
Autonomous navigation is a traditional research topic in intelligent robotics and vehicles, which requires a robot to perceive its environment through onboard sensors such as cameras or laser scanners, to enable it to drive to its goal. Most research to date has focused on the development of a large and smart brain to gain autonomous capability for robots. There are three fundamental questions to be answered by an autonomous mobile robot: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires a massive spatial memory and considerable computational resources to accomplish perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for our real-life applications, such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then taking decisions accordingly. They may encounter the following difficulties.
Entropy of Highly Correlated Quantized Data
This paper considers the entropy of highly correlated quantized samples. Two results are shown. The first concerns sampling and identically scalar quantizing a stationary continuous-time random process over a finite interval. It is shown that if the process crosses a quantization threshold with positive probability, then the joint entropy of the quantized samples tends to infinity as the sampling rate goes to infinity. The second result provides an upper bound to the rate at which the joint entropy tends to infinity, in the case of an infinite-level uniform threshold scalar quantizer and a stationary Gaussian random process. Specifically, an asymptotic formula for the conditional entropy of one quantized sample conditioned on the previous quantized sample is derived. At high sampling rates, these results indicate a sharp contrast between the large encoding rate (in bits/sec) required by a lossy source code consisting of a fixed scalar quantizer and an ideal, sampling-rate-adapted lossless code, and the bounded encoding rate required by an ideal lossy source code operating at the same distortion.
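The mechanism behind the second result, that the conditional entropy of one quantized sample given the previous one shrinks as the samples become more correlated, can be checked numerically. The sketch below is an illustrative Monte-Carlo estimate for a pair of jointly Gaussian samples, not the paper's asymptotic formula; the correlation values, cell width, and sample count are arbitrary.

```python
import math
import random

def cond_entropy_quantized(rho, n=200_000, step=1.0, seed=1):
    """Monte-Carlo estimate of H(Q2 | Q1) in bits, where Q1, Q2 are
    uniform-threshold quantizations (cell width `step`) of two
    standard Gaussian samples with correlation `rho`.

    As rho -> 1 (i.e. as the sampling interval of a smooth stationary
    Gaussian process shrinks), the conditional entropy of each new
    quantized sample given the previous one falls toward 0, even
    though the joint entropy of all samples keeps growing.
    """
    rng = random.Random(seed)
    joint, marg = {}, {}
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        y = rho * x + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        q1, q2 = math.floor(x / step), math.floor(y / step)
        joint[(q1, q2)] = joint.get((q1, q2), 0) + 1
        marg[q1] = marg.get(q1, 0) + 1
    h_joint = -sum(c / n * math.log2(c / n) for c in joint.values())
    h_marg = -sum(c / n * math.log2(c / n) for c in marg.values())
    return h_joint - h_marg  # H(Q1, Q2) - H(Q1) = H(Q2 | Q1)

low_corr = cond_entropy_quantized(rho=0.5)
high_corr = cond_entropy_quantized(rho=0.99)
# high_corr should come out well below low_corr.
```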
Level based sampling techniques for energy conservation in large scale wireless sensor networks
As the size and node density of wireless sensor networks (WSNs) increase, the energy conservation problem becomes more critical and conventional methods become inadequate. This dissertation addresses two different problems in large-scale WSNs where all sensors are involved in monitoring, but the traditional practice of periodic transmission of observations from all sensors would drain an excessive amount of energy.
In the first problem, monitoring of the spatial distribution of a two-dimensional correlated signal is considered using a large-scale WSN. It is assumed that sensor observations are heavily affected by noise. We present an approach that is based on detecting contour lines of the signal distribution to estimate the spatial distribution of the signal without involving all sensors in the network. Energy-efficient algorithms are proposed for detecting and tracking the temporal variation of the contours. Optimal contour levels that minimize the estimation error and a practical approach for the selection of contour levels are explored. Performance of the proposed algorithm is evaluated with different types of contour levels and detection parameters.
In the second problem, a WSN is considered that performs health monitoring of equipment in a power substation. The monitoring applications require transmissions of sensor observations from all sensor nodes on a regular basis to the base station, which is very costly in terms of communication. To address this problem, an efficient sampling technique using level crossings (LCS) is proposed. This technique saves communication cost by suppressing transmissions of data samples that do not convey much information. The performance and cost of LCS for several different level-selection schemes are investigated. The number of required levels and the maximum sampling period for practical implementation of LCS are studied. Finally, in an experimental implementation of LCS with a MICAz mote, the performance and cost of LCS for temperature sensing with uniform, logarithmic, and combined uniform/logarithmic level spacing are compared with those using periodic sampling.
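The transmission-suppression rule at the heart of LCS can be sketched in a few lines. This is an illustrative reading of the scheme described above, not the dissertation's implementation; the level values and temperature trace are made up.

```python
def lcs_transmit(samples, levels):
    """Level-crossing sampling: a node transmits a reading only when
    the signal has crossed one of the preset levels since the last
    transmitted sample, suppressing periodic reports that carry
    little information. Returns the (index, value) pairs that would
    actually be sent to the base station.
    """
    sent = [(0, samples[0])]  # always send an initial reference sample
    last = samples[0]
    for i, v in enumerate(samples[1:], start=1):
        # A level lv lies strictly between `last` and `v` exactly when
        # (last - lv) and (v - lv) have opposite signs.
        if any((last - lv) * (v - lv) < 0 for lv in levels):
            sent.append((i, v))
            last = v
    return sent

# Hypothetical temperature trace with two (uniformly spaced) levels:
# only the samples that cross 21.0 or 22.0 get transmitted.
temps = [20.1, 20.2, 20.3, 21.2, 21.3, 21.1, 22.4, 22.5]
sent = lcs_transmit(temps, levels=[21.0, 22.0])
# sent == [(0, 20.1), (3, 21.2), (6, 22.4)] -- 3 packets instead of 8.
```

Choosing uniform versus logarithmic level spacing, as compared in the experiments above, amounts to changing only the `levels` list.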