High performance photonic reservoir computer based on a coherently driven passive cavity
Reservoir computing is a recent bio-inspired approach for processing
time-dependent signals. It has enabled a breakthrough in analog information
processing, with several experiments, both electronic and optical,
demonstrating state-of-the-art performance for hard tasks such as speech
recognition, time series prediction and nonlinear channel equalization. A
proof-of-principle experiment using a linear optical circuit on a photonic chip
to process digital signals was recently reported. Here we present a photonic
implementation of a reservoir computer based on a coherently driven passive
fiber cavity processing analog signals. Our experiment achieves error rates as
low as, or lower than, previous experiments on a wide variety of tasks, while
also consuming less power. Furthermore, the analytical model describing our experiment
is also of interest, as it constitutes a very simple high performance reservoir
computer algorithm. The present experiment, given its good performance, low
energy consumption and conceptual simplicity, confirms the great potential of
photonic reservoir computing for information processing applications ranging
from artificial intelligence to telecommunications.
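The reservoir computing paradigm the abstract describes can be sketched in a few lines: a fixed random dynamical system is driven by the input, and only a linear readout is trained. The sketch below is a conventional software echo-state network, not the paper's fiber-cavity implementation; all sizes, scalings and the toy recall task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the optical cavity implements an analogous driven map.
N, T = 50, 1000                        # reservoir nodes, time steps
u = rng.uniform(-1, 1, T)              # scalar input sequence
target = np.roll(u, 1)                 # toy task: recall the previous input

W_in = rng.uniform(-0.5, 0.5, N)       # fixed random input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])   # reservoir update (fixed, untrained)
    states[t] = x

# Linear readout trained by ridge regression -- the only trained part
washout = 100
S, y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
nmse = np.mean((S @ W_out - y) ** 2) / np.var(y)
```

The training cost is a single linear solve, which is what makes the approach attractive for analog hardware: the physical system supplies the nonlinear dynamics for free.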
Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in
machine learning. Automatic differentiation (AD), also called algorithmic
differentiation or simply "autodiff", is a family of techniques similar to but
more general than backpropagation for efficiently and accurately evaluating
derivatives of numeric functions expressed as computer programs. AD is a small
but established field with applications in areas including computational fluid
dynamics, atmospheric sciences, and engineering design optimization. Until very
recently, the fields of machine learning and AD have largely been unaware of
each other and, in some cases, have independently discovered each other's
results. Despite its relevance, general-purpose AD has been missing from the
machine learning toolbox, a situation slowly changing with its ongoing adoption
under the names "dynamic computational graphs" and "differentiable
programming". We survey the intersection of AD and machine learning, cover
applications where AD has direct relevance, and address the main implementation
techniques. By precisely defining the main differentiation techniques and their
interrelationships, we aim to bring clarity to the usage of the terms
"autodiff", "automatic differentiation", and "symbolic differentiation" as
these are encountered more and more in machine learning settings.
Comment: 43 pages, 5 figures
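Forward-mode AD, one of the techniques the survey defines, can be illustrated with dual numbers: a value and its derivative propagate together through overloaded arithmetic, so the derivative is exact to machine precision rather than approximated. The `Dual` class below is a minimal sketch, not any particular AD library's API.

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 = 0; `dot` carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule, applied automatically at every multiplication
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def sin(self):
        # Chain rule for an elementary function
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def f(x):
    # f(x) = x*sin(x) + x, so f'(x) = sin(x) + x*cos(x) + 1
    return x * x.sin() + x

x = Dual(2.0, 1.0)   # seed dx/dx = 1
y = f(x)             # y.val = f(2.0), y.dot = f'(2.0), both exact
```

Reverse mode (backpropagation) instead records the computation and sweeps the chain rule backwards, which is cheaper when there are many inputs and few outputs.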
Design and Implementation of Wheelchair Controller Based Electroencephalogram Signal using Microcontroller
A wheelchair is a medical device that helps patients, especially persons with physical disabilities. In this research, a wheelchair was designed that can be controlled using brain waves. A MindWave device is used as a sensor to capture brain waves, and a fuzzy method is used to process its data. The design used a modified wheelchair (an original wheelchair fitted with DC motors that can be controlled by a microcontroller). After processing the MindWave data with the fuzzy method, the microcontroller orders the DC motors to rotate. Each DC motor is connected to a gear of the wheelchair by a chain, so when a motor rotates the wheelchair moves as well. The DC motors are controlled with a PID method, with encoder input used as feedback for the PID control at each wheel. The experimental results show that the concentration level of the human brain waves can be used to adjust the speed of the wheelchair. The accuracy of the fuzzy method's response, obtained by dividing the number of correct responses by the total number of tested data, is 85.71%. The wheelchair can run at a maximum speed of 31.5 cm/s when the battery voltage is above 24.05 V, and its maximum load is 110 kg.
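The encoder-feedback PID speed loop described above can be sketched as a discrete-time simulation. The first-order motor model, the gains, and the time constants below are assumptions for illustration, not the paper's identified values; only the 31.5 cm/s setpoint comes from the abstract.

```python
# Hypothetical first-order wheel-motor model driven by a discrete PID loop,
# with the measured speed standing in for the encoder feedback.
dt, K, tau = 0.01, 2.0, 0.5           # step [s], motor gain, time constant [s]
kp, ki, kd = 4.0, 8.0, 0.05           # PID gains (illustrative, untuned)

setpoint = 31.5                        # target wheel speed [cm/s]
speed, integral, prev_err = 0.0, 0.0, setpoint

for _ in range(1000):                  # 10 s of simulated time
    err = setpoint - speed             # encoder reading = current speed
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv   # motor drive command
    prev_err = err
    speed += dt * (K * u - speed) / tau         # first-order plant response
```

The integral term is what removes the steady-state error; without it the wheel would settle slightly below the commanded speed.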
ED-Scorbot: A Robotic test-bed Framework for FPGA-based Neuromorphic systems
Neuromorphic engineering is a growing and promising discipline. Neuro-inspiration and brain understanding applied to engineering problems are boosting new architectures, solutions and products. The biological brain and neural systems process information at relatively low speeds through small components called neurons, and it is impressive how these connect to each other to construct complex architectures that solve visual and audio processing tasks (object detection and tracking, target approximation, grasping, etc.) quasi-instantaneously and with very low power. Neuromorphic systems are beginning to show great promise for a new era in the development of sensors, processors, robots and software systems that mimic these biological systems. The event-driven Scorbot (ED-Scorbot) is a robotic arm plus a set of FPGA/microcontroller boards and a library of FPGA logic, joined in a completely event-based (spike-based) framework from the sensors to the actuators. It is located at the University of Seville and can be used remotely. Spike-based commands can be sent to the robot through neuro-inspired motor controllers after visual object detection and tracking for grasping or manipulation, after complex audio-visual sensory fusion, or after performing a learning task. Thanks to the cascade FPGA architecture over the Address-Event Representation (AER) bus, supported by specialized boards, resources for algorithm implementation are not limited.
Ministerio de Economía y Competitividad TEC2012-37868-C04-02; Junta de Andalucía P12-TIC-130
Idealized computational models for auditory receptive fields
This paper presents a theory by which idealized models of auditory receptive
fields can be derived in a principled axiomatic manner, from a set of
structural properties to enable invariance of receptive field responses under
natural sound transformations and ensure internal consistency between
spectro-temporal receptive fields at different temporal and spectral scales.
For defining a time-frequency transformation of a purely temporal sound
signal, it is shown that the framework allows for a new way of deriving the
Gabor and Gammatone filters as well as a novel family of generalized Gammatone
filters, with additional degrees of freedom to obtain different trade-offs
between the spectral selectivity and the temporal delay of time-causal temporal
window functions.
When applied to the definition of a second-layer of receptive fields from a
spectrogram, it is shown that the framework leads to two canonical families of
spectro-temporal receptive fields, in terms of spectro-temporal derivatives of
either spectro-temporal Gaussian kernels for non-causal time or the combination
of a time-causal generalized Gammatone filter over the temporal domain and a
Gaussian filter over the logspectral domain. For each filter family, the
spectro-temporal receptive fields can be either separable over the
time-frequency domain or be adapted to local glissando transformations that
represent variations in logarithmic frequencies over time. Within each domain
of either non-causal or time-causal time, these receptive field families are
derived by uniqueness from the assumptions.
It is demonstrated how the presented framework allows for computation of
basic auditory features for audio processing and that it leads to predictions
about auditory receptive fields with good qualitative similarity to biological
receptive fields measured in the inferior colliculus (ICC) and primary auditory
cortex (A1) of mammals.
Comment: 55 pages, 22 figures, 3 tables
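The standard Gammatone filter the paper generalizes has the impulse response g(t) = t^(n-1) exp(-2*pi*b*t) cos(2*pi*f*t). The sketch below builds one such filter and applies it by convolution; the order, centre frequency and bandwidth are illustrative values, not taken from the paper.

```python
import numpy as np

# Standard gammatone impulse response with illustrative parameters.
fs = 16000                          # sample rate [Hz]
t = np.arange(0, 0.05, 1 / fs)      # 50 ms support
n, f, b = 4, 1000.0, 125.0          # order, centre frequency, bandwidth [Hz]
g = t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f * t)
g /= np.sqrt(np.sum(g ** 2))        # unit-energy normalisation

# Filtering = convolution with the impulse response
tt = np.arange(0, 0.1, 1 / fs)
x_in = np.sin(2 * np.pi * 1000.0 * tt)    # in-band tone (at centre frequency)
x_out = np.sin(2 * np.pi * 4000.0 * tt)   # out-of-band tone
y_in = np.convolve(x_in, g)[: x_in.size]
y_out = np.convolve(x_out, g)[: x_out.size]
```

A bank of such filters at log-spaced centre frequencies yields the time-frequency representation on which the paper's second-layer receptive fields operate.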
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles. In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolution network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of controller and explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong- and weak-alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match training data. 
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
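One simple way to realize the attention alignment mentioned above is a divergence penalty between the two normalized attention maps; the KL-based loss below is an illustration of the idea, not necessarily the thesis's exact formulation.

```python
import numpy as np

def attention_alignment_loss(a_ctrl, a_expl, eps=1e-8):
    """KL(p || q) between flattened, normalised attention maps.

    a_ctrl: controller attention map; a_expl: explanation-model map.
    Both are non-negative arrays of the same spatial shape (assumed names).
    """
    p = a_ctrl.ravel() / (a_ctrl.sum() + eps)
    q = a_expl.ravel() / (a_expl.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(0)
a = rng.random((10, 20))                              # controller attention
loss_same = attention_alignment_loss(a, a)            # zero for identical maps
loss_diff = attention_alignment_loss(a, rng.random((10, 20)))
```

Adding such a term to the training objective pushes the explanation model to ground its text in the regions the controller actually used.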
A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios
Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method.
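The saturation-level control loop described above can be sketched as a feedback on the fraction of clipped pixels. The version below is a proportional-only simplification of the adaptive PID scheme, with an assumed log-normal scene model and an illustrative gain; only the eight-frame adaptation figure comes from the abstract.

```python
import numpy as np

# Toy exposure control driven by the saturated-pixel fraction.
rng = np.random.default_rng(1)
radiance = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)  # HDR scene model

target_sat = 0.02        # desired fraction of saturated pixels
kp = 30.0                # proportional gain (illustrative)
exposure = 1.0

for frame in range(8):   # the paper reports adaptation within eight frames
    img = np.clip(radiance * exposure, 0.0, 255.0)   # 8-bit sensor readout
    sat = float(np.mean(img >= 255.0))               # measured saturation level
    exposure *= np.exp(kp * (target_sat - sat))      # multiplicative update
```

The multiplicative (log-domain) update keeps the loop well behaved across the sensor's wide exposure range, since exposure errors are naturally relative rather than additive.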
Virtual environment for assistant mobile robot
This paper shows the development of a virtual environment for a mobile robotic system with the ability to recognize basic voice commands, which are oriented to the recognition of a valid command of bring or take an object from a specific destination in residential spaces. The recognition of the voice command and the objects with which the robot will assist the user, is performed by a machine vision system based on the capture of the scene, where the robot is located. In relation to each captured image, a convolutional network based on regions is used with transfer learning, to identify the objects of interest. For human-robot interaction through voice, a convolutional neural network (CNN) of 6 convolution layers is used, oriented to recognize the commands to carry and bring specific objects inside the residential virtual environment. The use of convolutional networks allowed the adequate recognition of words and objects, which by means of the associated robot kinematics give rise to the execution of carry/bring commands, obtaining a navigation algorithm that operates successfully, where the manipulation of the objects exceeded 90%. Allowing the robot to move in the virtual environment even with the obstruction of objects in the navigation path.<
Real time model validation and control of DC motor using MATLAB and USB
A mechatronic system needs motion or action of some sort, created by a force or torque that results in acceleration and displacement. Actuators are the devices used to produce this motion, and one of the most common electromechanical actuators is the direct current (DC) motor. The main goal of this project is to estimate the actual model of a DC motor and control its speed using an embedded system interfaced to a computer. The model identification is achieved using a simple, low-cost data acquisition system. An Arduino Uno embedded board collects the data from the sensors, sends it to the computer, and controls the model. The data processing is performed using MATLAB/Simulink. Both the model and the controller are validated through simulations and experiments. The identification and control results were coherent and successful.
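The identification step described above can be sketched as a least-squares fit of a discrete first-order model y[k+1] = a·y[k] + b·u[k] (speed vs. drive voltage) from logged input/output data. This is one simple stand-in for the MATLAB-based identification; the "true" motor parameters and noise level below are assumptions.

```python
import numpy as np

# Simulate logged data from an assumed first-order DC motor, then fit it.
dt, a_true, b_true = 0.01, 0.98, 0.4      # sample step [s], assumed dynamics
rng = np.random.default_rng(0)

u = rng.uniform(0.0, 12.0, 500)           # logged drive voltages [V]
y = np.zeros(501)                          # logged encoder speeds
for k in range(500):
    y[k + 1] = a_true * y[k] + b_true * u[k] + rng.normal(0, 0.05)

# Least-squares fit of [a, b]: regress y[k+1] on (y[k], u[k])
X = np.column_stack([y[:-1], u])
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
```

With the fitted (a, b) in hand, controller gains can be tuned against the model in simulation before being deployed to the Arduino, which is the validation workflow the abstract describes.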