A NOVEL BIO-INSPIRED STATIC IMAGE COMPRESSION SCHEME FOR NOISY DATA TRANSMISSION OVER LOW-BANDWIDTH CHANNELS
We present a novel bio-inspired static image compression scheme. Our model combines a simplified spiking retina model with well-known data compression techniques. The fundamental hypothesis behind this work is that the mammalian retina generates an efficient neural code associated with the visual flux. The main novelty of this work is to show how this neural code can be exploited for still image compression. Our model has three main stages. The first stage is the bio-inspired retina model proposed by Thorpe et al. [1, 2], which transforms an image into a wave of spikes; this transform is based on so-called rank order coding. In the second stage, we show how this wave of spikes can be expressed over a 4-ary dictionary alphabet through a stack-run coder. The third stage applies a first-order arithmetic coder to the stack-run coded signal. We compare our results to the JPEG standards and show that our model achieves comparable performance at lower computational cost under strong bit-rate restrictions when the data is highly contaminated with noise. In addition, our model offers scalability for monitoring the data transmission flow. The subject matter presented highlights a variety of important issues in the conception of novel bio-inspired compression schemes and points to many potential avenues for future research.
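For readers unfamiliar with rank order coding, the core idea can be sketched in a few lines of Python: only the order in which coefficients "fire", from strongest to weakest magnitude, is transmitted, and the decoder reassigns amplitudes from a fixed rank profile. This is a minimal illustrative sketch, not the authors' implementation: the 1/(rank+1) amplitude profile is an assumption, and the stack-run and arithmetic-coding stages of the paper are omitted, as is sign recovery.

```python
import numpy as np

def rank_order_encode(coeffs):
    """Return the indices of the coefficients sorted from strongest to
    weakest magnitude: the 'firing order' carries the information."""
    return np.argsort(-np.abs(coeffs.ravel()), kind="stable")

def rank_order_decode(order, shape, weights=None):
    """Rebuild an approximation from the firing order alone. Each rank r
    gets a fixed decreasing amplitude (an assumed 1/(r+1) profile here);
    signs and true amplitudes are deliberately discarded in this sketch."""
    n = len(order)
    if weights is None:
        weights = 1.0 / (np.arange(n) + 1.0)
    flat = np.zeros(n)
    flat[order] = weights          # rank 0 receives the largest amplitude
    return flat.reshape(shape)

img = np.array([[3.0, -7.0], [1.0, 5.0]])
order = rank_order_encode(img)     # strongest |coefficient| first
rec = rank_order_decode(order, img.shape)
```

Note that the reconstruction preserves the magnitude ordering of the input even though no amplitudes were transmitted, which is exactly the property rank order coding relies on.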
A bio-inspired image coder with temporal scalability
We present a novel bio-inspired and dynamic coding scheme for static images.
Our coder aims at reproducing the main steps of the visual stimulus processing
in the mammalian retina, taking into account its time behavior. The main novelty
of this work is to show how to exploit the time behavior of the retina cells to
ensure, in a simple way, scalability and bit allocation. To do so, our main
source of inspiration will be the biologically plausible retina model called
Virtual Retina. Following a similar structure, our model has two stages. The
first stage is an image transform which is performed by the outer layers in the
retina. Here it is modelled by filtering the image with a bank of difference of
Gaussians with time-delays. The second stage is a time-dependent
analog-to-digital conversion which is performed by the inner layers in the
retina. By design, our coder enables scalability and bit allocation across
time. Moreover, our decoded images do not show annoying artefacts such as
ringing and block effects. As a whole, this article shows how to
capture the main properties of a biological system, here the retina, in order
capture the main properties of a biological system, here the retina, in order
to design a new efficient coder.
Comment: 12 pages; Advanced Concepts for Intelligent Vision Systems (ACIVS 2011).
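The first stage described above, a bank of difference-of-Gaussians (DoG) filters standing in for the retina's outer layers, can be made concrete by constructing the kernels directly: a narrow excitatory center Gaussian minus a broader inhibitory surround. The sketch below is static and illustrative only; the time-delay dimension of the actual model is omitted, and the (sigma_center, sigma_surround) pairs are assumed example values.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 2-D Gaussian kernel of side 2*radius + 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def dog_kernel(sigma_c, sigma_s, radius):
    """Difference-of-Gaussians: excitatory center minus inhibitory
    surround, the classic center-surround receptive-field profile."""
    return gaussian_kernel(sigma_c, radius) - gaussian_kernel(sigma_s, radius)

# A small bank of two scales (assumed values); an image would be
# filtered by convolving it with each kernel in turn.
bank = [dog_kernel(1.0, 2.0, 8), dog_kernel(2.0, 4.0, 8)]
```

Because both Gaussians are normalized, each DoG kernel sums to zero: the filter is band-pass and removes the image's mean, which is consistent with the whitening role usually attributed to the retina's outer layers.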
Streaming an image through the eye: The retina seen as a dithered scalable image coder
We propose the design of an original scalable image coder/decoder inspired by
the mammalian retina. Our coder accounts for the time-dependent and
nondeterministic behavior of the actual retina. The present work
brings two main contributions: (i) we design a deterministic image coder
mimicking most of the retinal processing stages, and then (ii) we introduce
retinal noise into the coding process, modelled here as a dither signal, to
gain interesting perceptual features. Regarding our first
contribution, our main source of inspiration will be the biologically plausible
model of the retina called Virtual Retina. The main novelty of this coder is to
show that the time-dependent behavior of the retina cells could ensure, in an
implicit way, scalability and bit allocation. Regarding our second
contribution, we reconsider the inner layers of the retina and offer a
possible interpretation of the non-determinism that neurophysiologists observe
in their output. To this end, we model the retinal noise in these layers
by a dither signal. The dithering process that we propose adds several
interesting features to our image coder. The dither noise whitens the
reconstruction error and decorrelates it from the input stimuli. Furthermore,
integrating the dither noise in our coder allows a faster recognition of the
fine details of the image during the decoding process. The goal of this paper
is twofold. First, we aim to mimic the retina as closely as possible in the
design of a novel image coder while maintaining encouraging performance.
Second, we bring a new insight concerning the non-deterministic behavior of the
retina.
Comment: arXiv admin note: substantial text overlap with arXiv:1104.155
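The dithering mechanism described above can be illustrated with a generic subtractive dithered uniform quantizer: uniform noise matched to the quantization step is added before rounding and subtracted again at the decoder, which makes the reconstruction error uniform, zero-mean, and decorrelated from the input. This is a textbook sketch of dithered quantization, not the paper's retinal coder; the step size and test signal are arbitrary choices.

```python
import numpy as np

def dithered_quantize(x, step, rng):
    """Subtractive dithered uniform quantization: dither uniform on
    [-step/2, step/2) is added before rounding and subtracted again at
    the decoder, so the reconstruction error becomes uniform, zero-mean,
    and independent of the input (Schuchman's condition)."""
    d = rng.uniform(-step / 2, step / 2, size=np.shape(x))
    q = step * np.round((x + d) / step)   # what the encoder transmits
    return q - d                          # decoder subtracts the shared dither

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10_000)         # arbitrary test ramp
x_hat = dithered_quantize(x, step=0.1, rng=rng)
err = x_hat - x                           # bounded by step/2, uncorrelated with x
```

The key property, mirrored in the abstract's claims, is that the error is whitened and decorrelated from the stimulus, unlike plain quantization, whose error is a deterministic function of the input.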
An Efficient Method of Image Compression by Merging IWPT Transform Coding with Index Vector Quantization through FNN
By using a neural network, it was found that the reconstructed image has the least image complexity and that the image size can be reduced considerably by reducing the number of samples, with a remarkable increase in the quality of the reconstructed image. This paper proposes a new quantization method that enhances compression quality whenever a neural network is used to compress the image. Quantization in image processing compresses a range of values to a single quantum value: when the number of discrete symbols in a given stream is reduced, the stream becomes more compressible. For example, reducing the number of colors required to represent a digital image makes it possible to reduce its file size. To test the proposed method, we combine IWPT transform coding with the proposed quantization method to obtain a new compression algorithm, and we compare its results with those of other transform methods. The compression time and complexity of the merged method are also better than those of JPEG, which makes it suitable for systems with low-end processors and for hardware implementation. The obtained results show that the proposed compression algorithm increases the compression quality of images remarkably.
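Index vector quantization in general can be sketched with a plain k-means codebook: vectors (e.g. image blocks or transform coefficients) are each replaced by the index of their nearest codeword. The paper trains its quantizer through an FNN, which is not reproduced here; k-means stands in as a generic codebook learner, and the toy 2-D "blocks" are purely illustrative.

```python
import numpy as np

def build_codebook(vectors, k, iters=20, seed=0):
    """Train a VQ codebook with plain batch k-means (a generic stand-in
    for the paper's FNN-based quantizer, whose details are not given here)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None] - codebook[None], axis=2)
        idx = d.argmin(axis=1)                    # nearest codeword per vector
        for j in range(k):
            if np.any(idx == j):                  # leave empty cells unchanged
                codebook[j] = vectors[idx == j].mean(axis=0)
    return codebook

def vq_encode(vectors, codebook):
    """Replace each vector by the index of its nearest codeword."""
    d = np.linalg.norm(vectors[:, None] - codebook[None], axis=2)
    return d.argmin(axis=1)

# Four distinct toy 2-D 'blocks'; with k=4 the codebook converges to them.
blocks = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
cb = build_codebook(blocks, k=4)
idx = vq_encode(blocks, cb)        # the indices are what gets transmitted
```

Only the integer indices (plus the codebook, sent once) need to be coded, which is where the compression gain of any index-based VQ scheme comes from.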
Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formulation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes must make
optimized decisions from a set of accessible strategies to achieve design
goals. This survey reviews numerous applications of the Markov decision process
(MDP) framework, a powerful decision-making tool to develop adaptive algorithms
and protocols for WSNs. Furthermore, various solution methods are discussed and
compared to serve as a guide for using MDPs in WSNs.
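To make the MDP framework concrete in a WSN setting, here is a minimal value-iteration sketch for a single sensor node choosing between transmitting and sleeping. The states, transition probabilities, and rewards are invented for illustration; they are not taken from the survey.

```python
import numpy as np

# Toy MDP for one sensor node: states 0 = low battery, 1 = high battery;
# actions "transmit" (rewarding when battery is high, but drains it) and
# "sleep" (no reward, battery tends to recover). Numbers are illustrative.
P = {  # P[a][s, s'] = transition probability from s to s' under action a
    "transmit": np.array([[0.9, 0.1],
                          [0.6, 0.4]]),
    "sleep":    np.array([[0.5, 0.5],
                          [0.1, 0.9]]),
}
R = {  # R[a][s] = expected immediate reward for taking action a in state s
    "transmit": np.array([-1.0, 2.0]),
    "sleep":    np.array([0.0, 0.0]),
}

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Standard value iteration: apply the Bellman optimality backup
    until the value function stops changing, then read off the policy."""
    V = np.zeros(2)
    while True:
        Q = {a: R[a] + gamma * P[a] @ V for a in P}
        V_new = np.maximum(Q["transmit"], Q["sleep"])
        if np.max(np.abs(V_new - V)) < tol:
            policy = [max(Q, key=lambda a: Q[a][s]) for s in range(2)]
            return V_new, policy
        V = V_new

V, policy = value_iteration(P, R)  # -> sleep when low, transmit when high
```

Even this two-state example shows the pattern the survey describes: the node forgoes immediate action (sleeping on low battery) because the discounted long-term value of recovering energy exceeds the short-term transmission reward.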
Cognitive Security Framework For Heterogeneous Sensor Network Using Swarm Intelligence
Rapid development of sensor technology has led to applications ranging from academic to military within a short time span. These tiny sensors are deployed in environments where the security of the data or the hardware cannot be guaranteed. Due to resource constraints, traditional security schemes cannot be applied directly, and with minimal or no communication security in place, the data, the link, and the sensor node itself can easily be tampered with by intruders. This dissertation presents a security framework for a sensor network managed by a cohesive sensor manager: a simple framework that supports security based on situation assessment is best suited for chaotic and harsh environments. The objective of this research is to design an evolutionary algorithm with controllable parameters that addresses existing and new security threats in a heterogeneous communication network. An in-depth analysis of the different threats, and of the security measures applicable under the network's resource constraints, is presented. Any framework works best if the correlated or orthogonal performance parameters are carefully considered in light of system goals and functions; hence, a trade-off between the performance parameters, weighted using partially ordered sets, is applied to satisfy application-specific requirements and security measures. The proposed framework controls the requirements of a heterogeneous sensor network and balances its resources optimally and efficiently while communicating securely, using a multi-objective function. In addition, the framework can measure the effect of single or combined denial-of-service attacks and can predict new attacks under both cooperative and non-cooperative sensor nodes. The cognitive intuition of the framework is evaluated under simulated real-time scenarios such as health-care monitoring, emergency response, VANETs, biometric security access systems, and battlefield monitoring.
The proposed three-tiered Cognitive Security Framework performs situation assessment and applies the appropriate security measures to maintain the reliability and security of the system. The first tier, a cross-layer cognitive security protocol, defends the communication link between nodes during denial-of-service attacks by re-routing data through secure nodes. The cognitive nature of the protocol balances resources and security, making optimal decisions to obtain reachable and reliable solutions; its versatility and robustness are demonstrated by simulations of health-care and emergency-responder applications under Sybil and wormhole attacks. The protocol considers metrics from each layer of the network model to obtain an optimal, feasible, and resource-efficient solution. In the second tier, the emergent behavior of the protocol is extended to mine information from the nodes and defend the network against denial-of-service attacks using Bayesian models. The jamming attack is considered the one to which the network is most vulnerable, so a simulated vehicular ad-hoc network is subjected to jammers of varied types. Classification of the jammer under various attack scenarios is formulated to predict the genuineness of attacks on the sensor nodes using receiver operating characteristics. In addition to detecting the jammer, a simple technique for locating it under cooperative nodes is implemented; this enables the network to isolate the jammer, or to lower the affected node's reputation, thereby removing the malicious node from future routes. Finally, an intrusion detection system using a 'bait' architecture is analyzed, in which resources are traded off for security owing to the sensitivity of the application. The architecture strategically deploys ant agents to detect and track intruders threatening the network.
The proposed framework is evaluated on the accuracy and speed of intrusion detection before the network is compromised. Detecting intrusions early not only helps in learning future attacks but also serves as a defensive countermeasure. The simulated scenarios of this dissertation show that the Cognitive Security Framework is best suited for both homogeneous and heterogeneous sensor networks.
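The jammer-classification step mentioned above (predicting attack genuineness via receiver operating characteristics) can be illustrated generically: threshold a node's packet-delivery ratio (PDR) and sweep the threshold to trace an ROC curve. The PDR distributions, sample sizes, and thresholds below are invented for illustration and are not the dissertation's models.

```python
import numpy as np

# Illustrative jammer detector: a node's packet-delivery ratio (PDR)
# drops under jamming; declare "jammer" when PDR falls below a threshold,
# and sweep the threshold to trace an ROC curve. Both PDR distributions
# are assumed for the sketch, not measured.
rng = np.random.default_rng(42)
pdr_normal = rng.normal(0.9, 0.05, 1000)   # PDR without a jammer
pdr_jammed = rng.normal(0.5, 0.10, 1000)   # PDR under jamming

def roc_points(pos, neg, thresholds):
    """TPR/FPR of the rule 'declare jammer if PDR < t' for each t."""
    tpr = np.array([(pos < t).mean() for t in thresholds])
    fpr = np.array([(neg < t).mean() for t in thresholds])
    return tpr, fpr

thresholds = np.linspace(0.0, 1.0, 101)
tpr, fpr = roc_points(pdr_jammed, pdr_normal, thresholds)

# Area under the ROC curve, computed directly as the probability that a
# jammed PDR sample lies below a normal one (the Mann-Whitney statistic).
auc = (pdr_jammed[:, None] < pdr_normal[None, :]).mean()
```

With well-separated PDR distributions the AUC approaches 1; in a real deployment the overlap between the two distributions, and hence the achievable operating point, would have to be estimated from traffic measurements.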
An Analog VLSI Deep Machine Learning Implementation
Machine learning systems provide automated data processing and see a wide range of applications. Direct processing of raw high-dimensional data such as images and video by machine learning systems is impractical both due to prohibitive power consumption and the “curse of dimensionality,” which makes learning tasks exponentially more difficult as dimension increases. Deep machine learning (DML) mimics the hierarchical presentation of information in the human brain to achieve robust automated feature extraction, reducing the dimension of such data. However, the computational complexity of DML systems limits large-scale implementations in standard digital computers. Custom analog signal processing (ASP) can yield much higher energy efficiency than digital signal processing (DSP), presenting means of overcoming these limitations.
The purpose of this work is to develop an analog implementation of a DML system.
First, an analog memory is proposed as an essential component of the learning system. It uses charge trapped on a floating gate to store analog values in a non-volatile way. The memory is compatible with a standard digital CMOS process and allows randomly accessible bi-directional updates without the need for an on-chip charge pump or high-voltage switch.
Second, architecture and circuits are developed to realize an online k-means clustering algorithm in analog signal processing. It achieves automatic recognition of underlying data patterns and online extraction of the data's statistical parameters. This unsupervised learning system constitutes the computation node in the deep machine learning hierarchy.
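Online k-means, the algorithm this stage realizes in circuitry, can be stated compactly in software: each incoming vector moves its nearest centroid by a 1/n step, so every centroid tracks the running mean of the vectors it has won. The sketch below is the textbook sequential form on assumed toy data, not a model of the analog implementation.

```python
import numpy as np

def online_kmeans(stream, init_centroids):
    """Online (sequential) k-means: each incoming vector pulls its
    nearest centroid toward it by a 1/n step, so every centroid tracks
    the running mean of the vectors it has won so far."""
    centroids = init_centroids.astype(float).copy()
    counts = np.zeros(len(centroids))
    for x in stream:
        j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]  # running-mean update
    return centroids, counts

# Two well-separated toy clusters; the initial centroids are assumed guesses.
rng = np.random.default_rng(0)
stream = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                    rng.normal(5.0, 0.1, (100, 2))])
rng.shuffle(stream)
centroids, counts = online_kmeans(stream, np.array([[0.0, 0.0], [5.0, 5.0]]))
```

The appeal for hardware is that each update touches a single centroid and needs only a subtraction, a scale, and an accumulate, which maps naturally onto current-mode analog computation.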
Third, a 3-layer, 7-node analog deep machine learning engine is designed, featuring online unsupervised trainability and non-volatile floating-gate analog storage. It utilizes a massively parallel, reconfigurable, current-mode analog architecture to realize efficient computation, and algorithm-level feedback is leveraged to provide robustness to circuit imperfections in the analog signal processing. At a processing speed of 8,300 input vectors per second, it achieves a peak energy efficiency of 1×10^12 operations per second per watt.
In addition, an ultra-low-power tunable bump circuit is presented to provide similarity measures in analog signal processing. It incorporates a novel wide-input-range tunable pseudo-differential transconductor. The circuit demonstrates tunability of the bump's center, width, and height with a power consumption significantly lower than in previous works.