An Energy Driven Architecture for Wireless Sensor Networks
Most wireless sensor networks operate with very limited energy sources, namely their batteries, and their usefulness in real-life applications is therefore severely constrained. The challenging issues are how to optimize the use of their energy, or to harvest their own energy, in order to lengthen their lifetimes for wider classes of application. Tackling these issues requires a robust architecture that takes into account the energy consumption of each functional constituent and their interdependencies. Without such an architecture, it would be difficult to formulate and optimize the overall energy consumption of a wireless sensor network. Unlike most current research, which focuses on a single energy constituent of WSNs independently of the other constituents, this paper presents an Energy Driven Architecture (EDA) as a new architecture and indicates a novel approach for minimising the total energy consumption of a WSN.
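As a rough illustration of the kind of constituent-level formulation such an architecture makes possible (the constituent names, power figures, and duty cycles below are hypothetical and not taken from the paper), a node's energy per operating cycle can be written as a sum over its functional constituents:

# Hypothetical sketch: total node energy as a sum over functional constituents.
# Constituent names and figures are illustrative only, not from the EDA paper.

def total_energy(duty_cycles, constituent_power, period_s):
    """Energy per period = sum over constituents of (power * active time)."""
    return sum(constituent_power[c] * duty_cycles[c] * period_s
               for c in constituent_power)

# Example: a node that senses briefly, computes a little, and transmits rarely.
power_mw = {"sensing": 1.5, "processing": 6.0, "radio_tx": 60.0, "idle": 0.02}
duty     = {"sensing": 0.01, "processing": 0.005, "radio_tx": 0.001, "idle": 0.984}

energy_mj = total_energy(duty, power_mw, period_s=1.0)  # mW * s = mJ
print(f"Energy per 1 s cycle: {energy_mj:.3f} mJ")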
System clock estimation based on clock wastage minimization
When synthesizing a hardware implementation from behavioral descriptions, an important decision is the selection of a clock cycle to schedule the datapath operations into control steps. Most existing behavioral synthesis systems either require the designer to specify the clock cycle explicitly or require that the delays of the operators used in the design be specified in multiples of a clock cycle. In the absence of any tool to guide the selection of a clock cycle, a bad choice of the clock period can adversely affect the performance of the synthesized design. We present an algorithm for estimating the system clock based on a clock wastage minimization criterion. Limitations of previous approaches to the problem are discussed. The results obtained show that the clock cycle estimated by the Clock Wastage Minimization method produces faster designs than previous solutions to the problem.
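To make the criterion concrete, here is a minimal sketch, under assumed operator delays, of how a wastage-minimizing clock period could be searched for; the paper's actual algorithm may weight operators by frequency of use or restrict the candidate set differently:

# Illustrative sketch of a clock-wastage criterion (not the paper's exact algorithm):
# with clock period T, an operator of delay d occupies ceil(d/T) control steps and
# wastes ceil(d/T)*T - d time units; choose the T with the least total wastage.
import math

def clock_wastage(period, op_delays):
    return sum(math.ceil(d / period) * period - d for d in op_delays)

def estimate_clock(op_delays, candidates):
    return min(candidates, key=lambda T: clock_wastage(T, op_delays))

# Hypothetical operator delays in ns (e.g. adder, multiplier, ALU).
delays = [48, 163, 56]
candidate_periods = range(10, 170)          # candidate clock periods in ns
best = estimate_clock(delays, candidate_periods)
print(best, clock_wastage(best, delays))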
LoCoH: nonparametric kernel methods for constructing home ranges and utilization distributions.
Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and its convergence to the true distribution as sample size increases. Here we extend LoCoH in two ways: a "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from the k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH methods to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
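A minimal sketch of the three neighbour-selection rules described above, assuming plain Euclidean distance and omitting the hull construction and UD assembly that follow; the data and parameter values are hypothetical:

# Sketch of the three LoCoH neighbour-selection rules, using Euclidean distance.
import numpy as np

def locoh_neighbors(points, root, mode, value):
    """Return the points forming the local kernel around `root`.

    mode='k': the root plus its k-1 nearest neighbours (k-LoCoH, value = k)
    mode='r': all points within a fixed radius r of the root (r-LoCoH, value = r)
    mode='a': all points within a radius such that the distances of the
              included points to the root sum to <= a (a-LoCoH, value = a)
    """
    d = np.linalg.norm(points - root, axis=1)
    order = np.argsort(d)                      # nearest first; root itself is at d = 0
    if mode == "k":
        idx = order[:value]
    elif mode == "r":
        idx = order[d[order] <= value]
    elif mode == "a":
        idx = order[np.cumsum(d[order]) <= value]   # grow the sphere of influence
    else:
        raise ValueError(mode)
    return points[idx]

# Hypothetical GPS fixes (x, y); each fix would serve as a root point in turn.
fixes = np.random.default_rng(0).uniform(0, 100, size=(200, 2))
kernel_points = locoh_neighbors(fixes, fixes[0], mode="a", value=150.0)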
Computer Architectures to Close the Loop in Real-time Optimization
© 2015 IEEE. Many modern control, automation, signal processing and machine learning applications rely on solving a sequence of optimization problems, which are updated with measurements of a real system that evolves in time. The solution of each of these optimization problems is then used to make decisions, which may be followed by changing some parameters of the physical system, thereby creating a feedback loop between the computation and the physical system. Real-time optimization is not the same as fast optimization, because the computation is affected by an uncertain system that evolves in time. The suitability of a design should therefore not be judged from the optimality of a single optimization problem, but from the evolution of the entire cyber-physical system. The algorithms and hardware used for solving a single optimization problem in the office might therefore be far from ideal when solving a sequence of real-time optimization problems. Instead of there being a single optimal design, one has to trade off a number of objectives, including performance, robustness, energy usage, size and cost. We therefore provide a tutorial introduction to some of the questions and implementation issues that arise in real-time optimization applications. We concentrate on some of the decisions that have to be made when designing the computing architecture and algorithm, and argue that the choice of one informs the other.
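As a minimal sketch of the feedback loop the tutorial is concerned with (the system model, cost function, and per-sample iteration budget below are hypothetical, not from the paper), a warm-started optimizer re-solved as the physical system evolves might look like this:

# Sketch of a real-time optimization loop: measure, re-solve inexactly under a
# compute budget, apply the decision, and let the physical system evolve.
import numpy as np

def solve_inexact(x_meas, u_warm, iters=5, step=0.1):
    """A few gradient steps on a simple quadratic cost, warm-started."""
    u = u_warm.copy()
    for _ in range(iters):                 # limited compute budget per sample
        grad = u + x_meas                  # gradient of 0.5*||u||^2 + x @ u
        u -= step * grad
    return u

rng = np.random.default_rng(1)
x = rng.normal(size=3)                     # state of the physical system
u = np.zeros(3)                            # warm start carried between samples
for t in range(20):
    u = solve_inexact(x, u)                # re-solve with the latest measurement
    x = 0.9 * x + 0.5 * u + 0.01 * rng.normal(size=3)   # system evolves meanwhile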