Cost-Aware Coalitions for Collaborative Tracking in Resource-Constrained Camera Networks
J. C. SanMiguel and A. Cavallaro, "Cost-Aware Coalitions for Collaborative Tracking in Resource-Constrained Camera Networks," IEEE Sensors Journal, vol. 15, no. 5, pp. 2657-2668, May 2015. doi: 10.1109/JSEN.2014.2367015
We propose an approach to create camera coalitions in resource-constrained camera networks and demonstrate it for collaborative target tracking. We cast coalition formation as a decentralized resource allocation process in which the best cameras among those viewing a target are assigned to a coalition based on marginal utility theory. A manager is dynamically selected to negotiate with cameras about joining the coalition and to coordinate the tracking task. This negotiation is based not only on the utility each camera brings to the coalition, but also on the associated cost (i.e., additional processing and communication). Experimental results and comparisons using simulations and real data show that the proposed approach outperforms related state-of-the-art methods by improving tracking accuracy in cost-free settings. Moreover, under resource limitations, the proposed approach controls the tradeoff between accuracy and cost, achieving energy savings with only a minor reduction in accuracy.
This work was supported in part by the EU Crowded Environments monitoring for Activity Understanding and Recognition (CENTAUR, FP7-PEOPLE-2012-IAPP) Project under GA number 324359, and in part by the Artemis JU and U.K. Technology Strategy Board as part of the Cognitive and Perceptive Cameras (COPCAMS) Project under GA number 332913.
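The accept/reject negotiation described in the abstract can be sketched as a marginal-utility test; all names, numbers, and the threshold rule below are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch (hypothetical names and numbers): a coalition
# manager accepts a candidate camera only if the utility the camera adds
# exceeds the cost (extra processing and communication) of including it.

def marginal_utility(coalition_utility, utility_with_camera):
    """Utility gained by adding the candidate camera to the coalition."""
    return utility_with_camera - coalition_utility

def should_join(coalition_utility, utility_with_camera, cost):
    """Manager's accept/reject rule during the negotiation."""
    return marginal_utility(coalition_utility, utility_with_camera) > cost

# Example: tracking utility 0.6 without the camera, 0.85 with it, and a
# cost of 0.1 -> the marginal utility (0.25) exceeds the cost, so accept.
print(should_join(0.6, 0.85, 0.1))  # True
```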
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper serves
simultaneously as a position paper and as a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
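As a hedged aside, the "de-facto standard formulation" the abstract refers to is commonly written as maximum-a-posteriori (MAP) estimation over a factor graph; the notation below is the usual survey convention, not quoted from this abstract:

```latex
% MAP estimation over the variables X (robot trajectory and map) given
% measurements Z: X* = argmax_X p(X|Z) = argmax_X p(Z|X) p(X).
% Under Gaussian noise this becomes nonlinear least squares over the
% factor graph, where h_k is the measurement model of factor k and
% \Sigma_k its covariance (Mahalanobis norm):
\[
X^{\star} \;=\; \arg\min_{X} \sum_{k}
\bigl\lVert h_k(X_k) - z_k \bigr\rVert^{2}_{\Sigma_k}
\]
```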
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad
range of complex and compelling applications in both military and civilian
fields, where users can enjoy high-rate, low-latency, low-cost, and
reliable information services. Achieving this ambitious goal requires new
radio techniques for adaptive learning and intelligent decision making,
because of the complex, heterogeneous nature of network structures and
wireless services. Machine learning (ML) algorithms have had great success
in supporting big data analytics, efficient parameter estimation, and
interactive decision making. Hence, in this article, we review the
thirty-year history of ML by elaborating on supervised learning,
unsupervised learning, reinforcement learning, and deep learning.
Furthermore, we investigate their employment in compelling applications of
wireless networks, including heterogeneous networks (HetNets), cognitive
radios (CR), the Internet of Things (IoT), machine-to-machine (M2M)
networks, and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke
them for hitherto unexplored services and scenarios of future wireless
networks.
Comment: 46 pages, 22 figures
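To illustrate the reinforcement-learning thread the survey covers, here is a toy epsilon-greedy value learner (a one-state simplification of Q-learning) choosing among three hypothetical channels; the channels, rewards, and constants are made up for illustration and are not taken from the article:

```python
import random

# Toy epsilon-greedy value learner: a one-state simplification of the
# reinforcement learning covered by the survey. The three "channels"
# and their rewards are hypothetical.
random.seed(0)
ALPHA, EPSILON = 0.1, 0.1
q = [0.0, 0.0, 0.0]  # estimated value of each channel

def reward(channel):
    """Stand-in for the link quality observed on the chosen channel."""
    return [0.2, 0.5, 1.0][channel]

for _ in range(2000):
    # explore with probability EPSILON, otherwise exploit the best estimate
    if random.random() < EPSILON:
        channel = random.randrange(3)
    else:
        channel = q.index(max(q))
    q[channel] += ALPHA * (reward(channel) - q[channel])  # incremental update

best = q.index(max(q))  # the learner settles on the highest-reward channel
```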
Robotic Wireless Sensor Networks
In this chapter, we present a literature survey of an emerging,
cutting-edge, and multi-disciplinary field of research at the intersection
of Robotics and Wireless Sensor Networks (WSN), which we refer to as
Robotic Wireless Sensor Networks (RWSN). We define an RWSN as an
autonomous networked multi-robot system that aims to achieve certain
sensing goals while meeting and maintaining certain communication
performance requirements, through cooperative control, learning, and
adaptation. While both component areas, i.e., Robotics and WSN, are very
well known and well explored, there exists a whole set of new
opportunities and research directions at the intersection of these two
fields that are relatively or even completely unexplored. One such example
is the use of a set of robotic routers to set up a temporary communication
path between a sender and a receiver, using controlled mobility to the
advantage of packet routing. We find that only a limited number of
articles can be directly categorized as RWSN-related work, whereas a range
of articles in the robotics and WSN literature are also relevant to this
new field of research. To connect the dots, we first identify the core
problems and research trends related to RWSN, such as connectivity,
localization, routing, and robust flow of information. Next, we classify
the existing research on RWSN, as well as the relevant state of the art
from the robotics and WSN communities, according to the problems and
trends identified in the first step. Lastly, we analyze what is missing in
the existing literature and identify topics that require more research
attention in the future.
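The robotic-router example mentioned above can be sketched as a simple placement computation; the straight-line strategy, positions, and radio range are illustrative assumptions, not taken from any surveyed paper:

```python
import math

# Hypothetical sketch of the robotic-router example: place relay robots
# on the straight line between a sender and a receiver so that every
# consecutive pair of nodes is within radio range.

def place_routers(sender, receiver, radio_range):
    """Return relay positions spaced evenly along the sender-receiver line."""
    dx, dy = receiver[0] - sender[0], receiver[1] - sender[1]
    dist = math.hypot(dx, dy)
    hops = math.ceil(dist / radio_range)  # radio links needed end to end
    # hops - 1 relay robots sit at the interior division points
    return [(sender[0] + dx * k / hops, sender[1] + dy * k / hops)
            for k in range(1, hops)]

# Example: endpoints 100 m apart with a 30 m range -> 4 hops, 3 relays.
print(len(place_routers((0, 0), (100, 0), 30)))  # 3
```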
Socio-economic vision graph generation and handover in distributed smart camera networks
In this article, we present an approach to object-tracking handover in a network of smart cameras based on self-interested autonomous agents that exchange responsibility for tracking objects in a market mechanism, in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to learn the vision graph, that is, the camera neighbourhood relations, at runtime; this graph may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, generating the vision graph online, which enables efficient deployment in unknown scenarios and camera network topologies, and, on the other hand, relying only on local information, which increases the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided. We have evaluated our approach both in a simulation study and in a network of real distributed smart cameras.
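The ant-colony-inspired idea of learning the vision graph at runtime can be sketched as pheromone bookkeeping on camera-to-camera links; the constants and update rule below are illustrative assumptions, not the authors' algorithm:

```python
# Hypothetical sketch of an ant-colony-inspired vision graph: each link
# between two cameras carries a pheromone level that is reinforced when a
# handover over that link succeeds and evaporates every time step. Links
# whose pheromone stays above a threshold form the learned vision graph.

EVAPORATION = 0.9   # per-step pheromone decay (assumed constant)
DEPOSIT = 1.0       # reinforcement for a successful handover
THRESHOLD = 0.5     # minimum pheromone for a link to count as an edge

def step(pheromone, successful_handovers):
    """One runtime step: evaporate all links, then reinforce used ones."""
    for link in pheromone:
        pheromone[link] *= EVAPORATION
    for link in successful_handovers:
        pheromone[link] = pheromone.get(link, 0.0) + DEPOSIT
    return pheromone

def vision_graph(pheromone):
    """Links strong enough to be considered camera-neighbourhood edges."""
    return {link for link, level in pheromone.items() if level > THRESHOLD}

p = {}
for _ in range(5):
    p = step(p, [("cam1", "cam2")])  # cam1 repeatedly hands over to cam2
p = step(p, [])                      # one idle step: evaporation only
print(vision_graph(p))  # {('cam1', 'cam2')}
```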
Present and Future of SLAM in Extreme Underground Environments
This paper reports on the state of the art in underground SLAM by discussing
different SLAM strategies and results across six teams that participated in the
three-year-long SubT competition. In particular, the paper has four main goals.
First, we review the algorithms, architectures, and systems adopted by the
teams; particular emphasis is put on lidar-centric SLAM solutions (the go-to
approach for virtually all teams in the competition), heterogeneous multi-robot
operation (including both aerial and ground robots), and real-world underground
operation (from the presence of obscurants to the need to handle tight
computational constraints). We do not shy away from discussing the dirty
details behind the different SubT SLAM systems, which are often omitted from
technical papers. Second, we discuss the maturity of the field by highlighting
what is possible with the current SLAM systems and what we believe is within
reach with some good systems engineering. Third, we outline what we believe are
fundamental open problems that are likely to require further research to break
through. Finally, we provide a list of open-source SLAM implementations and
datasets that have been produced during the SubT challenge and related efforts,
and constitute a useful resource for researchers and practitioners.
Comment: 21 pages including references. This survey paper is submitted to
IEEE Transactions on Robotics for pre-approval.
From Social Simulation to Integrative System Design
As the recent financial crisis showed, today there is a strong need to
gain an "ecological perspective" on all relevant interactions in
socio-economic-techno-environmental systems. For this, we suggested
setting up a
network of Centers for integrative systems design, which shall be able to run
all potentially relevant scenarios, identify causality chains, explore feedback
and cascading effects for a number of model variants, and determine the
reliability of their implications (given the validity of the underlying
models). They will be able to detect possible negative side effects of
policy decisions before they occur. The Centers belonging to this network of
Integrative Systems Design Centers would be focused on a particular field, but
they would be part of an attempt to eventually cover all relevant areas of
society and economy and integrate them within a "Living Earth Simulator". The
results of all research activities of such Centers would be turned into
informative input for political Decision Arenas. For example, Crisis
Observatories (for financial instabilities, shortages of resources,
environmental change, conflict, spreading of diseases, etc.) would be connected
with such Decision Arenas for the purpose of visualization, in order to make
complex interdependencies understandable to scientists, decision-makers, and
the general public.
Comment: 34 pages, Visioneer White Paper, see http://www.visioneer.ethz.c