Multi-Dimensional Range Querying using a Modification of the Skip Graph
Skip graphs are an application-layer distributed routing data structure that can be used in a sensor network to facilitate user queries of data collected by the sensor nodes. This research investigates the impact of a proposed modification to the skip graph of Aspnes and Shah. Nodes in a standard skip graph are sorted by their key value into successively smaller groups based on random membership vectors computed locally at each node. The proposed modification inverts the roles of node key and membership vector: group membership is computed deterministically and node keys are computed randomly. Both skip graph types are modeled in Java, and range query and node mobility simulations are performed. The number of skip graph levels, total node count, and query precision are varied for the query simulations; the number of levels and total node count are varied for the mobility simulation. Query performance is measured by the number of skip graph messages used to execute the query, while mobility performance is measured by the number of messages transmitted to maintain skip graph coherence. When the number of levels is limited and query precision is low, or when query precision is matched by the number of levels in the skip graph and the total network node count is increased, the modified skip graph transmits fewer messages to execute the query. Furthermore, fewer update messages are needed to repair node references lost due to mobile nodes.
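The role inversion described above can be illustrated with a minimal sketch (hypothetical names; the fixed level count and the bit-wise derivation of group membership are illustrative assumptions, not the thesis's actual scheme):

```python
import random

LEVELS = 3  # number of skip graph levels (illustrative constant)

def standard_node(key):
    """Standard skip graph: the key is the application value; the
    membership vector is drawn randomly at the node."""
    return {"key": key,
            "membership": [random.randint(0, 1) for _ in range(LEVELS)]}

def modified_node(value):
    """Proposed inversion: group membership is derived deterministically
    from the value (here: its low-order bits), while the routing key is
    drawn randomly."""
    return {"key": random.random(),
            "membership": [(value >> i) & 1 for i in range(LEVELS)]}

def same_group(a, b, level):
    """Two nodes share a linked list at `level` iff their membership
    vectors agree on the first `level` bits."""
    return a["membership"][:level] == b["membership"][:level]
```

Under this sketch, two modified nodes holding the same value always land in the same groups, which is what makes group membership deterministic.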
Zoom: A multi-resolution tasking framework for crowdsourced geo-spatial sensing
As sensor networking technologies continue to develop, the notion of adding large-scale mobility into sensor networks is becoming feasible by crowd-sourcing data collection to personal mobile devices. However, tasking such networks at fine granularity becomes problematic because the sensors are heterogeneous and owned by the crowd, not the network operators. In this paper, we present Zoom, a multi-resolution tasking framework for crowdsourced geo-spatial sensor networks. Zoom allows users to define arbitrary sensor groupings over heterogeneous, unstructured and mobile networks and assign different sensing tasks to each group. The key idea is the separation of the task information (what task a particular sensor should perform) from the task implementation (code). Zoom consists of (i) a map, an overlay on top of a geographic region, to represent both the sensor groups and the task information, and (ii) adaptive encoding of the map at multiple resolutions together with region-of-interest cropping for resource-constrained devices, allowing sensors to zoom in quickly to a specific region to determine their task. Simulation of a realistic traffic application over an area of 1 sq. km with a task map of size 1.5 KB shows that more than 90% of nodes are tasked correctly. Zoom also outperforms Logical Neighborhoods, a state-of-the-art tasking protocol, in task information size for similar tasks. Its encoded map size is always less than 50% of Logical Neighborhoods' predicate size.
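The multi-resolution map with region-of-interest cropping can be pictured as a quadtree over the geographic region; a sketch under assumed names and structure (Zoom's actual encoding is not specified here):

```python
class QuadNode:
    """One cell of the task map; leaves carry a task identifier."""
    def __init__(self, task=None):
        self.task = task          # task id if the region is uniform
        self.children = None      # four sub-cells once subdivided

    def subdivide(self, tasks):
        """Split into four children, ordered lower-left, lower-right,
        upper-left, upper-right."""
        self.task = None
        self.children = [QuadNode(t) for t in tasks]

def lookup(node, x, y, size=1.0, ox=0.0, oy=0.0):
    """Descend only the root-to-leaf path covering (x, y) -- the
    region of interest -- and return that cell's task."""
    if node.children is None:
        return node.task
    half = size / 2
    col = 1 if x >= ox + half else 0
    row = 1 if y >= oy + half else 0
    return lookup(node.children[row * 2 + col], x, y,
                  half, ox + col * half, oy + row * half)
```

A sensor at a given position only decodes the cells on its own lookup path, which is how a small encoded map can task many devices at different resolutions.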
Decentralized Convex Optimization for Wireless Sensor Networks
Many real-world applications arising in domains such as large-scale machine learning and wired and wireless networks can be formulated as distributed linear least-squares problems over a large network. These problems often have their data naturally distributed: in applications such as seismic imaging and the smart grid, for instance, the sensors are geographically distributed, and the current algorithms to analyze these data rely on a centralized approach. The data is either gathered manually, or relayed by expensive broadband stations, and then processed at a base station. This approach is time-consuming (weeks to months) and hazardous, as the task involves manual data gathering in extreme conditions. To obtain the solution in real time, we require decentralized algorithms that do not rely on a fusion center, cluster heads, or multi-hop communication. In this thesis, we propose several decentralized least-squares optimization algorithms that are suitable for performing real-time seismic imaging in a sensor network. The algorithms are evaluated and tested using both synthetic and real data traces. The results validate that our distributed algorithms are able to obtain a satisfactory image, similar to centralized computation, under network resource constraints, while distributing the computational burden to the sensor nodes.
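As one concrete instance of a fusion-center-free method, decentralized gradient descent for least squares can be sketched as follows (a generic textbook scheme under assumed names, not necessarily one of the thesis's algorithms; the mixing matrix W encodes the neighbour topology):

```python
import numpy as np

def decentralized_least_squares(A_parts, b_parts, W, steps=5000, lr=0.02):
    """Each node i holds only its local block (A_i, b_i) and its row of
    the doubly-stochastic mixing matrix W (its neighbour weights).
    One step: average with neighbours, then take a local gradient step:
        x_i <- sum_j W[i, j] x_j - lr * A_i^T (A_i x_i - b_i)
    No fusion center, cluster heads, or multi-hop relaying is needed."""
    n, d = len(A_parts), A_parts[0].shape[1]
    X = np.zeros((n, d))                     # one local estimate per node
    for _ in range(steps):
        grads = np.stack([A_parts[i].T @ (A_parts[i] @ X[i] - b_parts[i])
                          for i in range(n)])
        X = W @ X - lr * grads               # mix, then descend locally
    return X
```

For a consistent system, all local estimates converge toward the common least-squares solution while each node only ever communicates with its direct neighbours.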
Greedy routing and virtual coordinates for future networks
At the core of the Internet, routers are continuously struggling with
ever-growing routing and forwarding tables. Although hardware advances
do accommodate such growth, we anticipate new requirements, e.g. in
data-oriented networking, where each content piece has to be referenced
instead of hosts, such that current approaches relying on global
information will not be viable anymore, no matter the hardware
progress. In this thesis, we investigate greedy routing methods that
can achieve routing performance similar to today's but use far fewer
resources and rely on local information only. To this end, we
add specially crafted name spaces to the network in which virtual
coordinates represent the addressable entities. Our scheme enables participating
routers to make forwarding decisions using only neighbourhood information,
as the overarching pseudo-geometric name space structure already
organizes and incorporates "vicinity" at a global level.
A first challenge to the application of greedy routing on virtual
coordinates to future networks is that of "routing dead-ends"
that are local minima due to the difficulty of consistent coordinate
attribution. In this context, we propose a routing recovery scheme
based on a multi-resolution embedding of the network in low-dimensional Euclidean spaces.
The recovery is performed by routing greedily on a blurrier view of the network. The
different network detail-levels are obtained through the embedding of
clustering-levels of the graph. When compared with
higher-dimensional embeddings of a given network, our method shows a
significant reduction in routing failures for similar header and
control-state sizes.
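The recovery idea of retrying the greedy decision on a blurrier view can be sketched as follows (hypothetical coordinate maps and graph encoding; the actual embedding construction is more involved):

```python
import math

def greedy_route(graph, levels, src, dst, max_hops=50):
    """graph: node -> list of neighbours; levels: list of coordinate
    maps, finest first, each mapping node -> (x, y).  On a routing
    dead-end at one level (no neighbour strictly closer to the
    destination), the decision is retried on the next, coarser level."""
    node, path = src, [src]
    while node != dst and len(path) <= max_hops:
        nxt = None
        for coords in levels:                          # finest first
            here = math.dist(coords[node], coords[dst])
            best = min(graph[node],
                       key=lambda n: math.dist(coords[n], coords[dst]))
            if math.dist(coords[best], coords[dst]) < here:
                nxt = best                             # progress here
                break
        if nxt is None:
            return None                                # stuck at all levels
        node = nxt
        path.append(node)
    return path if node == dst else None
```

A node that is a local minimum in the fine embedding may still make progress in the coarse one, which is exactly the escape hatch the recovery scheme provides.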
A second challenge to the application of virtual coordinates and
greedy routing to future networks is the support of
"customer-provider" as well as "peering" relationships between
participants, resulting in a differentiated services
environment. Although an application of greedy routing within such a
setting would combine two very common fields of today's networking
literature, such a scenario has, surprisingly, not been studied so
far. In this context we propose two approaches to address this scenario.
In a first approach we implement a path-vector protocol similar to
that of BGP on top of a greedy embedding of the network. This allows
each node to build a spatial map associated with each of its
neighbours indicating the accessible regions. Routing is then
performed through the use of a decision-tree classifier taking the
destination coordinates as input. When applied to a real-world dataset
(the CAIDA 2004 AS graph), we demonstrate a compression ratio of up to 40% of
the routing control information at the network's core, as well as a computationally efficient
decision process comparable to methods such as binary trees and tries.
In a second approach, we take inspiration from consensus-finding in social
sciences and transform the three-dimensional distance data structure
(where the third dimension encodes the service differentiation) into a
two-dimensional matrix on which classical embedding tools can be used.
This transformation is achieved by agreeing on a set of
constraints on the inter-node distances guaranteeing an
administratively-correct greedy routing. The computed distances are
also enhanced to encode multipath support. We demonstrate good
greedy routing performance, as well as satisfaction of over 90% of the multipath constraints,
when relying on the non-embedded distances on synthetic datasets.
As the various embeddings of the consensus distances do not fully exploit their multipath potential, the use of compression techniques such as transform coding to
approximate the obtained distances allows for better routing performance.
Base Software for Wireless Ad-hoc and Sensor Networks
Under the title "Basissoftware für selbstorganisierende Infrastrukturen für vernetzte mobile Systeme" (base software for self-organizing infrastructures for networked mobile systems), the DFG Priority Programme (Schwerpunktprogramm) 1140 unites research projects on wireless ad-hoc and sensor networks. By designing higher-level services for these emerging network types, the priority programme makes an essential contribution to current research while laying a solid foundation for the development of numerous applications.
Bayesian Search Under Dynamic Disaster Scenarios
Search and Rescue (SAR) is a hard decision-making context in which a limited amount of resources must be strategically allocated over the search region in order to find missing people in time. In this thesis, we consider SAR scenarios where the search region is affected by some type of dynamic threat, such as a wildfire or a hurricane. Despite the large number of SAR missions that consistently take place under these circumstances, and although Search Theory is a research area dating back more than half a century, to the best of our knowledge this kind of search problem has not been considered in any previous research. Here we propose a bi-objective mathematical optimization model and three solution methods for the problem: (1) epsilon-constraint; (2) lexicographic; and (3) an ant-colony-based heuristic. One objective of our model pursues the allocation of resources to the riskiest zones; it attempts to find victims located in the regions closest to the threat, which present a high risk of being reached by the disaster. In contrast, the second objective is oriented to allocating resources in the regions where the victim is most likely to be found. Furthermore, we implemented a receding-horizon approach that provides our planning methodology with the ability to adapt to the disaster's behavior based on updated information gathered during the mission. All our contributions were validated through computational experiments.
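The epsilon-constraint method mentioned above can be illustrated on a toy discrete version of the problem (an illustrative brute-force sketch with assumed cell scores, not the thesis's model): fix a lower bound eps on one objective (accumulated risk coverage) and optimize the other (detection probability) subject to it.

```python
from itertools import combinations

def epsilon_constraint(cells, budget, eps):
    """cells: list of (detection_prob, risk) per search zone (assumed
    scores).  Among all plans searching `budget` zones, maximise total
    detection probability subject to accumulated risk >= eps, so the
    riskiest zones are forced into the plan as eps grows."""
    best, best_plan = -1.0, None
    for plan in combinations(range(len(cells)), budget):
        if sum(cells[i][1] for i in plan) < eps:
            continue                         # violates the risk bound
        detect = sum(cells[i][0] for i in plan)
        if detect > best:
            best, best_plan = detect, plan
    return best_plan
```

Sweeping eps over a range of values traces out Pareto-optimal trade-offs between the two objectives, which is the essence of the epsilon-constraint approach.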
Transition in Monitoring and Network Offloading - Handling Dynamic Mobile Applications and Environments
Communication demands have increased significantly in recent years, as evidenced in studies by Cisco and Ericsson. Users demand connectivity anytime and anywhere, while new application domains such as the Internet of Things and vehicular networking amplify the heterogeneity and dynamics of the resource-constrained environment of mobile networks. These developments pose major challenges to the efficient utilization of existing communication infrastructure.
To reduce the burden on the communication infrastructure, mechanisms for network offloading can be utilized. However, to deal with the dynamics of new application scenarios, these mechanisms need to be highly adaptive. Gathering information about the current status of the network is a fundamental requirement for meaningful adaptation. This requires network monitoring mechanisms that are able to operate under the same highly dynamic environmental conditions and changing requirements.
In this thesis, we design and realize a concept for transitions within network offloading to handle the aforementioned challenges, which constitutes our first contribution. We enable adaptive offloading by introducing a methodology for the identification and encapsulation of gateway selection and clustering mechanisms in the transition-enabled service AssignMe.KOM. To handle dynamic environmental conditions, we allow for both centralized and decentralized offloading. We generalize our concept of transitions within offloading and show its significant impact in various heterogeneous application domains, such as vehicular networking and publish/subscribe.
We extend the methodology of identification and encapsulation to the domain of network monitoring in our second contribution. Our concept of a transition-enabled monitoring service AdaptMon.KOM enables adaptive network state observation by executing transitions between monitoring mechanisms. We introduce extensive transition coordination concepts for reconfiguration in both of our contributions. To prevent data loss during complex transition plans that cover multiple coexisting transition-enabled mechanisms, we develop the methodology of inter-proxy state transfer. We target the coexistence of our contributions for the use case of collaborative location retrieval on the example of location-based services.
Based on our prototypes of AssignMe.KOM and AdaptMon.KOM, we conduct an extensive evaluation of our contributions in the Simonstrator.KOM platform. We show that our proposed inter-proxy state transfer prevents information loss, enabling the seamless execution of complex transition plans that cover multiple coexisting transition-enabled mechanisms. Additionally, we demonstrate the influence of transition coordination and spreading on the success of the network adaptation. We establish a cost-efficient and reliable methodology for location retrieval by combining our transition-enabled contributions. We show that our contributions allow for adaptation to dynamic environmental conditions and requirements in network offloading and monitoring.
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a Final Publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction is raised to provide a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. It is then arguably required to have a seamless interaction of High Performance Computing with Modelling and Simulation in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
Contributions to mobile robot navigation based embedded systems and grid mapping
Doctoral thesis presented at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Defense date: 13-07-2015. Path planning is a problem as old as humankind. The necessity of optimizing the resources to reach a
location has been a concern since prehistory. Technology has made it possible to approach
this problem with new resources. However, it has also introduced new requirements.
This thesis is focused on path planning from the perspective of an embedded system using grid maps. For
battery-dependent robots, path length is very relevant because it is directly related to motor consumption
and the autonomy of the system. Nevertheless, a second aspect to be considered when using embedded
systems is the HW requirements. These requirements include floating-point units and storage capacity.
When computer-based path planning algorithms are directly ported to these embedded systems, their HW
requirements become a limitation. This thesis presents two novel path planning algorithms which take into
account both the search of the shortest path and the optimization of HW resources. These algorithms are
HCTNav and NafisNav.
The HCTNav algorithm was developed from the intuitive approach of trying to reach the goal
along a straight trajectory until an obstacle is found. When an obstacle is found, it must be
surrounded until the straight path to the goal can be resumed, reaching either the goal or
another obstacle. Since HCTNav is a path planning algorithm, both possible surrounding
trajectories can be explored in order to choose the best solution. Therefore, for each
obstacle the algorithm finds, there is a branch in the search for the solution. Finally, the
algorithm includes an optimization procedure which reduces the length of the obtained paths
when it is possible to travel between non-consecutive waypoints in a straight line.
The NafisNav algorithm evolves from a depth-first search. For each iteration of the algorithm, the straight
trajectory to the goal position is verified. If this trajectory is not available, the algorithm selects from the
unexplored neighbor cells the one closest to the target. If two neighbors are at the same
distance, the algorithm branches, evaluating both alternatives. This algorithm includes a
backtracking procedure in case it finds a dead end. Finally, from every possible solution, the algorithm proposes the one that,
after optimization, provides the shortest path.
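The neighbour-selection and backtracking behaviour described above can be condensed into a sketch (simplified: Manhattan distance stands in for the straight-line check, and the branching and final path-optimization steps are omitted):

```python
def nafis_style_search(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle.  Advance to the unexplored
    free neighbour closest to the goal; pop the path on a dead end."""
    rows, cols = len(grid), len(grid[0])

    def neighbours(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                yield (nr, nc)

    path, visited = [start], {start}
    while path:
        cur = path[-1]
        if cur == goal:
            return path
        options = [n for n in neighbours(cur) if n not in visited]
        if not options:
            path.pop()                       # dead end: backtrack
            continue
        nxt = min(options,
                  key=lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1]))
        visited.add(nxt)
        path.append(nxt)
    return None
```

The appeal for embedded targets is visible even in this sketch: the only state kept is the current path and a visited set, with no priority queue or per-cell cost table as in Dijkstra or A*.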
The new algorithms have been evaluated and compared with the most widespread algorithms of
the state of the art: Dijkstra and A*. The two chosen evaluation metrics are final path
length and required dynamic memory. HCTNav incurs an average path-length penalty of 2.1%,
and NafisNav increases this penalty to 4.5%. However, compared with the best result between
Dijkstra and A*, these algorithms reduce the memory requirements by 19% for HCTNav and by
49% for NafisNav.