291 research outputs found
RSSi-Based Visitor Tracking in Museums via Cascaded AI Classifiers and Coloured Graph Representations
Individual tracking of museum visitors via portable radio beacons, an asset for behavioural analyses and for improving comfort and performance, is becoming increasingly widespread. Conceptually, this approach enables room-level localisation based on a network of small antennas (thus, without invasive modification of the existing structures). The antennas measure the intensity (RSSi) of self-advertising signals broadcast by beacons individually assigned to the visitors. The signal intensity provides a proxy for the distance to the antennas and thus indicative positioning. However, RSSi signals are well known to be noisy, even in ideal conditions (high antenna density, absence of obstacles, absence of crowds, ...). In this contribution, we present a method to perform accurate RSSi-based visitor tracking when the density of antennas is relatively low, e.g. due to technical constraints imposed by historic buildings. We combine an ensemble of "simple" localisers, trained on ground-truth data, with an encoding of the museum topology as a total-coloured graph. This turns the localisation problem into a cascade process, from large to small scales, in space and in time. Our use case is visitor tracking in the Galleria Borghese, Rome (Italy), for which our method achieves >96% localisation accuracy, significantly improving on our previous work (J. Comput. Sci. 101357, 2021).
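The RSSi-to-distance proxy described above can be sketched with the standard log-distance path-loss model; the parameter values and the naive strongest-antenna vote below are illustrative assumptions, not the paper's cascaded classifiers.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: RSSI = P_tx - 10 * n * log10(d),
    so d = 10 ** ((P_tx - RSSI) / (10 * n)). Parameters are illustrative."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def room_vote(readings, threshold_m=8.0):
    """Naive room-level localiser: attribute the visitor to the antenna
    with the strongest (i.e. closest) reading, if it is within range.
    readings maps antenna id -> measured RSSI in dBm."""
    antenna, rssi = min(readings.items(),
                        key=lambda kv: rssi_to_distance(kv[1]))
    return antenna if rssi_to_distance(rssi) <= threshold_m else None
```

A single noisy vote like this is exactly what the ensemble-plus-graph cascade described in the abstract is designed to stabilise.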
Performance Evaluation of Vehicular Ad Hoc Networks using simulation tools
Recent studies demonstrate that the performance of routing protocols in vehicular networks can be improved by using dynamic information on traffic conditions. WSNs (Wireless Sensor Networks) and VANETs (Vehicular Ad Hoc Networks) are directly related to this observation and have represented the main trend in wireless networking research in recent years.
In this context, a new type of network has been developed: the HSVN (Hybrid Sensor and Vehicular Network) lets WSNs and VANETs cooperate through dynamic information exchange, with the aim of improving road safety and, in particular, of warning the driver and co-pilot of any event occurring on the road ahead, such as traffic jams, accidents or bad weather. The benefits are immediate: fewer accidents mean more lives saved, and less traffic means less pollution. From a technological point of view, this communication protocol will also open the door to attractive services, such as multimedia downloads or Internet browsing, making trips easier, safer and more comfortable.
There is no doubt that, when it comes to cars and road technology, market interest in this field is growing rapidly. Recent projects such as CVIS [1] and COMeSafety [2], focused on improving road driving, are concrete demonstrations that this entire context may soon become reality.
Owing to their peculiar characteristics, VANETs require the definition of specific networking techniques, whose feasibility and performance are usually tested by means of simulation. Starting from this point, this project presents an HSVN platform and introduces and evaluates a communication protocol between VANETs and WSNs using the NCTUns 6.0 [3] simulator. In particular, we analyse the performance of two scenarios developed during the project. Both are set in an urban context, but from each we extract different useful results by analysing the packet losses, the throughput and the end-to-end packet delay.
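The three metrics named above can be computed from simulator traces in a few lines; the trace layout below (per-packet send time, size, and receive time) is an assumed format, not actual NCTUns output.

```python
def link_metrics(sent, received, window_s):
    """Compute (loss ratio, throughput in bit/s, mean end-to-end delay)
    from a packet trace.
    sent:     {pkt_id: (t_send_s, size_bytes)}
    received: {pkt_id: t_recv_s}
    window_s: length of the observation window in seconds."""
    delivered = [pid for pid in sent if pid in received]
    loss = 1.0 - len(delivered) / len(sent)
    bits = sum(sent[pid][1] * 8 for pid in delivered)
    delays = [received[pid] - sent[pid][0] for pid in delivered]
    mean_delay = sum(delays) / len(delays) if delays else float("nan")
    return loss, bits / window_s, mean_delay
```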
Self-organizing Network Optimization via Placement of Additional Nodes
The main research area of the International Graduate School on Mobile
Communication (GS Mobicom) at Ilmenau University of Technology is
communication in disaster scenarios. Due to a disaster or an accident, the
network infrastructure can be damaged or even completely destroyed.
However, available communication networks play a vital role during the
rescue activities especially for the coordination of the rescue teams and
for the communication between their members. Such a communication service
can be provided by a Mobile Ad-Hoc Network (MANET). One of the typical
problems of a MANET is network partitioning, when separate groups of nodes
become isolated from each other. One possible solution for this problem is
the placement of additional nodes in order to reconstruct the communication
links between isolated network partitions. The primary goal of this work is
the research and development of algorithms and methods for the placement of
additional nodes. The focus of this research lies on the investigation of
distributed algorithms for the placement of additional nodes, which use
only the information from the nodes’ local environment and thus form a
self-organizing system. However, given the specifics of how such a system is
used in a disaster scenario, global information about the topology of the
network to be recovered can be known or collected in advance. In this case,
it is of course reasonable to use this information in order to calculate
the placement positions more precisely. The work provides the description,
the implementation details and the evaluation of a self-organizing system
which is able to recover from network partitioning in both situations.
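A minimal centralised sketch of the placement idea, assuming known 2-D node positions and a fixed radio range (the thesis studies distributed algorithms, which this toy version does not capture): place additional nodes evenly along the shortest gap between two isolated partitions.

```python
import math

def nearest_gap(part_a, part_b):
    """Closest pair of nodes between two isolated partitions,
    given as lists of (x, y) coordinates."""
    return min(((a, b) for a in part_a for b in part_b),
               key=lambda pair: math.dist(pair[0], pair[1]))

def relay_positions(part_a, part_b, radio_range):
    """Positions for additional nodes so that consecutive hops along
    the shortest gap stay within radio range."""
    a, b = nearest_gap(part_a, part_b)
    hops = max(1, math.ceil(math.dist(a, b) / radio_range))
    return [(a[0] + (b[0] - a[0]) * i / hops,
             a[1] + (b[1] - a[1]) * i / hops)
            for i in range(1, hops)]
```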
Neural networks-on-chip for hybrid bio-electronic systems
PhD thesis. By modelling the brain's computation we can further our understanding
of its function and develop novel treatments for neurological disorders. The
brain is incredibly powerful and energy efficient, but its computation does
not fit well with the traditional computer architecture developed over the
previous 70 years. Therefore, there is a growing research focus on developing
alternative computing technologies to enhance our neural modelling capability,
with the expectation that the technology itself will also benefit from
increased awareness of neural computational paradigms.
This thesis focuses upon developing a methodology to study the design
of neural computing systems, with an emphasis on studying systems suitable
for biomedical experiments. The methodology allows for the design to be
optimized according to the application. For example, different case studies
highlight how to reduce energy consumption, reduce silicon area, or
increase network throughput.
High-performance processing cores are presented for both Hodgkin-Huxley
and Izhikevich neurons, incorporating novel design features. Further, a complete
energy/area model for a neural-network-on-chip is derived, which is
used in two exemplar case studies: a cortical neural circuit to benchmark
typical system performance, illustrating how a 65,000-neuron network could
be processed in real time within a 100 mW power budget; and a scalable
high-performance processing platform for a cerebellar neural prosthesis. From
these case studies, the contribution of network granularity towards optimal
neural-network-on-chip performance is explored.
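The Izhikevich model mentioned above reduces to two coupled equations, v' = 0.04v² + 5v + 140 − u + I and u' = a(bv − u), with a reset on spiking. A forward-Euler software sketch (regular-spiking parameters; this illustrates the model only, not the thesis's hardware cores) is:

```python
def izhikevich_step(v, u, i_in, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step of the Izhikevich neuron; returns (v, u, spiked).
    Default parameters give a regular-spiking cortical neuron."""
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_in)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike threshold: reset v, bump u
        return c, u + d, True
    return v, u, False

# Drive the neuron with a constant input current for 1000 steps (~1 s).
v, u, spikes = -65.0, -13.0, 0
for _ in range(1000):
    v, u, fired = izhikevich_step(v, u, 10.0)
    spikes += fired
```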
Internet of Satellites (IoSat): analysis of network models and routing protocol requirements
The space segment has evolved from monolithic to distributed satellite systems. One such distributed system is the federated satellite system (FSS), which aims at establishing a win-win collaboration between satellites to improve their mission performance by exploiting unused on-board resources. The FSS concept requires sporadic and direct communications between satellites, using inter-satellite links. However, this point-to-point communication is temporary and can thus break existing federations. Therefore, a multi-hop scenario needs to be addressed. This is the goal of the Internet of Satellites (IoSat) paradigm which, as opposed to a common backbone, proposes the creation of a network using a peer-to-peer architecture. In particular, the satellites themselves take part in the network by establishing intermediate collaborations to deploy an FSS. This paradigm poses a major challenge in terms of network definition and routing protocol. Therefore, this paper not only details the IoSat paradigm, but also analyses the different satellite network models. Furthermore, it evaluates the routing protocol candidates that could be used to implement the IoSat paradigm.
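A peer-to-peer, multi-hop route over whatever inter-satellite links are currently active can be found with a plain breadth-first search; the snapshot topology below is a made-up example, and real IoSat routing must additionally cope with links appearing and disappearing over time.

```python
from collections import deque

def find_route(links, src, dst):
    """Breadth-first search over the currently active inter-satellite
    links; links maps each satellite to the set of its neighbours.
    Returns the hop sequence from src to dst, or None if unreachable."""
    parents, frontier = {src: None}, deque([src])
    while frontier:
        sat = frontier.popleft()
        if sat == dst:                 # reconstruct path via parents
            path = []
            while sat is not None:
                path.append(sat)
                sat = parents[sat]
            return path[::-1]
        for nxt in links.get(sat, ()):
            if nxt not in parents:
                parents[nxt] = sat
                frontier.append(nxt)
    return None
```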
Mobile Ad-Hoc Networks
Being infrastructure-less and lacking central administrative control, wireless ad-hoc networking plays an increasingly important role in extending the coverage of traditional wireless infrastructure (cellular networks, wireless LANs, etc.). This book presents state-of-the-art techniques and solutions for wireless ad-hoc networks. It focuses on the following topics: quality of service and video communication, routing protocols, and cross-layer design. A few interesting problems concerning security and delay-tolerant networks are also discussed. The book aims to provide network engineers and researchers with design guidelines for large-scale wireless ad-hoc networks.
Smart PIN: performance and cost-oriented context-aware personal information network
The next generation of networks will involve the interconnection of heterogeneous individual
networks such as WPAN, WLAN, WMAN and cellular networks, adopting IP as the common infrastructural protocol and providing a virtually always-connected network. Furthermore,
many devices enable easy acquisition and storage of information such as pictures, movies, emails, etc. The resulting information overload and the divergent
characteristics of this content make it difficult for users to manage their data manually. Consequently, there is a need for personalised automatic services which enable data exchange across heterogeneous networks and devices. To support these personalised services, user-centric approaches
to data delivery across the heterogeneous network are also required.
In this context, this thesis proposes Smart PIN, a novel performance- and cost-oriented context-aware Personal Information Network. Smart PIN's architecture is detailed, including its network, service and management components. Within the service component, two novel schemes for efficient delivery of context and content data are proposed:
the Multimedia Data Replication Scheme (MDRS) and the Quality-oriented Algorithm for Multiple-source Multimedia Delivery (QAMMD).
MDRS supports efficient data accessibility among distributed devices using data replication based on a utility function and a minimum data set. QAMMD employs a buffer underflow avoidance scheme for streaming, which achieves high multimedia quality without adapting content to network conditions. Simulation models for MDRS and
QAMMD were built based on various heterogeneous network scenarios. Additionally, multiple-source streaming based on QAMMD was implemented as a prototype and tested in an emulated network environment. Comparative tests show that MDRS and QAMMD perform significantly better than other approaches.
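Utility-driven replication of the kind MDRS performs can be caricatured as a greedy knapsack-style choice; the utility-per-byte ranking below is an assumed stand-in for the scheme's actual utility function and minimum data set.

```python
def replicate(items, capacity_bytes):
    """Greedy replication sketch. items is a list of
    (name, utility, size_bytes); fill the device with the best
    utility-per-byte items that still fit in the capacity."""
    ranked = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    chosen, used = [], 0
    for name, _util, size in ranked:
        if used + size <= capacity_bytes:
            chosen.append(name)
            used += size
    return chosen
```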
Techniques of data prefetching, replication, and consistency in the Internet
The Internet has become a major infrastructure for information sharing in our daily life, and indispensable to critical and large applications in industry, government, business, and education. Internet bandwidth (the network speed of transferring data) has increased dramatically; however, latency (the delay in physically accessing data) has been reduced at a much slower pace. The rich bandwidth and lagging latency can be effectively coped with in Internet systems through three data management techniques: caching, replication, and prefetching. The focus of this dissertation is to address the latency problem in the Internet by utilizing the rich bandwidth and large storage capacity to efficiently prefetch data and thereby significantly improve Web content caching performance, and by proposing and implementing scalable data consistency maintenance methods to handle Internet Web address caching in the distributed name system (DNS) and massive data replication in peer-to-peer systems. While the DNS service is critical to the Internet, peer-to-peer data sharing is becoming accepted as an important Internet activity.
We have made three contributions in developing prefetching techniques. First, we have proposed an efficient data structure for maintaining Web access information, called popularity-based Prediction by Partial Matching (PB-PPM), in which data are placed and replaced guided by the popularity of Web accesses, so that only important and useful information is stored. PB-PPM greatly reduces the required storage space and improves prediction accuracy. Second, a major weakness of existing Web servers is that prefetching activities are scheduled independently of dynamically changing server workloads. Without proper control and coordination between the two kinds of activities, prefetching can negatively affect Web services and degrade Web access performance. To address this problem, we have developed a queuing model to characterize the interactions.
Guided by the model, we have designed a coordination scheme that dynamically adjusts the prefetching aggressiveness in Web servers. This scheme not only prevents the Web servers from being overloaded, but also minimizes the average server response time. Finally, we have proposed a scheme that effectively coordinates the sharing of access information between proxy and Web servers. With the support of this scheme, the accuracy of prefetching decisions is significantly improved.
Regarding data consistency support for Internet caching and data replication, we have conducted three significant studies. First, we have developed a consistency support technique to maintain data consistency among replicas in structured P2P networks. Based on Pastry, an existing and popular P2P system, we have implemented this scheme and show that it can effectively maintain consistency while preventing hot-spot and node-failure problems. Second, we have designed and implemented a DNS cache update protocol, called DNScup, to provide strong consistency for domain/IP mappings. Finally, we have developed a dynamic lease scheme to update replicas in the Internet in a timely manner.
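The idea behind popularity-guided prediction can be illustrated with an order-1 predictor that keeps successor counts per page and prefetches the most popular successor; this toy class sketches only the concept, not the dissertation's PB-PPM data structure.

```python
from collections import defaultdict

class SuccessorPredictor:
    """Order-1 sketch of popularity-guided prediction: count which page
    tends to follow each page, and predict the most popular successor."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def access(self, page):
        """Record one page access, updating the successor counts."""
        if self.prev is not None:
            self.counts[self.prev][page] += 1
        self.prev = page

    def predict(self, page):
        """Most popular successor of `page`, or None if unseen."""
        successors = self.counts.get(page)
        return max(successors, key=successors.get) if successors else None
```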
Smart network caches : localized content and application negotiated recovery mechanisms for multicast media distribution
Thesis (Ph.D.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998. Includes bibliographical references (p. 133-138). By Roger George Kermode.
Content-Aware Multimedia Communications
The demands for fast, economic and reliable dissemination of multimedia
information are steadily growing within our society. While people and
economy increasingly rely on communication technologies, engineers still
struggle with their growing complexity.
Complexity in multimedia communication originates from several sources. The
most prominent is the unreliability of packet networks like the Internet.
Recent advances in scheduling and error control mechanisms for streaming
protocols have shown that the quality and robustness of multimedia delivery
can be improved significantly when protocols are aware of the content they
deliver. However, the proposed mechanisms require close cooperation between
transport systems and application layers which increases the overall system
complexity. Current approaches also require expensive metrics and focus
only on specific encoding formats. A general and efficient model has so
far been missing.
This thesis presents efficient and format-independent solutions to support
cross-layer coordination in system architectures. In particular, the first
contribution of this work is a generic dependency model that enables
transport layers to access content-specific properties of media streams,
such as dependencies between data units and their importance. The second
contribution is the design of a programming model for streaming
communication and its implementation as a middleware architecture. The
programming model hides the complexity of protocol stacks behind simple
programming abstractions, but exposes cross-layer control and monitoring
options to application programmers. For example, our interfaces allow
programmers to choose appropriate failure semantics at design time while
they can refine error protection and visibility of low-level errors at
run-time.
Using several examples, we show how our middleware simplifies the
integration of stream-based communication into large-scale application
architectures. An important result of this work is that despite cross-layer
cooperation, neither application nor transport protocol designers
experience an increase in complexity. Application programmers can even
reuse existing streaming protocols which effectively increases system
robustness.
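The dependency and importance information that such a generic model exposes to transport layers can be pictured with a toy frame-priority scheduler; the I/P/B priorities and the budget-based drop policy below are an illustrative assumption, not the thesis's dependency model.

```python
# I-frames anchor a group of pictures, P-frames depend on a previous
# reference, and B-frames depend on references but have no dependants,
# so B-frames are the cheapest to drop under congestion.
PRIORITY = {"I": 3, "P": 2, "B": 1}

def schedule_drops(frames, budget):
    """Keep only `budget` frames, dropping the least important first.
    frames: list of (frame_id, frame_type) in presentation order."""
    ranked = sorted(frames, key=lambda f: (-PRIORITY[f[1]], f[0]))
    kept = {fid for fid, _ in ranked[:budget]}
    return [f for f in frames if f[0] in kept]
```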
- …