Co-Simulation of distributed flexibility coordination schemes
The goal of this thesis is to implement and test a co-simulation environment that makes it possible to connect simulators of different types. The environment is applied to the simulation of coordinated optimal control of the energy consumption of 20 households with different consumption requirements and energy-storage capabilities. The results show that coordinated control of the energy consumption of multiple households can achieve considerable savings compared with controlling each household's consumption individually, without regard to the others.
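The coordination gain the abstract describes can be illustrated with a toy model: households placing one flexible load block under a time-of-use tariff plus a charge on the aggregate peak. The tariff, demand charge, and greedy coordinator below are illustrative assumptions, not the thesis's actual optimization scheme.

```python
# Toy comparison of uncoordinated vs. coordinated household load scheduling.
# All numbers and the greedy strategy are illustrative assumptions.

PRICES = [0.10, 0.10, 0.30, 0.30]   # price per kWh in four time slots
PEAK_CHARGE = 1.0                   # cost per kW of aggregate peak demand
LOADS = [2.0] * 6                   # six households, one 2 kWh flexible block each

def total_cost(schedule):
    """schedule[h] = slot chosen for household h."""
    per_slot = [0.0] * len(PRICES)
    for h, slot in enumerate(schedule):
        per_slot[slot] += LOADS[h]
    energy = sum(per_slot[s] * PRICES[s] for s in range(len(PRICES)))
    return energy + PEAK_CHARGE * max(per_slot)

# Uncoordinated: every household picks its individually cheapest slot,
# so all loads pile into the same slot and the aggregate peak is large.
selfish = [PRICES.index(min(PRICES))] * len(LOADS)

# Coordinated: a central scheduler greedily assigns each load to the slot
# with the lowest marginal cost given what is already scheduled.
coordinated = []
per_slot = [0.0] * len(PRICES)
for load in LOADS:
    def marginal(s):
        new_peak = max(max(per_slot), per_slot[s] + load)
        return load * PRICES[s] + PEAK_CHARGE * (new_peak - max(per_slot))
    best = min(range(len(PRICES)), key=marginal)
    per_slot[best] += load
    coordinated.append(best)

print(f"uncoordinated cost: {total_cost(selfish):.2f}")
print(f"coordinated cost:   {total_cost(coordinated):.2f}")
```

With these assumed numbers the coordinated schedule spreads load across slots and roughly halves the total cost, which is the qualitative effect the abstract reports.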
Micropower sensate tags for supply-chain management and security
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 109-113). By Mateusz Ksawery Malinowski.

This thesis describes the development of a system of sensate active RFID tags for supply-chain management and security applications, necessitated by the current lack of commercial platforms capable of monitoring the state of shipments at the crate and case level. To make a practical prototype, off-the-shelf components and custom-designed circuits that minimize power consumption and cost were assembled and integrated into an interrupt-driven, quasi-passive system that can monitor, log, and report environmental conditions inside a shipping crate while consuming only 23.7 microwatts of average power. To prove the feasibility of the system, the tags were tested in the laboratory and aboard transport conveyances.
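A microwatt-scale average power figure like the one reported is typically the result of aggressive duty cycling. The sketch below shows the standard arithmetic; the voltage, currents, and duty cycle are assumed illustrative values, not the thesis's measured figures.

```python
# Back-of-the-envelope duty-cycle arithmetic for a quasi-passive sensor tag.
# All numbers below are illustrative assumptions, not measurements from the
# thesis (which reports 23.7 uW average power for its actual design).

V = 3.0            # supply voltage (V), assumed
I_SLEEP = 5e-6     # sleep-mode current (A), assumed
I_ACTIVE = 2e-3    # current while sampling/logging (A), assumed
DUTY = 0.002       # fraction of time spent active (0.2%), assumed

# Average current is the duty-cycle-weighted mix of active and sleep currents.
avg_current = DUTY * I_ACTIVE + (1 - DUTY) * I_SLEEP
avg_power_uw = avg_current * V * 1e6
print(f"average power: {avg_power_uw:.1f} uW")
```

The point of the exercise: with a sub-percent duty cycle, average power is dominated almost equally by the sleep floor and the brief active bursts, which is why interrupt-driven wake-up matters so much for such tags.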
Performance Evaluation and Anomaly Detection in Mobile Broadband Across Europe
With the rapidly growing smartphone market and users' expectation of immediate access to high-quality multimedia content, delivering video over wireless networks has become a major challenge, making it difficult to provide end users with flawless quality of service. The growth of the smartphone market goes hand in hand with the development of the Internet, in which current transport protocols are being re-evaluated to deal with traffic growth. QUIC and WebRTC are new and evolving standards. The latter was developed explicitly to meet this demand and enable a high-quality experience for mobile users of real-time communication services, while QUIC has been designed to reduce Web latency, integrate security features, and allow a high-quality experience for mobile users. Evaluating the performance of these rising protocols outside a controlled, systematic environment is therefore essential to understand network behavior and provide end users with a better multimedia delivery service. Since most work in the research community is conducted in controlled environments, we leverage the MONROE platform to investigate the performance of QUIC and WebRTC in real cellular networks using static and mobile nodes. In this Thesis, we conduct measurements of WebRTC and QUIC and make the resulting datasets public for interested experimenters. Building such datasets is welcomed by the research community, as it opens the door to applying data science to network data. The development part of the experiments involves building Docker containers that act as QUIC and WebRTC clients. These containers are publicly available and can be used on their own or within the MONROE platform. These contributions span Chapters 4 and 5, presented in Part II of the Thesis.
We then exploit the data collected from MONROE to apply data science to network datasets, which helps identify networking problems and shifts the focus of the Thesis from performance evaluation to a data science problem.
Indeed, the second part of the Thesis focuses on interpretable data science. Identifying network problems by leveraging Machine Learning (ML) has gained much visibility in the past few years, resulting in dramatically improved cellular network services. However, critical tasks like troubleshooting cellular networks are still performed manually by experts who monitor the network around the clock.
In this context, this Thesis contributes by proposing the use of simple, interpretable ML algorithms, moving away from the current trend of high-accuracy ML algorithms (e.g., deep learning) that do not allow interpretation (and hence understanding) of their outcome. We accept lower accuracy because we consider the scenarios misclassified by the ML algorithms to be interesting (anomalous), and we do not want to miss them through overfitting. To this aim, we present CIAN (Causality Inference of Anomalies in Networks), a practical and interpretable ML methodology, which we implement in the form of a software tool named TTrees (Troubleshooting Trees) and compare to a supervised counterpart named STrees (Supervised Trees). Both methodologies require small volumes of data and are quick to train. Our experiments using real data from operational commercial mobile networks, e.g., sampled with MONROE probes, show that STrees and CIAN can automatically identify and accurately classify network anomalies, e.g., cases in which low network performance is not justified by operational conditions, after training with just a few hundred data samples, hence enabling precise troubleshooting actions. Most importantly, our experiments show that a fully automated unsupervised approach is viable and efficient. These contributions are presented in Part III of the Thesis, which includes Chapters 6 and 7.
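The "misclassified means interesting" principle behind such troubleshooting trees can be sketched with a minimal hand-rolled threshold rule (in effect a depth-1 decision tree). The data and the single-feature rule below are illustrative assumptions, not the actual CIAN/TTrees methodology.

```python
# Fit a single human-readable threshold rule, then flag the samples the
# rule cannot explain as anomalies. Data and feature are illustrative.

# (signal_strength_dBm, low_performance) pairs: normally, a weak signal
# explains low performance.
samples = [
    (-60, False), (-65, False), (-70, False), (-75, False),
    (-95, True), (-100, True), (-105, True),
    (-62, True),   # low performance despite a strong signal: anomalous
]

def errors(threshold):
    """Rule: predict low performance iff signal < threshold."""
    return sum((x < threshold) != y for x, y in samples)

# Exhaustive search over candidate thresholds (a depth-1 decision tree).
candidates = sorted({x for x, _ in samples})
best = min(candidates, key=errors)

# Misclassified samples are exactly the "interesting" cases: their poor
# performance is not justified by the operational condition the rule uses.
anomalies = [(x, y) for x, y in samples if (x < best) != y]
print(f"learned rule: low performance iff signal < {best} dBm")
print(f"anomalies: {anomalies}")
```

The learned rule stays fully interpretable ("performance is low when the signal drops below -75 dBm"), and the one sample it fails on is precisely the case a troubleshooter should investigate.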
In conclusion, in this Thesis we go through a data-driven networking roller coaster, from evaluating the performance of upcoming network protocols in real mobile networks to building methodologies that help identify and classify the root causes of networking problems, emphasizing that these methodologies are easy to implement and can be deployed in production environments.

This work has been supported by the IMDEA Networks Institute. Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee: Matteo Sereno (chair), Antonio de la Oliva Delgado (secretary), Raquel Barco Moren (member).
Secure Abstractions for Trusted Cloud Computation
Cloud computing is adopted by most organizations due to its characteristics, namely
offering on-demand resources and services that can quickly be provisioned with minimal
management effort and maintenance expenses for its users. However, it still suffers from security incidents, which have led to many data security concerns and reluctance toward further adoption. In the wake of these incidents, cryptographic technologies such as homomorphic and searchable encryption schemes have been leveraged to provide solutions that mitigate those concerns.
The goal of this thesis is to provide a set of secure abstractions to serve as a tool for
programmers to develop their own distributed applications. Furthermore, these abstractions
can also be used to support trusted cloud computations in the context of NoSQL
data stores. For this purpose, we leveraged conflict-free replicated data types (CRDTs), which ensure the consistency of replicated data without the need for synchronization and thus align well with the distributed and replicated nature of the cloud, together with the aforementioned cryptographic technologies to comply with the security requirements. The main challenge of this thesis consisted in combining the cryptographic technologies with the CRDTs in such a way that all of the data structures' functionality could be supported over ciphertext, while striving to attain the best possible security and performance.
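The CRDT side of this combination can be sketched with the classic grow-only counter. The version below operates on plaintext; the thesis layers cryptographic schemes (e.g., additively homomorphic encryption) over the per-replica entries, which this toy deliberately omits.

```python
# Sketch of a grow-only counter (G-Counter), a basic CRDT. Plaintext only;
# the secure abstractions in the thesis would encrypt the per-replica slots.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.counts = [0] * n_replicas   # one slot per replica

    def increment(self, delta=1):
        self.counts[self.id] += delta    # each replica writes only its own slot

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # so replicas converge without any synchronization.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)

# Two replicas update concurrently, then exchange state in either order.
a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
print("converged value:", a.value())
```

The merge function is the crux: because it needs only per-slot comparisons (or, for some secure variants, per-slot additions), it is the natural place to substitute operations over ciphertext.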
To evaluate our abstractions, we conducted an experiment comparing each secure abstraction with its non-secure counterpart in terms of performance. Additionally, we analysed the security level provided by each structure in light of the cryptographic scheme used to support it. The results of our experiment show that our abstractions provide the intended data security with an acceptable performance overhead, demonstrating their potential as building blocks for trusted cloud computation.
Modeling and Implementation of Wireless Sensor Networks for Logistics Applications
Logistics has experienced a long period of development and improvement, driven by advances in vehicle technologies, transportation systems, traffic-network extension and logistics processes. In recent decades, complexity has increased significantly, creating intricate logistics networks spanning multiple continents. Because of their close cooperation, these logistics networks are highly dependent on each other for sharing and processing logistics information. Every customer has many suppliers and vice versa. Conventional centralized control is still in use but is reaching its limits, for example with the geographic distribution of suppliers, the complexity and flexibility of order processing, or the dynamics of the logistic objects. To overcome these disadvantages, the paradigm of autonomous logistics has been proposed and promises a better technical solution for current logistics systems. In autonomous logistics, decision making is shifted toward the logistic objects, which are defined as material items (e.g., vehicles, containers) or immaterial items (e.g., customer orders) of a networked logistics system. These objects have the ability to interact with each other and make decisions according to their own objectives. On the technical side, with the rapid development of innovative sensor technology, namely Wireless Sensor Networks (WSNs), each element in the network can self-organize and interact with other elements for information transmission. Attaching an electronic sensor element to a logistic object creates an autonomous environment in both the communication and the logistics domain. With this idea, the requirements of logistics can be fulfilled; for example, monitoring data can be precise, comprehensive and timely. In addition, the management of the flow of goods can be transferred to the management of informational logistic objects, which is easier with the help of information technologies.
However, in order to transmit information between these logistic objects, a routing protocol is necessary. The Opportunistic relative Distance-Enabled Uni-cast Routing (ODEUR) protocol proposed and investigated in this thesis is shown to be suitable for autonomous environments such as autonomous logistics. Moreover, its support for mobility, multiple sinks and auto-connection enhances the dynamics of logistic objects. With a general model that covers a range from low-level issues to high-level protocols, services such as real-time monitoring of environmental conditions, context-aware applications and localization make the logistic objects (embedded with sensor equipment) more capable in information communication and data processing. The distributed management service in each sensor node allows the flexible configuration of logistic items at any time during transportation. All of these integrated features introduce a new technical solution for smart logistic items and intelligent transportation systems. In parallel, a management system, the WSN data Collection and Management System (WiSeCoMaSys), is designed to interact with the deployed Wireless Sensor Networks. This tool allows the user to easily manipulate the sensor networks remotely. With its rich set of features, such as real-time data monitoring, data analysis and visualization, per-node management, and alerts, it helps both developers and users in the design and deployment of a sensor network. In addition, an analytical model is developed for comparison with the results from simulations and experiments. Focusing on the use of probability theory to model the network links, this model considers several important factors, such as packet reception rate and network traffic, which are used in the simulation and experiment parts.
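The kind of link-model arithmetic such an analytical model rests on can be illustrated by composing a per-hop packet reception rate into an end-to-end delivery probability. The formulas are standard probability under an independent-links assumption; the numbers are illustrative, not the thesis's measurements.

```python
# Illustrative link-model arithmetic: per-hop packet reception rate (PRR)
# composed into end-to-end delivery probability, with optional per-hop
# retransmissions. Assumes independent links; numbers are illustrative.

def e2e_delivery(prr_per_hop, hops, retries=0):
    """Probability that a packet survives `hops` independent links, where
    each hop succeeds with probability prr_per_hop and may be retried."""
    per_hop = 1 - (1 - prr_per_hop) ** (retries + 1)   # hop succeeds at least once
    return per_hop ** hops                              # all hops must succeed

print(f"no retries:  {e2e_delivery(0.9, 4):.3f}")
print(f"two retries: {e2e_delivery(0.9, 4, retries=2):.3f}")
```

Even with a respectable 90% per-hop reception rate, four hops without retransmission deliver only about two thirds of the packets, which is why per-hop retries (or opportunistic forwarding, as in ODEUR) matter for multi-hop WSNs.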
Moreover, a comparison between simulation, experiment and analytical results is carried out to estimate the accuracy of the design and to make several improvements to the simulation accuracy. Finally, all of the above parts are integrated into one unified system. This system is verified by both simulations of logistic scenarios (e.g., harbors, warehouses and containers) and experiments. The results show that the proposed model and protocol achieve a good packet delivery rate, low memory requirements and low delay. Accordingly, this system design is practical and applicable in logistics.
Quantifying and Predicting the Influence of Execution Platform on Software Component Performance
The performance of software components depends on several factors, including the execution platform on which the components run. To simplify cross-platform performance prediction in relocation and sizing scenarios, this thesis introduces a novel approach that separates the application performance profile from the platform performance profile. The approach is evaluated using transparent instrumentation of Java applications and automated benchmarks for Java Virtual Machines.
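The separation idea can be sketched as follows: an application profile records how often each primitive operation runs, a platform profile records the benchmarked cost of each primitive on a given machine, and combining the two yields a cross-platform prediction. The operation names and costs below are illustrative assumptions, not the thesis's actual model or measurements.

```python
# Sketch of application/platform profile separation for cross-platform
# performance prediction. All names and numbers are illustrative.

# Application profile: operation counts, measured once via instrumentation.
app_profile = {"arith": 5_000_000, "mem_read": 2_000_000, "io": 300}

# Platform profiles: benchmarked cost per operation (seconds), per machine.
platform_a = {"arith": 1e-9, "mem_read": 4e-9, "io": 2e-6}
platform_b = {"arith": 2e-9, "mem_read": 3e-9, "io": 1e-6}

def predict_runtime(app, platform):
    """Predicted runtime = sum over operations of count * per-op cost."""
    return sum(count * platform[op] for op, count in app.items())

print(f"platform A: {predict_runtime(app_profile, platform_a) * 1e3:.2f} ms")
print(f"platform B: {predict_runtime(app_profile, platform_b) * 1e3:.2f} ms")
```

The payoff of the separation is that relocating the application to a new platform requires only a new platform benchmark, not re-profiling the application.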
Replication and fault-tolerance in real-time systems
PhD Thesis

The increased availability of sophisticated computer hardware and the corresponding decrease in its cost have led to a widespread growth in the use of computer systems for real-time plant and process control applications. Such applications typically place very high
demands upon computer control systems and the development of appropriate control
software for these application areas can present a number of problems not normally
encountered in other applications.
First of all, real-time applications must be correct in the time domain as well as the value
domain: returning results which are not only correct but also delivered on time. Further,
since the potential for catastrophic failures can be high in a process or plant control
environment, many real-time applications also have to meet high reliability requirements.
These requirements will typically be met by means of a combination of fault avoidance and
fault tolerance techniques.
This thesis is intended to address some of the problems encountered in the provision of fault tolerance in real-time application programs. Specifically, it considers the use of replication
to ensure the availability of services in real-time systems. In a real-time environment,
providing support for replicated services can introduce a number of problems. In particular,
the scope for non-deterministic behaviour in real-time applications can be quite large and
this can lead to difficulties in maintaining consistent internal states across the members of a
replica group. To tackle this problem, a model is proposed for fault tolerant real-time
objects which not only allows such objects to perform application specific recovery
operations and real-time processing activities such as event handling, but which also allows
objects to be replicated. The architectural support required for such replicated objects is
also discussed and, to conclude, the run-time overheads associated with the use of such
replicated services are considered.

The Science and Engineering Research Council
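The consistency problem the abstract raises can be made concrete with a tiny state-machine-replication example: replicas that apply the same ordered, deterministic operations stay identical, while a single non-deterministic operation makes them diverge. This is a generic illustration of the principle, not the thesis's object model.

```python
# Why determinism matters for active replication: identical replicas applying
# the same ordered log agree, until a non-deterministic operation intervenes.
# Purely illustrative sketch, not the thesis's fault-tolerant object model.

import random

class Replica:
    def __init__(self):
        self.state = 0

    def apply(self, op, arg):
        if op == "add":
            self.state += arg                    # deterministic
        elif op == "jitter":
            self.state += random.randint(0, 10)  # non-deterministic!

log = [("add", 5), ("add", 3)]

r1, r2 = Replica(), Replica()
for op, arg in log:
    r1.apply(op, arg)
    r2.apply(op, arg)
# After a purely deterministic log, the replicas necessarily agree.
print("after log:", r1.state, r2.state)

# One non-deterministic step and the replica group may silently diverge.
r1.apply("jitter", None)
r2.apply("jitter", None)
print("after jitter:", r1.state, r2.state)
```

This is exactly why the thesis's model constrains how replicated real-time objects perform activities like event handling: any source of non-determinism (clocks, interrupt timing, random choices) must be resolved consistently across the replica group.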