Tackling cloud compliance through information flow control
IT outsourcing to clouds poses new challenges for the technical implementation of legally compliant clouds. On the one hand, outsourcing companies have to comply with legal requirements. On the other hand, cloud providers have to support their customers in achieving compliance with these legal requirements when processing data in the cloud. Consequently, the questions arise of when IT outsourcing to clouds is lawful, which legal requirements apply to data processing in clouds, and how cloud providers can support their customers in achieving legal compliance.
In this thesis, these questions are answered by a legal analysis identifying the legal requirements and a technical analysis identifying how these requirements can be addressed in the context of cloud computing. Further, an information flow analysis is performed, resulting in a system-theoretical model that describes information flow control in clouds based on the security classification of virtual and hardware resources. A proof-of-concept implementation based on the OpenStack open-source cloud platform shows that information flow control can be implemented as part of cloud management and that legal compliance can be monitored and reported based on the actual assignment of virtual resources to hardware resources. Thereby, cloud providers are able to provide cloud customers with cloud resources that are automatically assigned to hardware resources complying with the legal requirements of the cloud customers. This in turn empowers cloud customers to utilise cloud resources according to their legal requirements and to keep control of managing the legal compliance of their data processing in clouds.
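To make the placement idea concrete, here is a minimal sketch in Python of compliance-aware assignment: hosts carry compliance labels, requests carry required labels, and a virtual resource is only placed on hardware whose labels cover its requirements. All class and label names are illustrative assumptions, not the thesis's actual OpenStack integration.

```python
# Minimal sketch of compliance-aware placement (illustrative names):
# a virtual resource may only be assigned to hardware whose compliance
# labels satisfy the customer's legal requirements.
from dataclasses import dataclass

@dataclass(frozen=True)
class Host:
    name: str
    labels: frozenset      # e.g. {"jurisdiction:EU", "cert:ISO27001"}

@dataclass(frozen=True)
class VmRequest:
    name: str
    required: frozenset    # labels the target host must provide

def compliant_hosts(request, hosts):
    """Return all hosts whose labels cover every required label."""
    return [h for h in hosts if request.required <= h.labels]

hosts = [
    Host("host-eu-1", frozenset({"jurisdiction:EU", "cert:ISO27001"})),
    Host("host-us-1", frozenset({"jurisdiction:US"})),
]
vm = VmRequest("db-vm", frozenset({"jurisdiction:EU"}))
print([h.name for h in compliant_hosts(vm, hosts)])  # ['host-eu-1']
```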
Top-k Semantic Caching
The subject of this thesis is the intelligent caching of top-k queries in an environment with high latency and low throughput. In such an environment, caching can be used to reduce network traffic and improve response time. Slow database connections of mobile devices and connections to offshored databases are practical use cases.
A semantic cache is a query-based cache that stores query results and maintains their semantic description. It reuses partial matches of previous query results. Each query that is processed by the semantic cache is split into two disjoint parts: one that can be completely answered with tuples of the cache (the probe query), and another that requires tuples to be transferred from the server (the remainder query).
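To illustrate the split, the following schematic sketch uses 1-D half-open intervals as stand-ins for query predicates; it illustrates the probe/remainder idea only and is not IQCache's actual algorithm.

```python
# Schematic probe/remainder split against one cached segment, with
# 1-D half-open intervals standing in for query predicates.

def split_query(query, cached):
    """Split `query` into the part answerable from the cache (probe)
    and the parts that must be fetched from the server (remainder)."""
    qlo, qhi = query
    clo, chi = cached
    lo, hi = max(qlo, clo), min(qhi, chi)
    probe = (lo, hi) if lo < hi else None
    remainder = []
    if qlo < min(qhi, clo):                    # part below the cached range
        remainder.append((qlo, min(qhi, clo)))
    if max(qlo, chi) < qhi:                    # part above the cached range
        remainder.append((max(qlo, chi), qhi))
    return probe, remainder

# Query for keys in [30, 90) against a cache holding [50, 100):
print(split_query((30, 90), (50, 100)))  # ((50, 90), [(30, 50)])
```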
Existing semantic caches do not support top-k queries, i.e., ordered and limited queries. In this thesis, we present an innovative semantic cache that naturally supports top-k queries. The support of top-k queries in a semantic cache has considerable effects on cache elements, on operations on cache elements (creation, difference, intersection, and union), and on query answering. Hence, we introduce new techniques for cache management and query processing. They enable the semantic cache to become a true top-k semantic cache.
In addition, we have developed a new algorithm that estimates lower bounds for the results of sorted queries using multidimensional histograms. Using this algorithm, our top-k semantic cache is able to pipeline partial results of top-k queries, which can significantly increase query execution performance.
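The pipelining idea can be sketched as follows: if a histogram over the sort key shows that the server-side remainder cannot contain any tuple below some bound, the cached tuples below that bound can be emitted before the remainder arrives. The sketch below is a hypothetical toy version of this idea, not the thesis's actual estimation algorithm.

```python
# Toy histogram-based pipelining for an ascending top-k query.

def remainder_lower_bound(histogram, remainder_pred):
    """Smallest bucket lower edge that may hold remainder tuples.
    `histogram` is a list of (lo, hi, count) buckets over the sort key;
    `remainder_pred` tests whether a bucket can intersect the remainder."""
    edges = [lo for (lo, hi, count) in histogram
             if count > 0 and remainder_pred(lo, hi)]
    return min(edges, default=float("inf"))

def pipeline(cached_sorted, histogram, remainder_pred):
    bound = remainder_lower_bound(histogram, remainder_pred)
    for row in cached_sorted:   # cached rows, sorted by the sort key
        if row >= bound:        # could be overtaken by a remainder tuple
            break
        yield row               # safe to emit before the remainder arrives

hist = [(0, 50, 0), (50, 100, 7)]   # remainder tuples only in [50, 100)
print(list(pipeline([10, 20, 60], hist, lambda lo, hi: True)))  # [10, 20]
```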
We have implemented a prototype of a top-k semantic cache called IQCache (Intelligent Query Cache). An extensive and thorough evaluation with various benchmarks using our prototype demonstrates the applicability and performance of top-k semantic caching in practice. The experiments show that the top-k semantic cache invariably outperforms simple hash-based caching strategies and scales very well.
Effective Approaches to Abstraction Refinement for Automatic Software Verification
This thesis presents various techniques that aim at enabling more effective and more efficient approaches to automatic software verification. After a brief motivation of why automatic software verification is becoming ever more relevant, we detail the formalism used in this thesis and the concepts it is built on.
We then describe the design and implementation of the value analysis, an analysis for automatic software verification that tracks state information concretely. From a thorough evaluation based on well over 4,000 verification tasks from the latest edition of the International Competition on Software Verification (SV-COMP), we learn that this plain value analysis leads to an efficient verification process for many verification tasks, but at the same time fails to solve other verification tasks due to state-space explosion. From this insight we infer that some form of abstraction technique must be added to the value analysis in order to also allow the successful verification of large and complex verification tasks.
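As an illustration of what "tracking state information concretely" means, the following is a minimal sketch of a value-analysis transfer function over simple statements; it is a toy, not the implementation evaluated in the thesis, and it uses Python's eval purely for brevity.

```python
# Minimal sketch of a value analysis over straight-line statements.
# The abstract state maps each tracked variable to a concrete value.

def transfer(state, stmt):
    """Apply one statement to an abstract state (dict: var -> int).
    Returns the successor state, or None if an assume is infeasible."""
    kind = stmt[0]
    if kind == "assign":                  # ("assign", var, expr)
        _, var, expr = stmt
        new = dict(state)
        new[var] = eval(expr, {}, state)  # evaluate using known values
        return new
    if kind == "assume":                  # ("assume", condition)
        _, cond = stmt
        return state if eval(cond, {}, state) else None
    raise ValueError(stmt)

state = {}
for s in [("assign", "x", "1"), ("assign", "y", "x + 2"),
          ("assume", "y == 3")]:
    state = transfer(state, s)
print(state)   # {'x': 1, 'y': 3}
```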
As a solution, we propose to incorporate counterexample-guided abstraction refinement (CEGAR) and interpolation into the value domain. To this end, we design a novel interpolation procedure that extracts interpolants for the value domain from infeasible counterexamples, allowing us to form a precision strong enough to exclude these infeasible counterexamples and to make progress in the CEGAR loop. We then describe several optimizations and extensions to these concepts, such that the value analysis with CEGAR becomes competitive for automatic software verification.
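The overall loop can be sketched as follows; the structure is standard CEGAR, while the thesis's actual contribution, the interpolation procedure for the value domain, appears only as a stand-in parameter here.

```python
# Schematic CEGAR loop (structure only): the analysis starts with an
# empty precision and, for every infeasible counterexample, extracts
# interpolants that tell it what to track additionally. `model_check`,
# `is_feasible`, and `interpolate` are stand-ins for the real components.

def cegar(model_check, is_feasible, interpolate):
    precision = set()                 # e.g. the set of tracked variables
    while True:
        cex = model_check(precision)  # None means: program proved safe
        if cex is None:
            return "TRUE"
        if is_feasible(cex):
            return "FALSE"            # counterexample is real: bug found
        new_facts = interpolate(cex)  # exclude this spurious path
        assert not new_facts <= precision, "refinement must make progress"
        precision |= new_facts

# Toy run: tracking {"x"} suffices to prove the (hypothetical) task safe.
print(cegar(lambda p: None if "x" in p else ["spurious path"],
            lambda cex: False,
            lambda cex: {"x"}))       # -> TRUE
```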
As the next step, we combine the value analysis with CEGAR with a predicate analysis to obtain a more precise and efficient composite analysis based on CEGAR. This composite analysis is indeed on a par with the world's leading software verification tools, as witnessed by the results of SV-COMP'13, where this approach achieved second place in the overall ranking.
Having available competitive CEGAR-based analyses for the value domain, the predicate domain, and their combination, we then turn our attention to techniques aimed at making all these CEGAR-based approaches more successful. Our first novel idea in this regard is based on the concept of infeasible sliced prefixes, which allow the computation of different precisions from a single infeasible counterexample. This adds choice to the CEGAR loop, whereas without this enhancement no choice of a specific precision, i.e., a specific refinement, is possible. In our evaluation we show, for both the value analysis and the predicate analysis, that choosing different infeasible sliced prefixes during the refinement step leads to major differences in verification effectiveness and verification efficiency.
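A toy rendition of the idea: a path is reduced to its assume operations, and whenever the collected assumes become contradictory, one infeasible sliced prefix is emitted and the clashing assume is dropped so that further slices can be found. The equality constraints below are purely illustrative; the thesis's actual algorithm works on program paths with full assume semantics.

```python
# Toy extraction of infeasible sliced prefixes. A path is a list of
# assume operations, here simple equalities (var, value); a set of
# assumes is contradictory as soon as one variable is constrained to
# two different values.

def sliced_prefixes(path):
    """Yield infeasible sliced prefixes of an infeasible path: each is a
    sub-sequence of assumes that is already contradictory on its own."""
    prefixes, kept = [], []
    for var, val in path:
        clash = next(((v, x) for v, x in kept if v == var and x != val),
                     None)
        if clash is not None:
            prefixes.append(list(kept) + [(var, val)])  # contradictory slice
            kept.remove(clash)      # continue without the clashing assume
        kept.append((var, val))
    return prefixes

# Path x==0; y==1; x==1; y==2 yields two different infeasible slices,
# one contradicting x and one contradicting y:
for p in sliced_prefixes([("x", 0), ("y", 1), ("x", 1), ("y", 2)]):
    print(p)
```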
Extending the concept of infeasible sliced prefixes, we define several heuristics in order to precisely select a single refinement from a set of possible refinements. We make this new concept, which we refer to as guided refinement selection, available to both the value and the predicate analysis, and in a large-scale evaluation we try to answer the question of which selection technique leads to well-suited abstractions and thus to a more effective verification process. Additionally, we present the idea of inter-analysis refinement selection, where the refinement component of a composite analysis may decide which of its component analyses is best to be refined, and in yet another evaluation we highlight the positive effects of this technique.
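Schematically, guided refinement selection scores the candidate precisions and picks the best one; the two heuristics below (prefer few tracked variables, avoid likely loop counters by naming convention) are illustrative stand-ins, not the heuristics defined in the thesis.

```python
# Sketch of guided refinement selection: given several candidate
# precisions, one per infeasible sliced prefix, pick the one a
# heuristic deems best suited. Heuristics here are illustrative only.

def select_refinement(candidates, heuristic="fewest_variables"):
    """candidates: list of precisions, each a set of tracked variables."""
    scores = {
        # prefer precisions that track few variables (cheaper analysis)
        "fewest_variables": lambda prec: len(prec),
        # penalise variables that look like loop counters (toy convention)
        "avoid_loop_counters":
            lambda prec: sum(v.startswith("i") for v in prec),
    }
    return min(candidates, key=scores[heuristic])

candidates = [{"i", "n"}, {"flag"}, {"x", "y", "z"}]
print(select_refinement(candidates))                         # {'flag'}
print(select_refinement(candidates, "avoid_loop_counters"))  # {'flag'}
```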
Finally, we present the results of SV-COMP'16, where the verifier we contributed, which is based on the concepts and ideas presented in this thesis, achieved first place in the category DeviceDriversLinux64.
THE ROLE AND CAPABILITY OF THE TNI IN MONITORING BORDER CROSSINGS: A CASE STUDY OF THE CAPABILITY OF KOMPI TEMPUR I YONIF 631/ANTANG ON SEBATIK ISLAND, 2010-2011
This research aims to reveal the role and capability of the TNI in monitoring border crossings. The case study was conducted on Sebatik Island, where the border situation comprises complex vulnerabilities; nevertheless, the TNI border security task force in charge succeeded in foiling only a small number of illegal activities. The research was conducted using qualitative methods based on the concepts of military capability and border security management, applied to documents and interviews with selected figures. The results of this study show that a force posture organised in accordance with border security operational standards can achieve optimum capability in monitoring border crossings if its main role is exercised in activities that adhere to the determined strategy and are supported by up-to-date intelligence information. Additional roles related to the main tasks of the strategy may emerge as effects of contributing factors, whereas where contributing partners exist, a formal framework is needed to synergise strategies among parties and ensure comprehensive capabilities. Keywords: role of TNI, border security, border crossing, capability, monitoring
Trigger-Based Language Model Adaptation for Automatic Transcription of Panel Discussions
We present a novel trigger-based language model adaptation method oriented to the transcription of meetings. In meetings, the topic is focused and consistent throughout the whole session; therefore, keywords can be correlated over long distances. The trigger-based language model is designed to capture such long-distance dependencies, but it is typically constructed from a large corpus, which is usually too general to derive task-dependent trigger pairs. In the proposed method, we make use of the initial speech recognition results to extract task-dependent trigger pairs and to estimate their statistics. Moreover, we introduce a back-off scheme that also exploits the statistics estimated from a large corpus. The proposed model reduced the test-set perplexity considerably more than the typical trigger-based language model constructed from a large corpus, and achieved a remarkable perplexity reduction of 44% over the baseline when combined with an adapted trigram language model. In addition, a reduction in word error rate was obtained when using the proposed language model to rescore word graphs. Keywords: speech recognition, language model, trigger-based language model, TF/IDF
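A hypothetical sketch of the trigger mechanism: trigger pairs (a, b) raise the probability of word b once a has occurred earlier in the session, here simply interpolated with a base model as a stand-in for the paper's back-off scheme. All names and the interpolation weight are assumptions for illustration.

```python
# Illustrative trigger-based language model (not the paper's exact
# formulation): long-distance history activates triggered words.
from collections import defaultdict

class TriggerLM:
    def __init__(self, base_prob, trigger_pairs, lam=0.2):
        # base_prob(word, context) -> probability, e.g. from a trigram model
        self.base_prob = base_prob
        self.triggers = defaultdict(set)  # trigger word -> triggered words
        for a, b in trigger_pairs:
            self.triggers[a].add(b)
        self.lam = lam                    # interpolation weight

    def prob(self, word, context, history):
        """Interpolate the base probability with a trigger component
        activated by earlier words in the session (`history`)."""
        triggered = set()
        for h in history:
            triggered |= self.triggers.get(h, set())
        p_trig = 1.0 / len(triggered) if word in triggered else 0.0
        return (1 - self.lam) * self.base_prob(word, context) \
            + self.lam * p_trig

uniform = lambda w, ctx: 0.1              # toy base model, |V| = 10
lm = TriggerLM(uniform, [("budget", "deficit")])
print(lm.prob("deficit", (), ["the", "budget"]))  # 0.28, boosted vs. 0.1
```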
Improved space bounds for strongly competitive randomized paging algorithms
Paging is one of the prominent problems in the field of on-line algorithms. While in the deterministic setting there exist simple and efficient strongly competitive algorithms, in the randomized setting a tradeoff between competitiveness and memory is still not settled. Bein et al. [4] conjectured that there exist strongly competitive randomized paging algorithms using o(k) bookmarks, i.e., pages not in cache that the algorithm keeps track of. Also in [4], the first algorithm using O(k) bookmarks (2k, more precisely), Equitable2, was introduced, proving a conjecture in [7] in the affirmative.
We prove tighter bounds for Equitable2, showing that it requires fewer than k bookmarks, more precisely ≈ 0.62k. We then give a lower bound for Equitable2, showing that it cannot both be strongly competitive and use o(k) bookmarks. Nonetheless, we show that it can trade competitiveness for space: if its competitive ratio is allowed to be H_k + t, then it requires k/(1 + t) bookmarks.
Our main result proves the conjecture that there exist strongly competitive paging algorithms using o(k) bookmarks. We propose an algorithm, denoted Partition2, which is a variant of the Partition algorithm by McGeoch and Sleator [13]. While classical Partition is unbounded in its space requirements, Partition2 uses Θ(k/log k) bookmarks. Furthermore, we show that this result is asymptotically tight when the forgiveness steps are deterministic.
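To illustrate what a bookmark is, the following toy simulation keeps, alongside a cache of size k, a bounded set of recently evicted pages. The eviction rule is arbitrary; this is in no way Equitable2 or Partition2, just an illustration of the bookkeeping.

```python
# Toy illustration of "bookmarks" in paging: the algorithm serves
# requests with a cache of size k while also remembering up to m
# evicted pages it still keeps track of.
import random
from collections import OrderedDict

class BookmarkedCache:
    def __init__(self, k, m):
        self.k, self.m = k, m
        self.cache = set()
        self.bookmarks = OrderedDict()  # insertion-ordered evicted pages

    def request(self, page):
        if page in self.cache:
            return False                    # hit, no fault
        self.bookmarks.pop(page, None)      # page leaves the bookmarks
        if len(self.cache) >= self.k:
            victim = random.choice(sorted(self.cache))  # toy eviction rule
            self.cache.remove(victim)
            self.bookmarks[victim] = True   # remember the evicted page
            while len(self.bookmarks) > self.m:
                self.bookmarks.popitem(last=False)  # bound the bookkeeping
        self.cache.add(page)
        return True                         # fault

cache = BookmarkedCache(k=3, m=2)
faults = sum(cache.request(p) for p in [1, 2, 3, 4, 1, 2])
print(faults)   # 4 to 6, depending on the random evictions
```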
Learning to Program Mobile Robots in the ROS Summer School Series
The main objective of our ROS Summer School series is to introduce MA-level students to programming mobile robots with the Robot Operating System (ROS). ROS is a robot middleware that is used by many research institutions world-wide. Therefore, many state-of-the-art algorithms of mobile robotics are available in ROS and can be deployed very easily. As a basic robot platform we deploy a 1/10-scale RC car that is equipped with an Arduino micro-controller to control the servo motors and an embedded PC that runs ROS. In two weeks, participants learn the basics of mobile robotics hands-on. We describe our teaching concepts and our curriculum, and report on the learning success of our students.
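As a flavour of the kind of first exercise such a course typically starts with, here is a minimal ROS 1 node in Python that publishes velocity commands; the topic name and message type follow common ROS conventions and are assumptions here, not the summer school's actual setup.

```python
#!/usr/bin/env python
# Minimal rospy node: publish a constant velocity command at 10 Hz.
# Topic name and message type are illustrative conventions.
import rospy
from geometry_msgs.msg import Twist

def drive():
    rospy.init_node("simple_driver")
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)        # publish at 10 Hz
    cmd = Twist()
    cmd.linear.x = 0.5           # drive forward at 0.5 m/s
    cmd.angular.z = 0.0          # no steering
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    drive()
```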
