Layer Partition-based Matching Algorithm of DDM
High Level Architecture (HLA) is an architecture for the reuse and interoperation of simulations. In the HLA paradigm, the Runtime Infrastructure (RTI) provides a set of services. Among them, Data Distribution Management (DDM) aims to control and limit the data exchanged between federates during a federation execution: each federate may inform the RTI of its intention to publish some data, or it may subscribe to receive a subset of the published data. DDM services reduce the transmission and reception of irrelevant data and thereby the communication over the network. These services rely on computing the intersections between "update" and "subscription" regions, and this intersection computation can incur high overhead. Several main DDM filtering algorithms currently exist. This paper proposes a layer partition-based matching algorithm for DDM in HLA-based large-scale distributed simulations. The new algorithm chooses a dynamic pivot based on the distribution of regions in the routing space. The existing binary partition-based algorithm follows a divide-and-conquer approach and always chooses the midpoint of the routing space as the pivot point; this keeps computational overhead low, since regions in different partitions need not be compared. The proposed algorithm first calculates the region distribution. It then partitions the regions around a pivot chosen from that distribution and defines a matching area that entirely covers all regions that must be matched against regions at the pivot point. The proposed algorithm thus yields a more definite matching area between update and subscription regions during the matching process, guarantees low computational overhead for matching based on the degree of overlap between regions, and reduces irrelevant messages among federates.
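To make the contrast concrete, here is a minimal 1-D sketch of partition-based matching against the brute-force baseline. All names are hypothetical; regions are simplified to extents (lo, hi) along one axis of the routing space, and the midpoint pivot corresponds to the binary partition-based algorithm described above, while a distribution-aware variant like the proposed one would derive the pivot from the region layout instead.

```python
# Sketch of divide-and-conquer DDM matching in one dimension.
# A region is an extent (lo, hi) along one axis of the routing space.

def overlaps(a, b):
    """Two extents intersect iff neither ends before the other begins."""
    return a[0] <= b[1] and b[0] <= a[1]

def match_bruteforce(updates, subs):
    """Baseline: compare every update region with every subscription region."""
    return {(u, s) for u in updates for s in subs if overlaps(u, s)}

def split(regions, pivot):
    """Classify regions as entirely left of, entirely right of,
    or straddling the pivot."""
    left, right, mid = [], [], []
    for r in regions:
        if r[1] < pivot:
            left.append(r)
        elif r[0] > pivot:
            right.append(r)
        else:
            mid.append(r)
    return left, right, mid

def match_partitioned(updates, subs, lo, hi, cutoff=4):
    """Midpoint-pivot (binary) partition matching: regions in different
    halves are never compared, which is where the savings come from."""
    if len(updates) <= cutoff or len(subs) <= cutoff or hi - lo < 1e-9:
        return match_bruteforce(updates, subs)
    pivot = (lo + hi) / 2  # binary variant: midpoint pivot; a
                           # distribution-aware pivot would go here instead
    u_left, u_right, u_mid = split(updates, pivot)
    s_left, s_right, s_mid = split(subs, pivot)
    pairs = match_partitioned(u_left, s_left, lo, pivot, cutoff)
    pairs |= match_partitioned(u_right, s_right, pivot, hi, cutoff)
    # regions straddling the pivot may overlap anything on either side
    pairs |= match_bruteforce(u_mid, subs)
    pairs |= match_bruteforce(updates, s_mid)
    return pairs
```

The recursive calls only ever compare regions inside the same half of the routing space, so the cross-half comparisons the brute-force loop would make are skipped entirely; only regions straddling the pivot still need a full scan.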
Design and implementation of a testbed for data distribution management
Data Distribution Management (DDM) is a core part of the High Level Architecture standard; its goal is to optimize the resources that simulation environments use to exchange data. DDM must filter and match the information generated during a simulation so that each federate (a simulation entity) receives only the information it needs. This matching must be done quickly and accurately in order to achieve good performance and avoid the transmission of irrelevant data; otherwise network resources may saturate quickly.
The main topic of this thesis is the implementation of an impartial ("super partes") DDM testbed that evaluates the quality of DDM approaches of all kinds: it supports both region-based and grid-based approaches, and it can accommodate methods not yet devised. It ranks approaches by three factors: execution time, memory usage, and distance from the optimal solution. A prearranged set of instances is already available, and we also allow the creation of instances with user-provided parameters.
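A harness along these lines could compute the three ranking factors. This is only a sketch of the idea, not the testbed's actual interface; all names below are hypothetical, and exhaustive matching stands in for the optimal solution.

```python
# Hypothetical sketch of scoring one DDM matching approach on the three
# factors the testbed uses: execution time, memory, distance from optimum.
import time
import tracemalloc

def overlaps(a, b):
    # 1-D extents (lo, hi) intersect iff neither ends before the other starts
    return a[0] <= b[1] and b[0] <= a[1]

def brute_force(updates, subs):
    # Exhaustive matching doubles as the "optimal solution" reference
    return {(u, s) for u in updates for s in subs if overlaps(u, s)}

def score(approach, updates, subs):
    optimal = brute_force(updates, subs)
    tracemalloc.start()
    t0 = time.perf_counter()
    pairs = approach(updates, subs)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    # symmetric difference counts both missed and spurious matches
    return {"time_s": elapsed,
            "peak_bytes": peak,
            "distance": len(pairs ^ optimal)}
```

Scoring `brute_force` against itself naturally gives distance 0; approximate approaches (e.g. coarse grid-based filtering) trade a possibly nonzero distance for lower time and memory.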
The thesis is structured as follows. We start by introducing HLA and DDM and describing in detail what they do. Then, in the first chapter, we describe the state of the art, providing an overview of the best-known resolution approaches and pseudocode for the most interesting ones. The third chapter describes how the testbed we implemented is structured. In the fourth chapter we present and compare the results obtained from executing the four approaches we implemented.
The result of the work described in this thesis can be downloaded from SourceForge at the following link: https://sourceforge.net/projects/ddmtestbed/. It is licensed under the GNU General Public License version 3.0 (GPLv3).
Parallel computing in information retrieval - An updated review
The progress of parallel computing in Information Retrieval (IR) is reviewed. In particular, we stress the importance of the motivation for using parallel computing in text retrieval. We analyse parallel IR systems using a classification due to Rasmussen [1] and describe some parallel IR systems. We give a description of the retrieval models used in parallel information processing, and we describe areas where we believe further research is needed.
Scalable Approach to Uncertainty Quantification and Robust Design of Interconnected Dynamical Systems
Development of robust dynamical systems and networks, such as autonomous aircraft systems capable of accomplishing complex missions, faces challenges due to dynamically evolving uncertainties arising from model uncertainty, the necessity to operate in hostile, cluttered urban environments, and the distributed and dynamic nature of communication and computation resources. Model-based robust design is difficult because of the complexity of the hybrid dynamic models, which include continuous vehicle dynamics and discrete models of computation and communication, and because of the size of the problem. We overview recent advances in methodology and tools to model, analyze, and design robust autonomous aerospace systems operating in uncertain environments, with emphasis on efficient uncertainty quantification and robust design, using case studies of missions that include model-based target tracking and search, and trajectory planning in an uncertain urban environment. To show that the methodology is generally applicable to uncertain dynamical systems, we also show examples of applying the new methods to efficient uncertainty quantification of energy usage in buildings, and to stability assessment of interconnected power networks.
FPGA-based Query Acceleration for Non-relational Databases
Database management systems are an integral part of today's everyday life. Trends like smart applications, the internet of things, and business and social networks require applications to deal efficiently with data in various data models close to the underlying domain. Non-relational database systems therefore provide a wide variety of database models, such as graphs and documents. However, current non-relational database systems face performance challenges due to the end of Dennard scaling and, with it, of CPU performance scaling. Meanwhile, FPGAs have gained traction as accelerators for data management.
Our goal is to tackle the performance challenges of non-relational database systems with FPGA acceleration and, at the same time, to address the design challenges of FPGA acceleration itself. We therefore split this thesis into two main lines of work: graph processing and flexible data processing.
Because benchmarking practices for graph processing accelerators are lacking, we propose GraphSim, which is able to reproduce the runtimes of these accelerators based on a memory access model of each approach. Through this simulation environment, we extract three performance-critical accelerator properties: asynchronous graph processing, a compressed graph data structure, and multi-channel memory. Since these properties had not previously been combined in one system, we propose GraphScale, the first scalable, asynchronous graph processing accelerator working on a compressed graph; it outperforms all state-of-the-art graph processing accelerators.
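The compressed sparse row (CSR) layout is the standard kind of compressed graph structure such accelerators stream from memory, storing each vertex's neighbors contiguously so a traversal becomes sequential bursts. The sketch below is a generic illustration of that layout, not GraphScale's actual format:

```python
# Generic CSR (compressed sparse row) graph layout: an offsets array
# indexing into a flat neighbors array, so each vertex's adjacency list
# is one contiguous (burst-friendly) slice of memory.

def to_csr(num_vertices, edges):
    """Build offsets and neighbors arrays from a directed edge list."""
    offsets = [0] * (num_vertices + 1)
    for src, _ in edges:           # count out-degree of each vertex
        offsets[src + 1] += 1
    for v in range(num_vertices):  # prefix sum turns counts into offsets
        offsets[v + 1] += offsets[v]
    neighbors = [0] * len(edges)
    cursor = offsets[:]            # next free slot per vertex row
    for src, dst in edges:
        neighbors[cursor[src]] = dst
        cursor[src] += 1
    return offsets, neighbors

def out_neighbors(offsets, neighbors, v):
    """One contiguous slice per vertex: offsets[v] .. offsets[v+1]."""
    return neighbors[offsets[v]:offsets[v + 1]]
```

Compared with an adjacency matrix or pointer-chasing lists, this layout keeps memory traffic sequential and compact, which is exactly what multi-channel memory controllers on an FPGA exploit.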
Focusing on accelerator flexibility, we propose PipeJSON, the first FPGA-based JSON parser for arbitrary JSON documents. PipeJSON achieves parsing at line speed, outperforming the fastest vectorized CPU parsers. Lastly, we propose the subgraph query processing accelerator GraphMatch, which outperforms state-of-the-art CPU systems for subgraph query processing and is able to flexibly switch queries at runtime in a matter of clock cycles.