A review of parallel computing for large-scale remote sensing image mosaicking
Interest in image mosaicking has been spurred by a wide variety of research and management needs. For large-scale applications, however, remote sensing image mosaicking usually requires significant computational capability. Several studies have attempted to apply parallel computing to image mosaicking algorithms to speed up the calculation process. The state of the art of this field has not yet been summarized, which is essential for a better understanding of, and further research on, large-scale image mosaicking parallelism. This paper provides a perspective on the current state of image mosaicking parallelization for large-scale applications. We first introduce the motivation for parallelizing image mosaicking at large scale and analyze the difficulties it raises, such as scheduling a huge number of dependent tasks, programming a multi-step procedure, and handling frequent I/O operations. We then summarize existing studies of parallel computing in large-scale image mosaicking with respect to problem decomposition and parallel strategy, parallel architecture, task scheduling strategy, and implementation. Finally, we address the key problems and potential future research directions for image mosaicking.
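The problem decomposition the abstract mentions is commonly tile-based: the output grid is split into independent rectangles that a worker pool can process concurrently. A minimal sketch, assuming a hypothetical `process_tile` stand-in for the real per-tile work (resampling, blending, writing):

```python
# Tile-based decomposition for parallel mosaicking: split the output grid
# into tiles and hand them to a multiprocessing pool. process_tile is a
# placeholder for real per-tile work; all sizes here are made up.
from multiprocessing import Pool

TILE = 256  # tile edge length in pixels (assumption)

def tiles(width, height, tile=TILE):
    """Yield (x, y, w, h) rectangles covering a width x height grid."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y, min(tile, width - x), min(tile, height - y))

def process_tile(rect):
    """Placeholder for per-tile work; returns the pixel count written."""
    x, y, w, h = rect
    return (rect, w * h)

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(process_tile, list(tiles(1024, 768)))
    print(sum(n for _, n in results))  # 786432 = 1024 * 768, every pixel covered once
```

This sketch sidesteps the scheduling difficulty the abstract highlights: real mosaicking tiles are often not independent (overlap blending creates dependent tasks), which is exactly why task scheduling becomes hard at scale.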
Enhancing speed and scalability of the ParFlow simulation code
Regional hydrology studies are often supported by high resolution simulations
of subsurface flow that require expensive and extensive computations. Efficient
usage of the latest high performance parallel computing systems becomes a
necessity. The simulation software ParFlow has been demonstrated to meet this
requirement and shown to have excellent solver scalability for up to 16,384
processes. In the present work we show that the code requires further
enhancements in order to fully take advantage of current petascale machines. We
identify ParFlow's way of parallelization of the computational mesh as a
central bottleneck. We propose to reorganize this subsystem using fast mesh
partition algorithms provided by the parallel adaptive mesh refinement library
p4est. We realize this in a minimally invasive manner by modifying selected
parts of the code to reinterpret the existing mesh data structures. We evaluate
the scaling performance of the modified version of ParFlow, demonstrating good
weak and strong scaling up to 458k cores of the Juqueen supercomputer, and test
an example application at large scale.
Comment: The final publication is available at link.springer.co
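The fast mesh partitioning that p4est provides is based on ordering cells along a space-filling curve and cutting the curve into contiguous, equally sized chunks per process. A minimal sketch of the idea, using a 2D Z-order (Morton) curve on a uniform grid (this is an illustration of the principle, not p4est's actual implementation):

```python
# Space-filling-curve partitioning sketch: order grid cells by Z-order
# (Morton) index, then assign equal contiguous chunks of the curve to
# each rank. Chunks are contiguous on the curve, hence spatially compact.

def morton(x, y, bits=8):
    """Interleave the bits of (x, y) into a Z-order index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def partition(nx, ny, nprocs):
    """Map each cell of an nx x ny grid to a rank in 0..nprocs-1."""
    cells = sorted(((x, y) for x in range(nx) for y in range(ny)),
                   key=lambda c: morton(*c))
    chunk = -(-len(cells) // nprocs)  # ceiling division
    return {c: i // chunk for i, c in enumerate(cells)}

ranks = partition(4, 4, 4)
print(sorted(set(ranks.values())))  # [0, 1, 2, 3]: four balanced ranks
```

Because sorting by Morton index costs O(n log n) with no global graph analysis, repartitioning stays cheap even at large process counts, which is the property the ParFlow modification exploits.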
Analyzing Traffic Problem Model With Graph Theory Algorithms
This paper addresses a practical problem: urban traffic. We investigate its features, attempt to simplify its complexity, and formalize this dynamic system. The main contents cover how to analyze a decision problem with combinatorial methods and graph theory algorithms, and how to optimize a strategy to obtain a feasible solution by employing other principles of computer science.
Comment: 7 pages, 5 figures, Science and Information Conference (SAI), 201
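A staple graph-theory tool for such traffic analyses is single-source shortest paths over a weighted road network. A minimal sketch of Dijkstra's algorithm on an invented network (the intersections and travel times below are hypothetical, not from the paper):

```python
# Dijkstra's shortest-path algorithm on a small road network modeled as
# an adjacency list of (neighbor, travel time in minutes) edges.
import heapq

def dijkstra(graph, source):
    """Return the shortest travel time from source to each reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter route
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical intersections A-D with directed travel times.
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(roads, "A")["D"])  # 8, via A -> C -> B -> D
```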
High-Throughput Computing on High-Performance Platforms: A Case Study
The computing systems used by LHC experiments have historically consisted of
the federation of hundreds to thousands of distributed resources, ranging from
small to mid-size. In spite of the impressive scale of the existing
distributed computing solutions, the federation of small to mid-size resources
will be insufficient to meet projected future demands. This paper is a case
study of how the ATLAS experiment has embraced Titan---a DOE leadership
facility---in conjunction with traditional distributed high-throughput
computing to reach sustained production scales of approximately 52M core-hours
a year. The three main contributions of this paper are: (i) a critical
evaluation of design and operational considerations to support the sustained,
scalable and production usage of Titan; (ii) a preliminary characterization of
a next generation executor for PanDA to support new workloads and advanced
execution modes; and (iii) early lessons on how current and future
experimental and observational systems can be integrated with production
supercomputers and other platforms in a general and extensible manner.
Integration of Legacy Appliances into Home Energy Management Systems
The progressive installation of renewable energy sources requires the
coordination of energy consuming devices. At consumer level, this coordination
can be done by a home energy management system (HEMS). Interoperability issues
need to be solved among smart appliances as well as between smart and
non-smart, i.e., legacy devices. We expect current standardization efforts to
soon provide technologies to design smart appliances in order to cope with the
current interoperability issues. Nevertheless, common electrical devices affect
energy consumption significantly and therefore deserve consideration within
energy management applications. This paper discusses the integration of smart
and legacy devices into a generic system architecture and, subsequently,
elaborates on the requirements and components necessary to realize such an
architecture, including an application of load detection for identifying
running loads and integrating them into existing HEM systems. We assess the
feasibility of this approach with a case study based on a measurement campaign
in real households. We show how the information on detected appliances can be
extracted to create device profiles allowing for their integration and
management within a HEMS.
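Event-based load detection of the kind the abstract describes can be sketched as detecting step changes in the aggregate power signal and matching each step to a known appliance signature. All appliance names, wattages, and thresholds below are invented for illustration:

```python
# Event-based load detection sketch: find step changes in an aggregate
# power trace, then match each positive step to the nearest signature.
# Signatures, tolerance, and readings are all invented values.

SIGNATURES = {"fridge": 120, "kettle": 2000, "washer": 500}  # watts (assumption)
TOLERANCE = 50  # maximum mismatch in watts to accept an identification

def detect_events(power, threshold=100):
    """Return (index, delta) for each step change at or above threshold."""
    return [(i, power[i] - power[i - 1])
            for i in range(1, len(power))
            if abs(power[i] - power[i - 1]) >= threshold]

def identify(delta):
    """Match a power step to the closest signature, or None if too far off."""
    name, watts = min(SIGNATURES.items(), key=lambda s: abs(s[1] - delta))
    return name if abs(watts - delta) <= TOLERANCE else None

# Aggregate readings: baseline, fridge switches on, kettle on, kettle off.
readings = [40, 40, 160, 160, 2160, 2160, 160]
switch_on = [(i, identify(d)) for i, d in detect_events(readings) if d > 0]
print(switch_on)  # [(2, 'fridge'), (4, 'kettle')]
```

Real non-intrusive load monitoring must additionally cope with overlapping events, multi-state appliances, and measurement noise, which is where the measurement campaign in the case study comes in.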