7,298 research outputs found
Monitoring Cluster on Online Compiler with Ganglia
Ganglia is an open-source monitoring system for high performance computing (HPC) that collects the status of both the whole cluster and every node and reports it to the user. We use Ganglia to monitor spasi.informatika.lipi.go.id (SPASI), a customized Fedora 10-based cluster, for our cluster online compiler, CLAW (cluster access through web). Our experience with Ganglia shows that it is capable of presenting our cluster's status and allows us to track it.
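As context for how such a monitoring view can be consumed: gmond, Ganglia's monitoring daemon, publishes cluster state as XML, and a report of that shape can be parsed with the standard library. The sample document below, including the host names and metric values, is invented for illustration; only the overall element layout follows Ganglia's format.

```python
import xml.etree.ElementTree as ET

# A fragment in the general shape of gmond's XML report (hosts inside a
# cluster, metrics as NAME/VAL attributes). Sample data is invented.
SAMPLE = """\
<GANGLIA_XML VERSION="3.1.7" SOURCE="gmond">
  <CLUSTER NAME="SPASI" OWNER="unspecified">
    <HOST NAME="node01" REPORTED="0">
      <METRIC NAME="load_one" VAL="0.45" TYPE="float" UNITS=""/>
      <METRIC NAME="mem_free" VAL="1024000" TYPE="uint32" UNITS="KB"/>
    </HOST>
    <HOST NAME="node02" REPORTED="0">
      <METRIC NAME="load_one" VAL="1.20" TYPE="float" UNITS=""/>
    </HOST>
  </CLUSTER>
</GANGLIA_XML>
"""

def host_metric(xml_text, metric):
    """Return {host: value} for one metric across all hosts."""
    root = ET.fromstring(xml_text)
    out = {}
    for host in root.iter("HOST"):
        for m in host.iter("METRIC"):
            if m.get("NAME") == metric:
                out[host.get("NAME")] = float(m.get("VAL"))
    return out

print(host_metric(SAMPLE, "load_one"))  # one load value per node
```

In a live setup the XML would come from the gmond telemetry port rather than a string; the parsing step stays the same.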
Design and Implementation of a Measurement-Based Policy-Driven Resource Management Framework For Converged Networks
This paper presents the design and implementation of a measurement-based QoS
and resource management framework, CNQF (Converged Networks QoS Management
Framework). CNQF is designed to provide unified, scalable QoS control and
resource management through the use of a policy-based network management
paradigm. It achieves this via distributed functional entities that are
deployed to co-ordinate the resources of the transport network through
centralized policy-driven decisions supported by a measurement-based control
architecture. We present the CNQF architecture, the implementation of the prototype,
and validation of various inbuilt QoS control mechanisms using real traffic
flows on a Linux-based experimental test bed. Comment: in ICTACT Journal on Communication Technology: Special Issue on Next Generation Wireless Networks and Applications, June 2011, Volume 2, Issue 2, ISSN: 2229-6948 (Online)
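To make the policy-based management idea concrete, here is a toy sketch of a policy decision step that maps flow measurements to enforcement actions. The metric names, thresholds, and actions are invented for this sketch and are not CNQF's actual interfaces.

```python
# Illustrative only: a toy policy-decision step in the spirit of a
# policy-based QoS framework. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Policy:
    metric: str       # measured quantity, e.g. one-way delay in ms
    threshold: float  # trigger level
    action: str       # enforcement action pushed to the transport network

POLICIES = [
    Policy("delay_ms", 50.0, "promote_to_EF"),
    Policy("loss_pct", 1.0, "reroute_flow"),
]

def decide(measurements):
    """Map a dict of flow measurements to the actions whose policies fire."""
    return [p.action for p in POLICIES
            if measurements.get(p.metric, 0.0) > p.threshold]

print(decide({"delay_ms": 72.0, "loss_pct": 0.2}))  # ['promote_to_EF']
```

The point of the sketch is the separation of concerns: measurements arrive from distributed entities, while the decision logic lives in one centralized, policy-driven component.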
Software tools for conducting bibliometric analysis in science: An up-to-date review
Bibliometrics has become an essential tool for assessing and analyzing the output of scientists, cooperation between
universities, the effect of state-owned science funding on national research and development performance and educational
efficiency, among other applications. Therefore, professionals and scientists need a range of theoretical and practical
tools to measure experimental data. This review aims to provide an up-to-date review of the various tools available
for conducting bibliometric and scientometric analyses, including the sources of data acquisition, performance analysis
and visualization tools. The included tools were divided into three categories: general bibliometric and performance
analysis, science mapping analysis, and libraries; a description of all of them is provided. A comparative analysis of the
database source support, pre-processing capabilities, and analysis and visualization options is also provided
to facilitate understanding. Although there are numerous bibliometric databases from which to obtain data for bibliometric and
scientometric analysis, each has been developed for a different purpose. The number of exportable records ranges between
500 and 50,000, and the coverage of the different science fields is unequal in each database. Among the analyzed
tools, Bibliometrix contains the most extensive set of techniques and is accessible to practitioners through Biblioshiny.
VOSviewer offers excellent visualization and is capable of loading and exporting information from many sources. SciMAT
is the tool with the most powerful pre-processing and export capabilities. In view of the variability of features, users need to
decide on the desired analysis output and choose the option that best fits their aims.
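As a minimal illustration of the kind of computation underlying science-mapping tools, the sketch below builds a keyword co-occurrence count from a handful of invented records; real tools such as VOSviewer or SciMAT operate on records exported from bibliometric databases instead.

```python
from collections import Counter
from itertools import combinations

# Toy records: author keywords per paper (sample data invented).
records = [
    {"bibliometrics", "vosviewer", "science mapping"},
    {"bibliometrics", "scimat"},
    {"science mapping", "vosviewer"},
]

def cooccurrence(records):
    """Count keyword-pair co-occurrences across records -- the raw input
    to co-word (science mapping) analysis."""
    counts = Counter()
    for kws in records:
        for a, b in combinations(sorted(kws), 2):
            counts[(a, b)] += 1
    return counts

pairs = cooccurrence(records)
print(pairs[("science mapping", "vosviewer")])  # 2
```

A matrix like this is what science-mapping tools cluster and lay out as a keyword network.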
ShenZhen transportation system (SZTS): a novel big data benchmark suite
Data analytics is at the core of the supply chain for both products and services in modern economies and societies. Big data workloads, however, are placing unprecedented demands on computing technologies, calling for a deep understanding and characterization of these emerging workloads. In this paper, we propose ShenZhen Transportation System (SZTS), a novel big data Hadoop benchmark suite comprised of real-life transportation analysis applications with real-life input data sets from Shenzhen in China. SZTS uniquely focuses on a specific and real-life application domain whereas other existing Hadoop benchmark suites, such as HiBench and CloudRank-D, consist of generic algorithms with synthetic inputs. We perform a cross-layer workload characterization at the microarchitecture level, the operating system (OS) level, and the job level, revealing unique characteristics of SZTS compared to existing Hadoop benchmarks as well as general-purpose multi-core PARSEC benchmarks. We also study the sensitivity of workload behavior with respect to input data size, and we propose a methodology for identifying representative input data sets
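One simple way a representative-input methodology could work is sketched below: normalize each input's characterization metrics, then pick the medoid, i.e. the input whose metric vector is closest on average to all others. The metric names and values here are invented; SZTS characterizes workloads with cross-layer metrics, and the paper's actual methodology may differ.

```python
import math

# Hypothetical characterization metrics for four input data sets
# (names and numbers invented for illustration).
profiles = {
    "in_1pct":  [0.9, 120.0, 0.30],    # e.g. IPC, MB read, CPU util
    "in_10pct": [1.1, 1200.0, 0.55],
    "in_50pct": [1.2, 6000.0, 0.60],
    "in_full":  [1.2, 12000.0, 0.62],
}

def representative(profiles):
    """Pick the medoid input after min-max normalizing each metric,
    so no single metric's scale dominates the distance."""
    names = list(profiles)
    dims = len(next(iter(profiles.values())))
    lo = [min(profiles[n][d] for n in names) for d in range(dims)]
    hi = [max(profiles[n][d] for n in names) for d in range(dims)]
    norm = {n: [(v - l) / (h - l) if h > l else 0.0
                for v, l, h in zip(profiles[n], lo, hi)] for n in names}
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(names, key=lambda n: sum(dist(norm[n], norm[m]) for m in names))

print(representative(profiles))
```

Under this toy data, a mid-sized input is selected, reflecting the intuition that a representative input should behave like the full data set without its cost.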
LIKWID Monitoring Stack: A flexible framework enabling job specific performance monitoring for the masses
System monitoring is an established tool to measure the utilization and
health of HPC systems. Usually, system monitoring infrastructures make no
connection to job information and do not utilize hardware performance
monitoring (HPM) data. To increase the efficient use of HPC systems, automatic
and continuous performance monitoring of jobs is an essential component. It can
help to identify pathological cases, provides instant performance feedback to
the users, offers initial data to judge the optimization potential of
applications, and helps to build a statistical foundation about application-specific
system usage. The LIKWID monitoring stack is a modular framework built
on top of the LIKWID tools library. It aims at enabling job-specific
performance monitoring using HPM data, system metrics, and application-level
data for small to medium-sized commodity clusters. Moreover, it is designed to
integrate into existing monitoring infrastructures to speed up the change from
pure system monitoring to job-aware monitoring. Comment: 4 pages, 4 figures. Accepted for HPCMASPA 2017, the Workshop on
Monitoring and Analysis for High Performance Computing Systems Plus
Applications, held in conjunction with IEEE Cluster 2017, Honolulu, HI,
September 5, 201
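As an illustration of what job-aware aggregation means, the sketch below rolls per-node samples of a single metric up to job-level statistics. The metric, node names, and values are invented for the sketch; this is not the LIKWID stack's actual code.

```python
from statistics import mean

# Invented per-node time series of one hardware metric for a job,
# e.g. memory bandwidth in GB/s.
samples = {
    "n01": [42.0, 44.5, 43.0],
    "n02": [12.0, 11.5, 13.0],  # straggler node: possible imbalance
}

def job_summary(samples):
    """Aggregate per-node averages into a job-level view, keeping the
    extreme nodes visible so pathological cases stand out."""
    node_avgs = {n: mean(v) for n, v in samples.items()}
    return {
        "avg": mean(node_avgs.values()),
        "min_node": min(node_avgs, key=node_avgs.get),
        "max_node": max(node_avgs, key=node_avgs.get),
    }

s = job_summary(samples)
print(s["min_node"])  # the node worth a closer look
```

Keeping the per-node extremes rather than only the job mean is what makes pathological cases (such as one slow node) identifiable.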
Hacker Combat: A Competitive Sport from Programmatic Dueling & Cyberwarfare
The history of humanhood has included competitive activities of many
different forms. Sports have offered many benefits beyond that of
entertainment. At the time of this article, there exists not a competitive
ecosystem for cyber security beyond that of conventional capture the flag
competitions, and the like. This paper introduces a competitive framework with
a foundation on computer science, and hacking. This proposed competitive
landscape encompasses the ideas underlying information security, software
engineering, and cyber warfare. We also demonstrate the opportunity to rank,
score, & categorize actionable skill levels into tiers of capability.
Physiological metrics are analyzed from participants during gameplay. These
analyses provide support regarding the intricacies required for competitive
play, and analysis of play. We use these intricacies to build a case for an
organized competitive ecosystem. Using previous player behavior from gameplay,
we also demonstrate the generation of an artificial agent purposed with
gameplay at a competitive level
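A toy sketch of how scores might be ranked and bucketed into capability tiers follows; the player names, scores, and tier cut-offs are invented and are not the paper's scheme.

```python
# Invented match scores and tier cut-offs, purely illustrative.
scores = {"alice": 1850, "bob": 1200, "carol": 950, "dave": 1500}

TIERS = [(1800, "elite"), (1400, "advanced"), (1000, "intermediate"), (0, "novice")]

def tier(score):
    """Return the first tier whose cut-off the score meets."""
    for cutoff, name in TIERS:
        if score >= cutoff:
            return name

ranking = sorted(scores, key=scores.get, reverse=True)
print([(p, tier(scores[p])) for p in ranking])
```

Any real ranking system would also account for opponent strength (as rating systems like Elo do); the sketch only shows the categorization step.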