Push vs. Pull in Web-Based Network Management
In this paper, we show how Web technologies can be used effectively to (i) address some of the deficiencies of traditional IP network management platforms, and (ii) render these expensive platforms redundant. We build on the concept of embedded management application, proposed by Wellens and Auerbach, and present two models of network management application designs that rely on Web technologies. First, the pull model is based on the request/response paradigm. It is typically used to perform data polling. Several commercial management platforms already use Web technologies that rely on this model to provide for ad hoc management; we demonstrate how to extend this to regular management. Second, the push model is a novel approach which relies on the publish/subscribe/distribute paradigm. It is better suited to regular management than the pull model, and allows administrators to conserve network bandwidth as well as CPU time on the management station. It can be seen as a generalization of the paradigm commonly used for notification delivery. Finally, we introduce the concept of the collapsed network management platform, where these two models coexist.
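The publish/subscribe/distribute cycle behind the push model can be sketched in a few lines. Everything below, class and variable names included, is a hypothetical illustration of the idea, not code from the paper:

```python
class Agent:
    """Push-model sketch (hypothetical names): the management station
    subscribes once, and the agent pushes updates thereafter."""
    def __init__(self):
        self.subscriptions = {}  # variable name -> list of subscriber callbacks

    def subscribe(self, variable, callback):
        # The management station registers its interest once (subscribe)...
        self.subscriptions.setdefault(variable, []).append(callback)

    def distribute(self, variable, value):
        # ...and the agent pushes each new value (distribute), so the
        # station never has to poll.
        for callback in self.subscriptions.get(variable, []):
            callback(variable, value)

received = []
agent = Agent()
agent.subscribe("ifInOctets", lambda var, val: received.append((var, val)))
agent.distribute("ifInOctets", 4200)   # pushed on the agent's own schedule
agent.distribute("ifInOctets", 4300)
```

The bandwidth saving follows from the inversion of control: the subscription is sent once, after which only changed values cross the network.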
Improving the Scalability of DPWS-Based Networked Infrastructures
The Devices Profile for Web Services (DPWS) specification enables seamless
discovery, configuration, and interoperability of networked devices in various
settings, ranging from home automation and multimedia to manufacturing
equipment and data centers. Unfortunately, the very simplicity of the event
notification mechanisms that makes DPWS fit for resource-constrained devices
also makes it hard to scale to large infrastructures with more stringent
dependability requirements, ironically the setting where self-configuration
would be most useful. In this report, we address this challenge with a
proposal to integrate
gossip-based dissemination in DPWS, thus maintaining compatibility with
original assumptions of the specification, and avoiding a centralized
configuration server or custom black-box middleware components. In detail, we
show how our approach provides an evolutionary and non-intrusive solution to
the scalability limitations of DPWS and experimentally evaluate it with an
implementation based on the Web Services for Devices (WS4D) Java Multi
Edition DPWS Stack (JMEDS). Comment: 28 pages, Technical Report
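The gossip-based dissemination at the heart of the proposal can be illustrated with a toy simulation: each round, every node that has seen an event relays it to a few randomly chosen peers, so the notification spreads in logarithmically many rounds without any central server. This is a sketch under assumed parameters, not the WS4D/JMEDS implementation:

```python
import random

def gossip_rounds(num_nodes, fanout, seed=1):
    """Toy push-gossip simulation: count the rounds needed until every
    node has heard an event first observed by node 0. `fanout` peers
    are chosen uniformly at random each round (an assumption; real
    protocols use partial membership views)."""
    rng = random.Random(seed)
    informed = {0}                       # node 0 observes the event first
    rounds = 0
    while len(informed) < num_nodes:
        for _ in list(informed):
            # Each informed node relays to `fanout` random peers.
            informed.update(rng.sample(range(num_nodes), fanout))
        rounds += 1
    return rounds

rounds = gossip_rounds(num_nodes=100, fanout=3)
```

Because the set of informed nodes grows multiplicatively per round, even large infrastructures converge quickly, which is what makes the approach attractive for scaling event notification.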
Using ICT tools to manage knowledge: a student perspective in determining the quality of education
Within the e-learning context of a university, technology has the potential to facilitate the
knowledge interaction between the source (instructor) and the recipient (students). From a
literature review, it can be concluded that prior studies have not explored the types of
channels that encourage knowledge transfer in this environment. For example, how explicit
knowledge travels through the e-learning environment and goes through interaction processes
and is received and acquired is largely unknown.
According to Alavi & Leidner (2001), Information and Communication Technology (ICT)
can help speed up the processes of transferring knowledge from those who have knowledge
to those seeking knowledge. Within the university context, technologies such as email,
Internet, IRC chat, bulletin boards and tools such as WebCT and BlackBoard have the
potential to facilitate the transfer of knowledge and act as a link between source and recipient.
Effective knowledge transfer has to consider effective knowledge acquisition;
the two are inextricably linked. Nonaka's spiral model addresses knowledge acquisition
through spiraling processes in which an individual would be able to convert tacit knowledge
to explicit knowledge and vice versa. According to Nonaka & Takeuchi (1995) there are four
types of interaction, which give way to the conversion of one form of knowledge into
another, namely tacit-to-tacit, tacit-to-explicit, explicit-to-tacit and explicit-to-explicit. In an
academic environment, this can be studied as the source, either transferring tacit or explicit
knowledge, and similarly as the recipient, receiving knowledge either in tacit or explicit form.
Nonaka & Takeuchi (1995) also refer to this as the SECI model, where SECI stands for
Socialisation, Externalisation, Combination and Internalisation.
This 'Research in Progress' reports the outcomes of a study undertaken to understand how
and to what extent knowledge spiraling processes and accompanying characteristics of SECI
can be ICT-enabled to contribute towards the studying and learning processes for university
education. A survey instrument was developed for this purpose and it is currently undergoing
peer-review and other customary validity and reliability tests. Once the instrument is
validated, it will be administered to about 50 tertiary students. It is hoped
that the results obtained from this survey will be reported at the QIK 2005
conference.
Petuum: A New Platform for Distributed Machine Learning on Big Data
What is a systematic way to efficiently apply a wide spectrum of advanced ML
programs to industrial scale problems, using Big Models (up to 100s of billions
of parameters) on Big Data (up to terabytes or petabytes)? Modern
parallelization strategies employ fine-grained operations and scheduling beyond
the classic bulk-synchronous processing paradigm popularized by MapReduce, or
even specialized graph-based execution that relies on graph representations of
ML programs. The variety of approaches tends to pull systems and algorithms
design in different directions, and it remains difficult to find a universal
platform applicable to a wide range of ML programs at scale. We propose a
general-purpose framework that systematically addresses data- and
model-parallel challenges in large-scale ML, by observing that many ML programs
are fundamentally optimization-centric and admit error-tolerant,
iterative-convergent algorithmic solutions. This presents unique opportunities
for an integrative system design, such as bounded-error network synchronization
and dynamic scheduling based on ML program structure. We demonstrate the
efficacy of these system designs versus well-known implementations of modern ML
algorithms, allowing ML programs to run in much less time and at considerably
larger model sizes, even on modestly-sized compute clusters. Comment: 15 pages, 10 figures, final version in KDD 2015 under the same title
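The bounded-error network synchronization mentioned in the abstract is commonly realized as a stale-synchronous-parallel (SSP) clock: a fast worker may run ahead of the slowest worker by at most a fixed staleness bound, which bounds the error introduced by reading stale parameters. The sketch below is a minimal, hypothetical rendering of that idea, not Petuum's actual code:

```python
class SSPClock:
    """Minimal stale-synchronous-parallel barrier (hypothetical): a
    worker may proceed only while it is at most `staleness` iterations
    ahead of the slowest worker."""
    def __init__(self, num_workers, staleness):
        self.clocks = [0] * num_workers
        self.staleness = staleness

    def tick(self, worker):
        self.clocks[worker] += 1         # worker finished one iteration

    def can_proceed(self, worker):
        # Block fast workers once they outrun the slowest by > staleness;
        # iterative-convergent ML tolerates the bounded error in between.
        return self.clocks[worker] - min(self.clocks) <= self.staleness

clock = SSPClock(num_workers=3, staleness=2)
for _ in range(3):
    clock.tick(0)                        # worker 0 races 3 iterations ahead
```

With a staleness of 2, worker 0 is now blocked until the others catch up, while workers at the minimum clock remain free to run.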
Location Privacy in Spatial Crowdsourcing
Spatial crowdsourcing (SC) is a new platform that engages individuals in
collecting and analyzing environmental, social and other spatiotemporal
information. With SC, requesters outsource their spatiotemporal tasks to a set
of workers, who will perform the tasks by physically traveling to the tasks'
locations. This chapter identifies privacy threats toward both workers and
requesters during the two main phases of spatial crowdsourcing, tasking and
reporting. Tasking is the process of identifying which tasks should be assigned
to which workers. This process is handled by a spatial crowdsourcing server
(SC-server). The latter phase is reporting, in which workers travel to the
tasks' locations, complete the tasks and upload their reports to the SC-server.
The challenge is to enable effective and efficient tasking as well as reporting
in SC without disclosing the actual locations of workers (at least until they
agree to perform a task) and the tasks themselves (at least to workers who are
not assigned to those tasks). This chapter aims to provide an overview of the
state-of-the-art in protecting users' location privacy in spatial
crowdsourcing. We provide a comparative study of a diverse set of solutions in
terms of task publishing modes (push vs. pull), problem focuses (tasking and
reporting), threats (server, requester and worker), and underlying technical
approaches (from pseudonymity, cloaking, and perturbation to exchange-based and
encryption-based techniques). The strengths and drawbacks of the techniques are
highlighted, leading to a discussion of open problems and future work.
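Of the techniques surveyed, cloaking is the simplest to illustrate: the worker reports only the grid cell containing their location, never the exact point. A minimal sketch (the function name and cell size are illustrative assumptions):

```python
import math

def cloak(lat, lon, cell_deg=0.01):
    """Spatial-cloaking sketch: snap coordinates to the south-west
    corner of a fixed grid cell, so the SC-server learns only the
    cell (roughly 1 km square at cell_deg=0.01), not the exact
    location of the worker."""
    return (math.floor(lat / cell_deg) * cell_deg,
            math.floor(lon / cell_deg) * cell_deg)

cloaked = cloak(34.0522, -118.2437)     # -> south-west corner of the cell
```

The trade-off discussed in the chapter is visible even here: a larger `cell_deg` gives stronger privacy but makes tasking less efficient, since the server can no longer assign tasks to the nearest worker precisely.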
Gunrock: GPU Graph Analytics
For large-scale graph analytics on the GPU, the irregularity of data access
and control flow, and the complexity of programming GPUs, have presented two
significant challenges to developing a programmable high-performance graph
library. "Gunrock", our graph-processing system designed specifically for the
GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on
operations on a vertex or edge frontier. Gunrock achieves a balance between
performance and expressiveness by coupling high performance GPU computing
primitives and optimization strategies with a high-level programming model that
allows programmers to quickly develop new graph primitives with small code size
and minimal GPU programming knowledge. We characterize the performance of
various optimization strategies and evaluate Gunrock's overall performance on
different GPU architectures on a wide range of graph primitives that span from
traversal-based algorithms and ranking algorithms, to triangle counting and
bipartite-graph-based algorithms. The results show that on a single GPU,
Gunrock has on average at least an order of magnitude speedup over Boost and
PowerGraph, comparable performance to the fastest GPU hardwired primitives and
CPU shared-memory graph libraries such as Ligra and Galois, and better
performance than any other GPU high-level graph library. Comment: 52 pages,
invited paper to ACM Transactions on Parallel Computing (TOPC), an extended
version of the PPoPP'16 paper "Gunrock: A High-Performance Graph Processing
Library on the GPU".
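Gunrock's frontier-centric abstraction can be mimicked on the CPU with a bulk-synchronous BFS: each step "advances" over the current vertex frontier and "filters" out already-visited vertices. The sketch below is a toy caricature of that programming model, not Gunrock's API:

```python
def bfs_frontier(adj, source):
    """Frontier-centric BFS: each bulk-synchronous step expands the
    current vertex frontier (advance) and keeps only newly discovered
    vertices (filter), returning each vertex's depth from `source`."""
    depth = {source: 0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for v in frontier:               # advance: expand each frontier vertex
            for w in adj.get(v, []):
                if w not in depth:       # filter: drop visited vertices
                    depth[w] = depth[v] + 1
                    next_frontier.append(w)
        frontier = next_frontier
    return depth

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
depths = bfs_frontier(graph, 0)
```

On the GPU, each advance/filter step maps naturally onto massively parallel primitives over the frontier, which is what lets a high-level model keep hardwired-kernel performance.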
Managing Dynamic User Communities in a Grid of Autonomous Resources
One of the fundamental concepts in Grid computing is the creation of Virtual
Organizations (VO's): a set of resource consumers and providers that join
forces to solve a common problem. Typical examples of Virtual Organizations
include collaborations formed around the Large Hadron Collider (LHC)
experiments. To date, Grid computing has been applied on a relatively small
scale, linking dozens of users to a dozen resources, and management of these
VO's was a largely manual operation. With the advance of large
collaborations, linking more than 10000 users with 1000 sites in 150
countries, a
comprehensive, automated management system is required. It should be simple
enough not to deter users, while at the same time ensuring local site autonomy.
The VO Management Service (VOMS), developed by the EU DataGrid and DataTAG
projects[1, 2], is a secured system for managing authorization for users and
resources in virtual organizations. It extends the existing Grid Security
Infrastructure[3] architecture with embedded VO affiliation assertions that can
be independently verified by all VO members and resource providers. Within the
EU DataGrid project, Grid services for job submission, file- and database
access are being equipped with fine-grained authorization systems that take VO
membership into account. These also give resource owners the ability to ensure
site security and enforce local access policies. This paper will describe the
EU DataGrid security architecture, the VO membership service and the local site
enforcement mechanisms Local Centre Authorization Service (LCAS), Local
Credential Mapping Service (LCMAPS), and the Java Trust and Authorization
Manager. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, CA, USA, March 2003, 7 pages, LaTeX, 5 eps figures. PSN
TUBT00
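The interplay between a verified VO membership assertion and local site policy (the roles LCAS and LCMAPS play) can be caricatured in a few lines. Every name, account, and policy below is hypothetical:

```python
# Caricature of VOMS-style site authorization: a verified VO attribute
# is mapped to a local account (roughly the LCMAPS role), and VOs
# unknown to local policy are refused (roughly the LCAS role).
SITE_POLICY = {
    "atlas": "atlas001",                 # VO name -> local pool account
    "cms": "cms001",
}

def authorize(user_dn, vo_attribute):
    """Return a local account for `user_dn`'s verified VO attribute,
    or None when local site policy denies access (site autonomy)."""
    # A real service would first verify the signed VO assertion
    # embedded in the user's credential before consulting policy.
    return SITE_POLICY.get(vo_attribute)
```

The key property mirrored here is that the site, not the VO, has the final word: membership in a VO is necessary but never sufficient for access.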