564 research outputs found
Towards Peer-to-Peer-based Cryptanalysis
Abstract: Modern cryptanalytic algorithms require a large amount of computational power. One approach to meeting this requirement is to distribute these algorithms among many computers and to perform the computation in a massively parallel fashion. However, existing approaches for distributing cryptanalytic algorithms are based on a client/server or a grid architecture. In this paper we propose the use of peer-to-peer (P2P) technology for distributed cryptanalytic computations. Our contribution in this paper is three-fold: we first identify the challenges resulting from this approach and provide a classification of algorithms suited for P2P-based computation. Secondly, we discuss and classify some specific cryptanalytic algorithms with respect to their suitability for such an approach. Finally, we provide a new, fully decentralized approach for distributing such computationally intensive jobs. Our design pays special attention to scalability and to the possibly untrustworthy nature of the participating peers.
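The core idea of the abstract — splitting an exhaustive cryptanalytic search into independent chunks that peers can work on in parallel — can be sketched as follows. This is a minimal illustrative example, not the paper's actual design: the hash target, keyspace size, and function names are all assumptions for demonstration.

```python
import hashlib

# Hypothetical setup: brute-force search for a numeric "key" whose SHA-256
# digest matches a known target digest. Target and keyspace are illustrative.
TARGET = hashlib.sha256(b"4242").hexdigest()  # unknown key happens to be 4242
KEYSPACE = 10_000                             # candidate keys 0..9999

def partition(keyspace_size, num_peers):
    """Split the keyspace into contiguous chunks, one per peer."""
    chunk = -(-keyspace_size // num_peers)  # ceiling division
    return [(p * chunk, min((p + 1) * chunk, keyspace_size))
            for p in range(num_peers)]

def search_chunk(start, end):
    """The independent work a single peer would perform on its chunk."""
    for k in range(start, end):
        if hashlib.sha256(str(k).encode()).hexdigest() == TARGET:
            return k
    return None

# Simulate the peers sequentially; in a real P2P system each chunk would be
# dispatched to a different machine, and results collected asynchronously.
results = [search_chunk(s, e) for s, e in partition(KEYSPACE, 8)]
found = next(r for r in results if r is not None)
print(found)  # → 4242
```

Because chunks are independent, untrustworthy peers can be handled by assigning the same chunk to several peers and comparing results, at the cost of redundant work; the paper's actual mechanism may differ.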
Smart Grid Communications: Overview of Research Challenges, Solutions, and Standardization Activities
Optimization of energy consumption in future intelligent energy networks (or
Smart Grids) will be based on grid-integrated near-real-time communications
between various grid elements in generation, transmission, distribution and
loads. This paper discusses some of the challenges and opportunities of
communications research in the areas of smart grid and smart metering. In
particular, we focus on some of the key communications challenges for realizing
interoperable and future-proof smart grid/metering networks, smart grid
security and privacy, and how some of the existing networking technologies can
be applied to energy management. Finally, we also discuss the coordinated
standardization efforts in Europe to harmonize communications standards and
protocols.

Comment: To be published in IEEE Communications Surveys and Tutorials
Geocomputation and open source software: components and software stacks.
Geocomputation, with its necessary focus on software development and methods innovation, has enjoyed a close relationship with free and open source software communities. These extend from communities providing the numerical infrastructure for computation, such as BLAS (Basic Linear Algebra Subprograms), through language communities around Python, Java and others, to communities supporting spatial data handling, especially the projects of the Open Source Geospatial Foundation. This chapter surveys the stack of software components available for geocomputation from these sources, looking in most detail at the R language and environment and at how OSGeo projects have been interfaced with it. In addition, attention is paid to open development models and community participation in software development. Since free and open source geospatial software has also achieved a steadily greater presence in proprietary software as computational platforms evolve, the chapter closes with some indications of future trends in software component stacks, using Terralib as an example.

Keywords: Geocomputation; open source software
Bringing UMAP Closer to the Speed of Light with GPU Acceleration
The Uniform Manifold Approximation and Projection (UMAP) algorithm has become
widely popular for its ease of use, quality of results, and support for
exploratory, unsupervised, supervised, and semi-supervised learning. While many
algorithms can be ported to a GPU in a simple and direct fashion, such efforts
have resulted in inefficient and inaccurate versions of UMAP. We show a number
of techniques that can be used to make a faster and more faithful GPU version
of UMAP, and obtain speedups of up to 100x in practice. Many of these design
choices/lessons are general purpose and may inform the conversion of other
graph and manifold learning algorithms to use GPUs. Our implementation has been
made publicly available as part of the open source RAPIDS cuML library
(https://github.com/rapidsai/cuml).
Social Computing: An Overview
A collection of technologies termed social computing is driving a dramatic evolution of the Web, matching the dot-com era in growth, excitement, and investment. All of these technologies share a high degree of community formation, user-level content creation, and computing, along with a variety of other characteristics. We provide an overview of social computing and identify its salient characteristics. We argue that social computing holds tremendous disruptive potential in the business world and can significantly impact society, and we outline possible changes in organized human action that it could bring about. Social computing can also have deleterious effects, including security issues. We suggest that social computing should be a priority for researchers and business leaders, and we illustrate the fundamental shifts in communication, computing, collaboration, and commerce brought about by this trend.
Context-aware task scheduling in distributed computing systems
These days, the popularity of technologies such as machine learning, augmented reality, and big data analytics is growing dramatically. This leads to a higher demand for computational power, not only among IT professionals but also among ordinary device users who benefit from new applications. At the same time, the computational performance of end-user devices is increasing to meet the demands of these resource-hungry applications. As a result, a huge demand for computational power on one side coexists with a large pool of computational resources on the other. Bringing these two sides together is the idea behind computational resource sharing systems, which allow applications to forward computationally intensive workloads to remote resources. This technique is often used in cloud computing, where customers can rent computational power. However, we argue that cloud resources are not the only possible offloading targets. Rather, idle CPU cycles of end-user administered devices at the edge of the network can be spontaneously leveraged as well. Edge devices, however, are not only heterogeneous in their hardware and software capabilities; they also do not provide any guarantees in terms of reliability or performance. Does this mean that either applications requiring such guarantees or unpredictable resources must be excluded from such a sharing system?
In this thesis, we propose a solution to this problem by introducing the Tasklet system, our approach to a computational resource sharing system. The Tasklet system supports computation offloading to arbitrary types of devices, including stable cloud instances as well as unpredictable end-user owned edge resources. To this end, the Tasklet system is structured into multiple layers. The lowest layer is a best-effort resource sharing system which provides lightweight task scheduling and execution. Here, best-effort means that in case of a failure the task execution is dropped, and that tasks are allocated to resources randomly. To provide execution guarantees such as reliable or timely execution, we add a Quality of Computation (QoC) layer on top of the best-effort execution layer. The QoC layer enforces the guarantees for applications by using a context-aware task scheduler which monitors the available resources in the computing environment and performs the matchmaking between resources and tasks based on the current state of the system. As edge resources are controlled by individuals, we take into account that these users need to be able to decide with whom they want to share their resources and at which price. Thus, we add a social layer on top of the system that allows users to establish friendship connections, which can then be leveraged for social-aware task allocation and for accounting of shared computation.
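The contrast between the best-effort layer (random allocation, no guarantees) and the QoC layer (context-aware matchmaking against monitored resource state) can be sketched in a few lines. All class and field names here are assumptions for illustration, not the Tasklet system's actual API.

```python
import random
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    reliability: float   # observed fraction of successful executions
    speed: float         # relative compute speed from monitoring

@dataclass
class Task:
    name: str
    needs_reliable: bool  # hypothetical QoC requirement set by the application

def best_effort_schedule(task, resources, rng=random):
    """Best-effort layer: pick any resource at random, no guarantees."""
    return rng.choice(resources)

def qoc_schedule(task, resources):
    """QoC layer: match the task against the monitored resource state."""
    candidates = resources
    if task.needs_reliable:
        # Illustrative policy: only resources with a high success rate qualify.
        candidates = [r for r in resources if r.reliability >= 0.9]
    # Among eligible resources, prefer the fastest one.
    return max(candidates, key=lambda r: r.speed)

pool = [
    Resource("edge-laptop", reliability=0.6, speed=1.0),
    Resource("edge-desktop", reliability=0.7, speed=2.0),
    Resource("cloud-vm", reliability=0.99, speed=1.5),
]

chosen = qoc_schedule(Task("render", needs_reliable=True), pool)
print(chosen.name)  # → cloud-vm
```

The unreliable but fast edge desktop would win under a pure speed criterion; the reliability filter illustrates how a QoC requirement changes the matchmaking outcome.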
Multi-dimensional resource allocation strategy for large-scale computational grid systems
Master's thesis, Master of Engineering