Architecture for Cooperative Prefetching in P2P Video-on-Demand Systems
Most P2P VoD schemes have focused on service architectures and overlay optimization without considering segment rarity and the performance of prefetching strategies. As a result, they cannot adequately support VCR-oriented service in heterogeneous environments where clients use free VCR controls. Despite the remarkable popularity of VoD systems, no prior work studies the performance gap between different prefetching strategies. In this paper, we analyze the performance of different prefetching strategies. Our analytical characterization brings not only a better understanding of several fundamental tradeoffs in prefetching strategies, but also important insights into the design of P2P VoD systems. On the basis of this analysis, we propose a cooperative prefetching strategy called "cooching", in which the segments requested during VCR interactions are prefetched into the session beforehand, using information collected through gossip. We evaluate our strategy through extensive simulations. The results indicate that the proposed strategy outperforms existing prefetching mechanisms.
Comment: 13 Pages, IJCN
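As a rough illustration of the gossip-based idea in the abstract above, the sketch below shows peers that exchange per-segment request statistics through pairwise gossip and then prefetch the most-requested uncached segments. All names (`Peer`, `gossip`, `prefetch`) and the max-merge rule are illustrative assumptions, not the paper's actual "cooching" protocol.

```python
class Peer:
    """Toy peer that learns segment popularity through gossip and
    prefetches likely VCR-jump targets ahead of time (hypothetical sketch)."""

    def __init__(self, peer_id, num_segments):
        self.peer_id = peer_id
        self.cache = set()
        # Observed request counts per segment, filled in by gossip rounds.
        self.request_counts = [0] * num_segments

    def gossip(self, other):
        # Merge request statistics: both peers keep the element-wise max,
        # a simple way to spread counts without double-counting.
        merged = [max(a, b) for a, b in
                  zip(self.request_counts, other.request_counts)]
        self.request_counts = merged[:]
        other.request_counts = merged[:]

    def prefetch(self, budget):
        # Prefetch the most-requested segments not yet in the cache.
        candidates = sorted(
            (s for s in range(len(self.request_counts)) if s not in self.cache),
            key=lambda s: self.request_counts[s],
            reverse=True,
        )
        for seg in candidates[:budget]:
            self.cache.add(seg)
```

For example, once one peer has observed many requests for a segment, a single gossip exchange lets a second peer prefetch that segment before its own clients jump to it.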
An Overview of the Use of Neural Networks for Data Mining Tasks
In recent years, the area of data mining has experienced considerable demand for technologies that extract knowledge from large and complex data sources. There is substantial commercial interest, as well as research investigation, in developing new and improved approaches for extracting information, relationships, and patterns from datasets. Artificial Neural Networks (NNs) are popular biologically inspired intelligent methodologies whose classification, prediction, and pattern recognition capabilities have been applied successfully in many areas, including science, engineering, medicine, business, banking, and telecommunications. This paper highlights, from a data mining perspective, the implementation of NNs using supervised and unsupervised learning for pattern recognition, classification, prediction, and cluster analysis, and focuses the discussion on their usage in bioinformatics and financial data analysis tasks.
Social Welfare Maximization Auction in Edge Computing Resource Allocation for Mobile Blockchain
Blockchain, an emerging decentralized security system, has been applied in many applications, such as Bitcoin, smart grids, and the Internet of Things. However, running the mining process may consume too much energy and computing resources on handheld devices, which restricts the use of blockchain in mobile environments. In this paper, we consider deploying an edge computing service to support mobile blockchain. We propose an auction-based edge computing resource market for the edge computing service provider. Since there is competition among miners, allocative externalities (both positive and negative) are taken into account in the model. Our auction mechanism maximizes social welfare while guaranteeing truthfulness, individual rationality, and computational efficiency. Based on blockchain mining experiment results, we define a hash power function that characterizes the probability of successfully mining a block. Through extensive simulations, we evaluate the performance of our auction mechanism; the results show that our edge computing resource market model can efficiently solve the social welfare maximization problem for the edge computing service provider.
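The abstract does not give the hash power function explicitly. A common modeling assumption, sketched hypothetically below, is that a miner's probability of mining the next block is proportional to its purchased computing resource, with social welfare taken as expected valuation of winning minus the provider's resource cost; both function names and the linear cost are illustrative.

```python
def mining_success_prob(demands):
    """Probability each miner mines the next block, assuming (hypothetically)
    success probability proportional to purchased edge computing resource."""
    total = sum(demands)
    if total == 0:
        return [0.0] * len(demands)
    return [x / total for x in demands]

def social_welfare(demands, valuations, unit_cost):
    """Toy social welfare: sum of miners' expected valuations of winning
    a block, minus the provider's resource cost (illustrative model)."""
    probs = mining_success_prob(demands)
    return sum(p * v for p, v in zip(probs, valuations)) - unit_cost * sum(demands)
```

A mechanism in this spirit would search over allocations to maximize `social_welfare` subject to truthfulness constraints; the actual mechanism design is in the paper.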
Cloud/fog computing resource management and pricing for blockchain networks
The mining process in blockchain requires solving a proof-of-work puzzle, which is resource-intensive to implement on mobile devices due to the high computing power and energy required. In this paper, we, for the first time, consider edge computing as an enabler for mobile blockchain. In particular, we study edge computing resource management and pricing to support mobile blockchain applications, in which the mining process of miners can be offloaded to an edge computing service provider. We formulate a two-stage Stackelberg game to jointly maximize the profit of the edge computing service provider and the individual utilities of the miners. In the first stage, the service provider sets the price of edge computing nodes. In the second stage, the miners decide on the service demand to purchase based on the observed prices. We apply backward induction to analyze the subgame-perfect equilibrium in each stage for both uniform and discriminatory pricing schemes. For uniform pricing, where the same price is applied to all miners, the existence and uniqueness of the Stackelberg equilibrium are validated by identifying the best response strategies of the miners. For discriminatory pricing, where different prices are applied to different miners, the Stackelberg equilibrium is proved to exist and to be unique by capitalizing on variational inequality theory. Further, real experimental results are employed to justify our proposed model.
Comment: 16 pages, double-column version, accepted by IEEE Internet of Things Journal
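The two-stage structure can be illustrated with a toy backward-induction computation: stage 2 (miner demand given a price) is solved in closed form, and stage 1 searches the provider's price. The logarithmic miner utility, the grid search, and all parameter values below are illustrative assumptions, not the paper's actual model.

```python
def miner_demand(a, price):
    # Stage 2 best response of a miner with utility a*ln(1+x) - price*x:
    # du/dx = a/(1+x) - price = 0  =>  x* = max(0, a/price - 1)
    return max(0.0, a / price - 1.0)

def provider_profit(price, miners, unit_cost):
    # Provider margin times total demand induced by the posted price.
    total = sum(miner_demand(a, price) for a in miners)
    return (price - unit_cost) * total

def backward_induction(miners, unit_cost, grid):
    # Stage 1: the provider anticipates the miners' best responses and
    # picks the profit-maximizing price from a candidate grid.
    return max(grid, key=lambda p: provider_profit(p, miners, unit_cost))
```

With two identical miners (`a = 4`) and unit cost 1, the profit-maximizing uniform price works out analytically to `2*sqrt(unit_cost) = 2`, which the grid search recovers.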
Agent-Based Simulations of Blockchain protocols illustrated via Kadena's Chainweb
While many distributed consensus protocols provide robust liveness and consistency guarantees in the presence of malicious actors, quantitative estimates of how economic incentives affect security are few and far between. In this paper, we describe a system for simulating how adversarial agents, both economically rational and Byzantine, interact with a blockchain protocol. This system provides statistical estimates of the economic difficulty of an attack and of how the presence of certain actors influences protocol-level statistics, such as the expected time to regain liveness. The simulation system is influenced by the design of algorithmic trading and reinforcement learning systems that use explicit modeling of an agent's reward mechanism to evaluate and optimize a fully autonomous agent. We implement and apply this simulation framework to Kadena's Chainweb, a parallelized Proof-of-Work system whose security and censorship resistance depend in complex ways on miner incentive compliance. We provide the first formal description of Chainweb in the literature and use it to motivate our simulation design. Our simulation results include a phase transition in block height growth rate as a function of shard connectivity, and empirical evidence that censorship in Chainweb is too costly for rational miners to engage in. We conclude with an outlook on how simulation can guide and optimize protocol development in a variety of contexts, including Proof-of-Stake parameter optimization and peer-to-peer networking design.
Comment: 10 pages, 7 figures, accepted to the IEEE S&B 2019 conference
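A minimal flavor of such incentive-driven simulation, under purely illustrative parameters (not Chainweb's), is a Monte Carlo estimate showing that a censoring miner forfeits the fees of transactions it refuses to include:

```python
import random

def simulate_rewards(hash_share, censor, rounds,
                     block_reward=1.0, censored_fees=0.2, seed=0):
    """Toy agent-based estimate of a miner's average per-round reward.
    A censoring miner forfeits fees from the transactions it excludes
    (illustrative numbers, not Chainweb's actual parameters)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        if rng.random() < hash_share:      # this miner wins the block
            reward = block_reward
            if not censor:
                reward += censored_fees    # honest miner also collects the fees
            total += reward
    return total / rounds
```

Comparing the two policies under identical randomness shows the honest miner earning strictly more on average, a tiny analogue of the paper's "censorship is too costly" finding.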
Learning from distributed data sources using random vector functional-link networks
One of the main characteristics of many real-world big data scenarios is their distributed nature. In a machine learning context, distributed data, together with the requirements of preserving privacy and scaling up to large networks, brings the challenge of designing fully decentralized training protocols. In this paper, we explore the problem of distributed learning when the features of every pattern are spread across multiple agents (as happens, for example, in a distributed database scenario). We propose an algorithm for a particular class of neural networks, known as Random Vector Functional-Link (RVFL) networks, based on the Alternating Direction Method of Multipliers optimization algorithm. The proposed algorithm makes it possible to learn an RVFL network from multiple distributed data sources, while restricting communication to the single operation of computing a distributed average. Our experimental simulations show that the algorithm achieves generalization accuracy comparable to a fully centralized solution, while at the same time being extremely efficient.
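A simplified sketch of the RVFL idea: random, fixed input weights plus a direct input link, with output weights obtained from averaged sufficient statistics. Note the hedge: the paper handles feature-wise (vertical) partitions via ADMM, whereas this sketch assumes the simpler sample-wise split, so it only illustrates that training can reduce to a distributed-average primitive.

```python
import numpy as np

def rvfl_train_distributed(parts, n_hidden=30, reg=1e-2, seed=0):
    """Train an RVFL network when (sample-wise) data splits sit on
    different agents; all parameter names here are illustrative."""
    rng = np.random.default_rng(seed)
    d = parts[0][0].shape[1]
    W = rng.standard_normal((d, n_hidden))   # random, fixed input weights
    b = rng.standard_normal(n_hidden)

    def hidden(X):
        # Enhancement nodes plus the direct input link (standard RVFL).
        return np.hstack([np.tanh(X @ W + b), X])

    # Each agent computes local H^T H and H^T y; the network only needs
    # their average, i.e., a distributed-average operation.
    HtH = np.mean([hidden(X).T @ hidden(X) for X, _ in parts], axis=0)
    Hty = np.mean([hidden(X).T @ y for X, y in parts], axis=0)
    beta = np.linalg.solve(HtH + reg * np.eye(HtH.shape[0]), Hty)
    return lambda X: hidden(X) @ beta
```

Averaging the local Gram matrices instead of raw data is what keeps communication down to a single distributed-average step per training run.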
Distributed top-k aggregation queries at large
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially in distributed settings, where the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments, with three different real-life datasets and using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
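The TPUT framework mentioned above can be sketched as the classic three-phase threshold algorithm, assuming sum aggregation (variable names are illustrative; the paper's optimizations refine exactly these phases):

```python
def tput_topk(node_lists, k):
    """Simplified TPUT-style distributed top-k with sum aggregation.
    node_lists: one dict per node, mapping item -> local score."""
    m = len(node_lists)

    # Phase 1: each node reports its k highest-scoring items; the
    # coordinator builds partial sums and a lower bound tau1.
    partial = {}
    for scores in node_lists:
        for item, s in sorted(scores.items(), key=lambda kv: -kv[1])[:k]:
            partial[item] = partial.get(item, 0) + s
    tau1 = sorted(partial.values(), reverse=True)[:k][-1]
    threshold = tau1 / m   # any true top-k item scores >= tau1/m somewhere

    # Phase 2: nodes report every item whose local score meets the threshold.
    candidates = set()
    for scores in node_lists:
        candidates |= {item for item, s in scores.items() if s >= threshold}

    # Phase 3: fetch exact aggregate scores only for the candidates.
    totals = {c: sum(scores.get(c, 0) for scores in node_lists)
              for c in candidates}
    return sorted(totals.items(), key=lambda kv: -kv[1])[:k]
```

The pruning is safe because any item in the true top-k has total score at least `tau1`, so at least one of the `m` nodes must hold a local score of at least `tau1/m` for it; phase 2 therefore cannot miss a true top-k item.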