Many-to-Many Matching Games for Proactive Social-Caching in Wireless Small Cell Networks
In this paper, we address the caching problem in small cell networks from a
game theoretic point of view. In particular, we formulate the caching problem
as a many-to-many matching game between small base stations and service
providers' servers. The servers store a set of videos and aim to cache these
videos at the small base stations in order to reduce the delay experienced by
the end-users. On the other hand, small base stations cache the videos
according to their local popularity, so as to reduce the load on the backhaul
links. We propose a new matching algorithm for the many-to-many problem and
prove that it reaches a pairwise stable outcome. Simulation results show that
the number of requests satisfied by the small base stations under the proposed
caching algorithm can reach up to three times that of a random
caching policy. Moreover, the expected download time of all the videos can be
reduced significantly.
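A pairwise-stable many-to-many matching of this kind can be sketched with a deferred-acceptance style loop. Everything below (the names, the popularity scores, and the quota handling) is an illustrative assumption, not the paper's exact algorithm:

```python
# Hedged sketch of a many-to-many matching between videos (proposed on
# behalf of the servers) and small base stations (SBSs), each SBS holding
# at most `quota` videos ranked by local popularity.

def match_videos(videos, sbs_popularity, quota):
    """videos: list of video ids; sbs_popularity: {sbs: {video: score}};
    quota: max videos cached per SBS. Returns {sbs: set(videos)}."""
    cache = {sbs: set() for sbs in sbs_popularity}
    unplaced = list(videos)
    rejected = {v: set() for v in videos}  # SBSs that rejected each video
    while unplaced:
        v = unplaced.pop()
        # propose to the SBS that values v most among those not yet tried
        candidates = [s for s in sbs_popularity
                      if s not in rejected[v] and v in sbs_popularity[s]]
        if not candidates:
            continue  # video remains uncached
        s = max(candidates, key=lambda s: sbs_popularity[s][v])
        cache[s].add(v)
        if len(cache[s]) > quota:
            # over quota: evict the least popular video, which re-proposes
            worst = min(cache[s], key=lambda w: sbs_popularity[s][w])
            cache[s].remove(worst)
            rejected[worst].add(s)
            unplaced.append(worst)
    return cache
```

Because an SBS only ever trades a cached video for one it ranks strictly higher, no video-SBS pair is left preferring each other to their current assignment, which is the pairwise-stability intuition in miniature.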
Lex Informatica: The Formulation of Information Policy Rules through Technology
Historically, law and government regulation have established default rules for information policy, including constitutional rules on freedom of expression and statutory rights of ownership of information. This Article will show that for network environments and the Information Society, however, law and government regulation are not the only source of rule-making. Technological capabilities and system design choices impose rules on participants. The creation and implementation of information policy are embedded in network designs and standards as well as in system configurations. Even user preferences and technical choices create overarching, local default rules. This Article argues, in essence, that the set of rules for information flows imposed by technology and communication networks form a “Lex Informatica” that policymakers must understand, consciously recognize, and encourage
Intellectual property rights in a knowledge-based economy
Intellectual property rights (IPR) have been created as economic mechanisms to facilitate ongoing innovation by granting inventors a temporary monopoly in return for disclosure of technical know-how. Since the beginning of the 1980s, IPR have come under scrutiny as new technological paradigms appeared with the emergence of knowledge-based industries. Knowledge-based products are intangible, non-excludable and non-rivalrous goods. Consequently, it is difficult for their creators to control their dissemination and use. In particular, many information goods are based on network externalities and on the creation of market standards. At the same time, information technologies are generic in the sense of being useful in many places in the economy. Hence, policy makers often define current IPR regimes in the context of new technologies as both over- and under-protective. They are over-protective in the sense that they prevent the dissemination of information which has a very high social value; they are under-protective in the sense that they do not provide strong control over the appropriation of rents from invention, and thus may not provide strong incentives to innovate. During the 1980s, attempts to assess the role of IPR in the process of technological learning found that even though firms in high-tech sectors do use patents as part of their strategy for intellectual property protection, the reliance of these sectors on patents as an information source for innovation is lower than in traditional industries. Intellectual property rights are based mainly on patents for technical inventions and on copyrights for artistic works. Patents are granted only if inventions display minimal levels of utility, novelty and non-obviousness of technical know-how. By contrast, copyrights protect only final works and their derivatives, but guarantee protection for longer periods, according to the Berne Convention.
Licensing is a legal instrument that allows the use of patented technology by other firms, in return for royalty fees paid to the inventor. Licensing can be contracted on an exclusive or non-exclusive basis, but in most countries patented knowledge can be exclusively held by its inventors, as legal provisions for compulsory licensing of technologies do not exist. The fair use doctrine aims to prevent formation of perfect monopolies over technological fields and copyrighted artefacts as a result of IPR application. Hence, the use of patented and copyrighted works is permissible in academic research, education and the development of technologies that are complementary to core technologies. Trade secrecy is meant to prevent inadvertent technology transfer to rival firms and is based on contracts between companies and employees. However, as trade secrets prohibit transfer of knowledge within industries, regulators have attempted to foster disclosure of technical know-how by institutional means of patents, copyrights and sui-generis laws. And indeed, following the provisions formed by IPR regulation, firms have shifted from methods of trade secrecy towards patenting strategies to achieve improved protection of intellectual property, as well as means to acquire competitive advantages in the market by monopolization of technological advances.
Exact Analysis of TTL Cache Networks: The Case of Caching Policies driven by Stopping Times
TTL caching models have recently regained significant research interest,
largely due to their ability to fit popular caching policies such as LRU. This
paper advances the state-of-the-art analysis of TTL-based cache networks by
developing two exact methods with orthogonal generality and computational
complexity. The first method generalizes existing results for line networks
under renewal requests to the broad class of caching policies whereby evictions
are driven by stopping times. The obtained results are further generalized,
using the second method, to feedforward networks with Markov arrival processes
(MAP) requests. MAPs are particularly suitable for non-line networks because
they are closed not only under superposition and splitting, as known, but also
under input-output caching operations as proven herein for phase-type TTL
distributions. The crucial benefit of the two closure properties is that they
jointly enable the first exact analysis of feedforward networks of TTL caches
in great generality.
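The stopping-time view of evictions can be made concrete for the simplest member of the class: a single TTL cache under renewal (here, Poisson) requests, where the hit probability has a closed form. This is a minimal sketch under assumed parameters, not the paper's feedforward-network analysis; for a TTL timer that restarts on every request with exponential inter-arrivals of rate lam, renewal theory gives P(hit) = P(A < T) = 1 - e^(-lam*T), and the simulation is checked against that value:

```python
# Minimal sketch: one TTL cache, Poisson requests of rate lam, timer of
# length ttl restarted at every request. Parameter names are illustrative.
import math
import random

def simulate_hit_prob(lam, ttl, n_requests, seed=0):
    rng = random.Random(seed)
    expiry, hits, t = -1.0, 0, 0.0
    for _ in range(n_requests):
        t += rng.expovariate(lam)      # renewal (exponential) inter-arrival
        if t < expiry:                 # content still cached -> hit
            hits += 1
        expiry = t + ttl               # (re)start the TTL timer
    return hits / n_requests

lam, ttl = 2.0, 1.0
est = simulate_hit_prob(lam, ttl, 200_000)
exact = 1 - math.exp(-lam * ttl)       # renewal-theoretic hit probability
```

The deterministic TTL here is one instance of an eviction driven by a stopping time; the paper's methods cover far more general eviction rules and phase-type TTL distributions.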
Storage Solutions for Big Data Systems: A Qualitative Study and Comparison
Big data systems development is full of challenges in view of the variety of
application areas and domains that this technology promises to serve.
Typically, fundamental design decisions involved in big data systems design
include choosing appropriate storage and computing infrastructures. In this age
of heterogeneous systems that integrate different technologies for an
optimized solution to a specific real-world problem, big data systems are no
exception to this rule. As far as the storage aspect of any big data system is
concerned, the primary facet in this regard is the storage infrastructure, and
NoSQL appears to be the technology that best fulfills its requirements. However,
every big data application has variable data characteristics and thus, the
corresponding data fits into a different data model. This paper presents
feature and use case analysis and comparison of the four main data models
namely document oriented, key value, graph and wide column. Moreover, a feature
analysis of 80 NoSQL solutions has been provided, elaborating on the criteria
and points that a developer must consider while making a possible choice.
Typically, big data storage needs to communicate with the execution engine and
other processing and visualization technologies to create a comprehensive
solution. This brings the second facet of big data storage, big data file
formats, into the picture. The second half of the paper compares the
advantages, shortcomings and possible use cases of available big data file
formats for Hadoop, which is the foundation for most big data computing
technologies. Decentralized storage and blockchain are seen as the next
generation of big data storage, and their challenges and future prospects are
also discussed.
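The survey's idea of matching workload characteristics to one of the four data models can be caricatured as a rule of thumb. The predicates and recommendations below are deliberately simplified illustrative assumptions, not the paper's actual decision criteria:

```python
# Illustrative (not from the paper) rule-of-thumb mapping from workload
# characteristics to the four NoSQL data models the survey compares.
def suggest_data_model(relations_heavy, schema_flexible, wide_sparse_rows):
    if relations_heavy:
        return "graph"              # traversal-dominated workloads
    if wide_sparse_rows:
        return "wide column"        # e.g. time series, sparse attributes
    if schema_flexible:
        return "document oriented"  # nested, evolving records
    return "key value"              # simple lookups at scale
```

Real selection involves many more criteria (consistency model, query patterns, operational maturity), which is precisely why the paper analyzes 80 concrete solutions rather than three booleans.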
CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS Think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective.
The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines.
From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.
Distributed Hybrid Simulation of the Internet of Things and Smart Territories
This paper deals with the use of hybrid simulation to build and compose
heterogeneous simulation scenarios that can be proficiently exploited to model
and represent the Internet of Things (IoT). Hybrid simulation is a methodology
that combines multiple modalities of modeling/simulation. Complex scenarios are
decomposed into simpler ones, each one being simulated through a specific
simulation strategy. All these simulation building blocks are then synchronized
and coordinated. This methodology is ideal for representing IoT setups, which
are usually very demanding due to the heterogeneity of possible scenarios
arising from the massive deployment of an enormous number of sensors
and devices. We present a use case concerned with the distributed simulation of
smart territories, a novel view of decentralized geographical spaces that,
thanks to the use of IoT, builds ICT services to manage resources in a way that
is sustainable and not harmful to the environment. Three different simulation
models are combined together, namely, an adaptive agent-based parallel and
distributed simulator, an OMNeT++ based discrete event simulator and a
script-language simulator based on MATLAB. Results from a performance analysis
confirm the viability of using hybrid simulation to model complex IoT
scenarios.

Comment: arXiv admin note: substantial text overlap with arXiv:1605.0487
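The "synchronized and coordinated" step is the heart of hybrid simulation: the components must agree on how far each may safely advance in simulated time. Below is a hedged sketch of a conservative lockstep scheme; the `Simulator` class and its event model are invented for illustration (the paper couples an agent-based simulator, OMNeT++ and MATLAB instead):

```python
# Conservative co-simulation sketch: each component advances only up to
# the earliest next event across all components, so no one processes an
# event that a peer could still invalidate.

class Simulator:
    def __init__(self, name, events):          # events: [(time, payload)]
        self.name, self.queue = name, sorted(events)
        self.processed = []

    def next_time(self):
        return self.queue[0][0] if self.queue else float("inf")

    def advance_to(self, horizon):
        # process every local event up to the agreed safe horizon
        while self.queue and self.queue[0][0] <= horizon:
            self.processed.append(self.queue.pop(0))

def run_coupled(sims, end_time):
    t = 0.0
    while t < end_time:
        # all components agree on the earliest pending event time,
        # then advance together to it
        t = min(s.next_time() for s in sims)
        if t > end_time:
            break
        for s in sims:
            s.advance_to(t)
```

Real engines add lookahead and cross-simulator event exchange on top of this loop, but the safe-horizon agreement is the same coordination primitive.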
The 5G Cellular Backhaul Management Dilemma: To Cache or to Serve
With the introduction of caching capabilities into small cell networks
(SCNs), new backhaul management mechanisms need to be developed to prevent the
predicted files that are downloaded for caching at the small base stations
(SBSs) from jeopardizing the urgent requests that must be served via the
backhaul. Moreover, these mechanisms must account for the heterogeneity of the
backhaul, which will encompass both wireless backhaul links at various
frequency bands and a wired backhaul component. In this paper, the
heterogeneous backhaul management problem is formulated as a minority game in
which each SBS has to define the number of predicted files to download, without
affecting the required transmission rate of the current requests. For the
formulated game, it is shown that a unique fair proper mixed Nash equilibrium
(PMNE) exists. A self-organizing reinforcement learning algorithm is proposed
and proven to converge to a unique Boltzmann-Gibbs equilibrium which approximates
the desired PMNE. Simulation results show that the performance of the proposed
approach can be close to that of the ideal optimal algorithm while it
outperforms a centralized greedy approach in terms of the amount of data that
is cached without jeopardizing the quality-of-service of current requests.

Comment: Accepted for publication at Transactions on Wireless Communication
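The interplay between the minority game and the Boltzmann-Gibbs equilibrium can be sketched in a few lines. The toy below is not the paper's algorithm: it reduces each SBS's decision to a binary prefetch-or-serve choice, rewards the minority side, and lets each SBS play a softmax (Boltzmann-Gibbs) distribution over smoothed utility estimates; all parameter values are illustrative assumptions:

```python
# Hedged sketch: N SBSs repeatedly choose to prefetch predicted files (1)
# or leave backhaul capacity for urgent traffic (0). An SBS is rewarded
# when it sits on the minority side, and actions are drawn from a
# Boltzmann-Gibbs (softmax) distribution over learned utility estimates.
import math
import random

def boltzmann(q, tau):
    w = [math.exp(v / tau) for v in q]
    z = sum(w)
    return [x / z for x in w]

def run_minority_game(n_sbs=9, rounds=3000, tau=0.1, lr=0.05, seed=1):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_sbs)]    # per-SBS utility estimates
    for _ in range(rounds):
        probs = [boltzmann(qi, tau) for qi in q]
        acts = [1 if rng.random() < p[1] else 0 for p in probs]
        minority = 1 if sum(acts) < n_sbs / 2 else 0
        for i, a in enumerate(acts):
            reward = 1.0 if a == minority else 0.0
            q[i][a] += lr * (reward - q[i][a])  # smoothed utility update
    return [boltzmann(qi, tau) for qi in q]
```

The temperature `tau` controls how sharply the softmax concentrates on the better-valued action; in the paper, the Boltzmann-Gibbs equilibrium reached by such smoothed learning approximates the fair proper mixed Nash equilibrium.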