Microscopic Study of Superfluidity in Dilute Neutron Matter
Singlet S-wave superfluidity of dilute neutron matter is studied within the
correlated BCS method, which takes into account both pairing and short-range
correlations. First, the equation of state (EOS) of normal neutron matter is
calculated within the Correlated Basis Function (CBF) method in lowest cluster
order, using the relevant components of the Argonne potential and assuming
trial Jastrow-type correlation functions. The superfluid gap is then calculated
with the corresponding component of the Argonne potential and the optimally
determined correlation functions. The dependence of our results on the chosen
forms of the correlation functions is studied, and the role of the P-wave
channel is investigated. Where comparison is meaningful, the values obtained
for the gap within this simplified scheme are consistent with the results of
similar, more elaborate microscopic methods.
Comment: 9 pages, 6 figures
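For orientation, the superfluid gap discussed above is the solution of a BCS-type gap equation; a generic single-channel form (the notation here is ours, not necessarily the paper's) is

\[
\Delta(k) \;=\; -\frac{1}{\pi} \int_0^{\infty} \mathrm{d}k'\, k'^{2}\, V(k,k')\,
\frac{\Delta(k')}{\sqrt{\big[\varepsilon(k') - \mu\big]^{2} + \Delta^{2}(k')}},
\]

where \(V(k,k')\) is the pairing interaction in the relevant channel, \(\varepsilon(k')\) the single-particle energy (modified here by the short-range correlations), and \(\mu\) the chemical potential.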
The troubled journey of QoS: From ATM to content networking, edge-computing and distributed internet governance
Network Quality of Service (QoS) and the associated user Quality of Experience (QoE) have always been the networking “holy grail”, sought after through various approaches and networking technologies over the last decades. Despite the substantial effort invested in the area, there has been very little actual deployment of mechanisms to guarantee QoS in the Internet. As a result, the Internet largely operates on a “best effort” basis in terms of QoS. Here, we attempt a historical overview in order to better understand how we got to where we are today, and we consider the evolution of QoS/QoE in the future.
As we move towards more demanding networking environments, where enormous amounts of data are produced at the edge of the network (e.g., by IoT devices), computation will also need to migrate to the edge in order to guarantee QoS. In turn, we argue that distributed computing at the edge of the network will inevitably require infrastructure decentralisation. With decentralisation, however, trust in the infrastructure provider becomes more difficult to guarantee, and new components need to be incorporated into the Internet landscape both to support emerging applications and to achieve acceptable service quality.
We start from the first steps of ATM and related IP-based technologies, move on to recent proposals for content-oriented and Information-Centric Networking as well as mobile edge and fog computing, and finally examine how distributed Internet governance through Distributed Ledger Technology and blockchains can influence QoS in future networks.
Path-Based Epidemic Spreading in Networks
Conventional epidemic models assume omnidirectional, contact-based infection. This strongly associates the epidemic spreading process with node degrees, while the role of the infection transmission medium is often neglected. In real-world networks, however, the infectious agent, as the physical contagion medium, usually flows from one node to another via specific directed routes (path-based infection). Here, we use continuous-time Markov chain analysis to model the influence of the infectious agent and of the routing paths on the spreading behavior, taking into account the state transitions of each node individually rather than the mean aggregated behavior of all nodes. By applying a mean-field approximation, the analysis complexity of the path-based infection mechanism is reduced from exponential to polynomial. We show that the structure of the topology plays a secondary role in determining the size of the epidemic; instead, it is the routing algorithm and the traffic intensity that determine the survivability and the steady state of the epidemic. We define an infection characterization matrix that encodes both the routing and the traffic information. Based on this, we derive the critical path-based epidemic threshold below which the epidemic dies off, as well as conditional bounds on this threshold, which network operators may use to promote or suppress path-based spreading in their networks. Finally, besides artificially generated random and scale-free graphs, we also use real-world networks and traffic as case studies to compare the behavior of contact- and path-based epidemics. Our results further corroborate recent empirical observations that epidemics in communication networks are highly persistent.
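The threshold result summarized here follows the familiar mean-field pattern in which an epidemic dies out when the effective spreading rate drops below the inverse of the largest eigenvalue of a spreading matrix. A minimal sketch of that computation follows; the path matrix below is a made-up stand-in (a uniformly traffic-weighted ring) for the paper's infection characterization matrix, not its actual construction.

```python
def lambda_max(M, iters=200):
    """Largest eigenvalue of a non-negative square matrix via power iteration."""
    n = len(M)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w) or 1.0
        v = [x / lam for x in w]
    return lam

# Contact-based spreading: the adjacency matrix of a 4-node ring.
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]

# Path-based spreading: the same ring, but each link weighted by a
# (made-up) traffic intensity, standing in for routing/traffic information.
traffic = 0.5
P = [[traffic * a for a in row] for row in A]

# Classical mean-field criterion: tau_c = 1 / lambda_max(M).
tau_contact = 1.0 / lambda_max(A)  # 0.5
tau_path = 1.0 / lambda_max(P)     # 1.0: lighter traffic raises the threshold
print(tau_contact, tau_path)
```

Under this criterion, an infection with effective spreading rate below tau_c dies off; halving the traffic weights doubles the threshold, matching the abstract's point that traffic intensity, rather than topology alone, governs survivability.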
Hash-routing schemes for information-centric networking.
It is our great pleasure to welcome you to The 3rd ACM SIGCOMM Workshop on Information-Centric Networking (ICN 2013). The fundamental concept in Information-Centric Networking (ICN) is to evolve the Internet from today's host-based packet delivery towards directly retrieving information objects by name in a secure, reliable, scalable, and efficient way. These architectural design efforts aim to directly address the challenges that arise from the increasing demand for highly scalable content distribution, the accelerating growth of mobile devices, the wide deployment of the Internet of Things (IoT), and the need to secure the global Internet.
Rapid progress has been made over the last few years: initial designs have been sketched, new research challenges exposed, and prototype implementations deployed on testbeds of various scales. The research effort has reached a stage that allows one to experiment with proposed architectures and to apply a proposed architectural design to real-world problems. It has also become important to compare different design approaches and to develop methodologies for architecture evaluation. Some research areas, such as routing and caching, have drawn considerable attention; others, such as trust management, effective and efficient application of cryptography, experience from prototyping, and lessons from experimentation, have yet to be fully explored.
This workshop presents original contributions on Information-Centric Networking architecture topics, specific algorithms and protocols, as well as results from implementations and experimentation, with an emphasis on applying the new approach to real-world problems and on experimental investigations. New this year, the workshop also includes a poster/demo session.
We received a large number of submissions and, as the workshop is limited in time, we were only able to accept 20% of them as full papers. To promote the sharing of the latest results among workshop attendees, we also accepted 17% of the submissions as posters or demos.
Revisiting Resource Pooling: The Case for In-Network Resource Sharing.
We question the widely adopted view of in-network caches acting as temporary storage for the most popular content in Information-Centric Networks (ICN). Instead, we propose that in-network storage is used as a place of temporary custody for incoming content, in a store-and-forward manner. Given this functionality of in-network storage, senders push content into the network in an open-loop manner to take advantage of underutilised links. When content hits a bottleneck link, it is re-routed through alternative uncongested paths. If no alternative paths exist, incoming content is temporarily stored in in-network caches, while the system enters a closed-loop, back-pressure mode of operation to avoid congestive collapse.
Our proposal follows in spirit the resource pooling principle, which, however, has so far been restricted to end-to-end resources and paths. We extend this principle to also take advantage of in-network resources, in terms of both the multiplicity of available sub-paths (as compared to multihomed users only) and in-network cache space. We call the proposed principle the In-Network Resource Pooling Principle (INRPP). Under INRPP, congestion, or increased contention over a link, is dealt with locally in a hop-by-hop manner instead of end-to-end. INRPP utilises resources throughout the network more efficiently and opens up new research directions in multipath routing and congestion control.
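The hop-by-hop behavior described above can be condensed into a per-chunk decision at each router. The sketch below is our own simplification; the function name, return values, and the single-alternative-path assumption are illustrative, not taken from the paper.

```python
# Per-hop decision for an incoming content chunk, under simplified
# assumptions (one primary path, one alternative sub-path, one cache).

def handle_chunk(primary_free, alt_free, cache_free):
    """Return the action a router takes for an incoming content chunk."""
    if primary_free:
        return "forward"        # open-loop push over the primary path
    if alt_free:
        return "detour"         # re-route via an uncongested sub-path
    if cache_free:
        return "store"          # temporary custody in the in-network cache
    return "backpressure"       # closed-loop mode to avoid congestive collapse

print(handle_chunk(True, False, False))   # forward
print(handle_chunk(False, True, True))    # detour
print(handle_chunk(False, False, True))   # store
print(handle_chunk(False, False, False))  # backpressure
```

The ordering of the branches mirrors the abstract: pooled sub-paths are preferred over cache space, and back-pressure is the last resort once local resources are exhausted.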
Mind the gap: modelling video delivery under expected periods of disconnection.
In this work we model video delivery under expected periods of disconnection, such as those experienced in public transportation systems. Our main goal is to quantify the gains of user collaboration in terms of Quality of Experience (QoE) in the context of intermittently available and bandwidth-limited Wi-Fi connectivity. Under the assumption that Wi-Fi connectivity is available within underground stations but absent between them, we first define a mathematical model that describes content distribution under these conditions and present the users' QoE function in terms of undisrupted video playback. Next, we extend this model to cover collaboration between users, who share content in a peer-to-peer (P2P) fashion. Lastly, we evaluate our model on real data from the London Underground network, where we investigate the feasibility of content distribution, only to find that collaboration between users significantly increases their QoE.
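A toy version of such a model tracks a playback buffer that fills at stations and drains in tunnels. Every name and number below (dwell times, tunnel times, rates) is invented for illustration and does not come from the paper.

```python
# Toy stall-time calculation: Wi-Fi only inside stations, none in tunnels.
# Buffer is measured in seconds of video; rates are in Mbit/s.

def stall_time(segments, download_rate, bitrate, start_buffer=0.0):
    """segments: list of (station_dwell_s, tunnel_s) pairs along the journey.
    Returns the total seconds of disrupted (stalled) playback."""
    buf = start_buffer
    stalled = 0.0
    for dwell, tunnel in segments:
        # At the station: download while playing back.
        buf = max(buf + dwell * (download_rate / bitrate) - dwell, 0.0)
        # In the tunnel: drain the buffer; stall once it empties.
        if buf >= tunnel:
            buf -= tunnel
        else:
            stalled += tunnel - buf
            buf = 0.0
    return stalled

journey = [(30, 90), (30, 90), (30, 90)]  # 30 s dwell, 90 s tunnel, x3
print(stall_time(journey, download_rate=6.0, bitrate=2.0))  # 90.0
```

With these made-up numbers each station buys 60 s of video against a 90 s tunnel, so playback stalls 30 s per segment; P2P sharing between passengers would, in the paper's setting, raise the effective download rate and shrink exactly this stall term.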
Decentralized Solutions for Monitoring Large-Scale Software-Defined Networks
Software-Defined Networking (SDN) technologies offer the possibility to automatically and frequently reconfigure network resources by enabling simple and flexible network programmability. One of the key challenges in developing a new SDN-based solution is the design of a monitoring framework that can provide frequent and consistent updates to heterogeneous management applications. To cope with the requirements of large-scale networks (i.e., a large number of geographically dispersed devices), a distributed monitoring approach is required. This PhD project aims to investigate decentralized solutions for resource monitoring in SDN. The research will focus on the design of monitoring entities for the collection and processing of information at different network locations, and will investigate how these entities can efficiently share their knowledge in a distributed management environment.
Understanding Sharded Caching Systems
Sharding is a method for allocating data items to the nodes of a distributed caching or storage system, based on the result of a hash function computed on the item identifier. It is ubiquitously used in key-value stores, CDNs and many other applications. Although considerable work has focused on the design and implementation of such systems, there is limited theoretical understanding of their performance under realistic operational conditions. In this paper we fill this gap by providing a thorough modeling of sharded caching systems, focusing particularly on load balancing and caching performance. Our analysis provides important insights that can be applied to optimize the design and configuration of sharded caching systems.
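The mechanism described, hashing an item identifier to pick a node, can be sketched in a few lines. The hash function (MD5) and the shard count are our own choices for illustration, not the paper's.

```python
import hashlib

def shard_for(item_id: str, n_shards: int) -> int:
    """Stable shard index derived from a hash of the item identifier."""
    digest = hashlib.md5(item_id.encode()).hexdigest()
    return int(digest, 16) % n_shards

# Load-balancing view: hash many items and count the per-shard load.
n_shards = 4
load = [0] * n_shards
for i in range(10_000):
    load[shard_for(f"item-{i}", n_shards)] += 1

print(load)  # roughly 2,500 items per shard, as hashing spreads the load
```

Because the mapping depends only on the identifier, any front-end resolves an item to the same cache node without coordination; the near-uniform per-shard counts illustrate the load-balancing behavior the paper's analysis models (skewed item popularity, which the analysis also covers, would perturb this balance).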