1,054 research outputs found

    Emergent behaviors in the Internet of things: The ultimate ultra-large-scale system

    Get PDF
    To reach its potential, the Internet of Things (IoT) must break down the silos that limit applications' interoperability and hinder their manageability. Doing so leads to the building of ultra-large-scale systems (ULSS) in several areas, including autonomous vehicles, smart cities, and smart grids. The scope of ULSS is both large and complex. Thus, the authors propose Hierarchical Emergent Behaviors (HEB), a paradigm that builds on the concepts of emergent behavior and hierarchical organization. Rather than explicitly programming all possible decisions in the vast space of ULSS scenarios, HEB relies on the emergent behaviors induced by local rules at each level of the hierarchy. The authors discuss the modifications to classical IoT architectures required by HEB, as well as the new challenges. They also illustrate the HEB concepts in reference to autonomous vehicles. This use case paves the way to the discussion of new lines of research. Damian Roca's work was supported by a Doctoral Scholarship provided by Fundación La Caixa. This work has been supported by the Spanish Government (Severo Ochoa grant SEV2015-0493) and by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P). Peer Reviewed. Postprint (author's final draft).
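    As a purely illustrative sketch of the HEB idea (not the authors' system), the following Python fragment lets platoons emerge at level 1 from simple local rules between nearby vehicles, and applies level-2 rules only to the platoons that emerged; all attributes, thresholds, and rules are invented for this example.

    # Toy sketch of hierarchical emergent behavior (illustrative only):
    # level-1 local rules group nearby vehicles with similar speeds into platoons;
    # level-2 rules then operate on whole platoons, never on individual vehicles.
    def form_platoons(vehicles, gap=10.0, speed_tol=2.0):
        """Level 1 local rule: merge a vehicle into the preceding platoon when the
        gap and speed difference to its last vehicle are small; platoons emerge
        without being explicitly programmed."""
        platoons = []
        for v in sorted(vehicles, key=lambda v: v["pos"]):
            if platoons and (v["pos"] - platoons[-1][-1]["pos"] <= gap
                             and abs(v["speed"] - platoons[-1][-1]["speed"]) <= speed_tol):
                platoons[-1].append(v)
            else:
                platoons.append([v])
        return platoons

    def coordinate_platoons(platoons):
        """Level 2: each platoon adapts its speed toward the platoon ahead of it."""
        for behind, ahead in zip(platoons, platoons[1:]):
            target = ahead[0]["speed"]
            for v in behind:
                v["speed"] += 0.1 * (target - v["speed"])
        return platoons

    vehicles = [{"pos": p, "speed": s} for p, s in
                [(0, 28), (8, 29), (15, 30), (60, 25), (66, 24)]]
    print([len(p) for p in coordinate_platoons(form_platoons(vehicles))])  # -> [3, 2]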

    Universal critical exponent in class D superconductors

    Full text link
    We study a physical system consisting of non-interacting quasiparticles in disordered superconductors that have neither time-reversal nor spin-rotation invariance. This system belongs to class D within the recent classification scheme of random matrix ensembles (RME), and its phase diagram contains three different phases: a metallic phase and two distinct localized phases with different quantized thermal Hall conductances. We find that the critical exponents describing the different transitions (insulator-to-insulator and insulator-to-metal) are identical within the error of the numerical calculations, and also that the critical disorder of the insulator-to-metal transition is energy independent. Comment: 3.5 pages, 4 figures.
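    For context, the critical exponent in such disorder-driven localization transitions is conventionally defined through the divergence of the localization length; a generic scaling form (our notation, not reproduced from the paper) is

        \xi(W) \propto |W - W_c|^{-\nu}, \qquad
        \Lambda(W, L) = f\!\left( L^{1/\nu}\,(W - W_c) \right),

    where W is the disorder strength, W_c the critical disorder, L the system size, \Lambda a dimensionless finite-size quantity, and \nu the critical exponent whose value is compared across the insulator-to-insulator and insulator-to-metal transitions.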

    Critical fixed points in class D superconductors

    Full text link
    We study in detail a critical line on the phase diagram of the Cho-Fisher network model separating three different phases: a metallic phase and two distinct localized phases with different quantized thermal Hall conductances. This system describes non-interacting quasiparticles in disordered superconductors that have neither time-reversal nor spin-rotational invariance. We find that, in addition to a tricritical fixed point W_T on that critical line, there exists an additional repulsive fixed point W_N (where the vortex disorder concentration W_N < W_T), which splits the RG flow into opposite directions: toward a clean Ising model at W = 0 and toward W_T. Comment: 3 pages, 1 figure.
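    Schematically (our notation, purely illustrative), the flow described above can be summarized by a one-parameter beta function for the vortex disorder concentration W along the critical line,

        \frac{dW}{d\ell} = \beta(W), \qquad \beta(0) = \beta(W_N) = \beta(W_T) = 0, \qquad \beta'(W_N) > 0,

    so that disorder below the repulsive fixed point W_N flows toward the clean Ising model at W = 0, while disorder between W_N and W_T flows toward the tricritical point W_T.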

    A general guide to applying machine learning to computer architecture

    Get PDF
    The resurgence of machine learning since the late 1990s has been enabled by significant advances in computing performance and the growth of big data. The ability of these algorithms to detect complex patterns in data that would be extremely difficult to identify manually helps to produce effective predictive models. Whilst computer architects have been accelerating the performance of machine learning algorithms with GPUs and custom hardware, there have been few implementations leveraging these algorithms to improve computer system performance. The work that has been conducted, however, has produced considerably promising results. The purpose of this paper is to serve as a foundational base and guide for future computer architecture research seeking to make use of machine learning models to improve system efficiency. We describe a method that highlights when, why, and how to utilize machine learning models for improving system performance, and provide a relevant example showcasing the effectiveness of applying machine learning in computer architecture. We describe a process of data generation at every execution quantum, together with parameter engineering. This is followed by a survey of a set of popular machine learning models. We discuss their strengths and weaknesses and provide an evaluation of implementations for the purpose of creating a workload performance predictor for different core types in an x86 processor. The predictions can then be exploited by a scheduler for heterogeneous processors to improve the system throughput. The algorithms of focus are stochastic gradient descent-based linear regression, decision trees, random forests, artificial neural networks, and k-nearest neighbors. This work has been supported by the European Research Council (ERC) Advanced Grant RoMoL (Grant Agreement 321253) and by the Spanish Ministry of Science and Innovation (contract TIN 2015-65316P). Peer Reviewed. Postprint (published version).
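    As a rough sketch of the kind of workflow the paper describes (not the authors' implementation), the following Python fragment trains a regression model on synthetic per-quantum samples and uses its predictions to choose a core type; the features, core types, and data are all invented for this example.

    # Hypothetical example: predict the big-over-little speedup of a workload from
    # per-execution-quantum counters and let a scheduler pick the core type.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Synthetic "performance counters" sampled once per execution quantum:
    # [baseline IPC, cache miss rate, branch miss rate] -- all invented.
    X = rng.random((2000, 3))
    # Invented ground truth: speedup of a big core over a little core.
    y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.05, 2000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("test R^2:", model.score(X_test, y_test))

    def choose_core(counters, migration_cost=1.1):
        """Toy scheduling decision: use the big core only if the predicted
        speedup outweighs an assumed migration cost."""
        predicted = model.predict(counters.reshape(1, -1))[0]
        return "big" if predicted > migration_cost else "little"

    print(choose_core(np.array([0.9, 0.1, 0.05])))  # -> "big" for this sample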

    Dynamics of High-Technology Firms in the Silicon Valley

    Get PDF
    The pace of technological innovation since World War II has accelerated dramatically following the commercial exploitation of the Internet. Since the mid-1990s, fiber optics capacity (infrastructure for transmission of information, including voice and data) has increased more than a hundredfold thanks to a new technology, dense wave division multiplexing, and Internet traffic has increased more than 1,000-fold. The dramatic advances in information technology provide excellent examples of the critical relevance of knowledge in the development of competitive advantages. Silicon Valley (SV), which about fifty years ago was an agricultural region, became the center of dramatic technological and organizational transformations. In fact, most of the present high-tech companies did not exist twenty years ago. The venture capital contribution to the local economy is quite important not only due to the magnitude of the financial investment (venture investment in SV during 2000 surpassed 25,000 million dollars) but also because of the extent and quality of the networks (management teams, senior employees, customers, providers, etc.) that it brings to emerging companies. How do new technologies develop? What is the role of private and public investment in the financing of R&D? Which are the most dynamic agents and how do they interact? How are new companies created and how do they evolve? The discussion of these questions is the focus of the current work. Keywords: technological development, R&D, networks.

    Quasi-stationary distributions as centrality measures of reducible graphs

    Get PDF
    A random walk can be used as a centrality measure of a directed graph. However, if the graph is reducible, the random walk will be absorbed into some subset of nodes and will never visit the rest of the graph. In Google PageRank the problem was solved by the introduction of uniform random jumps with some probability. Up to the present, there is no clear criterion for the choice of this parameter. We propose to use a parameter-free centrality measure based on the notion of a quasi-stationary distribution. Specifically, we suggest four quasi-stationary-based centrality measures, analyze them, and conclude that they produce approximately the same ranking. The new centrality measures can be applied in spam detection to detect "link farms" and in image search to find photo albums.
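    A minimal numerical sketch of the idea (our construction, not necessarily the paper's algorithm): for a reducible graph, restrict the random walk to the states from which absorption is still possible and take the normalized left Perron eigenvector of that substochastic block as a quasi-stationary centrality; the toy graph below is invented for illustration.

    # Toy quasi-stationary centrality for a reducible directed graph (illustrative).
    import numpy as np

    # Adjacency matrix; node 3 acts as an absorbing sink (a "link farm" in miniature):
    # the walk can enter it but never returns to the other nodes.
    A = np.array([
        [0, 1, 1, 0],
        [1, 0, 1, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 1],
    ], dtype=float)
    P = A / A.sum(axis=1, keepdims=True)      # row-stochastic transition matrix

    transient = [0, 1, 2]                      # nodes from which absorption is possible
    Q = P[np.ix_(transient, transient)]        # substochastic block over those nodes

    # Quasi-stationary distribution: normalized left Perron eigenvector of Q,
    # i.e. the walk's distribution conditioned on not yet having been absorbed.
    eigvals, eigvecs = np.linalg.eig(Q.T)
    qsd = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
    qsd /= qsd.sum()
    print(dict(zip(transient, qsd.round(3))))  # centrality ranking of the transient nodes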