    Parallel and Distributed Machine Learning Algorithms for Scalable Big Data Analytics

    This editorial is for the Special Issue of the journal Future Generation Computing Systems, consisting of the selected papers of the 6th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics (ParLearning 2017). In this editorial, we give a high-level overview of the four papers contained in this special issue, along with references to related work.

    High-performance simulation and simulation methodologies

    The realization of high-performance simulation necessitates sophisticated simulation experimentation and optimization; this often requires non-trivial amounts of computing power. Distributed computing techniques and systems found in areas such as High Performance Computing (HPC), High Throughput Computing (HTC), e-infrastructures, grid and cloud computing can provide the required computing capacity for the execution of large and complex simulations. This extends the long tradition of adopting advances in distributed computing in simulation, as evidenced by contributions from the parallel and distributed simulation community. There has arguably been a recent acceleration of innovation in distributed computing tools and techniques. This special issue presents the opportunity to showcase recent research that is assimilating these new advances in simulation. It brings together a contemporary collection of work showcasing original research in the advancement of simulation theory and practice with distributed computing. This special issue has two parts. The first part (published in the preceding issue of the journal) included seven studies in high performance simulation that support applications including the study of epidemics, social networks, urban mobility and real-time embedded and cyber-physical systems. This second part focuses on original research in high performance simulation that supports a range of methods including DEVS, Petri nets and DES. Of the four papers for this issue, the manuscript by Bergero et al. (2013), which was submitted, reviewed and accepted for the special issue, was published in an earlier issue of SIMULATION as the author requested early publication.

    Application and support for high-performance simulation

    High performance simulation that supports sophisticated simulation experimentation and optimization can require non-trivial amounts of computing power. Advanced distributed computing techniques and systems found in areas such as High Performance Computing (HPC), High Throughput Computing (HTC), grid computing, cloud computing and e-Infrastructures are needed to provide effectively the computing power needed for the high performance simulation of large and complex models. In simulation there has been a long tradition of translating and adopting advances in distributed computing, as shown by contributions from the parallel and distributed simulation community. This special issue brings together a contemporary collection of work showcasing original research in the advancement of simulation theory and practice with distributed computing. This special issue is divided into two parts. This first part focuses on research pertaining to high performance simulation that supports a range of applications including the study of epidemics, social networks, urban mobility and real-time embedded and cyber-physical systems. Compared to other simulation techniques, agent-based modeling and simulation is relatively new; however, it is increasingly being used to study large-scale problems. Agent-based simulations present challenges for high performance simulation as they can be complex and computationally demanding, and it is therefore not surprising that this special issue includes several articles on the high performance simulation of such systems.

    Stable Leader Election in Population Protocols Requires Linear Time

    A population protocol *stably elects a leader* if, for all n, starting from an initial configuration with n agents each in an identical state, with probability 1 it reaches a configuration y that is correct (exactly one agent is in a special leader state ℓ) and stable (every configuration reachable from y also has a single agent in state ℓ). We show that any population protocol that stably elects a leader requires Ω(n) expected "parallel time" (that is, Ω(n²) expected total pairwise interactions) to reach such a stable configuration. Our result also informs the understanding of the time complexity of chemical self-organization by showing an essential difficulty in generating exact quantities of molecular species quickly. Comment: accepted to Distributed Computing special issue of invited papers from DISC 2015; significantly revised proof structure and intuitive explanation
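    For intuition on the parallel-time measure, the folklore "fratricide" protocol, in which every agent starts as a leader and one of two interacting leaders is demoted, stably elects a leader in Θ(n) expected parallel time, matching the lower bound. The sketch below simulates it under a uniform-random scheduler; it is an illustrative toy, not the paper's construction, and the names (`elect_leader`, `seed`) are assumptions for this example.

    ```python
    import random

    def elect_leader(n, seed=0):
        """Simulate the folklore 'fratricide' leader-election protocol:
        all n agents start as leaders; when two leaders interact,
        the responder is demoted.  Returns the number of pairwise
        interactions until exactly one leader remains."""
        rng = random.Random(seed)
        is_leader = [True] * n      # every agent starts in the leader state
        leaders = n
        interactions = 0
        while leaders > 1:
            i, j = rng.sample(range(n), 2)  # scheduler picks a random pair
            interactions += 1
            if is_leader[i] and is_leader[j]:
                is_leader[j] = False        # only one of the two survives
                leaders -= 1
        return interactions

    # Parallel time is interactions / n.  The expected number of total
    # interactions here is Theta(n^2), i.e. Theta(n) parallel time,
    # which is exactly the order of the lower bound shown in the paper.
    ```

    Running this for a few values of n shows the interaction count growing roughly quadratically, consistent with the Ω(n²) bound on total interactions.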