
    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review.

    Matching Theory for Future Wireless Networks: Fundamentals and Applications

    The emergence of novel wireless networking paradigms such as small cell and cognitive radio networks has forever transformed the way in which wireless systems are operated. In particular, the need for self-organizing solutions to manage the scarce spectral resources has become a prevalent theme in many emerging wireless systems. In this paper, the first comprehensive tutorial on the use of matching theory, a Nobel Prize-winning framework, for resource management in wireless networks is developed. To cater for the unique features of emerging wireless networks, a novel, wireless-oriented classification of matching theory is proposed. The key solution concepts and algorithmic implementations of this framework are then presented. These concepts are subsequently applied in three important wireless networking areas to demonstrate the usefulness of this analytical tool. Results show how matching theory can effectively improve the performance of resource allocation in all three applications discussed.
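
    The abstract names matching theory as the framework but does not spell out an algorithm. As a rough illustration only, the sketch below implements deferred acceptance (Gale-Shapley), the canonical matching-theory algorithm; the users-propose-to-base-stations scenario and the preference lists are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of one-to-one deferred acceptance (Gale-Shapley).
# Hypothetical scenario: users ("u1", ...) propose to small-cell base stations
# ("s1", ...) according to preference lists; each station keeps its best proposer.

def deferred_acceptance(user_prefs, station_prefs):
    """Match each user to at most one station; users propose, stations accept or reject."""
    # rank[s][u]: position of user u in station s's preference list (lower is better)
    rank = {s: {u: i for i, u in enumerate(prefs)}
            for s, prefs in station_prefs.items()}
    free = list(user_prefs)                    # users still looking for a station
    next_idx = {u: 0 for u in user_prefs}      # index of the next station each user proposes to
    match = {}                                 # station -> currently held user

    while free:
        u = free.pop(0)
        if next_idx[u] >= len(user_prefs[u]):  # user has exhausted all options
            continue
        s = user_prefs[u][next_idx[u]]
        next_idx[u] += 1
        if s not in match:
            match[s] = u                       # station was free: tentatively accept
        elif rank[s][u] < rank[s][match[s]]:
            free.append(match[s])              # station prefers u: bump the previous user
            match[s] = u
        else:
            free.append(u)                     # rejected: u will try its next choice

    return {u: s for s, u in match.items()}    # user -> station

if __name__ == "__main__":
    user_prefs = {"u1": ["s1", "s2", "s3"],
                  "u2": ["s1", "s3", "s2"],
                  "u3": ["s2", "s1", "s3"]}
    station_prefs = {"s1": ["u2", "u1", "u3"],
                     "s2": ["u3", "u1", "u2"],
                     "s3": ["u1", "u2", "u3"]}
    print(deferred_acceptance(user_prefs, station_prefs))
    # -> {'u2': 's1', 'u3': 's2', 'u1': 's3'}
```

    The output is a stable matching: no user and station would both prefer each other over their assigned partners, which is the solution concept that makes matching attractive for decentralized resource allocation.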

    Active Virtual Network Management Prediction: Complexity as a Framework for Prediction, Optimization, and Assurance

    Research into active networking has provided the incentive to revisit what have traditionally been classified as distinct properties and characteristics of information transfer, such as protocol versus service; at a more fundamental level, this paper considers the blending of computation and communication by means of complexity. The specific service examined in this paper is network self-prediction enabled by Active Virtual Network Management Prediction. Computation/communication is analyzed via Kolmogorov Complexity. The result is a mechanism to understand and improve the performance of active networking, and of Active Virtual Network Management Prediction in particular. The Active Virtual Network Management Prediction mechanism allows information, in various states of algorithmic and static form, to be transported in the service of prediction for network management. The results are generally applicable to algorithmic transmission of information. Kolmogorov Complexity is used and experimentally validated as a theory describing the relationship among algorithmic compression, complexity, and prediction accuracy within an active network. Finally, the paper concludes with a complexity-based framework for Information Assurance that attempts to take a holistic view of vulnerability analysis.
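
    Kolmogorov complexity itself is uncomputable, so compression-based estimates are the usual practical stand-in. The sketch below is a generic illustration of that idea only, using zlib as the compressor; it does not reproduce the paper's active-network experiments.

```python
# Estimate algorithmic complexity by compressed size: a highly regular
# (algorithmically simple, hence predictable) stream compresses far more
# than an incompressible random stream of the same length.
import random
import zlib

def complexity_estimate(data: bytes) -> int:
    """Upper-bound proxy for Kolmogorov complexity: zlib-compressed size in bytes."""
    return len(zlib.compress(data, level=9))

# Regular stream: a short program (a repeating pattern) generates it.
regular = bytes(i % 16 for i in range(4096))

# Random stream: essentially no shorter description than the data itself.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(4096))

print("regular stream estimate:", complexity_estimate(regular))  # small
print("random stream estimate: ", complexity_estimate(noisy))    # close to 4096
```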

    Impersonal efficiency and the dangers of a fully automated securities exchange

    This report identifies impersonal efficiency as a driver of market automation during the past four decades, and speculates about the future problems it might pose. The ideology of impersonal efficiency is rooted in a mistrust of financial intermediaries such as floor brokers and specialists. Impersonal efficiency has guided the development of market automation towards transparency and impersonality, at the expense of human trading floors. The result has been an erosion of the informal norms and human judgment that characterize less anonymous markets. We call impersonal efficiency an ideology because we do not think that impersonal markets are always superior to markets built on social ties. This report traces the historical origins of this ideology, considers the problems it has already created in the Flash Crash of 2010, and asks what potential risks it might pose in the future.

    Approximate Computation and Implicit Regularization for Very Large-scale Data Analysis

    Database theory and database practice are typically the domain of computer scientists who adopt what may be termed an algorithmic perspective on their data. This perspective is very different from the more statistical perspective adopted by statisticians, scientific computers, machine learners, and other researchers who work on what may be broadly termed statistical data analysis. In this article, I will address fundamental aspects of this algorithmic-statistical disconnect, with an eye to bridging the gap between these two very different approaches. A concept that lies at the heart of this disconnect is that of statistical regularization, a notion that has to do with how robust the output of an algorithm is to the noise properties of the input data. Although it is nearly completely absent from computer science, which historically has taken the input data as given and modeled algorithms discretely, regularization in one form or another is central to nearly every application domain that applies algorithms to noisy data. By using several case studies, I will illustrate, both theoretically and empirically, the nonobvious fact that approximate computation, in and of itself, can implicitly lead to statistical regularization. This and other recent work suggests that, by exploiting in a more principled way the statistical properties implicit in worst-case algorithms, one can in many cases satisfy the bicriteria of having algorithms that are scalable to very large-scale databases and that also have good inferential or predictive properties. Comment: To appear in the Proceedings of the 2012 ACM Symposium on Principles of Database Systems (PODS 2012).
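
    The central claim, that approximate computation can itself act as a regularizer, can be made concrete with a generic toy example (not one of the paper's case studies): stopping an iterative least-squares solver after a few gradient steps behaves much like an explicit ridge penalty, shrinking the estimate toward zero and making it less sensitive to noise than the exact solution.

```python
# Toy illustration: early-stopped gradient descent on noisy least squares
# acts as implicit regularization relative to the exact least-squares solve.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 2.0 * rng.normal(size=n)        # noisy observations

def gradient_descent(A, b, steps, lr=1e-3):
    """Approximate least-squares solution obtained by a fixed number of gradient steps."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * A.T @ (A @ x - b)              # gradient of 0.5 * ||Ax - b||^2
    return x

x_exact = np.linalg.lstsq(A, b, rcond=None)[0]   # exact answer fits the noise as well
x_early = gradient_descent(A, b, steps=25)       # approximate, early-stopped answer

# The early-stopped iterate is a shrunken version of the exact solution,
# so its norm is smaller; compare both against the noiseless ground truth.
print("norm of exact solution:         ", np.linalg.norm(x_exact))
print("norm of early-stopped solution: ", np.linalg.norm(x_early))
print("error of exact vs truth:        ", np.linalg.norm(x_exact - x_true))
print("error of early-stopped vs truth:", np.linalg.norm(x_early - x_true))
```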

    A new taxonomy for distributed computer systems based upon operating system structure

    Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources themselves are too diversified to provide a consistent classification, the structure upon which resources are built and shared is examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers), and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant to the client rather than to the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.
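
    For reference, the four machine classes named in the abstract can be written down as a simple enumeration; only the class names come from the text, and the structure is an illustrative choice rather than anything specified by the report.

```python
# The four machine classes named in the abstract, captured as an enumeration.
from enum import Enum

class MachineClass(Enum):
    UNIPROCESSOR = "uniprocessor"
    COMPUTER_NETWORK = "computer network"
    NETWORK_COMPUTER = "network computer (fully distributed processing system)"
    CONTROL_MULTIPROCESSOR = "algorithm and/or data control multiprocessor"

print([m.value for m in MachineClass])
```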
