76 research outputs found

    The Origins of Computational Mechanics: A Brief Intellectual History and Several Clarifications

    The principal goal of computational mechanics is to define pattern and structure so that the organization of complex systems can be detected and quantified. Computational mechanics developed from efforts in the 1970s and early 1980s to identify strange attractors as the mechanism driving weak fluid turbulence, via the method of reconstructing attractor geometry from measurement time series, and from mid-1980s efforts to estimate equations of motion directly from complex time series. By providing a mathematical and operational definition of structure, it addressed weaknesses of these early approaches to discovering patterns in natural systems. Since then, computational mechanics has led to a range of results, from theoretical physics and nonlinear mathematics to diverse applications: from closed-form analysis of Markov and non-Markov stochastic processes, ergodic or nonergodic, and their measures of information and intrinsic computation, to complex materials, deterministic chaos, and intelligence in Maxwellian demons, to quantum compression of classical processes and the evolution of computation and language. This brief review clarifies several misunderstandings and addresses concerns recently raised regarding early works in the field (1980s). We show that misguided evaluations of the contributions of computational mechanics are groundless and stem from a lack of familiarity with its basic goals and from a failure to consider its historical context. For all practical purposes, its modern methods and results largely supersede the early works. This not only renders recent criticism moot and shows the solid ground on which computational mechanics stands but, most importantly, shows the significant progress achieved over three decades and points to the many intriguing and outstanding challenges in understanding the computational nature of complex dynamic systems.
    Comment: 11 pages, 123 citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/cmr.ht

    Computation offloading in beyond 5G/6G networks with edge computing: implications and challenges

    The emerging beyond 5G/6G networks come with novel, latency-sensitive and computation-intensive applications that require enhanced network performance and infrastructure to meet the expected quality of experience for end users. To cope with this challenge, computation offloading leverages the benefits of multi-access edge computing to migrate application tasks requiring additional computing resources, reducing execution delay. Although the benefits of introducing offloading mechanisms into the network might be straightforward, the implementation is not trivial due to various communication and computation trade-offs that must be made to obtain optimal offloading decisions. In this paper, we provide an overview of computation offloading with an emphasis on the networking perspective, looking at different offloading decisions, current research efforts, and the challenges that may be encountered while building an efficient and robust offloading mechanism. In addition, we provide our view on the evolution of computation offloading in 6G networks to support novel applications through enriched infrastructure and powerful artificial intelligence techniques.
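    The core communication/computation trade-off the abstract refers to can be sketched with a simple latency-only decision rule: offload when upload delay plus edge execution time beats local execution time. This is a minimal illustrative model, not the paper's method; the function name and parameters are hypothetical.

    ```python
    def should_offload(task_cycles, local_cps, edge_cps, input_bits, uplink_bps):
        """Naive latency-only offloading rule: offload if upload time plus
        edge execution time is less than local execution time. Real schedulers
        also weigh energy, queueing delay, and result-download time."""
        t_local = task_cycles / local_cps       # seconds to run on the device
        t_edge = input_bits / uplink_bps + task_cycles / edge_cps
        return t_edge < t_local

    # A compute-heavy task with a small input favors the edge (0.66 s vs 5 s):
    print(should_offload(5e9, 1e9, 10e9, 8e6, 50e6))   # True
    # A data-heavy task on a slow uplink stays local (160 s upload dominates):
    print(should_offload(1e8, 1e9, 10e9, 8e9, 50e6))   # False
    ```

    Even this toy rule shows why offloading decisions are network-dependent: the same task flips between local and remote execution as the uplink bandwidth changes.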

    How Has the Literature on Gini's Index Evolved in the Past 80 Years?

    The Gini coefficient or index is perhaps one of the most used indicators of social and economic conditions. From its first proposal in English in 1921 to the present, a large number of papers on the Gini index have been written and published. Going through these papers represents a demanding task. The aim of this survey paper is to help the reader navigate the major developments of the literature and to incorporate recent theoretical research results, with a particular focus on different formulations and interpretations of the Gini index, its social welfare implications, and source and subgroup decomposition.
    Keywords: Gini coefficient or index; social welfare; decomposition; computation
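    As a concrete anchor for the "different formulations" the survey covers, the most common one is the mean-absolute-difference form G = (Σᵢ Σⱼ |xᵢ − xⱼ|) / (2 n² μ), computable in a few lines. This is a minimal sketch; the function name is illustrative.

    ```python
    def gini(values):
        """Gini index via the mean-absolute-difference formulation:
        G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean).
        Returns 0 for perfect equality, approaching 1 as one unit
        holds everything."""
        n = len(values)
        mean = sum(values) / n
        if mean == 0:
            return 0.0
        total = sum(abs(a - b) for a in values for b in values)
        return total / (2 * n * n * mean)

    print(gini([1, 1, 1, 1]))  # perfect equality -> 0.0
    print(gini([0, 0, 0, 1]))  # one of four holds everything -> 0.75
    ```

    The O(n²) double sum above is the textbook definition; sorting-based O(n log n) formulations the survey discusses give the same value.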

    System Theories: Science, War, Construction

    This paper examines the confluence of information and systems theory and the production of architectural building systems. The two are interrelated through the adaptation of pre-WWII techniques and knowledge that were transformed in the years leading up to, during, and immediately after the war. Much of the progress in the evolution of computation is attributed to the large-scale deployment of highly acute mathematical minds to the problem of interpreting encrypted messages sent from Axis command centers to troops on land, at sea, and in the air. Known as the 'code breakers', these individuals were crucial to the advancement of computation, cybernetics, and systems theory. After laying out the theoretical implications of systems theory, this paper analyzes two case studies of wartime building systems: in one, a wartime factory was retooled for peacetime housing production; in the other, pipe factories were retooled to produce bomb casings. Case 1: Packaged House System. At the end of 1941, Konrad Wachsmann and Walter Gropius, German émigrés to the U.S., began to collaborate on a project for industrialized modular housing, which became known as the 'Packaged House'. Wachsmann designed a 'universal joint' that would give great structural stability to the joining of prefabricated panels. The jointing system was based on 2-, 3-, and 4-way connections between panels. All surfaces were conceived to be built from the same panels: exterior walls, interior partitions, floors, ceilings, and the roof. In February of 1942, the National Housing Agency allocated $153 million for the housing of displaced defense workers. By May 1945, with the end of WWII, the house was still not in production, despite enthusiasm for the project. But the house could have a second chance in the enormous postwar demand from returning GIs and their families.
    The General Panel Corporation raised funds to take over the former Lockheed factory in Burbank, California, which had been built to produce wartime aircraft under government contracts; it was a classic example of a factory that made armaments being retooled to make houses. Case 2: Tubi Innocenti: scaffolding system. Ferdinando Innocenti, born 1891, experimented with iron pipe and tubes and in 1933 began producing tube scaffolding, a rapid system for mounting and dismantling a combination of tubes and a mechanical fastener. During the war years the Innocenti plants supplied bodies for 150 and 250 kg airplane bombs, for which cut-down tubes were used, and also produced 15% of all bullets made in Italy. After the war, Innocenti continued to make scaffolding and all other types of pipe and tube for industry, and then developed a scooter: the Lambretta. The idea came from vehicles dropped in Rome by British paratroopers.
    Conference co-organized by the Institute of Fine Arts; Canadian Centre for Architecture, Montreal; and Princeton University's School of Architecture

    Spectral Representation of Some Computably Enumerable Sets With an Application to Quantum Provability

    We propose a new type of quantum computer which is used to prove a spectral representation for a class F of computable sets. When S in F codes the theorems of a formal system, the quantum computer produces through measurement all theorems and proofs of the formal system. We conjecture that the spectral representation is valid for all computably enumerable sets. The conjecture implies that the theorems of a general formal system, like Peano Arithmetic or ZFC, can be produced through measurement; however, it is unlikely that the quantum computer can produce the proofs as well, as in the particular case of F. The analysis suggests that showing the provability of a statement is different from writing up the proof of the statement.
    Comment: 12 pages, LaTeX2e, no figures

    Parallel processing over a peer-to-peer network: constructing the poor man’s supercomputer

    The aggregation of typical home computers through a peer-to-peer (P2P) framework over the Internet would yield a virtual supercomputer of unmatched processing power, 95% of which is presently left unutilized. However, the global community appears to be still hesitant at tapping into the well of unharnessed potential offered by exploiting distributed computing. Reasons include the lack of personal incentive for participants, and the high degree of expertise required from application developers. Our vision is to tackle the aforementioned obstacles by building a P2P system capable of deploying user-defined tasks onto the network for distributed execution. Users would only be expected to write standard concurrent code accessing our application programming interface, and may rely on the system to transparently provide for optimal task distribution, process migration, message delivery, global state, fault tolerance, and recovery. Strong mobility during process migration is achieved by pre-processing the source code. Our results indicate that near-linear efficiencies – approximately 94% ± 2% of the optimal – may be obtained for adequately coarse-grained applications, even when deployed on a heterogeneous network.
    Peer-reviewed
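    The "near-linear efficiency" figure quoted above is the standard parallel-efficiency ratio E = T₁ / (p · Tₚ). A quick sketch, with hypothetical timings chosen for illustration rather than taken from the paper:

    ```python
    def parallel_efficiency(t_serial, t_parallel, workers):
        """Parallel efficiency E = T1 / (p * Tp).
        E = 1.0 is ideal linear speedup; values near 1 mean the
        workers are almost fully utilized."""
        return t_serial / (workers * t_parallel)

    # Hypothetical run: 100 s of serial work completing in 13.3 s on 8 peers.
    print(round(parallel_efficiency(100.0, 13.3, 8), 2))  # 0.94
    ```

    For coarse-grained tasks, communication overhead is amortized over long compute phases, which is why efficiencies in this range are plausible even on heterogeneous home machines.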

    Universal Mechanical Polycomputation in Granular Matter

    Unconventional computing devices are increasingly of interest as they can operate in environments hostile to silicon-based electronics, or compute in ways that traditional electronics cannot. Mechanical computers, wherein information processing is a material property emerging from the interaction of components with the environment, are one such class of devices. This information processing can be manifested in various physical substrates, one of which is granular matter. In a granular assembly, vibration can be treated as the information-bearing mode. This can be exploited to realize "polycomputing": materials can be evolved such that a single grain within them can report the result of multiple logical operations simultaneously at different frequencies, without recourse to quantum effects. Here, we demonstrate the evolution of a material in which one grain acts simultaneously as two different NAND gates at two different frequencies. NAND gates are of interest because any logical operation can be built from them. Moreover, they are nonlinear, thus demonstrating a step toward general-purpose, computationally dense mechanical computers. Polycomputation was found to be distributed across each evolved material, suggesting the material's robustness. With recent advances in material sciences, hardware realization of these materials may eventually provide devices that challenge the computational density of traditional computers.
    Comment: Accepted to the Genetic and Evolutionary Computation Conference 2023 (GECCO '23)
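    The universality claim — that any logical operation can be built from NAND alone — is easy to verify in software. This is a minimal sketch of the standard gate constructions, unrelated to the paper's granular implementation; the function names are illustrative.

    ```python
    def nand(a, b):
        """The only primitive gate; every other gate below is composed from it."""
        return int(not (a and b))

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))

    def xor_(a, b):
        # Classic four-NAND XOR construction.
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    # Full truth table for XOR built purely from NANDs:
    print([xor_(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
    ```

    Since NOT, AND, and OR suffice for any Boolean function, a substrate that reliably realizes NAND — here, a vibrating grain at a given frequency — is in principle a complete logic family.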