
    The impact of timing on linearizability in counting networks

    Counting networks form a new class of distributed, low-contention data structures, made up of balancers and wires, which are suitable for solving a variety of multiprocessor synchronization problems that can be expressed as counting problems. A linearizable counting network guarantees that the order of the values it returns respects the real-time order in which they were requested. Linearizability significantly raises the capabilities of the network, but at a possible price in network size or synchronization support. In this work, we further pursue the systematic study of the impact of timing assumptions on linearizability for counting networks, along the line of research recently initiated by Lynch et al. [18]. We consider two basic timing models: the instantaneous balancer model, in which the transition of a token from an input to an output port of a balancer is modeled as an instantaneous event, and the periodic balancer model, in which balancers send out tokens at a fixed rate. In both models, we assume lower and upper bounds on the delays incurred by the wires connecting the balancers. We present necessary and sufficient conditions for linearizability in these models, in the form of precise inequalities that involve not only parameters of the timing models but also certain structural parameters of the counting network, which may be of more general interest. Our results extend and strengthen previous impossibility and possibility results on linearizability in counting networks.
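
To make the construction concrete, here is a minimal sequential sketch of one classic counting network, Bitonic[4] (six balancers in three layers, due to Aspnes, Herlihy and Shavit). The class names, the toggle implementation, and the one-token-at-a-time driver are illustrative assumptions; the paper's timing models (balancer and wire delays) are not modeled here.

```python
import itertools

class Balancer:
    """Alternates incoming tokens between its top and bottom output wires."""
    def __init__(self, top, bottom):
        self.top, self.bottom = top, bottom
        self._toggle = itertools.cycle((True, False))

    def traverse(self):
        # First token exits on the top wire, the next on the bottom, and so on.
        return self.top if next(self._toggle) else self.bottom

class BitonicFour:
    """Width-4 counting network: three layers of two balancers each."""
    LAYOUT = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(0, 1), (2, 3)]]

    def __init__(self):
        self.layers = [[Balancer(t, b) for t, b in layer] for layer in self.LAYOUT]
        self.counters = list(range(4))   # wire w hands out w, w+4, w+8, ...

    def get_next(self, wire=0):
        # A token traverses one balancer per layer, then takes a value
        # from the counter on its final output wire.
        for layer in self.layers:
            for bal in layer:
                if wire in (bal.top, bal.bottom):
                    wire = bal.traverse()
                    break
        value = self.counters[wire]
        self.counters[wire] += 4
        return value

net = BitonicFour()
print([net.get_next() for _ in range(8)])   # [0, 1, 2, 3, 4, 5, 6, 7]
```

Run sequentially, the network hands out consecutive values, illustrating the counting (step) property; in a concurrent setting each balancer toggle must be atomic, and whether the returned values respect real-time request order is exactly the linearizability question the paper studies.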

    Privacy-Preserving Public Information for Sequential Games

    In settings with incomplete information, players can find it difficult to coordinate on states with good social welfare. For example, in financial settings, if a collection of financial firms have limited information about each other's strategies, a large number of them may choose the same high-risk investment in hopes of high returns. While this might be acceptable in some cases, the economy can be hurt badly if many firms invest in the same risky market segment and it fails. One reason why many firms might end up choosing the same segment is that they lack information about other firms' investments (imperfect information may lead to 'bad' game states). Directly reporting all players' investments, however, raises confidentiality concerns for both individuals and institutions. In this paper, we explore whether information about the game state can be publicly announced in a manner that maintains the privacy of the players' actions and still suffices to deter players from reaching bad game states. We show that in many games of interest, it is possible for players to avoid these bad states with the help of privacy-preserving, publicly announced information. We model the behavior of players in this imperfect-information setting in two ways, greedy and undominated strategic behavior, and we prove guarantees on social welfare that certain kinds of privacy-preserving information can help attain. Furthermore, we design a counter with improved privacy guarantees under continual observation.
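
The closing sentence refers to the well-studied problem of privately maintaining a running count. For background, here is a minimal sketch of the standard binary-tree mechanism (Chan, Shi and Song; Dwork et al.), which keeps Laplace-noised partial sums over dyadic ranges. This is not the paper's improved construction; `eps`, `T` and all names are illustrative.

```python
import random
import math

class BinaryCounter:
    """eps-differentially private running count of a 0/1 stream of length <= T."""
    def __init__(self, eps, T):
        self.scale = math.log2(T) / eps  # each item touches <= log2(T) tree nodes
        self.t = 0
        self.alpha = {}   # level -> exact partial sum of the open dyadic range
        self.noisy = {}   # level -> that partial sum plus Laplace noise

    def _laplace(self):
        # The difference of two i.i.d. exponentials is Laplace-distributed.
        return (random.expovariate(1 / self.scale)
                - random.expovariate(1 / self.scale))

    def feed(self, x):
        """Ingest one item x_t and return a private estimate of sum_{s<=t} x_s."""
        self.t += 1
        i = (self.t & -self.t).bit_length() - 1          # lowest set bit of t
        self.alpha[i] = sum(self.alpha.get(j, 0) for j in range(i)) + x
        for j in range(i):                               # close finished ranges
            self.alpha[j] = 0
            self.noisy[j] = 0
        self.noisy[i] = self.alpha[i] + self._laplace()
        return sum(self.noisy.get(j, 0)                  # one node per set bit of t
                   for j in range(self.t.bit_length()) if (self.t >> j) & 1)

c = BinaryCounter(eps=1.0, T=1 << 16)
estimates = [c.feed(random.randint(0, 1)) for _ in range(1000)]
```

Each prefix query sums at most log2(t) noisy nodes, so the error grows only polylogarithmically in T while every individual item remains protected over the whole stream.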

    The amazing synchronicity of the Global Development (the 1300s-1450s). An institutional approach to the globalization of the late Middle Ages

    In a new approach to a long-running debate on the causes of the Late Medieval Debasement, we offer an institutional case study of Russia and the Levant. Avoiding the complexity of the "upstream" financial/minting centres of Western Europe, we consider the effects of debasement "downstream", in resource-exporting periphery countries. The paper shows the amazing synchronicity of the worldwide appearance of the early modern trading system, associated with capitalism or commercial society. The centre-periphery feedback loop amplified trends and pushed towards economic and institutional change. This is illustrated via the Hanseatic-Novgorodian and Italian-Levantine trade: under the growing market pressure of exploding transaction costs, the oligopolies gradually dissolved and were replaced by British and Dutch traders. In this case study, late-medieval/early-modern monetary integration served as the transitional institutional base for reducing transaction costs during a dramatic global shift. Highlighting centre-periphery links, the new trading outpost of Arkhangelsk rose synchronously with Amsterdam.

    Balancing the trade : Roman cargo shipments to India

    There has been a continuing debate about the extent to which the Roman Empire suffered an economic imbalance in its trade with India (and, more broadly, the East), that is to say, whether in volume or value the Roman Empire imported more than it exported. This imbalance is often thought to be manifested in the export of Roman gold and silver to India and the connected notion that other goods from the Roman Empire were seen as merely items of ballast. It is the intention of this article to place this debate in a practical context by demonstrating not only the physical need for mixed cargoes on ships sailing to India, but also the negligible amount of space taken up by the gold and silver. It is argued that in terms of volume (if not value) goods in kind were far more significant.

    Sequentially consistent versus linearizable counting networks

    We compare the impact of timing conditions on implementing sequentially consistent and linearizable counters using (uniform) counting networks in distributed systems. For counting problems in application domains which do not require linearizability but will run correctly if only sequential consistency is provided, the results of our investigation, and their potential payoffs, are threefold:
    • First, we show that sequential consistency and linearizability cannot be distinguished by the timing conditions previously considered in the context of counting networks; thus, in contexts where these constraints apply, it is possible to rely on the stronger semantics of linearizability, which simplifies proofs and enhances compositionality.
    • Second, we identify local timing conditions that support sequential consistency but not linearizability; thus, we suggest weaker, easily implementable timing conditions that are likely to be sufficient in many applications.
    • Third, we show that any kind of synchronization that is too weak to support even sequential consistency may violate it significantly for some counting networks.
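
To pin down the distinction the comparison rests on, below is a toy brute-force checker (an illustrative sketch, not from the paper, and exponential in the history length) for histories of getAndIncrement operations on a shared counter. It exhibits a history that is sequentially consistent yet not linearizable, because the legal ordering must invert the real-time order of the operations.

```python
from itertools import permutations

# A history is a set of getAndIncrement operations on a shared counter,
# each recorded as (process, invoke_time, return_time, returned_value).

def legal(order):
    # A counter execution is legal iff the i-th operation returns i.
    return all(op[3] == i for i, op in enumerate(order))

def consistent(history, precedes):
    # Is there a legal total order that respects the given partial order?
    for order in permutations(history):
        pos = {op: k for k, op in enumerate(order)}
        if legal(order) and all(pos[a] < pos[b]
                                for a in history for b in history
                                if precedes(a, b)):
            return True
    return False

real_time = lambda a, b: a[2] < b[1]                 # a returns before b starts
program = lambda a, b: a[0] == b[0] and a[2] < b[1]  # per-process order only

# P's operation finishes before Q's even begins, yet P saw the larger value.
P = ("P", 0.0, 1.0, 1)
Q = ("Q", 2.0, 3.0, 0)
print(consistent([P, Q], program))    # True:  sequentially consistent
print(consistent([P, Q], real_time))  # False: not linearizable
```

Sequential consistency only asks for a legal order consistent with each process's own program order, while linearizability additionally demands respect for real-time precedence across processes, which is what timing conditions on the network can or cannot enforce.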

    Indo-Byzantine exchange, 4th to 7th centuries: a global history

    This thesis uses Byzantine coins in south India to re-examine pre-Islamic maritime trade between the Mediterranean and south India. Analysis of historiographical trends, key textual sources (the Periplous of the Erythreian Sea and the Christian Topography, Book Eleven), and archaeological evidence from the Red Sea, Aksum, the Persian Gulf and India, alongside the numismatic evidence, yields two main methodological and three historical conclusions. Methodologically, the multi-disciplinary tradition of Indo-Roman studies needs to incorporate greater sensitivity to the complexities of different evidence types and engage with wider scholarship on the economic and state structures of the Mediterranean and India. Furthermore, pre-Islamic Indo-Mediterranean trade offers an ideal locus for experimenting with a practical global history, particularly using new technologies to enhance data sharing and access to scholarship. Historically, this thesis concludes: first, that the significance of pre-Islamic trade between the Mediterranean and India was minimal for any of the participating states; second, that this trade should be understood in the context of wider Indian Ocean networks, connecting India, Sri Lanka and southeast Asia; third, that the Persian Gulf rather than the Red Sea probably formed the major meeting point of trade from east and west, but this is not yet demonstrable archaeologically, numismatically or textually.

    Randomised Load Balancing

    Due to the increased use of parallel processing in networks and multi-core architectures, it is important to have load-balancing strategies that are highly efficient and adaptable to specific requirements. Randomised protocols in particular are useful in situations in which it is costly to gather and update information about the load distribution (e.g. in networks). For the mathematical analysis, randomised load-balancing schemes are modelled by balls-into-bins games, where balls represent tasks and bins represent computers. If m balls are allocated to n bins and every ball chooses one bin at random, the gap between the maximum and the average load is known to grow with the number of balls m. Surprisingly, this is not the case in the multiple-choice process, in which each ball chooses d > 1 bins at random and allocates itself to the least loaded of them. Berenbrink et al. proved that the gap then remains ln ln(n) / ln(d), independent of m. This thesis analyses generalisations and variations of the multiple-choice process. For a scenario in which batches of balls are allocated in parallel, it is shown that the gap between maximum and average load is still independent of m. Furthermore, we look into a process in which only predetermined subsets of bins can be chosen by a ball. Assuming that the number and composition of the subsets can change with every ball, we examine under which circumstances the maximum load is one. Finally, we consider a generalisation of the basic process allowing the bins to have different capacities. By adapting the choice probabilities of the bins, it is shown how the load can be balanced over the bins according to their capacities.
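
A quick simulation makes the contrast visible (an illustrative sketch of the basic setting only; the parameter values and function names are arbitrary): with d = 1 the gap grows with m, while with d = 2 it stays small and essentially independent of m.

```python
import random

def gap(m, n, d):
    """Allocate m balls to n bins, sending each ball to the least loaded of
    d uniformly random bins; return max load minus average load."""
    load = [0] * n
    for _ in range(m):
        choices = [random.randrange(n) for _ in range(d)]
        load[min(choices, key=load.__getitem__)] += 1
    return max(load) - m / n

random.seed(1)
n = 1000
for m in (n, 10 * n, 100 * n):
    print(f"m={m:6d}  d=1 gap={gap(m, n, 1):6.1f}  d=2 gap={gap(m, n, 2):4.1f}")
```

The single extra choice per ball is what collapses the gap from growing with m down to roughly ln ln(n) / ln(d), the bound cited above.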

    Methodologies for innovation and best practices in Industry 4.0 for SMEs

    Today, cyber-physical systems are transforming the way in which industries operate; we call this Industry 4.0, or the fourth industrial revolution. Industry 4.0 involves the use of technologies such as Cloud Computing, Edge Computing, the Internet of Things, Robotics and, above all, Big Data. Big Data is the very basis of the Industry 4.0 paradigm, because it can provide crucial information on all the processes that take place within manufacturing (which helps optimize processes and prevent downtime), as well as information about employees (performance, individual needs, safety in the workplace) and about clients/customers (their needs and wants, trends, opinions), which helps businesses become competitive and expand into international markets. Thanks to technologies such as the Internet of Things, Cloud Computing and Edge Computing, data can now be processed much faster and with greater security. The implementation of Artificial Intelligence techniques such as Machine Learning can help machines make certain decisions autonomously, or help humans make decisions much faster. Furthermore, data can be used to feed predictive models which help businesses and manufacturers anticipate future changes and needs, and address problems before they cause tangible harm.

    Adaptive architecture-transparent policy control in a distributed graph reducer

    The end of the frequency-scaling era occurred around 2005, when clock frequencies stalled for commodity architectures. Performance improvements that could previously be expected with each new hardware generation therefore needed to originate elsewhere. Almost all computer architectures exhibit substantial and growing levels of parallelism, and exploiting it became one of the key sources of performance and scalability improvements. Alas, parallel programming proved much more difficult than sequential programming, due to the need to specify coordination and parallelism-management aspects. Whilst low-level languages place this burden on the programmer, reducing productivity and portability, semi-implicit approaches delegate the responsibility to sophisticated compilers and run-time systems. This thesis presents a study of adaptive load distribution based on work stealing using history and ancestry information in a distributed graph reducer for a non-strict functional language. The results contribute to the exploration of more flexible run-time-system-level parallelism control implementing a semi-explicit model of parallelism, which offers productivity and a high level of abstraction by delegating the responsibility for coordination to the run-time system. After characterising a set of parallel functional applications, we study the use of historical information to adapt the choice of the victim to steal from in a work-stealing scheduler. We observe substantially lower numbers of messages for data-parallel and nested applications. However, this heuristic fails in cases where past application behaviour does not resemble future behaviour, for instance for Divide-&-Conquer applications with a large number of very fine-grained threads and generators of parallelism that move dynamically across processing elements. This mechanism is not specific to the language or the run-time system, and applies to other work-stealing schedulers. Next, we focus on the other key work-stealing decision: which sparks, representing potential parallelism, to donate. We investigate the effect of Spark Colocation on the performance of five Divide-&-Conquer programs run on a cluster of up to 256 PEs. When using Spark Colocation, the distributed graph reducer shares related work, resulting in a higher degree of both potential and actual parallelism, and in more fine-grained and less variable thread sizes. We validate this behaviour by observing a reduction in average fetch times, but increased numbers of FETCH messages and of inter-PE pointers for colocation, which nevertheless results in improved load balance for three of the five benchmark programs. The results show high speedups and speedup improvements for Spark Colocation for the three more regular and nested applications, and performance degradation for two programs: one that is excessively fine-grained and one exhibiting limited scalability. Overall, Spark Colocation appears most beneficial for higher numbers of PEs, where improved load balance and a higher degree of parallelism have more opportunities to pay off. In more general terms, we show that a run-time system can beneficially use historical information on past stealing successes, gathered dynamically and used within the same run, together with ancestry information reconstructed at run time from annotations.
    Moreover, the results support the view that different heuristics are beneficial for applications using different parallelism patterns, underlining the advantages of a flexible, architecture-transparent approach.
    The Scottish Informatics and Computer Science Alliance (SICSA).
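
As a rough illustration of the history-based victim-selection heuristic (a toy sketch: the thesis works inside a distributed graph reducer, whereas this models PEs as simple spark pools with one success counter per potential victim), an idle PE can bias its steal attempts towards PEs that supplied work in the past:

```python
import random
from collections import deque

class PE:
    """A processing element with a local spark pool and a steal history."""
    def __init__(self, pe_id, n_pes):
        self.id = pe_id
        self.sparks = deque()           # potential parallelism awaiting work
        self.successes = [1] * n_pes    # smoothed counts of successful steals

    def choose_victim(self, pes):
        # History heuristic: weight candidate victims by past steal
        # successes instead of choosing uniformly at random.
        weights = [0 if p.id == self.id else self.successes[p.id] for p in pes]
        return random.choices(pes, weights=weights)[0]

    def try_steal(self, pes):
        victim = self.choose_victim(pes)
        if victim.sparks:
            self.successes[victim.id] += 1   # remember who had work to give
            return victim.sparks.popleft()   # steal the oldest spark
        return None

pes = [PE(i, 4) for i in range(4)]
pes[2].sparks.extend(["spark-a", "spark-b"])
print(pes[0].try_steal(pes))   # may return "spark-a" once PE 2 is chosen
```

When past behaviour predicts future behaviour, the weights concentrate steal attempts on productive victims and fewer messages are wasted; for irregular Divide-&-Conquer workloads whose parallelism generators migrate between PEs, the history misleads, matching the failure case described above.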

    Building Efficient Smart Cities

    Current technological developments offer promising solutions to the challenges faced by cities, such as crowding, pollution, housing, the search for greater comfort, better healthcare, optimized mobility and other urban services that must be adapted to the fast-paced life of citizens. Cities that deploy technology to optimize their processes and infrastructure fit under the concept of a smart city. An increasing number of cities strive towards becoming smart, and some are already recognized as such, including Singapore, London and Barcelona. Our society has an ever-greater reliance on technology for its sustenance. This will continue into the future, as technology rapidly penetrates all facets of human life, from daily activities to the workplace and industry. A myriad of data is generated by all these digitized processes, and it can be used to further enhance smart services, increasing their adaptability, precision and efficiency. However, dealing with large amounts of data coming from different types of sources is a complex process; this prevents many cities from taking full advantage of their data, or, even worse, a lack of control over the data sources may lead to serious security issues, leaving cities vulnerable to cybercrime. Given that smart-city infrastructure is largely digitized, a cyberattack would have fatal consequences for a city's operation, leading to economic loss, citizen distrust and the shutdown of essential city services and networks. This is a threat to the very efficiency smart cities strive for.