
    Systemic Failure in the Provision of Safe Food

    Many deficiencies in the capacity of a food system to deliver safe products are systemic in nature. We suggest a taxonomy of four general ways in which a systemic failure might occur. One relates to the connectedness, or topology, of the system. Another arises from mistrust on the part of downstream parties concerning signals on product attributes, production processes, and the performance of regulatory mechanisms. A third arises when asymmetric information leads to low incentives for preserving food quality. Finally, inflexibilities in adapting to different states of nature may leave the system vulnerable to failures. Innovations in information technology and institutional design may ameliorate many problems, while appropriate trade, industrial organization, science, and public infrastructure policies also may fortify the system.

    Performance controls for distributed telecommunication services

    As the Internet and Telecommunications domains merge, open telecommunication service architectures such as TINA, PARLAY and PINT are becoming prevalent. Distributed Computing is a common engineering component in these technologies and promises to bring improvements to the scalability, reliability and flexibility of telecommunications service delivery systems. This distributed approach to service delivery introduces new performance concerns. As service logic is decomposed into software components and distributed across network resources, significant additional resource loading is incurred due to inter-node communications. This makes the choice of distribution of components in the network and the distribution of load between these components critical design and operational issues, which must be resolved to guarantee a high level of service for the customer and a profitable network for the service operator. Previous research in the computer science domain has addressed optimal placement of components from the perspectives of minimising run time, minimising communications costs or balancing load between network resources. This thesis proposes a more extensive optimisation model, which, we argue, is more useful for addressing concerns pertinent to the telecommunications domain. The model focuses on providing optimal throughput and profitability of network resources and on overload protection, whilst allowing flexibility in terms of the cost of installation of component copies and differentiation in the treatment of service types, in terms of fairness to the customer and profitability to the operator. Both static (design-time) component distribution and dynamic (run-time) load distribution algorithms are developed using Linear and Mixed Integer Programming techniques. An efficient, but sub-optimal, run-time solution, employing Market-based control, is also proposed.
The performance of these algorithms is investigated using a simulation model of a distributed service platform, which is based on TINA service components interacting with the Intelligent Network through gateways. Simulation results are verified using Layered Queuing Network analytic modelling. Results show significant performance gains over simpler methods of performance control and demonstrate how trade-offs in network profitability, fairness and network cost are possible.
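The static placement problem the abstract describes can be illustrated with a toy sketch. This is not the thesis's actual MIP formulation; it is a brute-force search over a tiny invented instance (component loads, node capacities and traffic volumes are all made up) that minimises inter-node communication cost subject to per-node capacity, the same trade-off the Linear/Mixed Integer Programming models optimise at scale:

```python
# Toy illustration of static component placement (invented data, not the
# thesis's MIP model): assign components to nodes so that components that
# exchange traffic are co-located where capacity allows.
from itertools import product

def place_components(components, nodes, load, capacity, traffic):
    """Brute-force search over all placements; fine for tiny instances."""
    best, best_cost = None, float("inf")
    for placement in product(nodes, repeat=len(components)):
        # Capacity check: total load assigned to each node must fit.
        used = {n: 0 for n in nodes}
        for comp, node in zip(components, placement):
            used[node] += load[comp]
        if any(used[n] > capacity[n] for n in nodes):
            continue
        # Communication cost is paid only when two components that
        # exchange traffic sit on different nodes.
        where = dict(zip(components, placement))
        cost = sum(vol for (a, b), vol in traffic.items()
                   if where[a] != where[b])
        if cost < best_cost:
            best, best_cost = where, cost
    return best, best_cost

placement, cost = place_components(
    components=["ui", "logic", "db"],
    nodes=["n1", "n2"],
    load={"ui": 2, "logic": 3, "db": 4},
    capacity={"n1": 5, "n2": 7},
    traffic={("ui", "logic"): 10, ("logic", "db"): 8},
)
print(placement, cost)  # the heavy ui<->logic pair is kept together
```

A real instance replaces the exhaustive search with an integer program over binary placement variables, which is what makes the thesis's additional concerns (installation cost, service-type differentiation) tractable.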

    Causation in contemporary analytical philosophy

    Contemporary analytic philosophy is in the midst of a vigorous debate on the nature of causation. Each of the main proposals discussed in this chapter faces important problems: the deductive-nomological model, the counterfactual theory, the manipulability theory, the probabilistic theory and the transference theory. After having explored possible solutions to these problems, I conclude that one version of the transference approach is most promising. However, as I show in the last section, it is necessary to supplement this transference approach with the notion of lawful dependency. This gives rise to the notion of causal responsibility.

    A probabilistic demand side management approach by consumption admission control

    The Smart Grid, a new generation of electricity network, is a recently conceived vision of a cleaner, more efficient and cheaper electricity system. One of the major challenges in an electricity network is that generation and consumption should be balanced at every moment. This paper introduces a new concept for controlling the demand side by automatically enabling/disabling electric appliances to make sure that demand matches the available supply, based on a statistical characterization of need. In our new approach, instead of using hard limits, we estimate the tail probability of the demand distribution and control the system using the principles and results of statistical resource management.
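The tail-probability idea can be sketched concretely. This is a hedged illustration, not the paper's algorithm: it models aggregate admitted demand as approximately Gaussian and admits an appliance only while the estimated overload probability P(demand > capacity) stays below a target epsilon. Appliance names, means and variances are invented:

```python
# Sketch of probabilistic consumption admission control (invented data):
# admit appliances while the Gaussian tail estimate of overload risk
# stays below a tolerance, instead of enforcing a hard consumption limit.
import math

def tail_probability(mean, var, capacity):
    """P(N(mean, var) > capacity) via the Gaussian CDF."""
    if var == 0:
        return 0.0 if mean <= capacity else 1.0
    z = (capacity - mean) / math.sqrt(var)
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

def admit(appliances, capacity, epsilon):
    """Admit appliances one by one while overload risk stays tolerable."""
    admitted, mean, var = [], 0.0, 0.0
    for name, m, v in appliances:
        if tail_probability(mean + m, var + v, capacity) <= epsilon:
            admitted.append(name)
            mean, var = mean + m, var + v
    return admitted

requests = [("heat_pump", 3.0, 0.5), ("ev_charger", 7.0, 2.0),
            ("dishwasher", 1.5, 0.2), ("dryer", 2.5, 0.6)]
admitted = admit(requests, capacity=13.0, epsilon=0.05)
print(admitted)  # later appliances are deferred once risk exceeds 5%
```

The statistical framing is what distinguishes this from hard-limit schemes: capacity can be oversubscribed on average as long as the probability of the tail event stays bounded.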

    A software approach to enhancing quality of service in internet commerce


    Application of learning algorithms to traffic management in integrated services networks.

    SIGLE. Available from the British Library Document Supply Centre (BLDSC), DSC:DXN027131, United Kingdom.

    Statistical multiplexing and connection admission control in ATM networks

    Asynchronous Transfer Mode (ATM) technology is widely employed for the transport of network traffic, and has the potential to be the base technology for the next generation of global communications. Connection Admission Control (CAC) is the effective traffic control mechanism which is necessary in ATM networks in order to avoid possible congestion at each network node and to achieve the Quality-of-Service (QoS) requested by each connection. CAC determines whether or not the network should accept a new connection. A new connection will only be accepted if the network has sufficient resources to meet its QoS requirements without affecting the QoS commitments already made by the network for existing connections. The design of a high-performance CAC is based on an in-depth understanding of the statistical characteristics of the traffic sources.
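One classical way to turn source statistics into an admission decision is the Gaussian (equivalent-capacity) approximation of GuĂ©rin et al. The abstract does not commit to this particular scheme, so the sketch below is only illustrative, with invented source parameters: the aggregate is admitted if its mean rate plus a safety margin of alpha standard deviations fits the link, where alpha is derived from the cell-loss target:

```python
# Illustrative Gaussian-approximation CAC (one classical rule from the
# literature, invented source figures): accept a connection set when
# aggregate mean + alpha * stddev fits within the link rate.
import math

def gaussian_cac(sources, link_rate, cell_loss_target):
    """sources: list of (mean rate, rate variance) pairs."""
    alpha = math.sqrt(-2 * math.log(cell_loss_target)
                      - math.log(2 * math.pi))
    mean = sum(m for m, v in sources)
    std = math.sqrt(sum(v for m, v in sources))
    return mean + alpha * std <= link_rate

existing = [(10.0, 4.0), (15.0, 9.0), (5.0, 1.0)]  # (Mbit/s, variance)
new_call = (12.0, 6.0)
ok_existing = gaussian_cac(existing, link_rate=60.0,
                           cell_loss_target=1e-9)
ok_with_new = gaussian_cac(existing + [new_call], link_rate=60.0,
                           cell_loss_target=1e-9)
print(ok_existing, ok_with_new)  # existing set fits; new call is refused
```

The statistical-multiplexing gain is visible in the numbers: the admitted means sum well below the link rate, and the margin consumed is set by the variance of the sources rather than their peaks.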

    Practical Real-Time with Look-Ahead Scheduling

    In my dissertation, I present ATLAS, the Auto-Training Look-Ahead Scheduler. ATLAS improves service to applications with regard to two non-functional properties: timeliness and overload detection. Timeliness is an important requirement to ensure user interface responsiveness and the smoothness of multimedia operations. Overload can occur when applications ask for more computation time than the machine can offer. Interactive systems have to handle overload situations dynamically at runtime. ATLAS provides timely service to applications, accessible through an easy-to-use interface. Deadlines specify timing requirements; workload metrics describe jobs. ATLAS employs machine learning to predict job execution times. Deadline misses are detected before they occur, so applications can react early. Contents: 1 Introduction; 2 Anatomy of a Desktop Application; 3 Real Simple Real-Time; 4 Execution Time Prediction; 5 System Scheduler; 6 Timely Service; 7 The Road Ahead; Bibliography; Index.
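The look-ahead idea admits a very small sketch. ATLAS itself learns execution times from workload metrics; here an exponentially weighted moving average stands in as the predictor, and all job times and deadlines are invented. The point is the ordering: the predicted miss is reported before the job runs, not after:

```python
# Toy look-ahead scheduling sketch (invented jobs; a moving average
# stands in for ATLAS's learned execution-time predictor): flag a
# deadline miss before dispatching the job.
class LookAheadScheduler:
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.estimate = None   # smoothed execution-time estimate
        self.clock = 0.0

    def predict(self):
        return self.estimate if self.estimate is not None else 0.0

    def submit(self, deadline):
        """True if the job is predicted to meet its deadline."""
        return self.clock + self.predict() <= deadline

    def run(self, actual_time):
        """Execute the job and fold its time into the estimate."""
        self.clock += actual_time
        if self.estimate is None:
            self.estimate = actual_time
        else:
            self.estimate = (self.alpha * actual_time
                             + (1 - self.alpha) * self.estimate)

sched = LookAheadScheduler()
results = []
for actual, deadline in [(2.0, 5.0), (2.0, 5.0), (2.0, 5.5)]:
    results.append(sched.submit(deadline))
    sched.run(actual)
print(results)  # third job is flagged as a predicted miss up front
```

An application receiving the early warning can degrade gracefully, e.g. drop a video frame, instead of discovering the miss only when the deadline passes.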

    An Economic Approach to the Law of Evidence

    In this article, Judge Richard A. Posner presents the first comprehensive economic analysis of the law of evidence. The article is presented in three parts. First, Judge Posner proposes and describes two possible economic models, both a search and a cost-minimization approach, to describe how evidence is obtained, presented, and evaluated. In both, he incorporates Bayes' theorem to examine rational decisionmaking. Second, he examines the evidence gathering process, comparing and contrasting, in economic terms, the inquisitorial and adversarial systems of justice. The inquisitorial system, at first glance, appears to be more economically efficient. This, though, may be illusory, a result of the adversarial system's greater public visibility and widespread acceptance of plea bargaining. Finally, the article addresses burden of proof issues, plus specific provisions of the Federal Rules of Evidence: harmless error, limiting instructions, relevance, character evidence, hearsay, expert witnesses, and various privileges and exclusionary rules. He concludes that American evidence law, rather than simply sacrificing efficiency in order to protect noneconomic values, is actually quite efficient and possibly superior to its Continental, inquisitorial counterparts; but a number of reforms are suggested.
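The Bayesian machinery underlying the models can be shown with a worked example. The numbers are invented, not drawn from the article: posterior odds on a hypothesis equal prior odds times the likelihood ratio of the evidence, which is the update rule a rational factfinder would apply:

```python
# Worked Bayes'-theorem example (invented numbers): update the odds on
# hypothesis H after observing evidence E, via the likelihood ratio.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of H given E, in odds form."""
    odds = (prior / (1 - prior)) * (p_e_given_h / p_e_given_not_h)
    return odds / (1 + odds)

# Prior of 0.2 that H holds; the evidence is 4x likelier under H.
print(posterior(0.2, 0.8, 0.2))  # prior odds 1:4 times LR 4 -> even odds
```

The odds form makes the economic framing natural: each item of evidence contributes a multiplicative likelihood ratio, so its probative value can be weighed against the cost of obtaining and presenting it.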

    SEEC: A Framework for Self-aware Management of Multicore Resources

    This paper presents SEEC, a self-aware programming model, designed to reduce programming effort in modern multicore systems. In the SEEC model, application programmers specify application goals and progress, while systems programmers separately specify actions system software and hardware can take to affect an application (e.g. resource allocation). The SEEC runtime monitors applications and dynamically selects actions to meet application goals optimally (e.g. meeting performance while minimizing power consumption). The SEEC runtime optimizes system behavior for the application rather than requiring the application programmer to optimize for the system. This paper presents a detailed discussion of the SEEC model and runtime as well as several case studies demonstrating their benefits. SEEC is shown to optimize performance per Watt for a video encoder, find optimal resource allocation for an application with complex resource usage, and maintain the goals of multiple applications in the face of environmental fluctuations.
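The observe-decide-act loop SEEC describes can be sketched minimally. The actual SEEC runtime is far richer (online learning, multiple applications); here the performance and power models are invented stand-ins, and the decision rule is simply: among actions that meet the application's goal, take the cheapest in power:

```python
# Minimal sketch of a self-aware decision step (invented models, not the
# SEEC runtime): pick the lowest-power action that meets the app's goal.
def seec_step(goal, actions, observe):
    """Select an action from observed performance/power trade-offs."""
    feasible = [a for a in actions if observe(a)["perf"] >= goal]
    if not feasible:
        # No action meets the goal: fall back to the fastest one, so the
        # application degrades as little as possible under overload.
        return max(actions, key=lambda a: observe(a)["perf"])
    return min(feasible, key=lambda a: observe(a)["power"])

def observe(cores):
    """Toy models with diminishing returns in core count."""
    return {"perf": 30 * cores ** 0.5, "power": 10 + 15 * cores}

choice = seec_step(goal=50.0, actions=[1, 2, 4, 8], observe=observe)
print(choice)  # enough cores to hit the goal, no more
```

Running this loop continuously is what lets the runtime, rather than the application programmer, absorb environmental fluctuations: when the observed trade-offs shift, the next step simply selects a different action.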
