
    Coordination in organizations with decision support systems

    By Jean-Louis M. Grevet and Alexander H. Levis. Office of Naval Research contract N00014-84-K-0519 (NR 649-003). Bibliography: p. 13.

    Selecting projects in a portfolio using risk and ranking

    There are three dimensions in project management: time, cost, and performance. Risk is a characteristic related to these dimensions and their relationships. A risk equation is proposed based on the nature of the uncertainty associated with each dimension, as well as the relationships between the uncertainties. A ranking equation that can prioritise projects is proposed and discussed. The problem solved here is which projects to select from a given portfolio of projects. The model is implemented in a group decision support system (GDSS) that can guide decision makers through their decision process. However, the system is not intended as a substitute for the decision maker, but merely as an aid. The methodology used is analysis of the proposed equations and trial and error based on examples. This paper's main contributions are the risk equation and the ranking equation.

    A Supervisor Agent Based on the Markovian Decision Process Framework to Optimize the Behavior of a Highly Automated System

    In this paper, we explore how an MDP can be used as the framework to design and develop an Intelligent Decision Support System/Recommender System that extends human perception and overcomes the limitations of the human senses (which are covered by the ADS) by augmenting human cognition, emphasizing human judgement and intuition, and supporting the driver in taking the proper decision in the right terms and at the right time. Moreover, we develop Human-Machine Interaction (HMI) strategies able to make the decision-making/recommendation process “transparent”. This is strongly needed, since the adoption of partially automated systems depends not only on the effectiveness of the decision and control processes, but also on how these processes are communicated and “explained” to the human driver, in order to achieve his/her trust.
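The MDP machinery behind such a supervisor agent can be sketched with textbook value iteration. The states, actions, transition probabilities, and rewards below are invented for illustration only and are not taken from the paper:

```python
# Hypothetical two-state MDP for an automated-driving supervisor.
# states: 0 = "automation handles it", 1 = "handover needed"
# actions: 0 = "stay automated", 1 = "alert the driver"
P = {  # P[s][a] -> list of (next_state, probability) pairs
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(0, 1.0)]},
    1: {0: [(1, 1.0)], 1: [(0, 0.8), (1, 0.2)]},
}
R = {  # immediate reward for taking action a in state s (invented)
    0: {0: 1.0, 1: 0.2},
    1: {0: -2.0, 1: 0.5},
}
GAMMA = 0.95  # discount factor

def value_iteration(tol=1e-8):
    # Standard Bellman backups until the value function converges.
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            q = [R[s][a] + GAMMA * sum(p * V[ns] for ns, p in P[s][a])
                 for a in P[s]]
            new_v = max(q)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            break
    # Greedy policy: the recommendation the supervisor would surface.
    policy = {s: max(P[s], key=lambda a: R[s][a] + GAMMA *
                     sum(p * V[ns] for ns, p in P[s][a]))
              for s in P}
    return V, policy

V, policy = value_iteration()
print(policy)
```

Under these invented rewards, the optimal policy stays automated in the nominal state and alerts the driver when a handover is needed; a transparent HMI would surface not just this recommendation but the values behind it.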

    A two-armed bandit based scheme for accelerated decentralized learning

    The two-armed bandit problem is a classical optimization problem in which a decision maker sequentially pulls one of two arms attached to a gambling machine, with each pull resulting in a random reward. The reward distributions are unknown, and thus one must balance exploiting existing knowledge about the arms against obtaining new information. Bandit problems are particularly fascinating because a large class of real-world problems, including routing, QoS control, game playing, and resource allocation, can be solved in a decentralized manner when modeled as a system of interacting gambling machines. Although computationally intractable in many cases, Bayesian methods provide a standard for optimal decision making. This paper proposes a novel scheme for decentralized decision making based on the Goore Game in which each decision maker is inherently Bayesian in nature, yet avoids computational intractability by relying simply on updating the hyperparameters of sibling conjugate priors and on random sampling from these posteriors. We further report theoretical results on the variance of the random rewards experienced by each individual decision maker. Based on these theoretical results, each decision maker is able to accelerate its own learning by taking advantage of the increasingly reliable feedback that is obtained as exploration gradually turns into exploitation in bandit-based learning. Extensive experiments demonstrate that the accelerated learning allows us to combine the benefit of conservative learning, which is high accuracy, with the benefit of hurried learning, which is fast convergence. In this manner, our scheme outperforms recently proposed Goore Game solution schemes, in which one has to trade off accuracy against speed. We thus believe that our methodology opens avenues for improved performance in a number of applications of bandit-based decentralized decision making.
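The conjugate-prior idea the abstract describes can be illustrated with plain Thompson sampling on a single two-armed Bernoulli bandit. This is only a sketch of the Bayesian sampling step: the paper's Goore Game scheme coordinates many such decision makers and adds variance-based acceleration, which is omitted here, and the arm probabilities are invented for illustration.

```python
import random

class TwoArmedBandit:
    """Bernoulli bandit with two unknown reward probabilities."""
    def __init__(self, p_arms):
        self.p_arms = p_arms

    def pull(self, arm):
        return 1 if random.random() < self.p_arms[arm] else 0

def thompson_sample(bandit, n_pulls=2000):
    # One (alpha, beta) hyperparameter pair per arm; updating them on
    # each observed reward is the cheap conjugate-prior step that
    # sidesteps full Bayesian intractability.
    alpha, beta = [1, 1], [1, 1]
    for _ in range(n_pulls):
        # Draw a plausible reward rate from each Beta posterior, then
        # pull the arm that looks best under this random draw. Early
        # on this explores; as posteriors sharpen, it exploits.
        draws = [random.betavariate(alpha[a], beta[a]) for a in (0, 1)]
        arm = draws.index(max(draws))
        reward = bandit.pull(arm)
        alpha[arm] += reward
        beta[arm] += 1 - reward
    return alpha, beta

random.seed(0)
alpha, beta = thompson_sample(TwoArmedBandit([0.4, 0.6]))
pulls = [alpha[a] + beta[a] - 2 for a in (0, 1)]
print("pulls per arm:", pulls)
```

With a reward-probability gap of 0.2, the sampler concentrates the vast majority of its pulls on the better arm, which is exactly the exploration-to-exploitation shift the paper's acceleration scheme exploits.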

    Optimal Timing and Legal Decisionmaking: The Case of the Liquidation Decision in Bankruptcy

    Until the firm is sold or a plan of reorganization is confirmed, Chapter 11 entrusts a judge with the decision of whether to keep a firm as a going concern or to shut it down. The judge revisits this liquidation decision multiple times. The key is to make the correct decision at the optimal time. This paper models this decision as the exercise of a real option and shows that it depends critically on particular types of information about the firm and its industry. Liquidations take place too soon if we merely compare the liquidation value of the assets with the expected earnings of the firm. Moreover, existing law undermines effective decisionmaking. Even though the judge makes the liquidation decision, a number of rules prevent the judge from controlling the timing of the decision, and those who do control it lack the incentive to ensure it is made at the optimal time. The paper introduces a framework that can illuminate many areas of law, such as summary judgment motions, parole, and agency rulemaking.

    Future Generations: A Prioritarian View

    Should we remain neutral between our interests and those of future generations? Or are we ethically permitted or even required to depart from neutrality and engage in some measure of intergenerational discounting? This Article addresses the problem of intergenerational discounting by drawing on two different intellectual traditions: the social welfare function (“SWF”) tradition in welfare economics, and scholarship on “prioritarianism” in moral philosophy. Unlike utilitarians, prioritarians are sensitive to the distribution of well-being. They give greater weight to well-being changes affecting worse-off individuals. Prioritarianism can be captured, formally, through an SWF which sums a concave transformation of individual utility, rather than simply summing unweighted utilities in utilitarian fashion. The Article considers the appropriate structure of a prioritarian SWF in intergenerational cases. The simplest case involves a fixed and finite intertemporal population. In that case, I argue, policymakers can and should maintain full neutrality between present and future generations. No discount factor should be attached to the utility of future individuals. Neutrality becomes trickier when we depart from this simple case, meaning: (1) “non-identity” problems, where current choices change the identity of future individuals; (2) population-size variation, where current choices affect not merely the identity of future individuals, but the size of the world’s future population (this case raises the specter of what Derek Parfit terms “the repugnant conclusion,” i.e., that dramatic reductions in the average level of individual well-being might be compensated for by increases in population size); or (3) an infinite population. The Article grapples with the difficult question of outfitting a prioritarian SWF to handle non-identity problems, population-size variation, and infinite populations. It tentatively suggests that a measure of neutrality can be maintained even in these cases.
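The contrast between summing unweighted utilities and summing a concave transformation can be made concrete. In the sketch below, the transform g(u) = u^(1-γ)/(1-γ) (the Atkinson family) and the value γ = 0.5 are standard illustrative choices, not specifics drawn from the Article:

```python
def utilitarian_swf(utilities):
    # Unweighted sum: indifferent to how well-being is distributed.
    return sum(utilities)

def prioritarian_swf(utilities, gamma=0.5):
    # Concave transform g gives greater weight to the worse off:
    # a one-unit gain raises g(u) more for someone at low u.
    return sum(u ** (1 - gamma) / (1 - gamma) for u in utilities)

# Two populations with the same total well-being:
equal, unequal = [50, 50], [10, 90]

# Utilitarianism ranks them as equally good...
print(utilitarian_swf(equal) == utilitarian_swf(unequal))
# ...while the prioritarian SWF prefers the equal distribution.
print(prioritarian_swf(equal) > prioritarian_swf(unequal))
```

Intergenerational neutrality, on this picture, means applying the same transform to every person's utility regardless of when they live, with no additional discount factor on future individuals.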

    Incorporating a priori preferences in a vector PSO algorithm to find arbitrary fractions of the pareto front of multiobjective design problems

    Author name used in this publication: S. L. Ho. 2007-2008 Academic research: refereed, publication in refereed journal. Version of Record. Published.

    A Multi-objective Approach to Evolving Artificial Neural Networks for Coronary Heart Disease Classification

    The optimisation of the accuracy of classifiers in pattern recognition is a complex problem that is often poorly understood. Whilst numerous techniques exist for the optimisation of weights in artificial neural networks (e.g. the Widrow-Hoff least mean squares algorithm and backpropagation techniques), there are no hard and fast rules for choosing the structure of an artificial neural network, in particular for choosing both the number of hidden layers used in the network and the size (in terms of number of neurons) of those hidden layers. However, this internal structure is one of the key factors in determining the accuracy of the classification. This paper proposes taking a multi-objective approach to the evolutionary design of artificial neural networks, using a powerful optimiser based around the state-of-the-art MOEA/D-DRA algorithm and a novel method of incorporating decision maker preferences. In contrast to previous approaches, the novel approach outlined in this paper allows the intuitive consideration of trade-offs between classification objectives that are frequently present in complex classification problems but are often ignored. The effectiveness of the proposed multi-objective approach to evolving artificial neural networks is then shown on a real-world medical classification problem frequently used to benchmark classification methods.
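The core multi-objective idea is that instead of a single accuracy score, each candidate network is scored on several objectives and only non-dominated candidates are retained. The sketch below filters hypothetical candidates by Pareto dominance; the choice of sensitivity and specificity as the two objectives, and the scores themselves, are illustrative assumptions, not the paper's data:

```python
def dominates(a, b):
    """True if a is at least as good as b on every objective
    (maximisation) and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

def pareto_front(candidates):
    # Keep only candidates that no other candidate dominates.
    return [c for c in candidates
            if not any(dominates(other, c)
                       for other in candidates if other != c)]

# (sensitivity, specificity) of hypothetical evolved networks:
scores = [(0.95, 0.60), (0.90, 0.70), (0.80, 0.85),
          (0.75, 0.80), (0.60, 0.90)]
front = pareto_front(scores)
print(front)
```

Here (0.75, 0.80) is dominated by (0.80, 0.85) and drops out; the surviving candidates embody genuine trade-offs, among which decision maker preferences (as in the paper's MOEA/D-DRA setup) select a final classifier.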

    Intelligent Design

    When designers obtain exclusive intellectual property (IP) rights in the functional aspects of their creations, they can wield these rights to increase both the costs to their competitors and the prices that consumers must pay for their goods. IP rights and the costs they entail are justified when they create incentives for designers to invest in new, socially valuable designs. But the law must be wary of allowing rights to be misused. Accordingly, IP law has employed a series of doctrinal and costly screens to channel designs into the appropriate regime—copyright law, design patent law, or utility patent law—depending upon the type of design. Unfortunately, those screens are no longer working. Designers are able to obtain powerful IP protection over the utilitarian aspects of their creations without demonstrating that they have made socially valuable contributions. They are also able to do so without paying substantial fees that might weed out weaker, socially costly designs. This is bad for competition and bad for consumers. In this Article, we integrate theories of doctrinal and costly screens and explore their roles in channeling IP rights. We explain the inefficiencies that have arisen through the misapplication of these screens in copyright and design patent laws. Finally, we propose a variety of solutions that would move design protection toward a successful channeling regime, balancing the law's needs for incentives and competition. These proposals include improving doctrinal screens to weed out functionality, making design protection more costly, and preventing designers from obtaining multiple forms of protection for the same design.