
    Ranking and Selection under Input Uncertainty: Fixed Confidence and Fixed Budget

    In stochastic simulation, input uncertainty (IU) is caused by the error in estimating the input distributions using finite real-world data. When it comes to simulation-based Ranking and Selection (R&S), ignoring IU could lead to the failure of many existing selection procedures. In this paper, we study R&S under IU by allowing the possibility of acquiring additional data. Two classical R&S formulations are extended to account for IU: (i) for fixed confidence, we consider when data arrive sequentially so that IU can be reduced over time; (ii) for fixed budget, a joint budget is assumed to be available for both collecting input data and running simulations. New procedures are proposed for each formulation using the frameworks of Sequential Elimination and Optimal Computing Budget Allocation, with theoretical guarantees provided accordingly (e.g., upper bound on the expected running time and finite-sample bound on the probability of false selection). Numerical results demonstrate the effectiveness of our procedures through a multi-stage production-inventory problem.
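
    As a rough illustration of the fixed-confidence setting, the sketch below runs a generic sequential-elimination-style loop in which every round acquires additional input data (so the plug-in estimate of the input distribution improves and IU shrinks) and additional simulation replications before discarding clearly inferior alternatives. The toy simulator, the exponential input model, and the elimination slack are illustrative assumptions, not the procedures proposed in the paper.

```python
# Rough sketch (not the paper's procedure): sequential elimination for R&S under
# input uncertainty, where input data keep arriving and the input distribution is
# re-estimated by a plug-in each round. All model parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate(alt, service_rate, n):
    """Hypothetical simulation output for alternative `alt`; smaller is better."""
    return rng.normal(loc=0.1 * alt + 1.0 / service_rate, scale=0.5, size=n)

def sequential_elimination(n_alts=5, max_rounds=20, reps=30, slack=0.2):
    surviving = list(range(n_alts))
    input_data = rng.exponential(scale=2.0, size=10)      # initial real-world data
    for _ in range(max_rounds):
        # acquire more input data -> less input uncertainty in the plug-in estimate
        input_data = np.concatenate([input_data, rng.exponential(scale=2.0, size=10)])
        service_rate = 1.0 / input_data.mean()
        means = {a: simulate(a, service_rate, reps).mean() for a in surviving}
        best = min(means.values())
        surviving = [a for a in surviving if means[a] <= best + slack]
        if len(surviving) == 1:
            break
    return surviving

print("surviving alternatives:", sequential_elimination())
```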

    Panel Assignment in the Federal Courts of Appeals

    It is common knowledge that the federal courts of appeals typically hear cases in panels of three judges and that the composition of the panel can have significant consequences for case outcomes and for legal doctrine more generally. Yet neither legal scholars nor social scientists have focused on the question of how judges are selected for their panels. Instead, a substantial body of scholarship simply assumes that panel assignment is random. This Article provides what, up until this point, has been a missing account of panel assignment. Drawing on a multiyear qualitative study of five circuit courts, including in-depth interviews with thirty-five judges and senior administrators, I show that strictly random selection is a myth, and an improbable one at that—in many instances, it would have been impossible as a practical matter for the courts studied here to create their panels by random draw. Although the courts generally tried to “mix up” the judges, the chief judges and clerks responsible for setting the calendar also took into account various other factors, from collegiality to efficiency-based considerations. Notably, those factors differed from one court to the next; no two courts approached the challenge of panel assignment in precisely the same way. These findings pose an important challenge to the widespread assumption of panel randomness and reveal key normative questions that have been largely ignored in the literature. Although randomness is regarded as the default selection method across much of judicial administration, there is little exposition of why it is valuable. What, exactly, is desirable about having judges brought together randomly in the first place? What, if anything, is problematic about nonrandom methods of selection? This Article sets out to clarify both the costs and benefits of randomness, arguing that there can be valid reasons to depart from it. As such, it provides a framework for assessing different panel assignment practices and the myriad other court practices that rely, to some extent, on randomness.

    Cooperative sensing of spectrum opportunities

    Reliability and availability of sensing information gathered from local spectrum sensing (LSS) by a single Cognitive Radio are strongly affected by the propagation conditions, the sensing period, and the geographical position of the device. For this reason, cooperative spectrum sensing (CSS) has been widely proposed to improve LSS performance by exploiting cooperation between Secondary Users (SUs). The goal of this chapter is to provide a general analysis of CSS for cognitive radio networks (CRNs). First, the theoretical system model for centralized CSS is introduced, together with a preliminary discussion of several fusion rules and operative modes. Then, three main aspects of CSS that substantially differentiate the theoretical model from realistic application scenarios are analyzed: (i) the presence of spatiotemporal correlation between decisions by different SUs; (ii) the possible mobility of SUs; and (iii) the nonideality of the control channel between the SUs and the Fusion Center (FC). For each aspect, a possible practical solution for network organization is presented, showing that, in particular for the first two aspects, cluster-based CSS, in which the sensing SUs are properly chosen, can mitigate the impact of such realistic assumptions.
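
    To make the centralized model concrete, the sketch below implements hard-decision fusion at the Fusion Center with a k-out-of-N rule (k = 1 gives OR, k = N gives AND, and k around N/2 gives majority voting). The energy-detector parameters, SNR, and threshold are illustrative assumptions rather than the chapter's exact setup.

```python
# Minimal sketch of centralized cooperative spectrum sensing: each SU makes a
# 1-bit decision with a simple energy detector, and the Fusion Center combines
# the decisions with a k-out-of-N rule. All signal parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def local_energy_decision(signal_present, n_samples=64, snr_db=-5.0, threshold=1.2):
    """One SU's hard decision from an energy detector (illustrative parameters)."""
    noise = rng.normal(size=n_samples)
    if signal_present:
        amplitude = 10.0 ** (snr_db / 20.0)
        samples = noise + amplitude * rng.normal(size=n_samples)
    else:
        samples = noise
    energy = np.mean(samples ** 2)
    return int(energy > threshold)

def fusion_center(decisions, k):
    """k-out-of-N rule: declare the channel occupied if at least k SUs report it."""
    return int(sum(decisions) >= k)

n_sus = 10
decisions = [local_energy_decision(signal_present=True) for _ in range(n_sus)]
print("local decisions:", decisions)
print("FC decision (majority rule):", fusion_center(decisions, k=n_sus // 2 + 1))
```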

    Spectrum sharing security and attacks in CRNs: a review

    Cognitive Radio plays a major part in communication technology by resolving spectrum shortage through dynamic spectrum access and artificial intelligence characteristics. Spectrum sharing in cognitive radio is a fundamental approach to utilising free channels. Cooperatively communicating cognitive radio devices use the common control channel of the cognitive radio medium access control to achieve spectrum sharing. Thus, the common control channel, and consequently spectrum sharing security, is vital to ensuring security in the subsequent data communication among cognitive radio nodes. In addition to well-known security problems in wireless networks, cognitive radio networks introduce new classes of security threats and challenges, such as licensed user emulation attacks in spectrum sensing and misbehaviours in common control channel transactions, which degrade the overall network operation and performance. This review paper briefly presents the known threats and attacks in wireless networks before looking into the concept of cognitive radio and its main functionality. The paper then focuses on spectrum sharing security and its related challenges. Since spectrum sharing is enabled through the common control channel, particular attention is paid to the security of the common control channel, its threats, and the corresponding protection and detection mechanisms. Finally, the pros and cons of, and comparisons between, different CR-specific security mechanisms are presented, along with open research issues and challenges.

    Constrained Optimization in Simulation: A Novel Approach

    This paper presents a novel heuristic for constrained optimization of random computer simulation models, in which one of the simulation outputs is selected as the objective to be minimized while the other outputs need to satisfy prespecified target values. Besides the simulation outputs, the simulation inputs must meet prespecified constraints including the constraint that the inputs be integer. The proposed heuristic combines (i) experimental design to specify the simulation input combinations, (ii) Kriging (also called spatial correlation modeling) to analyze the global simulation input/output data that result from this experimental design, and (iii) integer nonlinear programming to estimate the optimal solution from the Kriging metamodels. The heuristic is applied to an (s, S) inventory system and a realistic call-center simulation model, and compared with the popular commercial heuristic OptQuest embedded in the ARENA versions 11 and 12. These two applications show that the novel heuristic outperforms OptQuest in terms of search speed (it moves faster towards high-quality solutions) and consistency of the solution quality.
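
    The sketch below strings the three ingredients of the heuristic together in a deliberately simplified form: a grid design, Gaussian-process metamodels from scikit-learn standing in for DACE-style Kriging, and brute-force enumeration of integer candidates standing in for the integer nonlinear programming step. The toy (s, S)-style outputs and the 0.95 service target are assumptions for illustration only.

```python
# Simplified stand-in for the design / Kriging / integer-optimization pipeline.
# The two noisy "simulation outputs" below are hypothetical toy functions.
import itertools
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def simulate_cost(s, S):
    """Noisy objective output (e.g., average inventory cost) -- hypothetical."""
    return (S - s - 5) ** 2 + 0.5 * s + rng.normal(scale=2.0)

def simulate_service(s, S):
    """Noisy constrained output (e.g., fill rate) -- hypothetical."""
    return 0.80 + 0.02 * s + 0.01 * (S - s) + rng.normal(scale=0.02)

# (i) experimental design: a small grid of integer input combinations with S > s
design = [(s, S) for s, S in itertools.product(range(1, 11), range(5, 31, 5)) if S > s]
X = np.array(design, dtype=float)
y_cost = np.array([simulate_cost(s, S) for s, S in design])
y_serv = np.array([simulate_service(s, S) for s, S in design])

# (ii) Kriging-style metamodels, one per simulation output
gp_cost = GaussianProcessRegressor(kernel=RBF(length_scale=5.0), normalize_y=True).fit(X, y_cost)
gp_serv = GaussianProcessRegressor(kernel=RBF(length_scale=5.0), normalize_y=True).fit(X, y_serv)

# (iii) enumerate integer candidates, minimize predicted cost s.t. the service target
cand = np.array([(s, S) for s, S in itertools.product(range(1, 11), range(5, 31)) if S > s],
                dtype=float)
serv_hat = gp_serv.predict(cand)
cost_hat = gp_cost.predict(cand)
feasible = serv_hat >= 0.95                   # prespecified target value (assumption)
best = cand[feasible][np.argmin(cost_hat[feasible])]
print("estimated optimal integer inputs (s, S):", best)
```

Brute-force enumeration is only workable here because the integer domain is tiny; for realistic input spaces an integer nonlinear programming solver, as in the paper, would take over step (iii).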

    Monotonicity-Preserving Bootstrapped Kriging Metamodels for Expensive Simulations

    Kriging (Gaussian process, spatial correlation) metamodels approximate the Input/Output (I/O) functions implied by the underlying simulation models; such metamodels serve sensitivity analysis and optimization, especially for computationally expensive simulations. In practice, simulation analysts often know that the I/O function is monotonic. To obtain a Kriging metamodel that preserves this known shape, this article uses bootstrapping (or resampling). Parametric bootstrapping assuming normality may be used in deterministic simulation, but this article focuses on stochastic simulation (including discrete-event simulation) using distribution-free bootstrapping. In stochastic simulation, the analysts should simulate each input combination several times to obtain a more reliable average output per input combination. Nevertheless, this average still shows sampling variation, so the Kriging metamodel does not need to interpolate the average outputs. Bootstrapping provides a simple method for computing a noninterpolating Kriging model. This method may use standard Kriging software, such as the free Matlab toolbox called DACE. The method is illustrated through the M/M/1 simulation model with either the estimated mean or the estimated 90% quantile as output; both outputs are monotonic functions of the traffic rate and have nonnormal distributions. The empirical results demonstrate that monotonicity-preserving bootstrapped Kriging may give higher probability of covering the true simulation output, without lengthening the confidence interval.
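
    The sketch below mimics the idea with distribution-free bootstrapping of the replicated outputs, a scikit-learn Gaussian process standing in for the DACE toolbox, and a simple acceptance test that keeps only bootstrap metamodels whose predictions increase monotonically in the traffic rate. The Lindley-recursion M/M/1 simulator, the number of replications, and the kernel settings are illustrative assumptions.

```python
# Sketch: distribution-free bootstrapped Kriging with a monotonicity filter,
# using scikit-learn Gaussian processes instead of DACE. Settings are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

traffic_rates = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
n_reps = 20                                   # replications per input combination

def mean_wait(rho, n_customers=200):
    """Crude M/M/1 mean-waiting-time estimate via the Lindley recursion (service rate 1)."""
    w, total = 0.0, 0.0
    for _ in range(n_customers):
        w = max(0.0, w + rng.exponential(1.0) - rng.exponential(1.0 / rho))
        total += w
    return total / n_customers

# replicated simulation outputs for each traffic rate
outputs = np.array([[mean_wait(r) for _ in range(n_reps)] for r in traffic_rates])

grid = np.linspace(0.3, 0.8, 50).reshape(-1, 1)
accepted = []
for _ in range(100):                          # distribution-free bootstrap samples
    idx = rng.integers(0, n_reps, size=n_reps)            # resample replications
    boot_avg = outputs[:, idx].mean(axis=1)
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=0.2) + WhiteKernel(noise_level=0.1),
        normalize_y=True,
    ).fit(traffic_rates.reshape(-1, 1), boot_avg)
    pred = gp.predict(grid)
    if np.all(np.diff(pred) >= 0.0):          # keep only monotone metamodels
        accepted.append(pred)

print("monotone bootstrap metamodels kept:", len(accepted), "of 100")
if accepted:
    lo, hi = np.percentile(np.array(accepted), [5, 95], axis=0)   # pointwise 90% CI
    print("90% CI for the mean wait at rho = 0.8:", (lo[-1], hi[-1]))
```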