
    Advanced Studies on Locational Marginal Pricing

    The effectiveness and economic aspects of the Locational Marginal Price (LMP) formulation for power trading in both Day-Ahead (DA) and Real-Time (RT) operation are a focus not only of the system operator but also of numerous market participants. In addition, with the ever-increasing penetration of renewable energy integrated into the grid, uncertainty plays a larger role in market operation. The study is carried out in four parts. In the first part, the mathematical programming models that produce the generation dispatch solution for the Ex Post LMP are reviewed. The existing approach fails to meet the premise that the Ex Post LMP should equal the Ex Ante LMP when all generation and load combinations in RT operation remain the same as in the DA market. Thus, a similar yet effective approach, based on a scaling factor applied to the Ex Ante dispatch model, is proposed. In the second part, the step-change characteristic of LMP and the Critical Load Level (CLL) effect are investigated together with stochastic wind power to evaluate their impact on market price volatility. A lookup-table-based Monte Carlo simulation is adopted to capture the probabilistic nature of wind power and to assess the probability distribution of the price signals. In the third part, a probability-driven, multilayer framework is proposed for ISOs to schedule intermittent wind power and other renewables. The fundamental idea is to view intermittent renewable energy, from the operator's viewpoint, as a product of lower quality than the output of dispatchable power plants. This new way of handling the scheduling problem under uncertainty greatly relieves the intensive computational burden of stochastic Unit Commitment (UC) and Economic Dispatch (ED).
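As a rough illustration of the lookup-table Monte Carlo idea described above: the step-wise LMP-versus-load relationship is stored as a table keyed by critical load levels, stochastic wind output is sampled and subtracted from load, and the resulting net load is mapped to a price on each trial. All numbers here (load levels, prices, wind statistics) are hypothetical placeholders, not values from the study.

```python
import random
import bisect

# Hypothetical step-wise LMP table: critical load levels (MW) and the
# LMP ($/MWh) that applies once net load reaches each step. Illustrative only.
critical_load_levels = [200.0, 350.0, 500.0, 650.0]
lmp_at_level = [18.0, 25.0, 42.0, 75.0]

def lmp_lookup(net_load):
    """Return the LMP for a given net load via the step table."""
    i = bisect.bisect_left(critical_load_levels, net_load)
    return lmp_at_level[min(i, len(lmp_at_level) - 1)]

def simulate_lmp_distribution(base_load, wind_mean, wind_std,
                              n_trials=10000, seed=0):
    """Monte Carlo over stochastic wind output; wind offsets the load,
    so net load (and hence the step the LMP lands on) varies per trial."""
    rng = random.Random(seed)
    prices = []
    for _ in range(n_trials):
        wind = max(0.0, rng.gauss(wind_mean, wind_std))  # truncate at zero
        prices.append(lmp_lookup(base_load - wind))
    return prices

prices = simulate_lmp_distribution(base_load=480.0, wind_mean=60.0, wind_std=30.0)
# The empirical frequency of each price step approximates the LMP distribution.
probs = {p: prices.count(p) / len(prices) for p in sorted(set(prices))}
```

Because the LMP is a step function of load, uncertainty in wind concentrates probability mass on a few discrete price levels near a CLL, which is exactly the price-volatility effect the second part studies.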
In the last part, because the R/X ratio along a radial distribution feeder is relatively high but similar across sections, a modified DC power flow approach can be used to reduce the computational effort. In addition, a distribution LMP (DLMP) is formulated that carries both a real and a reactive power price, under the linearized optimal power flow (OPF) model.
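A minimal sketch of the kind of linearized radial-feeder calculation this last part refers to, assuming a LinDistFlow-style approximation in which each section's voltage drop is linearized as (rP + xQ)/V0 and every section shares a similar R/X ratio. Keeping both the P and Q terms is what allows a DLMP to carry both a real and a reactive power price; all impedances and loads below are illustrative per-unit values, not data from the study.

```python
V0 = 1.0  # substation voltage, per unit

# Radial feeder, substation -> end: (r_pu, x_pu) per section.
# Note the similar R/X ratio on every section, as in the abstract.
sections = [(0.02, 0.01), (0.02, 0.01), (0.02, 0.01)]
# (P_pu, Q_pu) load drawn at the bus downstream of each section.
loads = [(0.3, 0.1), (0.2, 0.08), (0.1, 0.05)]

def bus_voltages(sections, loads, v0=V0):
    """Approximate bus voltages along the feeder: each section carries the
    sum of all downstream loads, and drops are linearized around v0."""
    voltages = []
    v = v0
    for k, (r, x) in enumerate(sections):
        p = sum(pl for pl, _ in loads[k:])  # real power through section k
        q = sum(ql for _, ql in loads[k:])  # reactive power through section k
        v -= (r * p + x * q) / v0           # linearized voltage drop
        voltages.append(v)
    return voltages
```

In a transmission-style DC power flow the resistive (rP) and reactive (xQ) terms would be dropped or decoupled; retaining them here is the "modified" part that suits the high-R/X distribution setting.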

    A hierarchical graph model for object cosegmentation


    The role of sensory uncertainty in simple contour integration

    Perceptual organization is the process of grouping scene elements into whole entities. A classic example is contour integration, in which separate line segments are perceived as continuous contours. Uncertainty in such grouping arises from scene ambiguity and sensory noise. Some classic Gestalt principles of contour integration, and more broadly, of perceptual organization, have been re-framed in terms of Bayesian inference, whereby the observer computes the probability that the whole entity is present. Previous studies that proposed a Bayesian interpretation of perceptual organization, however, have ignored sensory uncertainty, despite the fact that accounting for the current level of perceptual uncertainty is one of the main signatures of Bayesian decision making. Crucially, trial-by-trial manipulation of sensory uncertainty is a key test of whether humans perform near-optimal Bayesian inference in contour integration, as opposed to using some manifestly non-Bayesian heuristic. We distinguish between these hypotheses in a simplified form of contour integration, namely judging whether two line segments separated by an occluder are collinear. We manipulate sensory uncertainty by varying retinal eccentricity. A Bayes-optimal observer would take the level of sensory uncertainty into account, in a very specific way, in deciding whether a measured offset between the line segments is due to non-collinearity or to sensory noise. We find that people deviate slightly but systematically from Bayesian optimality, while still performing "probabilistic computation" in the sense that they take into account sensory uncertainty via a heuristic rule. Our work contributes to an understanding of the role of sensory uncertainty in higher-order perception. Author summary: Our percept of the world is governed not only by the sensory information we have access to, but also by the way we interpret this information.
When presented with a visual scene, our visual system groups visual elements together to form coherent entities so that we can interpret the scene more readily and meaningfully. For example, when looking at a pile of autumn leaves, one can still perceive and identify a whole leaf even when it is partially covered by another leaf. While Gestalt psychologists have long described perceptual organization with a set of qualitative laws, recent studies offered a statistically optimal (Bayesian, in statistical jargon) interpretation of this process, whereby the observer chooses the scene configuration with the highest probability given the available sensory inputs. However, these studies drew their conclusions without considering a key factor in this kind of statistically optimal computation: sensory uncertainty. One can easily imagine that our decision on whether two contours belong to the same leaf or to different leaves is likely to change when we move from viewing the pile of leaves at a great distance (high sensory uncertainty) to viewing it up close (low sensory uncertainty). Our study examines whether and how people incorporate uncertainty into contour integration, an elementary form of perceptual organization, by varying sensory uncertainty from trial to trial in a simple contour integration task. We found that people do take sensory uncertainty into account, albeit in a way that subtly deviates from optimal behavior.
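The uncertainty-dependent decision the abstract describes can be sketched under simple Gaussian assumptions (illustrative, not the authors' exact generative model): if the measured offset difference d = x1 - x2 between the two segments is distributed as N(0, 2σ²) when they are collinear and N(0, 2(σ² + σs²)) when they are not (σ being the sensory noise, σs the spread of true offsets), the Bayes-optimal observer reports "collinear" whenever |d| falls below a threshold that grows with σ.

```python
import math

def collinearity_threshold(sigma, sigma_s, prior_c=0.5):
    """Threshold on |x1 - x2| above which 'non-collinear' is the more
    probable hypothesis. Derived from the likelihood ratio of two
    zero-mean Gaussians plus the log prior odds of collinearity."""
    var1 = 2 * sigma ** 2                    # var of d under 'collinear'
    var0 = 2 * (sigma ** 2 + sigma_s ** 2)   # var of d under 'non-collinear'
    log_prior = math.log(prior_c / (1.0 - prior_c))
    num = 0.5 * math.log(var0 / var1) + log_prior
    den = 0.5 * (1.0 / var1 - 1.0 / var0)
    return math.sqrt(num / den) if num > 0 else 0.0

def report_collinear(x1, x2, sigma, sigma_s=1.0):
    """Report 'collinear' when the measured offset difference is smaller
    than the uncertainty-dependent threshold."""
    return abs(x1 - x2) < collinearity_threshold(sigma, sigma_s)
```

The key signature tested in the study is visible in the rule: the same measured offset can yield "non-collinear" at low σ (foveal viewing) but "collinear" at high σ (peripheral viewing), because a larger offset is then plausibly attributable to noise.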

    Honest Score Client Selection Scheme: Preventing Federated Learning Label Flipping Attacks in Non-IID Scenarios

    Federated Learning (FL) is a promising technology that enables multiple actors to build a joint model without sharing their raw data. Its distributed nature makes FL vulnerable to various poisoning attacks, including model poisoning attacks and data poisoning attacks. Many Byzantine-resilient FL methods have been introduced to mitigate model poisoning attacks, while their effectiveness in defending against data poisoning attacks remains unclear. In this paper, we focus on the most representative data poisoning attack, the "label flipping attack", and examine its effectiveness against existing FL methods. The results show that the existing FL methods perform similarly in independent and identically distributed (IID) settings but fail to maintain model robustness in non-IID settings. To mitigate the weaknesses of existing FL methods in non-IID scenarios, we introduce the Honest Score Client Selection (HSCS) scheme and the corresponding HSCSFL framework. In HSCSFL, the server collects a clean dataset for evaluation. In each iteration, the server collects the gradients from the clients and then performs HSCS to select aggregation candidates. The server first evaluates the performance of each class of the global model and generates the corresponding risk vector to indicate which classes could potentially be under attack. Similarly, the server evaluates each client's model and records the per-class performance as an accuracy vector. The dot product of each client's accuracy vector and the global risk vector gives the client's honest score; only the clients with the top p% honest scores are included in the following aggregation. Finally, the server aggregates the gradients and uses the outcome to update the global model. Comprehensive experimental results show that HSCSFL effectively enhances FL robustness and defends against the "label flipping attack".
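The selection step described in the abstract can be sketched as follows. The per-class accuracy vectors, the risk vector, and the top-p fraction are illustrative placeholders for quantities the server would actually compute on its clean evaluation set; the scoring and selection logic is the part taken from the description above.

```python
import math

def honest_score_selection(client_acc_vectors, global_risk_vector, top_p=0.5):
    """HSCS-style selection: score each client by the dot product of its
    per-class accuracy vector with the global per-class risk vector, then
    keep the top-p fraction of clients for aggregation."""
    scores = {
        cid: sum(a * r for a, r in zip(acc, global_risk_vector))
        for cid, acc in client_acc_vectors.items()
    }
    k = max(1, math.ceil(len(scores) * top_p))
    selected = sorted(scores, key=scores.get, reverse=True)[:k]
    return selected, scores

# Hypothetical 3-class example: class 1 shows degraded global accuracy,
# so the risk vector weights it heavily.
risk = [0.1, 0.8, 0.1]
clients = {
    "honest_a": [0.90, 0.90, 0.90],
    "honest_b": [0.85, 0.88, 0.90],
    "flipped":  [0.90, 0.10, 0.90],  # poor on the attacked class
}
selected, scores = honest_score_selection(clients, risk, top_p=0.66)
# The label-flipped client scores lowest and is excluded from aggregation.
```

The design intuition: a client whose labels were flipped for class c will perform poorly on exactly the class the global risk vector flags, so the dot product suppresses its score even when its accuracy on other classes looks normal.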