41 research outputs found

    A three-stage optimal operation strategy of interconnected microgrids with rule-based deep deterministic policy gradient algorithm

    Get PDF
    The ever-increasing requirements of demand-response dynamics, competition among different stakeholders, and information-privacy protection intensify the challenge of operating microgrids optimally. To tackle these problems, this article proposes a three-stage optimization strategy with deep reinforcement learning (DRL)-based distributed privacy optimization. In the upper layer of the model, a rule-based deep deterministic policy gradient (DDPG) algorithm is proposed to optimize the load-migration problem with demand response, which enhances dynamic characteristics through the interaction between electricity prices and consumer behavior. Because of the competition among stakeholders and the information-privacy requirement in the middle layer of the model, a potential game-based distributed privacy optimization algorithm is developed to seek Nash equilibria (NEs) with exchange information encoded by a distributed privacy-preserving optimization algorithm, which ensures convergence while protecting each stakeholder's private information. In the lower layer of each stakeholder's model, economic cost and emission rate are both taken as operation objectives, and a gradient descent-based multiobjective optimization method is employed to approach them. The simulation results confirm that the proposed three-stage optimization strategy is a viable and efficient way to operate microgrids optimally. Supported in part by the National Natural Science Fund, the Basic Research Project of Leading Technology of Jiangsu Province, the National Natural Science Fund of Jiangsu Province, and the National Natural Science Key Fund.
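    The lower-layer step, where each stakeholder trades off economic cost against emission rate by gradient descent, can be illustrated with a minimal weighted-sum sketch. The quadratic objectives, weight `w`, and step size below are illustrative assumptions, not the paper's actual models:

```python
import numpy as np

def multiobjective_descent(grad_cost, grad_emission, x0, w=0.5, lr=0.01, steps=200):
    """Weighted-sum scalarization: minimize w*cost + (1-w)*emission by gradient descent."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = w * grad_cost(x) + (1.0 - w) * grad_emission(x)
        x -= lr * g
    return x

# Toy quadratic objectives standing in for economic cost and emission rate,
# with minima at x = 1 and x = -1 respectively.
grad_cost = lambda x: 2.0 * (x - 1.0)
grad_emission = lambda x: 2.0 * (x + 1.0)

x_star = multiobjective_descent(grad_cost, grad_emission, x0=[2.0, -1.0], w=0.5)
# With equal weights the scalarized objective is minimized at x = 0.
```

    Weighted-sum scalarization is the simplest way to expose the cost-emission trade-off; sweeping `w` from 0 to 1 traces an approximation of the Pareto front between the two objectives.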

    A learning-based CT prostate segmentation method via joint transductive feature selection and regression

    Get PDF
    In recent years there has been great interest in prostate segmentation, an important and challenging task for CT image-guided radiotherapy. In this paper, a learning-based segmentation method via joint transductive feature selection and transductive regression is presented, which incorporates the physician's simple manual specification (taking only a few seconds) to aid accurate segmentation, especially for cases with large irregular prostate motion. More specifically, for the current treatment image, an experienced physician first manually assigns labels to a small subset of prostate and non-prostate voxels, especially in the first and last slices of the prostate region. The proposed method then proceeds in two steps. In the prostate-likelihood estimation step, two novel algorithms, tLasso and wLapRLS, are sequentially employed for transductive feature selection and transductive regression, respectively, to generate the prostate-likelihood map. In the multi-atlas-based label fusion step, the final segmentation result is obtained from the corresponding prostate-likelihood map and the previous images of the same patient. The proposed method has been extensively evaluated on a real prostate CT dataset of 24 patients with 330 CT images, and compared with several state-of-the-art methods. Experimental results show that the proposed method outperforms the state-of-the-art methods in terms of higher Dice ratio, higher true-positive fraction, and lower centroid distance. The results also demonstrate that simple manual specification can help improve segmentation performance, which is clinically feasible in real practice.
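    The label-fusion idea can be sketched as a weighted vote between a likelihood map and binary label maps from the patient's previous images. This is purely illustrative; `fuse_labels` and the weight `alpha` are hypothetical names, not the paper's actual fusion rule:

```python
import numpy as np

def fuse_labels(likelihood, atlas_labels, alpha=0.5):
    """Fuse a prostate-likelihood map with binary label maps from previous
    images of the same patient (treated here as atlases) by weighted voting."""
    atlas_vote = np.mean(atlas_labels, axis=0)       # fraction of atlases voting "prostate"
    score = alpha * likelihood + (1 - alpha) * atlas_vote
    return (score >= 0.5).astype(np.uint8)

# A 2x2 toy image: high likelihood and atlas agreement in the left column.
likelihood = np.array([[0.9, 0.2], [0.6, 0.1]])
atlases = np.array([[[1, 0], [1, 0]],
                    [[1, 0], [0, 0]]])
seg = fuse_labels(likelihood, atlases)
```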

    Multidimensional Balance-Based Cluster Boundary Detection for High-Dimensional Data

    Full text link
    © 2018 IEEE. The balance of the neighborhood space around a central point is an important concept in cluster analysis and can be used to effectively detect cluster boundary objects. Existing neighborhood analysis methods focus on the distribution of the data, i.e., they analyze the characteristics of the neighborhood space from a single perspective and cannot capture rich data characteristics. In this paper, we analyze the high-dimensional neighborhood space from multiple perspectives. By modeling each dimension of a data point's k-nearest-neighbor space (kNNs) as a lever, we apply the lever principle to compute the balance fulcrum of each dimension after proving its existence and uniqueness. We then model the distance between the projected coordinate of the data point and the balance fulcrum on each dimension and construct the DHBlan coefficient to measure the balance of the neighborhood space. Based on this theoretical model, we propose a simple yet effective cluster boundary detection algorithm called Lever. Experiments on both low- and high-dimensional data sets validate the effectiveness and efficiency of the proposed algorithm.
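    The intuition behind the lever-style balance analysis can be sketched as follows. For simplicity this toy version approximates the balance fulcrum by the neighbor mean and scores each point by its distance to that fulcrum; it is a simplification of, not a substitute for, the paper's per-dimension fulcrum computation and DHBlan coefficient:

```python
import numpy as np

def imbalance_scores(X, k=5):
    """For each point, measure how far it sits from the balance point
    (approximated here by the per-dimension mean) of its k nearest neighbors.
    Boundary points have unbalanced neighborhoods and hence large scores."""
    n = len(X)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    scores = np.empty(n)
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]      # k nearest neighbors, excluding self
        fulcrum = X[nn].mean(axis=0)         # approximate balance point
        scores[i] = np.linalg.norm(X[i] - fulcrum)
    return scores

# Interior points of a uniform blob balance well; a far-out point does not.
rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(-1, 1, size=(50, 2)), [[3.0, 3.0]]])
s = imbalance_scores(X)
```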

    Cost-sensitive weighting and imbalance-reversed bagging for streaming imbalanced and concept drifting in electricity pricing classification

    Get PDF
    National Natural Science Foundation of China Grants 61572201 and 51707041; Guangzhou Science and Technology Plan Project 201804010245; Fundamental Research Funds for the Central Universities 2017ZD052; Guangdong University of Technology Grant from the Financial and Education Department of Guangdong Province 2016[202]; Education Department of Guangdong Province project number 2016KCXTD022; State Grid Technology Project Grant 5211011600RJ

    A novel consistent random forest framework: Bernoulli random forests

    Full text link
    © 2012 IEEE. Random forests (RFs) are a well-recognized ensemble learning method and are effective for most classification and regression tasks. Despite their impressive empirical performance, the theory of RFs has not yet been fully established. Several theoretically guaranteed RF variants have been presented, but their poor practical performance has drawn criticism. In this paper, a novel RF framework named Bernoulli RFs (BRFs) is proposed, with the aim of resolving the RF dilemma between theoretical consistency and empirical performance. In contrast to the RFs proposed by Breiman, BRF uses two independent Bernoulli distributions to simplify tree construction: one controls the splitting-feature selection and the other the splitting-point selection. Consequently, theoretical consistency is ensured in BRF, i.e., convergence of the learning performance to the optimum is guaranteed as the amount of data grows to infinity. Importantly, the proposed BRF is consistent for both classification and regression. BRF achieves the best empirical performance when compared with state-of-the-art theoretically consistent RFs. This advance toward closing the gap between theory and practice in RF research is verified by the theoretical and experimental studies in this paper.
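    The two Bernoulli draws can be sketched as follows; this is an illustrative simplification in which hypothetical probabilities `p1` and `p2` gate random versus greedy choices of the splitting feature and split point (the paper's exact construction differs in detail):

```python
import random

def gini(groups):
    """Weighted Gini impurity of a binary split over 0/1 labels."""
    n = sum(len(g) for g in groups)
    score = 0.0
    for g in groups:
        if not g:
            continue
        p = sum(g) / len(g)
        score += (len(g) / n) * 2 * p * (1 - p)
    return score

def bernoulli_split(X, y, p1=0.05, p2=0.05, rng=random):
    """With probability p1 the splitting feature is chosen uniformly at random
    (otherwise all features are searched greedily); with probability p2 the
    split point is a random sample value (otherwise the impurity-minimizing
    one). The random branches underpin the consistency guarantee; the greedy
    branches preserve empirical accuracy."""
    n_feat = len(X[0])
    feats = [rng.randrange(n_feat)] if rng.random() < p1 else range(n_feat)
    best = None
    for f in feats:
        values = sorted({row[f] for row in X})
        thresholds = [rng.choice(values)] if rng.random() < p2 else values
        for t in thresholds:
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            g = gini([left, right])
            if best is None or g < best[0]:
                best = (g, f, t)
    return best  # (impurity, feature index, threshold)

X = [[0.1, 5], [0.2, 4], [0.9, 1], [0.8, 2]]
y = [0, 0, 1, 1]
imp, f, t = bernoulli_split(X, y, p1=0.0, p2=0.0)  # greedy branches only
```

    With `p1 = p2 = 0` the sketch degenerates to ordinary greedy CART-style splitting; increasing the probabilities injects the randomness that makes the consistency analysis tractable.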

    Passive attack detection for a class of stealthy intermittent integrity attacks

    Get PDF
    This paper proposes a passive methodology for detecting a class of stealthy intermittent integrity attacks in cyber-physical systems subject to process disturbances and measurement noise. A stealthy intermittent integrity attack strategy is first proposed by modifying a zero-dynamics attack model. The stealthiness of the generated attacks is rigorously investigated under the condition that the adversary does not know the system state values precisely. To help detect such attacks, a backward-in-time detection residual is proposed based on an equivalent quantity of the system state change, due to the attack, at a time prior to the attack occurrence time. A key characteristic of this residual is that its magnitude increases every time a new attack occurs. To estimate this unknown residual, an optimal fixed-point smoother is proposed by minimizing a piecewise linear quadratic cost function with a set of specifically designed weighting matrices. The smoother design guarantees robustness with respect to process disturbances and measurement noise, and maintains sensitivity to intermittent integrity attacks as time progresses by resetting the covariance matrix based on the weighting matrices. An adaptive threshold is designed based on the estimated backward-in-time residual, and attack detectability is rigorously analyzed to characterize quantitatively the class of attacks that can be detected by the proposed methodology. Finally, a simulation example demonstrates the effectiveness of the developed methodology.
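    The role of an adaptive threshold can be illustrated with a toy residual detector. This is a generic sketch, not the paper's smoother-based residual or threshold design; the decay rate and the 3x margin are arbitrary assumptions:

```python
def detect(residual, base=1.0, decay=0.95):
    """Flag times where |residual| exceeds an adaptive threshold that tracks
    a decayed running scale of past residual magnitudes: slowly varying noise
    is absorbed into the threshold, while abrupt attack-induced jumps are not."""
    scale = base
    alarms = []
    for t, r in enumerate(residual):
        if abs(r) > 3 * scale:
            alarms.append(t)
        scale = decay * scale + (1 - decay) * abs(r)
    return alarms

# Low-level noise for 50 steps, then an attack-like jump in the residual.
residual = [0.1] * 50 + [5.0] * 5
alarms = detect(residual)
```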

    Linking brain structure, activity and cognitive function through computation

    Get PDF
    Understanding the human brain is a “Grand Challenge” for 21st-century research. Computational approaches enable large and complex datasets to be addressed efficiently, supported by artificial neural networks, modeling and simulation. Dynamic generative multiscale models, which enable the investigation of causation across scales and are guided by principles and theories of brain function, are instrumental for linking brain structure and function. An example of a resource enabling such an integrated approach to neuroscientific discovery is the BigBrain, which spatially anchors tissue models and data across different scales and ensures that multiscale models are supported by the data, making the bridge to both basic neuroscience and medicine. Research at the intersection of neuroscience, computing and robotics has the potential to advance neuro-inspired technologies by taking advantage of a growing body of insights into perception, plasticity and learning. To render data, tools and methods, theories, basic principles and concepts interoperable, the Human Brain Project (HBP) has launched EBRAINS, a digital neuroscience research infrastructure, which brings together a transdisciplinary community of researchers united by the quest to understand the brain, with fascinating insights and perspectives for societal benefits.

    Contrastive Video Question Answering via Video Graph Transformer

    Full text link
    We propose to perform video question answering (VideoQA) in a contrastive manner via a Video Graph Transformer model (CoVGT). CoVGT's uniqueness and superiority are threefold: 1) It proposes a dynamic graph transformer module that encodes video by explicitly capturing the visual objects, their relations, and their dynamics for complex spatio-temporal reasoning. 2) It designs separate video and text transformers for contrastive learning between the video and text to perform QA, instead of a multi-modal transformer for answer classification; fine-grained video-text communication is done by additional cross-modal interaction modules. 3) It is optimized by joint fully- and self-supervised contrastive objectives between the correct and incorrect answers, as well as between the relevant and irrelevant questions, respectively. With superior video encoding and QA solution, we show that CoVGT achieves much better performance than previous methods on video reasoning tasks; its performance even surpasses models pretrained with millions of external data. We further show that CoVGT can also benefit from cross-modal pretraining, yet with orders of magnitude less data. The results demonstrate the effectiveness and superiority of CoVGT, and additionally reveal its potential for more data-efficient pretraining. We hope our success can advance VideoQA beyond coarse recognition/description towards fine-grained relational reasoning over video contents. Our code is available at https://github.com/doc-doc/CoVGT.
    Comment: Accepted by IEEE T-PAMI'2
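    The contrastive objective between correct and incorrect answers can be sketched as an InfoNCE-style softmax over embedding similarities. The embeddings and temperature below are toy values, and the actual CoVGT objectives are richer than this single term:

```python
import numpy as np

def contrastive_qa_loss(q, answers, correct_idx, tau=0.1):
    """InfoNCE-style objective: the negative log-softmax probability of the
    correct answer under cosine similarities scaled by temperature tau."""
    q = q / np.linalg.norm(q)
    A = answers / np.linalg.norm(answers, axis=1, keepdims=True)
    logits = A @ q / tau                         # similarity to each candidate
    logp = logits - np.log(np.sum(np.exp(logits)))
    return -logp[correct_idx]

# A question embedding aligned with answer 0 and opposed to answer 2.
q = np.array([1.0, 0.0])
answers = np.array([[1.0, 0.1], [0.0, 1.0], [-1.0, 0.2]])
loss_good = contrastive_qa_loss(q, answers, correct_idx=0)
loss_bad = contrastive_qa_loss(q, answers, correct_idx=2)
```

    Minimizing this loss pulls the video-question representation toward the correct answer embedding and pushes it away from the distractors, which is the mechanism the abstract contrasts with plain answer classification.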