135 research outputs found

    Quantitative Analysis on Time-series Nodal Voltages in Linear-time Intervals

    Get PDF
    The power flow problem is fundamental to all aspects of modeling, analysis, operation, and control of transmission and distribution systems. In a nutshell, it amounts to solving for the nodal voltages in the nonlinear active- and reactive-power balance equations that characterize the steady states of AC electric networks. The traditional power flow algorithm focuses on the steady operational state at a single snapshot and calculates the corresponding voltage and power distributions for given nodal power injections and network topology. To better capture the temporal characteristics of power injections and system variables with high accuracy, this paper first defines a linear-time interval for nodal power injections and then analyzes the norms of the nodal voltage derivatives, which are leveraged to simplify the solution of the nonlinear, time-varying problem. The voltage monotonicity property is guaranteed under the proposed linear-time interval. Simulation case studies on the IEEE 5-bus and modified 118-bus systems demonstrate the effectiveness and efficiency of the proposed algorithm.
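    For reference, the nodal active- and reactive-power balance equations mentioned above are commonly written in polar form as below; this is the generic textbook statement, not necessarily the notation used in the paper.

```latex
P_i = V_i \sum_{k=1}^{N} V_k \left( G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik} \right), \qquad
Q_i = V_i \sum_{k=1}^{N} V_k \left( G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik} \right),
```

    where $V_i$ is the voltage magnitude at bus $i$, $\theta_{ik}=\theta_i-\theta_k$ is the angle difference, and $G_{ik}+jB_{ik}$ is the $(i,k)$ entry of the bus admittance matrix.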

    Optimizing Grid Resilience: A Capacity Reserve Market for High Impact Low Probability Events

    Full text link
    This paper addresses the challenges of high-impact low-probability (HILP) events by proposing a novel capacity reserve event market for mobile generation assets, aimed at supporting the transmission network during such incidents. Although portable generators and mobile energy units are useful for restoring power, they come with drawbacks such as environmental impact, finite operation, and complex cost recovery. The proposed market integrates these resources into a dispatch framework based on pre-established contracts, ensuring fair compensation and accounting for factors such as capacity, pricing, and travel distance. Resource owners receive advance notification of potential events, allowing them to adjust their bids for cost recovery. Simulations on an IEEE 30-bus case demonstrate the model's effectiveness in increasing grid resilience.
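    As a rough illustration of the kind of contract-based dispatch decision described in the abstract, the following sketch ranks pre-contracted mobile assets by an assumed score built from capacity, bid price, and travel distance. The field names and scoring rule are illustrative assumptions, not the paper's market-clearing model.

```python
# Hypothetical sketch: rank pre-contracted mobile generation assets for
# dispatch after a HILP event, using the factors named in the abstract
# (capacity, price, travel distance). The scoring rule is an assumption.
from dataclasses import dataclass


@dataclass
class MobileAsset:
    owner: str
    capacity_mw: float      # contracted capacity
    price_per_mwh: float    # bid price from the owner's contract
    travel_km: float        # distance to the affected substation


def dispatch_order(assets, needed_mw, travel_cost_per_km=2.0):
    """Greedily pick assets by an assumed cost score until demand is covered."""
    scored = sorted(
        assets,
        key=lambda a: a.price_per_mwh
        + travel_cost_per_km * a.travel_km / max(a.capacity_mw, 1e-6),
    )
    selected, covered = [], 0.0
    for a in scored:
        if covered >= needed_mw:
            break
        selected.append(a)
        covered += a.capacity_mw
    return selected


if __name__ == "__main__":
    fleet = [
        MobileAsset("A", 5.0, 120.0, 30.0),
        MobileAsset("B", 10.0, 150.0, 10.0),
        MobileAsset("C", 3.0, 90.0, 80.0),
    ]
    for asset in dispatch_order(fleet, needed_mw=12.0):
        print(asset.owner, asset.capacity_mw)
```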

    Benchmarking and Explaining Large Language Model-based Code Generation: A Causality-Centric Approach

    Full text link
    While code generation has been widely used in various software development scenarios, the quality of the generated code is not guaranteed. This is a particular concern in the era of large language model (LLM)-based code generation, where an LLM, a complex and powerful black-box model, is instructed by a high-level natural language specification, namely a prompt, to generate code. Nevertheless, effectively evaluating and explaining the code generation capability of LLMs is inherently challenging, given their complexity and lack of transparency. Inspired by recent progress in causality analysis and its application in software engineering, this paper presents a causality analysis-based approach to systematically analyze the causal relations between LLM input prompts and the generated code. To handle the technical challenges involved, we first propose a novel causal graph-based representation of the prompt and the generated code, established over fine-grained, human-understandable concepts in the input prompts. The formed causal graph is then used to identify the causal relations between the prompt and the derived code. We illustrate the insights our framework can provide by studying three popular LLMs with over 12 prompt adjustment strategies. The results illustrate the potential of our technique to provide insights into LLM effectiveness and to aid end-users in understanding predictions. Additionally, we demonstrate that our approach yields actionable insights for improving the quality of LLM-generated code by properly calibrating the prompt.
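    To make the idea of a causal graph over prompt concepts concrete, here is a minimal, self-contained toy sketch: prompt concepts act as treatments and a code-quality outcome as the effect. The concept names, graph edges, hypothetical observations, and the naive effect estimate are all illustrative assumptions, not the paper's framework.

```python
# Toy causal-graph sketch: prompt concepts (treatments) -> code outcome.
# Data and effect estimation are purely illustrative.

# Observations: which prompt concepts were present, and whether the
# generated code passed its tests (hypothetical data).
observations = [
    {"has_examples": 1, "specifies_types": 1, "tests_pass": 1},
    {"has_examples": 1, "specifies_types": 0, "tests_pass": 1},
    {"has_examples": 0, "specifies_types": 1, "tests_pass": 0},
    {"has_examples": 0, "specifies_types": 0, "tests_pass": 0},
]

# Causal graph as an edge list: prompt concept -> code outcome.
causal_edges = [("has_examples", "tests_pass"), ("specifies_types", "tests_pass")]


def naive_effect(data, treatment, outcome):
    """Difference in outcome rate with vs. without the treatment concept
    (a naive associational estimate standing in for a proper causal one)."""
    with_t = [d[outcome] for d in data if d[treatment] == 1]
    without_t = [d[outcome] for d in data if d[treatment] == 0]
    return sum(with_t) / len(with_t) - sum(without_t) / len(without_t)


for cause, effect in causal_edges:
    est = naive_effect(observations, cause, effect)
    print(f"{cause} -> {effect}: estimated effect {est:+.2f}")
```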

    Approximate Chance-Constrained Unit Commitment Under Wind Energy Penetration

    Get PDF
    We study a multi-period unit commitment problem under wind energy penetration, in which the load balance is enforced with a predefined confidence level across the whole system and over the planning horizon. Since, except for special cases, chance-constrained problems are non-convex, we analyze two relaxations of the load balance based on robust optimization ideas and estimated quantiles of the marginal distributions of the net load processes. The proposed approximations are benchmarked against the well-known scenario approximation. Under the scenario approach, we also analyze a simple decomposition strategy to find a lower bound of the approximate problem when the latter becomes intractable due to the size of the scenario set. The reliability of the obtained solutions, as well as their runtimes, is examined on three widespread test systems.
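    For context, a joint load-balance chance constraint of this kind is commonly stated as follows, together with the standard per-period quantile approximation; the notation is generic and not necessarily the paper's.

```latex
\Pr\!\Big( \textstyle\sum_{g} p_{g,t} + w_t \ge d_t, \ \forall t = 1,\dots,T \Big) \ge 1 - \epsilon,
```

    where $p_{g,t}$ is the output of unit $g$ in period $t$, $w_t$ the wind infeed, and $d_t$ the load. A quantile-based single-period approximation replaces the random net load $d_t - w_t$ by its $(1-\epsilon)$-quantile:

```latex
\sum_{g} p_{g,t} \;\ge\; q_{1-\epsilon}\!\left(d_t - w_t\right), \qquad t = 1,\dots,T.
```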

    A novel buckwheat protein with a beneficial effect in atherosclerosis was purified from Fagopyrum tataricum (L.) Gaertn.

    Get PDF
    Buckwheat seeds contain many kinds of functional compounds that benefit patients with cardiovascular disease. In this research, a water-soluble buckwheat protein was isolated and purified using a DEAE-Sepharose anion exchange column and Sephadex G-75 gel chromatography. The isolated buckwheat protein fractions exhibited hypocholesterolemic activity in a HepG2 cell model and prominent bile acid salt-binding activity in an in vitro assay. The antioxidative activity of the protein fractions with hypolipidemic effects was detected in a free radical scavenging experiment. The buckwheat protein fraction with the strongest hypolipidemic and free radical scavenging activity was named WSBWP; its molecular weight was estimated by SDS-PAGE electrophoresis to be 38 kDa. It is a potential candidate for the treatment of atherosclerosis.

    On the Feasibility of Specialized Ability Stealing for Large Language Code Models

    Full text link
    Recent progress in large language code models (LLCMs) has led to a dramatic surge in their use in software development. Nevertheless, it is widely known that training a well-performing LLCM requires substantial human effort for data collection and high-quality annotation. Additionally, the training dataset may be proprietary (or only partially open to the public), and training is often conducted on a large-scale GPU cluster at high cost. Inspired by the recent success of imitation attacks in stealing computer vision and natural language models, this work launches the first imitation attack on LLCMs: by querying a target LLCM with carefully designed queries and collecting the outputs, the adversary can train an imitation model whose behavior closely matches that of the target LLCM. We systematically investigate the effectiveness of launching imitation attacks under different query schemes and different LLCM tasks. We also design novel methods to polish the LLCM outputs, resulting in an effective imitation training process. We summarize our findings and the lessons learned in this study, which can help better characterize the attack surface of LLCMs. Our research contributes to the growing body of knowledge on imitation attacks and defenses in deep neural models, particularly in the domain of code-related tasks.
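    The query-collect-train loop behind such an imitation attack can be outlined roughly as below. The function names (query_target_llcm, polish_output, finetune_student) are placeholders for a real inference API, the paper's output-polishing methods, and a training routine; they are assumptions for illustration only.

```python
# Conceptual sketch of an imitation-attack data collection loop.
# All function bodies are placeholders, not the paper's implementation.
import json
import random


def query_target_llcm(prompt: str) -> str:
    """Placeholder for a call to the target LLCM's inference API."""
    return f"# generated code for: {prompt}"


def polish_output(code: str) -> str:
    """Placeholder for the output-polishing step (e.g., dropping empty or
    malformed completions) before it enters the imitation dataset."""
    return code.strip()


def build_imitation_dataset(seed_prompts, n_queries=100, path="imitation.jsonl"):
    """Query the target model and store prompt/completion pairs as JSONL."""
    with open(path, "w") as f:
        for _ in range(n_queries):
            prompt = random.choice(seed_prompts)
            completion = polish_output(query_target_llcm(prompt))
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
    return path


# A real attack would then fine-tune a student model on this dataset, e.g.:
# finetune_student(model="student-code-model", data=build_imitation_dataset(prompts))
```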

    VRPTEST: Evaluating Visual Referring Prompting in Large Multimodal Models

    Full text link
    With recent advancements in Large Multimodal Models (LMMs) across various domains, a novel prompting method called visual referring prompting has emerged, showing significant potential for enhancing human-computer interaction within multimodal systems. This method offers a more natural and flexible way to interact with these systems than traditional text descriptions or coordinates. However, the categorization of visual referring prompting remains undefined, and its impact on the performance of LMMs has yet to be formally examined. In this study, we conduct the first comprehensive analysis of LMMs under a variety of visual referring prompting strategies. We introduce a benchmark dataset called VRPTEST, comprising three different visual tasks and 2,275 images spanning diverse combinations of prompt strategies. Using VRPTEST, we conduct a comprehensive evaluation of eight versions of prominent open-source and proprietary foundation models, including two early versions of GPT-4V. We develop an automated assessment framework based on software metamorphic testing techniques to evaluate the accuracy of LMMs without human intervention or manual labeling. We find that current proprietary models generally outperform open-source ones, showing an average accuracy improvement of 22.70%; however, there is still room for improvement. Moreover, our quantitative analysis shows that the choice of prompt strategy significantly affects the accuracy of LMMs, with variations ranging from -17.5% to +7.3%. Further case studies indicate that an appropriate visual referring prompting strategy can improve LMMs' understanding of context and location information, whereas an unsuitable one may lead to answer rejection. We also provide insights on minimizing the negative impact of visual referring prompting on LMMs.
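    To illustrate the flavor of a metamorphic check in such an assessment framework, the sketch below tests whether a model's answer stays consistent across equivalent visual referring prompts for the same image. The ask_lmm callable and the prompt variants are hypothetical placeholders, not the actual VRPTEST framework.

```python
# Illustrative metamorphic consistency check for an LMM: equivalent visual
# referring prompts (e.g., a box vs. an arrow marking the same region) should
# yield the same answer. Placeholders only; not the VRPTEST implementation.
from typing import Callable


def metamorphic_consistency(ask_lmm: Callable[[str, str], str],
                            image_path: str,
                            prompt_variants: list[str]) -> bool:
    """Return True if the model gives the same (normalized) answer for every
    equivalent visual referring prompt applied to the same image."""
    answers = {ask_lmm(image_path, p).strip().lower() for p in prompt_variants}
    return len(answers) == 1


if __name__ == "__main__":
    # Stub model that always answers the same thing, for demonstration.
    stub = lambda image, prompt: "a red car"
    variants = ["What is inside the drawn box?", "What does the arrow point to?"]
    print(metamorphic_consistency(stub, "scene_001.png", variants))
```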