
    What's Decidable About Sequences?

    We present a first-order theory of sequences with integer elements, Presburger arithmetic, and regular constraints, which can model significant properties of data structures such as arrays and lists. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i+3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable for reasoning about sequence-manipulating programs within the standard framework of axiomatic semantics. Comment: fixed a few lapses in the Mergesort example.
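
    The kinds of constraints the quantifier-free fragment can state are easy to see on concrete data. The sketch below is illustrative only: it merely evaluates a few such properties (sortedness, injectivity, and the periodic-arithmetic example quoted in the abstract) on a fixed sequence; it is not the paper's PSPACE decision procedure via the theory of concatenation.

```python
# Illustrative checks of properties expressible in the quantifier-free
# fragment described above, evaluated on a concrete integer sequence.
# (Evaluation on given sequences only; not the paper's decision procedure.)

def is_sorted(a):
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def is_injective(a):
    return len(set(a)) == len(a)

def even_position_fact(a):
    # "for all even i's, the element at position i has value i+3 or 2i"
    return all(a[i] == i + 3 or a[i] == 2 * i for i in range(0, len(a), 2))

seq = [3, 1, 5, 9, 8, 2]
print(is_sorted(seq), is_injective(seq), even_position_fact(seq))  # False True True
```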

    Advanced Directives and Pregnancy: A Comparison Between the US and Ireland

    Advance directives enable patients to specify in advance which medical treatments they do and do not want to receive if they later become unable to make such decisions themselves, including the refusal or withdrawal of life support when the time to act on such a decision arises. There are two types of advance directives: (1) living wills, in which a patient lists their treatment preferences, and (2) health care powers of attorney, which vest decision-making authority in a proxy. Since the creation of advance directives in 1976, all fifty states have adopted their own laws on advance directives. This post was originally published on the Cardozo International & Comparative Law Review on March 28, 2022.

    Evidence-based indications for the planning of PET or PET/CT capacities are needed

    Purpose: To identify evidence-based indications for PET/PET–CT scans in support of facilities planning, and to describe a pilot project in which this information was applied to an investment decision in an Austrian region. The study updates a Health Technology Assessment (HTA) report (2015) on oncological indications, extending it to neurological indications and inflammatory disorders.
    Methods: A systematic literature search to identify HTA reports, evidence-based guidelines, and systematic reviews/meta-analyses (SR/MA) was performed, supplemented by a manual search for professional society recommendations and explicit "not-to-do's". A needs assessment was conducted in the context of the pilot study on investing in an additional PET–CT scanner in the Austrian region of Carinthia.
    Results: Recommendations and non-recommendations for the three areas (oncology, neurology, and inflammatory disorders) were compiled from the 2015 PET HTA report and expanded to a final total of ten HTA reports, comprising 234 positive and negative recommendations from professional societies and databases, supplemented by findings from 23 SR/MA. For the investment-decision pilot study in Carinthia, 1762 PET scans were analyzed; 77.8% could be assigned to one of the categories "recommended evidence-based indication" (54.7%), "not recommended" (1.8%), or "contradictory recommendations" (21.3%). The remaining scans could not be assigned to any of the three categories.
    Conclusions: The piloting of PET capacity planning using evidence-based information is the first of its kind in the published literature. On the one hand, the high number of PET scans that could not be ascribed to any category limits the study's instructive power for using evidence-based indication lists as the basis for needs-assessment-driven investment planning. On the other hand, the study shows that indication coding needs to improve for better capacity planning of medical services. The recommendations identified can serve as needs-based and evidence-based decision support for PET/PET–CT service provision.
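
    The categorisation arithmetic behind the Results figures can be illustrated with a toy tally. Everything in the sketch is hypothetical (the indication names, the lists, and the counts); it only shows how scans mapped against recommendation lists yield the reported category shares.

```python
# Toy tally (all indication names and counts hypothetical) mapping scans to
# the study's buckets: recommended, not recommended, contradictory, or
# unassignable, and computing the resulting category shares.
from collections import Counter

RECOMMENDED     = {"nsclc_staging", "lymphoma_staging"}   # hypothetical lists
NOT_RECOMMENDED = {"routine_screening"}
CONTRADICTORY   = {"prostate_restaging"}

def categorise(indication):
    if indication in RECOMMENDED:
        return "recommended"
    if indication in NOT_RECOMMENDED:
        return "not recommended"
    if indication in CONTRADICTORY:
        return "contradictory"
    return "unassignable"

scans = (["nsclc_staging"] * 55 + ["routine_screening"] * 2 +
         ["prostate_restaging"] * 21 + ["uncoded"] * 22)      # hypothetical mix
counts = Counter(categorise(s) for s in scans)
for category, n in counts.items():
    print(f"{category}: {100 * n / len(scans):.1f}%")
```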

    An Efficient Mode Decision Algorithm Based on Dynamic Grouping and Adaptive Adjustment for H.264/AVC

    The rate-distortion optimization (RDO)-enabled mode decision (MD) is one of the most important techniques introduced by H.264/AVC. By adopting an exhaustive calculation of rate-distortion cost, optimal MD enhances video encoding quality. However, the computational complexity is significantly increased, which is a key challenge for real-time and low-power applications. This paper presents a new fast MD algorithm for a highly efficient H.264/AVC encoder. The proposed algorithm employs a dynamic group of candidate inter/intra modes to reduce the computational cost. To minimize the performance loss incurred by improper mode selection in previously encoded frames, an adaptive adjustment scheme based on the undulation of bitrate and PSNR is suggested. Experimental results show that the proposed algorithm reduces encoding time by 35% on average, while the PSNR loss is typically within 0.1 dB and the bitrate increase is below 1%.
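
    A conceptual sketch of the two ideas named in the abstract, a dynamic group of candidate modes plus feedback-driven adjustment, is given below. It is not the paper's algorithm: the mode list, the thresholds, and the rd_cost() oracle are placeholders chosen for illustration.

```python
# Conceptual sketch of a fast mode decision loop: evaluate RD cost only for a
# dynamic group of candidate modes, and widen or narrow that group when the
# bitrate/PSNR trend ("undulation") of previously encoded frames degrades.
# Mode names, thresholds, and the rd_cost() oracle are placeholders.

FULL_MODE_SET = ["SKIP", "16x16", "16x8", "8x16", "8x8", "INTRA4x4", "INTRA16x16"]

def choose_mode(macroblock, group_size, rd_cost):
    # Only the current candidate group is searched, not the full mode set.
    candidates = FULL_MODE_SET[:group_size]
    return min(candidates, key=lambda mode: rd_cost(macroblock, mode))

def adjust_group_size(group_size, psnr_drop, bitrate_rise,
                      psnr_tol=0.1, rate_tol=0.01):
    # Adaptive adjustment: if quality or bitrate worsens beyond the tolerances,
    # fall back towards the exhaustive search; otherwise keep pruning.
    if psnr_drop > psnr_tol or bitrate_rise > rate_tol:
        return min(group_size + 1, len(FULL_MODE_SET))
    return max(group_size - 1, 2)
```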

    Network-constrained joint energy and flexible ramping reserve market clearing of power- and heat-based energy systems : a two-stage hybrid IGDT-stochastic framework

    This article proposes a new two-stage hybrid stochastic–information gap decision theory (IGDT)-based network-constrained unit commitment framework. The model is applied to the market clearing of joint energy and flexible ramping reserve in integrated heat- and power-based energy systems. The uncertainties of load demand and wind power generation are handled with Monte Carlo simulation and IGDT, respectively. The proposed model considers both risk-averse and risk-seeking strategies, which enables the independent system operator to make flexible decisions when facing system uncertainties in real-time dispatch. Moreover, the effect of the feasible operating regions of combined heat and power (CHP) plants on the energy and flexible ramping reserve market and on the operating cost of the system is investigated. The proposed model is implemented on a test system to verify the effectiveness of the introduced two-stage hybrid framework. The analysis of the results demonstrates that variations in heat demand affect the power and flexible ramping reserve supplied by the CHP units.
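
    For readers unfamiliar with IGDT, the risk-averse side of such hybrid models is usually built around a robustness function of the following general form (standard IGDT notation; the article's exact formulation and nomenclature may differ):

    \[
    U(\alpha,\tilde{w}) = \{\, w : |w - \tilde{w}| \le \alpha\,\tilde{w} \,\}, \qquad
    \hat{\alpha}(\Lambda_c) = \max\Big\{\, \alpha : \max_{w \in U(\alpha,\tilde{w})} \mathrm{Cost}(w) \le \Lambda_c \,\Big\}, \qquad
    \Lambda_c = (1+\sigma)\,\mathrm{Cost}_{\mathrm{base}},
    \]

    where \(\tilde{w}\) is the forecast wind power, \(\alpha\) the uncertainty horizon, \(\sigma\) the cost-deviation factor chosen by the operator, and \(\mathrm{Cost}_{\mathrm{base}}\) the cost under the forecast. The risk-seeking (opportunity) function is defined analogously, with a minimization and a cost target below \(\mathrm{Cost}_{\mathrm{base}}\).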

    What conditions education for forgiveness in terms of the neo-Thomistic philosophy of education

    The article addresses questions about the conditions of education for forgiveness, analysing them in terms of philosophy of education. The first part provides a review of the philosophical discussion on forgiveness. Revolving around the essence and moral value of forgiving, it is a debate which offers no explanations as to why people decide to forgive. In the light of this problem, the second part of the article seeks to view forgiveness as a moral decision. Neo-Thomistic philosophy is applied in the analysis, in which forgiveness appears to result from a dialogue between reason and will. The third part focuses on readiness to forgive which, in the Thomistic pedagogy, relates to the development of moral virtues; in particular, the cardinal virtues, i.e., prudence, justice, temperance and fortitude. The last part lists several conclusions applicable to pedagogy. They refer to the need for development of both intellect and will as a power sensitive to good.

    Monopolistic and game-based approaches to transact energy flexibility

    The emergence of flexible behavior by end-users through demand response programs is making power distribution grids more active. Electricity market participants in the bottom layer of the power system therefore wish to be involved in the decision-making related to local energy management, increasing the efficiency of energy trading in distribution networks. This paper proposes monopolistic and game-based approaches for managing energy flexibility among end-users, aggregators, and the Distribution System Operator (DSO), which are defined as agents in the power distribution system. A 33-bus distribution network is used to evaluate the performance of the proposed approaches, based on the impact of the flexibility behavior of end-users and aggregators in the distribution network. The simulation results show that although the monopolistic approach can be profitable for all agents in the distribution network, the game-based approach is not profitable for end-users.
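
    A toy, fully hypothetical payoff comparison can illustrate the qualitative conclusion above. It is not the paper's 33-bus model: every price, cost, and quantity below is invented, and the two settlements are reduced to a single flexibility price and aggregator margin.

```python
# Toy illustration (all numbers hypothetical): a regulated "monopolistic"
# flexibility price can leave all three agents with positive surplus, while a
# game-based settlement that presses the end-user price down leaves end-users
# with a loss. Not the paper's model.

def surpluses(q_kwh, p_user, margin, c_user=0.08, v_dso=0.15):
    """q_kwh: traded flexibility; p_user: price paid to end-users per kWh;
    margin: aggregator mark-up; c_user: end-users' cost of providing the
    flexibility; v_dso: DSO's avoided network cost per kWh."""
    return {
        "end-users":  (p_user - c_user) * q_kwh,
        "aggregator": margin * q_kwh,
        "DSO":        (v_dso - p_user - margin) * q_kwh,
    }

print("monopolistic:", surpluses(100, p_user=0.10, margin=0.02))
print("game-based:  ", surpluses(100, p_user=0.07, margin=0.04))
```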

    Machine Learning for the New York City Power Grid

    Power companies can benefit from the use of knowledge discovery methods and statistical machine learning for preventive maintenance. We introduce a general process for transforming historical electrical grid data into models that aim to predict the risk of failures for components and systems. These models can be used directly by power companies to assist with prioritization of maintenance and repair work. Specialized versions of this process are used to produce (1) feeder failure rankings, (2) cable, joint, terminator, and transformer rankings, (3) feeder Mean Time Between Failure (MTBF) estimates, and (4) manhole events vulnerability rankings. The process in its most general form can handle diverse, noisy sources that are historical (static), semi-real-time, or real-time; incorporates state-of-the-art machine learning algorithms for prioritization (supervised ranking or MTBF); and includes an evaluation of results via cross-validation and blind tests. Above and beyond the ranked lists and MTBF estimates are business management interfaces that allow the prediction capability to be integrated directly into corporate planning and decision support; such interfaces rely on several important properties of our general modeling approach: that machine learning features are meaningful to domain experts, that the processing of data is transparent, and that prediction results are accurate enough to support sound decision making. We discuss the challenges in working with historical electrical grid data that were not designed for predictive purposes. The "rawness" of these data contrasts with the accuracy of the statistical models that can be obtained from the process; these models are sufficiently accurate to assist in maintaining New York City's electrical grid.
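
    The "supervised ranking" idea can be sketched in a few lines. The sketch below is not the authors' pipeline: it uses synthetic component features, scikit-learn gradient boosting, and a single held-out "blind" split as stand-ins for the paper's data and evaluation protocol.

```python
# Minimal sketch of supervised ranking: fit a classifier on historical
# component features, score every component by predicted failure probability,
# rank them, and check the ranking on a held-out blind test set.
# Synthetic data; scikit-learn is an assumption made for the sketch.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(0, 60, n)                   # e.g. cable/joint age in years
load = rng.uniform(0.2, 1.2, n)               # peak load relative to rating
past_events = rng.poisson(0.5, n)             # prior outage count
X = np.column_stack([age, load, past_events])
logit = 0.04 * age + 2.0 * (load - 0.8) + 0.6 * past_events - 4.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # synthetic failure labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]        # predicted failure risk
ranking = np.argsort(-risk)                   # most at-risk components first
print("top-5 riskiest test components:", ranking[:5])
print("blind-test AUC:", round(roc_auc_score(y_te, risk), 3))
```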

    Computability and Complexity from a Programming Perspective (MFPS Draft preview)

    The author's forthcoming book proves central results in computability and complexity theory from a programmer-oriented perspective. In addition to giving more natural definitions, proofs, and perspectives on classical theorems by Cook, Hartmanis, Savitch, etc., some new results have come from the alternative approach. One: for a computation model more natural than the Turing machine, multiplying the available problem-solving time provably increases problem-solving power (in general not true for Turing machines). Another: the class of decision problems solvable by Wadler's "treeless" programs [8], or by cons-free programs on Lisp-like lists, is identical to the well-studied complexity class LOGSPACE. A third is that cons-free programs augmented with recursion can solve all and only PTIME problems. Paradoxically, these programs often run in exponential time (not a contradiction, since they can be simulated in polynomial time by memoization). This tradeoff indicates a tension between running time and memory space which seems worth further investigation.
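
    The memoization remark is worth seeing concretely. The sketch below is my illustration, not an example from the book: a cons-free-flavoured recursive predicate that only reads its input and recurses on small integers, so naive evaluation explodes combinatorially, while memoizing the polynomially many distinct calls makes it run in polynomial time.

```python
# Cons-free-flavoured recursion: the program never builds new list structure,
# it only reads the input graph and recurses on bounded integers. The naive
# call tree has branching ~n and depth ~n; memoizing the O(n^2) distinct
# calls brings the running time down to polynomial.
from functools import lru_cache

def reaches(adj, source, target):
    n = len(adj)

    @lru_cache(maxsize=None)       # remove this line for the exponential version
    def path(v, steps_left):
        # Is there a path of length <= steps_left from v to target?
        if v == target:
            return True
        if steps_left == 0:
            return False
        return any(adj[v][u] and path(u, steps_left - 1) for u in range(n))

    return path(source, n - 1)     # a simple path never needs more than n-1 edges

# Example: a 4-vertex chain 0 -> 1 -> 2 -> 3
adj = ((0, 1, 0, 0),
       (0, 0, 1, 0),
       (0, 0, 0, 1),
       (0, 0, 0, 0))
print(reaches(adj, 0, 3))   # True
```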

    Peer-to-peer energy trading between wind power producer and demand response aggregators for scheduling joint energy and reserve

    In this article, a stochastic decision-making framework is presented in which a wind power producer (WPP) procures part of its required reserve capacity from demand response aggregators (DRAs) in a peer-to-peer (P2P) structure. In this structure, each DRA is able to choose the most competitive WPP and to purchase energy from and sell reserve capacity to that WPP under a bilateral contract-based P2P electricity trading mechanism. The WPP, in turn, can determine the optimal amount of reserve to buy from DRAs to offset part of its wind power deviation. The proposed framework is formulated as a bilevel stochastic model in which the upper level maximizes the WPP's profit based on optimal bidding in the day-ahead and balancing markets, whereas the lower level minimizes the DRAs' costs. To incorporate the risk associated with the WPP's decisions and to assess the effect of scheduling reserves on profit variability, conditional value at risk is employed.
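
    In scenario-based bidding models of this kind, the risk term is usually the standard linear (Rockafellar–Uryasev) conditional value at risk of the profit distribution; the article's exact notation may differ, but the general form is:

    \[
    \mathrm{CVaR}_{\alpha} \;=\; \max_{\zeta,\;\eta_s \ge 0} \;\; \zeta \;-\; \frac{1}{1-\alpha}\sum_{s \in S}\pi_s\,\eta_s
    \qquad \text{s.t.} \qquad \eta_s \;\ge\; \zeta - B_s \quad \forall s \in S,
    \]

    where \(B_s\) is the WPP's profit in scenario \(s\), \(\pi_s\) its probability, and \(\alpha\) the confidence level. Weighting this term against expected profit in the upper-level objective is what lets the model trade expected profit against the variability introduced by wind deviation and reserve scheduling.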