
    A Policy Switching Approach to Consolidating Load Shedding and Islanding Protection Schemes

    In recent years there have been many improvements in the reliability of critical infrastructure systems. Despite these improvements, the power systems industry has seen relatively small advances in this regard. For instance, power quality deficiencies, a high number of localized contingencies, and large cascading outages are still too widespread. Though progress has been made in improving generation, transmission, and distribution infrastructure, remedial action schemes (RAS) remain non-standardized and are often not uniformly implemented across different utilities, ISOs, and RTOs. Traditionally, load shedding and islanding have been successful protection measures in restraining the propagation of contingencies and large cascading outages. This paper proposes a novel, algorithmic approach to selecting RAS policies to optimize the operation of the power network during and after a contingency. Specifically, we use policy switching to consolidate traditional load shedding and islanding schemes. To model and simulate the functionality of the proposed power systems protection algorithm, we conduct Monte Carlo, time-domain simulations using Siemens PSS/E. The algorithm is tested via experiments on the IEEE-39 topology to demonstrate that the proposed approach achieves optimal power system performance during emergency situations, given a specific set of RAS policies.
    Comment: Full Paper Accepted to PSCC 2014 - IEEE Co-Sponsored Conference. 7 Pages, 2 Figures, 2 Tables
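    The core idea of policy switching, as described above, can be sketched as evaluating each candidate RAS policy against the observed contingency and selecting the one with the best simulated outcome. The sketch below uses a hypothetical `simulate` callback and cost terms (`load_shed_mw`, `buses_unserved`); the paper itself relies on Siemens PSS/E time-domain simulations, not this toy interface.

```python
def evaluate_policy(policy, contingency, simulate):
    """Run one simulation of a RAS policy and return a cost (lower is better).

    The cost terms here are illustrative: megawatts of load shed plus a
    heavy penalty for each bus left unserved after the contingency.
    """
    outcome = simulate(policy, contingency)
    return outcome["load_shed_mw"] + 1000 * outcome["buses_unserved"]


def switch_policy(policies, contingency, simulate):
    """Policy switching: pick the RAS policy (e.g. a load-shedding or
    islanding scheme) that minimizes simulated cost for this contingency."""
    return min(policies, key=lambda p: evaluate_policy(p, contingency, simulate))
```

    In practice the candidate set would contain concrete load-shedding and islanding schemes, and the cost function would encode the operator's performance criteria.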

    Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework

    This paper examines the current landscape of AI regulations, highlighting the divergent approaches being taken, and proposes an alternative contextual, coherent, and commensurable (3C) framework. The EU, Canada, South Korea, and Brazil follow a horizontal or lateral approach that postulates the homogeneity of AI systems, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the U.K., Israel, Switzerland, Japan, and China have pursued a context-specific or modular approach, tailoring regulations to the specific use cases of AI systems. The U.S. is reevaluating its strategy, with growing support for controlling existential risks associated with AI. Addressing such fragmentation of AI regulations is crucial to ensure the interoperability of AI. The present degree of proportionality, granularity, and foreseeability of the EU AI Act is not sufficient to garner consensus. The context-specific approach holds greater promise but requires further development in terms of details, coherency, and commensurability. To strike a balance, this paper proposes a hybrid 3C framework. To ensure contextuality, the framework categorizes AI into distinct types based on their usage and interaction with humans: autonomous, allocative, punitive, cognitive, and generative AI. To ensure coherency, each category is assigned specific regulatory objectives: safety for autonomous AI; fairness and explainability for allocative AI; accuracy and explainability for punitive AI; accuracy, robustness, and privacy for cognitive AI; and the mitigation of infringement and misuse for generative AI. To ensure commensurability, the framework promotes the adoption of international industry standards that convert principles into quantifiable metrics. In doing so, the framework is expected to foster international collaboration and standardization without imposing excessive compliance costs.

    Developing an understanding of coherent approaches between primary and secondary teachers: a case study within the design and technology curriculum in Scotland

    This study is based around Education Scotland’s ambition to create a coherent learning framework for pupils aged 3–18, with particular focus on the technologies curricular area, and more specifically the subject of design and technology (D&T). The study investigates the views, definitions, and approaches adopted by primary and secondary educators in the D&T curricular area. Furthermore, the research explores curricular understanding and pedagogical approaches, in addition to individual teachers’ understanding of technology education. A mixed-methods research approach was applied within one local authority region in Scotland. Data were collected from primary teachers and secondary design and technology teachers using online questionnaires and interviews. Findings reveal a varied approach to teaching design and technology across primary and secondary schools, with educators recognising different definitions of and pedagogical approaches to the subject. This indicates that pupils transitioning from primary to secondary learning will have to cope with these differing teaching approaches when studying design and technology. However, participants agree on the importance of the design element and the application of the subject to real-world scenarios. It is recommended that school communities find opportunities to collaborate further, with the aim of creating a more continuous, coherent learning journey for young people in the design and technology curriculum area. These findings provide a basis for future professional discussion and critical reflection for practitioners in both primary and secondary sectors, and for leaders and administrators across Scotland, the UK, and around the world.

    Fault-tolerant Algorithms for Tick-Generation in Asynchronous Logic: Robust Pulse Generation

    Today's hardware technology presents a new challenge in designing robust systems. Deep submicron VLSI technology introduced transient and permanent faults that were never considered in low-level system designs in the past. Still, robustness of that part of the system is crucial and needs to be guaranteed for any successful product. Distributed systems, on the other hand, have been dealing with similar issues for decades. However, neither the basic abstractions nor the complexity of contemporary fault-tolerant distributed algorithms match the peculiarities of hardware implementations. This paper is intended to be part of an effort to overcome this gap between theory and practice for the clock synchronization problem. Solving this task sufficiently well will allow us to build a very robust high-precision clocking system for hardware designs such as systems-on-chips in critical applications. As our first building block, we describe and prove correct a novel Byzantine fault-tolerant self-stabilizing pulse synchronization protocol, which can be implemented using standard asynchronous digital logic. Despite the strict limitations imposed by hardware designs, it offers optimal resilience and smaller complexity than all existing protocols.
    Comment: 52 pages, 7 figures, extended abstract published at SSS 201

    Rethinking Distributed Caching Systems Design and Implementation

    Distributed caching systems based on in-memory key-value stores have become a crucial aspect of fast and efficient content delivery in modern web applications. However, due to the dynamic and skewed execution environments and workloads under which such systems typically operate, several problems arise in the form of load imbalance. This thesis addresses the sources of load imbalance in caching systems, mainly: i) data placement, which relates to the distribution of data items across servers, and ii) data item access frequency, which describes the amount of requests each server has to process and how well each server is able to cope with it. It then provides several strategies to overcome each source of imbalance in isolation. As a use case, we analyse Memcached and its variants, and propose a novel solution for distributed caching systems. Our solution revolves around increasing parallelism through load segregation, together with mechanisms to overcome load discrepancies in high-saturation scenarios, mostly through access re-arrangement and internal replication.
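    The two sources of imbalance named above (data placement and skewed access frequency) can be illustrated with a minimal sketch: keys are hashed onto servers, and known hot keys are additionally fanned out over several servers, in the spirit of the internal replication mentioned in the abstract. The function name and interface below are hypothetical, not Memcached's actual API.

```python
import hashlib


def servers_for(key, servers, hot_key_replicas=None):
    """Return the list of servers that may hold `key`.

    Normal keys hash to a single server (data placement). Keys listed in
    `hot_key_replicas` are spread over n consecutive servers so that
    skewed read traffic is shared instead of saturating one node.
    """
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    if hot_key_replicas and key in hot_key_replicas:
        n = hot_key_replicas[key]
        return [servers[(h + i) % len(servers)] for i in range(n)]
    return [servers[h % len(servers)]]
```

    A reader would pick any one of the returned servers for a hot key, trading a small amount of extra memory for a roughly n-fold reduction in per-server read load on that key.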

    Scenario Planning for Organizational Adaptability: The Lived Experiences of Executives

    Organizational adaptability is critical to organizational survival, and executive leadership's inability to adapt to extreme disruptive complex events threatens survival. Scenario planning is one means of adapting to such events. In this qualitative interpretive phenomenological study, 20 executives who had lived experience with extreme disruptive complex events and had applied scenario planning took part in phenomenological interviews to share their experiences of scenario planning as a means of adaptation. Participants came from a single large organization with executives distributed throughout the United States and from 10 state agencies located within a single state. Thematic analysis yielded 14 themes, including: knowing the difference between adaptation and response; not being afraid to tackle difficult questions; recognizing that scenario planning is never over because the environment constantly changes; measuring the value of scenario planning by the benefits achieved through the planning exercise rather than its business application; limiting participation to individuals who can or could have a direct influence on adaptation; and not getting bogged down in structured or rigid processes, methods, or tools, because, while useful, they are not required for success. The implications for positive social change include the ability of organizations to reduce economic injury and the compound effects of disruption, including the social impacts of business injury, disruption, recovery, job loss, and reduced revenue on communities and local economies.

    Automatic abstracting: a review and an empirical evaluation

    Abstracts are a fundamental tool in information retrieval. As condensed representations, they facilitate conservation of the increasingly precious search time and space of scholars, allowing them to manage an ever-growing deluge of documentation more effectively. Abstracts have traditionally been the product of human intellectual effort, but attempts to automate the abstracting process began in 1958. Two identifiable automatic abstracting techniques emerged, reflecting differing levels of ambition in simulating the human abstracting process: sentence extraction and text summarisation. This research paradigm has recently diversified further, with a cross-fertilisation of methods. Commercial systems are beginning to appear, but automatic abstracting is still mainly confined to an experimental arena. The purpose of this study is firstly to chart the historical development and current state of both manual and automatic abstracting, and secondly to devise and implement an empirical user-based evaluation assessing the adequacy of automatic abstracts derived from sentence extraction techniques against a set of utility criteria. [Continues.]
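    Sentence extraction, the simpler of the two techniques named above, can be sketched as scoring each sentence by the corpus frequency of its words and keeping the top-scoring sentences in their original order. This is a generic frequency-based sketch, not the specific scoring scheme evaluated in the study.

```python
import re
from collections import Counter


def extract_abstract(text, k=2):
    """Naive sentence-extraction abstract: score each sentence by the
    summed document frequency of its words, keep the top-k sentences,
    and emit them in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:k]
    return " ".join(s for s in sentences if s in top)
```

    Real extraction systems refine this with cue words, sentence position, and stop-word removal, but the frequency heuristic above is the historical starting point.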