310 research outputs found

    Use of supporting software tool for decision-making during low-probability severe accident management at nuclear power plants

    In the project NARSIS – New Approach to Reactor Safety ImprovementS – possible advances in the safety assessment of nuclear power plants (NPPs) were considered, including improvements in the management of low-probability accident scenarios. As part of the project, a supporting software tool for decision-making during severe accident management was developed. The tool, named Severa, is a prototype, demonstration-level decision support system intended for use by the technical support center (TSC) while managing a severe accident, or for training purposes. Severa interprets, stores and monitors key physical measurements during accident sequence progression. It assesses the current state of the physical barriers: core, reactor coolant system, reactor pressure vessel and containment. The tool predicts accident progression in the case that no action is taken by the TSC, and it provides a list of possible recovery strategies and courses of action. The applicability and feasibility of the possible courses of action in the given situation are addressed. For each course of action, Severa assesses the consequences in terms of the probability of containment failure and the estimated time window for failure. Finally, Severa evaluates and ranks the feasible actions, providing recommendations for the TSC. The verification and validation of Severa performed in the project is also described in this paper. Although largely simplified in its current state, Severa successfully demonstrated its potential for supporting accident management and pointed toward the next steps needed for further advancements in this field.
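The abstract does not disclose Severa's actual ranking criteria, but the general idea of filtering and ranking feasible courses of action by their assessed consequences can be sketched as follows; all names, numbers, and the ranking key below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ActionCourse:
    name: str
    feasible: bool
    p_containment_failure: float  # assessed probability of containment failure
    time_window_h: float          # estimated time window before failure, in hours

def rank_actions(courses):
    """Rank feasible courses of action: lower failure probability first,
    longer time window as the tie-breaker (a hypothetical criterion)."""
    feasible = [c for c in courses if c.feasible]
    return sorted(feasible, key=lambda c: (c.p_containment_failure, -c.time_window_h))

courses = [
    ActionCourse("depressurize RCS", True, 0.15, 6.0),
    ActionCourse("flood cavity", True, 0.15, 9.0),
    ActionCourse("vent containment", False, 0.30, 2.0),
]
for c in rank_actions(courses):
    print(c.name, c.p_containment_failure, c.time_window_h)
```

Infeasible actions are dropped before ranking, mirroring the abstract's distinction between assessing feasibility and recommending among the feasible actions.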

    Data mining and database systems: integrating conceptual clustering with a relational database management system.

    Many clustering algorithms have been developed and improved over the years to cater for large-scale data clustering. However, much of this work has focused on numeric-based algorithms that use efficient summarisations to scale to large data sets. There is a growing need for scalable categorical clustering algorithms because, although numeric-based algorithms can be adapted to categorical data, they do not always produce good results. This thesis presents a categorical conceptual clustering algorithm that can scale to large data sets using appropriate data summarisations. Data mining is distinguished from machine learning by the use of larger data sets that are often stored in database management systems (DBMSs). Many clustering algorithms require data to be extracted from the DBMS and reformatted for input to the algorithm. This thesis presents an approach that integrates conceptual clustering with a DBMS. The presented approach makes the algorithm main-memory independent and supports on-line data mining.
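As a hypothetical illustration of the integration idea (the thesis's actual summarisation scheme is not detailed in this abstract), a categorical clustering front end can push summarisation into the DBMS with SQL aggregates, operating on per-attribute value counts instead of loading raw rows into main memory:

```python
import sqlite3

# Invented example table; the point is that the summary, not the data,
# crosses the DBMS boundary, keeping the algorithm main-memory independent.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (colour TEXT, shape TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)", [
    ("red", "circle"), ("red", "square"),
    ("blue", "circle"), ("blue", "circle"),
])

def attribute_summary(conn, table, column):
    """Fetch a value -> count summary for one categorical attribute."""
    cur = conn.execute(
        f"SELECT {column}, COUNT(*) FROM {table} GROUP BY {column}")
    return dict(cur.fetchall())

print(attribute_summary(conn, "records", "colour"))  # e.g. {'blue': 2, 'red': 2}
```

A conceptual clustering algorithm could then score candidate clusters from such summaries, rescanning the table only when a summary is insufficient.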

    Emergent relational schemas for RDF


    Should I stay or should I go?: multicriteria decision support for the internationalization of SMEs

    Due to the current economic conditions of domestic markets, companies feel an increasing need to become actively involved in international trading. Typically, however, there are several financial and intellectual constraints that small and medium-sized enterprises (SMEs) face during their internationalization process. This means that decision makers should consider a wide range of different variables before deciding on internationalization. This study sought to integrate cognitive mapping and the Decision EXpert (DEX) methodology to develop a multiple-criteria decision model that may prove suitable for the identification and assessment of the variables that influence SMEs' internationalization capability. The results show that the dual methodology adopted allows for the development of robust evaluation models that are able to improve the decision-making process in this study context. Specifically, the model identifies product features as the most relevant factor for SMEs' internationalization capability. Additionally, internal factors are significantly more relevant than external factors. The model-building process is discussed, including its advantages and limitations.
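DEX models combine qualitative attributes through explicit decision rules rather than weighted numeric sums. A minimal sketch of that aggregation style, with invented scales and rules (the paper's actual attribute tree is not given in the abstract):

```python
# Hypothetical DEX-style rule table: two qualitative child attributes are
# mapped to a parent attribute by enumerated rules, not arithmetic.
RULES = {
    ("weak", "weak"): "low",
    ("weak", "strong"): "medium",
    ("strong", "weak"): "medium",
    ("strong", "strong"): "high",
}

def internationalization_capability(internal_factors, external_factors):
    """Aggregate two invented qualitative inputs via the rule table."""
    return RULES[(internal_factors, external_factors)]

print(internationalization_capability("strong", "weak"))  # "medium"
```

Because every combination of input values has an explicit rule, the resulting model is transparent: a decision maker can inspect exactly why a given SME was rated as it was.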

    Levels of Analysis in Comprehensive River Basin Planning

    Since nearly every water resource management choice has two or more sides, differences must be resolved in decision making. Equitable resolution requires an understanding of the reasons for the differences. These reasons originate in the fact that implemented plans have physical-environmental, economic, social, cultural, and political impacts at levels ranging from local to national or international in scope. Decisions are made by individuals and groups impacted in all of these dimensions and at all of these levels; the decisions generate additional impacts; and the entire interactive process changes water management practice in ways outside the control of any one decision point or even decision dimension. The objective of this study is to conceptualize this process in a way that will help in establishing institutional mechanisms for reconciling differences among levels of analysis. The conceptualization viewed differences in the choices made at the various levels of analysis as associated with perspective differences having value, jurisdiction, action, and temporal elements. The possible combinations of differences within and between these elements were used to identify ten categories of institutional obstacles to efficient water planning (differences in values, conflicts between value and jurisdiction, etc.). The history of water resources planning in the Colorado River basin was then examined to identify 17 specific institutional obstacles, and a computerized policy simulation was applied to levels of analysis in the Uintah basin of Utah to identify three more. These 20 obstacles were shown to be broadly distributed over the ten categories, and the nature of the obstacles defined provides valuable insight into the common characteristics of the major institutional obstacles to water management. The principles of logic as applicable to rationality in decision making were then used to identify two root causes of levels' conflicts.
    If alternatives are evaluated from a single perspective, the ostensible causal relationships commonly used lead to estimates of the sum of the consequences from the parts of a water management program being far more than the total consequences of the entire program. Looked at another way, since available water resources planning tools do not properly allocate consequences from interactive processes to individual causal sources, decisions made to achieve a desired impact are not based on reliable information. In fact, different decisions made over time from a single perspective have conflicting impacts. When multiple perspectives are considered, one finds that individual values do not aggregate linearly in forming social values, many actions are not efficient in achieving preferred values, and decision makers are not able to implement their plans as desired. Real-world situations combine interacting perspectives and partial contributions. Nine recommendations are made on what to do next in improving water resources planning in an interactive, nonlinear world.

    Graph Processing in Main-Memory Column Stores

    Increasingly, both novel and traditional business applications leverage the advantages of a graph data model, such as the offered schema flexibility and an explicit representation of relationships between entities. As a consequence, companies are confronted with the challenge of storing, manipulating, and querying terabytes of graph data for enterprise-critical applications. Although these business applications operate on graph-structured data, they still require direct access to the relational data and typically rely on an RDBMS to keep a single source of truth and access. Existing solutions performing graph operations on business-critical data either use a combination of SQL and application logic or employ a graph data management system. For the first approach, relying solely on SQL results in poor execution performance caused by the functional mismatch between typical graph operations and the relational algebra. Worse still, graph algorithms expose a tremendous variety in structure and functionality, caused by their often domain-specific implementations, and therefore can hardly be integrated into a database management system other than through custom coding. Since the majority of these enterprise-critical applications run exclusively on relational DBMSs, employing a specialized system for storing and processing graph data is typically not sensible. Besides the maintenance overhead of keeping the systems in sync, combining graph and relational operations is hard to realize, as it requires data transfer across system boundaries. Traversal operations are a basic ingredient of graph queries and algorithms, and a fundamental component of any database management system that aims at storing, manipulating, and querying graph data. Well-established graph traversal algorithms are standalone implementations relying on optimized data structures.
    The integration of graph traversals as an operator into a database management system requires a tight integration into the existing database environment and the development of new components, such as a graph topology-aware optimizer and accompanying graph statistics, graph-specific secondary index structures to speed up traversals, and an accompanying graph query language. In this thesis, we introduce and describe GRAPHITE, a hybrid graph-relational data management system. GRAPHITE is a performance-oriented graph data management system built as part of an RDBMS, allowing processing of graph data to be seamlessly combined with relational data in the same system. We propose a columnar storage representation for graph data to leverage the already existing and mature data management and query processing infrastructure of relational database management systems. At the core of GRAPHITE we propose an execution engine based solely on set operations and graph traversals. Our design is driven by the observation that different graph topologies expose different algorithmic requirements to the design of a graph traversal operator. We derive two graph traversal implementations targeting the most common graph topologies and demonstrate how graph-specific statistics can be leveraged to select the optimal physical traversal operator. To accelerate graph traversals, we devise a set of graph-specific, updateable secondary index structures to improve the performance of vertex neighborhood expansion. Finally, we introduce a domain-specific language with an intuitive programming model to extend graph traversals with custom application logic at runtime. We use the LLVM compiler framework to generate efficient code that tightly integrates the user-specified application logic with our highly optimized built-in graph traversal operators.
    Our experimental evaluation shows that GRAPHITE can outperform native graph management systems by several orders of magnitude while providing all the features of an RDBMS, such as transaction support, backup and recovery, and security and user management, effectively providing a promising alternative to specialized graph management systems that lack many of these features and require expensive data replication and maintenance processes.
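As a rough illustration of why a columnar representation suits traversals (this is not GRAPHITE's actual implementation), adjacency can be stored CSR-style as two columns, so expanding a frontier of vertices reduces to contiguous column scans:

```python
# Hypothetical CSR-style columnar adjacency: offsets[v]..offsets[v+1]
# delimits vertex v's slice of the targets column.
offsets = [0, 2, 3, 4, 4]   # 4 vertices; vertex 3 has no outgoing edges
targets = [1, 2, 3, 3]      # edges: 0->1, 0->2, 1->3, 2->3

def bfs(start, offsets, targets):
    """Breadth-first traversal; each frontier expansion reads only
    contiguous ranges of the targets column."""
    visited = {start}
    frontier = [start]
    order = [start]
    while frontier:
        nxt = []
        for v in frontier:
            for u in targets[offsets[v]:offsets[v + 1]]:
                if u not in visited:
                    visited.add(u)
                    nxt.append(u)
                    order.append(u)
        frontier = nxt
    return order

print(bfs(0, offsets, targets))  # [0, 1, 2, 3]
```

Because both columns are plain arrays, such a layout fits directly into an existing column-store engine, which is the kind of reuse of relational infrastructure the abstract describes.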