25 research outputs found

    Multiprocessor sparse L/U decomposition with controlled fill-in

    Get PDF
    Generation of the maximal compatibles of pivot elements for a class of small sparse matrices is studied. The algorithm involves a binary tree search and has a complexity exponential in the order of the matrix. Different strategies for selecting a set of compatible pivots based on the Markowitz criterion are investigated. The competing issues of parallelism and fill-in generation are studied and results are provided. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. This technique generates a set of compatible pivots with the property of generating few fills. A new heuristic algorithm is then proposed that combines the idea of an ordered compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. Finally, an elimination set to reduce the matrix is selected. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms to several large application matrices are presented and analyzed.
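
    As a rough illustration of the Markowitz-based selection described above, the sketch below greedily collects a set of structurally compatible pivots from a sparse pattern, cheapest Markowitz count first. The pattern, the simplified compatibility test, and the greedy strategy are assumptions made for the example, not the paper's algorithm.

    # Illustrative sketch only: greedy selection of mutually compatible pivots
    # for parallel elimination, ordered by the Markowitz count (r-1)*(c-1).
    from typing import Dict, List, Set, Tuple

    def markowitz_compatible_set(rows: Dict[int, Set[int]]) -> List[Tuple[int, int]]:
        """rows maps each row index to the column indices of its structural nonzeros."""
        row_count = {i: len(cols) for i, cols in rows.items()}
        col_count: Dict[int, int] = {}
        for cols in rows.values():
            for j in cols:
                col_count[j] = col_count.get(j, 0) + 1

        # Candidate pivots sorted by increasing Markowitz cost (a proxy for fill-in).
        candidates = sorted(
            ((i, j) for i, cols in rows.items() for j in cols),
            key=lambda p: (row_count[p[0]] - 1) * (col_count[p[1]] - 1),
        )

        def compatible(a: Tuple[int, int], b: Tuple[int, int]) -> bool:
            # Pivots are treated as compatible when they share no row or column and
            # neither lies in the other's row pattern, so eliminating one does not
            # update the other pivot's row.
            (i1, j1), (i2, j2) = a, b
            return i1 != i2 and j1 != j2 and j2 not in rows[i1] and j1 not in rows[i2]

        chosen: List[Tuple[int, int]] = []
        for cand in candidates:
            if all(compatible(cand, c) for c in chosen):
                chosen.append(cand)
        return chosen

    # Hypothetical 4x4 sparsity pattern.
    pattern = {0: {0, 2}, 1: {1}, 2: {0, 2, 3}, 3: {3}}
    print(markowitz_compatible_set(pattern))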

    Modeling Web Services by Iterative Reformulation of Functional and Non-Functional Requirements

    Get PDF
    We propose an approach for incremental modeling of composite Web services. The technique takes into consideration both the functional and non-functional requirements of the composition. While the functional requirements are described using symbolic transition systems (transition systems augmented with state variables, function invocations, and guards), the non-functional requirements are quantified using thresholds. The approach allows users to specify an abstract and possibly incomplete specification of the desired service (goal) that can be realized by selecting and composing a set of pre-existing services. In the event that such a composition is unrealizable, i.e., the composition is not functionally equivalent to the goal or the non-functional requirements are violated, our system provides the user with the causes of the failure, which can be used to appropriately reformulate the functional and/or non-functional requirements of the goal specification.
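
    A minimal sketch of the kind of model described above, assuming a toy formalization: each transition carries a guard over state variables, a service invocation, and a latency attribute, and a run is accepted only if every guard holds and an aggregate latency threshold is met. All names (Transition, latency_ms, the example services) are hypothetical; the paper's symbolic transition systems are richer.

    # Hypothetical sketch of a symbolic transition system with guards over state
    # variables and a simple non-functional (latency) threshold check.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    State = Dict[str, int]  # state variables and their values

    @dataclass
    class Transition:
        source: str
        target: str
        invocation: str                      # name of the invoked service operation
        guard: Callable[[State], bool]       # predicate over state variables
        update: Callable[[State], State]     # effect on state variables
        latency_ms: int                      # non-functional attribute of this step

    @dataclass
    class SymbolicTS:
        initial: str
        transitions: List[Transition] = field(default_factory=list)

        def run(self, steps: List[str], state: State, latency_budget_ms: int) -> bool:
            """Replay a sequence of invocations; fail if a guard or the budget is violated."""
            loc, spent = self.initial, 0
            for op in steps:
                for t in self.transitions:
                    if t.source == loc and t.invocation == op and t.guard(state):
                        state, loc, spent = t.update(state), t.target, spent + t.latency_ms
                        break
                else:
                    return False                     # no enabled transition: functional failure
            return spent <= latency_budget_ms        # non-functional threshold

    # Toy composition: reserve an item only if stock > 0, within a 300 ms budget.
    sts = SymbolicTS(
        initial="s0",
        transitions=[
            Transition("s0", "s1", "checkStock", lambda s: True,
                       lambda s: {**s, "stock": s["stock"]}, latency_ms=100),
            Transition("s1", "s2", "reserve", lambda s: s["stock"] > 0,
                       lambda s: {**s, "stock": s["stock"] - 1}, latency_ms=150),
        ],
    )
    print(sts.run(["checkStock", "reserve"], {"stock": 2}, latency_budget_ms=300))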

    Inconsistency-tolerant Query Answering in Ontology-based Data Access

    Get PDF
    Ontology-based data access (OBDA) is receiving great attention as a new paradigm for managing information systems through semantic technologies. According to this paradigm, a Description Logic ontology provides an abstract and formal representation of the domain of interest to the information system, and is used as a sophisticated schema for accessing the data and formulating queries over them. In this paper, we address the problem of dealing with inconsistencies in OBDA. Our general goal is both to study DL semantic frameworks that are inconsistency-tolerant, and to devise techniques for answering unions of conjunctive queries under such inconsistency-tolerant semantics. Our work is inspired by the approaches to consistent query answering in databases, which are based on the idea of living with inconsistencies in the database, but trying to obtain only consistent information during query answering, by relying on the notion of database repair. We first adapt the notion of database repair to our context, and show that, according to such a notion, inconsistency-tolerant query answering is intractable, even for very simple DLs. Therefore, we propose a different repair-based semantics, with the goal of reaching a good compromise between the expressive power of the semantics and the computational complexity of inconsistency-tolerant query answering. Indeed, we show that query answering under the new semantics is first-order rewritable in OBDA, even if the ontology is expressed in one of the most expressive members of the DL-Lite family.
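
    The repair idea can be made concrete with a toy example. Below, an ABox is repaired against a single disjointness axiom by enumerating its maximal consistent subsets, and a ground atom is accepted only if it holds in every repair, i.e., the intractable repair-based semantics the paper takes as its starting point. The data, the axiom, and the brute-force enumeration are illustrative assumptions; the paper's DL-Lite setting and its first-order rewritable semantics are more refined.

    # Toy illustration of repair-based, inconsistency-tolerant query answering,
    # assuming a single disjointness axiom and ground atomic queries only.
    from itertools import combinations

    ABOX = {("Student", "ann"), ("Professor", "ann"), ("Student", "bob")}
    DISJOINT = ("Student", "Professor")   # no individual may be both

    def consistent(abox):
        a, b = DISJOINT
        return not ({x for (c, x) in abox if c == a} & {x for (c, x) in abox if c == b})

    def repairs(abox):
        """All maximal consistent subsets of the ABox (brute force, fine for toy data)."""
        subsets = [set(c) for k in range(len(abox) + 1) for c in combinations(abox, k)]
        consistent_subsets = [s for s in subsets if consistent(s)]
        return [s for s in consistent_subsets
                if not any(s < t for t in consistent_subsets)]

    def certain(atom, abox=ABOX):
        """An atom is a consistent answer if it holds in every repair."""
        return all(atom in r for r in repairs(abox))

    print(certain(("Student", "bob")))   # True: untouched by the conflict on ann
    print(certain(("Student", "ann")))   # False: dropped in one of the repairs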

    Software components and formal methods from a computational viewpoint

    Full text link
    Software components and the methodology of component-based development offer a promising approach to mastering the design complexity of huge software products because they separate the concerns of software architecture from individual component behavior and allow for reusability of components. In combination with formal methods, the specification of a formal component model of the later software product or system allows for establishing and verifying important system properties in an automatic and convenient way, which positively contributes to the overall correctness of the system. Here, we study such a combined approach. Like similar approaches, we also face the so-called state space explosion problem, which makes property verification computationally hard. In order to cope with this problem, we derive techniques that are guaranteed to work in polynomial time in the size of the specification of the system under analysis, i.e., we put an emphasis on the computational viewpoint of verification. As a consequence, we consider interesting subclasses of component-based systems that are amenable to such analysis. We are particularly interested in ideas that exploit the compositionality of the component model and refrain from understanding a system as a monolithic block. The assumptions that accompany the set of systems that are verifiable with our techniques can be interpreted as general design rules that forbid building systems at will, in order to gain efficient verification techniques. The compositional nature of software components thereby offers development strategies that lead to systems that are correct by construction. Moreover, this nature also facilitates compositional reduction techniques that allow us to reduce a given model to the core that is relevant for verification. We consider properties specified in Computation Tree Logic and put an emphasis on the property of deadlock-freedom. We use the framework of interaction systems as the formal component model, but our results carry over to other formal models for component-based development. We include several examples and evaluate some ideas in experiments with a prototype implementation.
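
    To make the deadlock-freedom question concrete, the sketch below composes two toy components over a shared action and searches the global product for states with no enabled interaction. This brute-force product construction is precisely the monolithic approach that suffers from state space explosion and that the thesis avoids through compositional techniques; the components and action names are made up.

    # Illustrative brute-force check for deadlock-freedom of a two-component
    # interaction system; compositional techniques exist precisely to avoid
    # building this global product.
    from collections import deque

    # Each component: {state: {action: next_state}} (hypothetical toy components).
    PRODUCER = {"idle": {"produce": "full"}, "full": {"hand_over": "idle"}}
    CONSUMER = {"wait": {"hand_over": "busy"}, "busy": {"consume": "wait"}}
    SHARED = {"hand_over"}          # actions on which the components must synchronize

    def enabled(p, c):
        """Interactions enabled in the global state (p, c)."""
        moves = []
        for a, p2 in PRODUCER.get(p, {}).items():
            if a in SHARED:
                if a in CONSUMER.get(c, {}):
                    moves.append((a, p2, CONSUMER[c][a]))   # synchronized step
            else:
                moves.append((a, p2, c))                    # local producer step
        for a, c2 in CONSUMER.get(c, {}).items():
            if a not in SHARED:
                moves.append((a, p, c2))                    # local consumer step
        return moves

    def deadlocks(start=("idle", "wait")):
        seen, queue, dead = {start}, deque([start]), []
        while queue:
            p, c = queue.popleft()
            moves = enabled(p, c)
            if not moves:
                dead.append((p, c))
            for _, p2, c2 in moves:
                if (p2, c2) not in seen:
                    seen.add((p2, c2))
                    queue.append((p2, c2))
        return dead

    print(deadlocks() or "deadlock-free")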

    Integrating hypertext with information systems through dynamic mapping

    Get PDF
    This dissertation presents a general hypertext model (GHMI) supporting the integration of hypertext and information systems through dynamic mapping. Information systems integrated based on this model benefit from hypertext functionalities (such as linking, backtracking, history, guided tours, annotations, etc.) while preserving their own computation capabilities. Although systems supporting the integration of hypertext and interface-oriented information systems do exist in the hypertext literature, there is no existing model or system effectively supporting the integration of hypertext and computation-oriented information systems. GHMI makes its major contributions by both extending and specifying the well-known Dexter Hypertext Reference Model. GHMI extends the Dexter model to overcome its limitations. GHMI also maps its capabilities to the extended Dexter model with appropriate specifications to meet the requirements of our dynamic mapping environment. The extended Dexter functions apply bridge laws in the hypertext knowledge base to map information system objects and relationships to hypertext constructs at run-time. We have implemented GHMI as a prototype to prove its feasibility.
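
    The bridge-law idea can be illustrated with a small sketch in which rules map information system objects and relationships onto hypertext nodes and links on demand. The rule format and the object model below are hypothetical and do not reflect GHMI's actual knowledge-base syntax.

    # Hypothetical sketch of run-time bridge laws: rules that map information-system
    # objects and relationships onto hypertext nodes and links on demand.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class Node:
        anchor: str        # identifier of the underlying IS object
        label: str

    @dataclass
    class Link:
        source: str
        target: str
        rel: str

    # Bridge law: predicate over an IS relationship name -> hypertext link type.
    BridgeLaw = Tuple[Callable[[str], bool], str]

    BRIDGE_LAWS: List[BridgeLaw] = [
        (lambda rel: rel == "references", "citation-link"),
        (lambda rel: rel == "computes",   "computation-link"),
    ]

    def map_to_hypertext(objects: Dict[str, str],
                         relations: List[Tuple[str, str, str]]) -> Tuple[List[Node], List[Link]]:
        """Dynamically derive hypertext nodes and links from IS objects and relationships."""
        nodes = [Node(anchor=oid, label=label) for oid, label in objects.items()]
        links = []
        for src, rel, dst in relations:
            for matches, link_type in BRIDGE_LAWS:
                if matches(rel):
                    links.append(Link(source=src, target=dst, rel=link_type))
        return nodes, links

    # Toy information-system state mapped at "run time".
    nodes, links = map_to_hypertext(
        {"rpt1": "Quarterly report", "calc7": "Revenue model"},
        [("rpt1", "references", "calc7")],
    )
    print(nodes, links)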

    The Time Course of Processing of Natural Language Quantification

    Get PDF
    This thesis examines the processing of quantificational semantics during reading. It is motivated by some formal observations made by Barwise and Cooper (1981), and a psychological theory proposed by Moxey and Sanford (1987, 1993a). Barwise and Cooper classify quantifiers in terms of the directional scalar inference they license. Quantifiers like a few and many are described as monotone-increasing, which means that what is true about a subset is also true of the superset. For instance, if a few students passed the exam with ease, this entails that a few students passed the exam. Other quantifiers like few and not many are monotone-decreasing, and license inferences in the opposite direction. If it is true that few students passed the exam, this entails that few students passed the exam with ease. Moxey and Sanford found that these categories of quantifier produce contrasting patterns of focus. They used an off-line production task to demonstrate that monotone-increasing quantifiers, like a few and many, focus processing attention on that subset of the quantified NP which is true of the sentence predicate (called the 'refset'), and that subsequent pronouns are interpreted as referring to this set. This means that given a fragment like (1), the plural pronoun will be interpreted as referring to the set of students who passed the exam. (1) A few of the students passed the exam. They ... In contrast, monotone-decreasing quantifiers like few and not many exhibit a more diffuse pattern of focus, and permit subsequent reference to either the refset or the complement of this set (called the compset), which is false of the sentence predicate. This means that the plural pronoun in (2) can be interpreted as referring either to the set of students who passed the exam, or to those who failed it. (2) Few of the students passed the exam. They ... However, the off-line nature of the Moxey and Sanford studies limits them as descriptions of reading processes, so this thesis reports a series of experimental investigations of pronominal reference during reading. The first two studies used materials like (3) in a self-paced reading experiment to demonstrate that reference is easier when the anaphor describes a property of the refset (their presence) following monotone-increasing quantification, but that reference to either a property of the refset (their presence) or the compset (their absence) is possible following monotone-decreasing quantification, although there is a preference for compset reference. (3) [A few | Few] of the MPs attended the meeting. Their [presence | absence] helped the meeting run more smoothly. A second pair of experiments monitored subjects' eye movements as they read passages like (3) in order to determine the locus and time course of referential processes. In line with other studies (e.g. Garrod, Freudenthal and Boyle, 1993), it was predicted that the anaphor would be immediately interpreted as anomalous when it describes the unfocused antecedent. However, the studies failed to find any evidence of punctate effects. A further eye movement study was conducted with a revised set of materials but still failed to find evidence of punctate anomaly detection in the two quantificational conditions. It was concluded that pronominal reference to a quantified noun-phrase is not processed on-line, i.e. it is not processed as the anaphor is read. Chapter eight presents two experiments on the interpretation of the non-monotonic quantifier only a few. It was suggested that this has the simple function of marking a set relative to expectations, and that focus is pragmatically determined. Focus is maintained on the refset when the quantified sentence describes a situation which is consistent with expectations, but the compset is placed in focus when these expectations are violated. Experiment six uses a sentence-continuation task to demonstrate these preferences, and an interaction with sentence connectives. Experiment seven monitored subjects' eye movements as they read sentences which referred to either the refset or compset of a sentence quantified by only a few. There was no evidence that the processing of pronominal reference is contingent on the focusing properties of this quantifier. Chapters nine and ten make a digression to consider the interpretation of sentences with more than one quantifier. The resulting scope ambiguity has been the subject of considerable theoretical interest, but limited empirical research. The existing literature is reviewed in Chapter nine, and a preliminary off-line sentence-continuation study is reported in Chapter ten which examines the interaction of quantifier and pragmatic constraints on a doubly-quantified sentence. The experimental findings are summarised in Chapter eleven, where an effort is made to accommodate the quantifier focus and scope ambiguity strands of this thesis within a common representational framework.
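
    For reference, the monotonicity properties from Barwise and Cooper that the thesis relies on can be stated as the standard generalized-quantifier conditions below (a textbook formulation, not the thesis's own notation):

    % Monotonicity of a generalized quantifier Q with restrictor A and scope B.
    \[
      Q \text{ is monotone increasing} \iff \bigl( Q(A)(B) \wedge B \subseteq B' \bigr) \Rightarrow Q(A)(B')
    \]
    \[
      Q \text{ is monotone decreasing} \iff \bigl( Q(A)(B) \wedge B' \subseteq B \bigr) \Rightarrow Q(A)(B')
    \]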

    Explainable Censored Learning: Finding Critical Features with Long Term Prognostic Values for Survival Prediction

    Full text link
    Interpreting critical variables involved in complex biological processes related to survival time can help understand predictions from survival models, evaluate treatment efficacy, and develop new therapies for patients. Although the predictive results of deep learning (DL)-based models are currently better than or as good as those of standard survival methods, they are often disregarded because they lack the transparency and interpretability that are crucial to adoption in clinical applications. In this paper, we introduce a novel, easily deployable approach, called EXplainable CEnsored Learning (EXCEL), to iteratively exploit critical variables and simultaneously train the DL model on these variables. First, on a toy dataset, we illustrate the principle of EXCEL; then, we mathematically analyze our proposed method, and we derive and prove tight generalization error bounds; next, on two semi-synthetic datasets, we show that EXCEL has good anti-noise ability and stability; finally, we apply EXCEL to a variety of real-world survival datasets including clinical data and genetic data, demonstrating that EXCEL can effectively identify critical features and achieve performance on par with or better than the original models. It is worth pointing out that EXCEL can be flexibly deployed in existing or emerging models for explainable analysis of survival data in the presence of right censoring.
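
    The core loop the abstract describes, iteratively selecting critical variables and retraining on them, can be sketched generically. The sketch below uses permutation importance on a plain least-squares stand-in model with uncensored toy data; EXCEL's actual criterion, its handling of right censoring, and its error bounds are not reproduced here, and all names are illustrative.

    # Generic "select critical features, retrain" loop; the importance measure and
    # the stand-in model are illustrative, not EXCEL's procedure.
    import numpy as np

    rng = np.random.default_rng(0)

    def fit(X, y):
        """Stand-in model: ridge-regularized least squares (not a survival DL model)."""
        return np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)

    def score(w, X, y):
        return -np.mean((X @ w - y) ** 2)     # higher is better

    def permutation_importance(w, X, y):
        """Drop in score when each feature column is shuffled."""
        base = score(w, X, y)
        drops = []
        for j in range(X.shape[1]):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(base - score(w, Xp, y))
        return np.array(drops)

    def iterative_selection(X, y, keep_per_round=2, rounds=2):
        """Repeatedly keep the most important features and refit on them."""
        active = np.arange(X.shape[1])
        for _ in range(rounds):
            w = fit(X[:, active], y)
            imp = permutation_importance(w, X[:, active], y)
            active = active[np.argsort(imp)[::-1][:keep_per_round]]
        return active, fit(X[:, active], y)

    # Toy data: only features 0 and 3 actually matter.
    X = rng.normal(size=(200, 6))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)
    selected, model = iterative_selection(X, y)
    print("selected features:", sorted(selected.tolist()))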

    SAKey: Scalable almost key discovery in RDF data

    Get PDF
    Exploiting identity links among RDF resources allows applications to efficiently integrate data. Keys can be very useful for discovering these identity links. A set of properties is considered a key when its values uniquely identify resources. However, these keys are usually not available. The approaches that attempt to discover keys automatically can easily be overwhelmed by the size of the data and require clean data. We present SAKey, an approach that discovers keys in RDF data in an efficient way. To prune the search space, SAKey exploits characteristics of the data that are dynamically detected during the process. Furthermore, our approach can discover keys in datasets where erroneous data or duplicates exist (i.e., almost keys). The approach has been evaluated on different synthetic and real datasets. The results show both the relevance of almost keys and the efficiency of discovering them.
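
    A direct, deliberately non-scalable way to check the almost-key notion described above: a property set is accepted as an n-almost key if at most n subjects share their value combination with another subject. The triple data, the collision test, and the thresholds below are illustrative assumptions; SAKey's actual discovery algorithm and pruning strategies are far more involved.

    # Naive illustration of the "n-almost key" idea over a handful of made-up triples.
    from collections import defaultdict

    TRIPLES = [
        ("alice", "email", "a@x.org"), ("alice", "phone", "111"),
        ("bob",   "email", "b@x.org"), ("bob",   "phone", "111"),
        ("carol", "email", "c@x.org"), ("carol", "phone", "333"),
    ]

    def exceptions(properties, triples):
        """Subjects that collide with another subject on every property in the set."""
        values = defaultdict(lambda: defaultdict(set))
        for s, p, o in triples:
            values[s][p].add(o)
        groups = defaultdict(list)
        for s, props in values.items():
            key = tuple(frozenset(props.get(p, frozenset())) for p in properties)
            groups[key].append(s)
        return {s for members in groups.values() if len(members) > 1 for s in members}

    def is_almost_key(properties, triples, n):
        return len(exceptions(properties, triples)) <= n

    print(is_almost_key(("email",), TRIPLES, n=0))   # True: emails are unique
    print(is_almost_key(("phone",), TRIPLES, n=0))   # False: alice and bob collide
    print(is_almost_key(("phone",), TRIPLES, n=2))   # True once 2 exceptions are allowed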

    Mining Interesting Patterns in Multi-Relational Data

    Get PDF