    Visualization of graphs and trees for software analysis

    A software architecture is an abstraction of a software system that is indispensable for many software engineering tasks. Unfortunately, in many cases information pertaining to the software architecture is unavailable, outdated, or inappropriate for the task at hand. The RECONSTRUCTOR project focuses on software architecture reconstruction, i.e., obtaining architectural information from an existing system. Our research, which is part of RECONSTRUCTOR, focuses on interactive visualization and tries to answer the following question: How can users be enabled to understand the large amounts of information relevant for program understanding using visual representations? To answer this question, we have iteratively developed a number of techniques for visualizing software systems. Many of these cases involve hierarchically organized data combined with adjacency relations. Examples are function calls within a hierarchically organized software system and correspondence relations between two different versions of a hierarchically organized software system. Hierarchical Edge Bundles (HEBs) are used to visualize adjacency relations in hierarchically organized data, such as the aforementioned function calls within a software system. HEBs significantly reduce visual clutter by visually bundling relations together. Massive Sequence Views (MSVs) are used in conjunction with HEBs to enable analysis of sequences of relations, such as function-call traces. HEBs are furthermore used to visually compare hierarchically organized data, e.g., two different versions of a software system. HEBs visually emphasize splits, joins, and relocations of subhierarchies and provide for interactive selection of sets of relations. Since HEBs require a hierarchy to perform the bundling, we present Force-Directed Edge Bundles (FDEBs) as an alternative way to visually bundle relations together in the absence of a hierarchical component. FDEBs use a self-organizing approach to bundling in which edges are modeled as flexible springs that can attract each other. As a result, visual clutter is reduced and high-level edge patterns become more clearly visible. Finally, in all these methods, a clear depiction of the direction of edges is important. We have therefore performed a separate study in which we evaluated ten representations (including the standard arrow) for depicting directed edges in a controlled user study.
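    The spring-and-attraction idea behind FDEBs can be pictured with the rough sketch below. It is not the authors' implementation: the function name, parameters, and step sizes are invented, and the full method's edge-compatibility weighting is omitted. Each edge is subdivided into control points that are pulled toward their polyline neighbours (spring forces) and toward the corresponding points of other edges (attraction), which draws nearby edges into bundles while endpoints stay fixed.

```python
import numpy as np

def fdeb(edges, n_subdiv=16, n_iters=60, K=0.1, step=0.01):
    """Bundle straight edges with a simple FDEB-style relaxation.

    edges: array of shape (m, 2, 2) -- m edges, each a 2-D start and end point.
    Returns an array of shape (m, n_subdiv + 2, 2) of bundled polylines.
    """
    m = len(edges)
    t = np.linspace(0.0, 1.0, n_subdiv + 2)
    # Subdivide each edge into a polyline of control points.
    P = np.array([(1 - t)[:, None] * e[0] + t[:, None] * e[1] for e in edges])

    for _ in range(n_iters):
        # Spring forces pull each interior point toward its polyline neighbours.
        spring = P[:, :-2] + P[:, 2:] - 2.0 * P[:, 1:-1]
        # Attraction forces pull corresponding points of different edges together
        # (the self term vanishes because the difference is zero).
        diff = P[None, :, 1:-1] - P[:, None, 1:-1]       # (m, m, n_subdiv, 2)
        dist = np.linalg.norm(diff, axis=-1) + 1e-6
        attract = (diff / dist[..., None]).sum(axis=1)   # sum over other edges
        P[:, 1:-1] += step * (K * spring + attract / m)  # endpoints stay fixed
    return P

# Example: four near-parallel edges are drawn together into one bundle.
edges = np.array([[[0, i], [10, i]] for i in range(4)], dtype=float)
bundled = fdeb(edges)
print(bundled.shape)  # (4, 18, 2)
```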

    Religious History of the Gaṇḍī Beam: Testimonies of Texts, Images and Ritual Practices

    The gaṇḍī beam is a monastic instrument known to have been used in Buddhist monasteries in ancient India to call the monks together for joint activities. With the spread of Buddhism the instrument was transmitted to the Tibetan and later the Mongolian Buddhist cultures, and it has remained in use in both monastic traditions to the present day. One of the most prominent applications of the gaṇḍī beam in modern Mongolia relates to the poṣadha ritual. In this article I present the history of the gaṇḍī beam within the framework of material culture studies. The analysis investigates the mutual relations between the artefact and the societies that have made use of it, as well as the ways in which these relations may have changed. To accomplish this task I study the testimonies of the original Sanskrit and Tibetan texts, religious images, and accounts of ritual practices.

    A comparison of parsing technologies for the biomedical domain

    This paper reports on a number of experiments designed to investigate the extent to which current NLP resources are able to syntactically and semantically analyse biomedical text. We address two tasks: parsing a real corpus with a hand-built wide-coverage grammar, producing both syntactic analyses and logical forms; and automatically computing the interpretation of compound nouns where the head is a nominalisation (e.g., hospital arrival means an arrival at hospital, while patient arrival means an arrival of a patient). For the former task we demonstrate that flexible and yet constrained 'preprocessing' techniques are crucial to success: these enable us to use part-of-speech tags to overcome inadequate lexical coverage, and to 'package up' complex technical expressions prior to parsing so that they are blocked from creating misleading amounts of syntactic complexity. We argue that the XML-processing paradigm is ideally suited for automatically preparing the corpus for parsing. For the latter task, we compute interpretations of the compounds by exploiting surface cues and meaning paraphrases, which in turn are extracted from the parsed corpus. This provides an empirical setting in which we can compare the utility of a comparatively deep parser vs. a shallow one, exploring the trade-off between resolving attachment ambiguities on the one hand and generating errors in the parses on the other. We demonstrate that a model of the meaning of compound nominalisations is achievable with the aid of current broad-coverage parsers.
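    The 'package up' step described above, protecting complex technical expressions from the parser, can be pictured with the toy preprocessor below. The term lexicon, function name, and underscore-token convention are invented for illustration; the paper's actual pipeline is XML-based and derives such expressions from the corpus rather than from a hard-coded list.

```python
import re

# Hypothetical mini-lexicon of multiword biomedical terms.
TERM_LEXICON = [
    "nuclear factor kappa B",
    "tumor necrosis factor alpha",
    "protein kinase C",
]

def package_terms(sentence: str) -> str:
    """Replace each known multiword term with a single underscore-joined
    token so a parser treats it as one lexical unit instead of building
    spurious internal syntactic structure for it."""
    # Longest terms first, so subterms never pre-empt a longer match.
    for term in sorted(TERM_LEXICON, key=len, reverse=True):
        packaged = term.replace(" ", "_")
        sentence = re.sub(re.escape(term), packaged, sentence, flags=re.IGNORECASE)
    return sentence

print(package_terms("Activation of nuclear factor kappa B requires protein kinase C."))
# -> "Activation of nuclear_factor_kappa_B requires protein_kinase_C."
```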

    Unraveling the influence of domain knowledge during simulation-based inquiry learning

    This study investigated whether mere knowledge of the meaning of variables can facilitate inquiry learning processes and outcomes. Fifty-seven college freshmen were randomly allocated to one of three inquiry tasks. The concrete task had familiar variables from which hypotheses about their underlying relations could be inferred. The intermediate task used familiar variables that did not invoke underlying relations, whereas the abstract task contained unfamiliar variables that did not allow for inference of hypotheses about relations. Results showed that concrete participants performed more successfully and efficiently than intermediate participants, who in turn were as successful and efficient as abstract participants. From these findings it was concluded that students learning by inquiry benefit little from knowledge of the meaning of variables per se; some additional understanding of the way these variables are interrelated seems required to enhance inquiry learning processes and outcomes.

    Learning Sentence-internal Temporal Relations

    In this paper we propose a data-intensive approach for inferring sentence-internal temporal relations. Temporal inference is relevant for practical NLP applications which either extract or synthesize temporal information (e.g., summarisation, question answering). Our method bypasses the need for manual coding by exploiting the presence of markers like "after", which overtly signal a temporal relation. We first show that models trained on main and subordinate clauses connected with a temporal marker achieve good performance on a pseudo-disambiguation task simulating temporal inference (during testing the temporal marker is treated as unseen and the models must select the right marker from a set of possible candidates). Secondly, we assess whether the proposed approach holds promise for the semi-automatic creation of temporal annotations. Specifically, we use a model trained on noisy and approximate data (i.e., main and subordinate clauses) to predict intra-sentential relations present in TimeBank, a corpus annotated with rich temporal information. Our experiments compare and contrast several probabilistic models differing in their feature space, linguistic assumptions and data requirements. We evaluate performance against gold-standard corpora and also against human subjects.
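    The pseudo-disambiguation setup can be sketched as follows: given the words of a main and subordinate clause with the marker hidden, score each candidate marker and pick the best. The toy model below uses naive Bayes over clause words; the candidate set, counts, and smoothing are invented for illustration and are far simpler than the feature spaces the paper actually compares.

```python
import math
from collections import Counter, defaultdict

MARKERS = ["after", "before", "while", "when", "once"]

class MarkerModel:
    """Toy naive-Bayes scorer: log P(marker) + sum log P(word | marker)."""

    def __init__(self):
        self.marker_counts = Counter()
        self.word_counts = defaultdict(Counter)

    def train(self, pairs):
        # pairs: iterable of (marker, clause_words) harvested from sentences
        # where the marker overtly signals the temporal relation.
        for marker, words in pairs:
            self.marker_counts[marker] += 1
            self.word_counts[marker].update(words)

    def score(self, marker, words):
        total = sum(self.marker_counts.values())
        logp = math.log((self.marker_counts[marker] + 1) / (total + len(MARKERS)))
        denom = sum(self.word_counts[marker].values()) + 10_000  # crude smoothing
        for w in words:
            logp += math.log((self.word_counts[marker][w] + 1) / denom)
        return logp

    def predict(self, words):
        # The true marker is treated as unseen; choose the best candidate.
        return max(MARKERS, key=lambda m: self.score(m, words))

model = MarkerModel()
model.train([("after", ["she", "left", "he", "called"]),
             ("while", ["she", "slept", "he", "read"])])
print(model.predict(["she", "left", "he", "phoned"]))  # -> "after"
```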

    Profit-oriented disassembly-line balancing

    As product and material recovery has gained importance, disassembly volumes have increased, justifying the construction of disassembly lines similar to assembly lines. Recent research on disassembly lines has focused on complete disassembly. Unlike assembly, current industry practice involves partial disassembly with profit-maximization or cost-minimization objectives. Another difference between assembly and disassembly is that disassembly involves additional precedence relations among tasks due to processing alternatives or physical restrictions. In this study, we define and solve the profit-oriented partial disassembly-line balancing problem. We first characterize different types of precedence relations in disassembly and propose a new representation scheme that encompasses all these types. We then develop the first mixed integer programming formulation for the partial disassembly-line balancing problem, which simultaneously determines (1) the parts whose demand is to be fulfilled to generate revenue, (2) the tasks that will release the selected parts under task and station costs, (3) the number of stations that will be opened, (4) the cycle time, and (5) the balance of the disassembly line, i.e. the feasible assignment of selected tasks to stations such that the various types of precedence relations are satisfied. We propose a lower- and upper-bounding scheme based on the linear programming relaxation of the formulation. Computational results show that our approach provides near-optimal solutions for small problems and is capable of solving larger problems with up to 320 disassembly tasks in reasonable time.
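    To make the shape of such a model concrete, here is a heavily simplified sketch, not the paper's formulation: toy invented data, AND-type precedence only, and no cycle-time or demand constraints, written with the PuLP library. Tasks may be skipped (partial disassembly), tasks only run at opened stations, and a task's predecessor must run at the same or an earlier station.

```python
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

# Toy data (invented for illustration): 3 tasks, 2 candidate stations.
tasks, stations = [0, 1, 2], [0, 1]
revenue = {0: 0.0, 1: 8.0, 2: 12.0}   # revenue of the part each task releases
task_cost = {0: 1.0, 1: 2.0, 2: 3.0}
station_cost = 5.0
precedence = [(0, 1), (0, 2)]         # task 0 must precede tasks 1 and 2

prob = LpProblem("partial_disassembly", LpMaximize)
x = {(t, s): LpVariable(f"x_{t}_{s}", cat=LpBinary) for t in tasks for s in stations}
open_s = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in stations}

# Objective: revenue of released parts minus task and station costs.
prob += (lpSum((revenue[t] - task_cost[t]) * x[t, s] for t in tasks for s in stations)
         - station_cost * lpSum(open_s.values()))

for t in tasks:
    # Partial disassembly: a task may be skipped, but runs at most once.
    prob += lpSum(x[t, s] for s in stations) <= 1
for (t, s), var in x.items():
    prob += var <= open_s[s]          # tasks only go to opened stations
for a, b in precedence:
    for s in stations:
        # If b runs at station s, a must run at the same or an earlier one.
        prob += x[b, s] <= lpSum(x[a, r] for r in stations if r <= s)

prob.solve()
print({v.name: v.value() for v in prob.variables() if v.value()})
```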

    Expert chess memory: Revisiting the chunking hypothesis

    After reviewing the relevant theory on chess expertise, this paper re-examines experimentally the finding of Chase and Simon (1973a) that the differences in the ability of chess players at different skill levels to copy and to recall positions are attributable to the experts' storage of thousands of chunks (patterned clusters of pieces) in long-term memory. Despite important differences in the experimental apparatus, the data of the present experiments regarding latencies and chess relations between successively placed pieces are highly correlated with those of Chase and Simon. We conclude that the 2-second inter-chunk interval used to define chunk boundaries is robust, and that chunks have psychological reality. We discuss possible reasons why Masters in our new study used substantially larger chunks than the Master of the 1973 study, and extend the chunking theory to take account of the evidence for large retrieval structures (templates) in long-term memory.
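    The 2-second boundary criterion the study re-examines is easy to operationalise. The helper below is an illustration, not the authors' analysis code: it splits a sequence of piece placements into chunks wherever the latency before the next placement exceeds the threshold.

```python
def segment_chunks(placements, latencies, threshold=2.0):
    """Split piece placements into chunks at inter-placement latencies
    above `threshold` seconds (the Chase-and-Simon boundary criterion).

    placements: pieces in placement order, e.g. ["Ke1", "Qd1", ...]
    latencies:  seconds elapsed before each placement; latencies[0] is
                the delay before the first piece, which opens chunk one.
    """
    chunks = [[placements[0]]]
    for piece, latency in zip(placements[1:], latencies[1:]):
        if latency > threshold:
            chunks.append([])        # long pause -> new chunk boundary
        chunks[-1].append(piece)
    return chunks

# Example: a 3.1 s pause separates two chunks of a recalled position.
pieces = ["Ke1", "Qd1", "Ra1", "Ke8", "Qd8"]
waits = [0.5, 0.8, 1.1, 3.1, 0.9]
print(segment_chunks(pieces, waits))
# -> [['Ke1', 'Qd1', 'Ra1'], ['Ke8', 'Qd8']]
```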