    Three reasons to adopt TAG-based surface realisation

    Surface realisation from flat semantic formulae is known to be exponential in the length of the input. In this paper, we argue that TAG naturally supports the integration of three main ways of reducing this complexity: polarity filtering, delayed adjunction and the elimination of empty semantic items. We support these claims by presenting some preliminary results of the TAG-based surface realiser

    A computer-assisted approach to the analysis of metaphor variation across genres.


    Treebank-based acquisition of Chinese LFG resources for parsing and generation

    This thesis describes a treebank-based approach to automatically acquire robust, wide-coverage Lexical-Functional Grammar (LFG) resources for Chinese parsing and generation, which is part of a larger project on the rapid construction of deep, large-scale, constraint-based, multilingual grammatical resources. I present an application-oriented LFG analysis for Chinese core linguistic phenomena and (in cooperation with PARC) develop a gold-standard dependency bank of Chinese f-structures for evaluation. Based on the Penn Chinese Treebank, I design and implement two architectures for inducing Chinese LFG resources, one annotation-based and the other dependency conversion-based. I then apply the f-structure acquisition algorithm together with external, state-of-the-art parsers to parsing new text into "proto" f-structures. In order to convert "proto" f-structures into "proper" f-structures or deep dependencies, I present a novel Non-Local Dependency (NLD) recovery algorithm using subcategorisation frames and f-structure paths linking antecedents and traces in NLDs extracted from the automatically-built LFG f-structure treebank. Based on the grammars extracted from the f-structure annotated treebank, I develop a PCFG-based chart generator and a new n-gram based pure dependency generator to realise Chinese sentences from LFG f-structures. The work reported in this thesis is the first effort to scale treebank-based, probabilistic Chinese LFG resources from proof-of-concept research to unrestricted, real text. Although this thesis concentrates on Chinese and LFG, many of the methodologies, e.g. the acquisition of predicate-argument structures, NLD resolution and the PCFG- and dependency n-gram-based generation models, are largely language and formalism independent and should generalise to diverse languages as well as to labelled bilexical dependency representations other than LFG

    SaferDrive: an NLG-based Behaviour Change Support System for Drivers

    Despite the long history of Natural Language Generation (NLG) research, the potential for influencing real-world behaviour through automatically generated texts has not received much attention. In this paper, we present SaferDrive, a behaviour change support system that uses NLG and telematic data to create weekly textual feedback for automobile drivers, delivered through a smartphone application. Usage-based car insurance policies use sensors to track driver behaviour. Although the data collected by such insurers could provide detailed feedback about driving style, they are typically withheld from the driver and used only to calculate insurance premiums. SaferDrive instead provides detailed textual feedback about driving style, with the intent of helping drivers improve their driving habits. We evaluate the system with real drivers and report that the textual feedback generated by our system does have a positive influence on driving habits, especially with regard to speeding

    Weak Lensing Peaks in Simulated Light-Cones: Investigating the Coupling between Dark Matter and Dark Energy

    In this paper, we study the statistical properties of weak lensing peaks in light-cones generated from cosmological simulations. In order to assess the prospects of such an observable as a cosmological probe, we consider simulations that include interacting Dark Energy (hereafter DE) models with a coupling term between DE and Dark Matter. Cosmological models that produce a larger population of massive clusters have more numerous high signal-to-noise peaks; among models with comparable numbers of clusters, those with more concentrated haloes produce more peaks. The most extreme model under investigation shows a difference in peak counts of about 20% with respect to the reference ΛCDM model. We find that peak statistics can be used to distinguish a coupled DE model from a reference one with the same power spectrum normalisation. The differences in the expansion history and the growth rate of structure formation are reflected in their halo counts, non-linear scale features and, through them, in the properties of the lensing peaks. For a source redshift distribution consistent with the expectations of future space-based wide-field surveys, we find that typically seventy percent of the cluster population contributes to weak-lensing peaks with signal-to-noise ratios larger than two, and that the fraction of clusters in peaks approaches one hundred percent for haloes with redshift z ≤ 0.5. Our analysis demonstrates that peak statistics are an important tool for disentangling DE models by accurately tracing the structure formation processes as a function of cosmic time. Comment: accepted in MNRAS, figures improved and text updated

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; and (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them. Comment: Published in Journal of AI Research (JAIR), volume 61, pp 75-170. 118 pages, 8 figures, 1 table

    Is technology a new challenge for the field of construction management?

    The central theme in Construction Management (CM) and CM research is improving the performance of the construction industry. Much effort and thought are given to improving project performance. Within CM there is a natural inclination to focus on projects and project management (PM). Companies in the construction industry also see project management as their key competence. Both have little appreciation for technologies other than those that support project management tasks. Technology, other than PM support, is often seen as an outside resource that is "contracted in". By taking such a neutral position regarding technology, CM and construction companies not only disregard the potential of these technologies, but also fail to notice the adverse effects when new technologies are "contracted in". This paper argues that CM, as well as companies in construction, can gain by reconsidering their stance towards technology. This argument is built on the case of road construction, in particular the asphalt paving process. The case shows that the development of new technologies and the development of the skills and operational practices of the people expected to use those technologies are not in harmony. Projections for the upcoming decade indicate a sharp rise and proliferation of SMART technologies, in the construction industry too. Construction companies need to take a more proactive and involved stance towards these technologies to be able to reap the benefits. If they do not, the gap between technologies and construction will grow, and the risks for the companies will increase with it. CM and CM research need to address this gap, supporting the introduction of new technologies and the synchronisation of new technology development with the development of skills and working practices. If they fail to do so, CM and CM research will struggle to maintain a meaningful contribution to the improvement of the construction industry

    The Semantic Grid: A future e-Science infrastructure

    e-Science offers a promising vision of how computer and communication technology can support and enhance the scientific process. It does this by enabling scientists to generate, analyse, share and discuss their insights, experiments and results in an effective manner. The underlying computer infrastructure that provides these facilities is commonly referred to as the Grid. At this time, there are a number of grid applications being developed, and there is a whole raft of computer technologies that provide fragments of the necessary functionality. However, there is currently a major gap between these endeavours and the vision of e-Science, in which there is a high degree of easy-to-use and seamless automation, and in which there are flexible collaborations and computations on a global scale. To bridge this practice–aspiration divide, this paper presents a research agenda whose aim is to move from the current state of the art in e-Science infrastructure to the future infrastructure that is needed to support the full richness of the e-Science vision. Here the future e-Science research infrastructure is termed the Semantic Grid (Semantic Grid to Grid is meant to connote a relationship similar to the one that exists between the Semantic Web and the Web). In particular, we present a conceptual architecture for the Semantic Grid. This architecture adopts a service-oriented perspective in which distinct stakeholders in the scientific process, represented as software agents, provide services to one another, under various service level agreements, in various forms of marketplace. We then focus predominantly on the issues concerned with the way that knowledge is acquired and used in such environments, since we believe this is the key differentiator between current grid endeavours and those envisioned for the Semantic Grid