
    Statistical language models for alternative sequence selection


    Energy efficient transport technology: Program summary and bibliography

    The Energy Efficient Transport (EET) Program began in 1976 as an element of the NASA Aircraft Energy Efficiency (ACEE) Program. The EET Program and the results of various applications of advanced aerodynamics and active controls technology (ACT) applicable to future subsonic transport aircraft are discussed. Advanced aerodynamics research areas included high-aspect-ratio supercritical wings, winglets, advanced high-lift devices, natural laminar flow airfoils, hybrid laminar flow control, nacelle aerodynamic and inertial loads, propulsion/airframe integration (e.g., long-duct nacelles), and wing and empennage surface coatings. In-depth analytical/trade studies, numerous wind tunnel tests, and several flight tests were conducted. Improved computational methodology was also developed. The active control functions considered were maneuver load control, gust load alleviation, flutter mode control, angle-of-attack limiting, and pitch-augmented stability. Current and advanced active control laws were synthesized, and alternative control system architectures were developed and analyzed. Integrated application and fly-by-wire implementation of the active control functions were design requirements in one major subprogram. Additional EET research included interdisciplinary technology applications, integrated energy management, handling qualities investigations, reliability calculations, and economic evaluations related to fuel savings and cost of ownership of the selected improvements.

    Decentralized Web Search

    Centrally controlled search engines will not be sufficient or reliable for indexing and searching the rapidly growing World Wide Web in the near future. A better solution is to enable the Web to index itself in a decentralized manner. Existing distributed approaches for ranking search results do not provide flexible searching, complete results, and highly accurate ranking. This thesis presents a decentralized Web search mechanism, named DEWS, which enables existing webservers to collaborate with each other to form a distributed index of the Web. DEWS can rank search results based on query keyword relevance and the relative importance of websites in a distributed manner, maintaining a hyperlink overlay on top of a structured P2P overlay. It also supports approximate matching of query keywords using phonetic codes and n-grams, along with list decoding of a linear covering code. DEWS supports incremental retrieval of search results in a decentralized manner, which reduces the network bandwidth required for query resolution. It uses an efficient routing mechanism that extends the Plexus routing protocol with a message aggregation technique. DEWS maintains replicas of indexes, which reduces routing hops and makes DEWS robust to webserver failures. The standard LETOR 3.0 dataset was used to validate the DEWS protocol. Simulation results show that the ranking accuracy of DEWS is close to the centralized case, while the network overhead for collaborative search and indexing is logarithmic in network size. The results also show that DEWS is resilient to changes in the available pool of indexing webservers and works efficiently even in the presence of heavy query load.
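
    As a rough illustration of the approximate keyword matching techniques named above (character n-grams and phonetic codes), the following Python sketch compares keywords by n-gram overlap and by a simplified Soundex code. It is only an assumed illustration of how such matching can work, not the actual DEWS indexing or its list-decoding scheme; the function names and thresholds are hypothetical.

    def char_ngrams(word, n=3):
        # Character n-grams of a word, with boundary padding.
        padded = "#" + word.lower() + "#"
        return {padded[i:i + n] for i in range(len(padded) - n + 1)}

    def ngram_similarity(a, b, n=3):
        # Jaccard overlap between the n-gram sets of two keywords.
        ga, gb = char_ngrams(a, n), char_ngrams(b, n)
        return len(ga & gb) / len(ga | gb) if (ga | gb) else 0.0

    def soundex(word):
        # Simplified Soundex: keep the first letter, map consonants to digit
        # classes, drop vowels, collapse adjacent identical digits.
        classes = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3", "l": "4", "mn": "5", "r": "6"}
        code_of = {c: d for letters, d in classes.items() for c in letters}
        word = word.lower()
        digits, prev = [], code_of.get(word[0], "")
        for ch in word[1:]:
            d = code_of.get(ch, "")
            if d and d != prev:
                digits.append(d)
            prev = d
        return (word[0].upper() + "".join(digits) + "000")[:4]

    # A misspelled query term still matches the indexed keyword reasonably well:
    print(ngram_similarity("decentralized", "decentrilized"))  # substantial overlap
    print(soundex("robert"), soundex("rupert"))                # both map to R163

    In a distributed index, codes and n-grams like these could serve as additional keys so that misspelled query terms still route to the relevant entries, though the mechanism described above (list decoding of a linear covering code) is more involved.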

    Modelling and Analysis Using GROOVE

    In this paper we present case studies that describe how the graph transformation tool GROOVE has been used to model problems from a wide variety of domains. These case studies highlight the wide applicability of GROOVE in particular, and of graph transformation in general. They also give concrete templates for using GROOVE in practice. Furthermore, we use the case studies to analyse the main strong and weak points of GROOVE.

    A mesh adaptivity scheme on the Landau-de Gennes functional minimization case in 3D, and its driving efficiency

    This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes driven by metric-based a posteriori error estimates, studied on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method stepping in when needed. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the associated remeshing. Each factor is represented by a parameter, whose value may vary for every new mesh adaptation. We show empirically that the overall time to convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on the simulation of systems of nematic colloids contributed substantially to upgrading the 3D meshing capabilities of an open-source finite element-oriented programming language, as well as to an outer 3D remeshing module.
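
    To make the "Newton's iteration with steepest descent stepping in" idea concrete, here is a minimal NumPy sketch of such a hybrid minimizer on a toy energy. It illustrates the general scheme under assumed names and tolerances; it is not the paper's finite element discretization of the Landau-de Gennes functional.

    import numpy as np

    def minimize(E, grad, hess, x, tol=1e-8, max_iter=200):
        # Try a full Newton step; fall back to (damped) steepest descent
        # whenever the Newton step is unavailable or fails to lower the energy.
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            try:
                step = np.linalg.solve(hess(x), -g)      # Newton direction
            except np.linalg.LinAlgError:
                step = -g                                 # singular Hessian
            if E(x + step) >= E(x):                       # Newton did not help
                t = 1.0
                while E(x - t * g) >= E(x) and t > 1e-12:
                    t *= 0.5                              # backtracking descent
                step = -t * g
            x = x + step
        return x

    # Toy smooth energy with its minimum at (1, 1).
    E = lambda x: (x[0] - 1.0) ** 4 + (x[1] - 1.0) ** 2
    grad = lambda x: np.array([4 * (x[0] - 1.0) ** 3, 2 * (x[1] - 1.0)])
    hess = lambda x: np.array([[12 * (x[0] - 1.0) ** 2, 0.0], [0.0, 2.0]])
    print(minimize(E, grad, hess, np.array([3.0, -2.0])))  # approaches [1, 1]

    In the adaptive scheme described above, each such nonlinear solve would run on the current adapted mesh, with the two remeshing parameters deciding when the mesh is rebuilt and how precisely.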

    Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
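
    The two binding operators under comparison are easy to demonstrate directly. The following Python sketch binds two random vectors with circular convolution (decoded by correlation with the approximate inverse, as in Holographic Reduced Representations) and with a random permutation (decoded by the inverse permutation). The dimensionality and Gaussian item vectors are illustrative choices, not the paper's experimental setup.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 1024
    a, b = rng.normal(0.0, 1.0 / np.sqrt(d), size=(2, d))  # ~unit-norm item vectors

    # Circular convolution binding (HRR), computed via FFT.
    bound = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
    # The approximate inverse of a is its involution: a*[i] = a[-i mod d].
    a_inv = np.concatenate(([a[0]], a[1:][::-1]))
    b_hat = np.real(np.fft.ifft(np.fft.fft(a_inv) * np.fft.fft(bound)))
    cos = np.dot(b_hat, b) / (np.linalg.norm(b_hat) * np.linalg.norm(b))
    print("convolution decode (noisy):", cos)               # well above chance

    # Random permutation binding: permute b before superimposing it with a.
    perm = rng.permutation(d)
    inv_perm = np.argsort(perm)
    trace = a + b[perm]
    b_rec = (trace - a)[inv_perm]                            # exact recovery here
    print("permutation decode (exact):", np.allclose(b_rec, b))

    With many paired associates superimposed in one trace, permutation decoding also becomes noisy, which is the regime the capacity comparisons above address.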

    Distributional Semantic Models for Clinical Text Applied to Health Record Summarization

    As information systems in the health sector are becoming increasingly computerized, large amounts of care-related information are being stored electronically. In hospitals, clinicians continuously document the treatment and care given to patients in electronic health record (EHR) systems. Much of the documented information takes the form of clinical notes, or narratives, containing primarily unstructured free-text information. For each care episode, clinical notes are written on a regular basis, ending with a discharge summary that summarizes the care episode. Although EHR systems are helpful for storing and managing such information, there is an unrealized potential in utilizing this information for smarter care assistance, as well as for secondary purposes such as research and education. Advances in clinical language processing are enabling computers to assist clinicians in their interaction with the free-text information documented in EHR systems. This includes assisting in tasks like query-based search, terminology development, knowledge extraction, translation, and summarization. This thesis explores various computerized approaches and methods aimed at enabling automated semantic textual similarity assessment and information extraction based on the free-text information in EHR systems. The focus is placed on the task of (semi-)automated summarization of the clinical notes written during individual care episodes. The overall theme of the presented work is to utilize resource-light approaches and methods, circumventing the need to manually develop knowledge resources or training data. Thus, to enable computational semantic textual similarity assessment, word distribution statistics are derived from large training corpora of clinical free text and stored as vector-based representations referred to as distributional semantic models. Resource-light methods are also explored for the task of automatic summarization of clinical free-text information, relying on semantic textual similarity assessment. Novel and experimental methods are presented and evaluated that focus on: a) distributional semantic models trained in an unsupervised manner from statistical information derived from large unannotated clinical free-text corpora; b) representing and computing semantic similarities between linguistic items of different granularity, primarily words, sentences and clinical notes; and c) summarizing clinical free-text information from individual care episodes. Results are evaluated against gold standards that reflect human judgements. The results indicate that the use of distributional semantics is promising as a resource-light approach to automatically capturing semantic textual similarity relations from unannotated clinical text corpora. Here it is important that the semantics correlate with the clinical terminology and with the various semantic similarity assessment tasks. Improvements over classical approaches are achieved when the underlying vector-based representations allow a broader range of semantic features to be captured and represented. These are either distributed over multiple semantic models trained with different features and training corpora, or use models that store multiple sense-vectors per word. Further, the use of structured meta-level information accompanying care episodes is explored as training features for distributional semantic models, with the aim of capturing semantic relations suitable for care episode-level information retrieval.
    Results indicate that such models perform well in clinical information retrieval. It is shown that a method called Random Indexing can be modified to construct distributional semantic models that capture multiple sense-vectors for each word in the training corpus. This is done in a way that retains the original training properties of the Random Indexing method, by being incremental, scalable and distributional. Distributional semantic models trained with a framework called Word2vec, which relies on the use of neural networks, outperform those trained using the classic Random Indexing method in several semantic similarity assessment tasks, when training is done using comparable parameters and the same training corpora. Finally, several statistical features in clinical text are explored in terms of their ability to indicate sentence significance in a text summary generated from the clinical notes. This includes the use of distributional semantics to enable case-based similarity assessment, where cases are other care episodes and their “solutions”, i.e., discharge summaries. A manual evaluation is performed, in which human experts rate different aspects of the summaries using an evaluation scheme/tool. In addition, the original clinician-written discharge summaries are explored as a gold standard for the purpose of automated evaluation. The evaluation shows a high correlation between manual and automated evaluation, suggesting that such a gold standard can function as a proxy for human evaluations. This thesis has been published jointly with the Norwegian University of Science and Technology, Norway, and the University of Turku, Finland.
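
    As a small, self-contained illustration of the kind of distributional model discussed above, the following Python sketch implements a toy Random Indexing pipeline: sparse ternary index vectors, context vectors accumulated over a sliding window, and sentence similarity as cosine similarity of averaged context vectors. The corpus, window size and dimensionality are invented for the example and are not the clinical corpora or parameters used in the thesis.

    import numpy as np

    rng = np.random.default_rng(42)
    DIM, NONZERO, WINDOW = 512, 10, 2

    def index_vector():
        # Sparse ternary "index vector": a handful of randomly placed +1/-1 entries.
        v = np.zeros(DIM)
        pos = rng.choice(DIM, size=NONZERO, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
        return v

    corpus = [
        "patient admitted with chest pain and shortness of breath".split(),
        "patient discharged after treatment for chest pain".split(),
        "mri of the knee showed a small meniscus tear".split(),
    ]

    index, context = {}, {}
    for sent in corpus:
        for w in sent:                        # assign vectors on first occurrence
            index.setdefault(w, index_vector())
            context.setdefault(w, np.zeros(DIM))
        for i, w in enumerate(sent):          # accumulate neighbours' index vectors
            for j in range(max(0, i - WINDOW), min(len(sent), i + WINDOW + 1)):
                if j != i:
                    context[w] += index[sent[j]]

    def sentence_vector(words):
        return np.mean([context[w] for w in words if w in context], axis=0)

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    s0, s1, s2 = (sentence_vector(s) for s in corpus)
    print(cosine(s0, s1), cosine(s0, s2))     # the two chest-pain notes should be closer

    A sentence-significance score for summarization could then be built on such similarities, for example by comparing candidate sentences against notes from similar past care episodes and their discharge summaries, although the thesis's actual feature set and evaluation are considerably richer.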