
    miniDVMS v1.8: A user manual (the data visualization and modeling system)

    Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. miniDVMS v1.8 provides a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain with visual techniques developed in the information visualisation domain. The advantage of this interface is that the user is directly involved in the data mining process. Principled projection methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), are integrated with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, and user interaction facilities, to provide this integrated visual data mining framework. The software also supports conventional visualisation techniques such as principal component analysis (PCA), Neuroscale, and PhiVis. This user manual gives an overview of the purpose of the software tool, highlights some of the issues to be taken care of while creating a new model, and provides information about how to install and use the tool. The user manual does not require the readers to have familiarity with the algorithms it implements; basic computing skills are enough to operate the software.
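    Among the techniques listed, PCA is the most widely known projection method. As a minimal illustration of what such a linear projection does (this is a standard textbook PCA sketch in NumPy, not code from miniDVMS itself):

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project data onto its top principal components.

    A generic PCA illustration of the projection step; miniDVMS
    may implement this differently internally.
    """
    # Centre the data so the covariance is computed about the mean.
    Xc = X - X.mean(axis=0)
    # Eigen-decomposition of the covariance matrix.
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; take the largest ones.
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = pca_project(X)  # 100 points in 2-D, ready for a scatter plot
```

    Nonlinear methods such as GTM play the same role as the projection above but can capture curved structure that a linear map misses.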

    Software system safety

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

    Bridging the gap: building better tools for game development

    The following thesis questions how we design game-making tools and how developers may build tools that are easier to use. It highlights the inadequacies of current game-making programs and introduces Goal-Oriented Design as a possible solution. It also examines the processes of digital product development, reflecting on the necessity for design and development methods to work cohesively for meaningful results. Interaction Design is in essence the abstracting of key relations that matter to the contextual environment. The result of attempting to tie Interaction Design principles and Game Design issues together with Software Development practices has led to the production of the user-centred game engine, PlayBoard.

    Rodin: an open toolset for modelling and reasoning in Event-B

    Event-B is a formal method for system-level modelling and analysis. Key features of Event-B are the use of set theory as a modelling notation, the use of refinement to represent systems at different abstraction levels, and the use of mathematical proof to verify consistency between refinement levels. In this article we present the Rodin modelling tool, which seamlessly integrates modelling and proving. We outline how the Event-B language was designed to facilitate proof and how the tool has been designed to support changes to models while minimising the impact of changes on existing proofs. We outline the important features of the prover architecture and explain how well-definedness is treated. The tool is extensible and configurable so that it can be adapted more easily to different application domains and development methods.

    Decision support systems for solving discrete multicriteria decision making problems

    The aim of this study was the design and implementation of an interactive decision support system, assisting a single decision maker in reaching a satisfactory decision when faced with a multicriteria decision making problem. There are clearly two components involved in designing such a system, namely the concept of decision support systems (DSS) and the area of multicriteria decision making (MCDM). The multicriteria decision making environment, as well as the definitions of the multicriteria decision making concepts used, are discussed in chapter 1. Chapter 2 gives a brief historical review of MCDM, highlighting the origins of some of the more well-known methods for solving MCDM problems. A detailed discussion of interactive decision making is also given. Chapter 3 is concerned with the DSS concept, including a historical review thereof, a framework for the design of a DSS, various development approaches, as well as the components constituting a decision support system. In chapter 4, the possibility of integrating the two concepts, MCDM and DSS, is discussed. A detailed discussion of various methodologies for solving MCDM problems is given in chapter 5. Specific attention is given to identifying the methodologies to be implemented in the DSS. Chapter 6 can be seen as a theoretical description of the system developed, while chapter 7 is concerned with the evaluation procedures used for testing the system. A final summary and concluding remarks are given in chapter 8.
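    To make the discrete MCDM setting concrete, the weighted-sum model is one of the simplest classical methods for ranking a finite set of alternatives. The sketch below is purely illustrative (the abstract does not say which methods the thesis implements, and the example data is hypothetical):

```python
def weighted_sum(alternatives, weights):
    """Rank discrete alternatives by a weighted sum of normalised scores.

    Illustrative only: the weighted-sum model is a classic discrete MCDM
    method, not necessarily one of those implemented in this thesis.
    alternatives: dict name -> list of criterion scores (higher is better).
    weights: list of criterion weights summing to 1.
    """
    n_criteria = len(weights)
    # Collect each criterion's scores so they can be min-max normalised,
    # making differently scaled criteria comparable.
    cols = [[scores[j] for scores in alternatives.values()]
            for j in range(n_criteria)]
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]

    def norm(v, j):
        return (v - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 0.0

    ranked = {
        name: sum(w * norm(s, j)
                  for j, (w, s) in enumerate(zip(weights, scores)))
        for name, scores in alternatives.items()
    }
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: three options scored on three criteria.
options = {"A": [8, 6, 9], "B": [5, 9, 7], "C": [9, 5, 6]}
ranking = weighted_sum(options, [0.5, 0.2, 0.3])  # best alternative first
```

    An interactive DSS of the kind the thesis describes would elicit the weights from the decision maker and let them be revised iteratively.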

    Usability of hypertext : factors affecting the construction of meaning

    One type of hypertext application, information retrieval, has become increasingly popular and accessible due to the explosion of activity occurring on the World Wide Web. These hypertext documents are referred to as web sites. Readers can now access a multitude of web sites and retrieve a wide variety of information. The uniqueness of a hypertext document centers around the concept that text is broken into an array of non-sequential text chunks, or nodes, which are connected through links. Hypertext reading can be considered an interactive experience requiring the reader to effectively navigate the document. The potentially complex link and node structure awaiting hypertext readers can lead them into becoming lost in hyperspace. Usable hypertext design will maximize document coherence and minimize readers' cognitive overhead, allowing readers to create an accurate mental model of the hypertext structure. Usability testing is designed to determine how easily the functionality of a particular system can be used. In this case, the system under investigation is New Jersey Institute of Technology's web site. The usability of a hypertext document is affected by design elements which contribute to the content and structure of the hypertext. These design elements include good navigation aids, clear link labels, and consistent page layout.

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    Processamento automático de texto de narrativas clínicas

    The informatization of medical systems and the subsequent move towards the usage of Electronic Health Records (EHR) over the paper format by medical professionals allowed for safer and more efficient healthcare. Additionally, EHR can also be used as a data source for observational studies around the world. However, it is estimated that 70-80% of all clinical data is in the form of unstructured free text, and the data that is structured does not all follow the same standards, making it difficult to use in the mentioned observational studies. This dissertation aims to tackle those two adversities using natural language processing for the task of extracting concepts from free text and, afterwards, use a common data model to harmonize the data. The developed system employs an annotator, namely cTAKES, to extract the concepts from free text. The extracted concepts are then normalized using text preprocessing, word embeddings, MetaMap and UMLS Metathesaurus lookup. Finally, the normalized concepts are converted to the OMOP Common Data Model and stored in a database. In order to test the developed system, the i2b2 2010 data set was used. The different components of the system were tested and evaluated separately, with the concept extraction component achieving a precision, recall and F-score of 77.12%, 70.29% and 73.55%, respectively. The normalization component was evaluated by completing the N2C2 2019 challenge track 3, where it achieved a 77.5% accuracy. Finally, during the OMOP CDM conversion, it was observed that 7.92% of the concepts were lost during the process.
In conclusion, even though the developed system still has room for improvement, it proves to be a viable method of automatically processing clinical narratives.
Master's in Computer and Telematics Engineering
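    The normalization-and-conversion pipeline described above (free-text mention → UMLS concept → OMOP concept) can be sketched in miniature. The real system relies on cTAKES, MetaMap and the full UMLS Metathesaurus; here a tiny in-memory dictionary stands in for the terminology lookup, and the CUIs, OMOP concept IDs and field names are reduced to illustrative essentials:

```python
def preprocess(term):
    """Basic text preprocessing: lowercase and collapse whitespace."""
    return " ".join(term.lower().split())

# Hypothetical fragment of a UMLS-like lookup: surface form -> CUI.
# In the real system this role is played by MetaMap and a UMLS
# Metathesaurus lookup over preprocessed text and word embeddings.
LEXICON = {
    "myocardial infarction": "C0027051",
    "heart attack": "C0027051",  # synonym normalised to the same CUI
    "hypertension": "C0020538",
}

# Hypothetical CUI -> OMOP concept_id mapping (stand-in for the CDM
# vocabulary tables).
CUI_TO_OMOP = {"C0027051": 4329847, "C0020538": 316866}

def normalise(raw_mentions):
    """Map free-text mentions to OMOP-style rows; count unmapped losses."""
    rows, lost = [], 0
    for mention in raw_mentions:
        cui = LEXICON.get(preprocess(mention))
        concept_id = CUI_TO_OMOP.get(cui)
        if concept_id is None:
            lost += 1  # concepts with no mapping are dropped, as in the
                       # 7.92% loss the dissertation reports for this step
        else:
            rows.append({"source_text": mention, "cui": cui,
                         "concept_id": concept_id})
    return rows, lost

rows, lost = normalise(["Heart Attack", "hypertension", "unknown finding"])
```

    The point of the sketch is the shape of the pipeline: every mention either lands in a harmonized row keyed by a standard concept ID or is counted as a conversion loss.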

    PyGrapherConnect

    In the evolving landscape of backend computational systems, especially in biomedical research involving heavy data operations, powerful backends often go underused because there is no communication standard between the frontend and the backend. This gap is a problem for researchers who need a frontend for visualizing and manipulating their data but also want to perform complex analysis. CAPRI, a Python-based backend system specializing in analyzing Evidential Reasoning data, has the same issue. This project offers a solution: the PyGrapherConnect module, a data conversion layer between CAPRI and PyGrapher, its frontend interface. It translates graph data generated by PyGrapher into a txt format readable by CAPRI for further analytical processing, making it easier for researchers working with Evidential Reasoning (ER) models, based on the Dempster-Shafer theory, to perform the final belief assessment. Acting as a bridge between frontend and backend, PyGrapherConnect lets researchers represent and manipulate these ER models as graph structures and make intricate analytical deductions.
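    A conversion layer of this kind is essentially a serializer. The abstract does not specify CAPRI's actual txt format, so the one-record-per-line layout below, and the ER example data, are purely hypothetical illustrations of what a bridge module like PyGrapherConnect does:

```python
def graph_to_txt(nodes, edges):
    """Serialise a frontend graph into a line-based text format.

    Hypothetical format (NODE/EDGE records, tab-separated); the real
    CAPRI-readable format may differ.
    nodes: dict node_id -> dict of attributes (e.g. belief masses)
    edges: list of (source_id, target_id) tuples
    """
    lines = []
    # Sort for a deterministic output, which simplifies diffing and tests.
    for node_id, attrs in sorted(nodes.items()):
        attr_str = ";".join(f"{k}={v}" for k, v in sorted(attrs.items()))
        lines.append(f"NODE\t{node_id}\t{attr_str}")
    for src, dst in edges:
        lines.append(f"EDGE\t{src}\t{dst}")
    return "\n".join(lines)

# Hypothetical ER fragment: two evidence nodes feeding one hypothesis.
nodes = {"e1": {"mass": 0.6}, "e2": {"mass": 0.3}, "h": {}}
edges = [("e1", "h"), ("e2", "h")]
txt = graph_to_txt(nodes, edges)
```

    The backend then only has to parse a flat, line-oriented file rather than understand the frontend's internal graph objects, which is what makes the two sides independent.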