10 research outputs found

    An automated reasoning framework for translational research

    In this paper we propose a novel approach to the design and implementation of knowledge-based decision support systems for translational research, specifically tailored to the analysis and interpretation of data from high-throughput experiments. Our approach is based on a general epistemological model of the scientific discovery process that provides a well-founded framework for integrating experimental data with preexisting knowledge and with automated inference tools. In order to demonstrate the usefulness and power of the proposed framework, we present its application to Genome-Wide Association Studies, and we use it to reproduce a portion of the initial analysis performed on the well-known WTCCC dataset. Finally, we describe a computational system we are developing, aimed at assisting translational research. The system, based on the proposed model, will be able to automatically plan and perform knowledge discovery steps, to keep track of the inferences performed, and to explain the obtained results.
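
    The abstract's emphasis on tracking inferences and explaining results suggests a provenance-recording pattern. The sketch below is a minimal, hypothetical illustration of that idea (the class, facts, and thresholds are invented, not the authors' system): each derived fact stores the premises that produced it, so any conclusion can be unwound into an explanation.

```python
# Minimal provenance-tracking sketch (hypothetical names throughout):
# each derived fact records its supporting premises so the chain of
# inference behind any conclusion can be replayed as an explanation.

class InferenceLog:
    def __init__(self):
        self.facts = {}  # fact -> list of supporting premises

    def assert_fact(self, fact, premises):
        self.facts[fact] = list(premises)

    def explain(self, fact, indent=0):
        """Recursively print the evidence chain behind a fact."""
        print(" " * indent + fact)
        for premise in self.facts.get(fact, []):
            if premise in self.facts:
                self.explain(premise, indent + 2)
            else:
                print(" " * (indent + 2) + premise + " (primary evidence)")

log = InferenceLog()
log.assert_fact("SNP rs123 is associated with the trait",
                ["GWAS association p-value below 5e-8"])
log.assert_fact("Gene G is a candidate gene",
                ["SNP rs123 is associated with the trait",
                 "rs123 maps within gene G"])
log.explain("Gene G is a candidate gene")
```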

    AlexSys: a knowledge-based expert system for multiple sequence alignment construction and analysis

    Multiple sequence alignment (MSA) is a cornerstone of modern molecular biology and represents a unique means of investigating the patterns of conservation and diversity in complex biological systems. Many different algorithms have been developed to construct MSAs, but previous studies have shown that no single aligner consistently outperforms the rest. This has led to the development of a number of ‘meta-methods’ that systematically run several aligners and merge the output into one single solution. Although these methods generally produce more accurate alignments, they are inefficient because all the aligners need to be run first and the choice of the best solution is made a posteriori. Here, we describe the development of a new expert system, AlexSys, for the multiple alignment of protein sequences. AlexSys incorporates an intelligent inference engine to automatically select an appropriate aligner a priori, depending only on the nature of the input sequences. The inference engine was trained on a large set of reference multiple alignments, using a novel machine learning approach. Applying AlexSys to a test set of 178 alignments, we show that the expert system represents a good compromise between alignment quality and running time, making it suitable for high throughput projects. AlexSys is freely available from http://alnitak.u-strasbg.fr/~aniba/alexsys.
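
    The a priori selection step amounts to mapping cheap features of the unaligned input to an aligner choice. The sketch below illustrates that pattern; the features, thresholds, and aligner names are invented stand-ins for the model AlexSys actually learned from reference alignments.

```python
# Hypothetical sketch of a-priori aligner selection: derive inexpensive
# features from the unaligned sequences, then apply decision rules that
# stand in for AlexSys's trained inference engine.

def sequence_features(seqs):
    lengths = [len(s) for s in seqs]
    return {
        "n_seqs": len(seqs),
        "mean_len": sum(lengths) / len(lengths),
        "len_spread": max(lengths) - min(lengths),
    }

def choose_aligner(f):
    # Illustrative thresholds; the real system learns these from benchmarks.
    if f["n_seqs"] > 500:
        return "fast-heuristic-aligner"
    if f["len_spread"] > 0.5 * f["mean_len"]:
        return "local-alignment-aware-aligner"
    return "accuracy-oriented-aligner"

seqs = ["MKTAYIAKQR", "MKTAYIAKQRQISFVKSHFSRQ", "MKTAYI"]
print(choose_aligner(sequence_features(seqs)))
```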

    Digital Forensics Event Graph Reconstruction

    Ontological data representation and data normalization can provide a structured way to correlate digital artifacts. This can reduce the amount of data that a forensics examiner needs to process in order to understand the sequence of events that happened on the system. However, ontology processing suffers from large disk consumption and a high computational cost. This paper presents Property Graph Event Reconstruction (PGER), a novel data normalization and event correlation system that leverages a native graph database to improve the speed of queries common in ontological data. PGER reduces the processing time of event correlation grammars while maintaining accuracy comparable to a relational database storage format.
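
    As a rough illustration of the property-graph idea, events become nodes with typed properties and correlations become edges, so reconstructing a chain of events is a graph traversal rather than a join-heavy relational query. The in-memory sketch below is hypothetical (PGER itself targets a native graph database); the node labels and relationship names are invented.

```python
# Hypothetical in-memory property graph: forensic events as nodes,
# correlations as typed edges; event reconstruction is a traversal.

nodes = {
    "e1": {"label": "FileWrite",    "path": "C:/tmp/payload.exe", "t": 100},
    "e2": {"label": "ProcessStart", "image": "C:/tmp/payload.exe", "t": 105},
    "e3": {"label": "NetConnect",   "dest": "203.0.113.7",         "t": 110},
}
edges = [("e1", "PRECEDES", "e2"), ("e2", "OPENED", "e3")]

def event_chain(start):
    """Follow outgoing edges from a node to reconstruct an event sequence."""
    chain, frontier = [start], [start]
    while frontier:
        src = frontier.pop()
        for s, _rel, dst in edges:
            if s == src:
                chain.append(dst)
                frontier.append(dst)
    return chain

print([nodes[e]["label"] for e in event_chain("e1")])
# -> ['FileWrite', 'ProcessStart', 'NetConnect']
```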

    Improving the Knowledge-Based Expert System Lifecycle

    Knowledge-based expert systems are used to enhance and automate manual processes through the use of a knowledge base and modern computing power. The traditional methodology for creating knowledge-based expert systems has many commonly encountered issues that can prevent successful implementations. Complications during the knowledge acquisition phase can prevent a knowledge-based expert system from functioning properly. Furthermore, the time and resources required to maintain a knowledge-based expert system once implemented can become problematic. Several concepts can be integrated into a proposed methodology that improves the knowledge-based expert system lifecycle and makes it more efficient. These methods are commonly used in other disciplines but have not traditionally been incorporated into the knowledge-based expert system lifecycle. A container-loading knowledge-based expert system was created to test the concepts in the proposed methodology. The results from the container-loading expert system test were compared against the historical records of thirteen container ships loaded between 2008 and 2011.
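
    For readers unfamiliar with the genre, the sketch below shows the general shape of a rule in a container-loading expert system: declarative constraints checked against a proposed placement. The rules, fields, and limits are invented for illustration and are not taken from the thesis.

```python
# Illustrative (invented) rules of the kind a container-loading
# knowledge-based expert system encodes in its knowledge base.

RULES = [
    ("weight_limit",   lambda c, s: c["weight_t"] <= s["max_weight_t"]),
    ("reefer_power",   lambda c, s: not c["reefer"] or s["has_power"]),
    ("hazmat_on_deck", lambda c, s: not c["hazmat"] or s["on_deck"]),
]

def check_placement(container, slot):
    """Return the names of all rules a proposed placement violates."""
    return [name for name, ok in RULES if not ok(container, slot)]

container = {"weight_t": 28, "reefer": True, "hazmat": False}
slot = {"max_weight_t": 30, "has_power": False, "on_deck": True}
print(check_placement(container, slot))  # -> ['reefer_power']
```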

    Knowledge-based expert systems and a proof-of-concept case study for multiple sequence alignment construction and analysis

    The traditional approach to bioinformatics analyses relies on independent task-specific services and applications, using different input and output formats, often idiosyncratic, and frequently not designed to interoperate. In general, such analyses were performed by experts who manually verified the results obtained at each step in the process. Today, the amount of bioinformatics information continuously being produced means that handling the various applications used to study this information presents a major data management and analysis challenge to researchers. It is now impossible to manually analyse all this information, and new approaches are needed that are capable of processing the large-scale heterogeneous data in order to extract the pertinent information. We review the recent use of integrated expert systems aimed at providing more efficient knowledge extraction for bioinformatics research. A general methodology for building knowledge-based expert systems is described, focusing on the Unstructured Information Management Architecture (UIMA), which provides facilities for both data and process management. A case study involving a multiple alignment expert system prototype called AlexSys is also presented.
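
    UIMA's core pattern is a pipeline of independent analysis engines that each read from and write annotations into a shared analysis structure (the CAS). UIMA itself is a Java framework, so the Python fragment below is only a conceptual stand-in for that pattern, with invented annotator names.

```python
# Conceptual stand-in for the UIMA pattern: analysis engines share one
# analysis structure (cf. UIMA's CAS) and each adds its own annotations.

class CAS:
    def __init__(self, document):
        self.document = document
        self.annotations = []  # (type, covered_text) pairs

def tokenizer(cas):
    cas.annotations += [("token", w) for w in cas.document.split()]

def gene_tagger(cas):
    # Invented domain annotator: tags tokens found in a tiny gene lexicon.
    lexicon = {"BRCA1", "TP53"}
    cas.annotations += [("gene", w) for t, w in cas.annotations
                        if t == "token" and w in lexicon]

pipeline = [tokenizer, gene_tagger]
cas = CAS("TP53 mutations are common in many tumours")
for engine in pipeline:
    engine(cas)
print([a for a in cas.annotations if a[0] == "gene"])  # -> [('gene', 'TP53')]
```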

    Digging into Algorithms: Legal Ethics and Legal Access

    The current discussions around algorithms, legal ethics, and expanding legal access through technological tools gravitate around two themes: (1) protection of the integrity of the legal profession and (2) a desire to ensure greater access to legal services. The hype cycle often pits the desire to protect the integrity of the legal profession against the ability to use algorithms to provide greater access to legal services, as though they are mutually exclusive. In reality, the arguments around protecting the profession from the threats posed by algorithms represent an over-fit in relation to what algorithms can actually achieve, while the visions of employing algorithms for access to justice initiatives represent an under-fit in relation to what algorithms could provide. A lack of precision about algorithms results in blunt protections of professional integrity, leaving little room for the potential benefits of algorithmic tools. In other words, this incongruence persists because of imprecise understandings and unrealistic characterizations of the algorithmic technologies and how they fit within the broader technology of law itself. This Article provides an initial set of tools for empowering lawyers with a better understanding of, and critical engagement with, algorithms. With the goal of encouraging a more nuanced discussion around the ethical dimensions of using algorithms in legal technology—a discussion that better fits technological reality—the Article argues for lawyers and non-technologists to shift away from evaluating legal technology through a lens of mere algorithms—as though they can be evaluated outside of a specific context—to a focus on understanding algorithmic systems as technology created, manipulated, and used in a particular context. To make this argument, this Article first reviews the current use of algorithms in legal settings, both criminal and civil, reviewing the related literature and regulatory responses. This Article then uses the shortcomings of legal technology lamented by the current literature and the related regulatory responses to demonstrate the importance of shifting our collective paradigm from a consideration of law and algorithms to law and algorithmic systems. Finally, this Article offers a framework for use in assessing algorithmic systems and applies the framework to algorithmic systems employed in the legal context to demonstrate its usefulness in accurately separating true tensions from those that merely reverberate through the hype cycle. In using the framework to reveal areas at the intersection of law and algorithms truly most ripe for progress, this Article concludes with a call to action for more careful design of both legal systems and algorithmic ones.

    Research into Early-Stage Identification of Entrepreneurs and Innovators with Development of an Identification Guidance Framework

    In recent years, young entrepreneurs have attracted considerable interest from Government policy makers and the media, and the evidence and general consensus of opinion is that the number of young people aspiring to start their own businesses is increasing. Courses are being created for further and higher education, as well as modules developed at schools, to introduce young people to business studies; however, not everyone is suited to the courses, and those who undertake them may never go on to realise their aspirations of having a successful business. Entrepreneurs and innovators are crucial for society in order to develop new businesses, products and services, thus creating job prospects and wealth for the country and society as a whole. Many entrepreneurs don't become entrepreneurs until later in life, or their skills and attributes never materialise and lie dormant due to factors such as financial insecurity, or a lack of confidence or guidance as to how they can control their destiny. It is believed that entrepreneurship can, to a certain extent, be taught, but only successfully to individuals who have entrepreneurial traits and who have been identified as being entrepreneurial. Previous studies have mainly focused on existing entrepreneurs and those with failed businesses. By contrast, this thesis seeks to identify the traits and characteristics that make individuals entrepreneurs, with a view to devising a framework of identifiable indicators for the tertiary education age group of 16-18 years old, leading potentially to early-stage identification of entrepreneurs. Building on the validated identification framework, online software tools have been developed as an age-appropriate user interface suitable for providing the necessary identification of entrepreneurs in the 16-18 age group. This study provides a further opportunity for additional research into the development of entrepreneur mentoring and training guidelines that can assist in preparing potential entrepreneurs for their future by giving them tutorial programmes to develop their businesses successfully. This research programme seeks to establish a paradigm as to what it is that makes someone entrepreneurial, primarily focussed on positively identifying traits exhibited by existing entrepreneurs which can be used to assist in that identification process. As part of the research work completed so far, these traits have now been identified and expand upon the limited number of traits previously recognised as being entrepreneurial. A new derivation of an entrepreneurial trait has been created, known as an ‘Entrephonotypic Trait’: a grouping of specifically recognised traits found to be common to entrepreneurs. In addition to the research, new technologies such as Artificial Intelligence, Virtual Reality and Augmented Reality have been evaluated for integration and use in training programmes to assist the development of young entrepreneurs who have the potential to become successful in their business ventures. This thesis makes a significant contribution to knowledge which can be further expanded upon and utilised in future studies and research.

    MASSA: Multi-agent system to support functional annotation

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended 23-11-2015. Predicting the biological function of Deoxyribonucleic Acid (DNA) sequences is one of the many challenges faced by Bioinformatics. This task is called functional annotation, and it is a complex, labor-intensive, and time-consuming process. The annotation has to be as accurate and reliable as possible given its impact on further research and annotations. In order to guarantee a high-quality outcome, each sequence should be manually studied and annotated by an expert. Although desirable, manual annotation is only feasible for small datasets or reference genomes. As the volume of genomic data has been increasing, especially after the advent of Next Generation Sequencing techniques, automatic implementations of this process are a necessity. Automatic annotation can handle a huge amount of data and produce consistent analyses. Besides, it is faster and less expensive than the manual approach. However, its outcome is less precise than manually produced annotation and often has to be curated by an expert. Although collaborative processes of community annotation could address this expert bottleneck in automatic annotation, these efforts have so far failed. Moreover, the annotation problem, as many others in this domain, has to deal with heterogeneous information that is distributed and constantly evolving. A possible way to overcome these hurdles is with a shift in the focus of the process from individual experts to communities, and with a design of tools that facilitates the management of knowledge and resources. This work follows this approach, proposing MASSA, an architecture for a Multi-Agent System (MAS) to Support functional Annotation...
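
    To make the multi-agent architecture concrete, the sketch below shows the general pattern of specialist agents contributing annotation evidence that a coordinator merges; the agent names, evidence fields, and scores are invented for illustration and are not MASSA's actual design.

```python
# Invented illustration of the multi-agent annotation pattern: specialist
# agents each return evidence for a sequence; a coordinator merges it.

def similarity_agent(seq_id):
    return {"source": "similarity-search", "function": "kinase", "score": 0.8}

def domain_agent(seq_id):
    return {"source": "domain-scan", "function": "kinase", "score": 0.6}

def coordinator(seq_id, agents):
    """Collect evidence from every agent and keep the best-supported call."""
    evidence = [agent(seq_id) for agent in agents]
    best = max(evidence, key=lambda e: e["score"])
    return {"seq": seq_id, "annotation": best["function"], "evidence": evidence}

print(coordinator("seq_001", [similarity_agent, domain_agent]))
```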

    Development of an electronic treatment decision aid for Parkinson's disease using multi-criteria decision analysis

    Clinicians constantly weigh the relative importance of multiple attributes when they make decisions about how to treat patients. The literature shows that this is generally done in a relatively informal manner using intuition rather than evidence-based medicine. Decision analysis methods and computer decision support systems (CDSS) have been developed to help implement evidence-based medicine and to aid clinicians in their decision making. Multi-criteria decision analysis (MCDA) is a methodology used to break complex problems into manageable pieces, allow data and judgement to bear on them, and then reassemble them to present an overall picture of the problem. The aim of the study was to use MCDA to develop a model to aid practitioners in choosing the most effective drug treatments for Parkinson's disease (PD). A CDSS was developed from this model. Two surveys were sent to 304 neurologists, 88 geriatricians, and Parkinson's disease nurse specialists across the UK to determine the criteria for the model. The seven steps of developing an MCDA model were carried out. A value tree was created from the criteria established from the surveys. The drugs were scored for their performance against the criteria using data from clinical trials, and the weights were determined by the clinician for each individual patient. Software was developed using Excel and Visual Basic for Applications (VBA) to implement the functions of the model. A sensitivity analysis was carried out to determine whether the model was suitable for use with individual PD patients and whether the software was quick and easy to use. A total of 68 criteria were generated from the surveys, which were reduced to 11. This showed that clinicians were perhaps using personal experience more than evidence-based medicine. Scoring the data on the drugs showed that some drugs performed either better or worse than expected. The weights were phrased so that users could use swing-weighting to weight the criteria for their importance to each patient. The combined scores and weights were calculated by Excel and the result returned on screen to the user by VBA. An expert panel carried out the sensitivity analysis and showed that there were some issues with the scores developed, such as potential bias from the trials data, and that not all the expected criteria were included in the model; for example, bradykinesia and tremor were not included. However, the expert panel felt that the software was quick and easy to use, and overall the principle of the model was approved, subject to some modifications. Therefore, a model was successfully developed for Parkinson's disease using MCDA and a CDSS developed to implement the model's functions. The model needs further refinement but has the potential to be successfully used in a clinical setting. MCDA could additionally be used to develop models for other diseases.
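
    The underlying additive MCDA model is simple to state: each drug's overall value is the sum of its criterion scores multiplied by normalised swing weights. The worked sketch below uses invented criteria, scores, and weights purely to show the arithmetic; the study's actual 11 criteria and clinical-trial scores are not reproduced here.

```python
# Worked example of an additive MCDA value model with swing weights.
# All criteria, scores, and weights below are invented for illustration.

raw_swing = {"motor benefit": 100, "dyskinesia risk": 60, "ease of use": 40}
total = sum(raw_swing.values())
weights = {c: w / total for c, w in raw_swing.items()}  # normalise to sum 1

scores = {  # 0-100 performance of each hypothetical drug on each criterion
    "drug A": {"motor benefit": 90, "dyskinesia risk": 40, "ease of use": 70},
    "drug B": {"motor benefit": 70, "dyskinesia risk": 80, "ease of use": 80},
}

for drug, s in scores.items():
    value = sum(weights[c] * s[c] for c in weights)
    print(f"{drug}: overall value {value:.1f}")  # A: 71.0, B: 75.0
```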