
    The Proficiency of Experts

    Expert evidence plays a crucial role in civil and criminal litigation. Changes in the rules concerning expert admissibility, following the Supreme Court's Daubert ruling, strengthened judicial review of the reliability and the validity of an expert's methods. Judges and scholars, however, have neglected the threshold question for expert evidence: whether a person should be qualified as an expert in the first place. Judges traditionally focus on credentials or experience when qualifying experts without regard to whether those criteria are good proxies for true expertise. We argue that credentials and experience are often poor proxies for proficiency. Qualification of an expert presumes that the witness can perform in a particular domain with a proficiency that non-experts cannot achieve, yet many experts cannot provide empirical evidence that they do in fact perform at high levels of proficiency. To demonstrate the importance of proficiency data, we collect and analyze two decades of proficiency testing of latent fingerprint examiners. In this important domain, we found surprisingly high rates of false positive identifications for the period 1995 to 2016. These data would qualify the claims of many fingerprint examiners regarding their near infallibility, but unfortunately, judges do not seek out such information. We survey the federal and state case law and show how judges typically accept expert credentials as a proxy for proficiency in lieu of direct proof of proficiency. Indeed, judges often reject parties' attempts to obtain and introduce at trial empirical data on an expert's actual proficiency. We argue that any expert who purports to give falsifiable opinions can be subjected to proficiency testing and that proficiency testing is the only objective means of assessing the accuracy and reliability of experts who rely on subjective judgments to formulate their opinions (so-called "black-box" experts). Judges should use proficiency data to make expert qualification decisions when the data is available, should demand proof of proficiency before qualifying black-box experts, and should admit at trial proficiency data for any qualified expert. We seek to revitalize the standard for qualifying experts: expertise should equal proficiency.
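
    The false positive rates discussed above reduce to simple proportions over proficiency test records. The Python sketch below, using invented placeholder records rather than the article's 1995 to 2016 data, shows how such a rate can be computed:

        # Each record is one comparison from a hypothetical proficiency test:
        # whether the examiner reported an identification, and whether the
        # pair was in fact a true match (ground truth known to the test maker).
        records = [
            {"reported_id": True,  "true_match": True},
            {"reported_id": True,  "true_match": False},   # a false positive
            {"reported_id": False, "true_match": False},
            {"reported_id": False, "true_match": True},
        ]

        false_positives = sum(r["reported_id"] and not r["true_match"] for r in records)
        non_matches = sum(not r["true_match"] for r in records)
        fp_rate = false_positives / non_matches if non_matches else 0.0
        print(f"False positive rate: {fp_rate:.1%}")   # 50.0% for these toy records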

    Application of machine learning to predict quality of Portuguese wine based on sensory preferences

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Technology has been broadly used in the wine industry, from improving cultivation processes to understanding the market through the analysis of customers' preferences, and numerous companies are using machine learning solutions to strengthen their business. The sensory properties of wines are a significant element in determining wine quality; combined with the accuracy attained by classification methods, predictive models could help winemakers improve their products and outcomes. This research proposes a supervised machine learning approach to predict the quality of Portuguese wines based on sensory characteristics reported by consumers, such as acidity, intensity, sweetness, and tannin. The study includes both red and white wines and implements and compares the effectiveness of three classification algorithms. The conclusions promote understanding of how sensory data provided by consumers can determine wine quality and of which sensory characteristics contribute to customers' perception of quality.
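
    The abstract does not name the three classifiers, so the following Python sketch is only illustrative: it trains three common classification algorithms on synthetic sensory data (acidity, intensity, sweetness, tannin) and compares their accuracy, assuming scikit-learn is available.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        # Synthetic placeholder data: acidity, intensity, sweetness, tannin on a 0-5 scale.
        X = rng.uniform(0, 5, size=(500, 4))
        # Placeholder binary quality label (0 = ordinary, 1 = good); not the thesis data.
        y = (X[:, 1] + X[:, 3] + rng.normal(0, 0.5, size=500) > 5).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "decision tree": DecisionTreeClassifier(random_state=0),
            "random forest": RandomForestClassifier(random_state=0),
        }
        for name, model in models.items():
            model.fit(X_train, y_train)
            print(name, round(accuracy_score(y_test, model.predict(X_test)), 3))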

    Domain-independent method for developing an integrated engineering design tool

    Engineering design is a complex, cognitive process requiring extensive knowledge and experience to be done effectively. Successful design depends on appropriate use of available resources, and competitive design cycles mandate convenient and reliable access to engineering tools and information. An integrated engineering design tool (IEDT) has been developed in response to these demands. Further, the tool development efforts have been made systematic by utilizing the engineering design process, which is shown to be a cognitive activity based on Bloom's taxonomy of cognition. The engineering design process consists of six tasks: establishment of objectives, development of requirements, function analysis, creation of design alternatives, evaluation, and improvements to the design. These tasks are shown to map to the six levels of Bloom's cognitive taxonomy: knowledge, comprehension, application, analysis, synthesis, and evaluation. Once engineering design is shown to be a cognitive process, it can be employed to make each of the activities required to develop an IEDT (domain investigation, knowledge acquisition, and IEDT design) systematic. Past research has considered these to be largely ad hoc tasks. Application of the engineering design process to each of the three IEDT development tasks is discussed in general terms. A prototype IEDT for the preliminary design of jet transport aircraft wings, created using this systematic engineering design approach, is used to demonstrate the implementation of the method. The IEDT is embedded in Microsoft Excel 97 with links to other software and executable code. Examples of different implementation strategies are provided. Several wing weight prediction models are included, and the incorporation of depth knowledge is done using fuzzy logic. The IEDT is linked to relevant files containing design documentation, parameter information, graphics, drawings, and historical data. The designer has access to trade-off study information and sensitivity analysis and can choose to perform structural analysis or design optimization. The engineer can also consider design issues such as cost analysis. The modular IEDT has been designed to be easily adaptable by design domain experts so that it may continue to be updated and expanded.
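
    As a rough illustration of the task-to-taxonomy mapping described above, the Python sketch below pairs the six design tasks with the six Bloom levels in the order they are listed; the exact correspondence used in the dissertation may differ, and the fuzzy membership function and wing-weight numbers are invented for illustration only.

        # Pairing the six design tasks with Bloom's six cognitive levels in listed
        # order; the dissertation's actual mapping may pair them differently.
        design_tasks = ["establish objectives", "develop requirements", "function analysis",
                        "create design alternatives", "evaluate", "improve the design"]
        bloom_levels = ["knowledge", "comprehension", "application",
                        "analysis", "synthesis", "evaluation"]
        for task, level in zip(design_tasks, bloom_levels):
            print(f"{task:28s} -> {level}")

        def triangular_membership(x, low, peak, high):
            """Hypothetical triangular fuzzy set, e.g. a 'light' wing weight in kg."""
            if x <= low or x >= high:
                return 0.0
            return (x - low) / (peak - low) if x <= peak else (high - x) / (high - peak)

        print("membership of 9500 kg in 'light':", triangular_membership(9500, 8000, 9000, 12000))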

    Automated knowledge acquisition for knowledge-based systems: KE-KIT

    Despite recent progress, knowledge acquisition remains a central problem for the development of intelligent systems. There are many people throughout the world doing studies in this area; however, very few automated techniques have made it to the marketplace. In this light, the idea of automating the knowledge acquisition process is very appealing and may lead to a breakthrough. Most (if not all) of the approaches and techniques concerning intelligent and expert systems, and specifically knowledge-based systems, can still be considered in their infancy and definitely do not subscribe to any kind of standards. Many things have yet to be learned, incorporated into the technology, and combined with methods from traditional computer science and psychology. KE-KIT is a prototype system which attempts to automate a portion of the knowledge engineering process. The emphasis is on the automation of knowledge acquisition activities; however, the transformation of knowledge from an intermediate form to a knowledge-base format is also addressed. The approach used to automate the knowledge acquisition process is based on the personal construct theory developed by George Kelly in the field of psychology. This thesis gives an in-depth view of knowledge engineering with a concentration on the knowledge acquisition process. Several issues and approaches are described, and greater detail is given on the personal construct theory approach to knowledge acquisition and its use of a repertory grid. In addition, some existing knowledge acquisition tools are briefly explored. Details concerning the implementation of KE-KIT and reflections on its applicability round out the presented material.
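
    Kelly's repertory grid, on which KE-KIT's elicitation is based, can be thought of as a matrix of elements rated against bipolar constructs. The Python sketch below uses invented elements, constructs, and ratings (not KE-KIT's own representation) to show how such a grid supports simple similarity judgements:

        # A toy repertory grid: each construct holds one rating per element on a
        # 1 (left pole) to 5 (right pole) scale. Elements and constructs are invented.
        elements = ["fault A", "fault B", "fault C"]
        constructs = {
            "intermittent - constant": [1, 4, 5],
            "electrical - mechanical": [2, 5, 1],
        }

        def element_distance(i, j):
            """City-block distance between two elements across all constructs."""
            return sum(abs(ratings[i] - ratings[j]) for ratings in constructs.values())

        # Small distances suggest the expert construes the two elements similarly,
        # which hints at concepts or rules worth eliciting further.
        for i in range(len(elements)):
            for j in range(i + 1, len(elements)):
                print(elements[i], "vs", elements[j], "->", element_distance(i, j))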

    Multi-perspective modelling for knowledge management and knowledge engineering

    It seems almost self-evident that “knowledge management” and “knowledge engineering” should be related disciplines that may share techniques and methods between them. However, attempts by knowledge engineers to apply their techniques to knowledge management have been praised by some and derided by others, who claim that knowledge engineers have a fundamentally wrong concept of what “knowledge management” is. The critics also point to specific weaknesses of knowledge engineering, notably the lack of a broad context for the knowledge. Knowledge engineering has suffered some criticism from within its own ranks, too, particularly of the “rapid prototyping” approach, in which acquired knowledge was encoded directly into an iteratively developed computer system. This approach was indeed rapid, but when used to deliver a final system, it became nearly impossible to verify and validate the system or to maintain it. A solution to this has come in the form of knowledge engineering methodology, and particularly in the CommonKADS methodology.

    The 1992 Goddard Conference on Space Applications of Artificial Intelligence

    The purpose of this conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers fall into the following areas: planning and scheduling, control, fault monitoring/diagnosis and recovery, information management, tools, neural networks, and miscellaneous applications.

    DataGauge: A Model-Driven Framework for Systematically Assessing the Quality of Clinical Data for Secondary Use

    There is growing interest in the reuse of clinical data for research and clinical healthcare quality improvement. However, direct analysis of clinical data sets can yield misleading results. Data cleaning is often employed as a means to detect and fix data issues during analysis, but this approach lacks systematicity. Data quality (DQ) assessments are a more thorough way of spotting threats to the validity of analytical results stemming from data repurposing, because DQ assessments aim to evaluate ‘fitness for purpose’. However, there is currently no systematic method to assess DQ for the secondary analysis of clinical data. In this dissertation I present DataGauge, a framework to address this gap in the state of the art. I begin by introducing the problem and its general significance to the field of biomedical and clinical informatics (Chapter 1). I then present a literature review that surveys current methods for the DQ assessment of repurposed clinical data and derive the features required to advance the state of the art (Chapter 2). In Chapter 3 I present DataGauge, a model-driven framework for systematically assessing the quality of repurposed clinical data, which addresses current limitations in the state of the art. Chapter 4 describes the development of a guidance framework to ensure the systematicity of DQ assessment design. I then evaluate DataGauge's ability to flag potential DQ issues in comparison to a systematic state-of-the-art method. DataGauge increased the number of potential DQ issues found tenfold over the state-of-the-art method. It identified more specific issues that were a direct threat to fitness for purpose, but also provided broader coverage of the clinical data types and knowledge domains involved in secondary analyses. DataGauge sets the groundwork for systematic and purpose-specific DQ assessments that fully integrate with secondary analysis workflows. It also promotes a team-based approach and the explicit definition of DQ requirements to support communication and transparent reporting of DQ results. Overall, this work provides tools that pave the way to a deeper understanding of repurposed clinical dataset limitations before analysis. It is also a first step towards the automation of purpose-specific DQ assessments for the secondary use of clinical data. Future work will consist of further development of these methods and validating them with research teams making secondary use of clinical data.
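
    DataGauge itself is model-driven and purpose-specific, and its actual models and guidance framework are not reproduced here. The Python sketch below only illustrates the general idea of declaring explicit DQ rules against a repurposed clinical table and counting the rows they flag, using an invented pandas data frame and invented plausibility thresholds:

        import pandas as pd

        # Invented example of a repurposed clinical table.
        visits = pd.DataFrame({
            "patient_id": [1, 1, 2, 3],
            "systolic_bp": [120, None, 300, 85],
            "visit_date": ["2020-01-05", "2020-01-05", "2020-02-10", "2020-03-01"],
        })

        # Purpose-specific DQ rules, stated explicitly so the team can review them.
        rules = {
            "missing systolic_bp": visits["systolic_bp"].isna(),
            "implausible systolic_bp": visits["systolic_bp"].notna()
                                       & ~visits["systolic_bp"].between(60, 250),
            "duplicate patient/date": visits.duplicated(subset=["patient_id", "visit_date"]),
        }

        for name, flagged in rules.items():
            print(f"{name}: {int(flagged.sum())} row(s) flagged")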