    Explaining and Refining Decision-Theoretic Choices

    As the need to make complex choices among competing alternative actions is ubiquitous, the reasoning machinery of many intelligent systems will include an explicit model for making choices. Decision analysis is particularly useful for modelling such choices, and its potential use in intelligent systems motivates the construction of facilities for automatically explaining decision-theoretic choices and for helping users to incrementally refine the knowledge underlying them. The proposed thesis addresses the problem of providing such facilities. Specifically, we propose the construction of a domain-independent facility called UTIL, for explaining and refining a restricted but widely applicable decision-theoretic model called the additive multi-attribute value model. In this proposal we motivate the task, address the related issues, and present preliminary solutions in the context of examples from the domain of intelligent process control.
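    The additive multi-attribute value model named above has a simple, standard form: an alternative's overall value is a weighted sum of normalized single-attribute values, V(a) = Σᵢ wᵢ·vᵢ(aᵢ). The sketch below illustrates that form on a hypothetical process-control choice; the attributes, weights, and value functions are invented for illustration and are not UTIL's.

    ```python
    # Minimal sketch of the additive multi-attribute value model.
    # All attribute names, weights, and value functions are hypothetical.

    def additive_value(alternative, weights, value_fns):
        """Overall value V(a) = sum_i w_i * v_i(a_i), with weights summing
        to 1 and each single-attribute value function mapping into [0, 1]."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9
        return sum(w * value_fns[attr](alternative[attr])
                   for attr, w in weights.items())

    # Hypothetical process-control example: trade off throughput against
    # deviation from a temperature setpoint.
    weights = {"throughput": 0.6, "temp_deviation": 0.4}
    value_fns = {
        "throughput": lambda x: min(x / 100.0, 1.0),          # more is better
        "temp_deviation": lambda x: max(1.0 - x / 50.0, 0.0), # less is better
    }
    action = {"throughput": 80.0, "temp_deviation": 10.0}
    print(additive_value(action, weights, value_fns))  # 0.6*0.8 + 0.4*0.8 = 0.8
    ```

    An explanation facility like the one proposed can then be read as reporting, per attribute, the contribution wᵢ·vᵢ(aᵢ) that drives one alternative above another.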

    Consideration of interdependencies in the relational database system, and, A proposal and evaluation of an expert system for the relational database structure

    This thesis addresses the issue of interdependencies in distributed and non-distributed relational database management systems and proposes the design and development of an expert system to manage and enhance the currently available database structures. In the first part, we study, compare, and evaluate the interdependencies found in the operating environment relevant to the distributed relational structure. Hardware and software configurations are grouped and compared in an attempt to understand the interdependencies of the system so that an optimal configuration may be obtained. In the second part, we design and develop an expert system configuration with ease of use and functionality as foremost concerns. The system reuses the transient tables used to service queries to achieve a performance improvement without explicit user knowledge. Basic fragmentation principles are also used to aid performance by implicitly restructuring the tables within a database to balance access time. (Abstract shortened with permission of author.)
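    The transient-table reuse described above can be pictured as query-keyed caching of materialized intermediate results. The sketch below is a minimal illustration using SQLite; the schema, the whitespace-normalized cache key, and the unbounded cache policy are all assumptions, not the thesis design.

    ```python
    # Minimal sketch: transient tables built to service a sub-query are kept
    # and silently reused when an equivalent sub-query recurs.
    import sqlite3

    class TransientTableCache:
        def __init__(self, conn):
            self.conn = conn
            self.cache = {}  # normalized sub-query text -> transient table name

        def materialize(self, subquery):
            key = " ".join(subquery.split()).lower()   # crude normalization
            if key not in self.cache:                  # first time: build it
                name = f"transient_{len(self.cache)}"
                self.conn.execute(f"CREATE TEMP TABLE {name} AS {subquery}")
                self.cache[key] = name
            return self.cache[key]                     # later: reuse silently

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])
    cache = TransientTableCache(conn)
    t = cache.materialize("SELECT * FROM orders WHERE total > 10")
    print(conn.execute(f"SELECT COUNT(*) FROM {t}").fetchall())  # [(1,)]
    ```

    The point of the design is that the reuse happens below the query interface, so users see only the performance improvement, never the transient tables themselves.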

    Data-centric Design and Training of Deep Neural Networks with Multiple Data Modalities for Vision-based Perception Systems

    Advances in computer vision and machine learning have revolutionized the ability to build systems that process and interpret digital data, enabling them to mimic human perception and opening the way to a wide range of applications. In recent years, both disciplines have made significant progress, driven by advances in deep learning techniques. Deep learning is a discipline that uses deep neural networks (DNNs) to teach machines to recognize patterns and make predictions based on data. Perception systems based on deep learning are increasingly common in fields where humans and machines collaborate to combine their strengths, including the automotive industry, manufacturing, and medicine, where improving safety, supporting diagnosis, and automating repetitive tasks are among the goals pursued. However, data is one of the key factors behind the success of deep learning algorithms, and this dependence on data strongly limits the creation and success of new DNNs. The availability of quality data for solving a specific problem is essential but hard to obtain, even impracticable, in most developments. Data-centric artificial intelligence emphasizes the importance of using high-quality data that effectively conveys what a model must learn. Motivated by these challenges and the need for data, this thesis formulates and validates five hypotheses on the acquisition and impact of data in the design and training of DNNs. Specifically, we investigate and propose different methodologies for obtaining data suitable for training DNNs in problems with limited access to large-scale data sources. We explore two possible solutions for obtaining training data, both based on synthetic data generation. First, we investigate synthetic data generation using 3D graphics and the impact of different design choices on the accuracy of the resulting DNNs. In addition, we propose a methodology to automate the data generation process and produce varied annotated data by replicating a custom 3D environment from an input configuration file. Second, we propose a generative adversarial network (GAN) that generates annotated images using limited annotated datasets and unannotated data captured in uncontrolled environments.
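    The configuration-driven generation step can be sketched as: read a scene description, randomize the configured parameters per sample, render, and keep the scene parameters as free annotations. Below is a minimal sketch of that loop; the config keys, parameter ranges, and the render() stand-in are assumptions for illustration, not the thesis pipeline.

    ```python
    # Minimal sketch of config-driven synthetic data generation: a 3D scene
    # is replicated from an input configuration file and rendered into
    # varied, automatically annotated samples.
    import json
    import random

    def render(scene):
        # Placeholder for a 3D engine call; a real pipeline would rasterize
        # the scene here and return an image.
        return f"image(light={scene['light']:.2f})"

    def generate_dataset(config_path):
        with open(config_path) as f:
            cfg = json.load(f)                     # replicated 3D environment
        samples = []
        for _ in range(cfg["num_samples"]):
            scene = {                              # vary lighting and pose
                "light": random.uniform(*cfg["light_range"]),
                "object_pose": [random.uniform(-1.0, 1.0) for _ in range(3)],
            }
            # The scene parameters double as the annotation, so every
            # rendered sample is labeled at zero manual cost.
            samples.append((render(scene), scene["object_pose"]))
        return samples
    ```

    The appeal of this design is that annotations come for free: whatever the generator randomizes is, by construction, ground truth for the rendered image.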

    Goddard Conference on Mass Storage Systems and Technologies, volume 2

    Papers and viewgraphs from the conference are presented. Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low-end helical scan tape drives. Additional discussion topics addressed the evolution of the identifiable unit for processing (file, granule, data set, or some similar object) as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

    Fourth NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of all those technical papers received in time for publication just prior to the Fourth Goddard Conference on Mass Storage Systems and Technologies, held March 28-30, 1995, at the University of Maryland, University College Conference Center, in College Park, Maryland. This series of conferences continues to serve as a unique medium for the exchange of information on topics relating to the ingestion and management of substantial amounts of data and the attendant problems involved. This year's discussion topics include new storage technology, stability of recorded media, performance studies, storage system solutions, the National Information Infrastructure (Infobahn), the future of storage technology, and lessons learned from various projects. There will also be an update on the IEEE Mass Storage System Reference Model Version 5, on which the final vote was taken in July 1994.

    Benchmarking the performance of two automated term-extraction systems : LOGOS and ATAO

    Thesis digitized by the Direction des bibliothèques de l'Université de Montréal. To consult the document accompanying this thesis, please contact the Centre de conservation Lionel-Groulx of the Université de Montréal ([email protected]).

    A shared-disk parallel cluster file system

    Dissertation submitted for the degree of Doctor in Informatics at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

    Today, clusters are the de facto cost-effective platform both for high performance computing (HPC) and for IT environments. HPC and IT are quite different environments, and the differences include, among others, their choices of file systems and storage: HPC favours parallel file systems geared towards maximum I/O bandwidth, which are not fully POSIX-compliant and were devised to run on top of (fault-prone) partitioned storage; conversely, IT data centres favour both external disk arrays (to provide highly available storage) and POSIX-compliant file systems (either general-purpose or shared-disk cluster file systems, CFSs). These specialised file systems do perform very well in their target environments, provided that applications do not require certain lateral features: there is no file locking on parallel file systems, and no high-performance writes over cluster-wide shared files on CFSs. In brief, none of the above approaches provides high levels of reliability and performance to both worlds. Our pCFS proposal is a contribution towards changing this situation: the rationale is to take advantage of the best of both, the reliability of cluster file systems and the high performance of parallel file systems. We do not claim to provide the absolute best of each, but we aim at full POSIX compliance, a rich feature set, and levels of reliability and performance good enough for broad usage, e.g., traditional as well as HPC applications, support of clustered DBMS engines that may run over regular files, and video streaming. pCFS' main ideas include:

    · Cooperative caching, a technique that has been used in file systems for distributed disks but, as far as we know, never in SAN-based cluster file systems or in parallel file systems. As a result, pCFS may use all infrastructures (LAN and SAN) to move data.

    · Fine-grain locking, whereby processes running across distinct nodes may define non-overlapping byte-range regions in a file (instead of locking the whole file) and access them in parallel, reading and writing over those regions at the infrastructure's full speed (provided that no major metadata changes are required).

    A prototype was built on top of GFS (a Red Hat shared-disk CFS): GFS' kernel code was slightly modified, and two kernel modules and a user-level daemon were added. In the prototype, fine-grain locking is fully implemented and a cluster-wide coherent cache is maintained through data (page fragment) movement over the LAN. Our benchmarks for non-overlapping writers over a single file shared among processes running on different nodes show that pCFS' bandwidth is 2 times greater than NFS' while being comparable to that of the Parallel Virtual File System (PVFS), both requiring about 10 times more CPU. pCFS' bandwidth also surpasses GFS' (600 times for small record sizes, e.g., 4 KB, decreasing to 2 times for large record sizes, e.g., 4 MB), at about the same CPU usage.

    Lusitania, Companhia de Seguros S.A.; Programa IBM Shared University Research (SUR).
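    The fine-grain locking idea can be illustrated with ordinary POSIX byte-range locks: each writer locks and writes only its own non-overlapping region of the shared file, so writers need not serialize on the whole file. The sketch below shows the concept with fcntl locks on a single host; pCFS' actual kernel-level implementation over GFS is not reproduced here, and the region size and file path are assumptions.

    ```python
    # Minimal sketch of byte-range (fine-grain) locking: each worker locks
    # only its own region, so non-overlapping writers proceed in parallel
    # instead of contending for a whole-file lock. POSIX-only (fcntl).
    import fcntl
    import os

    REGION = 4096  # bytes per worker; an assumption for illustration

    def write_region(path, worker_id, payload):
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
        offset = worker_id * REGION
        try:
            # Lock only [offset, offset + REGION); other regions of the
            # file remain available to other writers.
            fcntl.lockf(fd, fcntl.LOCK_EX, REGION, offset, os.SEEK_SET)
            os.pwrite(fd, payload[:REGION], offset)
        finally:
            fcntl.lockf(fd, fcntl.LOCK_UN, REGION, offset, os.SEEK_SET)
            os.close(fd)

    write_region("/tmp/shared.dat", 0, b"A" * REGION)
    write_region("/tmp/shared.dat", 1, b"B" * REGION)
    ```

    The abstract's caveat about metadata maps onto this picture: byte-range writers run at full speed only while the file's metadata (size, block layout) does not need cluster-wide coordination.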

    Why Do We Do Track Two?: Transnational Security Policy Networks and U.S. Nuclear Nonproliferation Policy

    As globalization has accelerated, unofficial transnational (a.k.a. track-two) dialogues have proliferated. Do these networks matter? This study examines both their effects in the United States and, by focusing on nuclear nonproliferation, their potential to improve cooperative security as well as conflict resolution. Reviews of relevant theory, secondary literature, and primary materials produced by three case studies--the Council on Security Cooperation in the Asia Pacific (CSCAP), the Northeast Asian Cooperation Dialogue (NEACD), and the Program on New Approaches to Russian Security (PONARS)--supplemented and guided 67 original interviews to help answer the question: Have transnational security policy networks changed U.S. nuclear nonproliferation policies or the perceptions that shape them? These networks have improved intelligence and private as well as public diplomacy, enhancing the analytical capacity and soft power of their participants and interlocutors. They have strengthened otherwise weak ties across countries, areas of expertise, generations, and professions, particularly from inside government to nongovernmental experts, to provide blunter feedback and improve open-source intelligence analysis. These improvements are three-dimensional--delving deeper into overseas foreign policy elites, integrating across wider issues and regions, over longer periods of time--to help understand the implications of political changes, summits, and crises. Diplomatically, they have provided fora for nongovernmental experts and government officials in their private capacity to better understand and convey interests behind official talking points. Although U.S. policymakers will realistically rarely participate, they benefit from one-page or personal briefings by the most effective networks--those that have diverse members, integrate current or former government officials, and focus on ideas and information exchange. Although pressures exist to prove networks changed near-term policy decisions, the diversity that improves intelligence also impedes consensus on policy recommendations, which can be more effectively made by issue-specific cells derived from the network base. Ultimately, these networks empower their members and interlocutors with ideas and information, which enhances their soft power and builds their capacity to diagnose and agree on the root causes of contemporary threats, understand the political pressures shaping national responses to them, evaluate the merits of potential strategies to respond, and explore prospects for cooperative solutions.