
    Consciousness in Cognitive Architectures. A Principled Analysis of RCS, Soar and ACT-R

    This report analyses the applicability of the principles of consciousness developed in the ASys project to three of the most relevant cognitive architectures. This is done in relation to their applicability to building integrated control systems and by studying their support for general mechanisms of real-time consciousness. To analyse these architectures, the ASys Framework is employed: a conceptual framework based on an extension of the General Systems Theory (GST) for cognitive autonomous systems. General qualitative evaluation criteria for cognitive architectures are established based upon: a) requirements for a cognitive architecture, b) the theoretical framework based on the GST, and c) core design principles for integrated cognitive conscious control systems.

    Towards integrating multi-criteria analysis techniques in dependability benchmarking

    Increasing integration scales are promoting the development of myriads of new devices and technologies, such as smartphones, ad hoc networks, or field-programmable devices, among others. The proliferation of such devices, with increasing autonomy and communication capabilities, is paving the way for a new paradigm known as the Internet of Things, in which computing is ubiquitous and devices autonomously exchange information and cooperate with each other and with existing IT infrastructures to improve people’s and society’s welfare. This new paradigm leads to huge business opportunities for manufacturers, application developers, and service providers in very different application domains, like consumer electronics, transport, or health. Accordingly, and to make the most of these incipient opportunities, industry relies more than ever on the use and re-use of commercial off-the-shelf (COTS) components, developed either in-house or by third parties, to decrease time-to-market and costs. In this race to hit the market first, companies are nowadays concerned with the dependability of both COTS and final products, even for non-critical applications, as unexpected failures may damage the reputation of the manufacturer and limit the acceptability of their new products. Therefore, benchmarking techniques adapted to dependability contexts (dependability benchmarking) are being deployed in order to assess, compare, and select, i) the best suited COTS, among existing alternatives, to be integrated into a new product, and ii) the configuration parameters that get the best trade-off between performance and dependability. However, although dependability benchmarking procedures have been defined and applied to a wide set of application domains, no rigorous and precise decision-making process has been established yet, thus hindering the main goal of these approaches: the fair and accurate comparison and selection of existing alternatives taking into account both performance and dependability attributes. Indeed, results extracted from experimentation could be interpreted in so many different ways, according to the context of use of the system and the subjectivity of the benchmark analyser, that defining a clear and accurate decision-making process is a must to enable the reproducibility of conclusions. Thus, this master's thesis focuses on how to integrate a decision-making methodology into the regular dependability benchmarking procedure. The challenges to be faced include how to deal with the requirements from industry, which call for a single score characterising a system, and from academia, which call for as many measures as possible to accurately characterise the system, and how to navigate from one representation to the other without losing meaningful information.
    Martínez Raga, M. (2013). Towards integrating multi-criteria analysis techniques in dependability benchmarking. http://hdl.handle.net/10251/39987
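    The industry/academia trade-off described above (a single aggregate score versus a full set of measures) can be illustrated with a weighted-sum multi-criteria aggregation. This is only a minimal sketch: the candidate names, measures, and weights are invented for illustration and are not taken from the thesis, which may rely on a different multi-criteria technique.

```python
# Minimal sketch of a weighted-sum multi-criteria aggregation of
# dependability benchmarking results. All measures, weights, and
# candidate names are hypothetical.

def normalise(values, higher_is_better=True):
    """Rescale raw measures to [0, 1] so they can be aggregated."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

# Raw benchmark measures per COTS candidate (hypothetical numbers).
candidates = ["COTS-A", "COTS-B", "COTS-C"]
throughput = [1200, 950, 1100]         # ops/s, higher is better
availability = [0.999, 0.9999, 0.995]  # fraction, higher is better
recovery_time = [4.2, 1.1, 6.5]        # seconds, lower is better

# Importance weights chosen by the benchmark analyser (sum to 1).
weights = {"throughput": 0.4, "availability": 0.4, "recovery_time": 0.2}

norm = {
    "throughput": normalise(throughput),
    "availability": normalise(availability),
    "recovery_time": normalise(recovery_time, higher_is_better=False),
}

# Collapse the detailed characterisation into one score per candidate.
scores = {
    name: sum(weights[m] * norm[m][i] for m in weights)
    for i, name in enumerate(candidates)
}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

    Keeping the normalised per-measure values alongside the final score is one way to move back from the single industry-facing number to the detailed academic view without losing information.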

    Enabling Astronaut Self-Scheduling using a Robust Advanced Modelling and Scheduling system: an assessment during a Mars analogue mission

    Human long duration exploration missions (LDEMs) raise a number of technological challenges. This paper addresses the question of crew autonomy: as the distances increase, the communication delays and constraints tend to prevent the astronauts from being monitored and supported by real-time ground control. Eventually, future planetary missions will necessarily require a form of astronaut self-scheduling. We study the usage of a computer decision-support tool by a crew of analog astronauts, during a Mars simulation mission conducted at the Mars Desert Research Station (MDRS, Mars Society) in Utah. The proposed tool, called Romie, belongs to the new category of Robust Advanced Modelling and Scheduling (RAMS) systems. It allows the crew members (i) to visually model their scientific objectives and constraints, (ii) to compute near-optimal operational schedules while taking uncertainty into account, (iii) to monitor the execution of past and current activities, and (iv) to modify scientific objectives/constraints in response to unforeseen events and opportunistic science. In this study, we empirically measure how the astronauts, who are novice planners, perform when using such a tool for self-scheduling under the realistic assumptions of a simulated Martian planetary habitat.
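    As a rough illustration of the kind of robustness a RAMS system aims for, the sketch below places activities on a crew day greedily while padding each uncertain duration with a safety buffer. The activities, priorities, and greedy policy are invented for illustration; this is not Romie's actual model or scheduling algorithm.

```python
# Illustrative sketch of robust activity scheduling: each activity's
# uncertain duration is padded with a safety buffer before it is
# placed on the timeline. Toy example only, not Romie's algorithm.

from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    expected_min: float   # expected duration, minutes
    stddev_min: float     # uncertainty on the duration, minutes
    priority: int         # higher = schedule earlier

def robust_schedule(activities, day_length_min=480.0, buffer_sigmas=1.5):
    """Greedily place activities by priority, reserving a buffer of
    `buffer_sigmas` standard deviations on top of each duration."""
    timeline, t = [], 0.0
    for act in sorted(activities, key=lambda a: -a.priority):
        reserved = act.expected_min + buffer_sigmas * act.stddev_min
        if t + reserved > day_length_min:
            continue  # does not fit within the crew day, skip it
        timeline.append((t, t + reserved, act.name))
        t += reserved
    return timeline

crew_day = [
    Activity("EVA sample collection", 120, 30, priority=3),
    Activity("Greenhouse maintenance", 45, 10, priority=2),
    Activity("Spectrometer calibration", 60, 20, priority=2),
    Activity("Outreach photography", 30, 5, priority=1),
]

for start, end, name in robust_schedule(crew_day):
    print(f"{start:6.1f} - {end:6.1f} min  {name}")
```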

    Total Quality Management System in an Education Environment: The Case of a Private University in Bahrain

    The study aims to analyze and explain the effectiveness and efficiency of implementing total quality management principles in private educational institutions. The context of the study narrowed the areas of comparison down to tutorial conduct, student affairs, and infrastructure. A detailed analysis of the total quality management system currently in place at the selected university was carried out, which revealed flaws and weaknesses in the systems of universities in the Kingdom of Bahrain more generally. Findings from a survey and interview sessions indicated that teachers were not consulted about changes to the curriculum, which leads to a lack of co-operation between management and teachers. Another problem regarding the total quality management implementation was that students perceived the university as not being concerned with maintenance of the premises. Moreover, the student affairs/services section suffered from a serious lack of sports facilities, limited training sessions, and poor equipment maintenance.

    COVLIAS 1.0Lesion vs. MedSeg: An Artificial Intelligence Framework for Automated Lesion Segmentation in COVID-19 Lung Computed Tomography Scans

    COVID-19 is a disease with multiple variants, and is quickly spreading throughout the world. It is crucial to identify patients who are suspected of having COVID-19 early, because the vaccine is not readily available in certain parts of the world. Lung computed tomography (CT) imaging can be used to diagnose COVID-19 as an alternative to the RT-PCR test in some cases. The occurrence of ground-glass opacities in the lung region is a characteristic of COVID-19 in chest CT scans, and these are daunting to locate and segment manually. The proposed study consists of a combination of solo deep learning (DL) and hybrid DL (HDL) models to tackle lesion location and segmentation more quickly. One DL and four HDL models, namely PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet, were trained using annotations from an expert radiologist. The training scheme adopted a fivefold cross-validation strategy on a cohort of 3000 images selected from a set of 40 COVID-19-positive individuals. The proposed variability study uses tracings from two trained radiologists as part of the validation. Five artificial intelligence (AI) models were benchmarked against MedSeg. The best AI model, ResNet-UNet, was superior to MedSeg by 9% and 15% for Dice and Jaccard, respectively, when compared against MD 1, and by 4% and 8%, respectively, when compared against MD 2. Statistical tests, namely the Mann-Whitney test, paired t-test, and Wilcoxon test, demonstrated its stability and reliability, with p < 0.0001. The online system processed each slice in <1 s. The AI models reliably located and segmented COVID-19 lesions in CT scans. The COVLIAS 1.0Lesion lesion locator passed the intervariability test.
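    The Dice and Jaccard measures used to compare the models are simple overlap ratios between a predicted lesion mask and a reference tracing. The sketch below shows how they are typically computed; the masks are tiny synthetic examples, not data from the COVLIAS study.

```python
# Dice and Jaccard overlap between a predicted lesion mask and a
# reference (radiologist) mask. The masks here are small synthetic
# examples, not data from the COVLIAS study.

import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2 * |intersection| / (|pred| + |ref|) for binary masks."""
    inter = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 2.0 * inter / total if total else 1.0

def jaccard(pred: np.ndarray, ref: np.ndarray) -> float:
    """Jaccard = |intersection| / |union| for binary masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0

pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0]], dtype=bool)
ref  = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 1, 1]], dtype=bool)

print(f"Dice:    {dice(pred, ref):.3f}")    # 0.800 for these masks
print(f"Jaccard: {jaccard(pred, ref):.3f}") # 0.667 for these masks
```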