
    Semantic Support for Log Analysis of Safety-Critical Embedded Systems

    Testing is a key activity in the development life-cycle of safety-critical embedded systems. In particular, much effort is spent on the analysis and classification of test logs from SCADA subsystems, especially when failures occur. Human expertise is needed to understand the reasons for failures, to trace errors back to their source, and to determine which requirements are affected by errors and which would be affected by possible changes in the system design. Semantic techniques and full-text search are used to support human experts in the analysis and classification of test logs, in order to speed up and improve the diagnosis phase. Moreover, retrieval of tests and requirements that may be related to the current failure is supported, enabling the discovery of available alternatives and solutions for a better and faster investigation of the problem. Comment: EDCC-2014, BIG4CIP-2014. Keywords: embedded systems, testing, semantic discovery, ontology, big data.
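    The full-text retrieval the abstract mentions can be sketched with a minimal inverted index over test-log lines. This is an illustrative baseline only, assuming plain-text logs; the paper's actual system layers semantic/ontology techniques on top of such retrieval.

```python
import re
from collections import defaultdict

def tokenize(line):
    """Lowercase a log line and split it into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", line.lower())

def build_index(logs):
    """Map each token to the set of log-line ids that contain it."""
    index = defaultdict(set)
    for line_id, line in enumerate(logs):
        for token in tokenize(line):
            index[token].add(line_id)
    return index

def search(index, query):
    """Return ids of log lines containing every query token (AND semantics)."""
    sets = [index.get(tok, set()) for tok in tokenize(query)]
    return set.intersection(*sets) if sets else set()

# Toy test logs standing in for real SCADA-subsystem output.
logs = [
    "TEST 12 FAILED: timeout waiting for SCADA heartbeat",
    "TEST 13 PASSED: heartbeat received within limit",
    "TEST 14 FAILED: checksum error in telemetry frame",
]
index = build_index(logs)
hits = search(index, "failed heartbeat")  # {0}
```

    A diagnosis tool would then link the retrieved lines to the tests and requirements they reference, which is where the semantic layer comes in.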

    Automating Text Encapsulation Using Deep Learning

    Data is important in any form, be it communication, reviews, news articles, social media, machine data, or real-time data. With the emergence of Covid-19, a pandemic like no other in recent times, information pours in from all directions on the internet. At times it is overwhelming to determine which data to read and follow. Another crucial aspect is separating factual data from the distorted data that is circulated widely. The title or short description of this data can play a key role. Many times these descriptions can deceive a user with unwanted information. The user is then more likely to share this information with colleagues and family, and if they too are unaware, this false piece of information can spread like wildfire. Deep learning models can play a vital role in automatically encapsulating the description and providing an accurate overview. The end user can then use this automated overview to decide whether a piece of information should be consumed. This research presents an efficient deep learning model for automating text encapsulation and compares it with existing systems in terms of data, features, and points of failure. It aims at condensing text excerpts more accurately.
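    As a point of reference for what "text encapsulation" means operationally, here is a classical frequency-based extractive summarizer. It is a hedged sketch of the baseline family the thesis compares against, not the deep model the thesis proposes.

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Pick the n highest-scoring sentences, scored by average word frequency."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        toks = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    ranked = sorted(sentences, key=score, reverse=True)
    chosen = set(ranked[:n_sentences])
    # Emit chosen sentences in their original order.
    return " ".join(s for s in sentences if s in chosen)

text = "Cats sleep a lot. Cats eat fish. Dogs bark."
summary = summarize(text, 1)  # "Cats eat fish."
```

    Deep abstractive models differ in that they generate new wording rather than selecting existing sentences, which is what allows them to produce a short, accurate overview.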

    Biblioteca de procesamiento de imágenes optimizada para Arm Cortex-M7

    Most modern vehicles are equipped with systems that assist the driver by automating difficult and repetitive tasks, such as reducing the vehicle speed in a school zone. Some of these systems require an onboard computer capable of real-time processing of road images captured by a camera. The goal of this project is to implement an optimized image processing library for the ARM® Cortex®-M7 architecture. The library includes routines for image spatial filtering, subtraction, binarization, and extraction of directional information, along with parameterized pattern recognition of a predefined template using the Generalized Hough Transform (GHT). These routines are written in the C programming language, leveraging GNU ARM C compiler optimizations to obtain maximum performance and minimum object size. The performance of the routines was benchmarked against an existing implementation for a different microcontroller, the Freescale® MPC5561. To prove the usability of the library in a real-time application, a Traffic Sign Recognition (TSR) system was implemented. The results show that, on average, execution time is 18% faster and binary object size is 25% smaller than in the reference implementation, enabling the TSR application to process up to 24 fps. In conclusion, these results demonstrate that the image processing library implemented in this project is suitable for real-time applications. (ITESO, A. C.; Consejo Nacional de Ciencia y Tecnología; Continental Automotive)
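    Of the routines listed, binarization is the simplest to illustrate. The sketch below shows the operation in Python for clarity; the library itself implements it as optimized C for the Cortex-M7, and the fixed threshold here is an assumption for illustration.

```python
def binarize(image, threshold):
    """Map each grayscale pixel to 1 if it meets the threshold, else 0.

    `image` is a list of rows of integer pixel values (0-255).
    """
    return [[1 if px >= threshold else 0 for px in row] for row in image]

# A tiny 2x3 grayscale image standing in for a camera frame.
img = [
    [ 10, 200,  50],
    [255,  90, 128],
]
binary = binarize(img, 128)  # [[0, 1, 0], [1, 0, 1]]
```

    In the full pipeline, binarization typically follows spatial filtering and precedes the Generalized Hough Transform, which votes over the remaining foreground pixels to locate the predefined template.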

    An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols

    We describe an effort to annotate a corpus of natural language instructions consisting of 622 wet lab protocols to facilitate automatic or semi-automatic conversion of protocols into a machine-readable format and to benefit biological research. Experimental results demonstrate the utility of our corpus for developing machine learning approaches to shallow semantic parsing of instructional texts. We make our annotated Wet Lab Protocol Corpus available to the research community.
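    Shallow semantic parsing of a protocol step is commonly cast as span labeling in BIO notation. The example below illustrates the idea; the label names are assumptions for illustration, not the corpus's exact tag set.

```python
# One tokenized protocol step with hypothetical BIO labels:
# actions and their arguments (reagent, temperature, time) marked as spans.
tokens = ["Incubate", "the", "sample", "at", "37", "C", "for", "30", "minutes"]
tags   = ["B-Action", "O", "B-Reagent", "O",
          "B-Temperature", "I-Temperature", "O", "B-Time", "I-Time"]

def spans(tokens, tags):
    """Group BIO tags into (label, phrase) spans."""
    out, cur_label, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_label:
                out.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_toks.append(tok)
        else:
            if cur_label:
                out.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = None, []
    if cur_label:
        out.append((cur_label, " ".join(cur_toks)))
    return out

parsed = spans(tokens, tags)
# [("Action", "Incubate"), ("Reagent", "sample"),
#  ("Temperature", "37 C"), ("Time", "30 minutes")]
```

    A model trained on such annotations can then convert free-text protocol steps into structured action records suitable for machine execution.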

    Automating image analysis by annotating landmarks with deep neural networks

    Image and video analysis is often a crucial step in the study of animal behavior and kinematics. These analyses frequently require that the positions of one or more animal landmarks be annotated (marked) in numerous images. Annotating landmarks can demand a significant amount of time and tedious labor, which motivates the need for algorithms that can annotate landmarks automatically. In the community of scientists that use image and video analysis to study the 3D flight of animals, there has been a trend toward more automated approaches for annotating landmarks, yet these fall short of being generally applicable. Inspired by the success of Deep Neural Networks (DNNs) on many problems in the field of computer vision, we investigate how suitable DNNs are for accurate and automatic annotation of landmarks in video datasets representative of those collected by scientists studying animals. Our work shows, through extensive experimentation on videos of hawkmoths, that DNNs are suitable for automatic and accurate landmark localization. In particular, we show that one of our proposed DNNs is more accurate than the current best algorithm for automatic localization of landmarks in hawkmoth videos. Moreover, we demonstrate how these annotations can be used to quantitatively analyze the 3D flight of a hawkmoth. To facilitate the use of DNNs by scientists from many different fields, we provide a self-contained explanation of what DNNs are, how they work, and how to apply them to other datasets using the freely available library Caffe and the supplemental code that we provide. (https://arxiv.org/abs/1702.00583; published version)
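    A common output format for DNN landmark localizers is one heatmap per landmark, with the landmark taken as the peak of the map. The sketch below shows only that final read-out step under this assumption; the network producing the heatmap is omitted, and the toy values stand in for real predictions.

```python
def peak(heatmap):
    """Return the (row, col) of the maximum value in a 2D heatmap."""
    best, best_pos = float("-inf"), (0, 0)
    for r, row in enumerate(heatmap):
        for c, value in enumerate(row):
            if value > best:
                best, best_pos = value, (r, c)
    return best_pos

# Toy 3x3 heatmap standing in for a network's prediction for one landmark
# (e.g. a hawkmoth wingtip) in one video frame.
heatmap = [
    [0.01, 0.02, 0.01],
    [0.03, 0.90, 0.05],
    [0.02, 0.04, 0.02],
]
landmark = peak(heatmap)  # (1, 1)
```

    Collecting such per-frame positions across synchronized camera views is what enables the quantitative 3D flight analysis the abstract describes.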

    Development of an Artificial Intelligence-based Solution for Document Processing Automation Using Machine Learning and NLP Techniques

    The proposal focuses on Intelligent Document Processing (IDP), which aims to automate document-processing activities using Artificial Intelligence technologies, particularly Machine Learning and Natural Language Processing techniques. The proposed solution seeks to improve the efficiency and quality of document processing in many business and organizational contexts by automating tasks such as classification, information extraction, validation, and verification of consistency between documents. The thesis covers the following phases: text identification, OCR, invoice data extraction, and quality assurance; for document files, data extraction is performed in the first phase. It details the IDP solution developed, analyses the processing results and the quality of the extracted information, and evaluates the accuracy and efficiency of the system. The focus is on extracting key fields of invoices using two different methods based on sequence labeling. Invoices are unstructured documents in which data can be located only from context. Such models typically perform well on the templates they were trained on, but processing new templates often requires new manual annotation (for example with the Prodigy tool), making labeled data tedious and time-consuming to produce. The thesis presents a set of trials using neural-network methods to examine the trade-off between data requirements and effectiveness in retrieving data from crucial sections of invoices (such as the invoice date, invoice number, order number, amount, supplier's name...). Its main contribution is a system that achieves competitive results using a small amount of data, compared to state-of-the-art systems that need to be trained on large datasets, by using a custom Named Entity Recognition (NER) model to extract the relevant information from non-uniform commercial invoice formats.
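    For contrast with the learned NER approach, invoice field extraction is often bootstrapped with a rule-based baseline like the one below. The field names, patterns, and sample invoice text are illustrative assumptions, not the thesis's actual pipeline; a trained NER model replaces these brittle per-template rules.

```python
import re

# Hypothetical patterns for three common invoice fields.
PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*(?:No\.?|Number)[:\s]+(\S+)", re.I),
    "invoice_date":   re.compile(r"Date[:\s]+(\d{4}-\d{2}-\d{2})", re.I),
    "amount":         re.compile(r"Total[:\s]+\$?([\d,]+\.\d{2})", re.I),
}

def extract_fields(text):
    """Return the first match for each known field, or None if absent."""
    return {name: (m.group(1) if (m := pat.search(text)) else None)
            for name, pat in PATTERNS.items()}

sample = "Invoice No: INV-2041\nDate: 2023-05-17\nTotal: $1,280.50"
fields = extract_fields(sample)
# {"invoice_number": "INV-2041", "invoice_date": "2023-05-17", "amount": "1,280.50"}
```

    The weakness the thesis targets is visible here: every new invoice template needs new patterns, whereas a sequence-labeling NER model generalizes across layouts from a small annotated set.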