23 research outputs found

    Teoría de Códigos en práctica: Del aula a la implementación hardware con fines industriales (Coding Theory in practice: from the classroom to hardware implementation for industrial purposes)

    The project addresses one of the biggest challenges in subjects with Coding Theory content at the University of Granada: bringing the theoretical concepts of the field closer to their real practical application, mainly in the labour market and in industry. Coding Theory is a field that lies between Mathematics and Engineering and is taught in several course programmes at the University of Granada. By its nature, it has a strong theoretical component based on Algebra, yet it has numerous industrial applications that we encounter daily: validation of bank account or credit card numbers, generation and reading of QR codes, reading of barcodes, automatic opening of garage doors by remote control, card-based access to facilities, identity verification with fingerprint readers, etc. The project focuses on the lab sessions of the courses where Coding Theory concepts are taught, and proposes implementing the theoretical coding models on real devices. In particular, it proposes the use of Arduino boards equipped with low-cost sensors (card readers, radio-frequency receivers for remote controls, fingerprint readers or barcode readers), specially designed for teaching, so that students implement the resulting algorithms on a real device in a way equivalent to how it would be done at an industrial level. The project takes as a pilot study the subject Information Theory and Coding in the fourth year of the Degree in Computer Engineering at the University of Granada.
    The methodology followed during the project was: 1- Identification of theoretical contents with practical application in industry, and selection of practical use cases to simulate in the lab sessions. 2- Selection and acquisition of the hardware needed to simulate the industrial use case in the lab classroom. 3- Development of teaching material introducing the background concepts needed to use the hardware. 4- Development of teaching material that goes into the use case in depth, explaining its relationship with the theory. 5- Resolution of practical cases. 6- Evaluation of the material and its suitability for different profiles of undergraduate/postgraduate students.
    Regarding points 1 and 2, the Arduino platform was selected as the driving force of the lab sessions, due to its great transversality and its possibilities for different courses. In addition, the programming language of these platforms (C/C++) is the main language used in the Degree in Computer Engineering. Two use cases were proposed: fingerprint readers, as an example of a means of encoding and decoding biometric information, widely used today in industry; and barcode and QR code readers, used daily in traceability tasks in goods transport or in buying/selling activities in businesses. These codes relate to the contents of the subject in several topics, such as error detection, or error detection and correction when reading the particular code in question. Regarding point 3, teaching material has been developed, in the form of slides, which introduces students with prior programming knowledge to the Arduino platform and its integration with external hardware such as input and/or output devices. Regarding point 4, two kinds of materials have been designed and developed in slide format: specific training on the hardware used in the fingerprint-reading labs, and specific training on the hardware used in the barcode and QR reading labs. Regarding point 5, two working scripts have been designed to build an example use case with each of the devices (fingerprint reader and barcode/QR reader). Finally, point 6 was evaluated by the teaching staff of the subject, who verified that the competences and objectives of the developed material reinforced/contributed to achieving the competences and objectives of the VERIFICA document in the teaching guide.
    Unidad de Calidad, Innovación Docente y Prospectiva, Universidad de Granada.
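
    The abstract relates barcodes and account/credit card numbers to the error-detection contents of the course. As a purely illustrative, hedged sketch of that idea (in Python rather than the Arduino C/C++ actually used in the lab sessions), the snippet below verifies the EAN-13 check digit that a barcode reader validates after a scan; the function names are invented for this example.

        def ean13_check_digit(digits12):
            """Compute the EAN-13 check digit from the first 12 digits."""
            total = sum(d if i % 2 == 0 else 3 * d for i, d in enumerate(digits12))
            return (10 - total % 10) % 10

        def ean13_is_valid(code):
            """Return True if a 13-digit EAN code carries a consistent check digit."""
            digits = [int(c) for c in code]
            return len(digits) == 13 and digits[-1] == ean13_check_digit(digits[:12])

        # A valid EAN-13 code; any single-digit error changes the check digit.
        print(ean13_is_valid("4006381333931"))   # True
        print(ean13_is_valid("4006381333932"))   # False (corrupted last digit)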

    On the Use of Quantum Reinforcement Learning in Energy-Efficiency Scenarios

    In the last few years, deep reinforcement learning has been proposed as a method to perform online learning in energy-efficiency scenarios such as HVAC control, electric car energy management, or building energy management, just to mention a few. On the other hand, quantum machine learning was born during the last decade to extend classic machine learning to a quantum level. In this work, we propose to study the benefits and limitations of quantum reinforcement learning to solve energy-efficiency scenarios. As a testbed, we use existing energy-efficiency-based reinforcement learning simulators and compare classic algorithms with the quantum proposal. Results in HVAC control, electric vehicle fuel consumption, and profit optimization of electric charging stations suggest that quantum neural networks are able to solve problems in reinforcement learning scenarios with better accuracy than their classical counterparts, obtaining a better cumulative reward with fewer parameters to be learned.
    Project QUANERGY (TED2021-129360B-I00), Ecological and Digital Transition R&D projects call 2022, Government of Spain.
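
    As a hedged illustration of how a quantum model can replace the function approximator in such scenarios, the Python sketch below builds a small variational circuit with PennyLane and evaluates it as a Q-value estimator for two discrete actions; the qubit count, layer count, embedding, and action set are assumptions for this example, not the architecture reported in the work.

        import pennylane as qml
        from pennylane import numpy as np

        n_qubits, n_layers = 4, 2                      # illustrative sizes
        dev = qml.device("default.qubit", wires=n_qubits)

        @qml.qnode(dev)
        def q_values(state, weights):
            # Encode the (rescaled) environment observation into rotation angles.
            qml.AngleEmbedding(state, wires=range(n_qubits))
            # Trainable entangling layers play the role of a DQN's hidden layers.
            qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
            # One expectation value per discrete action (two actions assumed).
            return [qml.expval(qml.PauliZ(w)) for w in range(2)]

        shape = qml.StronglyEntanglingLayers.shape(n_layers, n_qubits)
        weights = np.random.uniform(0, np.pi, size=shape)
        state = np.array([0.1, 0.5, -0.3, 0.2])        # toy observation (e.g. temperatures)
        print(q_values(state, weights))                # two action-value estimates in [-1, 1]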

    Few-Shot User-Adaptable Radar-Based Breath Signal Sensing

    Vital signs estimation provides valuable information about an individual's overall health status. Gathering such information usually requires wearable devices or privacy-invasive settings. In this work, we propose a radar-based, user-adaptable solution for respiratory signal prediction while sitting at an office desk. Such an approach leads to a contact-free, privacy-friendly, and easily adaptable system with little reference training data. Data from 24 subjects are preprocessed to extract respiration information using a 60 GHz frequency-modulated continuous wave radar. With few training examples, episodic optimization-based learning allows for generalization to new individuals. Episodically, a convolutional variational autoencoder learns how to map the processed radar data to a reference signal, generating a latent space constrained to the central respiration frequency. Moreover, autocorrelation over time of the recorded radar data assesses the information corruption due to subject motion. The model learning procedure and breathing prediction are adjusted by exploiting the motion corruption level. Thanks to the episodically acquired knowledge, the model requires an adaptation time of less than one and two seconds for one and five training examples, respectively. The suggested approach represents a novel, quickly adaptable, non-contact alternative for office settings with little user motion.
    ITEA3 Unleash Potentials in Simulation (UPSIM) project (N°19006), German Federal Ministry of Education and Research (BMBF), Austrian Research Promotion Agency (FFG), Rijksdienst voor Ondernemend Nederland (Rvo), Innovation Fund Denmark (IFD).
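
    The autocorrelation-based corruption check lends itself to a compact sketch. Below is a minimal numpy illustration, under the assumption that low frame-to-frame autocorrelation of the radar signal over time indicates strong body motion; the lag, normalization, and the score itself are invented for this example and are not the exact measure used in the paper.

        import numpy as np

        def motion_corruption_score(frames, lag=1):
            """Heuristic: low lag-`lag` autocorrelation of the radar signal over time
            suggests large subject motion (score near 1); a still, breathing subject
            gives a quasi-periodic signal (score near 0)."""
            x = np.asarray(frames, dtype=float)
            a, b = x[:-lag], x[lag:]
            a = (a - a.mean()) / (a.std() + 1e-9)
            b = (b - b.mean()) / (b.std() + 1e-9)
            corr = np.mean(a * b)
            return 1.0 - np.clip(corr, 0.0, 1.0)

        # Toy usage: a clean breathing-like signal vs. the same signal with heavy noise.
        t = np.linspace(0, 30, 600)
        breathing = np.sin(2 * np.pi * 0.25 * t)       # ~15 breaths per minute
        corrupted = breathing + 2.0 * np.random.randn(600)
        print(motion_corruption_score(breathing), motion_corruption_score(corrupted))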

    Few-Shot User-Definable Radar-Based Hand Gesture Recognition at the Edge

    Technological advances and scalability are leading Human-Computer Interaction (HCI) to evolve towards intuitive forms, such as gesture recognition. Among the various interaction strategies, radar-based recognition is emerging as a touchless, privacy-secure, and versatile solution in different environmental conditions. Classical radar-based gesture HCI solutions involve deep learning but require training on large and varied datasets to achieve robust prediction. Innovative self-learning algorithms can help tackle this problem by recognizing patterns and adapting from similar contexts. Yet, such approaches are often computationally expensive and hardly integrable into hardware-constrained solutions. In this paper, we present a gesture recognition algorithm which is easily adaptable to new users and contexts. We exploit an optimization-based meta-learning approach to enable gesture recognition in learning sequences. This method aims at learning the best possible initialization of the model parameters, simplifying training on new contexts when small amounts of data are available. The reduction in computational cost is achieved by processing the radar-sensed gesture data in the form of time maps, to minimize the input data size. This approach enables the adaptation of a simple convolutional neural network (CNN) to new hand poses, thus easing the integration of the model into a hardware-constrained platform. Moreover, the use of a Variational Autoencoder (VAE) to reduce the gestures' dimensionality leads to a model size decrease of an order of magnitude and halves the required adaptation time. The proposed framework, deployed on the Intel(R) Neural Compute Stick 2 (NCS 2), leads to an average accuracy of around 84% for unseen gestures when only one example per class is utilized at training time. The accuracy increases up to 92.6% and 94.2% when three and five samples per class are used.
    This work was supported in part by the ITEA3 Unleash Potentials in Simulation (UPSIM) project of the German Federal Ministry of Education and Research (BMBF) under Project 19006, in part by the Austrian Research Promotion Agency (FFG), in part by the Rijksdienst voor Ondernemend Nederland (Rvo), and in part by the Innovation Fund Denmark (IFD).
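
    To make the optimization-based meta-learning idea concrete, the hedged PyTorch sketch below runs one Reptile-style meta-update of a toy CNN over radar time maps: each few-shot task adapts a copy of the model, and the shared initialization is then moved towards the adapted weights. Reptile is used here only as a simple stand-in for this family of methods; the layer sizes, five gesture classes, and learning rates are assumptions, not the authors' exact algorithm.

        import copy, torch, torch.nn as nn

        # A tiny CNN over radar "time maps" (sizes are illustrative assumptions).
        model = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 4 * 4, 5))     # 5 gesture classes assumed
        loss_fn = nn.CrossEntropyLoss()

        def reptile_step(model, tasks, inner_steps=5, inner_lr=1e-2, meta_lr=0.1):
            """One Reptile-style meta-update over a list of (inputs, labels) tasks."""
            init = {k: v.clone() for k, v in model.state_dict().items()}
            mean_adapted = {k: torch.zeros_like(v) for k, v in init.items()}
            for x, y in tasks:
                fast = copy.deepcopy(model)             # adapt a per-task copy
                opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
                for _ in range(inner_steps):
                    opt.zero_grad()
                    loss_fn(fast(x), y).backward()
                    opt.step()
                for k, v in fast.state_dict().items():
                    mean_adapted[k] += v / len(tasks)
            # Move the shared initialization towards the average adapted weights.
            model.load_state_dict({k: init[k] + meta_lr * (mean_adapted[k] - init[k])
                                   for k in init})

        # Toy usage: two few-shot "tasks" of four labelled 32x32 time maps each.
        tasks = [(torch.randn(4, 1, 32, 32), torch.randint(0, 5, (4,))) for _ in range(2)]
        reptile_step(model, tasks)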

    Context-adaptable radar-based people counting via few-shot learning

    In many industrial or healthcare contexts, keeping track of the number of people is essential. Radar systems, with their low overall cost and power consumption, enable privacy-friendly monitoring in many use cases. Yet, radar data are hard to interpret and incompatible with most computer vision strategies. Many current deep learning-based systems achieve high monitoring performance but are strongly context-dependent. In this work, we show how context generalization approaches can let the monitoring system fit unseen radar scenarios without adaptation steps. We collect data via a 60 GHz frequency-modulated continuous wave radar in three office rooms with up to three people and preprocess them in the frequency domain. Then, using meta learning, specifically the Weighting-Injection Net, we generate relationship scores between the few training datasets and query data. We further present an optimization-based approach coupled with weighting networks that can increase the training stability when only very few training examples are available. Finally, we use pool-based sampling active learning to fine-tune the model in new scenarios, labeling only the most uncertain data. Without adaptation needs, we achieve over 80% and 70% accuracy by testing the meta learning algorithms in new radar positions and a new office, respectively.
    This work has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No. 876925 (ANDANTE). The JU receives support from the European Union's Horizon 2020 research and innovation programme and France, Belgium, Germany, Netherlands, Portugal, Spain, Switzerland. Funding for open access publishing: Universidad de Granada/CBUA.
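
    The pool-based active learning step admits a short sketch. Below is a minimal, hedged numpy example of one common acquisition rule (least confidence): score every unlabelled pool sample by the probability of its most likely class and query the least confident ones. The fake probabilistic model, feature size, and class count (0-3 people) are placeholders, not the paper's Weighting-Injection Net.

        import numpy as np

        def least_confidence_query(predict_proba, pool, k=10):
            """Return the indices of the k pool samples the model is least sure about."""
            proba = predict_proba(pool)                 # shape: (n_samples, n_classes)
            confidence = proba.max(axis=1)              # probability of the top class
            return np.argsort(confidence)[:k]           # most uncertain first

        # Toy usage with a made-up probabilistic "people counter" over radar features.
        rng = np.random.default_rng(0)
        pool = rng.normal(size=(200, 16))               # 200 unlabelled radar feature vectors

        def fake_counter(x):                            # placeholder for the trained model
            logits = rng.normal(size=(len(x), 4))       # classes: 0..3 people in the room
            e = np.exp(logits - logits.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)

        print(least_confidence_query(fake_counter, pool, k=5))   # indices to label next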

    A similarity measure for Straight Line Programs and its application to control diversity in Genetic Programming

    Finding a balance between diversity and convergence plays an important role in evolutionary algorithms to avoid premature convergence and to perform a better exploration of the search space. In the case of Genetic Programming, and more specifically for symbolic regression problems, different mechanisms have been devised to control diversity, ranging from novel crossover and/or mutation procedures to the design of distance measures that help genetic operators to increase diversity in the population. In this paper, we start from previous works where Straight Line Programs are used as an alternative representation to expression trees for symbolic regression, and develop a similarity measure based on edit distance in order to determine how different the Straight Line Programs in the population are. This measure is used in combination with the CHC algorithm strategy to control diversity in the population, and therefore to avoid local optima in solving symbolic regression problems. The proposal is first validated in a controlled scenario of benchmark datasets and compared with previous approaches to promote diversity in Genetic Programming. After that, the approach is also evaluated on a real-world dataset of energy consumption data from a set of buildings of the University of Granada.
    PID2020-112495RB-C21, B-TIC-42-UGR2
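
    Since the similarity measure is based on edit distance over Straight Line Programs, a short sketch can illustrate the idea: treat each SLP as a sequence of instructions and compute a Levenshtein-style distance between the two sequences. The instruction encoding and the unit cost for insert/delete/substitute are assumptions for this example, not the exact measure defined in the paper.

        def slp_edit_distance(p, q):
            """Levenshtein-style distance between two Straight Line Programs, each given
            as a sequence of instruction tuples such as ('add', 'u1', 'x', 'y')."""
            m, n = len(p), len(q)
            d = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m + 1):
                d[i][0] = i
            for j in range(n + 1):
                d[0][j] = j
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    cost = 0 if p[i - 1] == q[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,          # delete an instruction
                                  d[i][j - 1] + 1,          # insert an instruction
                                  d[i - 1][j - 1] + cost)   # substitute an instruction
            return d[m][n]

        # Toy usage: SLPs computing x*(x+y) and x*x + y differ in both instructions.
        p = [('add', 'u1', 'x', 'y'), ('mul', 'u2', 'x', 'u1')]
        q = [('mul', 'u1', 'x', 'x'), ('add', 'u2', 'u1', 'y')]
        print(slp_edit_distance(p, q))                      # 2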

    Concept logic trees: enabling user interaction for transparent image classification and human-in-the-loop learning

    Interpretable deep learning models are increasingly important in domains where transparent decision-making is required. In this field, the interaction of the user with the model can contribute to the interpretability of the model. In this research work, we present an innovative approach that combines soft decision trees, neural symbolic learning, and concept learning to create an image classification model that enhances interpretability and user interaction, control, and intervention. The key novelty of our method relies on the fusion of an interpretable architecture with neural symbolic learning, allowing the incorporation of expert knowledge and user interaction. Furthermore, our solution facilitates the inspection of the model through queries in the form of first-order logic predicates. Our main contribution is a human-in-the-loop model as a result of the fusion of neural symbolic learning and an interpretable architecture. We validate the effectiveness of our approach through comprehensive experimental results, demonstrating competitive performance on challenging datasets when compared to state-of-the-art solutions.
    HAT.tec GmbH. Funding for open access publishing: Universidad de Granada/CBUA.
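
    One ingredient of the approach is the soft decision tree, where every inner node routes an input to its children with a learned probability and the prediction mixes the leaf class distributions by path probability. The numpy sketch below shows a depth-2 soft tree; the depth, feature size, and random parameters are assumptions for illustration only, not the trained concept logic tree from the paper.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def soft_tree_predict(x, W, b, leaves):
            """Depth-2 soft decision tree (3 inner nodes, 4 leaves): inner node i sends
            the input left with probability sigmoid(W[i] @ x + b[i]); the output is the
            mixture of leaf class distributions weighted by the path probabilities."""
            p0, p1, p2 = (sigmoid(W[i] @ x + b[i]) for i in range(3))
            path_probs = np.array([p0 * p1,                 # left-left leaf
                                   p0 * (1 - p1),           # left-right leaf
                                   (1 - p0) * p2,           # right-left leaf
                                   (1 - p0) * (1 - p2)])    # right-right leaf
            return path_probs @ leaves                      # (4,) x (4, n_classes)

        # Toy usage: 8-dimensional features, 3 classes, random (untrained) parameters.
        rng = np.random.default_rng(1)
        W, b = rng.normal(size=(3, 8)), rng.normal(size=3)
        leaves = rng.dirichlet(np.ones(3), size=4)          # a class distribution per leaf
        print(soft_tree_predict(rng.normal(size=8), W, b, leaves))   # sums to 1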

    Time series quantum classifiers with amplitude embedding

    Quantum Machine Learning was born during the past decade as the intersection of Quantum Computing and Machine Learning. Today, advances in quantum computer hardware and the design of simulation frameworks able to run quantum algorithms on classic computers make it possible to extend classic artificial intelligence models to a quantum environment. Despite these achievements, several questions regarding the whole quantum machine learning pipeline remain unanswered, for instance the problem of classical data representation on quantum hardware, or the methodologies for designing and evaluating quantum models for common learning tasks such as classification, function approximation, clustering, etc. These problems become even more difficult to solve in the case of Time Series processing, where the context of past historical data may influence the behavior of the decision-making model. In this piece of research, we address the problem of Time Series classification using quantum models, and propose an efficient and compact representation of time series as quantum data using amplitude embedding. The proposal is capable of representing a time series of length n in log2(n) computational units, and experiments conducted on benchmark time series classification problems show that quantum models designed for classification can also outperform the accuracy of classic methods.
    Project QUANERGY (Ref. TED2021-129360B-I00), Ecological and Digital Transition R&D projects call 2022 by MCIN/AEI/10.13039/501100011033 and European Union NextGenerationEU/PRTR. Grant PID2021-128970OA-I00 funded by MCIN/AEI/10.13039/501100011033/FEDER.
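
    The log2(n) claim follows from amplitude embedding: the n (padded) values of the series become the amplitudes of a state on ceil(log2(n)) qubits. The numpy sketch below simulates that encoding classically by padding to the next power of two and normalizing to unit L2 norm; it is a hedged illustration of the representation only, not the authors' full classification pipeline.

        import numpy as np

        def amplitude_embed(series):
            """Encode a real time series of length n as the amplitude vector of a
            state on ceil(log2(n)) qubits (classically simulated)."""
            x = np.asarray(series, dtype=float)
            n_qubits = int(np.ceil(np.log2(len(x))))
            padded = np.zeros(2 ** n_qubits)
            padded[:len(x)] = x
            return n_qubits, padded / np.linalg.norm(padded)

        # Toy usage: a length-150 series fits into 8 qubits (2^8 = 256 amplitudes).
        series = np.sin(np.linspace(0, 6 * np.pi, 150))
        n_qubits, state = amplitude_embed(series)
        print(n_qubits, np.isclose(np.sum(state ** 2), 1.0))   # 8 True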

    Handling Real-World Context Awareness, Uncertainty and Vagueness in Real-Time Human Activity Tracking and Recognition with a Fuzzy Ontology-Based Hybrid Method

    Human activity recognition is a key task in ambient intelligence applications to achieve proper ambient assisted living. There has been remarkable progress in this domain, but some challenges still remain to obtain robust methods. Our goal in this work is to provide a system that allows the modeling and recognition of a set of complex activities in real-life scenarios involving interaction with the environment. The proposed framework is a hybrid model that comprises two main modules: a low-level sub-activity recognizer, based on data-driven methods, and a high-level activity recognizer, implemented with a fuzzy ontology to include the semantic interpretation of actions performed by users. The fuzzy ontology is fed by the sub-activities recognized by the low-level data-driven component and provides fuzzy ontological reasoning to recognize both the activities and their influence in the environment with semantics. An additional benefit of the approach is the ability to handle vagueness and uncertainty in the knowledge-based module, which substantially outperforms the treatment of incomplete and/or imprecise data with respect to classic crisp ontologies. We validate these advantages with the public CAD-120 dataset (Cornell Activity Dataset), achieving an accuracy of 90.1% and 91.07% for low-level and high-level activities, respectively. This entails an improvement over fully data-driven or ontology-based approaches.
    This work was funded by TUCS (Turku Centre for Computer Science), Finnish Cultural Foundation, Nokia Foundation, Google Anita Borg Scholarship, CEI BioTIC Project CEI2013-P-3, Contrato-Programa of Faculty of Education, Economy and Technology of Ceuta and Project TIN2012-30939 from National I+D Research Program (Spain). We also thank Fernando Bobillo for his support with FuzzyOWL and FuzzyDL tools.
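
    As a loose illustration of how the high-level module can combine the confidences produced by the low-level recognizer, the Python sketch below aggregates sub-activity degrees with Zadeh-style fuzzy operators (min for conjunction, max for disjunction), one common semantics in fuzzy ontology reasoners such as FuzzyDL. The rule, the sub-activity names (CAD-120-style actions), and the degrees are invented for this example and are not the actual fuzzy ontology axioms of the paper.

        def fuzzy_and(*degrees):
            return min(degrees)              # Zadeh/Goedel conjunction

        def fuzzy_or(*degrees):
            return max(degrees)              # Zadeh/Goedel disjunction

        def degree_making_cereal(sub):
            """Toy high-level rule: 'making cereal' holds to the degree that the user is
            reaching for a box AND (pouring OR moving a bowl); the sub-activity degrees
            come from the low-level, data-driven recognizer."""
            return fuzzy_and(sub.get("reaching", 0.0),
                             fuzzy_or(sub.get("pouring", 0.0), sub.get("moving", 0.0)))

        # Toy usage with made-up sub-activity confidences.
        print(degree_making_cereal({"reaching": 0.9, "pouring": 0.4, "moving": 0.7}))  # 0.7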