
    The Immune System: the ultimate fractionated cyber-physical system

    In this little vision paper we analyze the human immune system from a computer science point of view with the aim of understanding the architecture and features that allow robust, effective behavior to emerge from local sensing and actions. We then recall the notion of fractionated cyber-physical systems, and compare and contrast this to the immune system. We conclude with some challenges. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455

    Designing adaptivity in educational games to improve learning

    The study of pedagogy has shown that students have different ways of learning and processing information. Students in a classroom learn best when taught by a teacher who can adapt or change the pedagogical model in use to better suit those students and the subject being taught. For other teaching media, such as computer-assisted learning systems or educational video games, research has also identified the benefits of adapting educational features to better teach players. However, effective methods for adaptation in educational video games are less well researched. This study addresses four points regarding adaptivity within educational games. Firstly, a framework for making any game adaptive was extracted from the literature. Secondly, an algorithm capable of monitoring, modelling and executing adaptations was developed and explained using the framework. Thirdly, the algorithm's effect on players' learning gains was evaluated using a customised version of Minecraft as the educational game and topics from critical thinking as the educational content. Lastly, a methodology explaining how to use the algorithm with any educational game, together with an evaluation of that methodology, was detailed.
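
    The abstract describes an algorithm that monitors play, models the player, and executes adaptations, without spelling it out; the sketch below is only one illustrative reading of that cycle, in which the mastery estimate, the exponential-moving-average update, and the difficulty thresholds are assumptions introduced for the example.

        from dataclasses import dataclass

        @dataclass
        class PlayerModel:
            """Rolling estimate of the player's mastery of a skill (0..1)."""
            mastery: float = 0.5
            learning_rate: float = 0.2  # weight given to the newest observation

            def update(self, correct: bool) -> None:
                # Monitor + model: exponential moving average over observed outcomes.
                observation = 1.0 if correct else 0.0
                self.mastery += self.learning_rate * (observation - self.mastery)

        def choose_difficulty(model: PlayerModel) -> str:
            # Execute the adaptation: keep the challenge near the estimated level.
            if model.mastery < 0.4:
                return "easy"
            if model.mastery < 0.75:
                return "medium"
            return "hard"

        if __name__ == "__main__":
            model = PlayerModel()
            # Simulated stream of task outcomes coming from game telemetry.
            for outcome in [True, True, False, True, True, True]:
                model.update(outcome)
                print(f"mastery={model.mastery:.2f} -> next task: {choose_difficulty(model)}")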

    Processo de Seleção de Participantes numa Plataforma de Testes de Usabilidade

    With the exponential growth of web and mobile applications, usability testing has become more prevalent in organizations and has shown a positive impact on how well their products resonate with the intended users. Skipping this step can cause major usability issues in the final product, since it was not tested with its users before launch. Although companies recognize the importance of user testing and have started to adopt it, the practice requires participants who are willing to take part in the tests, and this is often a major obstacle for UX (User eXperience) researchers, both in recruiting participants and in matching the participants' attributes to the product being created. The aim of this project is to develop a web application that unifies the steps involved in carrying out usability tests, from their creation to their execution, with a special focus on the selection of participants, which is the main problem to be solved. The research hypothesis is that using other platforms' APIs (Application Programming Interfaces) to recruit testers for usability tests is an efficient way of finding testers from a wide variety of market niches, which will be confirmed or refuted by the end of the project. In the project's initial phase, the state of the art will be studied in order to gain a deeper understanding of the UX field and usability testing, as well as of the applications currently on the market for managing usability tests and of potential technologies for this project's development. Then, possible solutions for approaching the problem will be described, and a value analysis will be carried out focusing on the project's strategic and business concepts, namely its value proposition. At this stage, a solution to the problem will be chosen, taking into account factors such as the time, suitability, and complexity of each one. As a result of the previous steps, an initial concept for the UI (User Interface) of the chosen solution will be sketched, and its usability will be tested in order to find and fix any issues before moving on to the final design. After this phase, the solution will be implemented, and the approach taken, including the technology used, the code architecture, and the documentation, will be described. Accordingly, the project will be tested and evaluated again after its implementation phase, in order to assess how well its requirements were fulfilled and to identify potential problems the testers may have run into, which will have to be analyzed and considered for fixing at a later stage. Finally, thorough conclusions about the project will be drawn, including those regarding the challenges and limitations faced, the objectives achieved, and, lastly, the work to be developed in the future.
    With the growth of web and mobile applications, usability testing has become more present in organizations and has shown a positive impact on how well their products resonate with their intended users. Skipping this step can cause major usability problems in the final product, since it was not tested with the target audience before launch. Although companies understand the importance of usability tests and have started to use this method, such tests require participants who are willing to take part in them (known as evaluators or testers), and this is often a major obstacle for UX (User eXperience) researchers, not only in finding them but also in matching their attributes to the product to be created. The objective of this project is to develop a web application that unifies the steps involved in running usability tests, from their creation to their execution, with a special focus on the selection of participants, which is the problem to be solved. The application will have its main features for creating and sharing tests, as well as for selecting participants, in working order, and may later be developed in full, that is, with additional extra features useful for running these tests. The research hypothesis is that using other platforms' APIs (Application Programming Interfaces) to recruit evaluators for usability tests is an efficient way of finding testers from the most diverse market niches, which will be confirmed or refuted at the end of the project. In the initial phase of the project, the state of the art will be studied in order to obtain deeper knowledge of the UX field and usability testing (answering questions such as: what UX design is, what usability testing is and why it matters, what types of usability tests exist, when they are conducted, and how many participants are needed), as well as to understand the applications currently on the market for managing these tests (identifying the competition, the features they offer, and the opportunity for this project) and, finally, potential technologies for this project's development (including possible external application APIs that could be used to find evaluators). Next, possible solutions for approaching the problem will be planned, including a complete solution (which will not be implemented in this project due to its complexity and the time it requires) and three other simplified solutions that can be implemented; this makes it possible, should something unforeseen in the development phase prevent the chosen solution from being implemented in time, to implement one of the other solutions specified here, since they vary in the number of requirements they demand. A value analysis will also be carried out, focusing on the project's strategic and business concepts, namely the opportunity, a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis, the value proposition, the business model, and the requirements to be taken into account to satisfy customers' wishes, using a technique called Quality Function Deployment (QFD). At this stage, a solution to the problem will also be chosen, taking into account factors such as the time, suitability, and complexity of each one; the Analytic Hierarchy Process (AHP) method will be used for this. As a result of the previous steps, the solution to the problem will be designed. First, the types of user who can register in the application will be defined, and a distinct navigation map will be sketched for each of them. Second, an initial concept for the UI (User Interface) of the chosen solution will be sketched, on which a usability test, the System Usability Scale (SUS), will then be conducted so that problems can be identified and fixed before the final version. In the final version of the design, a visual identity will also be built: a name will be chosen for the application and its logo designed, and the colour palette and typography to be used will be defined, keeping them consistent across the application's pages. After this phase, the solution will be implemented: the technology to be used will be chosen and justified; the implementation process will be described, namely the definition of requirements and the use of a version control system; the code architecture and its documentation will be explained, with a focus on the project's special cases; and, lastly, the final application will be made available. The project will then be tested and evaluated again after its implementation phase, in order to rate the success of the implementation of its requirements and to identify potential problems that the testers may have encountered, which will have to be analyzed and considered for fixing at a later stage. For this, a QEF (Quantitative Evaluation Framework) and a feedback form will be used, in order to obtain a quantitative and qualitative evaluation of the developed application. Finally, thorough conclusions about the project will be drawn, including those regarding the challenges and limitations faced, the objectives achieved (also focusing on justifying the hypothesis presented), and, lastly, the work to be developed in the future.
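
    The research hypothesis above turns on recruiting testers through other platforms' APIs. As a rough sketch only: the endpoint URL, response shape, and attribute names below are invented for the example, since the abstract does not name a specific provider, and the final filter simply keeps candidates whose profile matches every required attribute.

        import requests  # third-party HTTP client (pip install requests)

        # Hypothetical recruitment endpoint; the URL and response format are assumptions.
        RECRUITMENT_API = "https://api.example-recruiter.com/v1/testers"

        def find_testers(required: dict, limit: int = 10) -> list[dict]:
            """Ask an external platform for testers matching the required attributes."""
            response = requests.get(RECRUITMENT_API, params={**required, "limit": limit}, timeout=10)
            response.raise_for_status()
            candidates = response.json()  # assumed: a list of tester profile dicts
            # Keep only candidates whose profile satisfies every required attribute.
            return [c for c in candidates if all(c.get(k) == v for k, v in required.items())]

        if __name__ == "__main__":
            for tester in find_testers({"country": "PT", "device": "mobile", "age_group": "25-34"}):
                print(tester.get("id"), tester.get("country"), tester.get("device"))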

    Inexact Mapping of Short Biological Sequences in High Performance Computational Environments

    Bioinformatics is the application of computer science to the management and analysis of biological data. Starting in 2005, with the appearance of new-generation DNA sequencers, what is known as Next Generation Sequencing (NGS) emerged. A single biological experiment run on an NGS sequencing machine can easily produce hundreds of gigabytes or even terabytes of data and, depending on the technique chosen, the process can be completed in a few hours or days. The availability of affordable local resources, such as multi-core processors or the new graphics cards designed for general-purpose computation (GPGPU, General Purpose Graphic Processing Unit), is a great opportunity for tackling these problems. A frequently addressed topic at present is the alignment of DNA sequences. In bioinformatics, alignment allows two or more sequences of DNA, RNA, or primary protein structures to be compared, highlighting their regions of similarity. Such similarities may indicate functional or evolutionary relationships between the genes or proteins queried. Moreover, the existence of similarities between the sequences of a patient and of another individual with a detected genetic disease could be used effectively in the field of diagnostic medicine. The problem around which this doctoral thesis revolves is the localization of short sequence fragments within DNA, known as sequence mapping. Such mapping must tolerate errors, so that sequences can be mapped even in the presence of genetic variability or read errors. Several techniques exist for mapping, but since the appearance of NGS the most prominent has been the search over prefixes indexed and grouped by means of the Burrows-Wheeler transform [28] (BWT hereafter). This transform was originally employed in data compression techniques, as in the bzip2 algorithm; its use as a tool for indexing and subsequent searching of information is more recent [22]. Its advantage is that its computational complexity depends only on the length of the sequence to be mapped. On the other hand, a large number of alignment techniques are based on dynamic programming algorithms, either Smith-Waterman or hidden Markov models. These provide greater sensitivity, allowing a larger number of errors, but their computational cost is higher and depends on the size of the sequence multiplied by that of the reference string. Many tools combine a first BWT-based search phase for candidate alignment regions with a second local-alignment phase in which strings are mapped using Smith-Waterman or HMMs. When mapping with few allowed errors, a second phase with a dynamic programming algorithm is too costly, so an inexact search based solely on the BWT can be more efficient. The main motivation of this doctoral thesis is the implementation of an inexact search algorithm based solely on the BWT, adapted to modern parallel architectures, both on CPUs and on GPGPUs. The algorithm constitutes a new branch-and-bound method adapted to genomic information.
    During the research stay, hidden Markov models will be studied and an implementation will be developed on the GTA (Generate, Test, and Aggregate) functional computation model, together with the shared-memory and distributed-memory parallelization of that functional programming platform. Salavert Torres, J. (2014). Inexact Mapping of Short Biological Sequences in High Performance Computational Environments [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/43721
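
    The BWT-based search that the thesis builds on can be illustrated with a minimal exact backward-search sketch; the toy reference, the naive occurrence counting, and the function names below are choices made for the example (a real FM-index precomputes the occurrence table), and the thesis' inexact branch-and-bound extension is not shown.

        # Minimal FM-index backward search over a toy reference; the thesis builds an
        # inexact (error-tolerant) branch-and-bound search on top of this same primitive.

        def bwt(text: str) -> str:
            text += "$"
            sa = sorted(range(len(text)), key=lambda i: text[i:])
            return "".join(text[i - 1] for i in sa)

        def backward_search(bwt_str: str, pattern: str) -> int:
            """Return how many times `pattern` occurs in the indexed reference."""
            # C[ch]: number of characters in the reference strictly smaller than ch.
            counts = {}
            for ch in bwt_str:
                counts[ch] = counts.get(ch, 0) + 1
            c, total = {}, 0
            for ch in sorted(counts):
                c[ch] = total
                total += counts[ch]

            def occ(ch: str, i: int) -> int:
                # Occurrences of ch in bwt_str[:i] (a real index precomputes this table).
                return bwt_str[:i].count(ch)

            lo, hi = 0, len(bwt_str)          # current suffix-array interval
            for ch in reversed(pattern):       # cost depends only on the pattern length
                if ch not in c:
                    return 0
                lo = c[ch] + occ(ch, lo)
                hi = c[ch] + occ(ch, hi)
                if lo >= hi:
                    return 0
            return hi - lo

        if __name__ == "__main__":
            reference = "ACGTACGTAC"
            index = bwt(reference)
            print(backward_search(index, "ACGT"))  # 2
            print(backward_search(index, "GGG"))   # 0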

    A Dynamic Scaling Methodology for Improving Performance of Big Data Systems

    The continuous growth of data volume in fields such as healthcare, the sciences, economics, and business has caused an overwhelming flow of data in the last decade. This flow has raised challenges in processing, analyzing, and storing data, which leads many systems to suffer performance problems. Poor system performance has negative impacts such as delays, unprocessed data, and increased response times. Processing huge amounts of data demands a powerful computational infrastructure to ensure that data processing and analysis succeed [7]. However, the architectures of these systems are not suited to processing that quantity of data, which calls for a methodology to improve the performance of systems that handle massive amounts of data. This thesis presents a novel dynamic scaling methodology to improve the performance of big data systems. The dynamic scaling methodology scales the system up based on several aspects drawn from the big data perspective. These aspects are used by the helper project algorithm, which is designed to divide a task into small chunks to be processed by the system. These small chunks run on several virtual machines in parallel to improve the system's runtime performance. In addition, the dynamic scaling methodology does not require many modifications to the system it is applied to, which makes it easy to use. The methodology improves the performance of the big data system significantly and, as a result, provides a solution for performance failures in systems that process huge amounts of data. This study would be beneficial to IT researchers who focus on the performance of big data systems.
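
    As an illustration of the chunking idea described above: the thesis' helper project algorithm and its scaling criteria are not reproduced here; the sketch below merely splits a workload into chunks and processes them in parallel, with a local process pool standing in for the virtual machines, and the chunk size, worker count, and per-chunk work chosen arbitrarily.

        from concurrent.futures import ProcessPoolExecutor

        def process_chunk(chunk: list[int]) -> int:
            # Stand-in for the real per-chunk work (parsing, aggregation, etc.).
            return sum(x * x for x in chunk)

        def split_into_chunks(data: list[int], chunk_size: int) -> list[list[int]]:
            return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

        def run(data: list[int], workers: int = 4, chunk_size: int = 10_000) -> int:
            chunks = split_into_chunks(data, chunk_size)
            # Each chunk could equally be dispatched to a separate virtual machine;
            # a local process pool stands in for that here.
            with ProcessPoolExecutor(max_workers=workers) as pool:
                partials = list(pool.map(process_chunk, chunks))
            return sum(partials)

        if __name__ == "__main__":
            print(run(list(range(100_000))))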

    Development of a custom scripting language and a custom scripting system

    The problem addressed by this project lies in programming territory. The learning curve for programming effectively at a professional level is excruciatingly steep, in large part due to the complexity of the environment programmers work in. This environment involves both hardware and software, each with its own set of specifications and protocols that must be learned in order to be effective. The general objectives set for the project are to learn how to single-handedly manage a project from start to finish and to test the skills learned throughout the degree, delivering a fully functional product at the end. The scope only considers developing a functional scripting-language prototype featuring all the base functionality, and only the base functionality, expected of a state-of-the-art language. This decision was made considering that a commercial product of the same type requires an amount of work that is unfeasible for a four-month project with a single developer in charge. The methodology chosen for this project is the Iterative Model (an SDLC model), meaning the project was developed in cycles, after each of which the product expanded and became more complex. Tool-wise, Visual Studio was chosen as the main IDE and the project was developed in C. The end result of this project is, on one hand, an Interpreter prototype that closely follows the clox implementation proposed in Crafting Interpreters, which can be fed one line of source code at a time and outputs the corresponding operations, and, on the other hand, a Scripting System that can read Script Files and execute the code within. With the completion of the project, it was concluded that the product has significant potential as a learning tool for deepening one's understanding of programming and programming languages alike. For those interested in the inner workings of this project and its result, the links to the repositories can be found in the Links section.
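
    The project itself is written in C and follows the clox interpreter from Crafting Interpreters; purely as an illustration of the structure such an interpreter shares, the sketch below shows a stack-based bytecode dispatch loop in Python with a handful of invented opcodes, not the project's code.

        # Tiny stack-based VM sketch illustrating the dispatch loop used by
        # clox-style interpreters; opcodes and the example program are invented.
        OP_CONST, OP_ADD, OP_MUL, OP_PRINT = range(4)

        def run(chunk: list, constants: list) -> None:
            stack, ip = [], 0
            while ip < len(chunk):
                op = chunk[ip]; ip += 1
                if op == OP_CONST:
                    stack.append(constants[chunk[ip]]); ip += 1  # operand = constant index
                elif op == OP_ADD:
                    b, a = stack.pop(), stack.pop(); stack.append(a + b)
                elif op == OP_MUL:
                    b, a = stack.pop(), stack.pop(); stack.append(a * b)
                elif op == OP_PRINT:
                    print(stack.pop())

        if __name__ == "__main__":
            # Bytecode for: print (1 + 2) * 3
            constants = [1, 2, 3]
            code = [OP_CONST, 0, OP_CONST, 1, OP_ADD, OP_CONST, 2, OP_MUL, OP_PRINT]
            run(code, constants)  # prints 9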

    Telehealth Sensor Authentication Through Memory Chip Variability

    In light of the COVID-19 worldwide pandemic, the need for secure and readily available remote patient monitoring has never been more important. Rural and low-income communities in particular have been severely impacted by the lack of accessibility to in-person healthcare. This has created the need for access to remote patient monitoring and virtual health visits in order to provide greater accessibility to premier care. In this paper, we propose hardware security primitives as a viable solution to meet the security challenges of the telehealth market. We have created a novel solution, called the High-Low (HiLo) method, that generates physical unclonable function (PUF) signatures based on process variation within flash memory in order to uniquely identify and authenticate remote sensors. The HiLo method consumes 20× less power than conventional authentication schemes, has an average latency of only 39 ms for signature generation, and can be readily implemented through firmware on ONFI 2.2 compliant off-the-shelf NAND flash memory chips. The HiLo method generates 512-bit signatures with an average error rate of 5.9 × 10⁻⁴, while also adapting for flash chip aging. Due to its low latency, low error rate, and high power efficiency, we believe that the HiLo method could help progress the field of remote patient monitoring by accurately and efficiently authenticating remote health sensors.
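
    The HiLo method's internals (its ONFI command sequences, cell selection, and aging adaptation) are not described in the abstract, so the sketch below is only a generic illustration of the broader PUF idea it relies on: repeated noisy reads of cells whose behaviour is fixed by process variation, with a per-cell majority vote to obtain a repeatable signature. All names and numbers are invented, and real flash reads are replaced by a simulation.

        import random

        def read_cells(n_cells: int, bias: list[float]) -> list[int]:
            # Stand-in for one round of raw flash reads; `bias` models process variation.
            return [1 if random.random() < p else 0 for p in bias]

        def signature(n_cells: int = 512, rounds: int = 15, seed: int = 7) -> str:
            random.seed(seed)
            # Per-cell probability of reading 1, fixed by manufacturing variation.
            bias = [random.random() for _ in range(n_cells)]
            votes = [0] * n_cells
            for _ in range(rounds):
                for i, bit in enumerate(read_cells(n_cells, bias)):
                    votes[i] += bit
            # Majority vote per cell suppresses read noise and yields a repeatable bit.
            bits = ["1" if v > rounds // 2 else "0" for v in votes]
            return "".join(bits)

        if __name__ == "__main__":
            print(signature()[:64])  # first 64 bits of the 512-bit signature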

    Search-Based Information Systems Migration: Case Studies on Refactoring Model Transformations

    Information systems are built to last for decades; however, reality suggests otherwise. Companies are often pushed to modernize their systems to reduce costs, meet new policies, improve security, or become more competitive. Model-driven engineering (MDE) approaches have been used in several successful projects to migrate systems. MDE raises the level of abstraction for complex systems by relying on models as first-class entities. These models are maintained and transformed using model transformations (MT), which are expressed by means of transformation rules that transform models from source to target meta-models. The migration process for large information systems may take years, so many changes will be introduced to the transformations to reflect new business requirements, fix bugs, or match updated meta-models. The quality of MT should therefore be continually checked and improved during the evolution process to avoid future technical debt. Most MT programs are written as one large module due to the lack of tool support for refactoring/modularization and regression testing. In object-oriented systems, composition and modularization are used to tackle maintainability and testability, and refactoring is used to improve the non-functional attributes of the software, making it easier and faster for developers to work with and manipulate the code. We therefore proposed an intelligent computational search approach to automatically modularize MT. Furthermore, we took inspiration from a well-defined quality assessment model for object-oriented design to propose a quality assessment model specifically for MT. The results showed a 45% improvement in developers' speed at detecting or fixing bugs, and developers made 40% fewer errors when performing a task with the optimized version. Since refactoring operations change the transformation, it is important to apply regression testing to check their correctness and robustness. We therefore proposed a multi-objective test case selection technique to find the best trade-off between coverage and computational cost. Results showed a drastic speed-up of the testing process while still maintaining good testing performance. A survey with practitioners highlighted the need for such a maintenance and evolution framework to improve the quality and efficiency of the existing migration process. Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/149153/1/Bader Alkhazi Final Dissertation.pdf (Restricted to UM users only)
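
    For the multi-objective test case selection mentioned above, the dissertation uses a search-based algorithm that is not reproduced here; the sketch below only illustrates the underlying trade-off by enumerating subsets of a few hypothetical test cases and keeping the Pareto front over (rule coverage, execution cost). The test names, covered rules, and costs are all invented.

        from itertools import combinations

        # Hypothetical test cases: name -> (covered transformation rules, runtime cost).
        TESTS = {
            "t1": ({"r1", "r2"}, 3.0),
            "t2": ({"r2", "r3", "r4"}, 5.0),
            "t3": ({"r4"}, 1.0),
            "t4": ({"r1", "r5"}, 2.5),
        }

        def evaluate(subset: tuple[str, ...]) -> tuple[int, float]:
            covered = set().union(*(TESTS[t][0] for t in subset)) if subset else set()
            cost = sum(TESTS[t][1] for t in subset)
            return len(covered), cost

        def pareto_front() -> list[tuple[tuple[str, ...], int, float]]:
            candidates = []
            for size in range(len(TESTS) + 1):
                for subset in combinations(TESTS, size):
                    cov, cost = evaluate(subset)
                    candidates.append((subset, cov, cost))
            # Keep subsets not dominated by another with higher coverage and lower cost.
            front = [c for c in candidates
                     if not any(o[1] >= c[1] and o[2] <= c[2] and (o[1], o[2]) != (c[1], c[2])
                                for o in candidates)]
            return sorted(front, key=lambda c: c[2])

        if __name__ == "__main__":
            for subset, cov, cost in pareto_front():
                print(f"cover {cov} rules for cost {cost}: {subset}")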

    Mallipohjainen järjestelmäintegraatio tuotannonohjausjärjestelmille

    Application integration becomes more complex as software becomes more advanced. This thesis investigates the applicability of model-driven application integration methods to the software integration of manufacturing execution systems (MES). The goal was to create a code generator that uses models to generate a working program that transfers data from a MES to another information system. The focus of the implementation was on generality. First, past research on MES was reviewed, the means of integrating it with other information systems were investigated, and the international standard ISA-95 and B2MML as well as model-driven engineering (MDE) were surveyed. Next, requirements were defined for the system and divided into user and developer requirements. A suitable design for a code generator was introduced and then implemented and tested experimentally. The experiment was conducted by reading production data from the database of the MES-like Delfoi Planner and then transforming that data into a B2MML-style XML schema. The experiment verified that the code generator functioned as intended. However, compared to a manually created program, the generated code was longer and less efficient. It should also be considered that adopting MDE methods takes time. Therefore, for MDE to be better than traditional programming, the code generator has to be used multiple times in order to achieve the benefits, and the systems cannot be too time-critical either. Based on the findings, it can be said that model-driven application integration methods can be used to integrate MESs, but there are restrictions.
    System integration becomes more difficult as programs grow more complex. This thesis investigates the applicability of model-based system integration methods to manufacturing execution systems (MES). The goal was to build a code generator that uses models to create a working program that transfers data from an MES to another information system. The implementation focused on generality. The thesis first reviewed earlier research on MES systems and the possibilities for integrating them with other information systems, and also examined the international ISA-95 standard and B2MML as well as model-driven engineering (MDE). Requirements were then defined for the system and divided into user and developer requirements. A design meeting these conditions was produced for the code generator, implemented, and used in experiments. The experiment was carried out by reading production data from the database of the MES-like Delfoi Planner, after which the data was transformed into a B2MML-style XML schema. The experiments showed that the code generator worked as intended. However, it was observed that, compared with a manually implemented program, the generated program was not as efficient and was also longer. It was also noted that adopting MDE methods takes a great deal of time. For MDE to be a better alternative than traditional programming, it would have to be used several times, and the system created with it should not be too time-critical. Based on these observations, model-based system integration methods can be used to integrate MES systems, but there are restrictions.
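
    As a rough illustration of the generator's output step (reading production rows and emitting B2MML-style XML), the sketch below hard-codes two fake rows in place of the Delfoi Planner database and uses element names that only approximate B2MML; it is not the generated code from the thesis.

        import xml.etree.ElementTree as ET

        def fetch_production_rows() -> list[dict]:
            # Stand-in for a query against the MES database.
            return [
                {"id": "ORD-001", "product": "valve-housing", "quantity": 120},
                {"id": "ORD-002", "product": "gear-shaft", "quantity": 45},
            ]

        def to_b2mml_like_xml(rows: list[dict]) -> str:
            # Element names loosely follow B2MML's production-schedule vocabulary.
            root = ET.Element("ProductionSchedule")
            for row in rows:
                req = ET.SubElement(root, "ProductionRequest")
                ET.SubElement(req, "ID").text = row["id"]
                seg = ET.SubElement(req, "SegmentRequirement")
                ET.SubElement(seg, "ProductID").text = row["product"]
                ET.SubElement(seg, "Quantity").text = str(row["quantity"])
            return ET.tostring(root, encoding="unicode")

        if __name__ == "__main__":
            print(to_b2mml_like_xml(fetch_production_rows()))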