
    An Adaptive Design Methodology for Reduction of Product Development Risk

    The interaction of embedded systems with their environment inherently complicates the understanding of requirements and their correct implementation, and product uncertainty is highest during the early stages of development. Design verification is an essential step in the development of any system, especially for embedded systems. This paper introduces a novel adaptive design methodology that incorporates step-wise prototyping and verification. With each adaptive step, the product-realization level is raised while the level of product uncertainty decreases, thereby reducing overall costs. The backbone of this framework is the development of a Domain Specific Operational (DOP) Model and the associated Verification Instrumentation for Test and Evaluation, which is derived from the DOP model. Together they generate functionally valid test sequences for carrying out prototype evaluation. The application of the method is sketched with the help of a 'Multimode Detection Subsystem' case study. Design methodologies can be compared by defining and computing a generic performance criterion such as Average design-cycle Risk; for the case study, computing this criterion shows that the adaptive method reduces product development risk for a small increase in total design-cycle time. (21 pages, 9 figures)
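    A minimal sketch of how an "Average design-cycle Risk"-style criterion could be computed, assuming (the abstract does not give the formula) that per-stage risk is the product of remaining product uncertainty and the cost of a failed verification at that stage; the numbers and the weighting rule are illustrative, not the authors' definition.

```python
# Illustrative sketch: an "Average design-cycle Risk"-style metric.
# Assumption: per-stage risk = remaining uncertainty * cost of a failure
# discovered at that stage; the paper's actual definition may differ.

def average_design_cycle_risk(stages):
    """stages: list of (uncertainty in [0, 1], failure_cost) per design step."""
    risks = [u * cost for u, cost in stages]
    return sum(risks) / len(risks)

# Adaptive flow: each prototype/verification step lowers uncertainty.
adaptive = [(0.9, 10), (0.6, 20), (0.3, 40), (0.1, 80)]
# Conventional flow: uncertainty stays high until late verification.
conventional = [(0.9, 10), (0.85, 20), (0.8, 40), (0.7, 80)]

print(average_design_cycle_risk(adaptive))      # lower average risk
print(average_design_cycle_risk(conventional))  # higher average risk
```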

    Power estimation of an ECDSA core applied in V2X scenarios using heterogeneous distributed simulation

    Embedded systems are steadily growing in complexity and nowadays power consumption additionally plays an important role. Designing and exploring such systems embedded in its environment demand for holistic and efficient simulations. In this work we use a simulation framework based on the HLA (High-Level Architecture) and the modeling tool Ptolemy II to enable complex heterogeneous distributed simulations of embedded systems. In this context, we introduce a co-simulation based power estimation approach by integrating domain-specific simulators as well as off-the-shelf HDL simulator and synthesis tools. This enables cross-domain interaction and generation of realistic on-the-fly stimuli data for Register Transfer Level and Gate Level models as well as the gathering of power estimation data. We apply the framework to a Vehicle-2-X scenario evaluating an ECDSA signature processing core which ensures trustworthiness in vehicular wireless networks. To evaluate dynamic power reduction possibilities on application level we additionally introduce a V2X Message Evaluation technique to reduce signature verification efforts. It shows how realistic on-the-fly stimuli data obtained by the framework can improve the exploration and estimation of dynamic power consumption
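    A hedged sketch of the message-evaluation idea: before spending energy on ECDSA verification, each incoming V2X message is scored for relevance and only relevant messages reach the signature core. The distance-based scoring rule, the 300 m horizon and the threshold below are invented for illustration; the paper's actual evaluation technique is not specified here.

```python
import math

# Sketch: skip ECDSA verification for low-relevance V2X messages to save
# dynamic power. The relevance rule and threshold are illustrative assumptions.

RELEVANCE_THRESHOLD = 0.5

def relevance(own_pos, msg):
    """Crude relevance score: closer senders matter more."""
    dist = math.hypot(msg["x"] - own_pos[0], msg["y"] - own_pos[1])
    return max(0.0, 1.0 - dist / 300.0)  # 300 m horizon (assumed)

def process(own_pos, messages, verify_ecdsa):
    """Verify only messages whose relevance clears the threshold."""
    verified = []
    for msg in messages:
        if relevance(own_pos, msg) >= RELEVANCE_THRESHOLD:
            if verify_ecdsa(msg):      # expensive core invocation
                verified.append(msg)
        # else: drop without verification, saving core activity
    return verified

# Dummy usage: only the nearby sender is verified.
msgs = [{"x": 50, "y": 0, "sig": b""}, {"x": 900, "y": 0, "sig": b""}]
print(process((0, 0), msgs, lambda m: True))
```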

    Design and management of image processing pipelines within CPS : Acquired experience towards the end of the FitOptiVis ECSEL Project

    Cyber-Physical Systems (CPSs) are dynamic and reactive systems interacting with processes, the environment and, sometimes, humans. They are often distributed with sensors and actuators, and are characterized as smart, adaptive and predictive, reacting in real time. Image- and video-processing pipelines are a prime source of environmental information, allowing systems to take better decisions according to what they see. In FitOptiVis, we are therefore developing novel methods and tools to integrate complex image- and video-processing pipelines. FitOptiVis aims to deliver a reference architecture for describing and optimizing quality and resource management for imaging and video pipelines in CPSs, both at design time and at run time. The architecture is concretized in low-power, high-performance, smart components, and in methods and tools for combined design-time and run-time multi-objective optimization and adaptation within system and environment constraints.
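    As a rough illustration of combining design-time and run-time quality/resource management, the sketch below selects, at run time, the highest-quality configuration of a video pipeline that still fits the current power budget. The configuration table and the budget value are invented for illustration and are not FitOptiVis artifacts.

```python
# Illustrative run-time adaptation: pick the best pipeline configuration that
# fits the current resource budget. Values are invented, not FitOptiVis data.

CONFIGS = [  # (name, quality score, power in watts) - assumed design-time data
    ("4k_60fps", 1.00, 12.0),
    ("1080p_60fps", 0.80, 6.5),
    ("1080p_30fps", 0.60, 4.0),
    ("720p_30fps", 0.40, 2.5),
]

def adapt(power_budget_w):
    """Return the highest-quality configuration within the power budget."""
    feasible = [c for c in CONFIGS if c[2] <= power_budget_w]
    if not feasible:
        raise RuntimeError("no configuration fits the budget")
    return max(feasible, key=lambda c: c[1])

print(adapt(5.0))  # -> ('1080p_30fps', 0.6, 4.0)
```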

    Hybrid Multiresolution Simulation & Model Checking: Network-On-Chip Systems

    Designers employ a variety of modeling theories and methodologies to create functional models of discrete network systems. These dynamical models are evaluated using verification and validation (V&V) techniques throughout incremental design stages. Models created for these systems should directly represent their growing complexity with respect to composition and heterogeneity. As in software engineering practice, incremental model design is required for complex system design; as a result, models at early increments are significantly simpler than the real systems. While experimenting (verification or validation) on models at early increments is computationally less demanding, the results of these experiments are less trustworthy and less rewarding. At any increment of design, a set of tools and techniques is required for controlling the complexity of models and experimentation. A complex system such as a Network-on-Chip (NoC) may benefit from incremental design stages. Current design methods for NoC rely on multiple models developed using various modeling frameworks, so it is useful to develop frameworks that can formalize the relationships among these models: fine-grain models are derived from their coarse-grain counterparts, and validation and verification at various design stages, enabled through disciplined model conversion, is very beneficial. In this research, Multiresolution Modeling (MRM) is used for system-level design of NoC. MRM aids in creating a family of models at different levels of scale and complexity with well-formed relationships. In addition, a variant of the Discrete Event System Specification (DEVS) formalism is proposed which supports model checking. Hierarchical models of Network-on-Chip components may be created at different resolutions, while each model can be validated using discrete-event simulation and verified via state exploration. System property expressions are defined in the DEVS language and developed as Transducers, which can be applied seamlessly for both model checking and simulation. The Multiresolution Modeling and the V&V capabilities of this framework complement one another: MRM manages the scale and complexity of models, which in turn can reduce V&V time and effort, and conversely V&V helps ensure the correctness of models at multiple resolutions. The framework is realized by extending the DEVS-Suite simulator, and its applicability is demonstrated on exemplar NoC models.
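    A minimal sketch of the transducer idea: a passive observer attached to a simulation that evaluates a system property over the event trace, usable for both simulation-based validation and state exploration. This is not the DEVS-Suite API; the class, method names and event encoding are assumptions made for illustration.

```python
# Sketch of a property "transducer": observes simulation events and records
# violations. Not the DEVS-Suite API; names are illustrative.

class MaxLatencyTransducer:
    """Checks the property: every packet is delivered within `bound` ticks."""

    def __init__(self, bound):
        self.bound = bound
        self.sent = {}          # packet id -> send time
        self.violations = []

    def on_event(self, time, kind, packet_id):
        if kind == "send":
            self.sent[packet_id] = time
        elif kind == "deliver":
            latency = time - self.sent.pop(packet_id)
            if latency > self.bound:
                self.violations.append((packet_id, latency))

    def holds(self):
        return not self.violations

# Usage against a recorded NoC event trace:
t = MaxLatencyTransducer(bound=10)
for ev in [(0, "send", 1), (4, "deliver", 1), (5, "send", 2), (20, "deliver", 2)]:
    t.on_event(*ev)
print(t.holds(), t.violations)  # -> False [(2, 15)]
```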

    OddAssist - An eSports betting recommendation system

    It is globally accepted that sports betting has been around for as long as sport itself. Back in the 1st century, circuses hosted chariot races and fans would bet on who they thought would emerge victorious. With the evolution of technology, sports evolved and, above all, the bookmakers evolved. Owing to mass digitization, these houses are now available online, from anywhere, which makes this market inherently more tempting. In fact, this transition has propelled sports betting into a multi-billion-dollar industry that can rival the sports industry itself. Similarly, younger generations are increasingly attached to the digital world, including electronic sports (eSports); in fact, young men are more likely to follow eSports than traditional sports. Counter-Strike: Global Offensive, the video game on which this dissertation focuses, is one of the pillars of this industry: during 2022, 15 million dollars were distributed in tournament prizes and viewership peaked at 2 million concurrent viewers. This factor, combined with the digitization of bookmakers, makes the eSports betting market extremely appealing for exploring machine learning techniques, since young people who follow this type of sport also find it easy to bet online. In this dissertation, a betting recommendation system is proposed, implemented, tested and validated which considers the match history of each team, the odds of several bookmakers and the general sentiment of fans in a discussion forum. The individual machine learning models achieved good results by themselves: the match history model reached an accuracy of 66.66% with an expected calibration error of 2.10%, and the bookmaker odds model an accuracy of 65.05% with a calibration error of 2.53%. Combining the models through stacking increased the accuracy to 67.62% but worsened the expected calibration error to 5.19%; merging the datasets and training a new, stronger model on them instead improved the accuracy to 66.81% with an expected calibration error of 2.67%. The solution is thoroughly tested in a betting simulation encompassing 2500 matches. The system's final odd is compared with the bookmakers' odds, the expected long-term return is computed, and a bet is made when it exceeds a certain threshold. This strategy, called positive expected value betting, was applied at multiple thresholds and the results compared. While the stacking solution did not perform well in the betting environment, the match history model prevailed with profits from 8% to 90%, the odds model had profits ranging from 13% to 211%, and the dataset-merging solution profited from 11% to 77%, all depending on the minimum expected value threshold. This work therefore produced several machine learning approaches capable of profiting from Counter-Strike: Global Offensive bets in the long term.

    It is globally accepted that sports betting has existed for as long as sport itself. As early as the first century, circuses hosted chariot races and fans would bet on whoever they thought would emerge victorious, much like today's horse races. With the evolution of technology, sports evolved and, above all, so did the bookmakers. With the wave of mass digitization, these houses became available online, from anywhere, which makes this market inherently more tempting. Indeed, this transition has propelled the sports betting industry into a multi-billion-dollar industry that can now even be compared to the sports industry itself. Similarly, younger generations are increasingly attached to the digital world, including digital sports (eSports). Counter-Strike: Global Offensive, the video game on which this dissertation focuses, is one of the great drivers of this industry: during 2022, 15 million dollars were distributed in tournament prizes and there was a peak of 2 million concurrent viewers. Although this reality is not as pronounced in Portugal, in several countries young adult men are more likely to follow eSports than traditional sports. This factor, combined with the digitization of bookmakers, makes the eSports betting market very appealing for exploring machine learning techniques, since young people who follow this type of sport find it easy to bet online. This dissertation proposes, implements, tests and validates a betting recommendation system that considers the match history of each team, the odds of several bookmakers and the general sentiment of fans in a discussion forum, HLTV. To this end, three machine learning systems were initially developed. To evaluate them, the period from October 2020 to March 2023 was considered, corresponding to 2500 matches. However, over such a long test period there is considerable variation in the teams' competitiveness, so to prevent the models from becoming obsolete they were re-trained at least once a month throughout the test period. The first machine learning system predicts from previous results, i.e. the history of games between the teams. The best solution incorporated the players into the prediction, together with the team ranking, giving more weight to the most recent games; this approach, using logistic regression, achieved an accuracy of 66.66% with an expected calibration error of 2.10%. The second system compiles the odds of the various bookmakers and makes predictions based on patterns in their variations. Incorporating the individual bookmakers achieved an accuracy of 65.88% using logistic regression, but this model was worse calibrated than the one using the average of the odds with a gradient boosting machine, which exhibited an accuracy of 65.06% but better calibration metrics, with an expected error of 2.53%. The third system is based on fan sentiment in the HLTV forum. First, GPT 3.5 is used to extract the sentiment of each comment, with an overall accuracy of 84.28%; considering only the comments classified as conclusive, the accuracy is 91.46%. Once classified, the comments are passed to a support vector machine model that incorporates the commenter and their accuracy in previous matches. This solution correctly predicted only 59.26% of cases, with an expected calibration error of 3.22%. To aggregate the predictions of these three models, two approaches were tested. First, training a new model on the predictions of the others (stacking) was tried, yielding an accuracy of 67.62% but an expected calibration error of 5.19%. In the second approach, the data used to train the three individual models are aggregated and a new model is trained on that more complex dataset. This approach, using a support vector machine, obtained a lower accuracy, 66.81%, but a lower expected calibration error, 2.67%. Finally, the approaches are put to the test in a betting simulator, where each system makes a prediction and compares it with the odds offered by the bookmakers. The simulation is run for several minimum expected return thresholds, with the systems only betting when the expected rate of return of the odd exceeds the threshold. This final odd is compared with the bookmakers' odds and, if a bookmaker offers a higher odd, a bet is placed. This strategy is called positive expected value betting, i.e. betting on odds that are too high relative to the probability of the outcome occurring, which generates profits in the long run. In this simulation, the best results for a minimum threshold of 5% came from the models built from the bookmakers' odds, with profits between 13% and 211%; the match history model profited between 8% and 90%; and, finally, the combined model, with profits between 11% and 77%. This work thus produced several machine-learning-based systems capable of profiting in the long term from betting on Counter-Strike: Global Offensive.
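    The positive expected value strategy described above reduces to a simple rule: with a model probability p and a decimal bookmaker odd o, the expected profit of a unit bet is p*o - 1, and a bet is placed only when that value exceeds the chosen threshold. A minimal sketch follows; the dissertation's simulator is of course more elaborate, and the sample data are invented.

```python
# Sketch of positive expected-value betting: bet only when the model's
# probability times the best available decimal odd clears a threshold.

def expected_value(p_win, decimal_odd):
    """Expected profit of a 1-unit bet: p*o - 1."""
    return p_win * decimal_odd - 1.0

def simulate(matches, threshold=0.05, stake=1.0):
    """matches: iterable of (model_p, best_odd, team_won: bool)."""
    profit = 0.0
    for p, odd, won in matches:
        if expected_value(p, odd) > threshold:
            profit += stake * (odd - 1.0) if won else -stake
    return profit

# Invented sample: three matches at a 5% minimum expected-value threshold.
sample = [(0.60, 1.9, True), (0.55, 2.1, False), (0.70, 1.6, True)]
print(simulate(sample, threshold=0.05))  # -> 0.5 units of profit
```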

    Journal of Telecommunications and Information Technology, 2008, nr 2

    Quarterly.

    Real-time multi-domain optimization controller for multi-motor electric vehicles using automotive-suitable methods and heterogeneous embedded platforms

    Chapters 2, 3 and 7 are subject to confidentiality by the author. 145 p. In this Thesis, an elaborate control solution combining Machine Learning and Soft Computing techniques has been developed, targeting a challenging vehicle dynamics application: optimizing the torque distribution across the wheels of a vehicle with four independent electric motors. The technological context that motivated this research brings together potential, and challenges, from multiple domains: new automotive powertrain topologies with increased degrees of freedom and controllability, which can be approached with innovative Machine Learning algorithm concepts and implemented by exploiting the computational capacity of modern heterogeneous embedded platforms and automated toolchains. The complex relations among these three domains, which enable the potential for great enhancements, contrast with the fourth domain in this context: challenging constraints brought by industrial aspects and safety regulations. The innovative control architecture that has been conceived combines Neural Networks as a Virtual Sensor for unmeasurable forces with a multi-objective optimization function driven by Fuzzy Logic, which defines priorities based on the real-time driving situation. The fundamental principle is to enhance vehicle dynamics by implementing a Torque Vectoring controller that prevents wheel slip using the inputs provided by the Neural Network. Complementary optimization objectives are efficiency, thermal stress and smoothness. Safety-critical concerns are addressed through architectural and functional measures. Two main phases can be identified across the activities and milestones of this work. In a first phase, a baseline Torque Vectoring controller was implemented on an embedded platform and, benefiting from a seamless transition using Hardware-in-the-Loop, integrated into a real Motor-in-Wheel vehicle for race track tests. Having validated the concept, framework, methodology and models, a second, simulation-based phase developed the more sophisticated controller targeting a more capable vehicle, leading to the final solution of this work. This concept was further evolved to support a joint research work which led to outstanding FPGA- and GPU-based embedded implementations of Neural Networks. Ultimately, the different building blocks that compose this work have shown results that met or exceeded expectations, both on the technical and the conceptual level. The highly non-linear, multi-variable (and multi-objective) control problem was tackled: Neural Network estimations are accurate; performance metrics in general, and vehicle dynamics and efficiency in particular, are clearly improved; Fuzzy Logic and optimization behave as expected; and efficient embedded implementation is shown to be viable. Consequently, the proposed control concept, and the surrounding solutions and enablers, have proven their qualities with respect to functionality, performance, implementability and industry suitability. The most relevant contributions are, firstly, each of the algorithms and functions implemented in the controller solutions and, ultimately, the whole control concept itself with the architectural approaches it involves. Multiple enablers exploitable for future work have also been provided, as well as an illustrative insight into the intricacies of a vivid technological context, showcasing how they can be harmonized. 
    Furthermore, multiple international activities in both academic and professional contexts, which have provided enrichment as well as acknowledgement for this work, have led to several publications, two high-impact journal papers and collateral work products of diverse nature.
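    A highly simplified sketch of the control idea described above: a neural network acts as a virtual sensor for per-wheel slip-related quantities, fuzzy-style rules turn the driving situation into objective weights, and torque is redistributed across the four motors accordingly. Every function below is an illustrative stand-in, not the thesis' controller, and the heuristics are assumptions.

```python
# Very simplified torque-vectoring sketch: NN virtual sensor -> fuzzy-style
# priority weights -> per-wheel torque distribution. Illustrative only.

def virtual_sensor(wheel_speeds, vehicle_speed):
    """Stand-in for the NN virtual sensor: estimate per-wheel slip ratio."""
    return [(w - vehicle_speed) / max(vehicle_speed, 0.1) for w in wheel_speeds]

def fuzzy_priorities(mean_slip):
    """Crude stand-in for the fuzzy block: trade traction vs efficiency."""
    traction = min(1.0, abs(mean_slip) * 10.0)   # more slip -> prioritize grip
    return {"traction": traction, "efficiency": 1.0 - traction}

def distribute(total_torque, slips, weights):
    """Shift torque away from slipping wheels, scaled by traction priority."""
    raw = [max(0.0, 1.0 - weights["traction"] * abs(s)) for s in slips]
    scale = total_torque / sum(raw)
    return [r * scale for r in raw]

slips = virtual_sensor([20.5, 20.4, 22.0, 20.6], vehicle_speed=20.0)
w = fuzzy_priorities(sum(slips) / 4.0)
print(distribute(400.0, slips, w))  # Nm per wheel (assumed front-left-first order)
```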