
    SystemC-based electronic system-level design space exploration environment for dedicated heterogeneous multi-processor systems

    Abstract This work addresses the problem of the Electronic System-Level (ESL) HW/SW co-design of dedicated electronic digital systems based on heterogeneous multi-processor architectures. In particular, it presents a prototype SystemC-based environment that exploits a Design Space Exploration (DSE) approach able to suggest an HW/SW partitioning of the system specification and a mapping onto an automatically defined architecture. The core of the paper consists of the description of the reference HW/SW co-design methodology and of the main design issues related to the developed DSE SW tools, supported by two reference use cases that illustrate the role of the DSE step in the whole design flow.
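    As a minimal illustration of what a DSE step does, the sketch below exhaustively enumerates task-to-processor mappings under a toy cost model and keeps the cheapest one. All names and numbers (N_TASKS, N_PROCS, exec_cost) are invented for the example; a real DSE engine uses far richer models and search heuristics.

        /* Toy exhaustive DSE loop: try every task-to-processor mapping,
         * score it, keep the best. Purely illustrative. */
        #include <stdio.h>

        #define N_TASKS 4
        #define N_PROCS 2   /* e.g., 0 = CPU (SW), 1 = HW accelerator */

        /* Hypothetical cost of running task t on processor p. */
        static const double exec_cost[N_TASKS][N_PROCS] = {
            {5.0, 1.0}, {3.0, 4.0}, {2.0, 0.5}, {4.0, 2.5}
        };

        int main(void) {
            int best_map = -1;
            double best_cost = 1e18;
            /* With N_PROCS = 2, a mapping is a 4-bit mask: bit t picks
             * the processor for task t; 16 = 2^N_TASKS mappings total. */
            for (int m = 0; m < 16; m++) {
                double load[N_PROCS] = {0};
                for (int t = 0; t < N_TASKS; t++) {
                    int p = (m >> t) & 1;
                    load[p] += exec_cost[t][p];
                }
                /* Makespan-style cost: the most loaded processor dominates. */
                double cost = load[0] > load[1] ? load[0] : load[1];
                if (cost < best_cost) { best_cost = cost; best_map = m; }
            }
            printf("best mapping 0x%x, cost %.1f\n", best_map, best_cost);
            return 0;
        }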

    Optimal Selection of Preemption Points to Minimize Preemption Overhead

    A central issue in verifying the schedulability of hard real-time systems is the correct evaluation of task execution times. These values are significantly influenced by the preemption overhead, which mainly includes the cache-related delays and the context switch times introduced by each preemption. Since such an overhead significantly depends on the particular point in the code where preemption takes place, this paper proposes a method for placing suitable preemption points in each task in order to maximize the chances of finding a schedulable solution. In a previous work, we presented a method for the optimal selection of preemption points under the restrictive assumption of a fixed preemption cost, identical for each preemption point. In this paper, we remove that assumption, exploring a more realistic and complex scenario where the preemption cost varies throughout the task code. Instead of modeling the problem with an integer programming formulation, which has exponential worst-case complexity, we derive an optimal algorithm with linear time and space complexity. This somewhat surprising result allows selecting the best preemption points even in complex scenarios with a large number of potential preemption locations. Experimental results are also presented to show the effectiveness of the proposed approach in increasing system schedulability.
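    To make the selection problem concrete, here is a small illustrative dynamic program (an O(n^2) sketch, not the paper's linear-time algorithm) that picks preemption points of minimum total cost while keeping every non-preemptive region at or below a bound Q. All task data (block WCETs, per-point costs, Q) are invented for the example.

        /* best[i] = minimal total preemption cost of a feasible selection
         * whose last enabled point is right after basic block i; the point
         * after the final block is the task end and costs nothing. */
        #include <stdio.h>

        #define NB 6            /* number of basic blocks */
        #define Q  10.0         /* max non-preemptive execution allowed */

        int main(void) {
            double wcet[NB] = {4, 3, 5, 2, 6, 3};           /* per-block WCET */
            double cost[NB] = {2.0, 0.5, 3.0, 0.8, 1.5, 0.0};
            double best[NB];
            for (int i = 0; i < NB; i++) {
                /* Option 1: no earlier point, blocks 0..i run as one region. */
                double region = 0;
                for (int k = 0; k <= i; k++) region += wcet[k];
                best[i] = (region <= Q) ? cost[i] : -1.0;   /* -1 = infeasible */
                /* Option 2: previous point after block j; region = blocks j+1..i. */
                region = 0;
                for (int j = i - 1; j >= 0; j--) {
                    region += wcet[j + 1];
                    if (region > Q) break;
                    if (best[j] >= 0) {
                        double c = best[j] + cost[i];
                        if (best[i] < 0 || c < best[i]) best[i] = c;
                    }
                }
            }
            if (best[NB - 1] < 0) printf("no feasible placement for Q=%.1f\n", Q);
            else printf("minimal total preemption cost: %.1f\n", best[NB - 1]);
            return 0;
        }

    On these numbers the program selects the points after blocks 1 and 3 (total cost 1.3), splitting the task into regions of length 7, 7, and 9, all within Q = 10.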

    Comparison of embedded and added motor imagery training in patients after stroke: Results of a randomised controlled pilot trial

    Background: Motor imagery (MI), when combined with physiotherapy, can offer functional benefits after stroke. Two MI integration strategies exist: added and embedded MI. Both approaches were compared when learning a complex motor task (MT): 'Going down, laying on the floor, and getting up again'. Methods: Outpatients after first stroke participated in a single-blinded, randomised controlled trial with MI embedded into physiotherapy (EG1), MI added to physiotherapy (EG2), and a control group (CG). All groups participated in six physiotherapy sessions. The primary study outcome was the time (in seconds) to perform the motor task at pre- and post-intervention. Secondary outcomes were level of help needed, stages of MT completion, independence, balance, fear of falling (FOF), and MI ability. Data were collected four times: twice during a one-week baseline phase (BL, T0), following the two-week intervention (T1), and after a two-week follow-up (FU). Analysis of variance was performed. Results: Thirty-nine outpatients were included (12 females; age: 63.4 ± 10 years; time since stroke: 3.5 ± 2 years; 29 with an ischemic event). All were able to complete the motor task using the standardised 7-step procedure and showed reduced FOF at T0, T1, and FU. Times to perform the MT at baseline were 44.2 ± 22 s, 64.6 ± 50 s, and 118.3 ± 93 s for EG1 (N = 13), EG2 (N = 12), and CG (N = 14), respectively. All groups showed significant improvement in time to complete the MT (p < 0.001) and in the degree of help needed to perform the task: minimal assistance to supervision (CG) and independent performance (EG1+2). No between-group differences were found. Only EG1 demonstrated changes in MI ability over time, with the visual indicator increasing from T0 to T1 and decreasing from T1 to FU; the kinaesthetic indicator increased from T1 to FU. Patients indicated that they valued the MI training and continued using MI for other difficult-to-perform tasks. Conclusions: Embedded or added MI training combined with physiotherapy seems to be feasible and beneficial for learning the MT with emphasis on getting up independently. Based on their baseline level, the CG had the highest potential to improve outcomes. A patient study with 35 patients per group could give a conclusive answer as to the superior MI integration strategy. The research project was partially funded by the Gottfried und Julia Bangerter-Rhyner Foundation.

    Novel Control Flow Checking Implementations for Automotive Software

    Safety-critical applications shall be implemented on highly dependable systems, and part of their reliability rests on checking that the software is executed correctly. Various techniques are available for this purpose, such as Control Flow Checking (CFC). Many CFC algorithms can be found in the literature, but their detection performance is assessed in theoretical scenarios, when implemented in Assembly language. The international standard on functional safety for automotive applications is ISO 26262. It mandates development in high-level programming languages and the computation of the Diagnostic Coverage (DC), which measures the effectiveness of the chosen hardening method in detecting various Failure Modes (FMs). This paper discusses two alternative solutions, one software-only and the other involving customized hardware, that address these concerns: (i) cover the FMs affecting the computation units described by Table 30 of part 11 of ISO 26262, and (ii) guarantee the Freedom From Interference between the hardening method and the monitored entity.
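    For a flavor of what signature-based CFC looks like when written in a high-level language, a minimal sketch in C follows. The signatures, the XOR transition rule, and the identifiers are invented for illustration; they are not the paper's algorithm.

        /* Minimal signature-based control flow checking: each basic block
         * owns a compile-time signature; a runtime variable is updated on
         * every legal transition and verified at block entry. */
        #include <stdio.h>
        #include <stdlib.h>

        static unsigned cfc_sig;                 /* runtime signature */

        static void cfc_enter(unsigned expected) /* check at block entry */
        {
            if (cfc_sig != expected) {           /* illegal control flow */
                fprintf(stderr, "CFC error: expected %u, got %u\n",
                        expected, cfc_sig);
                exit(EXIT_FAILURE);              /* or enter a safe state */
            }
        }

        int main(void) {
            /* Block A: signature 1 */
            cfc_sig = 1; cfc_enter(1);
            int x = 21;

            /* Block B: legal successor of A, signature 2.
             * The transition value 3 is chosen so that 1 ^ 3 == 2. */
            cfc_sig ^= 3;
            cfc_enter(2);
            x *= 2;

            printf("result: %d\n", x);
            return 0;
        }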

    Passive optical network (PON) monitoring using optical coding technology

    Passive optical networks (PONs) appear to be the winning and ultimate technology for future high-capacity fiber-to-the-home deployments. Monitoring of such systems is necessary to guarantee a predetermined quality-of-service level for each customer. Moreover, monitoring considerably reduces capital and operational expenditures (CAPEX and OPEX) for both the network provider and the customers. While PON capacity keeps growing, network operators still lack an efficient and suitable technology for monitoring networks of such high capacity. A variety of solutions has been proposed, but none of them is practical, because of low capacity (number of customers), poor scalability, high complexity, and technological challenges. Most importantly, a desirable monitoring technology should be cost-effective, as the PON market is highly cost-sensitive.
    In this thesis, we consider passive optical coding (OC) technology as a promising solution for the centralized monitoring of a branched optical network such as a PON. In a first step, we develop an expression for the detected monitoring signal and study its statistics. We derive a new explicit expression for the signal-to-interference ratio (SIR) as a performance metric. We consider five different geographic PON distributions and study their effect on the SIR of OC monitoring. In a next step, we generalize our mathematical model and its expressions to monitoring signals detected by a square-law detector with realistic parameters. We then evaluate the theoretical performance of the monitoring technology in terms of signal-to-noise ratio (SNR), signal-to-noise-plus-interference ratio (SNIR), and false-alarm probability. We elaborate on the effect of the transmitted pulse power, the network size, and the coherence of the light source on the performance of one-dimensional (1D) and two-dimensional (2D) codes for OC monitoring. An optimal design is also discussed. Finally, we apply Neyman-Pearson tests to the receiver of our monitoring system and investigate how coding and network size affect the operational expenditures (OPEX) of our monitoring system.
    Although 1D and 2D codes provide acceptable performance, they require encoders with a large number of optical components: they are bulky, lossy, and expensive. We therefore propose a new, simple coding scheme better suited to our monitoring application, which we call periodic coding. By simulation, we evaluate the monitoring performance in terms of SNR for a PON employing this technology. This coding scheme is used in our experimental verification of OC monitoring. We study, experimentally and by simulation, the monitoring of a PON using periodic coding technology, and discuss design issues for periodic coding as well as optimal detection criteria.
    We also develop a sequential maximum-likelihood algorithm with reduced complexity. We conduct experiments to validate our detection algorithm using four periodic encoders that we designed and fabricated. We also run Monte Carlo simulations for realistic geographic PON distributions with randomly located customers. We study the effect of the coverage area and the network size (number of subscribers) on the computational efficiency of our algorithm. We provide a bound on the probability that a given network drives the algorithm to a prohibitive monitoring time, i.e., the timeout probability. Finally, we stress the importance of averaging to overcome power/loss budget restrictions in our monitoring system so as to support larger network sizes and longer fiber reaches. We then upgrade our experimental setup to demonstrate the monitoring of a PON with 16 customers. We use a directly modulated laser operating at 1 GHz to generate the probe pulses. The data measured by the experimental setup are processed by the MLSE algorithm to detect and localize the customers. Three different PON deployments are realized. We demonstrate more rigorous monitoring for networks with a multi-level geographic distribution. We also study the loss budget of our setup to support higher network capacities. Finally, we study the total allowable loss budget for operating the monitoring system in the 1650 nm band as a function of the transceiver specifications. In particular, the total loss budget limit is plotted as a function of the transimpedance amplifier (TIA) gain and the resolution of the analog-to-digital converter (ADC). Furthermore, we investigate the trade-off between reach distance and capacity (split size at the remote node) in our monitoring system.
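    As a rough illustration of the Neyman-Pearson thresholding step at the monitoring receiver, the sketch below computes the false-alarm probability of a simple amplitude threshold under additive Gaussian noise. The noise level and thresholds are invented, and the formula P_FA = Q(T/sigma) = 0.5*erfc(T/(sigma*sqrt(2))) is the standard Gaussian tail, not a result from the thesis.

        /* False-alarm probability of an amplitude threshold T when the
         * "no echo" hypothesis is zero-mean Gaussian noise with std sigma.
         * Compile with -lm (erfc is in C99 <math.h>). */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            double sigma = 0.1;                       /* noise std (a.u.) */
            for (double T = 0.1; T <= 0.5; T += 0.1) {
                double pfa = 0.5 * erfc(T / (sigma * sqrt(2.0)));
                printf("T = %.1f  ->  P_FA = %.3e\n", T, pfa);
            }
            return 0;
        }

    A Neyman-Pearson receiver runs this logic in reverse: fix the tolerable P_FA, then pick the smallest threshold that achieves it, maximizing detection probability.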

    Design methodologies for space systems in a System of Systems (SoS) architecture

    The abstract is provided in the attachment.

    Explicit Building-Block Multiobjective Genetic Algorithms: Theory, Analysis, and Development

    This dissertation research emphasizes the performance of explicit Building Block (BB) based MOEAs and their detailed symbolic representation. An explicit BB-based MOEA for solving constrained and real-world MOPs is developed: the Multiobjective Messy Genetic Algorithm II (MOMGA-II), which is designed to validate symbolic BB concepts. The MOMGA-II demonstrates that explicit BB-based MOEAs provide insight into solving difficult MOPs that is generally not realized through the use of implicit BB-based MOEA approaches. This insight is necessary to increase the effectiveness of all MOEA approaches. In order to increase MOEA computational efficiency, parallelization of MOEAs is addressed. Communication between processors in a parallel MOEA implementation is extremely important; hence, innovative migration and replacement schemes for use in parallel MOEAs are detailed and tested. These parallel concepts support the development of the first explicit BB-based parallel MOEA, the pMOMGA-II. MOEA theory is also advanced through the derivation of the first MOEA population sizing theory. The multiobjective population sizing theory presented derives the MOEA population size necessary to achieve good results within a specified level of confidence. Just as in the single-objective approach, the MOEA population sizing theory presents a very conservative sizing estimate. Validated results illustrate insight into building block phenomena, good efficiency, excellent effectiveness, and motivation for future research in the area of explicit BB-based MOEAs. Thus, the generic results of this research effort have applicability that aids in solving many different MOPs.
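    For readers unfamiliar with MOEA machinery, the Pareto-dominance test that underlies solution ranking in any MOEA can be sketched as follows; this is generic textbook material, not code drawn from the dissertation.

        /* Pareto dominance for minimization: a dominates b iff a is no
         * worse in every objective and strictly better in at least one. */
        #include <stdio.h>

        static int dominates(const double *a, const double *b, int n) {
            int strictly_better = 0;
            for (int i = 0; i < n; i++) {
                if (a[i] > b[i]) return 0;       /* worse somewhere: no */
                if (a[i] < b[i]) strictly_better = 1;
            }
            return strictly_better;
        }

        int main(void) {
            double p[2] = {1.0, 2.0}, q[2] = {1.5, 2.0};
            printf("p dominates q: %d\n", dominates(p, q, 2)); /* prints 1 */
            return 0;
        }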

    Cryptographic primitives on reconfigurable platforms.

    Tsoi Kuen Hung. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 84-92). Abstracts in English and Chinese. Contents:
    Chapter 1 -- Introduction
        1.1 Motivation
        1.2 Objectives
        1.3 Contributions
        1.4 Thesis Organization
    Chapter 2 -- Background and Review
        2.1 Introduction
        2.2 Cryptographic Algorithms
        2.3 Cryptographic Applications
        2.4 Modern Reconfigurable Platforms
        2.5 Review of Related Work
            2.5.1 Montgomery Multiplier
            2.5.2 IDEA Cipher
            2.5.3 RC4 Key Search
            2.5.4 Secure Random Number Generator
        2.6 Summary
    Chapter 3 -- The IDEA Cipher
        3.1 Introduction
        3.2 The IDEA Algorithm
            3.2.1 Cipher Data Path
            3.2.2 S-Box: Multiplication Modulo 2^16 + 1
            3.2.3 Key Schedule
        3.3 FPGA-based IDEA Implementation
            3.3.1 Multiplication Modulo 2^16 + 1
            3.3.2 Deeply Pipelined IDEA Core
            3.3.3 Area Saving Modification
            3.3.4 Key Block in Memory
            3.3.5 Pipelined Key Block
            3.3.6 Interface
            3.3.7 Pipelined Design in CBC Mode
        3.4 Summary
    Chapter 4 -- Variable Radix Montgomery Multiplier
        4.1 Introduction
        4.2 RSA Algorithm
        4.3 Montgomery Algorithm - A x B mod N
        4.4 Systolic Array Structure
        4.5 Radix-2^k Core
            4.5.1 The Original Kornerup Method (Bit-Serial)
            4.5.2 The Radix-2^k Method
            4.5.3 Time-Space Relationship of Systolic Cells
            4.5.4 Design Correctness
        4.6 Implementation Details
        4.7 Summary
    Chapter 5 -- Parallel RC4 Engine
        5.1 Introduction
        5.2 Algorithms
            5.2.1 RC4
            5.2.2 Key Search
        5.3 System Architecture
            5.3.1 RC4 Cell Design
            5.3.2 Key Search
            5.3.3 Interface
        5.4 Implementation
            5.4.1 RC4 cell
            5.4.2 Floorplan
        5.5 Summary
    Chapter 6 -- Blum Blum Shub Random Number Generator
        6.1 Introduction
        6.2 RRNG Algorithm
        6.3 PRNG Algorithm
        6.4 Architectural Overview
        6.5 Implementation
            6.5.1 Hardware RRNG
            6.5.2 BBS PRNG
            6.5.3 Interface
        6.6 Summary
    Chapter 7 -- Experimental Results
        7.1 Design Platform
        7.2 IDEA Cipher
            7.2.1 Size of IDEA Cipher
            7.2.2 Performance of IDEA Cipher
        7.3 Variable Radix Systolic Array
        7.4 Parallel RC4 Engine
        7.5 BBS Random Number Generator
            7.5.1 Size
            7.5.2 Speed
            7.5.3 External Clock
            7.5.4 Random Performance
        7.6 Summary
    Chapter 8 -- Conclusion
        8.1 Future Development
    Bibliography
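    The S-box operation in the contents above (multiplication modulo 2^16 + 1, Sections 3.2.2 and 3.3.1) is the arithmetically interesting part of IDEA. A plain C rendition of the standard low/high-word reduction is sketched below for reference; the thesis itself implements this operation in FPGA hardware, so this is only a software model of the same arithmetic.

        /* IDEA multiplication modulo 2^16 + 1, with the all-zero word
         * encoding 2^16. Uses 2^16 ≡ -1 (mod 2^16 + 1), so
         * hi*2^16 + lo ≡ lo - hi. */
        #include <stdio.h>
        #include <stdint.h>

        static uint16_t idea_mul(uint16_t a, uint16_t b) {
            if (a == 0) return (uint16_t)(1 - b);   /* 2^16 * b ≡ -b */
            if (b == 0) return (uint16_t)(1 - a);
            uint32_t p  = (uint32_t)a * b;
            uint16_t lo = (uint16_t)p;
            uint16_t hi = (uint16_t)(p >> 16);
            /* lo - hi, adding 2^16 + 1 (≡ 1 mod 2^16) on borrow */
            return (uint16_t)(lo - hi + (lo < hi));
        }

        int main(void) {
            printf("%u\n", (unsigned)idea_mul(3, 4)); /* 12 */
            printf("%u\n", (unsigned)idea_mul(0, 2)); /* 65535 = 2^16*2 mod 65537 */
            return 0;
        }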

    Modeling assembly programs with constraints: a contribution to the WCET problem

    Dissertation for the degree of Master in Computational Logic. Model checking with program slicing has been successfully applied to compute the Worst Case Execution Time (WCET) of a program running on given hardware. This method lacks path feasibility analysis and suffers from the following problems: the model checker (MC) explores an exponential number of program paths irrespective of their feasibility, which limits the scalability of the method to multiple-path programs; and the witness trace returned by the MC corresponding to the WCET may not be feasible (executable), which may yield a solution that is not tight, i.e., one that overestimates the actual WCET. This thesis complements the above method with path feasibility analysis and addresses these problems. To achieve this, we first validate the witness trace returned by the MC and generate test data if it is executable. For this we generate constraints over a trace and solve a constraint satisfaction problem. Experiments show that 33% of these traces (obtained while computing the WCET of standard WCET benchmark programs) are infeasible. Second, we use constraint solving techniques to compute an approximate WCET based solely on the program (without taking the hardware characteristics into account), and suggest some feasible and probable worst-case paths which can produce the WCET. Each of these paths forms an input to the MC. A more precise WCET can then be computed on these paths using the above method; the maximum over all of them is the WCET. In addition to this, we provide a mechanism to compute an upper bound on the over-approximation of the WCET computed using the model checking method. This combination of constraint solving with model checking takes advantage of their respective strengths and makes WCET computation scalable and amenable to hardware changes. We use our technique to compute the WCET of standard benchmark programs from Mälardalen University and compare our results with those from the model checking method.
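    The role of path feasibility can be seen in a toy example (invented here, not from the thesis): a purely structural analysis sums the worst case of both branches below, yet the two expensive branches can never execute together, so the longest structural path is infeasible and the naive bound is loose.

        /* Why infeasible paths inflate WCET estimates: the branch guards
         * x > 10 and x < 5 are mutually unsatisfiable, so no execution
         * takes both slow paths. Encoding the guards as constraints and
         * checking satisfiability rules that path out and tightens the bound. */
        #include <stdio.h>

        static int slow_path_a(void) { return 1; }  /* stands in for costly code */
        static int slow_path_b(void) { return 2; }

        int f(int x) {
            int y = 0;
            if (x > 10) y += slow_path_a();   /* expensive branch A */
            if (x < 5)  y += slow_path_b();   /* expensive branch B */
            return y;
        }

        int main(void) {
            printf("%d\n", f(12));            /* only branch A executes */
            return 0;
        }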

    Fault Tolerant Electronic System Design

    Due to technology scaling, which means reduced transistor size, higher density, lower voltage, and more aggressive clock frequencies, VLSI devices are becoming more sensitive to soft errors. Especially for devices used in safety- and mission-critical applications, dependability and reliability are becoming increasingly important constraints during the development of systems based on them. Other phenomena (e.g., aging and wear-out effects) also have a negative impact on the reliability of modern circuits. Recent research shows that, even at sea level, radiation particles can still induce soft errors in electronic systems. On one hand, processor-based systems are commonly used in a wide variety of applications, including safety-critical and high-availability missions, e.g., in the automotive, biomedical, and aerospace domains. In these fields, an error may produce catastrophic consequences. Thus, dependability is a primary target that must be achieved while taking into account tight constraints in terms of cost, performance, power, and time to market. Since standards and regulations (e.g., ISO 26262, DO-254, IEC 61508) clearly specify the targets to be achieved and the methods to prove their achievement, techniques working at the system level are particularly attractive. On the other hand, Field Programmable Gate Array (FPGA) devices are becoming more and more attractive, also in safety- and mission-critical applications, due to the high performance, low power consumption, and flexibility for reconfiguration they provide. Two types of FPGA are commonly used, classified by their configuration memory cell technology: SRAM-based and Flash-based. In SRAM-based FPGAs, the SRAM cells of the configuration memory are highly susceptible to radiation-induced effects, which can lead to system failure; in Flash-based FPGAs, even though the non-volatile configuration memory cells are almost immune to Single Event Upsets induced by energetic particles, the floating-gate switches and the logic cells in the configuration tiles can still suffer from Single Event Effects when hit by a highly charged particle. Analysis and mitigation techniques for Single Event Effects on FPGAs are therefore becoming increasingly important in the design flow, especially when reliability is one of the main requirements.
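    One classic mitigation technique in this area is triple modular redundancy (TMR), which masks a single upset by majority voting over three replicas. The sketch below is the generic bitwise voter, shown in C for illustration; it is not presented as the thesis's specific design.

        /* TMR majority voter: each output bit is set iff at least two of
         * the three replicated inputs agree, masking any single-bit upset. */
        #include <stdio.h>
        #include <stdint.h>

        static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c) {
            return (a & b) | (a & c) | (b & c);
        }

        int main(void) {
            uint32_t good    = 0xDEADBEEF;
            uint32_t flipped = good ^ (1u << 7);   /* single event upset on bit 7 */
            printf("voted: 0x%08X\n", (unsigned)tmr_vote(good, flipped, good));
            return 0;                              /* prints 0xDEADBEEF */
        }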