62 research outputs found

    Conception Assistée des Logiciels Sécurisés pour les Systèmes Embarqués (Computer-Aided Design of Secure Software for Embedded Systems)

    A vast majority of distributed embedded systems are exposed to security risks. That applications may end up poorly protected is partly due to methodological gaps in the engineering development process. More specifically, methodologies targeting formal verification may lack support for certain phases of the development process; in particular, system modeling frameworks may be complex to use or may not address security at all. Moreover, testing is not usually covered by verification methodologies, since formal verification and testing are considered mutually exclusive stages. Nevertheless, we believe that platform testing can be applied to ensure that properties formally verified on a model truly hold on the real system. Our contribution is made within the scope of a model-driven methodology that specifically targets secure-by-design embedded systems. The methodology is an iterative process that covers several engineering development phases and relies upon existing security analysis techniques. Still evolving, the methodology is mainly defined via a high-level SysML profile named Avatar. Our contribution consists in extending Avatar to model security concerns and in formally defining a model transformation towards a verification framework; this makes it possible to conduct proofs of authenticity and confidentiality. We illustrate how a cryptographic protocol is partially secured by applying several methodology stages, and we describe how security testing was conducted on an embedded prototype platform within the scope of an automotive project.
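    The abstract does not name the verification framework the Avatar models are transformed into; as a purely illustrative sketch of the kind of confidentiality query such a back end evaluates, the Python snippet below computes a toy Dolev-Yao attacker-knowledge closure over a protocol trace. The message encoding and the example protocol are hypothetical, not drawn from the thesis.

```python
# Hypothetical illustration only: a toy Dolev-Yao-style confidentiality
# check of the kind a verification back end performs on a protocol model.
# Messages are nested tuples; ('enc', m, k) denotes m encrypted under k.

def attacker_closure(observed):
    """Fixpoint of what a passive attacker can derive: an encrypted
    payload becomes known only once its key is already derivable."""
    knowledge = set(observed)
    changed = True
    while changed:
        changed = False
        for term in list(knowledge):
            if isinstance(term, tuple) and term[0] == 'enc':
                _, payload, key = term
                if key in knowledge and payload not in knowledge:
                    knowledge.add(payload)
                    changed = True
    return knowledge

# The session secret travels encrypted under a pre-shared key 'psk':
trace = [('enc', 'session_secret', 'psk'), 'public_nonce']
assert 'session_secret' not in attacker_closure(trace)        # stays confidential
assert 'session_secret' in attacker_closure(trace + ['psk'])  # leaks if psk leaks
```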

    Robots and Art: Interactive Art and Robotics Education Program in the Humanities


    Instrumentation of the da Vinci Robotic Surgical System


    DEVELOPMENT OF A NOVEL INTERACTIVE VISUAL TASK FOR A ROBOT-ASSISTED GAIT TRAINING IN STROKE

    The goal of this thesis is to develop an interactive visual task for robot-assisted gait training after stroke. The task is designed as a simple soccer-based computer video game, displayed on a screen and played by moving the ankle in dorsiflexion or plantarflexion to guide a soccer ball from its initial position toward the goal. This stand-alone game is interfaced with the impedance-controlled modular ankle exoskeleton (“Anklebot”), which provides assistance only as needed, serving as an augmentative tool to further enhance ankle neuromotor control and whole-body function after task-oriented robot-assisted treadmill walking. The design and features of the interactive video game are presented, along with the underlying biomechanical model that relates patient performance to game performance. Simple adaptive performance algorithms are embedded and bench-tested to auto-adjust game parameters in real time in response to ongoing patient performance during robot-assisted therapy. Human-in-the-loop testing strategies are proposed to validate the video game’s performance and its feasibility for clinical use.
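    The abstract does not spell out the adaptation rule; a common choice in robot-assisted therapy is a success-rate-driven staircase, sketched below in Python. The parameter names, thresholds, and step sizes are illustrative assumptions, not values taken from the thesis.

```python
# Illustrative sketch (assumed, not the thesis's actual algorithm):
# adapt game difficulty from a rolling window of trial outcomes, in the
# spirit of "assistance only as needed".
from collections import deque

class AdaptiveDifficulty:
    def __init__(self, target_success=0.7, window=10):
        self.target = target_success
        self.outcomes = deque(maxlen=window)   # recent hit/miss results
        self.ball_speed = 1.0                  # hypothetical game parameter
        self.goal_width = 1.0                  # hypothetical game parameter

    def record_trial(self, success: bool):
        self.outcomes.append(success)
        if len(self.outcomes) < self.outcomes.maxlen:
            return                             # wait for a full window
        rate = sum(self.outcomes) / len(self.outcomes)
        if rate > self.target:                 # doing well: make it harder
            self.ball_speed *= 1.05
            self.goal_width *= 0.95
        elif rate < self.target - 0.2:         # struggling: make it easier
            self.ball_speed *= 0.95
            self.goal_width *= 1.05
```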

    SdrLift: A Domain-Specific Intermediate Hardware Synthesis Framework for Prototyping Software-Defined Radios

    Modern design of Software-Defined Radio (SDR) applications is based on Field Programmable Gate Arrays (FPGAs), owing to their ability to be configured into solution architectures well suited to domain-specific problems while achieving the best trade-off between performance, power, area, and flexibility. FPGAs are well known for rich computational resources, which traditionally include logic, register, and routing resources. Technological advances have seen FPGAs incorporate more complex components, including sophisticated memory blocks, Digital Signal Processing (DSP) blocks, and high-speed interfaces to Gigabit Ethernet (GbE) and the Peripheral Component Interconnect Express (PCIe) bus. Gateware for programming FPGAs is described at a low level of design abstraction, at the Register Transfer Level (RTL), typically using either the VHSIC Hardware Description Language (VHDL) or Verilog. In practice, these low-level description languages have a very steep learning curve, provide low productivity for hardware designers, and lack readily available open-source library support for fundamental designs, consequently limiting design work to hardware experts. These limitations have led to the adoption of High-Level Synthesis (HLS) tools that raise the design abstraction using syntax, semantics, and software development notations well known to most software developers. However, while HLS tools have made programming FPGAs more accessible and can increase design productivity, they are still not widely adopted in the design community because low-level skills are still required to produce efficient designs. Additionally, the RTL code produced by HLS tools is often difficult to decipher, modify, and optimize because functionality and micro-architecture are coupled together in a single High-Level Language (HLL). To alleviate these problems, Domain-Specific Languages (DSLs) have been introduced to capture algorithms at a high level of abstraction with more expressive power, providing domain-specific optimizations that factor in new transformations and the trade-off between resource utilization and system performance. The problem with existing DSLs is that they are designed around imperative languages whose instruction sequence does not match the hardware structure and intrinsics, leading to hardware designs with system properties that do not conform to the high-level specifications and constraints. The aim of this thesis is, therefore, to design and implement an intermediate-level framework, named SdrLift, for high-level rapid prototyping of FPGA-based SDR applications. The SdrLift input is an HLL built from functional language constructs and design patterns that specify the structural behaviour of the application design. The SdrLift language serves two roles: first, it can be used directly by a designer to develop SDR applications; second, it can serve as the Intermediate Representation (IR) generated from a higher-level language or DSL. The SdrLift compiler uses a dataflow graph (DFG) as an IR to structurally represent the accelerator micro-architecture, in which the components correspond to fine-grained and coarse-grained hardware blocks (HW Blocks) that are either auto-synthesized or integrated from existing reusable Intellectual Property (IP) core libraries.
Another IR takes the form of a dataflow model, used for composition and global interconnection of the HW Blocks while making efficient interfacing decisions that aim to satisfy speed and resource-usage objectives. Moreover, the dataflow model provides rules and properties that underpin a theoretical framework for formally analyzing the characteristics of SDR applications (throughput, sample rate, latency, and buffer size, among other factors). Using both the DFG and the dataflow model in the SdrLift compiler provides two benefits: abstraction of the micro-architecture from the high-level algorithm specification, and decoupling of the micro-architecture from the low-level RTL implementation. Following IR creation and model analysis, VHDL code generation applies low-level optimizations that ensure an efficient hardware design, and the code-generation process performs analysis to ensure the resultant hardware system conforms to the high-level design specifications and constraints. SdrLift is evaluated through representative SDR case studies in which VHDL code for eight different SDR applications is generated. The experimental results show that SdrLift achieves the desired performance and flexibility while conserving the hardware resources utilized.
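    The abstract gives no concrete syntax for SdrLift; the Python sketch below only illustrates the general shape of such a dataflow-graph IR, in which composition and latency analysis happen on the graph before any VHDL is generated. The class names, fields, and example chain are assumptions for illustration, not SdrLift's actual API.

```python
# Hypothetical sketch of a dataflow-graph IR of the kind described above:
# nodes are HW Blocks (auto-synthesized or reused IP cores) and edges are
# stream connections annotated with the bit widths used for interfacing.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HWBlock:
    name: str
    latency: int                     # pipeline latency in clock cycles
    ip_core: Optional[str] = None    # reuse an existing IP core, if any

@dataclass
class DataflowGraph:
    blocks: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)    # (src, dst, bit_width)

    def add(self, block: HWBlock) -> None:
        self.blocks[block.name] = block

    def connect(self, src: str, dst: str, bit_width: int) -> None:
        self.edges.append((src, dst, bit_width))

    def path_latency(self, path) -> int:
        """Total latency along a chain of blocks, a property the compiler
        can analyze on the IR before emitting any RTL."""
        return sum(self.blocks[n].latency for n in path)

# A toy receiver chain: digital down-converter -> FIR filter -> demodulator.
g = DataflowGraph()
g.add(HWBlock('ddc', latency=8, ip_core='vendor_dds'))
g.add(HWBlock('fir', latency=16))
g.add(HWBlock('demod', latency=4))
g.connect('ddc', 'fir', bit_width=16)
g.connect('fir', 'demod', bit_width=16)
print(g.path_latency(['ddc', 'fir', 'demod']))   # 28 cycles end to end
```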

    Engineering Resilient Space Systems

    Several distinct trends will influence space exploration missions in the next decade. Destinations are becoming more remote and mysterious, science questions more sophisticated, and, as mission experience accumulates, the most accessible targets are visited, advancing the knowledge frontier to more difficult, harsh, and inaccessible environments. This leads to new challenges including: hazardous conditions that limit mission lifetime, such as high radiation levels surrounding interesting destinations like Europa or toxic atmospheres of planetary bodies like Venus; unconstrained environments with navigation hazards, such as free-floating active small bodies; multielement missions required to answer more sophisticated questions, such as Mars Sample Return (MSR); and long-range missions, such as Kuiper belt exploration, that must survive equipment failures over the span of decades. These missions will need to be successful without a priori knowledge of the most efficient data collection techniques for optimum science return. Science objectives will have to be revised ‘on the fly’, with new data collection and navigation decisions on short timescales. Yet, even as science objectives are becoming more ambitious, several critical resources remain unchanged. Since physics imposes insurmountable light-time delays, anticipated improvements to the Deep Space Network (DSN) will only marginally improve the bandwidth and communications cadence to remote spacecraft. Fiscal resources are increasingly limited, resulting in fewer flagship missions, smaller spacecraft, and less subsystem redundancy. As missions visit more distant and formidable locations, the job of the operations team becomes more challenging, seemingly inconsistent with the trend of shrinking mission budgets for operations support. How can we continue to explore challenging new locations without increasing risk or system complexity? These challenges are present, to some degree, for the entire Decadal Survey mission portfolio, as documented in Vision and Voyages for Planetary Science in the Decade 2013–2022 (National Research Council, 2011), but are especially acute for the following mission examples, identified in our recently completed KISS Engineering Resilient Space Systems (ERSS) study: 1. A Venus lander, designed to sample the atmosphere and surface of Venus, would have to perform science operations as components and subsystems degrade and fail; 2. A Trojan asteroid tour spacecraft would spend significant time cruising to its ultimate destination (essentially hibernating to save on operations costs), then upon arrival, would have to act as its own surveyor, finding new objects and targets of opportunity as it approaches each asteroid, requiring response on short notice; and 3. A MSR campaign would not only be required to perform fast reconnaissance over long distances on the surface of Mars, interact with an unknown physical surface, and handle degradations and faults, but would also contain multiple components (launch vehicle, cruise stage, entry and landing vehicle, surface rover, ascent vehicle, orbiting cache, and Earth return vehicle) that dramatically increase the need for resilience to failure across the complex system. The concept of resilience and its relevance and application in various domains was a focus during the study, with several definitions of resilience proposed and discussed. 
While there was substantial variation in the specifics, there was a common conceptual core that emerged—adaptation in the presence of changing circumstances. These changes were couched in various ways—anomalies, disruptions, discoveries—but they all ultimately had to do with changes in underlying assumptions. Invalid assumptions, whether due to unexpected changes in the environment, or an inadequate understanding of interactions within the system, may cause unexpected or unintended system behavior. A system is resilient if it continues to perform the intended functions in the presence of invalid assumptions. Our study focused on areas of resilience that we felt needed additional exploration and integration, namely system and software architectures and capabilities, and autonomy technologies. (While also an important consideration, resilience in hardware is being addressed in multiple other venues, including 2 other KISS studies.) The study consisted of two workshops, separated by a seven-month focused study period. The first workshop (Workshop #1) explored the ‘problem space’ as an organizing theme, and the second workshop (Workshop #2) explored the ‘solution space’. In each workshop, focused discussions and exercises were interspersed with presentations from participants and invited speakers. The study period between the two workshops was organized as part of the synthesis activity during the first workshop. The study participants, after spending the initial days of the first workshop discussing the nature of resilience and its impact on future science missions, decided to split into three focus groups, each with a particular thrust, to explore specific ideas further and develop material needed for the second workshop. The three focus groups and areas of exploration were: 1. Reference missions: address/refine the resilience needs by exploring a set of reference missions 2. Capability survey: collect, document, and assess current efforts to develop capabilities and technology that could be used to address the documented needs, both inside and outside NASA 3. Architecture: analyze the impact of architecture on system resilience, and provide principles and guidance for architecting greater resilience in our future systems The key product of the second workshop was a set of capability roadmaps pertaining to the three reference missions selected for their representative coverage of the types of space missions envisioned for the future. From these three roadmaps, we have extracted several common capability patterns that would be appropriate targets for near-term technical development: one focused on graceful degradation of system functionality, a second focused on data understanding for science and engineering applications, and a third focused on hazard avoidance and environmental uncertainty. Continuing work is extending these roadmaps to identify candidate enablers of the capabilities from the following three categories: architecture solutions, technology solutions, and process solutions. The KISS study allowed a collection of diverse and engaged engineers, researchers, and scientists to think deeply about the theory, approaches, and technical issues involved in developing and applying resilience capabilities. The conclusions summarize the varied and disparate discussions that occurred during the study, and include new insights about the nature of the challenge and potential solutions: 1. There is a clear and definitive need for more resilient space systems. 
During our study period, the key scientists/engineers we engaged to understand potential future missions confirmed the scientific and risk reduction value of greater resilience in the systems used to perform these missions. 2. Resilience can be quantified in measurable terms—project cost, mission risk, and quality of science return. In order to consider resilience properly in the set of engineering trades performed during the design, integration, and operation of space systems, the benefits and costs of resilience need to be quantified. We believe, based on the work done during the study, that appropriate metrics to measure resilience must relate to risk, cost, and science quality/opportunity. Additional work is required to explicitly tie design decisions to these first-order concerns. 3. There are many existing basic technologies that can be applied to engineering resilient space systems. Through the discussions during the study, we found many varied approaches and research efforts that address the various facets of resilience, some within NASA, and many more beyond. Examples from civil architecture, Department of Defense (DoD) / Defense Advanced Research Projects Agency (DARPA) initiatives, ‘smart’ power grid control, cyber-physical systems, software architecture, and the application of formal verification methods for software were identified and discussed. The variety and scope of related efforts is encouraging and presents many opportunities for collaboration and development, and we expect many collaborative proposals and joint research as a result of the study. 4. Use of principled architectural approaches is key to managing complexity and integrating disparate technologies. The main challenge inherent in considering highly resilient space systems is that the increase in capability can result in an increase in complexity, with all of the risks and costs associated with more complex systems. What is needed is a better way of conceiving space systems that enables incorporation of capabilities without increasing complexity. We believe principled architecting approaches provide the needed means to convey a unified understanding of the system to primary stakeholders, thereby controlling complexity in the conception and development of resilient systems, and enabling the integration of disparate approaches and technologies. A representative architectural example is included in Appendix F. 5. Developing trusted resilience capabilities will require a diverse yet strategically directed research program. Despite the interest in, and benefits of, deploying resilient space systems, to date there has been a notable lack of meaningful demonstrated progress in systems capable of working in hazardous, uncertain situations. The roadmaps completed during the study, and documented in this report, provide the basis for a real, funded plan that considers the required fundamental work and the evolution of needed capabilities. Exploring space is a challenging and difficult endeavor. Future space missions will require more resilience in order to perform the desired science in new environments under constraints of development and operations cost, acceptable risk, and communications delays. Development of space systems with resilient capabilities has the potential to expand the limits of possibility, revolutionizing space science by enabling as yet unforeseen missions and breakthrough science observations. Our KISS study provided an essential venue for the consideration of these challenges and goals.
Additional work and future steps are needed to realize the potential of resilient systems; this study provided the necessary catalyst to begin that process.

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models; highlights ongoing research efforts, such as hybrid broadcast broadband (HBB) delivery and users’ perception modeling (i.e., Quality of Experience, or QoE); and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge of this research area and to approach the challenges of ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    On reliability and performance analyses of IEC 61850 for digital SAS

    Peer reviewed

    Investigation of transient and safety issues in gas insulated systems

    This thesis investigates the occurrence, characteristics and effects of Very Fast Transients (VFTs) associated with disconnector switching operations in Gas Insulated Substations (GIS). VFTs are analysed, and efforts are made to elucidate their behaviour through advanced simulation techniques. The initial motivation for this work was the occurrence of a surface flashover at a spacer, leading to a prolonged outage of the circuit in question and a significant repair effort. Post-failure investigations carried out by the manufacturer yielded no significant observations; however, through the modelling and measurement efforts undertaken for this thesis, a phenomenon was identified that could have led to or contributed to the failure. VFTs at a live, operational 400 kV substation (un-named for confidentiality but termed throughout as Substation ‘A’) are quantified through both modelling and measurements. Significant progress in the modelling of VFTs and Transient Enclosure Voltages (TEVs) is demonstrated. Numerical Electromagnetic Analysis (NEA) is shown to be the most effective method for studying the behaviour of the GIS and earthing systems. Multiple NEA techniques are utilised, all solving the full Maxwell’s equations through a wave equation. The behaviour of the system (both internally and externally) is captured with great accuracy and lucidity, without the need for the analytic approximations or assumed parameters that have traditionally been the case. Detailed models were built using equipment drawings from Substation ‘A’ for the gas insulated busbar (GIB), spacer-flange assembly, double-elbow assembly, disconnector, and gas-to-air bushing. Frequency- and time-domain behaviour is analysed, and a potential contributor to the failure at Substation ‘A’ is identified. Furthermore, elements of the earthing system were evaluated for their effectiveness in mitigating TEVs. The methods highlight some of the flaws and inaccuracies present in existing ‘standard practice’ modelling efforts. The need for circuit-based modelling for VFT studies is apparent, as NEA techniques at very high frequencies are limited in their interaction with the wider system. Efforts are therefore made to enhance circuit-based models, utilising NEA methods and Vector Fitting to produce accurate, large-bandwidth equivalent circuits that reproduce the computed frequency responses of the various GIS equipment types studied. Vector-fit models at lower orders of approximation are prone to unstable time-domain responses, leading to numerical oscillations or even complete divergence from a solution. A new method was therefore developed to identify model orders that are stable in the time domain, allowing the lowest stable order of approximation to be selected; this avoids the additional computational cost of very high orders of approximation while retaining accuracy and stability in both the time and frequency domains, and the conversion process is augmented with this order-identification method. Several measurement techniques and sensors were developed to capture the entire cycle of transients associated with disconnector operations. Device prototypes were designed and optimised through NEA/circuit-based modelling prior to undergoing laboratory-based measurements. Laboratory testing was conducted using a custom-built, half-scale GIB with impedance-matching cones at each end, allowing measurement and signal-generating equipment to be connected with minimal interference. While essential, laboratory-based measurements can never replicate the transient and high-EMI environmental conditions seen at a live GIS; therefore, the bulk of the measurement effort was focused on live measurements at Substation ‘A’. Throughout the course of this project several opportunities to undertake measurements arose, and a significant amount of data was recorded. Each measurement campaign also identified areas for improvement of the measurement system.
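    The abstract does not detail the order-selection method; the sketch below shows, under broad assumptions, the stability test such a method rests on: a pole-residue model obtained by vector fitting gives a stable time-domain response only if every pole lies in the left half of the s-plane. The fit_at_order wrapper and the error threshold are hypothetical placeholders, not the thesis's actual procedure.

```python
# Illustrative sketch (assumed): pick the lowest vector-fitting order whose
# rational model sum_k r_k/(s - p_k) + d is stable in the time domain,
# i.e. every pole p_k has a strictly negative real part.
import numpy as np

def is_time_domain_stable(poles) -> bool:
    """A pole-residue model is stable iff all poles lie in the
    left half of the s-plane."""
    return bool(np.all(np.real(poles) < 0.0))

def lowest_stable_order(fit_at_order, orders, max_rms=1e-3):
    """fit_at_order(n) -> (poles, residues, rms_error) is assumed to wrap
    a vector-fitting routine; return the first order that is both stable
    and accurate enough, avoiding needlessly high-order models."""
    for n in orders:
        poles, _residues, rms_error = fit_at_order(n)
        if is_time_domain_stable(poles) and rms_error < max_rms:
            return n
    raise RuntimeError("no stable, accurate order found in the given range")
```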
