640,696 research outputs found

    High-Performance and Time-Predictable Embedded Computing

    Get PDF
    Nowadays, computing systems are so ubiquitous in our lives that we live in a cyber-physical world dominated by computer systems, from pacemakers to cars and airplanes. These systems demand ever more computational performance to process large amounts of data from multiple sources, with guaranteed processing times. Actuating outside the required timing bounds may cause the failure of the system, which is critical for applications such as planes, cars, business monitoring, and e-trading. High-Performance and Time-Predictable Embedded Computing presents recent advances in software architecture and tools to support such complex systems, enabling the design of embedded computing devices able to deliver high performance whilst guaranteeing the application's required timing bounds. Technical topics discussed in the book include: parallel embedded platforms; programming models; mapping and scheduling of parallel computations; timing and schedulability analysis; and runtimes and operating systems. The work reflected in this book was done in the scope of the European project P-SOCRATES, funded under the FP7 framework programme of the European Commission. The book is ideal for personnel in the computer, communication, and embedded industries, as well as academic staff and master's/research students in computer science, embedded systems, cyber-physical systems, and the internet of things.

    The future of trans-Atlantic collaboration in modelling and simulation of Cyber-Physical Systems - A strategic research agenda for collaboration

    Get PDF
    Smart systems, in which sophisticated software/hardware is embedded in physical systems, are part of everyday life. From simple products with embedded decision-making software to massive systems in which hundreds of systems, each with hundreds or thousands of embedded processors, interoperate, the use of Cyber-Physical Systems (CPS) will continue to expand. There has been substantial investment in CPS research in Europe and the United States. Through a series of workshops and other events, the TAMS4CPS project has established that there is mutual benefit in the European Union and the US collaborating on CPS research. An agenda for collaborative research into modelling and simulation for CPS is thus set forth in the publication at hand. The agenda includes models for many different purposes, including fundamental concepts, design models (e.g. architectures), predictive techniques, real-time control, human-CPS interaction, and CPS governance. Within this framework, seven important themes have been identified where mutual benefits can be realised through EU-US cooperation. To actively advance research and innovation in these fields, a number of collaboration mechanisms are presented, and concrete actions to encourage, enhance and implement trans-Atlantic collaboration in modelling and simulation of CPS are recommended.

    HPC Platform for Railway Safety-Critical Functionalities Based on Artificial Intelligence

    Get PDF
    The automation of railroad operations is a rapidly growing industry. In 2023, a new European standard for automated driving at Grade of Automation (GoA) 2 over the European Train Control System (ETCS) is anticipated. Meanwhile, railway stakeholders are already planning their research initiatives for driverless and unattended autonomous driving systems. As a result, the industry is particularly active in research on perception technologies based on Computer Vision (CV) and Artificial Intelligence (AI), with outstanding results at the application level. However, executing high-performance and safety-critical applications on embedded systems and in real time is a challenge. Few commercially available solutions exist, since High-Performance Computing (HPC) platforms are typically seen as being beyond the business of safety-critical systems. This work proposes a novel safety-critical, high-performance computing platform for executing CV- and AI-enhanced technology for automatic accurate stopping and safe passenger transfer railway functionalities. The resulting computing platform is compatible with the majority of widely used AI inference methodologies, AI model architectures, and AI model formats thanks to its design, which enables process separation, redundant execution, and hardware acceleration in a transparent manner. The proposed technology increases the portability of railway applications to embedded systems, isolates crucial operations, and effectively and securely manages system resources. The novel approach presented in this work is being developed as a specific railway use case for autonomous train operation within the SELENE European research project. This project has received funding from the Research and Innovation Action (RIA) under grant agreement No. 871467.
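    The redundant execution that the platform is said to enable can be illustrated with a minimal sketch. Everything below is an illustrative assumption (the function `redundant_execute`, the replica count, and the stand-in perception task are invented for this example, not the platform's actual API):

    ```python
    from typing import Callable, TypeVar

    T = TypeVar("T")

    def redundant_execute(task: Callable[[], T], replicas: int = 2) -> T:
        """Run a safety-critical task in several replicas and compare results.

        A divergence between replicas signals a fault (e.g. a transient
        hardware error), so the result is only released when all agree.
        """
        results = [task() for _ in range(replicas)]
        if any(r != results[0] for r in results[1:]):
            raise RuntimeError("replica divergence detected; result discarded")
        return results[0]

    # Example: a (stand-in) perception task whose output must be trustworthy.
    distance_to_stop = redundant_execute(lambda: 42.5, replicas=3)
    ```

    In a real safety architecture the replicas would run in separate, isolated processes (or on separate cores), which is where the process separation mentioned in the abstract comes in.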

    TRANSMIT: Training Research and Applications Network to Support the Mitigation of Ionospheric Threats

    Get PDF
    TRANSMIT is an initiative funded by the European Commission through a Marie Curie Initial Training Network (ITN). The main aim of such networks is to improve the career perspectives of researchers who are in the first five years of their research career, in both the public and private sectors. In particular, TRANSMIT will provide a coordinated programme of academic and industrial training, focused on atmospheric phenomena that can significantly impair a wide range of systems and applications at the core of many activities embedded in our daily life. TRANSMIT deals with the harmful effects of the ionosphere on these systems, which will become increasingly significant as we approach the next solar maximum, predicted for 2013. The main aim of the project is to develop real-time, integrated, state-of-the-art tools to mitigate ionospheric threats to Global Navigation Satellite Systems (GNSS) and several related applications, such as civil aviation, marine navigation and land transportation. The project will provide Europe with the next generation of researchers in this field, equipping them with skills developed through a comprehensive and coordinated training programme. Their research projects will develop real-time, integrated, state-of-the-art tools to mitigate these ionospheric threats to GNSS and the applications that rely on these systems. The main threat to the reliable and safe operation of GNSS is the variable propagation conditions encountered by GNSS signals as they pass through the ionosphere.
At a COST 296 MIERS (Mitigation of Ionospheric Effects on Radio Systems) workshop held at the University of Nottingham in 2008, the establishment of a sophisticated Ionospheric Perturbation Detection and Monitoring (IPDM) network (http://ipdm.nottingham.ac.uk/) was proposed by European experts and supported by the European Space Agency (ESA) as the way forward to deliver the state of the art in protecting the range of essential systems vulnerable to these ionospheric threats. Through a set of carefully designed research work packages, TRANSMIT will be the enabler of the IPDM network. The goal of TRANSMIT is therefore to provide a concerted training programme, including taught courses, research training projects, secondments at leading European institutions, and a set of network-wide events with summer schools, workshops and a conference, which will arm the researchers of tomorrow with the necessary skills and knowledge to set up and run the proposed service. TRANSMIT will count on an exceptional set of partners, encompassing both academia and end users, including the aerospace and satellite communications sectors, as well as GNSS system designers and service providers, major user operators and receiver manufacturers. TRANSMIT's objectives are: A. Develop new techniques to detect and monitor ionospheric threats, with the introduction of new prediction and forecasting models, mitigation tools and improved system design; B. Advance the physical modelling of the underlying processes associated with the ionospheric plasma environment and the knowledge of its influence on human activity; C. Establish a prototype of a real-time system to monitor the ionosphere, capable of providing useful assistance to users, which exploits all available resources and adds value for European services and products; D.
Incorporate solutions to this system that respond to all end-user needs and that are applicable in all geographical regions of European interest (polar, high and mid-latitudes, and the equatorial region). TRANSMIT will pave the way to establishing in Europe a system capable of mitigating ionospheric threats to GNSS signals in real time.

    Execution time distributions in embedded safety-critical systems using extreme value theory

    Get PDF
    Several techniques have been proposed to upper-bound the worst-case execution time behaviour of programs in the domain of critical real-time embedded systems. These computing systems have strong requirements that the longest execution time a program can take be provably bounded. Some of those techniques use extreme value theory (EVT) as their main prediction method. In this paper, EVT is used to estimate a high quantile for different types of execution time distributions observed for a set of representative programs for the analysis of automotive applications. A major challenge appears when the dataset seems to be heavy-tailed, because this contradicts the assumptions usually made for embedded safety-critical systems. A methodology based on the coefficient of variation is introduced for a threshold selection algorithm that determines the point above which the distribution can be considered a generalised Pareto distribution. This methodology also provides an estimate of the extreme value index and of high quantiles. We have applied these methods to execution time observations collected from the execution of 16 representative automotive benchmarks to predict an upper bound on the maximum execution time of each program. Several comparisons with alternative approaches are discussed. The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under the PROXIMA Project (grant agreement 611085). This study was also partially supported by the Spanish Ministry of Science and Innovation under grants MTM2012-31118 (2013-2015) and TIN2015-65316-P. Jaume Abella is partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717.
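    The pipeline the abstract describes (coefficient-of-variation-based threshold selection, a generalised Pareto fit to the exceedances, and a high-quantile estimate) can be sketched as follows. The synthetic data, the candidate-threshold grid, and the method-of-moments fit are illustrative assumptions for this sketch, not the paper's exact algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-in for measured execution times (gamma bulk, light tail).
    times = rng.gamma(shape=9.0, scale=10.0, size=5000)

    def cv_threshold(data, probs=np.arange(0.80, 0.99, 0.01)):
        """Pick the candidate threshold whose exceedances have a coefficient
        of variation closest to 1 (the CV of an exponential tail, i.e. a
        generalised Pareto distribution with extreme value index 0)."""
        best_u, best_gap = None, np.inf
        for p in probs:
            u = np.quantile(data, p)
            exc = data[data > u] - u
            if len(exc) < 30:
                continue
            gap = abs(exc.std(ddof=1) / exc.mean() - 1.0)
            if gap < best_gap:
                best_u, best_gap = u, gap
        return best_u

    u = cv_threshold(times)
    exc = times[times > u] - u

    # Method-of-moments fit of the generalised Pareto distribution GPD(xi, beta).
    m, v = exc.mean(), exc.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)          # extreme value index
    beta = 0.5 * m * (m * m / v + 1.0)    # scale

    # High quantile via the peaks-over-threshold formula: the value exceeded
    # with probability p, where zeta_u is the empirical exceedance rate.
    p, zeta_u = 1e-4, len(exc) / len(times)
    q_hi = u + beta / xi * ((p / zeta_u) ** (-xi) - 1.0)
    ```

    `q_hi` then plays the role of a probabilistic upper bound on execution time at exceedance probability `p`; maximum-likelihood fitting would typically replace the moment estimates in practice.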

    An approach for detecting power peaks during testing and breaking systematic pathological behavior

    Get PDF
    The verification and validation process of embedded critical systems requires providing evidence of their functional correctness and also that their non-functional behavior stays within limits. In this work, we focus on power peaks, which may cause voltage droops and thus challenge the system's ability to preserve correct operation upon droops. In this line, the use of complex software and hardware in critical embedded systems jeopardizes the confidence that can be placed on the tests carried out during the campaigns performed at analysis time. This is so because it is unknown whether tests have triggered the highest power peaks that can occur during operation, and whether any such peak can occur systematically. In this paper we propose the use of randomization, already used for timing analysis of real-time systems, as an enabler to guarantee that (1) tests expose those peaks that can arise during operation, and (2) no peak can occur systematically without being exposed. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773), and the HiPEAC Network of Excellence. MINECO partially supported Jaume Abella under Ramon y Cajal postdoctoral fellowship RYC-2013-14717.
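    Why randomization helps can be seen in a toy model: suppose a power peak only occurs when two activities align at one specific phase, a systematic pathological condition. A fixed-configuration test campaign may never trigger it, while randomizing the alignment each run samples all phases and exposes the peak with quantifiable probability. The phase model and all numbers below are assumptions invented for this sketch, not the paper's experimental setup:

    ```python
    import random

    PEAK_PHASE = 7    # the single pathological alignment
    N_PHASES = 16     # number of possible alignments

    def power_draw(phase: int) -> float:
        """Toy power model: a large peak only at the pathological phase."""
        return 10.0 if phase % N_PHASES == PEAK_PHASE else 1.0

    def deterministic_campaign(runs: int = 1000, fixed_phase: int = 0) -> float:
        # Every run starts at the same alignment: the peak is never observed,
        # yet it could occur systematically in operation.
        return max(power_draw(fixed_phase) for _ in range(runs))

    def randomized_campaign(runs: int = 1000, seed: int = 42) -> float:
        rng = random.Random(seed)
        # Randomizing the alignment each run samples all phases; the chance of
        # missing the peak after n runs is (1 - 1/N_PHASES)**n, i.e. ~1e-28 here.
        return max(power_draw(rng.randrange(N_PHASES)) for _ in range(runs))
    ```

    The same argument gives property (2): under randomization no alignment can recur systematically, so a pathological peak cannot hide behind a fixed test configuration.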

    A Framework for Evaluating Security in the Presence of Signal Injection Attacks

    Full text link
    Sensors are embedded in security-critical applications from medical devices to nuclear power plants, but their outputs can be spoofed through electromagnetic and other types of signals transmitted by attackers at a distance. To address the lack of a unifying framework for evaluating the effects of such transmissions, we introduce a system and threat model for signal injection attacks. We further define the concepts of existential, selective, and universal security, which address attacker goals ranging from mere disruptions of the sensor readings to precise waveform injections. Moreover, we introduce an algorithm which allows circuit designers to concretely calculate the security level of real systems. Finally, we apply our definitions and algorithm in practice using measurements of injections against a smartphone microphone, and analyze the demodulation characteristics of commercial Analog-to-Digital Converters (ADCs). Overall, our work highlights the importance of evaluating the susceptibility of systems to signal injection attacks, and introduces both the terminology and the methodology to do so. Comment: This article is the extended technical report version of the paper presented at ESORICS 2019, the 24th European Symposium on Research in Computer Security (ESORICS), Luxembourg, Luxembourg, September 2019.
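    One demodulation effect at play in such attacks is aliasing: a strong out-of-band tone injected into a sensor's ADC reappears as an in-band signal after sampling. A minimal numerical sketch (the sampling rate, attack frequency, and DFT setup below are assumed values for illustration, not measurements from the paper):

    ```python
    import numpy as np

    fs = 44_100.0          # assumed microphone ADC sampling rate (Hz)
    f_attack = 441_500.0   # assumed out-of-band attacker tone (Hz)

    def alias_frequency(f: float, fs: float) -> float:
        """In-band frequency at which a tone at f appears after sampling at fs."""
        f_mod = f % fs
        return min(f_mod, fs - f_mod)

    # Sample the attacker tone and check that the dominant DFT bin lands at
    # the predicted in-band alias, not at the original frequency.
    n = 4096
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_attack * t)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
    f_peak = np.fft.rfftfreq(n, 1 / fs)[spectrum.argmax()]
    ```

    Here the 441.5 kHz tone aliases to 500 Hz, squarely inside the audio band, which is why an inaudible transmission can inject an audible-band waveform into the microphone's output.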

    Design Ltd.: Renovated Myths for the Development of Socially Embedded Technologies

    Full text link
    This paper argues that traditional and mainstream mythologies, which have been continually told within the Information Technology domain among designers and advocates of conceptual modelling since the 1960s in different fields of computing sciences, could now be renovated or substituted in the mould of more recent discourses about performativity, complexity and end-user creativity that have been constructed across different fields in the meanwhile. In the paper, it is submitted that these discourses could motivate IT professionals to undertake alternative approaches toward the co-construction of socio-technical systems, i.e., social settings where humans cooperate to reach common goals by means of mediating computational tools. The authors advocate further discussion about, and consolidation of, some concepts in design research, design practice and more generally Information Technology (IT) development, such as: task-artifact entanglement, universatility (sic) of End-User Development (EUD) environments, the bricolant/bricoleur end-user, the logic of bricolage, maieuta-designers (sic), and the laissez-faire method of socio-technical construction. Points backing these and similar concepts are made to promote further discussion on the need to rethink the main assumptions underlying IT design and development some fifty years after the coming of age of software and modern IT in the organizational domain. Comment: This is the peer-unreviewed version of a manuscript that is to appear in D. Randall, K. Schmidt, & V. Wulf (Eds.), Designing Socially Embedded Technologies: A European Challenge (2013, forthcoming) with the title "Building Socially Embedded Technologies: Implications on Design" within an EUSSET editorial initiative (www.eusset.eu/