7,866 research outputs found

    FPGA-Based PUF Designs: A Comprehensive Review and Comparative Analysis

    Field-programmable gate arrays (FPGAs) have firmly established themselves as dynamic platforms for the implementation of physical unclonable functions (PUFs). Their intrinsic reconfigurability and their implications for enhancing hardware security make them an invaluable asset in this realm. This study dives deep into FPGA-based PUF designs and offers a comprehensive overview coupled with a discerning comparative analysis. PUFs are the bedrock of device authentication, key generation, and the fortification of secure cryptographic protocols, and FPGA technology expands the horizons of PUF integration across diverse hardware systems. We set out to explain the fundamental ideas behind PUFs and why they are crucially important to current security paradigms. Different FPGA-based PUF solutions, including static, dynamic, and hybrid designs, are closely examined; each design paradigm is analysed to reveal its special qualities, functional nuances, and weaknesses. We assess a variety of performance metrics, including distinctiveness (uniqueness), reliability, and resilience against hostile threats, and we compare FPGA-based PUF systems against one another to expose their respective advantages and disadvantages. This study provides system designers and security professionals with the information they need to choose the best PUF design for their particular applications, and it offers a comprehensive view of the functionality, security capabilities, and prospective applications of FPGA-based PUF systems. The knowledge gained from this research advances the field of hardware security, enabling practitioners, researchers, and designers to make informed decisions when selecting and implementing FPGA-based PUF solutions.
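    To make the distinctiveness (uniqueness) and reliability metrics mentioned above concrete, the following minimal Python sketch computes both from the standard Hamming-distance formulations. The 8-bit responses and device counts are invented placeholders for illustration; they are not data from the paper.

        # Sketch: uniqueness and reliability metrics for PUF responses.
        # The response bit-strings below are hypothetical placeholders.
        from itertools import combinations

        def hamming_distance(a, b):
            return sum(x != y for x, y in zip(a, b))

        def uniqueness(responses):
            """Average inter-chip Hamming distance, in percent (ideal: 50%)."""
            n = len(responses[0])                       # response length in bits
            pairs = list(combinations(responses, 2))
            return 100.0 * sum(hamming_distance(a, b) / n for a, b in pairs) / len(pairs)

        def reliability(reference, reevaluations):
            """100% minus average intra-chip Hamming distance (ideal: 100%)."""
            n = len(reference)
            avg_intra = sum(hamming_distance(reference, r) / n for r in reevaluations) / len(reevaluations)
            return 100.0 * (1.0 - avg_intra)

        # Hypothetical responses from three devices to the same challenge.
        inter_chip = ["10110010", "01101100", "11010001"]
        # Hypothetical repeated measurements from one device (noisy re-evaluations).
        intra_chip = ["10110010", "10110011", "10100010"]

        print(f"Uniqueness:  {uniqueness(inter_chip):.1f}%")
        print(f"Reliability: {reliability(intra_chip[0], intra_chip[1:]):.1f}%")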

    An Architectural Approach to Autonomics and Self-management of Automotive Embedded Electronic Systems

    Embedded electronic systems in vehicles are of rapidly increasing commercial importance for the automotive industry. While current vehicular embedded systems are extremely limited and static, a more dynamically configurable system would greatly simplify integration work and increase the quality of vehicular systems. Such a system brings in features like separation of concerns, customised software configuration for individual vehicles, seamless connectivity, and plug-and-play capability. Furthermore, it can also contribute to increased dependability and resource optimization through its inherent ability to adjust itself dynamically to changes in software, hardware resources, and environmental conditions. This paper describes the architectural approach to achieving the goals of dynamically self-configuring automotive embedded electronic systems pursued by the EU research project DySCAS. The architecture solution outlined in this paper captures the application and operational contexts, expected features, middleware services, functions and behaviours, as well as the basic mechanisms and technologies. The paper also covers the architecture conceptualization, presenting the rationale behind the architecture structuring, control principles, and deployment concept. We also present the adopted architecture V&V strategy and discuss some open issues regarding industrial acceptance.
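    To make the plug-and-play idea concrete, the sketch below shows a minimal, hypothetical service registry in Python that enables or disables a feature as components are attached and detached at run time. The class and function names are illustrative assumptions and are not taken from the DySCAS middleware.

        # Hypothetical sketch of a self-configuring service registry (not the
        # DySCAS API): components register at run time and dependent features
        # are enabled or disabled as the configuration changes.
        class ServiceRegistry:
            def __init__(self):
                self.services = {}          # name -> provider object
                self.listeners = []         # callbacks notified on every change

            def subscribe(self, callback):
                self.listeners.append(callback)

            def attach(self, name, provider):
                """Called when a device or software component is plugged in."""
                self.services[name] = provider
                self._notify()

            def detach(self, name):
                """Called when a component disappears (e.g. hardware failure)."""
                self.services.pop(name, None)
                self._notify()

            def _notify(self):
                for callback in self.listeners:
                    callback(set(self.services))

        def feature_manager(available):
            # Enable a feature only when all of its required services are present.
            required = {"camera", "display"}
            state = "enabled" if required <= available else "disabled"
            print(f"rear-view feature {state}; available services: {sorted(available)}")

        registry = ServiceRegistry()
        registry.subscribe(feature_manager)
        registry.attach("display", object())
        registry.attach("camera", object())   # feature becomes enabled
        registry.detach("camera")             # feature is disabled again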

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques that eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods that have been used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. Comment: accepted for publication in IEEE Communications Surveys and Tutorials.
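    As one concrete example of the techniques such a survey covers, the Python sketch below implements a simple dual-prediction scheme: the sensor node and the sink fit the same linear model to recent readings, and the node transmits only when its prediction misses by more than a threshold, saving radio energy. The model, threshold, and readings are illustrative assumptions, not an algorithm prescribed by the paper.

        # Sketch of a dual-prediction scheme for a sensor node (illustrative).
        # Node and sink share the same predictor; a reading is transmitted only
        # when the prediction misses by more than `eps`, reducing radio traffic.
        from collections import deque

        class LinearPredictor:
            """Least-squares line fit over a sliding window of recent samples."""
            def __init__(self, window=5):
                self.history = deque(maxlen=window)

            def update(self, value):
                self.history.append(value)

            def predict(self):
                n = len(self.history)
                if n < 2:
                    return self.history[-1] if self.history else 0.0
                xs = range(n)
                mean_x, mean_y = (n - 1) / 2, sum(self.history) / n
                num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, self.history))
                den = sum((x - mean_x) ** 2 for x in xs)
                slope = num / den
                return mean_y + slope * (n - mean_x)   # extrapolate one step ahead

        def run_node(readings, eps=0.5):
            node, sink = LinearPredictor(), LinearPredictor()
            sent = 0
            for r in readings:
                if abs(node.predict() - r) > eps:       # prediction missed: transmit
                    sent += 1
                    node.update(r); sink.update(r)      # both sides see the real value
                else:                                   # prediction close enough: stay silent
                    p = node.predict()
                    node.update(p); sink.update(p)      # both sides reuse the prediction
            print(f"transmitted {sent} of {len(readings)} readings")

        run_node([20.0, 20.1, 20.2, 20.3, 23.0, 23.1, 23.2, 23.3])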

    Edge AI for Internet of Energy: Challenges and Perspectives

    The digital landscape of the Internet of Energy (IoE) is on the brink of a revolutionary transformation with the integration of edge Artificial Intelligence (AI). This comprehensive review elucidates the promise and potential that edge AI holds for reshaping the IoE ecosystem. Commencing with a carefully curated research methodology, the article delves into the edge AI techniques specifically tailored for IoE. Their many benefits, spanning from reduced latency and real-time analytics to information security, scalability, and cost-efficiency, underscore the indispensability of edge AI in modern IoE frameworks. As the narrative progresses, readers are acquainted with pragmatic applications and techniques, highlighting on-device computation, secure private inference methods, and emerging paradigms for AI training on the edge. A critical analysis follows, offering a deep dive into present challenges including security concerns, computational hurdles, and standardization issues. The review culminates in a forward-looking perspective, envisaging the future symbiosis of 5G networks, federated edge AI, deep reinforcement learning, and more, painting a vibrant panorama of what the future holds. For anyone vested in the domains of IoE and AI, this review offers both a foundation and a visionary lens, bridging present realities with future possibilities.
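    To ground the federated edge AI direction mentioned above, here is a minimal Python sketch of federated averaging: each edge device fits a local model to its private smart-meter data and only the model parameters are aggregated, so raw energy measurements never leave the device. The linear model, the synthetic data, and the single aggregation round are illustrative assumptions.

        # Minimal federated-averaging sketch (illustrative): edge devices fit a
        # local linear model to private smart-meter data; only model weights are
        # shared and averaged, never the raw measurements.
        import numpy as np

        rng = np.random.default_rng(0)

        def local_fit(X, y):
            """Ordinary least squares on one device's private data."""
            Xb = np.hstack([X, np.ones((len(X), 1))])        # add bias column
            w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
            return w, len(X)

        def federated_average(updates):
            """Weight each device's parameters by its number of samples."""
            total = sum(n for _, n in updates)
            return sum(w * (n / total) for w, n in updates)

        # Hypothetical per-device data: consumption = 2*temperature + 5 + noise.
        devices = []
        for _ in range(3):
            X = rng.uniform(10, 30, size=(50, 1))
            y = 2.0 * X[:, 0] + 5.0 + rng.normal(0, 0.5, size=50)
            devices.append((X, y))

        global_w = federated_average([local_fit(X, y) for X, y in devices])
        print("aggregated model [slope, bias]:", np.round(global_w, 2))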

    A Model-Based Code Generation Framework for Parallel and Distributed Embedded Systems

    Doctoral dissertation, Seoul National University, Graduate School, College of Engineering, Department of Computer Science and Engineering, February 2020. Advisor: Soonhoi Ha.
    While various software development methodologies have been proposed to increase the design productivity and maintainability of software, they usually focus on the development of application software running on a single processing element, without concern for the non-functional requirements of an embedded system such as latency and resource requirements. In this thesis, we present a model-based software development method for parallel and distributed embedded systems. An application is specified, independently of the hardware platform, as a hierarchical set of tasks that follow given rules for communication and synchronization. These rules enable static analysis that detects certain software errors at compile time, reducing the verification effort. A platform-specific program is synthesized automatically after the mapping of tasks onto processing elements has been determined. We propose a program synthesizer that generates code satisfying the platform requirements of parallel and distributed embedded systems. Since the dynamic behavior of an application is expressed by multiple formal models composed hierarchically, the synthesizer can manage multiple task graphs with different hierarchies and run tasks in parallel. The synthesizer also shows how code for heterogeneous platforms is managed and how various communication methods are generated. The viability of the proposed development method is verified with a real-life surveillance application that runs on six processing elements connected by three kinds of networks, and with a remote deep learning example that uses heterogeneous multiprocessing components in a distributed system. By measuring and estimating the development cost, we also show that supporting a new platform or network requires relatively little effort. Because tolerance to unexpected hardware errors is a required feature of many embedded systems, we additionally support automatic generation of fault-tolerant code. Fault tolerance is applied by transforming the task graph according to the selected fault-tolerance configuration, so this non-functional requirement can be adopted easily by an application developer. To quantify the effort saved, we compare against a manual implementation of fault tolerance, and we test the fault-tolerance method with a fault injection tool that reproduces fault scenarios and injects faults at random.
    The fault injection tool used in these experiments is another contribution of this thesis: a run-time fault injection framework that can inject faults into both the kernel and the application layer of Linux-based systems. Emulating fault scenarios by intentionally injecting faults is a widely used way to test and verify the robustness of a system. For kernel-level injection, two complementary techniques are provided: one based on the Kernel GNU Debugger (KGDB) and the other using hardware breakpoints supported by the ARM architecture. For application-level injection, a GDB-based method injects faults into an application running on the same or a remote system. The viability of the proposed fault injection tool is demonstrated with real-life experiments on an ODROID-XU4 board.
    Table of contents: Chapter 1, Introduction (motivation; contribution; dissertation organization). Chapter 2, Background (HOPES: Hope of Parallel Embedded Software; the Universal Execution Model: task graph, dataflow, task code and generic APIs, and meta-data specification). Chapter 3, Program Synthesis for Parallel and Distributed Embedded Systems (motivational example; synthesis overview; synthesis from hierarchically-mixed models; platform code synthesis; communication code synthesis; experiments; document generation; related works). Chapter 4, Model Transformation for Fault-tolerant Code Synthesis (synthesis techniques; applying fault tolerance in HOPES; experiments; random fault injection experiments; related works). Chapter 5, Fault Injection Framework for Linux-based Embedded Systems (background; framework architecture and implementation; experiments). Chapter 6, Conclusion. Bibliography. Abstract in Korean.
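    To illustrate the graph-transformation idea behind the fault-tolerant code synthesis described above, the following Python sketch duplicates a selected task in a small task graph and inserts a voter node fed by the replicas. The data structure and names are a generic illustration under assumed conventions, not the actual HOPES model or code generator.

        # Illustrative sketch (not the HOPES implementation): apply a
        # replication-based fault-tolerance transformation to a task graph by
        # duplicating a task and adding a voter that consumes the replicas.
        from dataclasses import dataclass, field

        @dataclass
        class TaskGraph:
            tasks: set = field(default_factory=set)
            edges: set = field(default_factory=set)   # (producer, consumer) pairs

            def add_edge(self, src, dst):
                self.tasks.update({src, dst})
                self.edges.add((src, dst))

        def replicate_task(graph, task, replicas=2):
            """Replace `task` with N replicas feeding a voter task."""
            new = TaskGraph()
            copies = [f"{task}_rep{i}" for i in range(replicas)]
            voter = f"{task}_voter"
            for src, dst in graph.edges:
                if dst == task:                  # fan the input out to every replica
                    for c in copies:
                        new.add_edge(src, c)
                elif src == task:                # downstream tasks now read from the voter
                    new.add_edge(voter, dst)
                else:
                    new.add_edge(src, dst)
            for c in copies:                     # replicas feed the voter
                new.add_edge(c, voter)
            return new

        g = TaskGraph()
        g.add_edge("camera", "detect")
        g.add_edge("detect", "display")
        print(sorted(replicate_task(g, "detect").edges))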

    TinyML: Tools, Applications, Challenges, and Future Research Directions

    In recent years, Artificial Intelligence (AI) and Machine Learning (ML) have gained significant interest from both industry and academia. Notably, conventional ML techniques require enormous amounts of power to meet the desired accuracy, which has limited their use mainly to high-capability devices such as network nodes. However, with advancements in technologies such as the Internet of Things (IoT) and edge computing, it is desirable to incorporate ML techniques into resource-constrained embedded devices for distributed and ubiquitous intelligence. This has motivated the emergence of the TinyML paradigm, an embedded ML approach that enables ML applications on cheap, resource- and power-constrained devices. However, during this transition towards appropriate implementation of TinyML technology, multiple challenges, such as optimizing processing capacity, improving reliability, and maintaining the accuracy of learning models, require timely solutions. In this article, the various avenues available for TinyML implementation are reviewed. First, a background of TinyML is provided, followed by detailed discussions of the various tools supporting TinyML. Then, state-of-the-art applications of TinyML using advanced technologies are detailed. Lastly, various research challenges and future directions are identified. Comment: 12 pages, 3 tables, 4 figures.
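    Because memory and power budgets are the central constraint in TinyML, a short illustration helps: the Python sketch below performs post-training 8-bit affine quantization of a weight tensor, the kind of size reduction TinyML toolchains apply before deploying a model to a microcontroller. The tensor values are hypothetical, and a real deployment would use a framework's converter rather than this hand-rolled version.

        # Illustrative post-training 8-bit affine quantization of a weight tensor.
        # Shows why quantization matters for TinyML: 4x smaller storage than
        # float32, at the cost of a small rounding error. Values are hypothetical.
        import numpy as np

        def quantize(weights):
            """Map float weights to uint8 with a scale and zero-point."""
            w_min, w_max = float(weights.min()), float(weights.max())
            scale = (w_max - w_min) / 255.0
            zero_point = round(-w_min / scale)
            q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
            return q, scale, zero_point

        def dequantize(q, scale, zero_point):
            return (q.astype(np.float32) - zero_point) * scale

        weights = np.random.default_rng(1).normal(0.0, 0.3, size=(64, 32)).astype(np.float32)
        q, scale, zp = quantize(weights)

        print(f"float32 size: {weights.nbytes} bytes, uint8 size: {q.nbytes} bytes")
        print(f"max abs error after round-trip: {np.abs(dequantize(q, scale, zp) - weights).max():.4f}")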