
    LunaNet: a Flexible and Extensible Lunar Exploration Communications and Navigation Infrastructure

    NASA has set the ambitious goal of establishing a sustainable human presence on the Moon. Diverse commercial and international partners are engaged in this effort to catalyze scientific discovery, lunar resource utilization, and economic development both on Earth and at the Moon. Lunar development will serve as a critical proving ground for deeper exploration into the solar system. Space communications and navigation infrastructure will play an integral part in realizing this goal. This paper provides a high-level description of an extensible and scalable lunar communications and navigation architecture, known as LunaNet. LunaNet is a services network that enables lunar operations. Three LunaNet service types are defined: networking services; position, navigation, and timing services; and science utilization services. The LunaNet architecture encompasses a wide variety of topology implementations, including surface and orbiting provider nodes. In this paper, several systems engineering considerations within the service architecture are highlighted. Additionally, several alternative LunaNet instantiations are presented. Extensibility of the LunaNet architecture to the solar system internet is also discussed.

    Enhancing Real-time Embedded Image Processing Robustness on Reconfigurable Devices for Critical Applications

    Nowadays, image processing is increasingly used in several application fields, such as biomedical, aerospace, or automotive. Within these fields, image processing serves both non-critical and critical tasks. For example, in automotive, cameras are becoming key sensors for increasing car safety, driving assistance, and driving comfort. They have been employed for infotainment (non-critical), as well as for some driver assistance tasks (critical), such as Forward Collision Avoidance, Intelligent Speed Control, or Pedestrian Detection. The complexity of these algorithms poses a challenge for real-time image processing systems, requiring high computing capacity that is usually not available in embedded processors. Hardware acceleration is therefore crucial, and devices such as Field Programmable Gate Arrays (FPGAs) best fit the growing demand for computational capability. These devices can assist embedded processors by significantly speeding up computationally intensive software algorithms. Moreover, critical applications introduce strict requirements not only in terms of real-time constraints, but also from the device reliability and algorithm robustness points of view. Technology scaling is highlighting reliability problems related to aging phenomena and to the increasing sensitivity of digital devices to external radiation events, which can cause transient or even permanent faults. These faults can lead to incorrectly processed information or, in the worst case, to a dangerous system failure. In this context, the reconfigurable nature of FPGA devices can be exploited to increase system reliability and robustness by leveraging Dynamic Partial Reconfiguration features.
    The research work presented in this thesis focuses on the development of techniques for implementing efficient and robust real-time embedded image processing hardware accelerators and systems for mission-critical applications. Three main challenges have been faced and will be discussed, along with the proposed solutions, throughout the thesis: (i) achieving real-time performance, (ii) enhancing algorithm robustness, and (iii) increasing overall system dependability. To ensure real-time performance, efficient FPGA-based hardware accelerators implementing selected image processing algorithms have been developed. The functionalities offered by the target technology and the algorithms' characteristics have been constantly taken into account while designing these accelerators, in order to efficiently tailor each algorithm's operations to the available hardware resources. On the other hand, the key idea for increasing image processing algorithms' robustness is to introduce self-adaptivity features at the algorithm level, in order to maintain constant, or improve, the quality of results over a wide range of input conditions that are not always fully predictable at design time (e.g., noise level variations). This has been accomplished by measuring at run-time some characteristics of the input images, and then tuning the algorithm parameters based on such estimations. Dynamic reconfiguration features of modern reconfigurable FPGAs have been extensively exploited to integrate run-time adaptivity into the designed hardware accelerators. Tools and methodologies have also been developed to increase overall system dependability during reconfiguration processes, thus providing safe run-time adaptation mechanisms. In addition, taking into account the target technology and the environments in which the developed hardware accelerators and systems may be employed, dependability issues have been analyzed, leading to the development of a platform for quickly assessing the reliability, and characterizing the behavior, of hardware accelerators implemented on reconfigurable FPGAs when they are affected by such faults.
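    As a software illustration of the run-time self-adaptivity idea described in this abstract, the sketch below estimates each frame's noise level and selects a filter strength from it; the estimator, thresholds, and parameter mapping are illustrative assumptions, not the thesis's actual hardware logic.

```python
import numpy as np

def estimate_noise_sigma(frame: np.ndarray) -> float:
    """Rough noise estimate from the median absolute deviation of a high-pass residual.

    Neighboring-pixel differences double the noise variance, hence the sqrt(2);
    0.6745 converts MAD to a Gaussian standard deviation.
    """
    residual = np.diff(frame.astype(np.float64), axis=1)  # cheap high-pass filter
    return float(np.median(np.abs(residual)) / (0.6745 * np.sqrt(2.0)))

def select_filter_strength(noise_sigma: float) -> int:
    """Map the noise estimate to a filter parameter (illustrative thresholds)."""
    if noise_sigma < 2.0:
        return 0   # clean input: skip denoising entirely
    if noise_sigma < 8.0:
        return 1   # light smoothing
    return 2       # heavy smoothing / reconfigure a stronger accelerator variant

frame = np.random.default_rng(0).normal(128, 5, size=(480, 640))
print("selected filter strength:", select_filter_strength(estimate_noise_sigma(frame)))
```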

    A Survey of Green Networking Research

    Reduction of unnecessary energy consumption is becoming a major concern in wired networking, because of the potential economic benefits and the expected environmental impact. These issues, usually referred to as "green networking", relate to embedding energy-awareness in the design, devices, and protocols of networks. In this work, we first formulate a more precise definition of the "green" attribute. We furthermore identify a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art and provide a taxonomy of the relevant work, with a special focus on wired networking. At a high level, we identify four branches of green networking research that stem from different observations on the root causes of energy waste, namely (i) Adaptive Link Rate, (ii) Interface Proxying, (iii) Energy-aware Infrastructures, and (iv) Energy-aware Applications. In this work, we not only explore specific proposals pertaining to each of the above branches, but also offer a perspective for further research.
    Index Terms: Green Networking; Wired Networks; Adaptive Link Rate; Interface Proxying; Energy-aware Infrastructures; Energy-aware Applications.
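    A minimal sketch of the Adaptive Link Rate idea surveyed above, assuming a hypothetical rate/power table and headroom factor (none of these figures come from the survey):

```python
# Hypothetical Ethernet rate/power table (Mb/s -> W); values are illustrative only.
RATES = [(100, 0.4), (1000, 0.9), (10000, 5.0)]

def pick_rate(load_mbps: float, headroom: float = 1.25) -> tuple[int, float]:
    """Return the slowest rate whose capacity covers load * headroom.

    This is the core of Adaptive Link Rate: scale the link speed (and hence
    its power draw) down to the actual traffic demand instead of always
    running at the maximum negotiated rate.
    """
    target = load_mbps * headroom
    for rate, power in RATES:
        if rate >= target:
            return rate, power
    return RATES[-1]  # saturated: stay at the top rate

for load in (30, 600, 4000):
    rate, power = pick_rate(load)
    print(f"load {load:>5} Mb/s -> negotiate {rate} Mb/s (~{power} W)")
```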

    Symmetry-Adapted Machine Learning for Information Security

    Symmetry-adapted machine learning has shown encouraging ability to mitigate security risks in information and communication technology (ICT) systems. It is a subset of artificial intelligence (AI) that relies on the principle of anticipating future events by learning from past events or historical data. The autonomous nature of symmetry-adapted machine learning supports effective data processing and analysis for security detection in ICT systems without human intervention. Many industries are developing machine-learning-adapted solutions to support security for smart hardware, distributed computing, and the cloud. In our Special Issue book, we focus on the deployment of symmetry-adapted machine learning for information security in various application areas. This security approach can support effective methods for handling the dynamic nature of security attacks by extracting and analyzing data to identify hidden patterns. The main topics of this Issue include malware classification, intrusion detection systems, image and color image watermarking, a battlefield target aggregation behavior recognition model, IP cameras, Internet of Things (IoT) security, service function chains, indoor positioning systems, and cryptanalysis.

    SANTO: Social Aerial NavigaTion in Outdoors

    In recent years, advances in remote connectivity, miniaturization of electronic components, and computing power have led to the integration of these technologies into everyday devices such as cars or aerial vehicles. Among these, a consumer-grade option that has gained popularity is the drone or unmanned aerial vehicle, notably the quadrotor. Although until recently they have not been used for commercial applications, their inherent potential for tasks where small and intelligent devices are needed is huge. However, while the integrated hardware has advanced exponentially, the software used for these applications has not yet been refined to the same degree. Recently, this shift has become visible in the improvement of common tasks in the field of robotics, such as object tracking or autonomous navigation. Moreover, these challenges become even bigger when taking into account the dynamic nature of the real world, where knowledge of the current environment is constantly changing. Such settings are considered in the improvement of robot-human interaction, where the potential use of these devices is clear and algorithms are being developed to improve the situation. Using the latest advances in artificial intelligence, human brain behavior is simulated by so-called neural networks, so that the computing system performs as similarly as possible to a human. To this end, the system learns by trial and error which, akin to human learning, requires a considerable set of prior experiences for the algorithm to retain the desired behavior. Applying these technologies to robot-human interaction does narrow the gap. Even so, from a bird's-eye view, a noticeable share of the time spent applying these technologies goes into the curation of a high-quality dataset, in order to ensure that the learning process is optimal and no wrong actions are retained. It is therefore essential to have a development platform in place to enforce these principles throughout the whole process of creating and optimizing the algorithm. In this work, multiple existing handicaps found in pipelines of this computational scale are exposed, and each is approached in an independent and simple manner, so that the proposed solutions can be leveraged by the maximum number of workflows. On one side, this project concentrates on reducing the number of bugs introduced by flawed data, to help researchers focus on developing more sophisticated models. On the other side, the shortage of integrated development systems for this kind of pipeline is addressed, with special care for those using simulated or controlled environments, with the goal of easing the continuous iteration of these pipelines.
    Thanks to the increasing popularity of drones, the research and development of autonomous capabilities has become easier. However, due to the challenge of integrating multiple technologies, the available software stack for engaging this task is restricted. In this thesis, we highlight the divergences among unmanned-aerial-vehicle simulators and propose a platform that allows faster and more in-depth prototyping of machine learning algorithms for these drones.
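    The data-quality concern raised in this abstract is commonly addressed with a validation gate at the front of the training pipeline. A minimal sketch, assuming a hypothetical sample layout and label set (not SANTO's actual interfaces):

```python
import numpy as np

EXPECTED_SHAPE = (240, 320, 3)   # hypothetical camera frame layout
NUM_CLASSES = 4                  # hypothetical action labels

def validate_sample(frame: np.ndarray, label: int) -> list[str]:
    """Return a list of problems; an empty list means the sample may enter training."""
    problems = []
    if frame.shape != EXPECTED_SHAPE:
        problems.append(f"bad shape {frame.shape}")
    if not np.isfinite(frame).all():
        problems.append("NaN/Inf pixel values")
    if not 0 <= label < NUM_CLASSES:
        problems.append(f"label {label} out of range")
    return problems

clean = np.zeros(EXPECTED_SHAPE, dtype=np.float32)
assert validate_sample(clean, 1) == []                         # accepted
assert validate_sample(clean, 9) == ["label 9 out of range"]   # rejected early
```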

    High-level services for networks-on-chip

    Future technology trends envision that next-generation Multiprocessor Systems-on-Chip (MPSoCs) will be composed of a large number of processing and storage elements interconnected by complex communication architectures. Communication and interconnection between these basic blocks play a crucially important role as the number of elements increases. Enabling reliable communication channels between cores therefore becomes a challenge for system designers. Networks-on-Chip (NoCs) emerged as a strategy for connecting and managing the communication between the several design elements and IP blocks required in complex Systems-on-Chip (SoCs). The topic can be considered a multidisciplinary synthesis of the multiprocessing, parallel computing, networking, and on-chip communication domains. In addition to standard communication services, Networks-on-Chip can be employed to support the implementation of system-level services. This dissertation demonstrates how high-level services can be added to an MPSoC platform by embedding appropriate hardware/software support in the network interfaces (NIs) of the NoC. The implementation of innovative modules acting in parallel with protocol translation and data transmission in the NIs is proposed and evaluated. These modules can support the execution of high-level services in the NoC at a relatively low cost in terms of area and energy consumption. Three types of services are addressed and discussed: security, monitoring, and fault tolerance. With respect to security, this dissertation discusses the implementation of an innovative data protection mechanism for detecting and preventing illegal accesses to protected memory blocks and/or memory-mapped peripherals. The second aspect is addressed by proposing a monitoring system based on programmable multipurpose monitoring probes aimed at detecting NoC internal events and run-time characteristics. As a last topic, new architectural solutions for the design of fault tolerant network interfaces are presented and discussed.
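    A software analogue of the NI-side data protection mechanism described above might look like the following sketch; the table layout, field names, and address ranges are assumptions made for illustration, not the dissertation's actual hardware design.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtectionEntry:
    base: int                     # first protected address
    limit: int                    # last protected address (inclusive)
    allowed_ids: frozenset[int]   # initiator cores allowed to access the range
    writable: bool

# Hypothetical protection table programmed into a network interface.
TABLE = [
    ProtectionEntry(0x8000_0000, 0x8000_FFFF, frozenset({0}), writable=True),
    ProtectionEntry(0x9000_0000, 0x9000_0FFF, frozenset({0, 1}), writable=False),
]

def access_allowed(initiator_id: int, addr: int, is_write: bool) -> bool:
    """Emulate the NI-side check: block illegal accesses to protected ranges."""
    for entry in TABLE:
        if entry.base <= addr <= entry.limit:
            if initiator_id not in entry.allowed_ids:
                return False               # initiator not authorized at all
            return entry.writable or not is_write
    return True  # address not protected: let the transaction through

assert access_allowed(0, 0x8000_0010, is_write=True)        # owner may write
assert not access_allowed(2, 0x8000_0010, is_write=False)   # stranger blocked
assert not access_allowed(1, 0x9000_0010, is_write=True)    # read-only range
```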

    A Versatile Lifting Device for Lunar Surface Payload Handling, Inspection and Regolith Transport Operations

    Devices for lifting and transporting payloads and material are critical for efficient Earth-based construction operations. Devices with similar functionality will be needed to support lunar-outpost construction, servicing, inspection, regolith excavation, grading, and payload placement. Past studies have proposed that only a few carefully selected devices are required for a lunar outpost. One particular set of operations involves lifting and manipulating payloads in the 100 kg to 3,000 kg range, which are too large or massive to be handled by unassisted astronauts. This paper reviews historical devices used for payload handling in space and on Earth to derive a set of desirable features for a device that can be used on planetary surfaces. Next, an innovative concept for a lifting device is introduced, which includes many of these desirable features. The versatility of the device is discussed, including its application to lander unloading, servicing, inspection, regolith excavation, and site preparation. Approximate rules, which can be used to size the device for specific payload mass and reach requirements, are provided. Finally, details of a test-bed implementation of the concept, which will be used to validate the structural design and develop operational procedures, are provided.
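    For a sense of what such sizing rules involve, the sketch below applies a generic static moment balance about the tipping edge; the paper's own approximate rules are not reproduced here, and the reach, arm, and margin values are assumptions.

```python
def counterweight_mass(payload_kg: float, reach_m: float,
                       counterweight_arm_m: float, margin: float = 1.5) -> float:
    """Static moment balance about the device's tipping edge.

    Requires counterweight * arm >= margin * payload * reach so the device
    does not tip while holding the payload at full reach.  Gravity cancels
    from both sides, so the same balance holds on the Moon as on Earth.
    Generic statics, not the paper's actual sizing rules.
    """
    return margin * payload_kg * reach_m / counterweight_arm_m

# Size for the upper payload bound quoted above (3,000 kg), assuming a
# hypothetical 5 m reach and 2 m counterweight arm.
print(f"{counterweight_mass(3000, reach_m=5.0, counterweight_arm_m=2.0):.0f} kg")
```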

    Multi-LSTM Acceleration and CNN Fault Tolerance

    This thesis addresses two problems in the field of Machine Learning: the acceleration of multiple Long Short-Term Memory (LSTM) models on FPGAs, and the fault tolerance of compressed Convolutional Neural Networks (CNNs). LSTMs are an effective solution for capturing long-term dependencies in sequential data, such as sentences in Natural Language Processing applications, video frames in Scene Labeling tasks, or temporal series in Time Series Forecasting. To further boost their efficacy, especially in the presence of long sequences, multiple LSTM models are utilized in a hierarchical and stacked fashion. However, because of their memory-bound nature, efficiently mapping multiple LSTMs onto a computing device becomes even more challenging. The first part of this thesis addresses the problem of mapping multiple LSTM models to an FPGA device by introducing a framework that modifies their memory requirements according to the target architecture. For a similar accuracy loss, the proposed framework maps multiple LSTMs with a performance improvement of 3x to 5x over state-of-the-art approaches. In the second part of this thesis, we investigate the fault tolerance of CNNs, another effective deep learning architecture. CNNs are a dominant solution in image classification tasks, but suffer from a high performance cost due to their computational structure. In fact, because of their large parameter space, fetching their data from main memory typically becomes a performance bottleneck. To tackle this problem, various techniques for compressing their parameters have been developed, such as weight pruning, weight clustering, and weight quantization. However, reducing the memory footprint of an application can make its data more sensitive to faults. For this thesis work, we have conducted an analysis to verify the conditions for applying OddECC, a mechanism that supports ECCs of variable strength and size for different memory regions. Our experiments reveal that compressed CNNs, whose memory footprint is reduced by up to 86.3x using the aforementioned compression schemes, exhibit accuracy drops of up to 13.56% in the presence of random single-bit faults.
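    The random single-bit fault model used in this kind of study can be reproduced in a few lines: flip one random bit in a tensor of quantized weights and re-evaluate accuracy. A minimal sketch of the injection step only, assuming int8 weights (OddECC and the compressed networks themselves are not modeled):

```python
import numpy as np

def inject_single_bit_fault(weights: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Flip one random bit in an int8 weight tensor (random single-bit fault model)."""
    faulty = weights.copy()
    flat = faulty.view(np.uint8).reshape(-1)   # reinterpret bytes without copying
    byte = rng.integers(flat.size)             # which byte to corrupt
    bit = rng.integers(8)                      # which bit within that byte
    flat[byte] ^= np.uint8(1 << bit)
    return faulty

rng = np.random.default_rng(42)
w = rng.integers(-128, 128, size=(64, 64), dtype=np.int8)
w_faulty = inject_single_bit_fault(w, rng)
print("weights changed:", int(np.count_nonzero(w != w_faulty)))  # exactly 1
```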

    Secure Multi-Path Selection with Optimal Controller Placement Using Hybrid Software-Defined Networks with Optimization Algorithm

    The Internet's growth in popularity requires computer networks to offer both agility and resilience, and traditional networking systems have recently proven unable to satisfy these needs. Software-Defined Networking (SDN) is known as a paradigm shift in the networking industry, and many organizations have adopted it for its transmission efficiency. Striking the right balance between SDN and legacy switching capabilities will enable successful network architectures. This work therefore targets a hybrid network scenario in which the external perimeter transport device is replaced with an SDN device in the service provider network. As networks move away from legacy equipment toward SDN, a hybrid SDN includes both legacy and SDN switches. Existing SDN models have limitations such as overfitting, local-optimum trapping, and poor path selection efficiency. This paper proposes a Deep Kronecker Neural Network (DKNN) to improve efficiency, together with a moderate optimization method for multipath selection in SDN. Dynamic resource scheduling is used for the reward function, and learning performance is improved with a deep reinforcement learning (DRL) technique. The centralized SDN controller acts as the network's brain in the control plane, and selecting the best SDN controller is among the most important duties in the network; the controller is vulnerable to intrusions and can become a network bottleneck. This study presents an intrusion detection system (IDS) based on the SDN model that runs as an application module within the controller, using a contractive auto-encoder for feature extraction and a triple-attention-based classifier for classification. Additionally, this study leverages OpenDayLight (ODL), one of the best performing SDN controllers and the basis for many others, which provides an open northbound API and supports multiple southbound protocols. The multi-controller placement problem (CPP) is one of the main issues that must be addressed in the SDN setting, specifically when aspects such as interruption, capacity, authenticity, and load distribution are considered. Introducing the scenario concept, the CPP is formulated as a robust optimization problem that considers changes in network status due to power outages, controller capacity, load fluctuations, and changes in switch demand. Finally, to improve network performance, the optimal set of controller placements is computed by simulated annealing over different topologies, using the modified Dragonfly optimization algorithm (MDOA).
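    To make the simulated-annealing step concrete, the sketch below anneals a placement of k controllers on a toy graph, minimizing worst-case switch-to-controller latency; the topology, cost function, and cooling schedule are illustrative assumptions rather than the paper's MDOA formulation.

```python
import math
import random

# Hypothetical topology: node -> {neighbor: link latency}; not from the paper.
GRAPH = {
    0: {1: 1, 2: 4}, 1: {0: 1, 2: 2, 3: 5},
    2: {0: 4, 1: 2, 3: 1}, 3: {1: 5, 2: 1, 4: 2}, 4: {3: 2},
}

def shortest_paths(src: int) -> dict[int, float]:
    """Dijkstra from src over GRAPH."""
    dist = {n: math.inf for n in GRAPH}
    dist[src] = 0
    todo = set(GRAPH)
    while todo:
        u = min(todo, key=dist.__getitem__)
        todo.remove(u)
        for v, w in GRAPH[u].items():
            dist[v] = min(dist[v], dist[u] + w)
    return dist

DIST = {n: shortest_paths(n) for n in GRAPH}

def cost(placement: list[int]) -> float:
    """Worst-case latency from any switch to its nearest controller."""
    return max(min(DIST[c][s] for c in placement) for s in GRAPH)

def anneal(k: int = 2, steps: int = 2000, temp: float = 5.0,
           cooling: float = 0.995, seed: int = 1) -> tuple[list[int], float]:
    rng = random.Random(seed)
    current = rng.sample(sorted(GRAPH), k)
    best = current[:]
    for _ in range(steps):
        cand = current[:]
        cand[rng.randrange(k)] = rng.choice(sorted(GRAPH))  # move one controller
        delta = cost(cand) - cost(current)
        # Accept improvements always; accept regressions with decaying probability.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = cand
            if cost(current) < cost(best):
                best = current[:]
        temp *= cooling
    return best, cost(best)

print(anneal())  # e.g. ([1, 3], 2) on this toy graph
```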