
    Efficient Neural Network Implementations on Parallel Embedded Platforms Applied to Real-Time Torque-Vectoring Optimization Using Predictions for Multi-Motor Electric Vehicles

    The combination of machine learning and heterogeneous embedded platforms enables new potential for developing sophisticated control concepts applicable to the field of vehicle dynamics and ADAS. This interdisciplinary work provides enabler solutions, ultimately implementing fast predictions using neural networks (NNs) on field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), and applies them to a challenging application: torque vectoring on a multi-motor electric vehicle for enhanced vehicle dynamics. The foundation motivating this work is provided by discussing multiple domains of the technological context as well as the constraints of the automotive field, which contrast with the attractiveness of exploiting the capabilities of new embedded platforms to apply advanced control algorithms to complex control problems. In this particular case, we target enhanced vehicle dynamics on a multi-motor electric vehicle, benefiting from the greater degrees of freedom and controllability offered by such powertrains. Considering the constraints of the application and the implications of the selected multivariable optimization challenge, we propose an NN to provide batch predictions for real-time optimization. This leads to the major contribution of this work: efficient NN implementations on two intrinsically parallel embedded platforms, a GPU and an FPGA, following an analysis of the theoretical and practical implications of their different operating paradigms, in order to efficiently harness their computing potential while gaining insight into their peculiarities. The achieved results exceed expectations and additionally provide a representative illustration of the strengths and weaknesses of each kind of platform. Consequently, having shown the applicability of the proposed solutions, this work contributes valuable enablers for further developments following similar fundamental principles.

    Some of the results presented in this work are related to activities within the 3Ccar project, which has received funding from the ECSEL Joint Undertaking under grant agreement No. 662192. This Joint Undertaking received support from the European Union's Horizon 2020 research and innovation programme and Germany, Austria, Czech Republic, Romania, Belgium, United Kingdom, France, Netherlands, Latvia, Finland, Spain, Italy, and Lithuania. This work was also partly supported by the project ENABLE-S3, which received funding from the ECSEL Joint Undertaking under grant agreement No. 692455-2.
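
    As a rough illustration of the batch-prediction idea, the following sketch (Python/NumPy) evaluates many candidate torque distributions in a single forward pass of a small multilayer perceptron and keeps the one with the lowest predicted cost. The network size, weights, and cost interpretation are hypothetical stand-ins, not the paper's actual model.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical trained weights for a 4-16-1 MLP
        # (four per-motor torques -> one predicted cost).
        W1, b1 = rng.standard_normal((4, 16)), np.zeros(16)
        W2, b2 = rng.standard_normal((16, 1)), np.zeros(1)

        def predict_cost(batch):
            """Forward pass over a batch of candidate torque vectors (N x 4)."""
            h = np.tanh(batch @ W1 + b1)   # hidden layer
            return (h @ W2 + b2).ravel()   # predicted cost per candidate

        # 1024 candidate torque distributions evaluated in one batch,
        # mirroring how a GPU or FPGA would process them in parallel.
        candidates = rng.uniform(-250.0, 250.0, size=(1024, 4))  # Nm per motor
        best = candidates[np.argmin(predict_cost(candidates))]
        print("best candidate torques [Nm]:", best)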

    Going Deeper with Convolutions

    We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22-layer deep network, the quality of which is assessed in the context of classification and detection.
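
    The sketch below (Python/PyTorch) illustrates the multi-scale idea behind an Inception-style block: parallel 1x1, 3x3, and 5x5 convolutions plus a pooled branch, with 1x1 dimension-reduction layers keeping the computational budget in check. The channel counts are illustrative, not GoogLeNet's actual configuration.

        import torch
        import torch.nn as nn

        class InceptionBlock(nn.Module):
            def __init__(self, in_ch):
                super().__init__()
                self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)   # 1x1 branch
                self.b2 = nn.Sequential(                        # 1x1 reduce, then 3x3
                    nn.Conv2d(in_ch, 16, 1), nn.ReLU(),
                    nn.Conv2d(16, 24, 3, padding=1))
                self.b3 = nn.Sequential(                        # 1x1 reduce, then 5x5
                    nn.Conv2d(in_ch, 4, 1), nn.ReLU(),
                    nn.Conv2d(4, 8, 5, padding=2))
                self.b4 = nn.Sequential(                        # pool, then 1x1 projection
                    nn.MaxPool2d(3, stride=1, padding=1),
                    nn.Conv2d(in_ch, 8, 1))

            def forward(self, x):
                # Concatenate branch outputs along channels: 16 + 24 + 8 + 8 = 56.
                return torch.cat([self.b1(x), self.b2(x),
                                  self.b3(x), self.b4(x)], dim=1)

        x = torch.randn(1, 32, 28, 28)
        print(InceptionBlock(32)(x).shape)  # torch.Size([1, 56, 28, 28])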

    Multi-capacity bin packing with dependent items and its application to the packing of brokered workloads in virtualized environments

    Providing resource allocation with performance-predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, in which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. Existing resource allocation solutions either assume that applications manage data transfer between their virtualized resources, or that cloud providers manage their internal networking resources. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource allocation solutions that provide predictability guarantees in settings in which neither application scheduling nor cloud provider resources can be managed or controlled by the broker. This paper addresses this problem: we define the Network-Constrained Packing (NCP) problem of finding the optimal mapping of brokered resources to applications with guaranteed performance predictability. We prove that NCP is NP-hard, and we define two special instances of the problem for which exact solutions can be found efficiently. We develop a greedy heuristic to solve the general instance of the NCP problem, and we evaluate its efficiency using simulations on various application workloads and network models.

    This work was done while the author was at Boston University. It was partially supported by NSF CISE awards #1430145, #1414119, #1239021, and #1012798.
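
    For orientation, the following sketch implements the classic first-fit-decreasing baseline for multi-capacity (vector) bin packing. It is not the paper's NCP heuristic: it ignores the item dependencies and network constraints that make NCP hard.

        def first_fit_decreasing(items, capacity):
            """items: list of demand vectors; capacity: per-bin capacity vector."""
            bins = []        # per-bin running totals of assigned demand
            assignment = {}  # item index -> bin index
            # Pack "large" items first: sort by total normalized demand.
            order = sorted(range(len(items)),
                           key=lambda i: -sum(d / c for d, c in zip(items[i], capacity)))
            for i in order:
                for b, used in enumerate(bins):
                    if all(u + d <= c for u, d, c in zip(used, items[i], capacity)):
                        bins[b] = [u + d for u, d in zip(used, items[i])]
                        assignment[i] = b
                        break
                else:  # no existing bin fits, so open a new one
                    bins.append(list(items[i]))
                    assignment[i] = len(bins) - 1
            return bins, assignment

        # Demands are (CPU cores, RAM GB, bandwidth Gbps); capacities are per host.
        items = [(4, 8, 2), (2, 2, 1), (3, 6, 3), (1, 1, 1), (4, 4, 2)]
        bins, assignment = first_fit_decreasing(items, capacity=(8, 16, 4))
        print(len(bins), "bins used;", assignment)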

    A knowledge-based system with learning for computer communication network design

    Computer communication network design is well known to be a complex and hard problem. For that reason, the most effective methods used to solve it are heuristic. The weaknesses of these techniques are listed, and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet-switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment for dealing with many relevant design parameters such as link cost, link capacity, and message delay.

    Active Topology Inference using Network Coding

    Our goal is to infer the topology of a network when (i) we can send probes between sources and receivers at the edge of the network and (ii) intermediate nodes can perform simple network coding operations, i.e., additions. Our key intuition is that network coding introduces topology-dependent correlation in the observations at the receivers, which can be exploited to infer the topology. For undirected tree topologies, we design hierarchical clustering algorithms, building on our prior work. For directed acyclic graphs (DAGs), we first decompose the topology into a number of two-source, two-receiver (2-by-2) subnetwork components and then merge these components to reconstruct the topology. Our approach for DAGs builds on prior work in tomography and improves upon it by employing network coding to accurately distinguish among all different 2-by-2 components. We evaluate our algorithms through simulation of a number of realistic topologies and compare them to active tomographic techniques without network coding. We also make connections between our approach and alternatives, including passive inference, traceroute, and packet marking.
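
    The sketch below illustrates the clustering intuition for undirected trees: receivers whose observations are most correlated share the longest common path from the source, so repeatedly merging the most-correlated pair recovers the hierarchy. The similarity matrix here is synthetic; the paper derives it from network-coded probe observations.

        import numpy as np

        def agglomerate(sim, names):
            """Merge the most similar pair of clusters until one remains."""
            clusters = [[n] for n in names]
            sim = sim.astype(float).copy()
            while len(clusters) > 1:
                np.fill_diagonal(sim, -np.inf)          # ignore self-similarity
                i, j = divmod(int(np.argmax(sim)), sim.shape[0])
                i, j = min(i, j), max(i, j)
                print("merge:", clusters[i], "+", clusters[j])
                # Average-linkage update, then drop row/column j.
                sim[i, :] = (sim[i, :] + sim[j, :]) / 2
                sim[:, i] = sim[i, :]
                sim = np.delete(np.delete(sim, j, 0), j, 1)
                clusters[i] += clusters.pop(j)

        # Synthetic pairwise correlations among four receivers R1..R4;
        # (R1, R2) and (R3, R4) share long common paths.
        sim = np.array([[1.0, 0.9, 0.2, 0.1],
                        [0.9, 1.0, 0.3, 0.2],
                        [0.2, 0.3, 1.0, 0.8],
                        [0.1, 0.2, 0.8, 1.0]])
        agglomerate(sim, ["R1", "R2", "R3", "R4"])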

    Cross-layer Balanced and Reliable Opportunistic Routing Algorithm for Mobile Ad Hoc Networks

    To improve the efficiency and reliability of opportunistic routing, in this paper we propose the cross-layer balanced and reliable opportunistic routing algorithm (CBRT) for mobile ad hoc networks, which introduces improved-efficiency fuzzy logic and humoral-regulation-inspired topology control into the opportunistic routing algorithm. In CBRT, the inputs of the fuzzy logic system are the relative variances (rv) of the metrics rather than the values of the metrics themselves, which reduces the number of fuzzy rules dramatically; moreover, the number of fuzzy rules does not increase when the number of inputs increases. To reduce the control cost, in CBRT the node degree in the candidate relay set is a range rather than a constant number. Nodes are divided into different categories based on their node degree in the candidate relay set and adjust their transmission range based on the category to which they belong. Additionally, to investigate the effect of node mobility on routing performance, we propose a link lifetime prediction algorithm that takes both the moving speed and the moving direction into account. In CBRT, the source node determines the relaying priorities of the relaying nodes based on their utilities: a relaying node with a large utility has high priority to relay the data packet. With these innovations, the network performance of CBRT is much better than that of ExOR, while its computational complexity is not increased.

    Comment: 14 pages, 17 figures, 31 formulas, IEEE Sensors Journal, 201
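
    The sketch below shows a standard geometric form of link-lifetime prediction from speed and direction: assuming constant velocities, solve |p_rel + v_rel * t| = R for the time t at which a node pair leaves transmission range R. This is the textbook derivation, not necessarily CBRT's exact formulation.

        import math

        def link_lifetime(p1, v1, p2, v2, radio_range):
            """Time until two nodes drift out of range (inf if they never do)."""
            px, py = p2[0] - p1[0], p2[1] - p1[1]   # relative position
            vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
            a = vx * vx + vy * vy
            b = 2 * (px * vx + py * vy)
            c = px * px + py * py - radio_range ** 2
            if c > 0:
                return 0.0         # already out of range
            if a == 0:
                return math.inf    # identical velocities: distance never changes
            # Larger root of a*t^2 + b*t + c = 0 is the exit time.
            return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

        # Node 1 at the origin moving east at 10 m/s; node 2 located 100 m
        # north, moving west at 10 m/s; 250 m radio range.
        t = link_lifetime((0, 0), (10, 0), (0, 100), (-10, 0), radio_range=250)
        print(f"predicted link lifetime: {t:.1f} s")  # ~11.5 s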