
    Self-Supervised Learning to Prove Equivalence Between Straight-Line Programs via Rewrite Rules

    We target the problem of automatically synthesizing proofs of semantic equivalence between two programs made of sequences of statements. We represent programs using abstract syntax trees (ASTs), where a given set of semantics-preserving rewrite rules can be applied to a specific AST pattern to generate a transformed and semantically equivalent program. In our system, two programs are equivalent if there exists a sequence of applications of these rewrite rules that rewrites one program into the other. We propose a neural network architecture based on a transformer model to generate proofs of equivalence between program pairs. The system outputs a sequence of rewrites, and the validity of the sequence is checked simply by verifying that it can be applied. If the neural network produces no valid sequence, the system reports the programs as non-equivalent, ensuring by design that no programs can be incorrectly reported as equivalent. Our system is fully implemented for a grammar that can represent straight-line programs with function calls and multiple types. To efficiently train the system to generate such sequences, we develop an original incremental training technique, named self-supervised sample selection. We extensively study the effectiveness of this novel training approach on proofs of increasing complexity and length. Our system, S4Eq, achieves 97% proof success on a curated dataset of 10,000 pairs of equivalent programs.
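    As an illustration of the verification step described above, the sketch below applies a toy rewrite-rule table to nested-tuple ASTs and accepts a proposed proof only if every rule in the sequence applies in order; the rule names, term encoding, and root-only application are illustrative stand-ins, not S4Eq's actual grammar or rules.

```python
# Minimal sketch of the proof-checking step: a proposed sequence of rewrites
# is valid iff every rule applies in order and the final term equals the target.
# The rule set here is a toy stand-in for the paper's semantics-preserving rules.

REWRITE_RULES = {
    # rule name -> (pattern matcher, transformer) over nested-tuple ASTs
    "commute_add": (
        lambda t: isinstance(t, tuple) and t[0] == "+",
        lambda t: ("+", t[2], t[1]),
    ),
    "add_zero": (
        lambda t: isinstance(t, tuple) and t[0] == "+" and t[2] == 0,
        lambda t: t[1],
    ),
}

def apply_sequence(term, rule_names):
    """Apply rewrites at the root; return the final term, or None if any rule fails."""
    for name in rule_names:
        matches, transform = REWRITE_RULES[name]
        if not matches(term):
            return None          # invalid proof: this rule does not apply here
        term = transform(term)
    return term

def check_proof(source, target, rule_names):
    """Programs are reported equivalent only if the whole sequence applies."""
    return apply_sequence(source, rule_names) == target

# ("+", x, 0) --add_zero--> x, so these two "programs" are provably equal.
print(check_proof(("+", "x", 0), "x", ["add_zero"]))  # True
print(check_proof(("+", "x", 1), "x", ["add_zero"]))  # False: reported non-equivalent
```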

    ENABLING EFFICIENT FLEET COMPOSITION SELECTION THROUGH THE DEVELOPMENT OF A RANK HEURISTIC FOR A BRANCH AND BOUND METHOD

    In the foreseeable future, autonomous mobile robots (AMRs) will become a key enabler for increasing productivity and flexibility in material handling in warehousing facilities, distribution centers, and manufacturing systems. The objective of this research is to develop and validate parametric models of AMRs, develop a ranking heuristic using a physics-based algorithm within the framework of the branch and bound method, integrate the ranking algorithm into a Fleet Composition Optimization (FCO) tool, and finally conduct simulations under various scenarios to verify the suitability and robustness of the developed tool in a factory equipped with AMRs. Kinematic equations are used to compute both energy and time consumption. Multivariate linear regression, a data-driven method, is used to design the ranking heuristic. The results indicate that the unique physical structure and parameters of each robot are the main factors contributing to differences in energy and time consumption. An improvement in computation time was demonstrated by comparing heuristic-based search against non-heuristic-based search. This research is expected to significantly improve the current nested fleet composition optimization tool by reducing computation time without sacrificing optimality. From a practical perspective, greater efficiency in reducing energy and time costs can be achieved.
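    To make the search strategy concrete, here is a minimal branch-and-bound sketch for a toy fleet composition problem; the robot parameters, the demand model, and the rank() bound standing in for the physics-based, regression-derived heuristic are all illustrative assumptions, not the paper's tool.

```python
import heapq

# Toy branch-and-bound sketch: choose a fleet (count per robot type) that meets a
# throughput demand at minimum cost. rank() stands in for the paper's heuristic
# that orders which branch to expand next.

ROBOTS = [  # (name, throughput per robot, cost per robot) -- illustrative numbers
    ("AMR-A", 4.0, 10.0),
    ("AMR-B", 3.0, 6.0),
    ("AMR-C", 1.0, 1.5),
]
DEMAND, MAX_PER_TYPE = 10.0, 5
BEST_RATE = min(cost / tp for _, tp, cost in ROBOTS)  # cheapest cost per throughput

def metrics(counts):
    met = sum(c * tp for c, (_, tp, _) in zip(counts, ROBOTS))
    cost = sum(c * cst for c, (_, _, cst) in zip(counts, ROBOTS))
    return met, cost

def rank(counts):
    """Optimistic bound: cost so far plus remaining demand covered at the best
    cost/throughput rate. A learned regression model could replace this ranking."""
    met, cost = metrics(counts)
    return cost + max(0.0, DEMAND - met) * BEST_RATE

def branch_and_bound():
    best_cost, best, seen = float("inf"), None, set()
    frontier = [(rank((0, 0, 0)), (0, 0, 0))]      # expand best-ranked nodes first
    while frontier:
        bound, counts = heapq.heappop(frontier)
        if bound >= best_cost:
            continue                                # prune: cannot beat incumbent
        met, cost = metrics(counts)
        if met >= DEMAND:
            best_cost, best = cost, counts          # feasible: new incumbent
            continue
        for i in range(len(ROBOTS)):                # branch: add one robot of type i
            child = counts[:i] + (counts[i] + 1,) + counts[i + 1:]
            if counts[i] < MAX_PER_TYPE and child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (rank(child), child))
    return best, best_cost

print(branch_and_bound())   # e.g. ((0, 2, 4), 18.0) under these toy numbers
```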

    Intelligent Control Schemes for Maximum Power Extraction from Photovoltaic Arrays under Faults

    Investigation of the power output from PV arrays under different fault conditions is an essential task for enhancing the performance of a photovoltaic system under all operating conditions. A significant reduction in power output can occur during various PV faults, such as module disconnection, bypass diode failure, bridge faults, and short-circuit faults under non-uniform shading conditions. These PV faults may cause several peaks in the characteristic curve of a PV array, which can lead to failure of the MPPT control strategy. In fact, the impact of a fault can differ depending on the type of PV array, and it can make control of the system more complex. Therefore, choosing suitable PV arrays together with an effective control design is necessary for maximum power output from a PV system. For this purpose, the proposed study presents a comparison of two intelligent control schemes, i.e., fuzzy logic (FL) and particle swarm optimization (PSO), with a conventional control scheme known as perturb and observe (P&O) for power extraction from a PV system. The comparative analysis is based on the performance of the control strategies under several faults and two types of PV modules, i.e., monocrystalline and thin-film PV arrays. In this study, numerical analysis of complex fault scenarios, such as multiple faults under partial shading, has also been performed. Unlike the previous literature, this study reveals the performance of FL-, PSO-, and P&O-based MPPT strategies in tracking the maximum peak power during multiple severe fault conditions, while considering both the accuracy and the tracking speed of the control techniques. A thorough analysis along with in-depth quantitative data is presented, confirming the superiority of the intelligent control techniques under multiple faults and different PV types.
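    For reference, here is a minimal sketch of the conventional P&O baseline mentioned above, using an assumed single-peak toy P-V curve; on the multi-peak curves produced by the faults described here, this hill-climbing logic can settle on a local peak, which is what the FL- and PSO-based schemes aim to avoid.

```python
# Perturb and observe (P&O): perturb the operating voltage, observe the power
# change, and keep moving in the direction that increases power.

def pv_power(v):
    """Illustrative single-peak P-V curve with its maximum near v = 30 V."""
    return max(0.0, v * (6.0 - 0.1 * v))   # ~90 W peak at 30 V

def perturb_and_observe(v=20.0, step=0.5, iterations=60):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step          # perturb
        p = pv_power(v)                # observe
        if p < p_prev:                 # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"tracked ~{v_mpp:.1f} V, {p_mpp:.1f} W")  # oscillates around the true MPP
```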

    Hardware Acceleration of Neural Graphics

    Rendering and inverse-rendering algorithms that drive conventional computer graphics have recently been superseded by neural representations (NRs). NRs have recently been used to learn the geometric and material properties of scenes and to use that information to synthesize photorealistic imagery, thereby promising a replacement for traditional rendering algorithms with scalable quality and predictable performance. In this work we ask the question: does neural graphics (NG) need hardware support? We studied representative NG applications and show that, to render 4K resolution at 60 FPS, there is a gap of 1.5X-55X between the desired performance and what current GPUs deliver. For AR/VR applications, there is an even larger gap of 2-4 orders of magnitude between the desired performance and the required system power. We identify the input encoding and the MLP kernels as the performance bottlenecks, consuming 72%, 60%, and 59% of application time for multi-resolution hashgrid, multi-resolution densegrid, and low-resolution densegrid encodings, respectively. We propose the neural graphics processing cluster (NGPC), a scalable and flexible hardware architecture that directly accelerates the input encoding and MLP kernels through dedicated engines and supports a wide range of NG applications. We also accelerate the remaining kernels by fusing them together in Vulkan, which leads to a 9.94X kernel-level performance improvement compared to an un-fused implementation of the pre-processing and post-processing kernels. Our results show that NGPC delivers up to a 58X end-to-end application-level performance improvement; for multi-resolution hashgrid encoding, the performance benefits averaged across the four NG applications are 12X, 20X, 33X, and 39X for scaling factors of 8, 16, 32, and 64, respectively. With multi-resolution hashgrid encoding, NGPC enables rendering at 4K resolution and 30 FPS for NeRF, and at 8K resolution and 120 FPS for all our other NG applications.
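    As a rough illustration of the input-encoding bottleneck, the NumPy sketch below implements a 2D multi-resolution hash grid lookup in the style of Instant-NGP; the level count, table size, hash primes, and bilinear interpolation are generic assumptions rather than the paper's exact kernel.

```python
import numpy as np

# Multi-resolution hash grid encoding: at each resolution level, hash the cell's
# integer corner coordinates into a small feature table and bilinearly
# interpolate. The concatenated features are what the small MLP then consumes.

L, T, F = 4, 2 ** 14, 2                      # levels, table size, features/entry
tables = np.random.default_rng(0).normal(0, 1e-4, (L, T, F))
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def hash_coords(ij):
    """Spatial hash of integer 2D coordinates into [0, T)."""
    h = np.bitwise_xor(ij[..., 0] * PRIMES[0], ij[..., 1] * PRIMES[1])
    return (h % np.uint64(T)).astype(np.int64)

def encode(xy):
    """Encode 2D points in [0,1)^2 into L*F features via hashed bilinear lookup."""
    feats = []
    for lvl in range(L):
        res = 16 * 2 ** lvl                  # grid resolution at this level
        pos = xy * res
        base = np.floor(pos).astype(np.uint64)
        frac = pos - base                    # position inside the cell
        acc = np.zeros(xy.shape[:-1] + (F,))
        for dx in (0, 1):                    # 4 cell corners, bilinear weights
            for dy in (0, 1):
                corner = base + np.array([dx, dy], dtype=np.uint64)
                w = ((frac[..., 0] if dx else 1 - frac[..., 0]) *
                     (frac[..., 1] if dy else 1 - frac[..., 1]))
                acc += w[..., None] * tables[lvl, hash_coords(corner)]
        feats.append(acc)
    return np.concatenate(feats, axis=-1)

print(encode(np.random.rand(5, 2)).shape)    # (5, 8): 5 points, L*F features each
```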

    Trainable Variational Quantum-Multiblock ADMM Algorithm for Generation Scheduling

    The advent of quantum computing could revolutionize how complex problems are solved. This paper proposes a two-loop quantum-classical solution algorithm for generation scheduling that fuses quantum computing, machine learning, and distributed optimization. The aim is to facilitate employing noisy near-term quantum machines with a limited number of qubits to solve practical power system optimization problems such as generation scheduling. The outer loop is a 3-block quantum alternating direction method of multipliers (QADMM) algorithm that decomposes the generation scheduling problem into three subproblems, including one quadratically unconstrained binary optimization (QUBO) and two non-QUBOs. The inner loop is a trainable quantum approximate optimization algorithm (T-QAOA) for solving the QUBO on a quantum computer. The proposed T-QAOA treats the interactions of the quantum-classical machines as sequential information and uses a recurrent neural network, with a proper sampling technique, to estimate the variational parameters of the quantum circuit. T-QAOA determines the QUBO solution in a few quantum-learner iterations instead of the hundreds of iterations needed by a conventional quantum-classical solver. The outer 3-block ADMM coordinates the QUBO and non-QUBO solutions to obtain the solution to the original problem. The conditions under which the proposed QADMM is guaranteed to converge are discussed. Two mathematical cases and three generation scheduling cases are studied. Analyses performed on quantum simulators and classical computers show the effectiveness of the proposed algorithm. The advantages of T-QAOA are discussed and numerically compared with a QAOA that uses a stochastic gradient descent-based optimizer.
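    The control flow of the two-loop scheme can be sketched on a toy problem as below; the 3-block split, the toy objective, and the brute-force QUBO solver standing in for T-QAOA are all illustrative assumptions, not the paper's formulation.

```python
import itertools
import numpy as np

# Structural sketch of a 3-block ADMM whose first block is a small QUBO; exhaustive
# enumeration stands in for the inner T-QAOA loop that would run on quantum hardware.
# Toy problem: min_{x,y,z}  x'Qx + (y - c)^2 + (mu/2) z^2   s.t.  a'x - y + z = 0.

rng = np.random.default_rng(1)
n = 4
Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2
a = rng.normal(size=n)
c, mu, rho = 1.5, 1.0, 2.0

def solve_qubo(M, q):
    """Stand-in for T-QAOA: exhaustively minimize x'Mx + q'x over binary x."""
    best, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        val = x @ M @ x + q @ x
        if val < best_val:
            best, best_val = x, val
    return best

x, y, z, lam = np.zeros(n), 0.0, 0.0, 0.0
for _ in range(50):
    # Block 1 (QUBO): augmented-Lagrangian terms fold into a binary quadratic form.
    x = solve_qubo(Q + (rho / 2) * np.outer(a, a), (lam + rho * (z - y)) * a)
    # Block 2 (continuous, closed form): minimize over y with x, z fixed.
    y = (2 * c + lam + rho * (a @ x + z)) / (2 + rho)
    # Block 3 (continuous, closed form): minimize over z with x, y fixed.
    z = -(lam + rho * (a @ x - y)) / (mu + rho)
    # Dual update on the coupling constraint a'x - y + z = 0.
    lam += rho * (a @ x - y + z)

print("x =", x, " residual =", round(a @ x - y + z, 4))
```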

    Linear to multi-linear algebra and systems using tensors

    In the past few decades, tensor algebra, also known as multi-linear algebra, has been developed and customized as a tool for various engineering applications. In particular, with the help of a special form of tensor contracted product, known as the Einstein product, and its properties, many known concepts from linear algebra can be extended to the multi-linear setting. This makes it possible to define a multi-linear system theory in which the input and output signals, as well as the system itself, are multi-domain in nature. This paper provides an overview of tensor algebra tools that can be seen as extensions of linear algebra, while highlighting the differences and advantages that the multi-linear setting brings. In particular, the notions of tensor inversion, tensor singular value decomposition, and tensor eigenvalue decomposition using the Einstein product are explained. In addition, this paper introduces the notion of contracted convolution for both discrete and continuous multi-linear system tensors. Tensor network representations of the various tensor operations are also presented, along with an application of these tensor tools to the development of transceiver schemes for multi-domain communication systems, illustrated with a MIMO CDMA system. This paper thus serves as an entry-point tutorial for graduate students whose research involves multi-domain or multi-modal signals and systems.
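    A small numerical sketch of the Einstein product and the tensor inverse it induces, using numpy.einsum; the fourth-order shapes are chosen purely for illustration.

```python
import numpy as np

# Einstein product: for A of shape (I1, I2, J1, J2) and B of shape (J1, J2, K1, K2),
# contract the shared pair of modes to get a tensor of shape (I1, I2, K1, K2).

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3, 2, 3))
B = rng.normal(size=(2, 3, 4, 5))

# Einstein product via einsum: sum over the two contracted indices j, k.
C = np.einsum("abjk,jkcd->abcd", A, B)
print(C.shape)                      # (2, 3, 4, 5)

# Identity tensor under this product: I[a, b, c, d] = delta(a, c) * delta(b, d).
I = np.einsum("ac,bd->abcd", np.eye(2), np.eye(3))
print(np.allclose(np.einsum("abjk,jkcd->abcd", I, B), B))  # True: unit element

# Tensor inversion: flatten the (2,3)x(2,3) index groups to a 6x6 matrix, invert,
# and fold back -- the standard isomorphism used to define the tensor inverse.
A_inv = np.linalg.inv(A.reshape(6, 6)).reshape(2, 3, 2, 3)
print(np.allclose(np.einsum("abjk,jkcd->abcd", A, A_inv), I))  # True
```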

    Deep Transfer Learning Applications in Intrusion Detection Systems: A Comprehensive Review

    Globally, the external Internet is increasingly being connected to contemporary industrial control systems. As a result, there is an immediate need to protect these networks from numerous threats. The key infrastructure of industrial activity can be protected from harm by using an intrusion detection system (IDS), a preventive mechanism that recognizes new kinds of dangerous threats and hostile activities. This study examines the most recent artificial intelligence (AI) techniques used to create IDSs for many kinds of industrial control networks, with a particular emphasis on IDSs based on deep transfer learning (DTL). DTL can be seen as a type of information fusion that merges and/or adapts knowledge from multiple domains to enhance performance on the target task, particularly when labeled data in the target domain is scarce. Publications issued after 2015 were taken into account. The selected publications were divided into three categories: DTL-only and IDS-only publications, which are covered in the introduction and background, and DTL-based IDS papers, which form the core of this review. By reading this review, researchers will gain a better grasp of the current state of DTL approaches used in IDSs across many different types of networks. Other useful information is also covered, such as the datasets used, the type of DTL employed, the pre-trained network, the IDS techniques, the evaluation metrics (including accuracy, F-score, and false alarm rate (FAR)), and the improvement gained. The algorithms and methods used in several studies, which deeply and clearly illustrate the principle of each DTL-based IDS subcategory, are also presented.
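    The basic DTL-based IDS recipe that recurs across such surveys can be sketched as below; the network sizes, the flow-feature dimensionality, and the freeze-and-fine-tune strategy shown are one common variant, assumed here for illustration.

```python
import torch
import torch.nn as nn

# Transfer-learning pattern for IDS: a network pretrained on a large, labeled
# source network's traffic is adapted to a target network with scarce labels by
# freezing the shared feature extractor and fine-tuning only the classifier head.

N_FEATURES, N_CLASSES = 40, 2        # e.g., flow statistics -> {benign, attack}

features = nn.Sequential(            # shared feature extractor (to be transferred)
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
)
head = nn.Linear(32, N_CLASSES)      # domain-specific classification head
model = nn.Sequential(features, head)

def fit(model, params, x, y, epochs=50):
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) Pretrain everything on the (plentiful) source-domain data.
x_src, y_src = torch.randn(2048, N_FEATURES), torch.randint(0, N_CLASSES, (2048,))
fit(model, model.parameters(), x_src, y_src)

# 2) Transfer: freeze the extractor, re-initialize and fine-tune only the head
#    on the scarce target-domain data.
for p in features.parameters():
    p.requires_grad = False
head.reset_parameters()
x_tgt, y_tgt = torch.randn(128, N_FEATURES), torch.randint(0, N_CLASSES, (128,))
fit(model, head.parameters(), x_tgt, y_tgt)
```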

    Autonomous Navigation in Rows of Trees and High Crops with Deep Semantic Segmentation

    Segmentation-based autonomous navigation has recently been proposed as a promising methodology to guide robotic platforms through crop rows without requiring precise GPS localization. However, existing methods are limited to scenarios where the centre of the row can be identified thanks to the sharp distinction between the plants and the sky. Yet GPS signal obstruction mainly occurs precisely in the case of tall, dense vegetation, such as high tree rows and orchards. In this work, we extend segmentation-based robotic guidance to scenarios where canopies and branches occlude the sky and hinder the use of GPS and of previous methods, increasing the overall robustness and adaptability of the control algorithm. Extensive experimentation on several realistic simulated tree fields and vineyards demonstrates the competitive advantages of the proposed solution.
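    As a conceptual illustration of how a segmentation mask can drive row following, the sketch below reduces a binary vegetation mask to a steering command; the mask semantics, the column-occupancy reduction, and the proportional controller are generic assumptions, not the paper's controller.

```python
import numpy as np

# Given a per-pixel "vegetation" mask (1 = canopy/plants, 0 = free corridor),
# locate the free corridor's centre column and steer proportionally to its
# offset from the image centre.

def steering_from_mask(mask, gain=1.0):
    """mask: (H, W) binary array from the segmentation network."""
    h, w = mask.shape
    # Fraction of vegetation per column over the lower half of the image,
    # where the row ahead of the robot is visible.
    occupancy = mask[h // 2:].mean(axis=0)
    corridor = occupancy < 0.5                # columns that look traversable
    if not corridor.any():
        return 0.0                            # no visible corridor: hold course
    centre = np.flatnonzero(corridor).mean()  # centre column of the free space
    offset = (centre - (w - 1) / 2) / (w / 2) # normalized to [-1, 1]
    return gain * offset                      # >0 steer right, <0 steer left

# Toy mask: trees on the left and right thirds, free corridor in the middle.
mask = np.ones((60, 90), dtype=np.uint8)
mask[:, 30:60] = 0
print(steering_from_mask(mask))               # ~0.0: corridor already centred
mask2 = np.roll(mask, 10, axis=1)             # corridor shifted to the right
print(round(steering_from_mask(mask2), 3))    # positive: steer right
```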