
    IPC: A Benchmark Data Set for Learning with Graph-Structured Data

    Benchmark data sets are an indispensable ingredient of the evaluation of graph-based machine learning methods. We release a new data set, compiled from the International Planning Competitions (IPC), for benchmarking graph classification, regression, and related tasks. Apart from the graph construction (based on AI planning problems), which is interesting in its own right, the data set has characteristics distinctly different from popularly used benchmarks. The data set, named IPC, consists of two self-contained versions, grounded and lifted, both containing graphs that are large and skewed in size distribution, posing substantial challenges for the computation of graph models such as graph kernels and graph neural networks. The graphs are directed, and the lifted version is acyclic, offering the opportunity to benchmark specialized models for directed (acyclic) structures. Moreover, the graph generator and the labeling are computer programmed; thus, the data set may be extended easily if a larger scale is desired. The data set is accessible from \url{https://github.com/IBM/IPC-graph-data}. Comment: ICML 2019 Workshop on Learning and Reasoning with Graph-Structured Data. The data set is accessible from https://github.com/IBM/IPC-graph-data
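The abstract states that the lifted graphs are directed and acyclic. That property can be verified with Kahn's topological sort; the edge lists below are toy examples for illustration only, not drawn from the IPC data:

```python
from collections import defaultdict, deque

def is_dag(num_nodes, edges):
    """Kahn's algorithm: True iff the directed graph has no cycle."""
    indeg = [0] * num_nodes
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    # Repeatedly remove nodes with no remaining incoming edges.
    queue = deque(i for i in range(num_nodes) if indeg[i] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # If a cycle exists, its nodes never reach indegree 0.
    return seen == num_nodes

print(is_dag(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # acyclic diamond
print(is_dag(2, [(0, 1), (1, 0)]))                   # 2-cycle
```

A check like this would confirm, for each lifted graph, that DAG-specialized models are applicable.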

    Online Planner Selection with Graph Neural Networks and Adaptive Scheduling

    Automated planning is one of the foundational areas of AI. Since no single planner works well for all tasks and domains, portfolio-based techniques have become increasingly popular in recent years. In particular, deep learning has emerged as a promising methodology for online planner selection. Building on the recent development of structural graph representations of planning tasks, we propose a graph neural network (GNN) approach to selecting candidate planners. GNNs are advantageous over a straightforward alternative, convolutional neural networks, in that they are invariant to node permutations and incorporate node labels for better inference. Additionally, for cost-optimal planning, we propose a two-stage adaptive scheduling method to further improve the likelihood that a given task is solved in time. The scheduler may switch at halftime to a different planner, conditioned on the observed performance of the first one. Experimental results validate the effectiveness of the proposed method against strong baselines, both deep-learning and non-deep-learning based. The code is available at \url{https://github.com/matenure/GNN_planner}. Comment: AAAI 2020. Code is released at https://github.com/matenure/GNN_planner. Data set is released at https://github.com/IBM/IPC-graph-data
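The halftime-switching idea can be sketched as follows. All names here (the planner stand-ins, the scoring function) are hypothetical placeholders, not the paper's actual interface:

```python
def two_stage_schedule(first, fallbacks, budget, run_planner, score):
    """Run `first` for half the time budget; on failure, choose a
    fallback conditioned on what was observed and spend the rest on it."""
    half = budget / 2.0
    solved, observed = run_planner(first, half)
    if solved:
        return first
    # Second-stage choice is conditioned on the first planner's behaviour.
    fallback = max(fallbacks, key=lambda p: score(p, observed))
    solved, _ = run_planner(fallback, budget - half)
    return fallback if solved else None

# Toy stand-ins: planner "A" never solves; "B" solves given >= 100 s.
def run_planner(name, time_limit):
    if name == "B" and time_limit >= 100:
        return True, {}
    return False, {"expanded_nodes": 5000}

chosen = two_stage_schedule(
    "A", ["B", "C"], budget=300, run_planner=run_planner,
    score=lambda p, obs: {"B": 1.0, "C": 0.5}[p])
print(chosen)  # "B"
```

The design point is that the second-stage selector sees runtime evidence (here a mock observation dict) rather than only static task features.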

    Single-growth embedded epitaxy AlGaAs injection lasers with extremely low threshold currents

    A new type of stripe-geometry AlGaAs double-heterostructure laser with an embedded optical waveguide has been developed. The new structure is fabricated using a single step of epitaxial growth. Lasers with threshold currents as low as 9.5 mA (150 µm long) were obtained. These lasers operate in a single spatial and longitudinal mode, have differential quantum efficiencies exceeding 45%, and a characteristic temperature of 175 °C. They emit more than 12 mW/facet of optical power without any kinks.

    Lifting Slepton Masses with a Non-universal, Non-anomalous U(1)'_{NAF} in Anomaly Mediated SUSY breaking

    We extend the Minimal Supersymmetric Standard Model by a non-anomalous family (NAF) U(1)'_{NAF} gauge symmetry. All gauge anomalies are cancelled with no additional exotics other than the three right-handed neutrinos. The FI D-terms associated with the U(1)'_{NAF} symmetry lead to additional positive contributions to the slepton squared masses, which solves, in an RG-invariant way, the tachyonic slepton mass problem of Anomaly Mediated Supersymmetry Breaking. In addition, the U(1)'_{NAF} symmetry naturally gives rise to the fermion mass hierarchy and mixing angles, and determines the mass spectrum of the sparticles. Comment: 13 pages; v2: version to appear in Phys. Lett.

    Gravitational energy

    Observers at rest in a stationary spacetime that is flat at infinity can measure small amounts of rest-mass, internal, kinetic, and pressure energy in a small volume of fluid attached to a local inertial frame. The sum of these small amounts is the total "matter energy" for those observers. The total mass-energy minus the matter energy is the binding gravitational energy. Misner, Thorne and Wheeler (MTW) evaluated the gravitational energy of a spherically symmetric static spacetime. Here we show how to calculate the gravitational energy in any static or stationary spacetime for isolated sources, with a set of observers at rest. The result of MTW is recovered, and we find that the electromagnetic and gravitational 3-covariant energy densities in conformastatic spacetimes are of opposite signs. Various examples suggest that gravitational energy is negative in spacetimes with special symmetries or when the energy-momentum tensor satisfies the usual energy conditions. Comment: 12 pages. Accepted for publication in Class. Quantum Grav.
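The bookkeeping described in the abstract can be written schematically as follows (the notation is assumed here for illustration, not taken from the paper):

```latex
E_{\text{grav}} \;=\; M \,-\, E_{\text{matter}},
\qquad
E_{\text{matter}} \;=\; \sum_{\text{local frames}}
\bigl( \delta E_{\text{rest}} + \delta E_{\text{int}}
     + \delta E_{\text{kin}} + \delta E_{\text{press}} \bigr),
```

where $M$ is the total mass-energy of the spacetime and a negative $E_{\text{grav}}$ signals gravitational binding.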

    Privacy-Preserving Outsourcing of Large-Scale Nonlinear Programming to the Cloud

    The massive data generated by various sources has given rise to big data analytics. Solving large-scale nonlinear programming problems (NLPs) is one important big-data-analytics task with applications in many domains, such as transport and logistics. However, NLPs are usually too computationally expensive for resource-constrained users. Fortunately, cloud computing provides an alternative and economical service that lets resource-constrained users outsource their computation tasks to the cloud. One major concern with outsourcing NLPs, however, is the leakage of the user's private information contained in NLP formulations and results. Although much work has been done on privacy-preserving outsourcing of computation tasks, little attention has been paid to NLPs. In this paper, we investigate, for the first time, secure outsourcing of general large-scale NLPs with nonlinear constraints. A secure and efficient transformation scheme at the user side is proposed to protect the user's private information; at the cloud side, the generalized reduced gradient method is applied to effectively solve the transformed large-scale NLPs. The proposed protocol is implemented on a cloud computing testbed. Experimental evaluations demonstrate that significant time can be saved for users and that the proposed mechanism has the potential for practical use. Comment: Ang Li and Wei Du contributed equally to this work. This work was done when Wei Du was at the University of Arkansas. 2018 EAI International Conference on Security and Privacy in Communication Networks (SecureComm).
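The general shape of a user-side disguise can be illustrated with a generic affine-masking sketch on a one-dimensional toy problem. This is NOT the paper's transformation scheme, only the standard pattern: the user substitutes a secretly parameterized change of variables, the cloud solves the transformed problem, and the user un-masks the answer locally:

```python
def mask_problem(objective, a, b):
    """Express the objective in the cloud's variable y, where x = a*y + b.
    The parameters a and b stay secret on the user side."""
    return lambda y: objective(a * y + b)

def unmask(y_star, a, b):
    """Recover the user's solution x* from the cloud's answer y*."""
    return a * y_star + b

# Toy NLP: minimise (x - 3)^2, true minimiser x* = 3.
f = lambda x: (x - 3.0) ** 2
a, b = 2.0, 5.0              # secret masking parameters
g = mask_problem(f, a, b)    # the cloud sees only g, never f, a, or b

# Pretend the cloud solves min_y g(y); for this toy, y* = (3 - b) / a.
y_star = (3.0 - b) / a
x_star = unmask(y_star, a, b)
print(x_star)  # 3.0
```

The cloud learns the transformed objective and its minimiser but not the user's original formulation; real schemes must of course transform constraints as well and quantify what the transformation leaks.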