
    Hierarchical Graph Structures for Congestion and ETA Prediction

    Traffic4cast is an annual competition to predict spatio-temporal traffic based on real-world data. We propose an approach using Graph Neural Networks that works directly on the road graph topology extracted from OpenStreetMap data. Our architecture can incorporate a hierarchical graph representation to improve the information flow between key intersections of the graph and the shortest paths connecting them. Furthermore, we investigate how the road graph can be compacted to ease the flow of information, and we use a multi-task approach to predict congestion classes and ETA simultaneously. Our code and models are released here: https://github.com/floriangroetschla/NeurIPS2022-traffic4cas
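
    The compaction step mentioned in the abstract can be illustrated with a toy sketch. This is not the authors' code; the specific contraction rule (removing pass-through, degree-2 nodes so that messages travel farther per GNN layer) is an assumption for illustration only.

    ```python
    def compact_graph(adj):
        """Return a copy of `adj` (node -> set of neighbours) with every
        pass-through (degree-2) node contracted away."""
        adj = {u: set(vs) for u, vs in adj.items()}
        changed = True
        while changed:
            changed = False
            for u in list(adj):
                if u in adj and len(adj[u]) == 2:
                    a, b = adj[u]
                    if a != b:
                        # Reconnect the two neighbours directly and drop u.
                        adj[a].discard(u)
                        adj[b].discard(u)
                        adj[a].add(b)
                        adj[b].add(a)
                        del adj[u]
                        changed = True
        return adj

    # The chain 4-0-1-2-3 collapses onto its two endpoints:
    road = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2}, 4: {0}}
    assert compact_graph(road) == {3: {4}, 4: {3}}
    ```

    Long road segments without intersections then become single edges, shortening the paths a message must traverse between key intersections.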

    Learning Graph Algorithms With Recurrent Graph Neural Networks

    Classical graph algorithms work well for combinatorial problems that can be thoroughly formalized and abstracted. Once the algorithm is derived, it generalizes to instances of any size. However, developing an algorithm that handles complex structures and interactions in the real world can be challenging. Rather than specifying the algorithm, we can try to learn it from graph-structured data. Graph Neural Networks (GNNs) are inherently capable of working on graph structures; however, they struggle to generalize well, and learning on larger instances is challenging. In order to scale, we focus on a recurrent architecture design that can learn simple graph problems end to end on smaller graphs and then extrapolate to larger instances. As our main contribution, we identify three essential techniques for recurrent GNNs to scale. By using (i) skip connections, (ii) state regularization, and (iii) edge convolutions, we can guide GNNs toward extrapolation. This allows us to train on small graphs and apply the same model to much larger graphs during inference. Moreover, we empirically validate the extrapolation capabilities of our GNNs on algorithmic datasets.
    Comment: Accepted at GCLR@AAAI23, workshop on Graphs and more Complex structures for Learning and Reasoning.
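
    The intuition behind a recurrent architecture that extrapolates can be sketched without any learning: a recurrent GNN applies one (learned) update rule per step, and iterating the same rule until convergence runs unchanged on graphs far larger than any training instance. The hand-written relaxation below for single-source hop distances is a stand-in for such a learned update, not the paper's model.

    ```python
    INF = float("inf")

    def recurrent_distance(adj, source):
        """Synchronous message passing: each round, every node takes the
        minimum of its own state and (neighbour state + 1)."""
        state = {v: (0 if v == source else INF) for v in adj}
        for _ in range(len(adj)):            # at most |V| rounds to converge
            new = {}
            for v, nbrs in adj.items():
                best = min((state[u] + 1 for u in nbrs), default=INF)
                new[v] = min(state[v], best)
            if new == state:                 # fixed point reached
                break
            state = new
        return state

    # The same update rule works on a path of any length:
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    assert recurrent_distance(adj, 0) == {0: 0, 1: 1, 2: 2, 3: 3}
    ```

    Size-independence of the iterated update is exactly what the skip connections, state regularization, and edge convolutions aim to preserve in the learned setting.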

    CO2 Reduction Measures in the Aviation Industry: Current Measures and Outlook

    This article gives a holistic overview of the current CO2 reduction measures and analyses the effectiveness of measures that are feasible for implementation in the future. To achieve the objectives of the Paris Agreement, the aviation industry needs to implement reduction measures because of its forecasted growth and contribution to global warming. The focus is set on CO2 reduction measures, categorized into technology, operations, infrastructure/air traffic management (ATM), and market-based measures. The most promising long-term technologies to reduce CO2 emissions are hydrogen-powered aircraft and sustainable aviation fuels (SAF). In terms of operations, CO2 emissions can be reduced through weight savings, either from fuel or payload. For every additional ton on board an aircraft, an extra 3%–25% of fuel is necessary, depending on the route distance. From an infrastructure/ATM perspective, the goal is to decrease flight times and avoid holding patterns, because every kg of fuel burned produces 3.16 kg of CO2 emissions. Market-based measures have a low impact as a reduction measure, but their revenues may be used to accelerate the research and development of more promising reduction measures. The implementation of just one reduction measure is not reasonable. A global approach with reasonable incentives to support CO2 reduction measures is preferable.
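
    The two figures quoted in the abstract (3.16 kg CO2 per kg of fuel, and 3%–25% extra fuel per additional ton on board) lend themselves to a back-of-the-envelope calculation. The trip fuel and payload numbers below are invented for illustration, not taken from the article.

    ```python
    CO2_PER_KG_FUEL = 3.16   # kg CO2 per kg of jet fuel burned (per the article)

    def co2_from_fuel(fuel_kg):
        """CO2 emitted (kg) for a given fuel burn (kg)."""
        return fuel_kg * CO2_PER_KG_FUEL

    def extra_fuel_for_payload(trip_fuel_kg, extra_tons, penalty=0.03):
        """Extra fuel (kg) to carry `extra_tons` more payload; `penalty` is
        the per-ton fractional fuel increase (3%-25% depending on distance)."""
        return trip_fuel_kg * penalty * extra_tons

    # A flight burning 20 t of fuel emits about 63.2 t of CO2 ...
    assert abs(co2_from_fuel(20_000) - 63_200) < 1e-6
    # ... and carrying 2 t of extra payload on a short route (3% per ton)
    # costs roughly 1.2 t of additional fuel:
    assert abs(extra_fuel_for_payload(20_000, 2, penalty=0.03) - 1_200) < 1e-6
    ```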

    SALSA-CLRS: A Sparse and Scalable Benchmark for Algorithmic Reasoning

    We introduce an extension to the CLRS algorithmic learning benchmark, prioritizing scalability and the utilization of sparse representations. Many algorithms in CLRS require global memory or information exchange, mirrored in its execution model, which constructs fully connected (not sparse) graphs based on the underlying problem. Despite CLRS's aim of assessing how effectively learned algorithms can generalize to larger instances, the existing execution model becomes a significant constraint due to its demanding memory requirements and runtime, which are hard to scale. However, many important algorithms do not demand a fully connected graph; these algorithms, primarily distributed in nature, align closely with the message-passing paradigm employed by Graph Neural Networks. Hence, we propose SALSA-CLRS, an extension of the current CLRS benchmark designed specifically with scalability and sparseness in mind. Our approach includes adapted algorithms from the original CLRS benchmark and introduces new problems from distributed and randomized algorithms. Moreover, we perform a thorough empirical evaluation of our benchmark. Code is publicly available at https://github.com/jkminder/SALSA-CLRS.
    Comment: (Extended Abstract) Presented at the Second Learning on Graphs Conference (LoG 2023).
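
    Why sparsity matters for scaling can be made concrete with a simple count. A dense execution model materialises an N x N adjacency structure, while a sparse model only stores the edges that exist; the figures below count entries, not bytes, and are illustrative rather than tied to the benchmark's implementation.

    ```python
    def dense_entries(n):
        """Entries in a fully connected N x N adjacency representation."""
        return n * n

    def sparse_entries(edges):
        """Entries in an undirected edge list (each edge stored twice)."""
        return 2 * len(edges)

    # A 100k-node road-like graph with average degree 4 (200k edges):
    n, m = 100_000, 200_000
    assert dense_entries(n) == 10_000_000_000        # 10^10 entries
    assert sparse_entries([(0, 1)] * m) == 400_000   # vs. 4 * 10^5 entries
    ```

    The five-order-of-magnitude gap is why a fully connected execution model quickly exhausts memory on exactly the large instances generalization is supposed to be tested on.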

    SURF: A Generalization Benchmark for GNNs Predicting Fluid Dynamics

    Simulating fluid dynamics is crucial for the design and development process, ranging from simple valves to complex turbomachinery. Accurately solving the underlying physical equations is computationally expensive. Therefore, learning-based solvers that model interactions on meshes have gained interest due to their promising speed-ups. However, it is unknown to what extent these models truly understand the underlying physical principles and can generalize rather than interpolate. Generalization is a key requirement for a general-purpose fluid simulator, which should adapt to different topologies, resolutions, or thermodynamic ranges. We propose SURF, a benchmark designed to test the generalization of learned graph-based fluid simulators. SURF comprises individual datasets and provides specific performance and generalization metrics for evaluating and comparing different models. We empirically demonstrate the applicability of SURF by thoroughly investigating the two state-of-the-art graph-based models, yielding new insights into their generalization.
    Comment: Accepted at LoG 2023, Learning on Graphs Conference.
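
    One way to quantify "generalize rather than interpolate" is to compare a model's error on held-out data from the training distribution against its error on shifted conditions (different topology, resolution, or thermodynamic range). This is a hedged sketch of that idea; SURF's actual metrics are defined in the paper and repository, not reproduced here.

    ```python
    def mse(pred, target):
        """Mean squared error between two equal-length sequences."""
        return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

    def generalization_gap(err_in_dist, err_out_of_dist):
        """Ratio > 1 means the model degrades on unseen regimes."""
        return err_out_of_dist / err_in_dist

    # Toy numbers: a model that is accurate in-distribution but drifts on
    # e.g. a coarser mesh shows a gap well above 1.
    err_id = mse([1.0, 2.0], [1.0, 2.2])
    err_ood = mse([1.0, 2.0], [1.4, 2.6])
    assert generalization_gap(err_id, err_ood) > 1
    ```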

    Liebe deinen NĂ€chsten wie dich selbst. Untersuchungen zum alttestamentlichen Gebot der NĂ€chstenliebe (Lev 19,18)

    This study gathers the history of interpretation of a Bible verse whose reception has had a lasting impact. In addition, the author formulates numerous further insights by integrating biblical and extra-biblical perspectives. Alongside intensive translation studies on the love commandment, it is above all the discussion of the surrounding key terms that outlines the theological field contributing to an adequate interpretation and theological understanding. Studies on the relationship between the love commandment and the Holiness Code, as well as on the historical setting of the love commandment, vividly illuminate the literary contexts and the history of its composition. Finally, the author situates the love commandment systematically: within this framework he succeeds in presenting a further-reaching theological perspective that is ultimately open to engagement with the neighbouring theological disciplines.

    Modeling subjective relevance in schizophrenia and its relation to aberrant salience

    In schizophrenia, increased aberrant salience to irrelevant events and reduced learning of relevant information may relate to an underlying deficit in relevance detection. So far, subjective estimates of relevance have not been probed in schizophrenia patients. The mechanisms underlying belief formation about relevance and their translation into decisions are unclear. Using novel computational methods, we investigated relevance detection during implicit learning in 42 schizophrenia patients and 42 healthy individuals. Participants underwent functional magnetic resonance imaging while detecting the outcomes in a learning task. These were preceded by cues differing in color and shape, which were either relevant or irrelevant for outcome prediction. We provided a novel definition of relevance based on Bayesian precision and modeled reaction times as a function of relevance-weighted unsigned prediction errors (UPE). For aberrant salience, we assessed responses to subjectively irrelevant cue manifestations. Participants learned the contingencies and slowed down their responses following unexpected events. Model selection revealed that individuals inferred the relevance of cue features and used it for behavioral adaptation to the relevant cue feature. Relevance-weighted UPEs correlated with dorsal anterior cingulate cortex activation and hippocampus deactivation. In patients, the aberrant salience bias to subjectively task-irrelevant information was increased and correlated with decreased striatal UPE activation and increased negative symptoms. This study shows that relevance estimates based on Bayesian precision can be inferred from observed behavior. This underscores the importance of relevance detection as an underlying mechanism for behavioral adaptation in complex environments and enhances the understanding of aberrant salience in schizophrenia.
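
    The core quantities can be illustrated with a deliberately simplified sketch (the paper's actual hierarchical Bayesian model is more elaborate): treat the relevance of a cue feature as the Bayesian precision (inverse variance) of the outcome distribution conditioned on that feature, and weight the unsigned prediction error by normalized relevance.

    ```python
    import statistics

    def precision(outcomes):
        """Relevance proxy: inverse variance of outcomes given a cue feature.
        A feature that tightly predicts the outcome has high precision."""
        var = statistics.pvariance(outcomes)
        return 1.0 / var if var > 0 else float("inf")

    def weighted_upe(prediction, outcome, relevance, total_relevance):
        """Unsigned prediction error scaled by the feature's relative relevance."""
        upe = abs(outcome - prediction)
        return (relevance / total_relevance) * upe

    # The relevant feature predicts outcomes tightly; the irrelevant one does not:
    rel = precision([1.0, 0.9, 1.1, 1.0])   # low variance  -> high precision
    irr = precision([1.0, 0.0, 1.0, 0.0])   # high variance -> low precision
    assert rel > irr
    ```

    Under this toy definition, reaction-time slowing after unexpected events would be driven almost entirely by the relevant feature's UPE, which is the qualitative pattern the modeling targets.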

    Traffic4cast at NeurIPS 2022 -- Predict Dynamics along Graph Edges from Sparse Node Data: Whole City Traffic and ETA from Stationary Vehicle Detectors

    The global trends of urbanization and increased personal mobility force us to rethink the way we live and use urban space. The Traffic4cast competition series tackles this problem in a data-driven way, advancing the latest methods in machine learning for modeling complex spatial systems over time. In this edition, our dynamic road graph data combine information from road maps, $10^{12}$ probe data points, and stationary vehicle detectors in three cities over the span of two years. While stationary vehicle detectors are the most accurate way to capture traffic volume, they are only available in a few locations. Traffic4cast 2022 explores models that have the ability to generalize loosely related temporal vertex data on just a few nodes to predict dynamic future traffic states on the edges of the entire road graph. In the core challenge, participants are invited to predict the likelihoods of three congestion classes derived from the speed levels in the GPS data for the entire road graph in three cities 15 min into the future. We only provide vehicle count data from spatially sparse stationary vehicle detectors in these three cities as model input for this task. The data are aggregated in 15 min time bins for one hour prior to the prediction time. For the extended challenge, participants are tasked to predict the average travel times on super-segments 15 min into the future - super-segments are longer sequences of road segments in the graph. The competition results provide an important advance in the prediction of complex city-wide traffic states just from publicly available sparse vehicle data and without the need for large amounts of real-time floating vehicle data.
    Comment: Pre-print under review, submitted to Proceedings of Machine Learning Research.
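
    The input aggregation described above (detector counts binned into 15-minute windows for the hour before prediction time) can be sketched as follows; the event representation and field layout are assumptions for illustration, not the competition's actual data schema.

    ```python
    def bin_counts(events, t0, bin_minutes=15, n_bins=4):
        """Aggregate (timestamp_minutes, vehicle_count) events falling in
        [t0, t0 + n_bins * bin_minutes) into fixed-width time bins."""
        bins = [0] * n_bins
        for t, count in events:
            idx = (t - t0) // bin_minutes
            if 0 <= idx < n_bins:            # drop events outside the window
                bins[idx] += count
        return bins

    # One hour of detector readings before prediction time t0 = 0:
    events = [(0, 3), (5, 2), (16, 7), (44, 1), (59, 4)]
    assert bin_counts(events, t0=0) == [5, 7, 1, 4]
    ```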

    Measurement of the cosmic ray spectrum above $4\times 10^{18}$ eV using inclined events detected with the Pierre Auger Observatory

    A measurement of the cosmic-ray spectrum for energies exceeding $4\times 10^{18}$ eV is presented, which is based on the analysis of showers with zenith angles greater than $60^{\circ}$ detected with the Pierre Auger Observatory between 1 January 2004 and 31 December 2013. The measured spectrum confirms a flux suppression at the highest energies. Above $5.3\times 10^{18}$ eV, the "ankle", the flux can be described by a power law $E^{-\gamma}$ with index $\gamma = 2.70 \pm 0.02\,\text{(stat)} \pm 0.1\,\text{(sys)}$, followed by a smooth suppression region. For the energy ($E_\text{s}$) at which the spectral flux has fallen to one-half of its extrapolated value in the absence of suppression, we find $E_\text{s} = (5.12 \pm 0.25\,\text{(stat)}^{+1.0}_{-1.2}\,\text{(sys)})\times 10^{19}$ eV.
    Comment: Replaced with published version. Added journal reference and DOI.
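
    The definition of $E_\text{s}$ can be made concrete with a toy parametrisation. The smooth suppression factor used below, $1 / (1 + (E/E_\text{s})^{d})$, is an assumption chosen so that the flux drops to exactly half its extrapolated power-law value at $E = E_\text{s}$; it is not necessarily the functional form fitted by the Auger collaboration.

    ```python
    def flux(E, gamma=2.70, E_s=5.12e19, d=3.0, J0=1.0):
        """Toy spectrum: power law times a smooth suppression factor."""
        return J0 * E ** (-gamma) / (1.0 + (E / E_s) ** d)

    def extrapolated(E, gamma=2.70, J0=1.0):
        """Pure power law, i.e. the spectrum with no suppression."""
        return J0 * E ** (-gamma)

    # At E = E_s the measured flux is half the extrapolated one ...
    E_s = 5.12e19
    assert abs(flux(E_s) / extrapolated(E_s) - 0.5) < 1e-12
    # ... and well above E_s the suppression deepens further:
    assert flux(1e20) / extrapolated(1e20) < 0.5
    ```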
