Exploring manycore architectures for next-generation HPC systems through the MANGO approach
The Horizon 2020 MANGO project aims at exploring deeply heterogeneous accelerators for use in High-Performance Computing systems running multiple applications with different Quality of Service (QoS) levels. The main goal of the project is to exploit customization to adapt computing resources to reach the desired QoS. For this purpose, it explores different but interrelated mechanisms across the architecture and system software. In particular, in this paper we focus on runtime resource management, thermal management, and the support provided for parallel programming, and we introduce three applications on which the project foreground will be validated. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 671668.
Flich Cardo, J.; Agosta, G.; Ampletzer, P.; Atienza-Alonso, D.; Brandolese, C.; Cappe, E.; Cilardo, A.... (2018). Exploring manycore architectures for next-generation HPC systems through the MANGO approach. Microprocessors and Microsystems. 61:154-170. https://doi.org/10.1016/j.micpro.2018.05.011
The Strong Perfect Graph Conjecture: 40 years of Attempts, and its Resolution
The Strong Perfect Graph Conjecture (SPGC) was certainly one of the most challenging conjectures in graph theory. During more than four decades, numerous attempts were made to solve it, by combinatorial methods, by linear algebraic methods, or by polyhedral methods. The first of these three approaches yielded the first (and to date only) proof of the SPGC; the other two remain promising to consider in attempting an alternative proof. This paper is an unbalanced survey of the attempts to solve the SPGC; unbalanced, because (1) we devote a significant part of it to the "primitive graphs and structural faults" paradigm which led to the Strong Perfect Graph Theorem (SPGT); (2) we briefly present the other "direct" attempts, that is, the ones for which results exist showing one (possible) way to the proof; (3) we ignore entirely the "indirect" approaches whose aim was to get more information about the properties and structure of perfect graphs, without a direct impact on the SPGC. Our aim in this paper is to trace the path that led to the proof of the SPGT as completely as possible. Of course, this implies large overlaps with the recent book on perfect graphs [J.L. Ramirez-Alfonsin and B.A. Reed, eds., Perfect Graphs (Wiley & Sons, 2001)], but it also implies a deeper analysis (with additional results) and another viewpoint on the topic.
Semi-supervised Eigenvectors for Large-scale Locally-biased Learning
In many applications, one has side information, e.g., labels that are
provided in a semi-supervised manner, about a specific target region of a large
data set, and one wants to perform machine learning and data analysis tasks
"nearby" that prespecified target region. For example, one might be interested
in the clustering structure of a data graph near a prespecified "seed set" of
nodes, or one might be interested in finding partitions in an image that are
near a prespecified "ground truth" set of pixels. Locally-biased problems of
this sort are particularly challenging for popular eigenvector-based machine
learning and data analysis tools. At root, the reason is that eigenvectors are
inherently global quantities, thus limiting the applicability of
eigenvector-based methods in situations where one is interested in very local
properties of the data.
In this paper, we address this issue by providing a methodology to construct
semi-supervised eigenvectors of a graph Laplacian, and we illustrate how these
locally-biased eigenvectors can be used to perform locally-biased machine
learning. These semi-supervised eigenvectors capture
successively-orthogonalized directions of maximum variance, conditioned on
being well-correlated with an input seed set of nodes that is assumed to be
provided in a semi-supervised manner. We show that these semi-supervised
eigenvectors can be computed quickly as the solution to a system of linear
equations; and we also describe several variants of our basic method that have
improved scaling properties. We provide several empirical examples
demonstrating how these semi-supervised eigenvectors can be used to perform
locally-biased learning; and we discuss the relationship between our results
and recent machine learning algorithms that use global eigenvectors of the
graph Laplacian.
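The linear-system characterization above can be illustrated with a minimal sketch. Everything below is an assumption-laden toy, not the paper's method: it uses the combinatorial (unnormalized) graph Laplacian, a single seed set, and a fixed shift parameter gamma assumed to lie below the Laplacian's second-smallest eigenvalue, whereas the full method handles normalized Laplacians, multiple successively-orthogonalized directions, and a correlation constraint that determines gamma.

```python
import numpy as np

def semi_supervised_eigenvector(A, seed, gamma):
    """Toy locally-biased eigenvector: solve (L - gamma*I) x = s.

    A     : dense symmetric adjacency matrix (n x n)
    seed  : indices of the seed set (assumed given semi-supervised)
    gamma : shift parameter, assumed below the second-smallest
            eigenvalue of L so the shifted system is well-posed
    """
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
    s = np.zeros(n)
    s[seed] = 1.0
    s -= s.mean()                           # project out the all-ones vector
    x = np.linalg.solve(L - gamma * np.eye(n), s)
    return x / np.linalg.norm(x)            # unit-norm, seed-localized vector
```

On a small path graph with the seed at one endpoint, the resulting unit vector concentrates its largest magnitude on the seed side, illustrating the local bias that a global eigenvector would lack.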
Transparency Helps Reveal When Language Models Learn Meaning
Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text. Under what conditions might such a procedure acquire meaning? Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations (i.e., languages with strong transparency), both autoregressive and masked language models successfully learn to emulate semantic relations between expressions. However, when denotations are changed to be context-dependent with the language otherwise unmodified, this ability degrades. Turning to natural language, our experiments with a specific phenomenon, referential opacity, add to the growing body of evidence that current language models do not represent natural language semantics well. We show this failure relates to the context-dependent nature of natural-language form-meaning mappings.
Geometric and Algebraic Combinatorics
The 2015 Oberwolfach meeting “Geometric and Algebraic Combinatorics” was organized by Gil Kalai (Jerusalem), Isabella Novik (Seattle), Francisco Santos (Santander), and Volkmar Welker (Marburg). It covered a wide variety of aspects of Discrete Geometry, Algebraic Combinatorics with geometric flavor, and Topological Combinatorics. Some of the highlights of the conference included (1) counterexamples to the topological Tverberg conjecture, and (2) the latest results around the Heron-Rota-Welsh conjecture
A Graph-Transformation Modelling Framework for Supervisory Control
Formal design methodologies have the potential to accelerate the development and increase the
reliability of supervisory controllers designed within industry. One promising design framework
which has been shown to do so is known as supervisory control synthesis (SCS).
In SCS, instead of manually designing the supervisory controller itself, one designs models of
the uncontrolled system and its control requirements. These models are then provided as input to
a special synthesis algorithm which uses them to automatically generate a model of the supervisory
controller. This output model is guaranteed to be correct as long as the models of the uncontrolled
system and its control requirements are valid. This accelerates development by removing
the need to verify and rectify the model of the supervisory controller. Instead, only the models of
the uncontrolled system and its requirements must be validated.
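The synthesis step described above can be sketched in miniature. The following toy is an illustration under strong assumptions, not any particular SCS tool: it handles a single finite automaton with a set of forbidden states, computes only the controllability part (pruning states from which an uncontrollable event escapes the safe set), and omits nonblocking, marking, and the event- and state-based requirement formalisms; all names are hypothetical.

```python
def synthesize(states, trans, uncontrollable, forbidden):
    """Return the largest controllable set of safe states.

    states         : iterable of state names
    trans          : dict {(state, event): next_state}
    uncontrollable : events the supervisor cannot disable
    forbidden      : states the closed loop must never reach
    """
    safe = set(states) - set(forbidden)
    changed = True
    while changed:
        changed = False
        for (s, e), t in trans.items():
            # A state is unsafe if an uncontrollable event leads out of
            # the safe set: the supervisor cannot block that transition.
            if s in safe and e in uncontrollable and t not in safe:
                safe.remove(s)
                changed = True
    return safe
```

The fixed-point iteration mirrors the guarantee in the text: any state kept in the result can only leave the safe set via controllable events, which the synthesized supervisor simply disables.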
To address problems of scale, SCS can be applied in a modular fashion, and implemented in
hierarchical and decentralized architectures.
Despite the large body of research confirming the benefits of integrating SCS within the development
process of supervisory controllers, it has not yet found widespread application within
industry. In the author's opinion, this is partly attributed to the non-user-friendly nature of the
automaton-based modelling framework used to create the models of the uncontrolled system (and
control requirements in event-based SCS). It is believed that in order for SCS to become more accessible
to a wider range of non-experts, modelling within SCS must be made more intuitive and
user-friendly.
To improve the usability of SCS, this work illustrates how a graph-transformation-based modelling
approach can be employed to generate the automaton models required for supervisory control
synthesis. Furthermore, it is demonstrated how models of the specification can be intuitively represented
within the proposed modelling framework for both event- and state-based supervisory
control synthesis. Lastly, this thesis assesses the relative advantages of the proposed
graph-transformation-based modelling framework over the conventional automaton-based modelling
approach.
Mining Twitter: Graph Analysis of Interactions among Users
Starting from the early 2000s, social network websites became very popular; these social media allow users to interact and share content using social links. Users of these platforms often have the possibility to establish hundreds or thousands of social links with other users. While initial studies have focused on social network topology, a natural and important aspect of networks has been neglected: the focus on user interactions. These links can be monitored to generate knowledge on users as well as their relationships with others. There has been, lately, an increasing interest in examining the activity network - the network that, once traversed, provides the actual user interactions rather than friendship links - to filter and mine patterns or communities.
The goal of this work is to exploit Twitter traffic in order to analyze user interactions. To do so, our work models tweets posted by users as activity lists in a graph called the activity network. Then, we traverse it looking for direct (e.g. mentions by a user, retweets, direct replies) and indirect (users mentioned in a tweet, users retweeting the same tweet produced by another user, etc.) relationships among users in order to create the user-interaction graph. We provide a weighting schema by which a value is assigned to each interaction found. The obtained graph shows the connections among users and, thanks to its weighted links, highlights users with stronger ties, such as Verified Accounts or "propaganda users", as well as cliques and clusters of users interacting with each other. Such entities may be interesting to investigate in several fields, such as Open Source Intelligence or Business Intelligence. This work has been developed on a distributed infrastructure able to perform these tasks efficiently.
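As a rough sketch of the step that turns activities into a weighted user-interaction graph, the snippet below aggregates direct interactions (mentions, replies, retweets) into a weighted directed graph. The tweet field names and the numeric weights are illustrative assumptions only; the actual weighting schema of this work is not reproduced here, and indirect interactions are omitted.

```python
from collections import defaultdict

# Hypothetical weights per interaction type; the thesis's actual
# weighting schema is not specified here.
WEIGHTS = {"mention": 1.0, "reply": 2.0, "retweet": 3.0}

def build_interaction_graph(tweets):
    """Aggregate tweets into a weighted, directed user-interaction graph.

    Each tweet is a dict with hypothetical fields: 'author', plus
    optional 'mentions' (list of users), 'reply_to' (user), and
    'retweet_of' (user).
    Returns {source_user: {target_user: accumulated_weight}}.
    """
    graph = defaultdict(lambda: defaultdict(float))
    for t in tweets:
        src = t["author"]
        for target in t.get("mentions", []):
            graph[src][target] += WEIGHTS["mention"]
        if t.get("reply_to"):
            graph[src][t["reply_to"]] += WEIGHTS["reply"]
        if t.get("retweet_of"):
            graph[src][t["retweet_of"]] += WEIGHTS["retweet"]
    return graph
```

Accumulating weights per edge, rather than counting distinct link types, is what lets repeated interactions surface the "stronger links" mentioned above.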
The network analysis leads to some considerations. Firstly, it is necessary to identify all meaningful interactions among users, which typically depend on the social network and the activities performed. Secondly, many nodes (profiles) with high indegree are associated with mass media and famous people, and thus a filtering phase is a crucial step. Finally, it is remarkable that experiments carried out at different moments can lead to very different results, since similar topics may not involve the same users at different moments.
This work describes the state of the art of network analysis, and introduces the architectural design of the system, as well as the analysis performed and the challenges encountered. The results collected from our analysis lead us to the conclusion that, despite being in its preliminary stages, focusing on social interactions is important because it may reveal connections among particular users whose activities may be of interest to intelligence organizations.