SARA: Self-Aware Resource Allocation for Heterogeneous MPSoCs
In modern heterogeneous MPSoCs, the management of shared memory resources is
crucial to delivering end-to-end QoS. Previous frameworks have focused either
on singular QoS targets or on allocating partitionable resources among CPU
applications at relatively slow timescales. However, heterogeneous MPSoCs
typically require an immediate response from a memory system in which most
resources cannot be partitioned. Moreover, the health of different cores in a
heterogeneous MPSoC is often measured by diverse performance objectives. In
this work, we propose a Self-Aware Resource Allocation (SARA) framework for
heterogeneous MPSoCs. Priority-based adaptation lets each core pursue its own
performance target and self-monitor its intrinsic health. In response, the
system allocates non-partitionable resources according to these priorities.
The proposed framework meets a diverse range of QoS demands from heterogeneous
cores.
Comment: Accepted by the 55th annual Design Automation Conference 2018 (DAC'18).
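As a rough illustration of the idea, the sketch below shows one way priority-based allocation of a non-partitionable resource could work: each core derives a priority from its self-monitored health against its own target, and the least-healthy cores are served first. The names and the priority formula are invented for illustration (the paper targets hardware, not software).

```python
# Hypothetical sketch of SARA-style priority-based allocation; all names
# and the priority formula are illustrative assumptions, not the paper's.
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    target: float    # per-core performance target (e.g., IPC, frame rate)
    measured: float  # self-monitored performance in the last epoch

    @property
    def health(self) -> float:
        return self.measured / self.target  # < 1.0 means the target is missed

def allocate(cores, slots):
    """Grant request slots of a shared, non-partitionable resource to the
    least-healthy (highest-priority) cores first."""
    prio = lambda c: max(0.0, 1.0 - c.health)  # urgency grows as health drops
    grants = {c.name: 0 for c in cores}
    for c in sorted(cores, key=prio, reverse=True):
        take = min(slots, max(1, round(prio(c) * 10)))
        grants[c.name], slots = take, slots - take
        if slots <= 0:
            break
    return grants

cores = [Core("cpu", 1.0, 0.7), Core("gpu", 60.0, 58.0), Core("dsp", 100.0, 100.0)]
print(allocate(cores, slots=8))  # the struggling CPU gets the largest grant
```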
On Technology Transfer to an Asymmetric Cournot Duopoly
This note studies the transfer of a cost-reducing innovation from an independent patent-holder to an asymmetric Cournot duopoly whose firms have different unit costs of production. It finds that royalty licensing can be superior to fixed-fee licensing for the independent patent-holder.
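As background, the textbook asymmetric-Cournot machinery the note builds on can be sketched as follows; this is a generic illustration, not the note's own model or notation.

```latex
% Generic asymmetric Cournot equilibrium (illustration, not the note's
% notation). Inverse demand p = a - b(q_1 + q_2), unit costs c_1, c_2.
\[
  q_i^{*} = \frac{a - 2c_i + c_j}{3b}, \qquad
  \pi_i^{*} = b\,\bigl(q_i^{*}\bigr)^{2}, \qquad i \neq j \in \{1,2\}.
\]
% With a per-unit royalty r on an innovation cutting cost by \varepsilon,
% the licensee produces at effective cost c_i - \varepsilon + r, and the
% patent-holder's royalty income r q_i^{*} can exceed the fixed fee the
% licensee would be willing to pay up front.
```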
Sprinklers: A Randomized Variable-Size Striping Approach to Reordering-Free Load-Balanced Switching
Internet traffic continues to grow exponentially, calling for switches that
can scale well in both size and speed. While load-balanced switches can achieve
such scalability, they suffer from a fundamental packet reordering problem.
Existing proposals either suffer from poor worst-case packet delays or require
sophisticated matching mechanisms. In this paper, we propose a new family of
stable load-balanced switches called "Sprinklers" that has implementation cost
and performance comparable to the baseline load-balanced switch, yet can
guarantee packet ordering. The main idea is to force all packets within
the same virtual output queue (VOQ) to traverse the same "fat path" through the
switch, so that packet reordering cannot occur. At the core of Sprinklers are
two key innovations: a randomized way to determine the "fat path" for each VOQ,
and a way to determine its "fatness" roughly in proportion to the rate of the
VOQ. These innovations enable Sprinklers to achieve near-perfect load-balancing
under arbitrary admissible traffic. Proving this property rigorously, using
novel worst-case large-deviation techniques, is another key contribution of
this work.
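A minimal sketch of how such a scheme might look: each VOQ is hashed to a contiguous stripe of intermediate ports whose width grows with the VOQ's rate, and every packet of the VOQ traverses that same stripe. The power-of-two widths and SHA-based placement below are assumptions for illustration, not the paper's exact construction.

```python
# Illustrative sketch in the spirit of Sprinklers; not the paper's design.
import hashlib

N = 16  # switch size: intermediate ports 0..N-1

def stripe_width(rate_fraction: float) -> int:
    """Width roughly proportional to the VOQ's share of the link rate,
    rounded up to a power of two dividing N."""
    w = 1
    while w < N and w < rate_fraction * N:
        w *= 2
    return w

def fat_path(voq_id: tuple, rate_fraction: float) -> range:
    """A fixed stripe of intermediate ports for this VOQ; since every packet
    of the VOQ uses the same stripe, reordering cannot occur."""
    w = stripe_width(rate_fraction)
    h = int(hashlib.sha256(repr(voq_id).encode()).hexdigest(), 16)
    start = (h % (N // w)) * w  # random but alignment-preserving placement
    return range(start, start + w)

print(list(fat_path(("input 3", "output 7"), rate_fraction=0.20)))
```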
A Nutritional Label for Rankings
Algorithmic decisions often result in scoring and ranking individuals to
determine credit worthiness, qualifications for college admissions and
employment, and compatibility as dating partners. While automatic and seemingly
objective, ranking algorithms can discriminate against individuals and
protected groups, and exhibit low diversity. Furthermore, ranked results are
often unstable: small changes in the input data or in the ranking
methodology may lead to drastic changes in the output, making the result
uninformative and easy to manipulate. Similar concerns apply in cases where
items other than individuals are ranked, including colleges, academic
departments, or products.
In this demonstration we present Ranking Facts, a Web-based application that
generates a "nutritional label" for rankings. Ranking Facts is made up of a
collection of visual widgets that implement our latest research results on
fairness, stability, and transparency for rankings, and that communicate
details of the ranking methodology, or of the output, to the end user. We will
showcase Ranking Facts on real datasets from different domains, including
college rankings, criminal risk assessment, and financial services.
Comment: 4 pages, 3 figures, SIGMOD demo, ACM SIGMOD 2018.
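To give a flavor of what such a label reports, the snippet below computes one simple statistic a fairness widget might show: representation of a protected group in the top-k of a ranking versus its share overall. This is only illustrative; the actual measures implemented in Ranking Facts may differ.

```python
# Illustrative only: not the exact statistics computed by Ranking Facts.

def topk_representation(ranked_items, is_protected, k):
    """Fraction of protected items in the top-k vs. in the full ranking."""
    top_share = sum(map(is_protected, ranked_items[:k])) / k
    overall = sum(map(is_protected, ranked_items)) / len(ranked_items)
    return top_share, overall

# Toy data: uppercase names stand in for the protected group.
ranking = ["ann", "bob", "CAT", "dan", "EVE", "FAY", "GUS", "HAL"]
top, overall = topk_representation(ranking, str.isupper, k=4)
print(f"top-4 share: {top:.2f}, overall share: {overall:.2f}")  # 0.25 vs 0.62
```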
Does a Fair Model Produce Fair Explanations? Relating Distributive and Procedural Fairness
We consider interactions between fairness and explanations in neural networks. Fair machine learning aims to achieve an equitable allocation of resources (distributive fairness) by balancing accuracy and error rates across protected groups or among similar individuals. Methods shown to improve distributive fairness can induce different model behavior for majority and minority groups, and this divergence can be perceived as disparate treatment, undermining acceptance of the system. In this paper, we use feature-attribution methods to measure the average explanations for a protected group, and show that differences can occur even when the model is fair. We prove a surprising relationship between explanations (via feature attribution) and fairness in a regression setting: under moderate assumptions, there are circumstances in which controlling one can influence the other. We then study this relationship experimentally by designing a novel loss term for explanations, GroupWise Attribution Divergence (GWAD), and comparing its effects with an existing family of loss terms for distributive fairness. We show that controlling explanation loss tends to preserve accuracy, and that controlling distributive-fairness loss also tends to reduce explanation loss empirically, even though this is not guaranteed theoretically; including both loss terms yields additive improvements. We conclude by considering the implications for trust and policy of reasoning about fairness as manipulations of explanations.
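A hedged sketch of what a GWAD-style loss term could look like: penalize the gap between the average feature attributions of two groups. The paper's exact formulation may differ; this NumPy version is only an illustration.

```python
# Assumption-laden illustration of a groupwise attribution divergence.
import numpy as np

def gwad(attributions: np.ndarray, protected: np.ndarray) -> float:
    """attributions: (n_samples, n_features) matrix, e.g. from integrated
    gradients; protected: boolean group mask over the samples."""
    gap = attributions[protected].mean(axis=0) - attributions[~protected].mean(axis=0)
    return float(np.linalg.norm(gap))  # divergence between group-mean explanations

rng = np.random.default_rng(0)
attr = rng.normal(size=(100, 5))  # stand-in attribution scores
mask = rng.random(100) < 0.4      # stand-in protected-group labels
print(gwad(attr, mask))           # near 0 when group explanations align
```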