An Analysis Tool for Push-Sum Based Distributed Optimization
The push-sum algorithm is probably the most important distributed averaging
approach over directed graphs, which has been applied to various problems
including distributed optimization. This paper establishes the explicit
absolute probability sequence for the push-sum algorithm and, based on it,
constructs quadratic Lyapunov functions for push-sum based distributed
optimization algorithms. As illustrative examples, the proposed novel analysis
tool can improve the convergence rates of the subgradient-push and stochastic
gradient-push, two important algorithms for distributed convex optimization
over unbalanced directed graphs. Specifically, the paper proves that the
subgradient-push algorithm converges at a rate of for general
convex functions and stochastic gradient-push algorithm converges at a rate of
for strongly convex functions, over time-varying unbalanced directed
graphs. Both rates are respectively the same as the state-of-the-art rates of
their single-agent counterparts and thus optimal, which closes the theoretical
gap between the centralized and push-sum based (sub)gradient methods. The paper
further proposes a heterogeneous push-sum based subgradient algorithm in which
each agent can arbitrarily switch between subgradient-push and
push-subgradient. The heterogeneous algorithm thus subsumes both
subgradient-push and push-subgradient as special cases, and still converges to
an optimal point at an optimal rate. The proposed tool can also be extended to
analyze distributed weighted averaging.
Comment: arXiv admin note: substantial text overlap with arXiv:2203.16623, arXiv:2303.1706
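As background, the push-sum averaging primitive underlying these methods can be sketched in a few lines (a toy synchronous illustration, not the paper's analysis tool): every node splits a value sum and a weight sum evenly among its out-neighbors and itself, and each node's ratio of the two sums converges to the global average even when the directed graph is unbalanced.

```python
def push_sum(values, out_neighbors, rounds=200):
    """Toy synchronous push-sum averaging over a directed graph.

    values: initial scalar at each node (node i holds values[i]).
    out_neighbors: dict mapping node index -> list of out-neighbors.
    Returns each node's estimate x_i / w_i of the global average.
    """
    n = len(values)
    x = list(values)   # value sums (total mass is conserved across rounds)
    w = [1.0] * n      # weight sums
    for _ in range(rounds):
        nx, nw = [0.0] * n, [0.0] * n
        for i in range(n):
            targets = out_neighbors[i] + [i]  # send to out-neighbors and self
            for j in targets:
                nx[j] += x[i] / len(targets)
                nw[j] += w[i] / len(targets)
        x, w = nx, nw
    return [xi / wi for xi, wi in zip(x, w)]
```

On any strongly connected digraph the ratios converge geometrically; for example, on the cycle 0 -> 1 -> 2 -> 0 with initial values [0, 3, 6], every node's estimate approaches the average 3.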
Rank-based linkage I: triplet comparisons and oriented simplicial complexes
Rank-based linkage is a new tool for summarizing a collection of objects
according to their relationships. These objects are not mapped to vectors, and
"similarity" between objects need be neither numerical nor symmetrical. All
an object needs to do is rank nearby objects by similarity to itself, using a
Comparator which is transitive, but need not be consistent with any metric on
the whole set. Call this a ranking system on the object set. Rank-based
linkage is applied to the K-nearest neighbor digraph derived from a ranking
system. Computations occur on a 2-dimensional abstract oriented simplicial
complex whose faces are among the points, edges, and triangles of the line
graph of the undirected K-nearest neighbor graph on the object set. It builds
an edge-weighted linkage graph in which the weight of each edge is called the
in-sway between the two objects it joins. Take the links whose in-sway is at
least a threshold t, and partition the objects into components of the
resulting graph, for varying t. Rank-based linkage is a
functor from a category of out-ordered digraphs to a category of partitioned
sets, with the practical consequence that augmenting the set of objects in a
rank-respectful way gives a fresh clustering which does not "rip apart" the
previous one. The same holds for single linkage clustering in the metric space
context, but not for typical optimization-based methods. Open combinatorial
problems are presented in the last section.
Comment: 37 pages, 12 figures
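The threshold-and-partition step described above can be sketched generically (a union-find sketch over hypothetical edge weights standing in for in-sway values, whose actual computation is the subject of the paper):

```python
def components_at_level(nodes, weighted_edges, t):
    """Partition nodes into connected components of the graph whose
    links have weight (here: in-sway) at least t, via union-find.

    weighted_edges: iterable of (u, v, weight) triples.
    Returns the partition as a sorted list of sorted groups.
    """
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v, w in weighted_edges:
        if w >= t:              # keep only links at this in-sway level
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    groups = {}
    for v in nodes:
        groups.setdefault(find(v), []).append(v)
    return sorted(sorted(g) for g in groups.values())
```

Lowering the threshold t merges components, yielding the nested family of partitions described above.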
Towards Advantages of Parameterized Quantum Pulses
The advantages of quantum pulses over quantum gates have attracted increasing
attention from researchers. Quantum pulses offer benefits such as flexibility,
high fidelity, scalability, and real-time tuning. However, while there are
established workflows and processes to evaluate the performance of quantum
gates, there has been limited research on profiling parameterized pulses and
providing guidance for pulse circuit design. To address this gap, our study
proposes a set of design spaces for parameterized pulses, evaluating these
pulses based on metrics such as expressivity, entanglement capability, and
effective parameter dimension. Using these design spaces, we demonstrate that
parameterized pulses hold advantages over gate circuits in both duration and
performance simultaneously, thus enabling high-performance quantum computing.
Our proposed design space for parameterized pulse circuits has shown
promising results in quantum chemistry benchmarks.
Comment: 11 Figures, 4 Tables
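One of the cited metrics, expressivity, is commonly estimated by comparing the state-fidelity distribution an ansatz produces against the Haar distribution; a single-qubit sketch of that standard estimator (assumed here for illustration, not taken from the paper's pulse-level workflow) is:

```python
import numpy as np

def expressibility(states, bins=20, pairs=2000, seed=0):
    """Discretized KL divergence between the fidelity distribution of
    sampled states and the single-qubit Haar distribution (which is
    uniform on [0, 1]). Lower values mean a more expressive ansatz."""
    rng = np.random.default_rng(seed)
    fid = []
    for _ in range(pairs):
        i, j = rng.integers(0, len(states), size=2)
        fid.append(abs(np.vdot(states[i], states[j])) ** 2)
    hist, _ = np.histogram(fid, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    q = 1.0 / bins  # Haar fidelity histogram is flat for one qubit
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q)))
```

A Haar-random sample of states scores near zero, while a degenerate ansatz that always prepares the same state scores near log(bins).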
Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference
We propose Conditional Adapter (CoDA), a parameter-efficient transfer
learning method that also improves inference efficiency. CoDA generalizes
beyond standard adapter approaches to enable a new way of balancing speed and
accuracy using conditional computation. Starting with an existing dense
pretrained model, CoDA adds sparse activation together with a small number of
new parameters and a light-weight training phase. Our experiments demonstrate
that the CoDA approach provides an unexpectedly efficient way to transfer
knowledge. Across a variety of language, vision, and speech tasks, CoDA
achieves a 2x to 8x inference speed-up compared to the state-of-the-art Adapter
approach with moderate to no accuracy loss and the same parameter efficiency.
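The conditional-computation idea can be illustrated with a minimal routing sketch (an assumed toy form with a trivial router score, not CoDA's actual architecture): only the k highest-scoring tokens pass through a small bottleneck adapter, while the rest skip it entirely, which is where the inference speed-up comes from.

```python
import numpy as np

def conditional_adapter(h, W_down, W_up, k):
    """Route only the top-k tokens through a residual bottleneck adapter.

    h: (seq_len, d) hidden states; W_down: (d, r); W_up: (r, d).
    Unrouted tokens pass through unchanged (zero extra compute).
    """
    scores = h.sum(axis=1)             # toy router: score each token
    top = np.argsort(scores)[-k:]      # indices of the k routed tokens
    out = h.copy()
    routed = h[top]
    out[top] = routed + np.maximum(routed @ W_down, 0.0) @ W_up  # ReLU bottleneck, residual
    return out
```

With k much smaller than the sequence length, the adapter matmuls touch only a small slice of the tokens, trading a little accuracy for throughput.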
Manipulating Federated Recommender Systems: Poisoning with Synthetic Users and Its Countermeasures
Federated Recommender Systems (FedRecs) are considered privacy-preserving
techniques to collaboratively learn a recommendation model without sharing user
data. Since all participants can directly influence the systems by uploading
gradients, FedRecs are vulnerable to poisoning attacks of malicious clients.
However, most existing poisoning attacks on FedRecs either rely on prior
knowledge or are less effective. To reveal the real vulnerability of FedRecs,
in this paper, we present a new poisoning attack method that effectively
manipulates target items' ranks and exposure rates in top-K
recommendation without relying on any prior knowledge. Specifically, our attack
manipulates target items' exposure rate by a group of synthetic malicious users
who upload poisoned gradients considering target items' alternative products.
We conduct extensive experiments with two widely used FedRecs (Fed-NCF and
Fed-LightGCN) on two real-world recommendation datasets. The experimental
results show that our attack can significantly improve the exposure rate of
unpopular target items with far fewer malicious users and fewer global
epochs than state-of-the-art attacks. In addition to disclosing the security
hole, we design a novel countermeasure for poisoning attacks on FedRecs.
Specifically, we propose a hierarchical gradient clipping with sparsified
updating to defend against existing poisoning attacks. The empirical results
demonstrate that the proposed defending mechanism improves the robustness of
FedRecs.
Comment: This paper has been accepted by SIGIR202
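The shape of the proposed defense can be sketched as follows (an assumed simplification with hypothetical clipping levels, not the paper's exact mechanism): clip each client's gradient at two levels of granularity, then keep only the largest-magnitude entries of the update.

```python
import numpy as np

def clip_and_sparsify(grad, row_clip, global_clip, keep_frac):
    """Sketch of hierarchical clipping + sparsified updating.

    grad: (items, dim) gradient matrix from one client.
    Clips each row's norm, then the overall norm, then zeroes all but
    the top keep_frac fraction of entries by magnitude.
    """
    g = grad.copy()
    # level 1: per-row (e.g. per-item embedding) norm clipping
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, row_clip / np.maximum(norms, 1e-12))
    # level 2: global norm clipping
    total = np.linalg.norm(g)
    if total > global_clip:
        g = g * (global_clip / total)
    # sparsified updating: keep only the largest-magnitude entries
    k = max(1, int(keep_frac * g.size))
    thresh = np.sort(np.abs(g).ravel())[-k]
    g[np.abs(g) < thresh] = 0.0
    return g
```

Clipping bounds the influence any single (possibly synthetic) user can exert on an item's embedding, while sparsification discards the many small poisoned coordinates.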
ADS_UNet: A Nested UNet for Histopathology Image Segmentation
The UNet model consists of fully convolutional network (FCN) layers arranged
as contracting encoder and upsampling decoder maps. Nested arrangements of
these encoder and decoder maps give rise to extensions of the UNet model, such
as UNete and UNet++. Other refinements include constraining the outputs of the
convolutional layers to discriminate between segment labels when trained end to
end, a property called deep supervision. This reduces feature diversity in
these nested UNet models despite their large parameter space. Furthermore, for
texture segmentation, pixel correlations at multiple scales contribute to the
classification task; hence, explicit deep supervision of shallower layers is
likely to enhance performance. In this paper, we propose ADS UNet, a stage-wise
additive training algorithm that incorporates resource-efficient deep
supervision in shallower layers and takes performance-weighted combinations of
the sub-UNets to create the segmentation model. We provide empirical evidence
on three histopathology datasets to support the claim that the proposed ADS
UNet reduces correlations between constituent features and improves performance
while being more resource efficient. We demonstrate that ADS_UNet outperforms
state-of-the-art Transformer-based models by 1.08 and 0.6 points on the CRAG
and BCSS datasets, yet requires only 37% of the GPU consumption and 34% of the
training time that Transformers require.
Comment: To be published in Expert Systems With Applications
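The performance-weighted combination of sub-UNets can be sketched in its simplest assumed form (hypothetical validation scores as weights; the paper's stage-wise additive training determines the weighting differently):

```python
import numpy as np

def weighted_combination(prob_maps, scores):
    """Combine sub-model probability maps, weighting each map by its
    (e.g. validation) score, with weights normalized to sum to one."""
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, prob_maps))
```

Better-performing sub-UNets thus contribute more to the final segmentation map.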
Fair Grading Algorithms for Randomized Exams
This paper studies grading algorithms for randomized exams. In a randomized
exam, each student is asked a small number of random questions from a large
question bank. The predominant grading rule is simple averaging, i.e.,
calculating grades by averaging scores on the questions each student is asked,
which is fair ex-ante, over the randomized questions, but not fair ex-post, on
the realized questions. The fair grading problem is to estimate the average
grade of each student on the full question bank. The maximum-likelihood
estimator for the Bradley-Terry-Luce model on the bipartite student-question
graph is shown to be consistent with high probability when the number of
questions asked to each student is at least the cubed-logarithm of the number
of students. In an empirical study on exam data and in simulations, our
algorithm based on the maximum-likelihood estimator significantly outperforms
simple averaging in prediction accuracy and ex-post fairness, even with a
small class and exam size.
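The maximum-likelihood idea can be sketched with a Rasch-style reading of the Bradley-Terry-Luce model (an assumed simplification for illustration): each student gets an ability, each question a difficulty, P(correct) = sigmoid(ability - difficulty), and both are fit by gradient ascent on the log-likelihood of the observed (student, question, correct) triples.

```python
import math

def fit_abilities(responses, steps=500, lr=0.1):
    """Fit P(student s answers question q) = sigmoid(a[s] - d[q]) by
    gradient ascent on the Bernoulli log-likelihood.

    responses: list of (student, question, correct in {0, 1}) triples.
    Returns (ability dict a, difficulty dict d).
    """
    a = {s: 0.0 for s, _, _ in responses}
    d = {q: 0.0 for _, q, _ in responses}
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(steps):
        ga = {s: 0.0 for s in a}
        gd = {q: 0.0 for q in d}
        for s, q, y in responses:
            err = y - sig(a[s] - d[q])   # residual drives the gradient
            ga[s] += err
            gd[q] -= err
        for s in a:
            a[s] += lr * ga[s]
        for q in d:
            d[q] += lr * gd[q]
    return a, d
```

A student's fair grade on the full bank is then the mean of sig(a[s] - d[q]) over all questions q, which, unlike simple averaging, corrects for having drawn unusually hard or easy questions.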
Grasping nothing: a study of minimal ontologies and the sense of music
If music were to have a proper sense – one in which it is truly given – one might reasonably place this in sound and aurality. I contend, however, that no such sense exists; rather, the sense of music takes place, and it does so with the impossible. To this end, this thesis – which is a work of philosophy and music – advances an ontology of the impossible (i.e., it thinks the being of what, properly speaking, can have no being) and considers its implications for music, articulating how ontological aporias – of the event, of thinking the absolute, and of sovereignty’s dismemberment – imply senses of music that are anterior to sound. John Cage’s Silent Prayer, a nonwork he never composed, compels a rethinking of silence on the basis of its contradictory status of existence; Florian Hecker et al.’s Speculative Solution offers a basis for thinking absolute music anew to the precise extent that it is a discourse of meaninglessness; and Manfred Werder’s [yearn] pieces exhibit exemplarily that music’s sense depends on the possibility of its counterfeiting. Insomuch as these accounts produce musical senses that take the place of sound, they are also understood to be performances of these pieces. Here, then, thought is music’s organon and its instrument.
Efficiency measurement based on novel performance measures in total productive maintenance (TPM) using a fuzzy integrated COPRAS and DEA method
Total Productive Maintenance (TPM) has been widely recognized as a strategic tool and lean manufacturing practice for improving manufacturing performance and sustainability, and therefore it has been successfully implemented in many organizations. The evaluation of TPM efficiency can assist companies in improving their operations across a variety of dimensions. This paper aims to propose a comprehensive and systematic framework for the evaluation of TPM performance. The proposed total productive maintenance performance measurement system (TPM PMS) is divided into four phases (i.e., design, evaluation, implementation, and review): i) the design of new performance measures, ii) the evaluation of the new performance measures, iii) the implementation of the new performance measures to evaluate TPM performance, and iv) the reviewing of the TPM PMS. In the design phase, different types of performance measures impacting TPM are defined and analyzed by decision-makers. In the evaluation phase, novel performance measures are evaluated using the Fuzzy COmplex Proportional Assessment (FCOPRAS) method. In the implementation phase, a modified fuzzy data envelopment analysis (FDEA) is used to determine efficient and inefficient TPM performance with novel performance measures. In the review phase, TPM performance is periodically monitored, and the proposed TPM PMS is reviewed for successful implementation of TPM. A real-world case study from an international manufacturing company operating in the automotive industry is presented to demonstrate the applicability of the proposed TPM PMS.
The main findings from the real-world case study showed that the proposed TPM PMS allows measuring TPM performance with different indicators, especially soft ones (e.g., human-related), and supports decision-makers by comparing the TPM performance of production lines and thus prioritizing the most important preventive/predictive decisions and actions for each line, especially the ones that are ineffective in TPM program implementation. Therefore, this system can be considered a powerful monitoring tool and reliable evidence for making the implementation process of TPM more efficient in a real-world production environment.
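The COPRAS step of the evaluation phase can be sketched in its crisp (non-fuzzy) form (an illustrative simplification; the paper uses fuzzy numbers throughout): alternatives are scored from weighted-normalized sums of benefit and cost criteria, then ranked by relative significance.

```python
def copras(matrix, weights, is_benefit):
    """Crisp COPRAS: rank alternatives (rows) over criteria (columns).

    matrix: m x n performance values; weights: n criterion weights;
    is_benefit: n booleans (True = higher is better).
    Returns utility degrees in percent (best alternative = 100).
    """
    m, n = len(matrix), len(matrix[0])
    col = [sum(row[j] for row in matrix) for j in range(n)]
    # weighted-normalized decision matrix
    norm = [[weights[j] * matrix[i][j] / col[j] for j in range(n)] for i in range(m)]
    s_plus = [sum(norm[i][j] for j in range(n) if is_benefit[j]) for i in range(m)]
    s_minus = [sum(norm[i][j] for j in range(n) if not is_benefit[j]) for i in range(m)]
    # relative significance: benefit sum plus a term rewarding low cost sums
    total = sum(s_minus)
    inv = sum(1.0 / s for s in s_minus)
    q = [s_plus[i] + total / (s_minus[i] * inv) for i in range(m)]
    best = max(q)
    return [100.0 * qi / best for qi in q]
```

For example, with one benefit and one cost criterion, an alternative with twice the benefit at equal cost receives the 100% utility degree.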
Neural Architecture Search: Insights from 1000 Papers
In the past decade, advances in deep learning have resulted in breakthroughs
in a variety of areas, including computer vision, natural language
understanding, speech recognition, and reinforcement learning. Specialized,
high-performing neural architectures are crucial to the success of deep
learning in these areas. Neural architecture search (NAS), the process of
automating the design of neural architectures for a given task, is an
inevitable next step in automating machine learning and has already outpaced
the best human-designed architectures on many tasks. In the past few years,
research in NAS has been progressing rapidly, with over 1000 papers released
since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized
and comprehensive guide to neural architecture search. We give a taxonomy of
search spaces, algorithms, and speedup techniques, and we discuss resources
such as benchmarks, best practices, other surveys, and open-source libraries.