Rank-based linkage I: triplet comparisons and oriented simplicial complexes
Rank-based linkage is a new tool for summarizing a collection of objects
according to their relationships. These objects are not mapped to vectors, and
``similarity'' between objects need be neither numerical nor symmetrical. All
an object needs to do is rank nearby objects by similarity to itself, using a
Comparator which is transitive, but need not be consistent with any metric on
the whole set. Call this a ranking system on the object set. Rank-based linkage is applied
to the K-nearest neighbor digraph derived from a ranking system. Computations
occur on a 2-dimensional abstract oriented simplicial complex whose faces are
among the points, edges, and triangles of the line graph of the undirected
K-nearest neighbor graph on the object set. The procedure builds an edge-weighted
linkage graph in which the weight attached to each link is called the in-sway
between the two objects it joins. Take the links whose in-sway is at least a
given threshold, and partition the object set into components of the resulting
graph, as the threshold varies. Rank-based linkage is a
functor from a category of out-ordered digraphs to a category of partitioned
sets, with the practical consequence that augmenting the set of objects in a
rank-respectful way gives a fresh clustering which does not ``rip apart'' the
previous one. The same holds for single linkage clustering in the metric space
context, but not for typical optimization-based methods. Open combinatorial
problems are presented in the last section.
Comment: 37 pages, 12 figures
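The thresholding step described above can be illustrated with a toy sketch. The link weight below is a simple stand-in (a co-neighborhood count), not the paper's in-sway computed on the simplicial complex, and the integer comparator is hypothetical:

```python
def knn_digraph(objects, rank, k):
    """Directed K-NN graph: each object points to the k objects it ranks closest."""
    return {x: rank(x)[:k] for x in objects}

def components(nodes, edges):
    """Connected components of an undirected graph (depth-first search)."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, parts = set(), []
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        parts.append(comp)
    return parts

# Toy comparator: integers ranked by absolute difference. (Rank-based linkage
# only needs the rankings, not the underlying metric used to build this toy.)
objects = [0, 1, 2, 10, 11, 12]
rank = lambda x: sorted((y for y in objects if y != x), key=lambda y: abs(x - y))
G = knn_digraph(objects, rank, k=2)

# Stand-in link weight: how many objects see both endpoints among their
# out-neighbors (NOT the paper's in-sway, which lives on a simplicial complex).
links = {frozenset((x, y)) for x in G for y in G[x] if x in G[y]}
weight = {e: sum(1 for z in objects if set(e) <= set(G[z]) | {z}) for e in links}

threshold = 1
kept = [tuple(e) for e, w in weight.items() if w >= threshold]
print(components(objects, kept))  # → [{0, 1, 2}, {10, 11, 12}]
```

Raising the threshold can only refine the partition, which is the behavior the functoriality result makes precise.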
Soliton Gas: Theory, Numerics and Experiments
The concept of soliton gas was introduced in 1971 by V. Zakharov as an
infinite collection of weakly interacting solitons in the framework of
the Korteweg-de Vries (KdV) equation. In this theoretical construction of a diluted
soliton gas, solitons with random parameters are almost non-overlapping. More
recently, the concept has been extended to dense gases in which solitons
strongly and continuously interact. The notion of soliton gas is inherently
associated with integrable wave systems described by nonlinear partial
differential equations like the KdV equation or the one-dimensional nonlinear
Schrödinger equation that can be solved using the inverse scattering
transform. Over the last few years, the field of soliton gases has received
rapidly growing interest from both the theoretical and experimental points of
view. In particular, it has been realized that the soliton gas dynamics
underlies some fundamental nonlinear wave phenomena such as spontaneous
modulation instability and the formation of rogue waves. The recently
discovered deep connections of soliton gas theory with generalized
hydrodynamics have broadened the field and opened new fundamental questions
related to the soliton gas statistics and thermodynamics. We review the main
recent theoretical and experimental results in the field of soliton gas. The
key conceptual tools of the field, such as the inverse scattering transform,
the thermodynamic limit of finite-gap potentials and the Generalized Gibbs
Ensembles are introduced, and various open questions and future challenges are
discussed.
Comment: 35 pages, 8 figures
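As a concrete anchor for the solitons discussed above, the KdV equation u_t + 6uu_x + u_xxx = 0 admits the classical one-soliton solution u = (c/2) sech²(√c (x − ct − x₀)/2), whose amplitude is proportional to its speed c. The short check below verifies numerically that the peak travels at speed c:

```python
import numpy as np

def kdv_soliton(x, t, c=4.0, x0=0.0):
    """One-soliton solution of u_t + 6 u u_x + u_xxx = 0:
    amplitude c/2, speed c, initial position x0."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t - x0)) ** 2

x = np.linspace(-20, 20, 2001)            # grid spacing 0.02
peak0 = x[np.argmax(kdv_soliton(x, t=0.0))]
peak1 = x[np.argmax(kdv_soliton(x, t=2.0))]
print(peak1 - peak0)                       # ≈ c * Δt = 8
```

A diluted soliton gas, in Zakharov's original sense, is a superposition of many such pulses with random parameters and negligible overlap.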
Efficiency measurement based on novel performance measures in total productive maintenance (TPM) using a fuzzy integrated COPRAS and DEA method
Total Productive Maintenance (TPM) has been widely recognized as a strategic tool and lean manufacturing practice for improving manufacturing performance and sustainability, and it has therefore been successfully implemented in many organizations. The evaluation of TPM efficiency can assist companies in improving their operations across a variety of dimensions. This paper aims to propose a comprehensive and systematic framework for the evaluation of TPM performance. The proposed total productive maintenance performance measurement system (TPM PMS) is divided into four phases (design, evaluation, implementation, and review): i) the design of new performance measures, ii) the evaluation of the new performance measures, iii) the implementation of the new performance measures to evaluate TPM performance, and iv) the review of the TPM PMS. In the design phase, different types of performance measures impacting TPM are defined and analyzed by decision-makers. In the evaluation phase, novel performance measures are evaluated using the Fuzzy COmplex PRoportional ASsessment (FCOPRAS) method. In the implementation phase, a modified fuzzy data envelopment analysis (FDEA) is used to determine efficient and inefficient TPM performance with the novel performance measures. In the review phase, TPM performance is periodically monitored, and the proposed TPM PMS is reviewed for successful implementation of TPM. A real-world case study from an international manufacturing company operating in the automotive industry is presented to demonstrate the applicability of the proposed TPM PMS. 
The main findings from the real-world case study showed that the proposed TPM PMS allows TPM performance to be measured with different indicators, especially soft (e.g., human-related) ones, and supports decision-makers by comparing the TPM performance of production lines, so that the most important preventive/predictive decisions and actions can be prioritized across lines, especially those that are ineffective in TPM program implementation. Therefore, this system can be considered a powerful monitoring tool and provides reliable evidence for making the implementation process of TPM more efficient in a real-world production environment.
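For orientation, the crisp (non-fuzzy) COPRAS method underlying FCOPRAS can be sketched as follows; the production lines, criteria, and weights below are hypothetical, and the paper's fuzzy extension and FDEA stage are not reproduced:

```python
import numpy as np

def copras(X, weights, beneficial):
    """Crisp COPRAS ranking. X: alternatives x criteria matrix;
    beneficial: boolean mask, True where larger values are better."""
    X = np.asarray(X, float)
    w = np.asarray(weights, float)
    D = w * X / X.sum(axis=0)            # sum-normalized, weighted matrix
    b = np.asarray(beneficial)
    S_plus = D[:, b].sum(axis=1)         # contribution of benefit criteria
    S_minus = D[:, ~b].sum(axis=1)       # contribution of cost criteria
    # relative significance: reward benefits, penalize costs inversely
    Q = S_plus + S_minus.sum() / (S_minus * (1.0 / S_minus).sum())
    return Q / Q.max() * 100.0           # utility degree in percent

# Hypothetical TPM-style data: 3 production lines, criteria =
# [availability, quality rate, downtime (h)]; downtime is a cost criterion.
X = [[0.92, 0.98, 12.0],
     [0.85, 0.95, 20.0],
     [0.90, 0.99, 15.0]]
util = copras(X, weights=[0.4, 0.3, 0.3], beneficial=[True, True, False])
print(util.round(1))
```

The line with utility degree 100% is the best performer; the others are read as percentages of it.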
Corporate Social Responsibility: the institutionalization of ESG
Understanding the impact of Corporate Social Responsibility (CSR) on firm performance as it relates to industries reliant on technological innovation is a complex and perpetually evolving challenge. To thoroughly investigate this topic, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to function as a standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis is that the evolution of CSR into its modern quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive in long-term, strategic metrics (ROA and ROIC) and that there is no correlation in short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their nonreporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social impact mitigation. Overall, this dissertation contributes to the literature by filling gaps in understanding the nature of the impact that ESG has on firm performance, particularly from a management perspective.
Rational-approximation-based model order reduction of Helmholtz frequency response problems with adaptive finite element snapshots
We introduce several spatially adaptive model order reduction approaches tailored to non-coercive elliptic boundary value problems, specifically, parametric-in-frequency Helmholtz problems. The offline information is computed by means of adaptive finite elements, so that each snapshot lives in a different discrete space that resolves the local singularities of the analytical solution and is adjusted to the considered frequency value. A rational surrogate is then assembled adopting either a least-squares or an interpolatory approach, yielding a function-valued version of the standard rational interpolation method (V-SRI) and the minimal rational interpolation method (MRI). In the context of building an approximation for linear or quadratic functionals of the Helmholtz solution, we perform several numerical experiments to compare the proposed methodologies. Our simulations show that, for interior resonant problems (whose singularities are encoded by poles), the V-SRI and MRI work comparably well. Instead, when dealing with exterior scattering problems, whose frequency response is mostly smooth, the V-SRI method seems to be the best-performing one.
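A minimal sketch of the least-squares flavor of rational surrogate building, for a scalar frequency response with exact snapshots (the V-SRI/MRI methods operate on function-valued, adaptively meshed snapshots, which this toy omits; the response and snapshot frequencies are hypothetical):

```python
import numpy as np

def rational_ls(z, f, n=2):
    """Linearized least-squares rational fit p/q with deg p = deg q = n
    and q monic: solve p(z_i) - f_i * q(z_i) = 0 for the coefficients."""
    Vp = np.vander(z, n + 1, increasing=True)   # columns 1, z, ..., z^n
    Vq = np.vander(z, n, increasing=True)       # columns 1, z, ..., z^{n-1}
    A = np.hstack([Vp, -f[:, None] * Vq])
    rhs = f * z ** n                            # monic leading term of q moved across
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    p = coef[: n + 1]                           # numerator, increasing powers
    q = np.append(coef[n + 1 :], 1.0)           # denominator, monic
    return lambda s: np.polyval(p[::-1], s) / np.polyval(q[::-1], s)

# Hypothetical scalar frequency response with one resonance at k0 = 3.
f = lambda k: 1.0 / (k ** 2 - 9.0)
k_snap = np.array([1.0, 1.5, 2.0, 2.5, 4.0, 5.0])   # snapshot frequencies
surrogate = rational_ls(k_snap, f(k_snap))
print(abs(surrogate(3.5) - f(3.5)))                  # small error off the snapshots
```

Because the target here is itself rational of the fitted degree, the surrogate recovers it essentially exactly; for genuine Helmholtz responses the degree and snapshot placement become the delicate choices.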
When to be critical? Performance and evolvability in different regimes of neural Ising agents
It has long been hypothesized that operating close to the critical state is
beneficial for natural and artificial systems and their evolution. We put this
hypothesis to test in a system of evolving foraging agents controlled by neural
networks that can adapt agents' dynamical regime throughout evolution.
Surprisingly, we find that all populations that discover solutions evolve to
be subcritical. By a resilience analysis, we find that there are still benefits
of starting the evolution in the critical regime. Namely, initially critical
agents maintain their fitness level under environmental changes (for example,
in the lifespan) and degrade gracefully when their genome is perturbed. At the
same time, initially subcritical agents, even when evolved to the same fitness,
are often unable to withstand the changes in the lifespan and degrade
catastrophically with genetic perturbations. Furthermore, we find that the
optimal distance to criticality depends on the task complexity. To test this,
we introduce a hard and a simple task: for the hard task, agents evolve closer to criticality
whereas more subcritical solutions are found for the simple task. We verify
that our results are independent of the selected evolutionary mechanisms by
testing them on two principally different approaches: a genetic algorithm and
an evolutionary strategy. In summary, our study suggests that although optimal
behaviour in the simple task is obtained in a subcritical regime, initializing
near criticality is important to be efficient at finding optimal solutions for
new tasks of unknown complexity.
Comment: arXiv admin note: substantial text overlap with arXiv:2103.1218
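A minimal genetic-algorithm skeleton of the kind such studies build on (truncation selection plus Gaussian mutation on real-valued genomes); the task and genome here are toy stand-ins, not the paper's Ising-controlled foraging agents:

```python
import random

def evolve(fitness, init_pop, generations=100, sigma=0.1, elite=0.2):
    """Minimal genetic algorithm: keep the top `elite` fraction unchanged,
    refill the population with Gaussian-mutated copies of the elites."""
    pop = list(init_pop)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: max(1, int(elite * len(pop)))]
        pop = parents + [
            [g + random.gauss(0.0, sigma) for g in random.choice(parents)]
            for _ in range(len(pop) - len(parents))
        ]
    return max(pop, key=fitness)

# Toy task: match a target genome. The spread of the initial genomes plays
# the role of the agents' initial dynamical regime (a loose analogy only).
random.seed(0)
target = [0.5, -1.0, 2.0]
fitness = lambda g: -sum((a - b) ** 2 for a, b in zip(g, target))
init = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(30)]
best = evolve(fitness, init)
print([round(v, 2) for v in best])
```

With elitism, the best fitness is monotone over generations; the paper's resilience analysis additionally perturbs the environment and the genome after evolution.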
Path integrals and stochastic calculus
Path integrals are a ubiquitous tool in theoretical physics. However, their
use is sometimes hindered by the lack of control on various manipulations --
such as performing a change of the integration path -- one would like to carry
out in the light-hearted fashion that physicists enjoy. Similar issues arise in
the field of stochastic calculus, which we review to prepare the ground for a
proper construction of path integrals. At the level of path integration, and in
arbitrary space dimension, we not only report on existing Riemannian
geometry-based approaches that render path integrals amenable to the standard
rules of calculus, but also bring forth new routes, based on a fully
time-discretized approach, that achieve the same goal. We illustrate these
various definitions of path integration on simple examples such as the
diffusion of a particle on a sphere.
Comment: 96 pages, 4 figures. New title, expanded introduction and additional references. Version accepted in Advances in Physics
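The discretization-point subtlety at the heart of stochastic calculus can be seen already in one dimension: an Euler scheme that evaluates the noise coefficient at the left point converges to the Itô equation, while a midpoint (Heun-type) evaluation converges to the Stratonovich one. A sketch for dX = σX dW, where the two conventions differ by the spurious drift b b'/2:

```python
import random, math

def simulate(x0, sigma, T, n, rule="ito", seed=1):
    """Euler scheme for dX = sigma * X dW with Ito (pre-point) or
    Stratonovich (midpoint, via a predictor step) discretization."""
    rng = random.Random(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))
        if rule == "ito":
            x += sigma * x * dW                    # evaluate b(X) at the left point
        else:
            x_pred = x + sigma * x * dW            # predictor for the midpoint
            x += sigma * 0.5 * (x + x_pred) * dW   # Heun/midpoint evaluation
    return x

# E[log X_T] is -sigma^2 T / 2 under Ito but 0 under Stratonovich.
runs = 2000
ito = sum(math.log(simulate(1.0, 0.5, 1.0, 200, "ito", s)) for s in range(runs)) / runs
strat = sum(math.log(simulate(1.0, 0.5, 1.0, 200, "strat", s)) for s in range(runs)) / runs
print(round(ito, 2), round(strat, 2))  # ≈ -0.125 vs ≈ 0.0
```

The same ambiguity reappears in path integrals as the choice of discretization point in the action, which is what the geometric constructions in the review are designed to control.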
The R∞-property for right-angled Artin groups and their nilpotent quotients
It is proven that every non-abelian right-angled Artin group has the
R∞-property, and bounds are given on the R∞-nilpotency index. In
case the defining graph is transposition-free, which is true for almost all graphs, it
is shown that the R∞-nilpotency index is equal to 2.
Comment: 21 pages
Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics
Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts.
In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories and hence its use is limited.
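The brute-force version of such an exact test, enumerating every outcome vector (which is exactly what makes the runtime blow up with the number of categories), can be sketched as follows; the thesis's algorithm is more refined than this:

```python
from math import factorial, log

def compositions(n, k):
    """All ways to distribute n observations over k categories."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def multinomial_pmf(counts, probs):
    """Probability of an outcome vector under the null probabilities."""
    n = sum(counts)
    coef = factorial(n)
    for c in counts:
        coef //= factorial(c)
    p = float(coef)
    for c, q in zip(counts, probs):
        p *= q ** c
    return p

def llr(counts, probs):
    """Log-likelihood-ratio statistic G^2 = 2 * sum c * log(c / (n p))."""
    n = sum(counts)
    return 2.0 * sum(c * log(c / (n * p)) for c, p in zip(counts, probs) if c > 0)

def exact_p_value(observed, probs):
    """Exact p-value: total null probability of outcomes at least as extreme."""
    t_obs = llr(observed, probs)
    return sum(
        multinomial_pmf(c, probs)
        for c in compositions(sum(observed), len(probs))
        if llr(c, probs) >= t_obs - 1e-12
    )

p = exact_p_value((8, 1, 1), (1 / 3, 1 / 3, 1 / 3))
print(round(p, 4))
```

Swapping `llr` for Pearson's chi-square changes only the ordering of outcomes, which is why the same enumeration supports several test statistics.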
In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R^2 of least squares regression.
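The pool-adjacent-violators step can be sketched as follows; the forecast-outcome pairs are hypothetical, and this computes only the isotonic fit, not full T-reliability diagrams or score decompositions:

```python
def pav(y, w=None):
    """Pool-adjacent-violators: weighted least-squares nondecreasing fit to y."""
    w = w or [1.0] * len(y)
    merged = []  # blocks as [weighted sum, total weight, count]
    for yi, wi in zip(y, w):
        merged.append([yi * wi, wi, 1])
        # merge while the last two block means violate monotonicity
        while len(merged) > 1 and merged[-2][0] * merged[-1][1] > merged[-1][0] * merged[-2][1]:
            s, ww, c = merged.pop()
            merged[-1][0] += s
            merged[-1][1] += ww
            merged[-1][2] += c
    out = []
    for s, ww, c in merged:
        out.extend([s / ww] * c)
    return out

# Reliability-diagram use: sort binary outcomes by forecast probability,
# then isotonically regress the outcomes (hypothetical data).
forecasts = [0.1, 0.3, 0.4, 0.7, 0.8, 0.9]
outcomes = [0, 1, 0, 1, 0, 1]
pairs = sorted(zip(forecasts, outcomes))
calibrated = pav([o for _, o in pairs])
print(calibrated)  # → [0.0, 0.5, 0.5, 0.5, 0.5, 1.0]
```

Plotting `calibrated` against the sorted forecasts gives the empirical reliability curve; deviations from the diagonal indicate miscalibration.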
In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.