Partner selection in green supply chains using PSO – a practical approach
Partner selection is crucial to green supply chain management as the focal firm is responsible for the environmental performance of the whole supply chain. The construction of appropriate selection criteria is an essential, but often neglected, pre-requisite in the partner selection process. This paper proposes a three-stage model that combines Dempster-Shafer belief acceptability theory and particle swarm optimization for the first time in this application. This enables effectiveness (through consideration of the inter-dependence of a broad range of quantitative and qualitative selection criteria) and efficiency (in the use of scarce resources during the criteria construction process) to be optimized simultaneously. It also enables both operational and strategic attributes to be selected at different levels of the criteria hierarchy in different decision-making environments. The practical efficacy of the model is demonstrated by an application in Company ABC, a large Chinese electronic equipment and instrument manufacturer.
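The abstract does not reproduce the paper's optimization details, but the core particle swarm optimization step it relies on can be sketched generically. Below is a minimal, self-contained PSO for maximizing a scoring function over a bounded criteria-weight vector; the two-criteria `score` function at the end is purely hypothetical, standing in for whatever partner-evaluation objective the three-stage model would supply.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    """Minimal particle swarm optimizer (maximization).

    Each particle tracks its position, velocity, and personal best;
    the swarm tracks a global best that attracts all particles.
    """
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (personal best) + social pull (global best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp the updated position to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical objective: weighted blend of two normalized criteria scores,
# maximized at weights (1.0, 0.5). Seeded only for reproducibility.
random.seed(0)
score = lambda x: 0.6 * x[0] + 0.4 * (1 - (x[1] - 0.5) ** 2)
best, best_val = pso(score, dim=2, iters=200)
```

In the paper's setting, the belief-acceptability stage would construct and weight the criteria that define `objective`; the sketch above only illustrates the swarm-search machinery.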
Preference fusion and Condorcet's Paradox under uncertainty
Facing an unknown situation, a person may not be able to firmly elicit
his/her preferences over different alternatives, so he/she tends to express
uncertain preferences. Given a community of different persons expressing
their preferences over certain alternatives under uncertainty, a preference
fusion process is required to obtain a collective representative opinion of
the whole community. The aim of this work is to propose a preference fusion
method that copes with uncertainty and escapes the Condorcet paradox. To
model preferences under uncertainty, we develop a model of preferences based
on belief function theory that accurately describes and captures the
uncertainty associated with individual or collective preferences. This work
improves and extends the contribution presented in a previous work. The
benefits of our contribution are twofold. On the one hand, we propose a
qualitative and expressive preference modeling strategy based on
belief-function theory which scales better with the number of sources. On
the other hand, we propose an incremental distance-based algorithm (using
the Jousselme distance) for the construction of the collective preference
order that avoids the Condorcet paradox.
Comment: International Conference on Information Fusion, Jul 2017, Xi'an,
China
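The incremental fusion algorithm itself is not given in the abstract, but the Jousselme distance it builds on is standard: for two mass functions m1 and m2 over the same frame, d(m1, m2) = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)), where D holds the Jaccard similarity between focal elements. A minimal sketch, with a purely illustrative two-alternative frame {'a', 'b'}:

```python
from math import sqrt

def jousselme_distance(m1, m2):
    """Jousselme distance between two mass functions.

    m1, m2: dicts mapping frozensets (focal elements) to mass values.
    Uses the Jaccard matrix D[A][B] = |A & B| / |A | B| as the metric.
    """
    focals = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    diff = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in focals]
    total = 0.0
    for i, A in enumerate(focals):
        for j, B in enumerate(focals):
            union = len(A | B)
            jac = len(A & B) / union if union else 1.0
            total += diff[i] * jac * diff[j]
    # Guard against tiny negative values from floating-point rounding
    return sqrt(0.5 * max(total, 0.0))

# Hypothetical example: two uncertain opinions over alternatives {a, b};
# mass on {a, b} encodes ignorance between the two alternatives.
m1 = {frozenset({'a'}): 0.7, frozenset({'a', 'b'}): 0.3}
m2 = {frozenset({'b'}): 0.6, frozenset({'a', 'b'}): 0.4}
d = jousselme_distance(m1, m2)
```

The distance is 0 for identical mass functions and grows toward 1 as the opinions place conflicting mass on disjoint focal elements, which is what makes it usable as a dissimilarity measure when ordering sources incrementally.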
Blending Learning and Inference in Structured Prediction
In this paper we derive an efficient algorithm to learn the parameters of
structured predictors in general graphical models. This algorithm blends the
learning and inference tasks, which results in a significant speedup over
traditional approaches, such as conditional random fields and structured
support vector machines. For this purpose we utilize the structures of the
predictors to describe a low dimensional structured prediction task which
encourages local consistencies within the different structures while learning
the parameters of the model. Convexity of the learning task provides the means
to enforce the consistencies between the different parts. The
inference-learning blending algorithm that we propose is guaranteed to converge
to the optimum of the low dimensional primal and dual programs. Unlike many
existing approaches, the inference-learning blending allows us to efficiently
learn high-order graphical models, over regions of any size, and with very
large numbers of parameters. We demonstrate the effectiveness of our
approach, presenting state-of-the-art results in stereo estimation, semantic
segmentation, shape reconstruction, and indoor scene understanding.
Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding
In "Psychopower and Ordinary Madness" my ambition, as it relates to Bernard Stiegler's recent literature, was twofold: 1) critiquing Stiegler's work on exosomatization and artefactual posthumanism (or, more specifically, nonhumanism) to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly's conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom's conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani's deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).