An optimal feedback model to prevent manipulation behaviours in consensus under social network group decision making
A novel framework to prevent manipulation behaviour
in the consensus reaching process under social network group decision making is proposed, based on a theoretically sound optimal feedback model. The manipulation behaviour classification is twofold: (1) “individual manipulation”, where each expert manipulates his/her own behaviour to achieve a higher importance degree (weight); and (2) “group manipulation”, where a group of experts forces inconsistent experts to adopt specific recommendation advice obtained via the use of a fixed feedback parameter. To counteract “individual manipulation”, a behavioural weight assignment method modelling sequential attitudes ranging from “dictatorship” to “democracy” is developed, and a reasonable policy for group minimum adjustment cost is then established to assign appropriate weights to the experts. To prevent “group manipulation”, an optimal feedback model is investigated whose objective function is the individual adjustment cost and whose constraints relate to the threshold of group consensus. This approach allows the inconsistent experts to balance group consensus against adjustment cost, which enhances their willingness to adopt the recommendation advice and consequently helps the group reach consensus on the decision making problem at hand. A numerical example is presented to illustrate and verify the proposed optimal feedback model.
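The feedback step described above — adjusting the opinions of inconsistent experts just enough to reach the consensus threshold while tracking the adjustment cost — can be sketched as follows. This is a minimal illustration, not the paper's optimisation model: the opinion values, uniform weights, consensus measure, and greedy step size are all assumptions.

```python
def consensus_level(opinions, weights):
    # consensus = 1 - weighted mean deviation from the collective opinion
    collective = sum(w * o for w, o in zip(weights, opinions))
    return 1 - sum(w * abs(o - collective) for w, o in zip(weights, opinions))

def minimum_adjustment_feedback(opinions, weights, threshold, step=0.01):
    """Greedily move the expert farthest from the collective opinion,
    accumulating the total adjustment cost, until consensus is reached."""
    opinions = list(opinions)
    cost = 0.0
    while consensus_level(opinions, weights) < threshold:
        collective = sum(w * o for w, o in zip(weights, opinions))
        k = max(range(len(opinions)),
                key=lambda i: abs(opinions[i] - collective))
        move = step if opinions[k] < collective else -step
        opinions[k] += move
        cost += abs(move)
    return opinions, cost
```

A genuinely optimal model would solve this as a mathematical program; the greedy loop only illustrates the trade-off between consensus level and adjustment cost.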
Do the citizens of Europe trust their police?
Purpose – The maintenance of public order and the control of crime are clearly among the primary objectives of law enforcement agencies worldwide. An important antecedent to this is public trust in the police force. The purpose of this paper is to utilise data from the 5th round of the European Social Survey (ESS) to investigate how public social indicators may highlight the level of trust in a country's police force.
Design/methodology/approach – The results from the ESS are analysed using fuzzy-set Qualitative Comparative Analysis (fsQCA); multiple conjunctural causal configurations of the considered social indicators are then established and analysed.
Findings – As a consequence of using fsQCA, asymmetric causal configurations are identified for the relatively high and low limiting levels of trust towards the police in the considered countries. The results offer novel insights into the relationship between social indicators and police trust, as well as expositing a nascent technique (fsQCA) that may offer future potential in this area.
Originality/value – This paper introduces a novel technique to analyse a major European data set relating to citizens' perceptions of the police. The findings might prove useful for policing organisations as they develop strategies to maintain/improve the level of trust and confidence of citizens in the policing services they provide.
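The fsQCA machinery relied on here has two core steps: calibrating raw indicators into fuzzy-set memberships, and scoring how consistently a causal configuration is a subset of the outcome. A minimal sketch follows; the piecewise-linear calibration and the anchor values are illustrative assumptions (fsQCA software typically uses a logistic calibration).

```python
def calibrate(x, full_out, crossover, full_in):
    """Map a raw score to fuzzy-set membership by linear interpolation
    between the three qualitative anchors (full non-membership,
    crossover point, full membership)."""
    if x <= full_out:
        return 0.0
    if x >= full_in:
        return 1.0
    if x < crossover:
        return 0.5 * (x - full_out) / (crossover - full_out)
    return 0.5 + 0.5 * (x - crossover) / (full_in - crossover)

def consistency(condition, outcome):
    # fsQCA set-theoretic consistency: sum(min(X, Y)) / sum(X),
    # i.e. the degree to which the condition is a subset of the outcome
    return (sum(min(x, y) for x, y in zip(condition, outcome))
            / sum(condition))
```

When the condition memberships never exceed the outcome memberships, consistency is 1.0 — the configuration is a perfect subset of the outcome.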
Introducing fuzzy trust for managing belief conflict over semantic web data
Interpreting Semantic Web data by different human experts can end in scenarios where each expert comes up with different and conflicting ideas about what a concept means and how it relates to other concepts. Software agents that operate on the Semantic Web have to deal with similar scenarios, where the interpretations of the Semantic Web data that describe heterogeneous sources become contradictory. One such application area of the Semantic Web is ontology mapping, where different similarities have to be combined into a more reliable and coherent view, which can easily become unreliable if the conflicting beliefs in similarities are not managed effectively between the different agents. In this paper we propose a solution for managing this conflict by introducing trust between the mapping agents based on the fuzzy voting model.
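The idea can be sketched as follows: each mapping agent turns a similarity score into a vote over fuzzy labels, and the trust placed in a candidate mapping grows with the agreement among the votes. The triangular label definitions and the agreement-fraction trust measure below are illustrative assumptions, not the paper's exact model.

```python
# Triangular fuzzy labels (a, b, c) over the similarity scale [0, 1] (assumed)
LABELS = {"low": (0.0, 0.0, 0.5),
          "medium": (0.25, 0.5, 0.75),
          "high": (0.5, 1.0, 1.0)}

def tri_membership(x, a, b, c):
    # membership in a triangular fuzzy number with peak at b
    if x < a or x > c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b) if c > b else 1.0

def vote(similarity):
    # each agent votes for the label that best fits the similarity value
    return max(LABELS, key=lambda lab: tri_membership(similarity, *LABELS[lab]))

def mapping_trust(similarities):
    """Trust in a candidate mapping = fraction of agents that agree
    with the majority label across the similarity assessments."""
    votes = [vote(s) for s in similarities]
    majority = max(set(votes), key=votes.count)
    return votes.count(majority) / len(votes)
```

With three agents reporting similarities 0.9, 0.85 and 0.4, two vote "high" and one votes "medium", so trust in the mapping is 2/3.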
Intelligent XML Tag Classification Techniques for XML Encryption Improvement
Flexibility, friendliness, and adaptability have been key reasons for using XML to exchange information across different networks, as it provides the common syntax needed by various messaging systems. However, the heavy use of XML as a communication medium has shed light on the security standards used to protect exchanged messages and to achieve data confidentiality and privacy.
This research presents a novel approach to securing XML messages used in various systems efficiently, providing strong security measures and high performance. The system model is based on two major modules: the first classifies XML messages and defines which parts of a message are to be secured, assigning an importance level to each tag present in the XML message; the second uses the XML Encryption standard proposed earlier by the W3C [3] to perform partial encryption on the parts selected in the classification stage.
As a result, the study aims to improve both the performance of the XML encryption process and bulk message handling, achieving data cleansing efficiently.
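The two-module pipeline — tag classification followed by selective encryption of the important parts — can be sketched with the standard library. The tag importance table is an assumption, and base64 merely stands in for a real cipher (W3C XML Encryption would use, e.g., AES with a key-transport mechanism).

```python
import base64
import xml.etree.ElementTree as ET

# Module 1 output: tags classified as high importance (assumed example set)
SENSITIVE_TAGS = {"card", "password"}

def encrypt_value(text):
    # placeholder transform standing in for a real cipher such as AES
    return base64.b64encode(text.encode()).decode()

def partial_encrypt(xml_string):
    """Module 2: encrypt only the element contents whose tags were
    classified as sensitive, leaving the rest of the message in clear."""
    root = ET.fromstring(xml_string)
    for elem in root.iter():
        if elem.tag in SENSITIVE_TAGS and elem.text:
            elem.text = encrypt_value(elem.text)
            elem.set("encrypted", "true")
    return ET.tostring(root, encoding="unicode")
```

Encrypting only the selected elements is what yields the performance gain over whole-message encryption: the bulk of the document is serialised unchanged.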
What attracts vehicle consumers' buying: A Saaty scale-based VIKOR (SSC-VIKOR) approach from after-sales textual perspective?
Purpose:
The increasingly booming development of e-commerce has stimulated vehicle consumers to express individual reviews through online forums. The purpose of this paper is to probe into vehicle consumers' consumption behavior and to make recommendations for potential consumers from the viewpoint of textual comments.
Design/methodology/approach:
A big data analytic-based approach is designed to discover vehicle consumer consumption behavior from an online perspective. To reduce the subjectivity of expert-based approaches, a parallel Naïve Bayes approach is designed to perform sentiment analysis, and the Saaty scale-based (SSC) scoring rule is employed to obtain the specific sentiment value of each attribute class, contributing to multi-grade sentiment classification. To achieve intelligent recommendation for potential vehicle customers, a novel SSC-VIKOR approach is developed to prioritize vehicle brand candidates from a big data analytical viewpoint.
Findings:
The big data analytics show that the “cost-effectiveness” characteristic is the most important factor that vehicle consumers care about, and the data mining results enable automakers to better understand consumer consumption behavior.
Research limitations/implications:
The case study illustrates the effectiveness of the integrated method, contributing to much more precise operations management on marketing strategy, quality improvement and intelligent recommendation.
Originality/value:
Research on consumer consumption behavior is usually based on survey methods, and most previous studies of comment analysis focus on binary analysis. The hybrid SSC-VIKOR approach is developed to fill this gap from the big data perspective.
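The VIKOR step that prioritizes brand candidates can be sketched as follows. The decision matrix, benefit-only criteria, and weights below are illustrative assumptions (the paper derives its scores from Saaty-scale sentiment values); lower Q means a better compromise ranking.

```python
def vikor(matrix, weights, v=0.5):
    """Rank alternatives (rows) on benefit criteria (columns) by the
    VIKOR compromise index Q, built from the group utility S (weighted
    sum of normalised regrets) and the individual regret R (max regret)."""
    m, n = len(matrix), len(matrix[0])
    best = [max(row[j] for row in matrix) for j in range(n)]
    worst = [min(row[j] for row in matrix) for j in range(n)]
    S, R = [], []
    for row in matrix:
        regrets = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                   for j in range(n) if best[j] != worst[j]]
        S.append(sum(regrets))
        R.append(max(regrets))
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    return [v * (S[i] - s_min) / (s_max - s_min)
            + (1 - v) * (R[i] - r_min) / (r_max - r_min)
            for i in range(m)]
```

With three candidate brands scored on two criteria, the brand dominating on both criteria receives Q = 0 and ranks first.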
An empirical learning-based validation procedure for simulation workflow
A simulation workflow is a top-level model for the design and control of a simulation process. It connects multiple simulation components with time and interaction restrictions to form a complete simulation system. Before the construction and evaluation of the component models, the validation of the upper-layer simulation workflow is of the utmost importance in a simulation system. However, methods specifically for validating simulation workflows are very limited; many of the existing validation techniques are domain-dependent, with cumbersome questionnaire design and expert scoring. Therefore, this paper presents an empirical learning-based validation procedure to implement a semi-automated evaluation of simulation workflows. First, representative features of general simulation workflows and their relations with validation indices are proposed. The calculation process of workflow credibility based on the Analytic Hierarchy Process (AHP) is then introduced. In order to make full use of historical data and implement more efficient validation, four learning algorithms, including back propagation neural network (BPNN), extreme learning machine (ELM), evolving neo-fuzzy neuron (eNFN) and fast incremental Gaussian mixture model (FIGMN), are introduced for constructing the empirical relation between workflow credibility and its features. A case study on a landing-process simulation workflow is established to test the feasibility of the proposed procedure. The experimental results also provide a useful overview of the state-of-the-art learning algorithms on the credibility evaluation of simulation models.
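The AHP-based credibility calculation can be sketched in two steps: derive index weights from a pairwise-comparison matrix via its principal eigenvector (approximated here by power iteration), then score credibility as the weighted sum of the validation indices. The comparison matrix and index scores below are assumptions for illustration.

```python
def ahp_weights(M, iters=100):
    """Approximate the principal eigenvector of a pairwise-comparison
    matrix by power iteration; the normalised vector gives the AHP
    weights of the validation indices."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

def credibility(weights, index_scores):
    # workflow credibility as the AHP-weighted sum of validation indices
    return sum(w * s for w, s in zip(weights, index_scores))
```

For a consistent 2x2 matrix saying index 1 is three times as important as index 2, the weights converge to [0.75, 0.25], and credibility is simply their weighted combination of the index scores.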