Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected Works), Vol. 4
The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics. The contributions (see the List of Articles published in this book, at the end of the volume) have been published or presented in international conferences, seminars, workshops and journals after the dissemination of the third volume (2009, http://fs.unm.edu/DSmT-book3.pdf).
The first part of this book presents the theoretical advancement of DSmT, dealing with belief functions, conditioning and deconditioning, the Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importances and reliabilities, importance of sources, the pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, 2-tuple linguistic labels, the Electre Tri method, hierarchical proportional redistribution, basic belief assignments, subjective probability measures, Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer Theory, the Bayes fusion rule, frequentist probability, mean square error, controlling factors, optimal assignment solutions, data association, the Transferable Belief Model, and others.
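Among the notions listed above, the pignistic probability transformation admits a compact illustration: each focal mass is split evenly over the elements of its focal set, BetP(a) = sum over focal sets X containing a of m(X)/|X|. The sketch below assumes a normalized basic belief assignment on a two-element frame with no mass on the empty set; the mass values are arbitrary.

```python
# Minimal sketch of the pignistic probability transformation BetP:
# each mass m(X) is divided equally among the elements of its focal set X.
# Assumes a normalized mass assignment with no mass on the empty set.

def betp(masses):
    """masses: dict mapping frozenset of singletons -> mass (values sum to 1)."""
    out = {}
    for focal, m in masses.items():
        for elem in focal:
            out[elem] = out.get(elem, 0.0) + m / len(focal)
    return out

m = {frozenset({'A'}): 0.4, frozenset({'B'}): 0.1, frozenset({'A', 'B'}): 0.5}
bp = betp(m)
print(bp)   # BetP(A) = 0.4 + 0.5/2 = 0.65, BetP(B) = 0.1 + 0.5/2 = 0.35
```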
More applications of DSmT have emerged in the years since the appearance of the third DSmT book in 2009. Subsequently, the second part of this volume is about applications of DSmT in connection with Electronic Support Measures, belief functions, sensor networks, Ground Moving Target and Multiple Target Tracking, Vehicle-Borne Improvised Explosive Devices, the Belief Interacting Multiple Model filter, seismic and acoustic sensors, Support Vector Machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, Dynamic Data-Driven Application Systems, adjustment of secure communication trust analysis, and so on.
Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, chronologically ordered.
Multispace & Multistructure. Neutrosophic Transdisciplinarity (100 Collected Papers of Sciences), Vol. IV
The fourth volume in my book series of “Collected Papers” includes 100 published and unpublished articles, notes, (preliminary) drafts containing just ideas to be further investigated, scientific souvenirs, scientific blogs, project proposals, small experiments, solved and unsolved problems and conjectures, updated or alternative versions of previous papers, short or long humanistic essays, and letters to the editors, all collected over the previous three decades (1980-2010). Most of them are from the last decade (2000-2010); some were lost and found, while others are extended, diversified, or improved versions. This is an eclectic tome of 800 pages with papers in various fields of science, alphabetically listed, such as: astronomy, biology, calculus, chemistry, computer programming codification, economics and business and politics, education and administration, game theory, geometry, graph theory, information fusion, neutrosophic logic and set, non-Euclidean geometry, number theory, paradoxes, philosophy of science, psychology, quantum physics, scientific research methods, and statistics. It reflects my long preoccupation and collaboration, as author, co-author, translator, or co-translator, and editor, with many scientists from around the world. Many topics in this book are incipient and need to be expanded in future explorations.
Conditionals and modularity in general logics
In this work in progress, we discuss independence, interpolation, and related topics for classical, modal, and non-monotonic logics.
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open-access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
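The Proportional Conflict Redistribution rules mentioned above can be illustrated on the smallest non-trivial case. The sketch below implements PCR5 for two sources on a two-element frame {A, B}: the conjunctive consensus is computed first, then each partial conflicting mass is redistributed to the two conflicting elements in proportion to the masses that produced it. The mass values are arbitrary, and this toy is independent of the Matlab codes and libraries cited in the volume.

```python
# Illustrative sketch of the PCR5 combination rule for two sources of
# evidence on the frame {A, B}, with mass on 'A', 'B', and the union 'AB'.

def pcr5(m1, m2):
    """Combine two basic belief assignments (dicts over 'A', 'B', 'AB') with PCR5."""
    # Conjunctive consensus on this small frame:
    # A&A=A, A&AB=A, B&B=B, B&AB=B, AB&AB=AB
    m = {
        'A': m1['A']*m2['A'] + m1['A']*m2['AB'] + m1['AB']*m2['A'],
        'B': m1['B']*m2['B'] + m1['B']*m2['AB'] + m1['AB']*m2['B'],
        'AB': m1['AB']*m2['AB'],
    }
    # PCR5: each partial conflict m1(x)*m2(y) (x and y disjoint) goes back
    # to x and y proportionally to the masses m1(x) and m2(y) involved in it.
    for x, y in (('A', 'B'), ('B', 'A')):
        if m1[x] + m2[y] > 0:
            k = m1[x] * m2[y]            # partial conflicting mass
            m[x] += k * m1[x] / (m1[x] + m2[y])
            m[y] += k * m2[y] / (m1[x] + m2[y])
    return m

m1 = {'A': 0.6, 'B': 0.3, 'AB': 0.1}
m2 = {'A': 0.2, 'B': 0.7, 'AB': 0.1}
fused = pcr5(m1, m2)
print(fused)
print(round(sum(fused.values()), 10))    # redistributed masses still sum to 1
```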
Because more applications of DSmT have emerged in the years since the appearance of the fourth DSmT book in 2015, the second part of this volume is about selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
Three essays on conjoint analysis: optimal design and estimation of endogenous consideration sets
Over many years, conjoint analysis has become the favourite tool among marketing practitioners
and scholars for learning consumer preferences towards new products or services. Its wide
acceptance is substantiated by the high validity of conjoint results in numerous successful
implementations across a variety of industries and applications. Additionally, this experimental
method elicits respondents' preference information in a natural and effective way.
One of the main challenges in conjoint analysis is to efficiently estimate consumer preferences
towards increasingly complex products from a relatively small sample of observations, because
respondent wear-out contaminates the data quality. The choice of the sample products to be
evaluated by the respondent (the design) is therefore as relevant as the efficient estimation.
This thesis contributes to both research areas, focusing on the optimal design of experiments
(essay one and two) and the estimation of random consideration sets (essay three).
Each of the essays addresses relevant research gaps and can be of interest to marketing
managers and academics alike. The main contributions of this thesis can be summarized as
follows:
• The first essay proposes a general flexible approach to build optimal designs for linear
conjoint models. We compute not merely good designs, but the best ones according to the size
(trace or determinant) of the information matrix of the associated estimators. Additionally,
we propose a solution to the problem of repeated stimuli in optimal designs obtained
by numerical methods. In most comparative examples our approach is faster than
existing software for conjoint analysis, while achieving the same design efficiency.
This is an important quality for the applications in an online context. This approach is
also more flexible than traditional design methodology: it handles continuous, discrete and
mixed attribute types. We demonstrate the suitability of this approach for conjoint analysis
with rank data and ratings (a case of an individual respondent and a panel). Under certain
assumptions this approach can also be applied in the context of discrete choice experiments.
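The size-of-information-matrix criterion in the first essay can be illustrated by brute force: score every exact design of a fixed length by det(X'X), the D-criterion, and keep the best. The two-binary-attribute model and questionnaire length below are invented for the illustration; the essay's numerical methods avoid this kind of enumeration.

```python
# Toy D-optimal exact design search for the linear model y = b0 + b1*x1 + b2*x2
# with two binary attributes. Every design of n profiles is scored by det(X'X).
from itertools import combinations_with_replacement

def xtx(design):
    """Information matrix X'X for rows x = (1, x1, x2)."""
    rows = [(1.0, float(x1), float(x2)) for x1, x2 in design]
    p = len(rows[0])
    return [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]

def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in m]
    n, d = len(a), 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[piv][i]) < 1e-12:
            return 0.0                      # singular design
        if piv != i:
            a[i], a[piv] = a[piv], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

profiles = [(x1, x2) for x1 in (0, 1) for x2 in (0, 1)]   # candidate products
n = 4                                                      # questionnaire length
designs = list(combinations_with_replacement(profiles, n))
best = max(designs, key=lambda d: det(xtx(d)))
best_score = det(xtx(best))
print(best, best_score)   # the full factorial wins, with det(X'X) = 4
```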
• In the second essay, we propose a novel method to construct robust efficient designs for conjoint
experiments, where design optimization is more problematic, because the covariance matrix
depends on the unknown parameter. In fact this occurs in many nonlinear models
commonly considered in conjoint analysis literature, including the preferred choice-based
conjoint analysis. In such cases the researcher is forced to make strong assumptions about
unknown parameters and to implement an experimental design not knowing its true efficiency.
We propose a solution to this puzzle that is robust even when we lack a good
prior guess about consumer preferences. We demonstrate that benchmark designs perform
well only if the assumed parameter is close to the true values, which is rarely the case in
practice (otherwise there would be no need to run the experiment at all). Our worst-case
designs, on the other hand, perform well under a variety of scenarios and are more robust
to misspecification of the parameters.
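The contrast between benchmark and worst-case designs can be sketched on a one-parameter logistic model: the benchmark design is locally optimal for a single assumed parameter value, while the maximin design maximizes its minimum efficiency over a grid of plausible values. The model, stimulus levels, and parameter grid below are invented for the illustration.

```python
# Toy maximin (worst-case) design comparison for the logistic model
# p(x) = 1/(1 + exp(-b*x)).  A design is a pair of stimuli; its Fisher
# information is sum of p(1-p)*x^2 over the stimuli.
import math
from itertools import combinations

def info(design, b):
    """Fisher information of a design when the true parameter is b."""
    total = 0.0
    for x in design:
        p = 1.0 / (1.0 + math.exp(-b * x))
        total += p * (1.0 - p) * x * x
    return total

grid_x = [0.5 * k for k in range(1, 9)]    # candidate stimulus levels 0.5 .. 4.0
grid_b = [0.25, 0.5, 1.0, 2.0, 4.0]        # plausible parameter values
designs = list(combinations(grid_x, 2))    # a design here is a pair of stimuli

def local_best(b):
    """Locally optimal design when the parameter is assumed to be exactly b."""
    return max(designs, key=lambda d: info(d, b))

def min_efficiency(d):
    """Worst-case efficiency of d relative to the locally optimal design."""
    return min(info(d, b) / info(local_best(b), b) for b in grid_b)

benchmark = local_best(4.0)                       # assumes a sharp response
worst_case = max(designs, key=min_efficiency)     # maximin design
print(benchmark, round(min_efficiency(benchmark), 3))
print(worst_case, round(min_efficiency(worst_case), 3))
```

The benchmark design is excellent if b really is 4.0 but collapses for small b; the maximin design gives up a little efficiency everywhere to avoid that collapse.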
• Essay 3 contributes with a method to estimate consideration sets which are endogenous
to respondent preferences. Consideration sets arise when consumers use decision rules
to simplify difficult choices, for example when evaluating a wide assortment of complex
products. This happens because boundedly rational respondents often skip potentially interesting
options, for example due to lack of information (brand unawareness), perceptual
limitations (low attention or low salience), or the halo effect. Research in consumer behaviour
established that consumers choose in two stages: first they screen off products whose attributes
do not satisfy certain criteria, and then select the best alternative according to
their preference order (over the considered options). Traditional conjoint analysis focuses on the second
step, but methods incorporating both steps have been developed more recently. However, the two steps
are always treated as independent, while the halo effect clearly leads to endogeneity.
If the cognitive process is influenced by the overall affective impression of the product, we
cannot assume that the screening-off is independent from the evaluative step. To test this
behavior, we conduct an online experiment on lunch menu entrées using an Amazon MTurk sample.
Over the years, conjoint analysis has become one of the most widespread tools among marketing practitioners and academics. It is an experimental method for studying the utility function that represents consumer preferences over products or services defined by multiple attributes. Its enormous popularity rests on the validity and usefulness of the results obtained in a multitude of applied studies across all kinds of industries. It is regularly used for problems such as new product design, segmentation analysis, market share prediction, and pricing.
In conjoint analysis, the utility that one or several consumers assign to various products is measured, and a parametric model of the utility function is estimated from those data using regression methods in their various forms. One of the main challenges of conjoint analysis is to efficiently estimate the parameters of the consumer's utility function for increasingly complex products, and to do so from a relatively small sample of observations, because in long experiments respondent fatigue contaminates data quality. Estimator efficiency is essential for this, and that efficiency depends on the products evaluated. The choice of the sample products to be evaluated by the respondent (the design) is therefore key to the success of the study. The first part of this thesis contributes to the optimal design of experiments (essays one and two, which focus, respectively, on models that are linear in the parameters and on nonlinear models). But the utility function may exhibit discontinuities. Consumers often simplify the decision by applying heuristic rules, which de facto introduce a discontinuity. These rules are known as consideration sets: products that satisfy the rule are evaluated with the usual utility function, while the rest are discarded or evaluated with a different (especially low) utility that tends to discard them. The literature has studied the estimation of this type of model under the assumption that the consideration decision is exogenously given. Heuristic rules, however, may be endogenous: perceptual biases link utility to the way attributes are perceived. The third essay of this thesis considers models with endogenous consideration sets.
Each of the essays covers relevant research problems and may be of interest to marketing managers and academics alike. The main contributions of this thesis can be summarized as follows:
• The first essay presents a general, flexible methodology for generating exact optimal experimental designs for linear models, applicable to many variants of conjoint analysis. Algorithms are presented for computing the optimal designs via Newton methods, minimizing the size (trace or determinant) of the covariance matrix of the associated estimators. In most comparative examples our approach is faster than existing software for conjoint analysis, while achieving the same design efficiency. Our approach is also more flexible than traditional design methodology: it handles continuous, discrete, and mixed attribute types. We demonstrate the validity of this approach for conjoint analysis with preference rankings and ratings (a case of an individual respondent and a panel). Under certain assumptions, this approach can also be applied in the context of discrete choice experiments.
• In the second essay we focus on preference models whose estimators have non-pivotal covariance matrices (that is, dependent on the parameter to be estimated). This occurs, for example, in preference models that are nonlinear in the parameters, as well as in choice models such as the popular multinomial logit. In such cases, direct minimization of the covariance matrix is not possible. The literature has considered solutions such as assuming a hypothetical value of the parameter in order to minimize the trace or determinant of the covariance matrix over the design. But these benchmark designs perform well only if the assumed parameter is close to the true values (which rarely happens in practice, otherwise there would be no need to run the experiment). In this essay we propose a method for constructing robust designs based on minimax algorithms, and we compare them with those usually applied, across a wide variety of scenarios. Our designs are more robust to parameter errors, reducing the risk of highly inefficient estimators (a risk that is present in the other methods).
• Essay 3 contributes a method for estimating consideration sets that are endogenous to respondent preferences. Consideration sets arise when consumers use decision rules to simplify difficult choices that would otherwise require significant information search and cognitive effort (for example, evaluating a wide assortment of complex products). This happens because boundedly rational consumers often overlook potentially interesting options, for example due to lack of information (brand unawareness), perceptual limitations (low attention or salience), or the halo effect. Consumer behaviour research establishes that consumers choose in two stages: first they screen out products that do not satisfy certain criteria, and then they select the best alternatives according to their preference order (over the considered options). Conventional conjoint analysis focuses on the second step, but methods incorporating both steps have recently been developed. However, the two steps are always treated as independent, while the halo effect clearly makes the consideration process endogenous. If the cognitive process is influenced by an overall affective impression of the product, we cannot assume that the screening stage is independent of the evaluative stage. To test this behaviour we conduct an online experiment on lunch menu entrées using a sample from Amazon MTurk.
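The two-stage choice process described in the third essay (screening followed by compensatory evaluation over the considered set) can be sketched as a toy; the attribute names, utility weights, and price cap below are hypothetical, whereas the essay estimates such rules, and their endogeneity, from data.

```python
# Toy two-stage choice: screen products with a conjunctive rule (a price cap),
# then pick the utility-maximizing alternative among those still considered.

def choose(products, price_cap, weights):
    # Stage 1: screening -- discard anything over the price cap
    considered = [p for p in products if p['price'] <= price_cap]
    if not considered:
        return None
    # Stage 2: compensatory evaluation over the considered set
    def utility(p):
        return weights['quality'] * p['quality'] - weights['price'] * p['price']
    return max(considered, key=utility)

menu = [
    {'name': 'steak', 'price': 18.0, 'quality': 9.0},
    {'name': 'pasta', 'price': 11.0, 'quality': 7.0},
    {'name': 'salad', 'price': 8.0,  'quality': 5.0},
]
pick = choose(menu, price_cap=12.0, weights={'quality': 1.0, 'price': 0.5})
print(pick['name'])   # steak is screened out despite its high utility -> 'pasta'
```

Endogeneity enters when the screening rule itself depends on the same affective impressions that drive the utility, which is exactly what this fixed cap cannot capture.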
A generic framework for context-dependent fusion with application to landmine detection.
For complex detection and classification problems involving data with large intra-class variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, the combination of multiple classifiers is playing an increasing role in solving these complex pattern recognition problems, and has proven to be a viable alternative to using a single classifier. Over the past few years, a variety of schemes have been proposed for combining multiple classifiers. Most of these were global, as they assign each classifier a degree of worthiness that is averaged over the entire training data. This may not be the optimal way to combine the different experts, since the behavior of each one may not be uniform over the different regions of the feature space. To overcome this issue, a few local methods have been proposed in the last few years. Local fusion methods aim to adapt the classifiers' worthiness to different regions of the feature space. First, they partition the input samples. Then, they identify the best classifier for each partition and designate it as the expert for that partition. Unfortunately, current local methods are either computationally expensive and/or perform these two tasks independently of each other. However, feature space partitioning and algorithm selection are not independent, and their optimization should be simultaneous. In this dissertation, we introduce a new local fusion approach, called Context Extraction for Local Fusion (CELF). CELF was designed to adapt the fusion to different regions of the feature space. It takes advantage of the strengths of the different experts and overcomes their limitations. First, we describe the baseline CELF algorithm. We formulate a novel objective function that combines context identification and multi-algorithm fusion criteria into a joint objective function.
The context identification component strives to partition the input feature space into different clusters (called contexts), while the fusion component strives to learn the optimal fusion parameters within each cluster. Second, we propose several variations of CELF to deal with different application scenarios. In particular, we propose an extension that includes a feature discrimination component (CELF-FD). This version is advantageous when dealing with high-dimensional feature spaces and/or when the number of features extracted by the individual algorithms varies significantly. CELF-CA is another extension of CELF that adds a regularization term to the objective function to introduce competition among the clusters and to find the optimal number of clusters in an unsupervised way. CELF-CA starts by partitioning the data into a large number of small clusters. As the algorithm progresses, adjacent clusters compete for data points, and clusters that lose the competition gradually become depleted and vanish. Third, we propose CELF-M, which generalizes CELF to support multi-class data sets. The baseline CELF and its extensions were formulated to use linear aggregation to combine the output of the different algorithms within each context. For some applications, this can be too restrictive and non-linear fusion may be needed. To address this potential drawback, we propose two other variations of CELF that use non-linear aggregation. The first one is based on Neural Networks (CELF-NN) and the second one is based on Fuzzy Integrals (CELF-FI). The latter has the desirable property of assigning weights to subsets of classifiers to take into account the interaction between them. To test a new signature using CELF (or its variants), each algorithm extracts its set of features and assigns a confidence value. Then, the features are used to identify the best context, and the fusion parameters of this context are used to fuse the individual confidence values.
For each variation of CELF, we formulate an objective function, derive the necessary conditions to optimize it, and construct an iterative algorithm. We then use examples to illustrate the behavior of the algorithm, compare it to global fusion, and highlight its advantages. We apply our proposed fusion methods to the problem of landmine detection, using data collected with Ground Penetrating Radar (GPR) and Wideband Electro-Magnetic Induction (WEMI) sensors. We show that CELF (and its variants) can identify meaningful and coherent contexts (e.g. mines of the same type, mines buried at the same site, etc.) and that different expert algorithms can be identified for the different contexts. In addition to the landmine detection application, we apply our approaches to semantic video indexing, image database categorization, and phoneme recognition. In all applications, we compare the performance of CELF with standard fusion methods and show that our approach outperforms all these methods.
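The local-fusion idea can be sketched in its decoupled form: partition samples by a context feature, then fit linear fusion weights per context by least squares. CELF itself optimizes both components jointly in one objective; this toy performs them sequentially, and all data values are made up. In the synthetic data, algorithm 1 is reliable in the low-feature context and algorithm 2 in the high-feature one, so the learned per-context weights should favour each accordingly.

```python
# Decoupled sketch of context-dependent (local) fusion: a tiny 1-D k-means
# finds the contexts, then least squares fits fusion weights per context.

def kmeans1d(xs, k=2, iters=20):
    """Lloyd's algorithm on a 1-D feature, seeded at the data extremes (k=2)."""
    cents = [min(xs), max(xs)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - cents[j]))].append(x)
        cents = [sum(g) / len(g) if g else cents[j] for j, g in enumerate(groups)]
    return cents

def fit_weights(samples):
    """Closed-form least squares for y ~ w0*c1 + w1*c2 (normal equations, 2x2)."""
    a11 = sum(c1 * c1 for c1, c2, y in samples)
    a12 = sum(c1 * c2 for c1, c2, y in samples)
    a22 = sum(c2 * c2 for c1, c2, y in samples)
    b1 = sum(c1 * y for c1, c2, y in samples)
    b2 = sum(c2 * y for c1, c2, y in samples)
    d = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / d, (a11 * b2 - a12 * b1) / d)

# (context feature, algo-1 confidence, algo-2 confidence, ground truth)
data = [
    (0.10, 0.9, 0.2, 1), (0.20, 0.8, 0.3, 1), (0.15, 0.1, 0.7, 0), (0.25, 0.2, 0.8, 0),
    (0.90, 0.3, 0.9, 1), (0.80, 0.2, 0.8, 1), (0.85, 0.7, 0.1, 0), (0.95, 0.8, 0.2, 0),
]
cents = kmeans1d([f for f, *_ in data])
contexts = {}
for f, c1, c2, y in data:
    j = min(range(len(cents)), key=lambda i: abs(f - cents[i]))
    contexts.setdefault(j, []).append((c1, c2, y))
weights = {j: fit_weights(s) for j, s in contexts.items()}
print(cents, weights)   # context 0 trusts algo 1, context 1 trusts algo 2
```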