87 research outputs found
Examining our worst fears
Fifty-five graduate students in the 2008 class of the Smith College School for Social Work, on average, seek symbolic immortality through existential themes, excluding religion. Additionally, the more students identify with themes that define meaning in their lives, the greater their fear of encountering threats to that meaning. Further, students reported that they daydream about existential fears that threaten symbolic immortality more frequently than they dream about them or have memories of them, and that they dream about primal fears more frequently than they daydream about them or have memories of them. This study tested the hypotheses that people fear what they imagine might happen to them more than what has actually happened to them (Kunzendorf et al., 2003-2004; 2006-2007), and that the imagined happening that each individual fears most is not death per se, but something that represents a threat to the meaning of life as defined by each individual (Kunzendorf et al., 2006-2007). Students were surveyed to examine whether they preserve their immortality through their work as social workers. Students were invited via e-mail to participate anonymously in this quantitative study, which explored an individual's fears through three self-rating scales. This research may increase awareness that although social work graduate students dream about primal fears, they daydream about their meaningful lives, and that they hope to live on through their positive work with their clients. Additionally, the findings suggest that it is important for social workers to attend to their clients' worst fears by listening for existential themes that threaten meaning in life.
The effect of freeze/thaw temperature fluctuations on microbial metabolism of petroleum hydrocarbon contaminated Antarctic soil
Petroleum-contaminated soil exists at McMurdo Station in Antarctica. These soils were contaminated by historic releases of JP-8 jet fuel. Over time, there does not appear to have been a significant reduction in the petroleum concentrations in these soils. This lack of reduction has been attributed to the extremely cold Antarctic environment and the lack of available moisture.
Cold temperatures and/or lack of moisture may not be the factors inhibiting biodegradation. Soil temperatures can exceed 20 degrees Celsius (°C) during the austral summer, when melt water becomes available. However, soil temperatures have also been reported to fluctuate rapidly, swinging over a 25°C range several times within a period of hours. Such rapid temperature changes may be the most detrimental factor for microbial activity.
This research evaluated the biodegradation of petroleum-contaminated Antarctic soil at a low, stable temperature (7°C), as well as under fluctuating temperatures. This was accomplished through experiments using contaminated soil from McMurdo Station, Antarctica.
The first experiment indicated that a statistically significant loss of petroleum hydrocarbons occurred at the stable temperature. Approximately 440 mg/kg of the starting average petroleum hydrocarbon concentration (38%) was lost by the 56th day of the experiment, compared with approximately 163 mg/kg lost from the volatilization control reactors and 97 mg/kg from the fluctuating-temperature reactors.
The second experiment showed a statistically significant 41% reduction of petroleum hydrocarbon concentrations at a stable temperature, from a starting average concentration of approximately 13,000 mg/kg to an ending average of approximately 7,680 mg/kg. Less than 650 mg/kg was lost due to volatilization, and approximately 333 mg/kg was lost from the fluctuating-temperature reactors.
The bulk of the petroleum hydrocarbon loss was due to biotic processes, indicated by increased carbon dioxide in the reactor effluent gas at the stable temperature. The soils also showed significant growth of petroleum hydrocarbon-degrading microorganisms. No carbon dioxide above background concentrations was measured from the sterile volatilization controls or the fluctuating-temperature reactors. Therefore, it appears that temperature fluctuations have an inhibitory effect on the biodegradation of petroleum hydrocarbons in Antarctic soils.
Snowfall phases in analysis of a snow cover in Hornsund, Spitsbergen
Conditions influencing the formation of the snow cover in southern Spitsbergen, in Hornsund, during the winters of 1988/1989 and 1989/1990 are presented. The winter snow cover consists of several superimposed layers which correspond to particular snowfall phases, distinguished on the basis of analysis of the occurrence of winter precipitation and the development of the snow cover in numerous snow pits. Five snowfall phases were determined during the winter of 1988/1989 and three during the winter of 1989/1990. A genetic classification, including specific features of the snow cover in Spitsbergen, was applied to describe the snow layers in the pits. The classification covers metamorphic changes of snow: dry metamorphism, wet metamorphism without freezing, wet metamorphism with freezing, and aeolian metamorphism. Precipitation, strong winds, and winter thaws are the factors which most influence the formation of the snow cover in the Hornsund region. Most winter precipitation is connected with the inflow of relatively warm air masses from the Norwegian Sea. Short-term winter thaws which occur afterwards result in the formation of a characteristic ice crust on the snow cover. The ice-crust layer protects the snow cover against deflation. It is also a marker band which enables dating of the snow. Ice-crust layers almost always form the borders between particular snowfall phases. Strong winds (V > 8 m/s) significantly transform the surface layer of snow. Snow deflation, which is locally quite intensive, occurs mainly on seashore plains, mountain ridges, and convex slopes.
The thermal condition of the active layer in the permafrost at Hornsund, Spitsbergen
Ground temperature variations have been analysed to a depth of 160 cm with respect to meteorological elements and the short-wave radiation balance. The ground temperature database covers a thirteen-month period (May 1992 – June 1993), which included both the season of complete freezing of the ground and the thaw. Special attention has been given to the development of perennial permafrost and its spatial distribution. In summer, the depth of the thawing ground varied between different types of ground; at the Polish Polar Station it was ca. 130 cm. The ground froze completely in the first week of October, and its thawing started in June. The snow cover restrained heat penetration into the ground, which hindered the thawing process. Cross-correlation shows a significant influence of the radiation balance (K*) on near-surface ground temperatures (r² = 0.62 for summer).
An estimate of the natural rate of unemployment for Poland
This paper presents alternative estimates of the natural rate of unemployment (NAWRU, NAIRU) for Poland for the years 1990–2008. The estimation sequentially applies procedures based on the classical and the modified Phillips curve and on structural price-wage models, as well as an approach that uses the reduced form of the Phillips curve.
A comparison of the results leads to the conclusion that the natural rates of unemployment estimated by the different methods are generally close to each other and do not differ significantly from the observed values. The conducted analysis indicates that the relation between the natural rate of unemployment and the registered unemployment rate may signal a change in inflationary pressures, which in turn can be used by the Monetary Policy Council.
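The reduced-form Phillips curve approach mentioned above can be sketched in a few lines. The series below are synthetic and the coefficients illustrative; the paper's actual specifications (NAWRU, NAIRU, structural price-wage models) are considerably richer:

```python
import numpy as np

# Illustrative reduced-form Phillips curve: d_inflation = alpha + beta * unemployment.
# The natural rate is the unemployment level at which inflation is stable
# (d_inflation = 0), i.e. u* = -alpha / beta.  All data here are synthetic,
# not the Polish series used in the paper.
rng = np.random.default_rng(0)
true_nairu, beta = 10.0, -0.5
u = rng.uniform(5.0, 20.0, 80)                             # unemployment rate, percent
d_pi = beta * (u - true_nairu) + rng.normal(0.0, 0.2, 80)  # change in inflation

# OLS fit of d_pi on a constant and u, then back out the natural rate
X = np.column_stack([np.ones_like(u), u])
alpha_hat, beta_hat = np.linalg.lstsq(X, d_pi, rcond=None)[0]
nairu_hat = -alpha_hat / beta_hat
```

With these synthetic data the point estimate lands close to the true value of 10 percent; with real series, confidence bands around such estimates are typically wide, which is one reason the paper compares several methods.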
Optimal Experimental Designs for Nonlinear Conjoint Analysis
Estimators of choice-based multi-attribute preference models have a covariance matrix that depends on both the design matrix and the unknown parameters to be estimated from the data. As a consequence, researchers cannot optimally design the experiment (minimizing the variance) in advance. Several approaches have been considered in the literature, but they require prior assumptions about the values of the parameters that are often not available. Furthermore, the resulting design is neither optimal nor robust when the assumed values are far from the true parameters. In this paper, we develop efficient worst-case designs for choice-based conjoint analysis that account for customer heterogeneity. The contributions of this method are manifold. First, we account for the uncertainty associated with all of the unknown parameters of the mixed logit model (both the mean and the elements of the covariance matrix of the heterogeneity distribution). Second, we allow the unknown parameters to be correlated. Third, the method is computationally efficient, which in practical applications is an advantage over, e.g., fully Bayesian designs. We conduct multiple simulations to evaluate the performance of this method. The worst-case designs computed for the logit and mixed logit models are indeed more robust than the local and Bayesian benchmarks when the prior guess about the parameters is far from their true values.
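As a rough illustration of why such designs are parameter-dependent, the sketch below computes the Fisher information of a multinomial logit choice design and scores candidate designs by their worst-case D-criterion over a small parameter grid. The designs, the grid, and the paired-alternative structure are illustrative assumptions, and the sketch omits the mixed-logit heterogeneity that the paper treats:

```python
import numpy as np
from itertools import product

# The Fisher information of a multinomial-logit choice design depends on the unknown
# part-worths beta, so a design's efficiency can only be evaluated at assumed parameter
# values.  A worst-case (maximin) design maximizes the D-criterion under the least
# favourable beta in an uncertainty set.  All numbers here are illustrative.

def mnl_information(design, beta):
    """Fisher information of a choice design given as a list of (J x K) choice sets."""
    K = design[0].shape[1]
    info = np.zeros((K, K))
    for X in design:                       # one choice set of J alternatives
        p = np.exp(X @ beta)
        p /= p.sum()                       # logit choice probabilities
        xbar = p @ X                       # probability-weighted mean attribute row
        Xc = X - xbar
        info += Xc.T @ (p[:, None] * Xc)   # sum_j p_j (x_j - xbar)(x_j - xbar)'
    return info

def worst_case_logdet(design, beta_grid):
    """D-criterion (log-determinant) at the least favourable parameter in the grid."""
    return min(np.linalg.slogdet(mnl_information(design, b))[1] for b in beta_grid)

# Two candidate designs with 2-attribute alternatives, presented in pairs (J = 2)
design_a = [np.array([[1., 0.], [0., 1.]]), np.array([[1., 1.], [0., 0.]])]
design_b = [np.array([[1., 0.], [1., 1.]]), np.array([[0., 1.], [1., 1.]])]

# Uncertainty set for beta: a small grid of plausible part-worth vectors
beta_grid = [np.array(b) for b in product([-1.0, 0.0, 1.0], repeat=2)]
best = max([design_a, design_b], key=lambda d: worst_case_logdet(d, beta_grid))
```

Maximizing the minimum log-determinant over the uncertainty set is the maximin analogue of local D-optimality; the paper's method additionally handles the mean and covariance parameters of the heterogeneity distribution in the mixed logit.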
Distribution of snow accumulation on some glaciers of Spitsbergen
We describe the spatial variability of snow accumulation on three selected glaciers in Spitsbergen (Hansbreen, Werenskioldbreen and Aavatsmarkbreen) in the winter seasons of 1988/89, 1998/99 and 2001/2002, respectively. The distribution of the snow cover is determined by the interrelationship between the direction of the glacier axes and the dominant easterly winds. The snow distribution is regular on glaciers oriented E-W, but is more complicated on glaciers oriented meridionally. The western parts of the glaciers are more predisposed to snow accumulation than the eastern parts, owing to the intensity of snowdrift. Statistical relationships have been determined between snow accumulation, the deviation of accumulation from the mean values, and accumulation variability, in relation to topographic parameters such as altitude, slope inclination, aspect, slope curvature, and distance from the edge of the glacier. The only significant relations occurred between snow accumulation and altitude (r = 0.64–0.91).
Three essays on conjoint analysis: optimal design and estimation of endogenous consideration sets
Over many years, conjoint analysis has become the favourite tool among marketing practitioners and scholars for learning consumer preferences towards new products or services. Its wide acceptance is substantiated by the high validity of conjoint results in numerous successful implementations across a variety of industries and applications. Additionally, this experimental method elicits respondents' preference information in a natural and effective way.
One of the main challenges in conjoint analysis is to efficiently estimate consumer preferences towards increasingly complex products from a relatively small sample of observations, because respondent wear-out contaminates the data quality. Therefore, the choice of the sample products to be evaluated by the respondent (the design) is as relevant as efficient estimation.
This thesis contributes to both research areas, focusing on the optimal design of experiments (essays one and two) and the estimation of random consideration sets (essay three).
Each of the essays addresses relevant research gaps and can be of interest to both marketing managers and academics. The main contributions of this thesis can be summarized as follows:
• The first essay proposes a general, flexible approach to building optimal designs for linear conjoint models. We compute not merely good designs, but the best ones according to the size (trace or determinant) of the information matrix of the associated estimators. Additionally, we propose a solution to the problem of repeated stimuli in optimal designs obtained by numerical methods. In most comparative examples our approach is faster than existing software for conjoint analysis, while achieving the same design efficiency. This is an important quality for applications in an online context. The approach is also more flexible than traditional design methodology: it handles continuous, discrete, and mixed attribute types. We demonstrate the suitability of this approach for conjoint analysis with rank data and ratings (both for an individual respondent and for a panel). Under certain assumptions, this approach can also be applied in the context of discrete choice experiments.
• In the second essay we propose a novel method to construct robust, efficient designs for conjoint experiments in which design optimization is more problematic because the covariance matrix depends on the unknown parameter. This occurs in many nonlinear models commonly considered in the conjoint analysis literature, including the preferred choice-based conjoint analysis. In such cases the researcher is forced to make strong assumptions about the unknown parameters and to implement an experimental design without knowing its true efficiency. We propose a solution to this puzzle which is robust even without a good prior guess about consumer preferences. We demonstrate that benchmark designs perform well only if the assumed parameter is close to the true values, which is rarely the case (otherwise there would be no need to implement the experiment). Our worst-case designs, on the other hand, perform well under a variety of scenarios and are more robust to misspecification of the parameters.
• Essay 3 contributes a method to estimate consideration sets which are endogenous to respondent preferences. Consideration sets arise when consumers use decision rules to simplify difficult choices, for example when evaluating a wide assortment of complex products. This happens because boundedly rational respondents often skip potentially interesting options, for example due to lack of information (brand unawareness), perceptual limitations (low attention or low salience), or the halo effect. Research in consumer behaviour has established that consumers choose in two stages: first they screen off products whose attributes do not satisfy certain criteria, and then they select the best alternative according to their preference order (over the considered options). Traditional conjoint analysis focuses on the second step, but methods incorporating both steps have recently been developed. However, the two steps are always considered to be independent, while the halo effect clearly leads to endogeneity. If the cognitive process is influenced by the overall affective impression of the product, we cannot assume that the screening-off step is independent of the evaluative step. To test this behavior we conduct an online experiment on lunch menu entrees using an Amazon MTurk
sample.

Over the years, conjoint analysis has become one of the most widespread tools among marketing practitioners and academics. It is an experimental method for studying the utility function that represents consumer preferences over products or services defined by multiple attributes. Its enormous popularity rests on the validity and usefulness of the results obtained in a multitude of applied studies across all kinds of industries. It is regularly used for problems such as new product design, segmentation analysis, market share prediction, and pricing.
In conjoint analysis, the utility that one or several consumers associate with various products is measured, and a parametric model of the utility function is estimated from these data using regression methods in their various variants. One of the main challenges of conjoint analysis is to efficiently estimate the parameters of the consumer's utility function for increasingly complex products, and to do so from a relatively small sample of observations, because in prolonged experiments respondent fatigue contaminates the quality of the data. The efficiency of the estimators is essential for this, and that efficiency depends on the products evaluated. Therefore, the choice of the sample products to be evaluated by the respondent (the design) is key to the success of the study. The first part of this thesis contributes to the optimal design of experiments (essays one and two, which focus respectively on models that are linear in the parameters and on nonlinear models). But the utility function may present discontinuities. The consumer often simplifies the decision by applying heuristic rules, which de facto introduce a discontinuity. These rules are called consideration sets: products that satisfy the rule are evaluated with the usual utility function, while the rest are discarded or evaluated with a different (especially low) utility that tends to discard them. The literature has studied the estimation of this type of model under the assumption that the consideration decision is exogenously given. However, the heuristic rules may be endogenous: there are perceptual biases that link utility to the way attributes are perceived. The third essay of this thesis considers models with endogenous consideration sets.
Each of the essays covers relevant research problems and may be of interest both to marketing managers and to academics. The main contributions of this thesis can be summarized as follows:
• The first essay presents a general and flexible methodology for generating exact optimal experimental designs for linear models, applicable to a multitude of variants within conjoint analysis. Algorithms are presented to compute the optimal designs by Newton methods, minimizing the size (trace or determinant) of the covariance matrix of the associated estimators. In most comparative examples our approach is faster than existing software for conjoint analysis, while achieving the same design efficiency. Our approach is also more flexible than traditional design methodology: it handles continuous, discrete, and mixed attribute types. We demonstrate the validity of this approach for conjoint analysis with preference rank data and ratings (both for an individual respondent and for a panel). Under certain assumptions, this approach can also be applied in the context of discrete choice experiments.
• In the second essay we focus on preference models whose estimators have non-pivotal covariance matrices (dependent on the parameter to be estimated). This occurs, for example, in preference models that are nonlinear in the parameters, as well as in choice models such as the popular multinomial logit. In that case, minimization of the covariance matrix is not possible. The literature has considered solutions such as assuming a hypothetical value of the parameter so that the trace or determinant of the covariance matrix can be minimized over the design. But these benchmark designs work well only if the assumed parameter is close to the true values (which rarely happens in practice, otherwise there would be no need to implement the experiment). In this essay we propose a method for constructing robust designs based on minimax algorithms, and we compare them with those normally applied across a wide variety of scenarios. Our designs are more robust to errors in the parameters, reducing the risk of highly inefficient estimators (a risk which is instead present in the other methods).
• Essay 3 contributes a method for estimating consideration sets that are endogenous to the preferences of the respondents. Consideration sets arise when consumers use decision rules to simplify choices that would otherwise require significant information search and cognitive effort (for example, evaluating a wide assortment of complex products). This happens because boundedly rational consumers often overlook potentially interesting options, for example due to lack of information (brand unawareness), perceptual limitations (low attention or salience), or the halo effect. Research in consumer behaviour establishes that consumers choose in two stages: first they screen out products that do not satisfy certain criteria, and then they select the best alternatives according to their preference order (over the considered options). Conventional conjoint analysis focuses on the second step, but methods incorporating both steps have recently been developed. However, the two steps are always considered independent, while the halo effect clearly leads to endogeneity of the consideration process. If the cognitive process is influenced by an overall affective impression of the product, we cannot assume that the screening is independent of the evaluative process. To test this behavior we carry out an online experiment on lunch menu entrees using a sample from Amazon MTurk.
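The two-stage process described in essay 3 (screening, then utility-maximizing choice over the considered options) can be sketched in a minimal form. The attributes, the price threshold, and the part-worths below are illustrative assumptions, not the thesis's data or model:

```python
import numpy as np

# Sketch of a two-stage choice process: a conjunctive screening rule forms the
# consideration set, then a utility-maximizing choice is made among the considered
# options.  In the endogenous case studied in the thesis, the screening rule itself
# would depend on the preference parameters; here the two stages are independent.

products = np.array([      # rows: products; columns: [price, quality]
    [4.0, 2.0],
    [9.0, 5.0],
    [6.0, 4.0],
])
beta = np.array([-1.0, 2.0])   # part-worths: dislikes price, likes quality

# Stage 1: conjunctive screening -- only products under a price cap are considered
price_cap = 8.0
considered = np.where(products[:, 0] <= price_cap)[0]

# Stage 2: choose the considered product with the highest deterministic utility
utilities = products[considered] @ beta
choice = considered[np.argmax(utilities)]
```

Here the second product is screened out by its price despite its high quality, and the choice is made between the remaining two options; a halo effect would make the stage-1 rule correlated with the stage-2 utilities, which is precisely the endogeneity the essay models.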