
    A nod in the wrong direction : Does nonverbal feedback affect eyewitness confidence in interviews?

    Eyewitnesses can be influenced by an interviewer's behaviour and report information with inflated confidence as a result. Previous research has shown that positive feedback administered verbally can affect the confidence attributed to testimony, but the effect of non-verbal influence in interviews has been given little attention. This study investigated whether positive or negative non-verbal feedback could affect the confidence witnesses attribute to their responses. Participants witnessed staged CCTV footage of a crime scene and answered 20 questions in a structured interview, during which they were given either positive feedback (a head nod), negative feedback (a head shake) or no feedback. Those presented with positive non-verbal feedback reported inflated confidence compared with those presented with negative non-verbal feedback, regardless of accuracy, and this effect was most apparent when participants reported awareness of the feedback. These results provide further insight into the effects of interviewer behaviour in investigative interviews.

    Translation And Cultural Adaptation Of The States Of Consciousness Questionnaire (socq) And Statistical Validation Of The Mystical Experience Questionnaire (meq30) In Brazilian Portuguese

    The States of Consciousness Questionnaire (SOCQ) was developed to assess the occurrence and features of changes in consciousness induced by psilocybin, and includes the Mystical Experience Questionnaire (MEQ), developed to assess the occurrence of mystical experiences in altered states of consciousness. Objective: To translate the SOCQ to Brazilian Portuguese and validate the 30-item MEQ. Methods: The SOCQ was translated to Brazilian Portuguese and back-translated into English. The two English versions were compared and differences corrected, resulting in a Brazilian translation. Using an internet survey, 1504 Portuguese-speaking subjects answered the translated version of the SOCQ. The 4-factor version of the MEQ30 was analyzed using confirmatory factor analysis and reliability analysis. Results: A Brazilian Portuguese version of the SOCQ was made available. Goodness-of-fit indexes indicated that the data met the factorial structure proposed for the English MEQ30. Factors presented excellent to acceptable reliability according to Cronbach's alpha: mystical (0.95); positive mood (0.71); transcendence of time/space (0.83); and ineffability (0.81). Discussion: The Brazilian Portuguese version of the MEQ30 is validated, and it fits the factorial structure reported for the original English version. The SOCQ is also available to the Brazilian Portuguese-speaking population, allowing studies in different languages to be conducted and compared systematically.
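    The reliability values reported above are Cronbach's alpha coefficients. As a minimal, self-contained sketch of how that statistic is computed, using simulated, hypothetical item responses rather than the study's data:

    ```python
    # Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_respondents, n_items) matrix of item scores."""
        k = items.shape[1]
        item_var_sum = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var_sum / total_var)

    # Simulated Likert-style responses (0-5) for a hypothetical 4-item factor.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))              # shared trait across items
    noise = rng.normal(scale=0.8, size=(200, 4))    # item-specific noise
    responses = np.clip(np.round(2.5 + latent + noise), 0, 5)
    print(f"alpha = {cronbach_alpha(responses):.2f}")
    ```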

    A Model to Support IT Infrastructure Planning and the Allocation of IT Governance Authority

    Information technology (IT) requires a significant investment, involving up to 10.5% of revenue for some firms. Managers responsible for aligning IT investments with their firm's strategy seek to minimize technology costs, while ensuring that the IT infrastructure can accommodate increasing utilization, new software applications, and modifications to existing software applications. It becomes more challenging to align IT infrastructure and IT investments with firm strategy when firms operate in multiple geographic markets, because the firm faces different competitive positions and unique challenges in each market. We discussed these challenges with IT executives at four Forbes Global 2000 firms headquartered in Northern Europe. We build on interviews with these executives to develop a discrete-time, finite-horizon Markov decision model to identify the most economically beneficial IT infrastructure configuration from a set of alternatives. While more flexibility is always better (all else equal) and lower cost is always better (all else equal), our model helps firms evaluate the tradeoff between flexibility and cost given their business strategy and corporate structure. Our model supports firms in the decision process by incorporating their data and allowing firms to include their expectations of how future business conditions may impact the need to make IT changes. Because the model is flexible enough to accept parameters across a range of business strategies and corporate structures, it can help inform decisions and ensure that design choices are consistent with firm strategy.
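    A discrete-time, finite-horizon Markov decision model of the kind described above can be solved by backward induction. The sketch below is an illustration only: the states, actions, costs, and transition probabilities are invented placeholders, not the paper's calibrated data.

    ```python
    # Finite-horizon MDP solved by backward induction over the planning horizon.
    import numpy as np

    T = 8                                    # planning horizon (periods)
    states = ["basic", "flexible"]           # hypothetical infrastructure configurations
    actions = ["keep", "upgrade"]
    # cost[s, a]: per-period cost of state s under action a (illustrative values)
    cost = np.array([[1.0, 5.0],
                     [2.0, 2.0]])
    # P[a][s, s']: transition probability from s to s' under action a (illustrative)
    P = {0: np.array([[1.0, 0.0],
                      [0.1, 0.9]]),          # "keep": flexible may degrade to basic
         1: np.array([[0.2, 0.8],
                      [0.0, 1.0]])}          # "upgrade": basic likely becomes flexible

    V = np.zeros(len(states))                # terminal value: no cost after the horizon
    policy = np.zeros((T, len(states)), dtype=int)
    for t in reversed(range(T)):
        # Q[s, a] = immediate cost + expected cost-to-go
        Q = np.stack([cost[:, a] + P[a] @ V for a in range(len(actions))], axis=1)
        policy[t] = Q.argmin(axis=1)         # minimize expected total cost
        V = Q.min(axis=1)

    print("expected total cost from 'basic':", round(float(V[0]), 2))
    print("first-period decisions:", [actions[a] for a in policy[0]])
    ```

    In this toy setup the model weighs the higher running cost of the flexible configuration against the cost of upgrading later, which is precisely the flexibility-versus-cost tradeoff the abstract describes.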

    Objective and quantitative definitions of modified food textures based on sensory and rheological methodology

    Introduction: Patients who suffer from chewing and swallowing disorders, i.e. dysphagia, may have difficulties ingesting normal food and liquids. In these patients a texture-modified diet may enable the patient to maintain adequate nutrition. However, there is no generally accepted definition of 'texture' that includes measurements describing different food textures. Objective: To objectively define and quantify categories of texture-modified food by conducting rheological measurements and sensory analyses. A further objective was to facilitate the communication and recommendation of appropriate food textures for patients with dysphagia. Design: About 15 food samples varying in texture qualities were characterized by descriptive sensory and rheological measurements. Results: Soups were perceived as homogeneous; thickened soups were perceived as being easier to swallow, more melting and creamy compared with soups without thickener. Viscosity differed between the two types of soups. Pâtés were characterized by high chewing resistance, firmness, and larger particles compared with timbales and jellied products. Jellied products were perceived as wobbly, creamy, and easier to swallow. Concerning the rheological measurements, all solid products were more elastic than viscous (G′>G″), belonging to different G′ intervals: jellied products (low G′) and timbales together with pâtés (higher G′). Conclusion: By combining sensory and rheological measurements, a system of objective, quantitative, and well-defined food textures was developed that characterizes the different texture categories.
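    The rheological criterion above (G′>G″) can be summarised by the loss tangent, tan δ = G″/G′: values below 1 indicate elastic-dominated behaviour. A small illustrative sketch with hypothetical moduli, chosen only to mimic the reported ordering of G′ intervals:

    ```python
    # Classify products as elastic- or viscous-dominated from oscillatory moduli.
    # (g_storage = G', g_loss = G''; all values below are hypothetical.)
    samples = {
        "jellied product": (800.0, 150.0),    # low G'
        "timbale": (5000.0, 900.0),           # higher G'
        "pate": (9000.0, 1800.0),             # higher G'
    }

    for name, (g_storage, g_loss) in samples.items():
        tan_delta = g_loss / g_storage
        kind = "elastic-dominated" if tan_delta < 1.0 else "viscous-dominated"
        print(f"{name}: G'={g_storage:.0f} Pa, tan(delta)={tan_delta:.2f} -> {kind}")
    ```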

    Reading faces: differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children

    Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that existing between dogs and humans, there are additional challenges, since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that the processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs when presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in the laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purposes, a similar experiment was also run with 4-year-old children; they showed differential processing of facial expressions compared with dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions.
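    Lateral gaze bias of this kind is commonly quantified with a laterality index, LI = (L - R) / (L + R), computed over look counts toward each hemifield. A minimal sketch with hypothetical counts, not the study's data:

    ```python
    # Laterality index: +1 = all looks to the left, -1 = all looks to the right.
    def laterality_index(left_looks: int, right_looks: int) -> float:
        return (left_looks - right_looks) / (left_looks + right_looks)

    print(laterality_index(14, 6))   #  0.4 -> left-gaze bias
    print(laterality_index(5, 15))   # -0.5 -> right-gaze bias
    ```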

    Constraining the Twomey effect from satellite observations: issues and perspectives

    The Twomey effect describes the radiative forcing associated with a change in cloud albedo due to an increase in anthropogenic aerosol emissions. It is driven by the perturbation in cloud droplet number concentration (ΔNd,ant) in liquid-water clouds and is currently understood to exert a cooling effect on climate. The Twomey effect is the key driver in the effective radiative forcing due to aerosol–cloud interactions, but rapid adjustments also contribute. These adjustments are essentially the responses of cloud fraction and liquid water path to ΔNd,ant and thus scale approximately with it. While the fundamental physics of the influence of added aerosol particles on the droplet concentration (Nd) is well described by established theory at the particle scale (micrometres), how this relationship is expressed in the large-scale (hundreds of kilometres) perturbation, ΔNd,ant, remains uncertain. The discrepancy between process understanding at the particle scale and insufficient quantification at the climate-relevant large scale is caused by the co-variability of aerosol particles and updraught velocity and by droplet sink processes. These operate at scales on the order of tens of metres, at which only localised observations are available and at which no approach yet exists to quantify the anthropogenic perturbation. Different atmospheric models suggest diverse magnitudes of the Twomey effect even when applying the same anthropogenic aerosol emission perturbation. Thus, observational data are needed to quantify and constrain the Twomey effect; at the global scale, this means satellite data. There are four key uncertainties in determining ΔNd,ant, namely the quantification of (i) the cloud-active aerosol, i.e. the cloud condensation nuclei (CCN) concentrations at or above cloud base, (ii) Nd, (iii) the statistical approach for inferring the sensitivity of Nd to aerosol particles from the satellite data and (iv) the anthropogenic perturbation to CCN concentrations, which is not easily accessible from observational data. This review discusses deficiencies of current approaches for the different aspects of the problem and proposes several ways forward. In terms of CCN, retrievals of optical quantities such as aerosol optical depth suffer from a lack of vertical resolution, size, and hygroscopicity information, a non-direct relation to the concentration of aerosols, the difficulty of quantifying them within or below clouds, and insufficient sensitivity at low concentrations, in addition to retrieval errors. A future path forward can include utilising co-located polarimeter and lidar instruments, ideally including high-spectral-resolution lidar capability at two wavelengths to maximise vertically resolved size distribution information content. In terms of Nd, a key problem is the lack of operational retrievals of this quantity and the inaccuracy of the retrieval, especially in broken-cloud regimes. As for the Nd-to-CCN sensitivity, key issues are the updraught distributions and the role of Nd sink processes, for which empirical assessments for specific cloud regimes are currently the best solutions. These considerations point to the conclusion that past studies using existing approaches have likely underestimated the true sensitivity and, thus, the radiative forcing due to the Twomey effect.
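    Uncertainty (iii) above concerns the statistical estimate of the sensitivity S = dln(Nd)/dln(CCN). A sketch of this step with synthetic data is below; it also illustrates regression dilution, one reason noisy aerosol proxies (such as aerosol optical depth) can bias S, and hence the inferred forcing, low:

    ```python
    # Estimate S = dln(Nd)/dln(CCN) by least squares in log-log space (synthetic data).
    import numpy as np

    rng = np.random.default_rng(1)
    ccn = rng.lognormal(mean=5.0, sigma=0.6, size=500)     # CCN [cm^-3], synthetic
    true_S = 0.8
    nd = np.exp(true_S * np.log(ccn) + rng.normal(scale=0.4, size=500))

    S_hat = np.polyfit(np.log(ccn), np.log(nd), 1)[0]      # slope of the log-log fit
    print(f"retrieved sensitivity S = {S_hat:.2f} (true value {true_S})")

    # Replacing CCN with a noisy proxy (mimicking AOD) dilutes the regression slope.
    proxy = ccn * rng.lognormal(sigma=0.5, size=500)
    S_proxy = np.polyfit(np.log(proxy), np.log(nd), 1)[0]
    print(f"with a noisy proxy: S = {S_proxy:.2f} (biased low)")
    ```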

    Spatial and seasonal variability of the air-sea equilibration timescale of carbon dioxide

    The exchange of carbon dioxide between the ocean and the atmosphere tends to bring waters within the mixed layer toward equilibrium by reducing the partial pressure gradient across the air-water interface. However, the equilibration process is not instantaneous; in general, there is a lag between forcing and response. The timescale of air-sea equilibration depends on several factors, including the depth of the mixed layer, wind speed, and carbonate chemistry. We use a suite of observational data sets to generate climatological and seasonal composite maps of the air-sea equilibration timescale. The relaxation timescale exhibits considerable spatial and seasonal variations that are largely set by changes in mixed layer depth and wind speed. The net effect is dominated by the mixed layer depth; the gas exchange velocity and carbonate chemistry parameters only provide partial compensation. Broadly speaking, the adjustment timescale tends to increase with latitude. We compare the observationally derived air-sea gas exchange timescale with a model-derived surface residence time and a data-derived horizontal transport timescale, which allows us to define two nondimensional metrics of equilibration efficiency. These parameters highlight the tropics, subtropics, and northern North Atlantic as regions of inefficient air-sea equilibration where carbon anomalies are relatively likely to persist. The efficiency parameters presented here can serve as simple tools for understanding the large-scale persistence of air-sea disequilibrium of CO2 in both observations and models.
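    As a back-of-envelope illustration of the dependencies described above, the equilibration timescale for surface-ocean pCO2 can be approximated as tau = h * DIC / (k * K0 * Rf * pCO2), i.e. mixed-layer carbon inventory over gas exchange, inflated by carbonate buffering (Revelle factor Rf). All numbers below are representative assumptions, not values from the paper:

    ```python
    # Rough air-sea CO2 equilibration timescale from representative values.
    h = 50.0      # mixed layer depth [m]
    k = 5.6e-5    # gas transfer velocity [m/s] (~20 cm per hour)
    K0 = 3.4e-5   # CO2 solubility [mol m^-3 uatm^-1]
    pco2 = 400.0  # surface-ocean pCO2 [uatm]
    dic = 2.0     # dissolved inorganic carbon [mol m^-3]
    Rf = 10.0     # Revelle (buffer) factor

    tau = (h * dic) / (k * K0 * Rf * pco2)   # seconds
    print(f"tau ~ {tau / 86400:.0f} days")   # ~150 days for these inputs
    # Doubling h doubles tau; stronger winds (larger k) shorten it, consistent
    # with the finding that mixed layer depth dominates the variability.
    ```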

    Carbon partitioning between oil and carbohydrates in developing oat (Avena sativa L.) seeds

    Cereals accumulate starch in the endosperm as their major energy reserve in the grain. In most cereals the embryo, scutellum, and aleurone layer are high in oil, but these tissues constitute a very small part of the total seed weight. However, in oat (Avena sativa L.) most of the oil in kernels is deposited in the same endosperm cells that accumulate starch. Thus, oat endosperm is a desirable model system for studying the metabolic switches responsible for carbon partitioning between oil and starch synthesis. A prerequisite for such investigations is the development of an experimental system for oat that allows for metabolic flux analysis using stable and radioactive isotope labelling. An in vitro liquid culture system, developed for detached oat panicles and optimized to mimic kernel composition during different developmental stages in planta, is presented here. This system was subsequently used in analyses of carbon partitioning between lipids and carbohydrates by the administration of 14C-labelled sucrose to two cultivars having different amounts of kernel oil. The data presented in this study clearly show that the higher amount of oil in the high-oil cultivar compared with the medium-oil cultivar was due to a higher proportion of carbon partitioned into oil during seed filling, predominantly at the earlier stages of kernel development.
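    In a labelling experiment of this kind, the proportion of carbon partitioned into oil is the fraction of incorporated 14C recovered in each product pool. A trivial sketch with hypothetical counts, not the study's measurements:

    ```python
    # Fraction of incorporated 14C label recovered in each pool (hypothetical dpm values).
    label = {"oil": 5200.0, "starch": 11000.0, "other": 1800.0}
    total = sum(label.values())
    for pool, dpm in label.items():
        print(f"{pool}: {100 * dpm / total:.1f}% of incorporated 14C")
    ```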

    Specification of Neuronal Identities by Feedforward Combinatorial Coding

    Neuronal specification is often seen as a multistep process: earlier regulators confer broad neuronal identity and are followed by combinatorial codes specifying neuronal properties unique to specific subtypes. However, it is still unclear whether early regulators are re-deployed in subtype-specific combinatorial codes, and whether early patterning events act to restrict the developmental potential of postmitotic cells. Here, we use the differential fate of two lineage-related peptidergic neurons in the Drosophila ventral nerve cord to show how, in a feedforward mechanism, earlier determinants become critical players in later combinatorial codes. Amongst the progeny of neuroblast 5–6 are two peptidergic neurons: one expresses FMRFamide and the other expresses Nplp1 and the dopamine receptor DopR. We show that the HLH gene collier functions at three different levels to progressively restrict neuronal identity in the 5–6 lineage. At the final step, collier is the critical combinatorial factor that differentiates two partially overlapping combinatorial codes defining FMRFamide versus Nplp1/DopR identity. Misexpression experiments reveal that both codes can activate neuropeptide gene expression in vast numbers of neurons. Despite their partially overlapping composition, we find that the codes are remarkably specific, with each code activating only the proper neuropeptide gene. These results indicate that a limited number of regulators may constitute a potent combinatorial code that dictates unique neuronal cell fate, and that such codes show a surprising disregard for many global instructive cues.
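    A combinatorial code of this kind can be thought of as a set-inclusion rule: a neuropeptide gene is activated only in cells expressing its complete factor set. The toy model below is an illustration only; the factor sets are simplified stand-ins (with collier, col, as the differentiating factor, per the abstract), not the paper's exact codes:

    ```python
    # Toy combinatorial code: each identity requires its complete factor set.
    CODES = {
        "Nplp1/DopR": frozenset({"ap", "eya", "dimm", "col"}),  # col present
        "FMRFamide": frozenset({"ap", "eya", "dimm", "sqz"}),   # col absent
    }

    def neuropeptide_fate(expressed: set) -> list:
        """Return identities whose full code is contained in the expressed factors."""
        return [gene for gene, code in CODES.items() if code <= expressed]

    print(neuropeptide_fate({"ap", "eya", "dimm", "col"}))  # ['Nplp1/DopR']
    print(neuropeptide_fate({"ap", "eya", "dimm", "sqz"}))  # ['FMRFamide']
    print(neuropeptide_fate({"ap", "eya", "dimm"}))         # [] shared core alone is insufficient
    ```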