
    How do personality traits influence the experience of positive emotions?

    Neuroticism is a personality trait known to be associated with negative emotions (Tellegen, 1985), but little is known about how it affects the experience of positive emotions, and more specifically, hope. Positive emotions can have long-lasting effects on well-being by broadening thought-action repertoires and undoing lingering negative emotions (Fredrickson, 1998). Cognitive strategies strongly influence state positive emotions in individuals low in neuroticism, but have no effect on those high in neuroticism (Ng & Diener, 2009). We expected individuals high in neuroticism to experience a smaller increase in state hope following a hope induction. Participants completed an online survey comprising a personality test and trait positive and negative emotion questionnaires. Before and after being randomly assigned to either the hope (n = 147) or nurturant love (n = 142) induction procedure, participants completed questionnaires on state positive and negative emotions. The results revealed that the hope induction procedures of autobiographical recall and imagery were not effective at inducing state hope, but they increased state positive emotions and decreased state negative emotions. Neuroticism was negatively correlated with trait hope and trait positive emotions, and positively correlated with trait negative emotions. Exploratory analyses were also performed on control questions regarding how often participants thought of hopeful memories and whether they believed engaging in this daily would be beneficial. It was concluded that neuroticism is negatively associated with the experience of trait and state positive emotions, but not state hope. The limitations of the study design and directions for future research are also discussed.

    INTERNAL CONSISTENCIES: REGARDING WEIGHTS AND MEASURES

    Rather than seeing a distinction between theoretical discourse and the science of building, Vitruvius, a Roman architect and engineer active in the 1st century BC, argued convincingly for the breadth of knowledge necessary to practice architecture with authority: "knowledge is the child of practice and theory". The crux of his argument is that a sufficient breadth of training, encompassing both the theoretical and practical sciences, is necessary to lend authority to creative vision. In a like spirit, a series of workshops in UCD Architecture has sought to challenge the contemporary lack of sympathy between theoretical discourse and the science of building, which undermines the authority with which both students and practitioners practice. Embedded within each workshop are variations in intent, from the cultural discourse of the international collaboration of the North Atlantic Rim project, to the theoretical concerns of the Ateliers Series and the environmental bias of the Irish Timber course, each drawing upon discourses external to architecture and leavening them against the inherent logic of material and structural imperatives. The resulting evolution in design process, linking both technological imperatives and conceptual intentions to the creative act, shatters the prevailing disjunction between theoretical concerns and technological explorations in the discipline of architecture.

    ClassCut for Unsupervised Class Segmentation

    We propose a novel method for unsupervised class segmentation on a set of images. It alternates between segmenting object instances and learning a class model. The method is based on a segmentation energy defined over all images at the same time, which can be optimized efficiently by techniques used before in interactive segmentation. Over iterations, our method progressively learns a class model by integrating observations over all images. In addition to appearance, this model captures the location and shape of the class with respect to an automatically determined coordinate frame common across images. This frame allows us to build stronger shape and location models, similar to those used in object class detection. Our method is inspired by interactive segmentation methods [1], but it is fully automatic and learns models characteristic of the object class rather than specific to one particular object/image. We experimentally demonstrate on the Caltech4, Caltech101, and Weizmann horses datasets that our method (a) transfers class knowledge across images, which improves results compared to segmenting every image independently; (b) outperforms GrabCut [1] for the task of unsupervised segmentation; and (c) offers competitive performance compared to the state of the art in unsupervised segmentation and, in particular, outperforms the topic model [2].
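    The alternation described in the abstract can be illustrated with a deliberately simplified sketch. The real method optimizes a joint segmentation energy over appearance, shape and location with graph-cut techniques; here the "class model" is reduced to a single mean intensity, and images are 1-D pixel lists, purely to show the segment-then-refit loop. All names, values and the initialization are hypothetical.

```python
def segment(image, fg_mean, bg_mean):
    """Label each pixel foreground if it is closer to the class mean."""
    return [abs(p - fg_mean) < abs(p - bg_mean) for p in image]

def alternate(images, fg_mean=200.0, bg_mean=50.0, iters=5):
    """Alternate between segmenting all images against a shared class
    model and refitting that model from all foreground pixels.
    The initial means are an arbitrary bright/dark guess."""
    for _ in range(iters):
        masks = [segment(img, fg_mean, bg_mean) for img in images]
        fg = [p for img, m in zip(images, masks)
              for p, is_fg in zip(img, m) if is_fg]
        bg = [p for img, m in zip(images, masks)
              for p, is_fg in zip(img, m) if not is_fg]
        if fg:
            fg_mean = sum(fg) / len(fg)
        if bg:
            bg_mean = sum(bg) / len(bg)
    return masks, fg_mean, bg_mean

# Two toy "images": bright object pixels on a dark background.
imgs = [[20, 30, 180, 200, 40], [25, 210, 190, 35, 50]]
masks, fg_mean, bg_mean = alternate(imgs)
```

    The point of the sketch is the joint update: because the model is refit from the foreground pixels of every image at once, evidence transfers across images, which is what lets the full method outperform per-image segmentation.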

    A Unified Nanopublication Model for Effective and User-Friendly Access to the Elements of Scientific Publishing

    Scientific publishing is the means by which we communicate and share scientific knowledge, but the process currently often lacks transparency and machine-interpretable representations. Scientific articles are published as long, coarse-grained texts with complicated structure, optimized for human readers rather than for automated means of organization and access. Peer review is the main method of quality assessment, but peer reviews are nowadays rarely published, and their internal structure and their links to the respective articles are not accessible. In order to address these problems and to better align scientific publishing with the principles of the Web and Linked Data, we propose here an approach that uses nanopublications as a unifying model to represent, in a semantic way, the elements of publications, their assessments, as well as the involved processes, actors, and provenance in general. To evaluate our approach, we present a dataset of 627 nanopublications representing an interlinked network of the elements of articles (such as individual paragraphs) and their reviews (such as individual review comments). Focusing on the specific scenario of editors performing a meta-review, we introduce seven competency questions and show how they can be executed as SPARQL queries. We then present a prototype of a user interface for that scenario that shows different views on the set of review comments provided for a given manuscript, and we show in a user study that editors find the interface useful for answering their competency questions. In summary, we demonstrate that a unified and semantic publication model based on nanopublications can make scientific communication more effective and user-friendly.
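    The core idea, review comments linked to individual article elements so that editors' competency questions become queries, can be sketched without RDF. The snippet below is a hypothetical plain-Python stand-in for one such question ("how many negative comments does each section receive?"); the actual model expresses these links as nanopublications and answers the question with SPARQL. The field names, identifiers and tone labels are all invented for illustration.

```python
from collections import Counter

# Hypothetical review comments, each linked to one paragraph of the article.
comments = [
    {"id": "c1", "target": "sec:intro/p2",   "tone": "negative"},
    {"id": "c2", "target": "sec:intro/p3",   "tone": "positive"},
    {"id": "c3", "target": "sec:methods/p1", "tone": "negative"},
    {"id": "c4", "target": "sec:methods/p1", "tone": "negative"},
]

def negative_comments_per_section(comments):
    """Count negative comments grouped by the section of their target
    paragraph (the section is the part before the '/')."""
    return Counter(c["target"].split("/")[0]
                   for c in comments if c["tone"] == "negative")

counts = negative_comments_per_section(comments)
```

    In the paper's setting the same aggregation would be a SPARQL GROUP BY over the nanopublication graph rather than a dictionary pass.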

    The influence of organic and conventional fertilisation and crop protection practices, preceding crop, harvest year and weather conditions on yield and quality of potato (Solanum tuberosum) in a long-term management trial

    The effects of organic versus conventional crop management practices (fertilisation, crop protection) and preceding crop on potato tuber yield (total, marketable, tuber size grade distribution) and quality (proportion of diseased, green and damaged tubers, tuber macro-nutrient concentrations) parameters were investigated over six years (2004–2009) as part of a long-term factorial field trial in North East England. Inter-year variability (the effects of weather and preceding crop) was observed to have a profound effect on yields and quality parameters, and this variability was greater in organic fertility systems. Total and marketable yields were significantly reduced by the use of both organic crop protection and organic fertility management. However, the yield gap between organic and conventional fertilisation regimes was greater and more variable than that between crop protection practices. This appears to be attributable mainly to the lower and less predictable nitrogen supply in organically fertilised crops. Increased incidence of late blight in organic crop protection systems only occurred when conventional fertilisation was applied. In organically fertilised crops, yield was significantly higher following grass/red clover leys than winter wheat, but there was no pre-crop effect in conventionally fertilised crops. The results highlight that nitrogen supply from organic fertilisers, rather than inefficient pest and disease control, may be the major limiting factor for yields in organic potato production systems.

    Deep Markov Random Field for Image Modeling

    Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by a lack of expressive power. This issue is primarily due to the fact that conventional MRF formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations and propose a novel MRF model that uses fully-connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and from it derive an approximated feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks and the cyclic dependency structure of MRFs in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows the model to be efficiently learned from data. Experimental results on a variety of low-level vision tasks show notable improvement over the state of the art. Comment: Accepted at ECCV 201
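    The idea of coupling RNN sweeps along opposite directions so that every position receives context from both sides can be shown with a toy scalar RNN over a 1-D "pixel" sequence. This is not the paper's architecture: the weights, the tanh cell and the additive combination below are arbitrary illustrative choices, and the real model runs multiple such sweeps over 2-D images.

```python
import math

def rnn_pass(xs, w_in=0.5, w_rec=0.3):
    """One scalar RNN sweep: h_t = tanh(w_in * x_t + w_rec * h_{t-1}).
    The weights are fixed, arbitrary values for illustration."""
    h, out = 0.0, []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
        out.append(h)
    return out

def coupled_features(xs):
    """Couple two sweeps along opposite directions, in the spirit of
    the feed-forward approximation: each position's feature combines
    context from both ends of the sequence."""
    fwd = rnn_pass(xs)
    bwd = rnn_pass(xs[::-1])[::-1]
    return [f + b for f, b in zip(fwd, bwd)]

# A palindromic input makes the bidirectional symmetry visible:
# the first and last features come out identical.
feats = coupled_features([0.1, 0.8, 0.1])
```

    A single-direction sweep would give position 0 no information about later pixels; the coupled form is what lets a feed-forward pass approximate the cyclic dependencies of the MRF.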

    Virtual Immortality: Reanimating Characters from TV Shows.

    The objective of this work is to build virtual talking avatars of characters fully automatically from TV shows. From this unconstrained data, we show how to capture a character's style of speech, visual appearance and language in an effort to construct an interactive avatar of the person and effectively immortalize them in a computational model. We make three contributions: (i) a complete framework for producing a generative model of the audiovisual appearance and language of characters from TV shows; (ii) a novel method for aligning transcripts to video using the audio; and (iii) a fast audio segmentation system for silencing non-spoken audio from TV shows. Our framework is demonstrated using all 236 episodes from the TV series Friends [34] (97 hrs of video) and shown to generate novel sentences as well as character-specific speech and video.
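    The abstract does not describe how the audio segmenter works, so the sketch below shows only a common baseline for the same task: frame-wise energy thresholding that zeroes out near-silent frames. The frame size, threshold and sample values are hypothetical, and real systems operate on sampled waveforms with far larger frames.

```python
def silence_nonspeech(samples, frame=4, threshold=0.01):
    """Zero out frames whose mean energy falls below the threshold.
    'frame' and 'threshold' are illustrative values, not tuned ones."""
    out = list(samples)
    for start in range(0, len(out), frame):
        chunk = out[start:start + frame]
        energy = sum(s * s for s in chunk) / len(chunk)
        if energy < threshold:
            out[start:start + frame] = [0.0] * len(chunk)
    return out

# One loud frame (kept) followed by one near-silent frame (zeroed).
audio = [0.5, -0.4, 0.6, -0.5, 0.001, -0.002, 0.001, 0.0]
cleaned = silence_nonspeech(audio)
```

    In the paper's pipeline such a step would precede transcript alignment, so that only spoken segments are matched against the text.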

    Direct core functionalisation of naphthalenediimides by iridium catalysed C–H borylation

    We report the first boron-substituted naphthalenediimides (NDIs), prepared by iridium catalysed C–H activation. Both mono- and diborylated products are available, which have been further elaborated by Suzuki–Miyaura coupling.

    X-ray Absorption Spectroscopy as Process Analytical Technology: Reaction Studies for the Manufacture of Sulfonate-Stabilized Calcium Carbonate Particles

    Process analytical technologies are widely used to inform process control by identifying relationships between reagents and products. Here, we present a novel process analytical technology system for operando X-ray absorption spectroscopy (XAS) on multiphase, multicomponent synthesis processes, based on the combination of a conventional lab-scale agitated reactor with a liquid-jet cell. The preparation of sulfonate-stabilized CaCO3 particles from polyphasic Ca(OH)2 dispersions was monitored in real time by Ca K-edge XAS to identify changes in Ca speciation in the bulk solution/dispersion as a function of time and process conditions. Linear combination fitting of the spectra quantitatively resolved composition changes from the initial conversion of Ca(OH)2 to the Ca(R–SO3)2 surfactant to the ultimate formation of nCaCO3·mCa(R–SO3)2 particles. The system provides a novel tool with strong chemical specificity for probing multiphase synthesis processes at a molecular level, providing an avenue to establishing the relationships between the critical quality attributes of a process and the quality and performance of the product.
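    Linear combination fitting, the analysis named above, expresses a measured spectrum as a weighted sum of reference spectra of the known species. A minimal two-component sketch, solving the least-squares problem via the 2x2 normal equations, is shown below; the spectra are short synthetic vectors for illustration only, not real XAS data, and the real analysis would involve more components and normalization.

```python
def lcf_two_components(measured, ref_a, ref_b):
    """Least-squares weights (wa, wb) so that
    measured ~= wa * ref_a + wb * ref_b,
    obtained by solving the 2x2 normal equations directly."""
    aa = sum(a * a for a in ref_a)
    bb = sum(b * b for b in ref_b)
    ab = sum(a * b for a, b in zip(ref_a, ref_b))
    ma = sum(m * a for m, a in zip(measured, ref_a))
    mb = sum(m * b for m, b in zip(measured, ref_b))
    det = aa * bb - ab * ab  # nonzero when the references are not collinear
    return (ma * bb - mb * ab) / det, (mb * aa - ma * ab) / det

# Synthetic "reference spectra" and a 70/30 mixture of them.
ref_caoh2 = [1.0, 0.5, 0.2, 0.1]
ref_caco3 = [0.2, 0.8, 1.0, 0.4]
measured = [0.7 * a + 0.3 * b for a, b in zip(ref_caoh2, ref_caco3)]
wa, wb = lcf_two_components(measured, ref_caoh2, ref_caco3)
```

    Tracking such weights across spectra collected over time is what turns the raw XAS series into the composition-versus-time curves described in the abstract.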

    Organising multi-dimensional biological image information: The BioImage Database

    Nowadays it is possible to unravel complex information at all levels of cellular organization by obtaining multi-dimensional image information. At the macromolecular level, three-dimensional (3D) electron microscopy, together with other techniques, is able to reach resolutions at the nanometer or subnanometer level. The information is delivered in the form of 3D volumes containing samples of a given function, for example, the electron density distribution within a given macromolecule. The same situation occurs at the cellular level with the new forms of light microscopy, particularly confocal microscopy, all of which produce biological 3D volume information. Furthermore, it is possible to record sequences of images over time (videos), as well as sequences of volumes, bringing key information on the dynamics of living biological systems. It is in this context that work on BioImage started two years ago, and that its first version is now presented here. In essence, BioImage is a database specifically designed to contain multi-dimensional images, perform queries and interactively work with the resulting multi-dimensional information on the World Wide Web, as well as accomplish the required cross-database links. Two sister home pages of BioImage can be accessed at http://www.bioimage.org and http://www-embl.bioimage.or