18 research outputs found

    Bringing 'place' back in: regional clusters, project governance, and new product outcomes

    Get PDF
    We examine new product outcomes in the context of regional clusters. Based on past research on marketing relationships, clusters, and social networks, we propose that the overall configuration of a cluster helps promote particular governance practices among its members. These practices have distinct value-creating properties, and when they are brought to bear on a specific new product development project within a cluster, they promote performance outcomes like product novelty and speed to market. Ultimately, these performance effects are reinforced by the configuration of the cluster itself. In general, we propose that new product outcomes follow from complex interactions between a cluster's macro-level configuration and its micro-level governance processes. More broadly, our framework points to the importance of geographical variables and to the role of “place” in marketing decision-making.

    Pressed to quit

    No full text

    Understanding the design of warning signals: a predator’s view

    Get PDF
    This work was funded by BBSRC grants awarded to J.M.H. and O.P. (BB/N006569/1), C.R. and J.S. (BB/N00602X/1), P.G.L. (BB/N005945/1), and I.C.C. (BB/N007239/1). O.P. was also funded by a Maria Zambrano Fellowship for the attraction of international talent for the requalification of the Spanish university system, NextGeneration EU (ALRC).

    1. Animal warning signals show remarkable diversity, yet subjectively appear to share certain visual features that make defended prey stand out and look different from more cryptic palatable species. For example, many (but far from all) warning signals involve high-contrast elements, such as stripes and spots, and often involve the colours yellow and red. How exactly do aposematic species differ from non-aposematic ones in the eyes (and brains) of their predators?

    2. Here, we develop a novel computational modelling approach to quantify prey warning signals and establish what visual features they share. First, we develop a model visual system, made of artificial neurons with realistic receptive fields, to provide a quantitative estimate of the neural activity in the first stages of the visual system of a predator in response to a pattern. The system can be tailored to specific species. Second, we build a novel model that defines a 'neural signature', comprising quantitative metrics that measure the strength of stimulation of the population of neurons in response to patterns. This framework allows us to test how individual patterns stimulate the model predator visual system.

    3. For the predator–prey system of birds foraging on lepidopteran prey, we compared the strength of stimulation of a modelled avian visual system in response to a novel database of hyperspectral images of aposematic and undefended butterflies and moths. Warning signals generate significantly stronger activity in the model visual system, setting them apart from the patterns of undefended species. The activity was also very different from that seen in response to natural scenes. Therefore, to their predators, lepidopteran warning patterns are distinct from their non-defended counterparts and stand out against a range of natural backgrounds.

    4. For the first time, we present an objective and quantitative definition of warning signals based on how the pattern generates population activity in a neural model of the brain of the receiver. This opens new perspectives for understanding and testing how warning signals have evolved and, more generally, how sensory systems constrain signal design.

    Publisher PDF. Peer reviewed.
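    To make the modelling idea in point 2 concrete, the sketch below builds a small bank of Gabor-like receptive fields, filters a prey pattern image, and summarizes the pooled population activity with two simple metrics. It is a minimal illustration in MATLAB/Octave (the tools named for the released code in the records below); the filter parameters, the input file name butterfly.png, and both summary metrics are assumptions for illustration, not the authors' implementation.

        % Minimal sketch (assumptions throughout): a bank of model receptive
        % fields filters a prey pattern, and simple summary statistics of the
        % pooled population activity stand in for a "neural signature".
        img = double(imread('butterfly.png'));     % hypothetical pattern image
        if ndims(img) == 3, img = mean(img, 3); end
        img = img / max(img(:));                   % normalize to [0, 1]

        orientations = 0:45:135;                   % receptive-field orientations (deg)
        scales       = [4 8 16];                   % spatial scales (pixels per cycle)
        responses    = [];

        for sc = scales
            for th = orientations
                rf = gaborKernel(sc, th);          % one model receptive field
                r  = abs(conv2(img, rf, 'same'));  % rectified unit activity map
                responses = [responses; r(:)];     % pool the population activity
            end
        end

        % Two example summary metrics of the population response
        meanActivity = mean(responses);                     % overall drive
        strongFrac   = mean(responses > 2 * meanActivity);  % fraction of strongly driven units
        fprintf('mean activity %.4f, strongly driven fraction %.4f\n', meanActivity, strongFrac);

        function g = gaborKernel(wavelength, thetaDeg)
        % Odd-symmetric Gabor: a standard simple model of an early visual receptive field.
            sigma  = 0.5 * wavelength;
            half   = ceil(3 * sigma);
            [x, y] = meshgrid(-half:half, -half:half);
            th = deg2rad(thetaDeg);
            xp = x .* cos(th) + y .* sin(th);
            g  = exp(-(x.^2 + y.^2) / (2 * sigma^2)) .* sin(2 * pi * xp / wavelength);
            g  = g - mean(g(:));   % zero mean: uniform regions evoke no response
        end

    Comparing such metrics between aposematic and undefended patterns, and against natural scenes, is the kind of test the framework supports; the real model is additionally tailored to avian spatial and spectral vision.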

    A computational neuroscience framework for quantifying warning signals

    No full text
    Animal warning signals show remarkable diversity, yet subjectively appear to share certain visual features that make defended prey stand out and look different from more cryptic palatable species. For example, many (but far from all) warning signals involve high-contrast elements, such as stripes and spots, and often involve the colours yellow and red. How exactly do aposematic species differ from non-aposematic ones in the eyes (and brains) of their predators?

    Here we develop a novel computational modelling approach to quantify prey warning signals and establish what visual features they share. First, we develop a model visual system, made of artificial neurons with realistic receptive fields, to provide a quantitative estimate of the neural activity in the first stages of the visual system of a predator in response to a pattern. The system can be tailored to specific species. Second, we build a novel model that defines a 'neural signature', comprising quantitative metrics that measure the strength of stimulation of the population of neurons in response to patterns. This framework allows us to test how individual patterns stimulate the model predator visual system.

    For the predator–prey system of birds foraging on lepidopteran prey, we compared the strength of stimulation of a modelled avian visual system in response to a novel database of hyperspectral images of aposematic and undefended butterflies and moths. Warning signals generate significantly stronger activity in the model visual system, setting them apart from the patterns of undefended species. The activity was also very different from that seen in response to natural scenes. Therefore, to their predators, lepidopteran warning patterns are distinct from their non-defended counterparts and stand out against a range of natural backgrounds.

    For the first time, we present an objective and quantitative definition of warning signals based on how the pattern generates population activity in a neural model of the brain of the receiver. This opens new perspectives for understanding and testing how warning signals have evolved and, more generally, how sensory systems constrain signal design.

    Neural model and computation of metrics

    The software required for using the model, extracting the metrics from the model response, and generating the figures is Matlab (proprietary; MATLAB and Statistics Toolbox Release 2019b, 9.7.0.1190202 (R2019b), Natick, Massachusetts: The MathWorks Inc.). An open-source alternative for running the Matlab routines is Octave (https://octave.org/).

    Statistical analysis

    The software required for the statistical analysis is R (free, open source; R Core Team (2020). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria).

    Funding provided by: Maria Zambrano Fellowship for the attraction of international talent for the requalification of the Spanish university system, NextGeneration EU (ALRC)
    Funding provided by: Biotechnology and Biological Sciences Research Council (Crossref Funder Registry ID: https://ror.org/00cwqg982); award numbers BB/N006569/1, BB/N00602X/1, BB/N005945/1, and BB/N007239/1

    Database construction

    The novel database of lepidopteran patterns of aposematic and non-aposematic species consists of a representative set of 125 species of Lepidoptera across 12 families (96 aposematic and 29 non-aposematic species, with a total of 676 hyperspectral images; see the paper's Supplementary Material 1 for details). Samples of each species were located in museum collections (the Natural History Museum (BMNH), London, UK; the Manchester Museum (MMUE), Manchester, UK; and the American Museum of Natural History (AMNH), New York, USA). Their dorsal and ventral sides were photographed using an ultraviolet hyperspectral camera (Resonon Pika NUV, Resonon Inc., MT, USA) covering the 350-800 nm spectral range with a spectral resolution of 1 nm. The camera was fitted with a near-ultraviolet 17 mm focal length objective lens. To maximize the homogeneity of the light field, the specimens were illuminated by four blue-enhanced halogen lamps (SoLux, 35 W, 12 V MR16 GU5.3 4700 K, EiKO Global, KS, USA) placed 22 cm apart on a square light fixture and oriented vertically toward the horizontal scanning plane. See the paper's Supplementary Methods 1 for details on the spatial and spectral calibration of the imaging system.

    The database is freely accessible at https://arts.st-andrews.ac.uk/lepidoptera/index.html

    Image analysis – neural model of predator vision – computation of metrics (summary statistics)

    The neural model of a predator visual system and the computation of the metrics of the modelled neural activity were coded in Matlab (MATLAB and Statistics Toolbox Release 2019b, 9.7.0.1190202 (R2019b), Natick, Massachusetts: The MathWorks Inc.). See the accompanying README.md file and Supplementary Methods 2 and 3 for details.

    Statistical analysis

    The statistical analysis was done in R (R Core Team 2020) using generalized linear models (function glm) for the logistic regressions, and the function glmer in the package lme4 (Bates et al. 2014) for fitting generalized linear mixed models. See README.md and Supplementary Method 4 for details.
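    To make the imaging pipeline concrete, the sketch below projects a hyperspectral cube sampled at 350-800 nm in 1 nm steps, as in the database, onto four avian cone classes to obtain per-receptor quantum-catch images. The Gaussian sensitivity curves, their peak wavelengths, and the placeholder random cube are rough stand-ins for illustration, not the calibrated receptor spectra or loading code described in the paper's Supplementary Methods.

        % Illustrative MATLAB/Octave sketch (assumptions throughout): project a
        % 350-800 nm hyperspectral cube onto four avian cone classes.
        wl   = 350:800;                      % wavelength axis (nm), 451 bands
        cube = rand(128, 128, numel(wl));    % placeholder for a loaded image cube

        peaks = [370 445 508 565];           % assumed UVS/SWS/MWS/LWS peaks (nm)
        tuning = 35;                         % assumed tuning width (nm)

        [h, w, ~] = size(cube);
        catches = zeros(h, w, numel(peaks));
        for k = 1:numel(peaks)
            s = exp(-0.5 * ((wl - peaks(k)) / tuning).^2);  % Gaussian cone sensitivity
            s = s / sum(s);                                 % normalize to unit area
            % Weighted sum over the spectral dimension = quantum catch per pixel
            catches(:, :, k) = sum(cube .* reshape(s, 1, 1, []), 3);
        end

    The resulting four-channel image is the kind of input a spatial receptive-field model can then process to produce the population activity that the neural-signature metrics summarize.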

    A computational neuroscience framework for quantifying warning signals (code)

    No full text