5 research outputs found

    Patents, secret innovations and firm's rate of return: differential effects of the innovation leader

    This paper studies the dynamic interactions and the spillovers that exist among patent application intensity, secret innovation intensity and stock returns of a well-defined technological cluster of firms. We study the differential behavior when there is an Innovation Leader (IL) and the rest of the firms are Innovation Followers (IFs). The leader and the followers of the technological cluster are defined according to their patent innovation activity (stock of knowledge). We use data on stock returns and patent applications of a panel of technologically related firms of the United States (US) economy over the period 1979 to 2000. Most firms of the technological cluster are from the pharmaceutical-products industry. Interaction effects and spillovers are quantified by applying several Panel Vector Autoregressive (PVAR) market value models. Impulse Response Functions (IRFs) and dynamic interaction multipliers of the PVAR models are estimated. Secret patent innovations are estimated by using a recent Poisson-type patent count data model, which includes a set of dynamic latent variables. We show that firms’ stock returns, observable patent intensities and secret patent intensities have significant dynamic interaction effects for technologically related firms. The predictive absorptive capacity of the IL is the highest, and this type of absorptive capacity is positively correlated with good firm performance measures. The innovation spillover effects that exist among firms, due to the imperfect appropriability of the returns to investment in R&D, are especially important for secret innovations and less relevant for observed innovations. The flow of spillovers between followers and the leader is not symmetric, being higher from the IL to the IFs.
    Keywords: Patent count data model, Stock market value, Secret innovations, Absorptive capacity, Technological proximity, Panel Vector Autoregression (PVAR), Impulse Response Function (IRF), Efficient Importance Sampling (EIS)
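    To illustrate the impulse-response step only, the sketch below fits a plain (non-panel) two-variable VAR on synthetic firm-level series with statsmodels. It is not the paper's model: the PVAR structure, the latent secret-innovation intensities and the EIS estimation are not reproduced, and all variable names and data are assumptions.
```python
# Minimal sketch, assuming synthetic data: a plain VAR with impulse responses,
# standing in for the Panel VAR (PVAR) + IRF machinery described in the abstract.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)
T = 240
returns = rng.normal(0.0, 0.05, T)                       # synthetic stock returns
patents = np.clip(2 + 5 * np.roll(returns, 1)            # continuous proxy for
               + rng.normal(0, 1, T), 0, None)           # patent application intensity
data = pd.DataFrame({"stock_return": returns, "patent_intensity": patents})

model = VAR(data)
results = model.fit(maxlags=4, ic="aic")   # lag order selected by AIC
irf = results.irf(10)                      # impulse responses over a 10-period horizon
irf.plot(orth=True)                        # orthogonalised IRFs for all shock/response pairs
```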

    Monte Carlo comparison of six hierarchical clustering methods on random data

    There is mounting evidence to suggest that the complete linkage method does the best clustering job among all hierarchical agglomerative techniques, particularly with respect to misclassification in samples from known multivariate normal distributions. However, clustering methods are notorious for discovering clusters even in random data sets. We compare six agglomerative hierarchical methods on univariate random data from uniform and standard normal distributions and find that the complete linkage method is generally best at not discovering false clusters. The criterion is the ratio of the number of within-cluster distances to the number of all distances that are at most equal to the maximum within-cluster distance.
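    A minimal sketch of how such a criterion could be computed (my reading of the abstract, not the authors' code), using SciPy's agglomerative clustering on univariate uniform random data; the sample size, the cut into three clusters, and the six linkage methods listed are assumptions and need not match the paper's exact comparison.
```python
# Sketch of the criterion described above: the ratio of the number of within-cluster
# distances to the number of all distances that do not exceed the maximum
# within-cluster distance. Parameters are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
x = rng.uniform(size=(50, 1))                      # univariate uniform random data

def criterion_ratio(data, method="complete", k=3):
    labels = fcluster(linkage(data, method=method), t=k, criterion="maxclust")
    d = squareform(pdist(data))                    # full pairwise distance matrix
    iu = np.triu_indices_from(d, k=1)              # each unordered pair counted once
    same = (labels[:, None] == labels[None, :])[iu]
    within = d[iu][same]                           # within-cluster distances
    return within.size / np.sum(d[iu] <= within.max())

for m in ("single", "complete", "average", "centroid", "median", "ward"):
    print(m, round(criterion_ratio(x, method=m), 3))
```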

    Managing kangaroo grazing for the conservation of grassland and grassy woodland fauna

    Large mammalian grazers are ecosystem engineers that alter the resources available to other species through selective consumption of plant matter, redistribution of nutrients and trampling. While some level of grazing is considered critical for maintaining species diversity, alteration of natural grazing regimes can have a severe impact on native biodiversity. Restoration of grazing regimes that promote conservation of biodiversity is a priority in many protected areas. However, the ability to achieve this goal is limited by a lack of understanding of what ‘appropriate’ grazing regimes for conservation of biodiversity are. In south-eastern Australia, high-intensity grazing by the native eastern grey kangaroo (Macropus giganteus) has been linked to the decline of multiple taxa. While efforts to manage the impact of kangaroo grazing on other taxa have been undertaken, the effectiveness of these interventions is limited by a lack of knowledge of what constitutes optimal grazing levels. In this thesis, I used kangaroo population counts, tree canopy cover maps, ground vegetation structure, and reptile and bird counts to investigate the relationship between kangaroos, grass structure and fauna. I found that: 1) there was a strong negative relationship between the abundance of kangaroos and grass structure (Paper I); 2) high-intensity kangaroo grazing had a negative effect on the reptile community (Paper I); 3) birds with similar traits favoured similar grazing intensities, with different grazing intensities favoured by different trait groups (Paper II); 4) the occurrence of a threatened grassland reptile, the striped legless lizard (Delma impar), was positively related to fine-scale grass complexity and negatively related to kangaroo density at the broad scale (Paper III); 5) kangaroos selected forage habitat away from roads, in areas with a high cover of short grass (Paper IV); and 6) line transect sampling undertaken from vehicles driven along tracks can provide an accurate method to survey the kangaroo population, provided the distribution of kangaroos relative to tracks is known and accounted for (Paper V). My investigation into the relationships between kangaroos, grass structure and fauna indicated that grass structure has a strong effect on many reptiles and birds, and that intervention may be needed to change kangaroo habitat selection in a way that mimics natural foraging patterns in order to promote optimal vegetation structures for the conservation of native biodiversity. Therefore, to preserve a full complement of species in these grassy habitats, I recommend that: 1) management of grazing is based on direct measures of grass structure, not herbivore abundance; 2) the extent and duration of intense grazing is limited; and 3) grazing pressure is rotated to create mosaics of different levels of grass structure in space and time. In making these recommendations, I emphasise that management of grazing by kangaroos will be necessary for the ongoing conservation of biodiversity in grasslands and grassy woodlands, and that further research is needed on how to manage kangaroo grazing patterns for the conservation of biodiversity in grasslands and grassy woodlands in south-eastern Australia.

    Improving the discriminatory power of ellipsoidal functions modified by rotated factor loadings in the optimized formation of clusters

    Technological advances have driven the rise of data collection in companies, governments and various industrial segments. In this context, techniques that form and discriminate between clusters are widely used in datasets with multiple variables, creating the need for tools that account for the existing variance-covariance structure. Based on this, this work presents a proposal to improve the discriminatory power of confidence regions in the formation and estimation of optimal clusters, using multivariate and experimental techniques to extract information in an optimized way from correlated datasets. Factor analysis was used as the exploratory multivariate method, tuning the rotation of the factor loadings through a mixture design and then combining the total-explained-variance functions via the mean square error. This step is optimized with a sequential quadratic programming algorithm. Given the optimal scores, a multilevel factorial design is built to cover all combinations of linkage methods and types of analysis, seeking the parameterization with the least variability, which generates confidence ellipses with better discrimination between groups. A strategy to analyze the levels of agreement and the existence of inversions in the formation of clusters is proposed using the Kappa and Kendall indicators. Motivated by the need for strategies to classify substations with respect to voltage sag phenomena, which cause faults in the distribution of electricity, the method was applied to a set of real data representing the power quality indexes of substations located in southeastern Brazil. Optimal values were found for the rotation of the factor loadings, and the parameterization “Ward and analysis of covariance” was identified as the ideal strategy for creating the clusters in this dataset. Thus, low-variability clusters and precise confidence ellipses were generated to estimate the voltage sag patterns, promoting better discriminatory power in the classification of the clusters through the confidence regions. The confirmatory analysis indicated that the “Ward” linkage was the most robust method for this dataset, even under the influence of disturbances in the original data.
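    The sketch below illustrates two of the steps described above — varimax-rotated factor scores and Ward clustering with per-cluster 95% confidence ellipses — on synthetic data. It is not the thesis' implementation: the mixture-design tuning, the sequential quadratic programming step and the Kappa/Kendall agreement analysis are omitted, and every name and number is an assumption.
```python
# Illustrative sketch, assuming synthetic "power quality" indices: rotated factor
# scores, Ward clustering of the scores, and the semi-axes of each cluster's
# 95% confidence ellipse (chi-square with 2 degrees of freedom).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import chi2

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))                                    # two hidden factors
X = latent @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(200, 6))  # six correlated indices

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)        # varimax-rotated loadings
scores = fa.transform(X)                                              # factor scores per unit

labels = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")  # Ward, 3 clusters

for c in np.unique(labels):
    cov = np.cov(scores[labels == c].T)
    axes = np.sqrt(np.linalg.eigvalsh(cov) * chi2.ppf(0.95, df=2))    # ellipse semi-axes
    print(f"cluster {c}: n={np.sum(labels == c)}, 95% ellipse semi-axes={np.round(axes, 2)}")
```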

    Rethinking the risk matrix

    So far, risk has mostly been defined as the expected value of a loss, mathematically P·L (where P is the probability of an adverse event and L is the loss incurred as a consequence of that event). The so-called risk matrix follows from this definition. This definition of risk is justified in a long-term “managerial” perspective, in which it is conceivable to distribute the effects of an adverse event over a large number of subjects or a large number of recurrences. In other words, this definition is mostly justified in frequentist terms. Moreover, according to this definition, in two extreme situations (high-probability/low-consequence and low-probability/high-consequence), the estimated risk is low. This logic runs against the principles of sustainability and continuous improvement, which should instead impose both a continuous search for lower probabilities of adverse events (higher and higher reliability) and a continuous search for lower impact of adverse events (in accordance with the fail-safe principle). In this work a different definition of risk is proposed, which stems from the idea of safeguard: (1 − Risk) = (1 − P)(1 − L). According to this definition, the risk level can be considered low only when both the probability of the adverse event and the loss are small. Such a perspective, in which the calculation of the safeguard is privileged over the calculation of risk, would possibly avoid exposing society to catastrophic consequences, sometimes due to wrong or oversimplified use of probabilistic models. Therefore, it can be seen as the citizen’s perspective on the definition of risk.
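    A small numerical illustration of the contrast drawn above; the probabilities and normalised losses are arbitrary example values, not figures from the paper.
```python
# Compare the classical expected-loss definition P*L with the safeguard-based
# definition Risk = 1 - (1 - P)(1 - L) on three illustrative scenarios.
scenarios = {
    "high-probability / low-consequence": (0.9, 0.01),
    "low-probability / high-consequence": (0.01, 0.9),
    "low-probability / low-consequence":  (0.01, 0.01),
}
for name, (P, L) in scenarios.items():
    classical = P * L                     # low in both extreme cases
    safeguard = 1 - (1 - P) * (1 - L)     # low only when both P and L are small
    print(f"{name}: P*L = {classical:.4f}, 1-(1-P)(1-L) = {safeguard:.4f}")
```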