
    VOCAL REPERTORY OF TWO SPECIES OF THE LEPTODACTYLUS PENTADACTYLUS GROUP (ANURA, LEPTODACTYLIDAE)

    Among frogs, vocalizations play important roles in social interactions. Herein we describe five new types of vocalizations for two foam-nesting species of the Leptodactylus pentadactylus group, L. syphax and L. labyrinthicus. Behavioral observations and recordings were made at four localities within the Cerrado biome, in southeastern and central Brazil. Before emitting advertisement calls, males of L. syphax often started producing a sequence of notes, which gradually turned into the advertisement call. These different notes may be an introductory call, which would serve to prepare the vocal structures for the emission of the high-frequency/amplitude advertisement calls. A male of L. syphax was emitting advertisement calls when a female approached and started to emit brief, low-amplitude calls; these vocalizations are probably reciprocation calls. Males of L. labyrinthicus involved in agonistic interactions can emit vocal cracks (encounter calls) and deep rough sounds (territorial calls). Five courting males of L. labyrinthicus released screams with their mouths slightly opened in response to the approach of human observers. We conclude that these screams do not represent distress or territorial calls.

    A Light Higgs at the LHC and the B-Anomalies

    After the Higgs discovery, the LHC has been looking for new resonances decaying into pairs of Standard Model (SM) particles. Recently, the CMS experiment observed an excess in the di-photon channel, with a di-photon invariant mass of about 96 GeV. This mass range is similar to that of an excess observed in the search for the associated production of Higgs bosons with the Z neutral gauge boson at LEP, with the Higgs bosons decaying to bottom-quark pairs. On the other hand, the LHCb experiment observed a discrepancy with respect to the SM expectation in the ratio of the decays of B mesons to K mesons and a pair of leptons, R_{K^{(*)}} = BR(B → K^{(*)} μ+μ−)/BR(B → K^{(*)} e+e−). This observation provides a hint of the violation of lepton-flavor universality in the charged-lepton sector and may be explained by the existence of a vector boson originating from a U(1)_{L_μ − L_τ} symmetry and heavy quarks that mix with the left-handed down quarks. Since the coupling to heavy quarks could lead to sizable Higgs di-photon rates in the gluon-fusion channel, in this article we propose a common origin of these anomalies, identifying a Higgs associated with the breakdown of the U(1)_{L_μ − L_τ} symmetry, and at the same time responsible for the quark mixing, with the one observed at the LHC. We also discuss the constraints on the identification of the same Higgs with the one associated with the bottom-quark pair excess observed at LEP. Comment: 34 pages, 5 figures, 3 tables. v2: 1 figure added, motivation clarified, version matched to JHE
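    The lepton-flavor-universality ratio discussed in this abstract can be written compactly; in the SM the gauge couplings are lepton-flavor universal, so the ratio is expected to be close to one:

    ```latex
    R_{K^{(*)}} \;=\; \frac{BR\!\left(B \to K^{(*)}\, \mu^+ \mu^-\right)}
                           {BR\!\left(B \to K^{(*)}\, e^+ e^-\right)}
    \;\overset{\text{SM}}{\simeq}\; 1
    ```

    A measured deviation of this ratio from unity is therefore a direct probe of lepton-flavor-universality violation.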

    Stabat Mater

    Giovanni Pierluigi da Palestrina's Stabat Mater, arranged by Richard Wagner for a double chorus of mixed voices (soprano, alto, tenor, and bass).

    Knowledge Elicitation in Deep Learning Models

    Though a buzzword in modern problem-solving across various domains, deep learning presents a significant challenge - interpretability. This thesis journeys through a landscape of knowledge elicitation in deep learning models, shedding light on feature visualization, saliency maps, and model distillation techniques. These techniques were applied to two deep learning architectures: convolutional neural networks (CNNs) and a black box package model (Google Vision). Our investigation provided valuable insights into their effectiveness in eliciting and interpreting the encoded knowledge. While they demonstrated potential, limitations were also observed, suggesting room for further development in this field. This work not only highlights the need for more transparent, more explainable deep learning models, but also encourages the development of innovative techniques to extract knowledge, ensuring responsible deployment and emphasizing the importance of transparency and comprehension in machine learning. In addition to evaluating existing methods, this thesis also explores the potential for combining multiple techniques to enhance the interpretability of deep learning models. A blend of feature visualization, saliency maps, and model distillation techniques was used in a complementary manner to extract and interpret the knowledge from our chosen architectures. Experimental results highlight the utility of this combined approach, revealing a more comprehensive understanding of the models' decision-making processes.
Furthermore, we propose a novel framework for systematic knowledge elicitation in deep learning, which cohesively integrates these methods. This framework showcases the value of a holistic approach toward model interpretability rather than relying on a single method. Lastly, we discuss the ethical implications of our work. As deep learning models continue to permeate various sectors, from healthcare to finance, ensuring their decisions are explainable and justified becomes increasingly crucial. Our research underscores this importance, laying the groundwork for creating more transparent, accountable AI systems in the future.
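    As a toy illustration of one of the techniques named above (not the thesis code): gradient-based saliency assigns each input feature the magnitude of the model output's gradient with respect to that feature. For a tiny hand-built ReLU network, the gradient can be computed analytically, as sketched here with hypothetical weights:

    ```python
    # Toy gradient-based saliency sketch. The network and weights below are
    # hypothetical illustrations, not the models studied in the thesis.

    def forward(x, w1, b1, w2):
        # Hidden layer: ReLU(w1 @ x + b1); scalar output: w2 @ hidden.
        h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        y = sum(w * hi for w, hi in zip(w2, h))
        return y, h

    def saliency(x, w1, b1, w2):
        """Saliency of input feature j = |dy/dx_j|, computed analytically:
        dy/dx_j = sum_i w2[i] * 1[h_i > 0] * w1[i][j] (ReLU gradient)."""
        _, h = forward(x, w1, b1, w2)
        return [abs(sum(w2[i] * w1[i][j] for i in range(len(h)) if h[i] > 0))
                for j in range(len(x))]

    # Hypothetical 2-input, 2-hidden-unit network.
    w1 = [[1.0, -2.0], [0.5, 0.5]]
    b1 = [0.0, -0.1]
    w2 = [1.0, 1.0]
    print(saliency([1.0, 0.2], w1, b1, w2))  # → [1.5, 1.5]
    ```

    In practice, frameworks obtain the same input gradient via automatic differentiation (backpropagation to the input), and a saliency map is this quantity computed per pixel of an image.
    
    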