
    Major Indian spices- An introspection on variability in quality

    Indian spices like black pepper, cardamom, ginger, turmeric and cinnamon are valued for their culinary and nutraceutical properties. The quality attributes that impart these properties are the essential oil, the oleoresin and the aroma/pungency principles. Variability in the essential oil constituents of black pepper, the relevance of bulk density, Codex standards and the role of phenolics in determining quality traits are of great academic and industrial relevance. Curing of turmeric and maturity at harvest play a crucial role in drying and in curcumin content, and geographical location strongly influences the curcumin content of turmeric. The coumarin content of cinnamon and cassia has implications for industrial application. This article offers an introspective review of the research programmes on quality attributes of spices carried out at the ICAR-Indian Institute of Spices Research over the last three decades, in comparison with the international scenario.

    The Unity of Normanitas: Norman Identity in Twelfth-Century Scotland and Southern Italy

    Scholars have rigorously debated the extent to which the Normans remained a definitively identifiable group as they branched out from Normandy in endeavors of conquest and expansion. In the twentieth century, historians such as Charles Homer Haskins and David Douglas maintained the unity of Norman identity throughout the British Isles, southern Italy, and the crusader states. Other scholars, like R. H. C. Davis, argued that the Normans were merely extraordinary cultural assimilators and decried the notion of Norman unity, or Normanitas, as a myth propagated by chroniclers and historians dating back to the tenth century. Drawing upon recent scholarship, this thesis challenges the stark dichotomy of Norman unity/disunity posited by twentieth-century historians. With the Norman identity debate in mind, this thesis offers a comparative examination of Norman identity, influence, and institutions in Scotland and southern Italy during the longue durée of the twelfth century. Through analyses of Norman martial identity and influence, administrative governance and state-making, and ethnicity and kinship, this thesis demonstrates how Norman identity, influence, and institutions were simultaneously evident and evolving in the peripheral areas of Europe, which Keith Stringer has styled the ‘Norman Edge.’ Thus, this analysis underscores that, although Norman identity indeed waned over time, Normanitas remained palpable on the peripheries of Europe until the final quarter of the twelfth century.

    Prompt Electromagnetic Transients from Binary Black Hole Mergers

    Binary black hole (BBH) mergers provide a prime source for current and future interferometric GW observatories. Massive BBH mergers may often take place in plasma-rich environments, leading to the exciting possibility of a concurrent electromagnetic (EM) signal observable by traditional astronomical facilities. However, many critical questions about the generation of such counterparts remain unanswered. We explore mechanisms that may drive EM counterparts with magnetohydrodynamic simulations treating a range of scenarios involving equal-mass black-hole binaries immersed in an initially homogeneous fluid with uniform, orbitally aligned magnetic fields. We find that the time development of Poynting luminosity, which may drive jet-like emissions, is relatively insensitive to aspects of the initial configuration. In particular, over a significant range of initial values, the central magnetic field strength is effectively regulated by the gas flow to yield a Poynting luminosity of $10^{45}$–$10^{46}\,\rho_{-13} M_8^2\ \mathrm{erg\,s^{-1}}$, with BBH mass scaled as $M_8 \equiv M/(10^8\,M_\odot)$ and ambient density scaled as $\rho_{-13} \equiv \rho/(10^{-13}\,\mathrm{g\,cm^{-3}})$. We also calculate the direct plasma synchrotron emissions processed through geodesic ray-tracing. Despite lensing effects and dynamics, we find the observed synchrotron flux varies little leading up to merger. Comment: 22 pages, 21 figures; additional reference + clarifying text added to match published version
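    The luminosity scaling quoted in the abstract can be evaluated numerically; the sketch below is illustrative only (the function name and sample values are ours, not the paper's), with the prefactor spanning the quoted $10^{45}$–$10^{46}$ range:

```python
# Poynting luminosity scaling from the abstract:
#   L ~ (1e45 to 1e46) * rho_-13 * M_8^2  [erg/s]
# where M_8 = M / (1e8 M_sun) and rho_-13 = rho / (1e-13 g/cm^3).

def poynting_luminosity(m_8, rho_13, prefactor=1e45):
    """Estimated Poynting luminosity in erg/s for a BBH of total mass
    m_8 (units of 1e8 solar masses) in an ambient medium of density
    rho_13 (units of 1e-13 g/cm^3)."""
    return prefactor * rho_13 * m_8**2

# Example: a 1e8 M_sun binary in a 1e-13 g/cm^3 medium spans the
# quoted range depending on the prefactor.
low = poynting_luminosity(1.0, 1.0, prefactor=1e45)   # 1e45 erg/s
high = poynting_luminosity(1.0, 1.0, prefactor=1e46)  # 1e46 erg/s
```

    Note the quadratic mass dependence: doubling the binary mass quadruples the estimated luminosity at fixed ambient density.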

    On the Objective Evaluation of Post Hoc Explainers

    Many applications of data-driven models demand transparency of decisions, especially in health care, criminal justice, and other high-stakes environments. Modern trends in machine learning research have led to algorithms that are increasingly intricate, to the degree that they are considered to be black boxes. In an effort to reduce the opacity of decisions, methods have been proposed to construe the inner workings of such models in a human-comprehensible manner. These post hoc techniques are described as being universal explainers - capable of faithfully augmenting decisions with algorithmic insight. Unfortunately, there is little agreement about what constitutes a "good" explanation. Moreover, current methods of explanation evaluation are derived from either subjective or proxy means. In this work, we propose a framework for the evaluation of post hoc explainers on ground truth that is directly derived from the additive structure of a model. We demonstrate the efficacy of the framework in understanding explainers by evaluating popular explainers on thousands of synthetic and several real-world tasks. The framework reveals that explanations may be accurate yet misattribute the importance of individual features. Comment: 14 pages, 4 figures. Under review
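    The core idea - deriving ground-truth attributions directly from a model's additive structure - can be sketched as follows. The additive terms and the error metric here are hypothetical illustrations, not the paper's actual benchmark:

```python
import numpy as np

# An additive model f(x) = g1(x1) + g2(x2) + g3(x3): each term's value
# is, by construction, the ground-truth contribution of that feature.
terms = [lambda x: 2.0 * x,    # g1: linear
         lambda x: x ** 2,     # g2: quadratic
         lambda x: np.sin(x)]  # g3: sinusoidal

def predict(X):
    """Model output: sum of the per-feature terms."""
    return sum(g(X[:, i]) for i, g in enumerate(terms))

def ground_truth_attributions(X):
    """Per-feature contributions, read off the additive structure."""
    return np.column_stack([g(X[:, i]) for i, g in enumerate(terms)])

def attribution_error(explainer_attr, X):
    """Score a post hoc explainer's attributions against the ground
    truth, here by mean absolute error per feature."""
    return np.abs(explainer_attr - ground_truth_attributions(X)).mean(axis=0)

# For x = (1, 2, 0): f(x) = 2 + 4 + 0 = 6, attributions are (2, 4, 0).
X = np.array([[1.0, 2.0, 0.0]])
```

    An explainer's output for `X` could then be passed to `attribution_error` and compared across explainers, which is the spirit of the proposed evaluation.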

    Unfooling Perturbation-Based Post Hoc Explainers

    Monumental advancements in artificial intelligence (AI) have lured the interest of doctors, lenders, judges, and other professionals. While these high-stakes decision-makers are optimistic about the technology, those familiar with AI systems are wary of the lack of transparency in its decision-making processes. Perturbation-based post hoc explainers offer a model-agnostic means of interpreting these systems while requiring only query-level access. However, recent work demonstrates that these explainers can be fooled adversarially. This discovery has adverse implications for auditors, regulators, and other sentinels. With this in mind, several natural questions arise: how can we audit these black-box systems, and how can we ascertain that the auditee is complying with the audit in good faith? In this work, we rigorously formalize this problem and devise a defense against adversarial attacks on perturbation-based explainers. We propose algorithms for the detection (CAD-Detect) and defense (CAD-Defend) of these attacks, aided by our novel conditional anomaly detection approach, KNN-CAD. We demonstrate that our approach successfully detects whether a black-box system adversarially conceals its decision-making process and mitigates the adversarial attack on real-world data for the prevalent explainers LIME and SHAP. Comment: Accepted to AAAI-23. 9 pages (not including references and supplemental)
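    The attacks being defended against exploit the fact that perturbation-based explainers such as LIME and SHAP query the model off the data manifold, so the model can behave differently on those queries. A simplified KNN-distance detector - inspired by, but not identical to, the paper's KNN-CAD - can flag such off-manifold queries; all thresholds and data here are illustrative:

```python
import numpy as np

def knn_distance(queries, reference, k=5, skip_first=False):
    """Mean Euclidean distance from each query to its k nearest
    reference points; skip_first drops the zero self-distance when
    the queries are the reference set itself."""
    d = np.linalg.norm(queries[:, None, :] - reference[None, :, :], axis=2)
    d.sort(axis=1)
    start = 1 if skip_first else 0
    return d[:, start:start + k].mean(axis=1)

def flag_off_manifold(queries, reference, k=5, quantile=0.95):
    """Flag queries whose KNN distance exceeds the distance scale of
    the reference data itself (the quantile threshold is a toy choice)."""
    ref_scores = knn_distance(reference, reference, k=k, skip_first=True)
    threshold = np.quantile(ref_scores, quantile)
    return knn_distance(queries, reference, k=k) > threshold

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 4))  # in-distribution reference data
# Explainer-style perturbations land far from the data manifold:
perturbed = data[:10] + rng.normal(scale=5.0, size=(10, 4))
# flag_off_manifold(perturbed, data) should flag most perturbed points,
# while genuine data points should mostly pass.
```

    A scaffolded model could then answer flagged (off-manifold) queries with its honest predictor rather than a whitewashed one, which is the broad role such detection plays in auditing.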

    How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?

    Surging interest in deep learning from high-stakes domains has precipitated concern over the inscrutable nature of black-box neural networks. Explainable AI (XAI) research has led to an abundance of explanation algorithms for these black boxes. Such post hoc explainers produce human-comprehensible explanations; however, their fidelity with respect to the model is not well understood: explanation evaluation remains one of the most challenging issues in XAI. In this paper, we ask a targeted but important question: can popular feature-additive explainers (e.g., LIME, SHAP, SHAPR, MAPLE, and PDP) explain feature-additive predictors? Herein, we evaluate such explainers on ground truth that is analytically derived from the additive structure of a model. We demonstrate the efficacy of our approach in understanding these explainers applied to symbolic expressions, neural networks, and generalized additive models on thousands of synthetic and several real-world tasks. Our results suggest that all explainers eventually fail to correctly attribute the importance of features, especially when the decision-making process involves feature interactions. Comment: Accepted to NeurIPS Workshop XAI in Action: Past, Present, and Future Applications. arXiv admin note: text overlap with arXiv:2106.0837