Taraxacum mongolicum extract exhibits antimicrobial activity against respiratory tract bacterial strains in vitro and in neonatal rats by enhancing systemic Th1 immunity
Purpose: To study the antimicrobial activity of Taraxacum mongolicum extract against respiratory infection-causing bacterial strains in vitro and in neonatal rats.
Methods: The in vitro antibacterial activity was assessed by the microdilution method. Antioxidant activity was determined by ferric reducing antioxidant power (FRAP), nitro blue tetrazolium (NBT), and 2,2-diphenyl-1-picrylhydrazyl (DPPH) assays. In vivo antimicrobial activity was evaluated in a neonatal rat model. Interleukin-2 (IL-2) and interferon-gamma (IFN-γ) were estimated using enzyme-linked immunosorbent assay (ELISA).
Results: The hydro-methanol extract of T. mongolicum contained high levels of phenolics and flavonoids, and exhibited strong antimicrobial activity against respiratory infection-causing bacterial species, with MICs of 25 - 100 μg/ml and MBCs of 55 - 215 μg/ml. The highest and lowest antimicrobial activities were observed against Streptococcus pneumoniae and Haemophilus influenzae, respectively. At doses of 25 and 50 mg/kg, the extract exerted protective effects in Streptococcus pneumoniae-infected neonatal rats by boosting their Th1 immunity: it enhanced the production of interleukin-2 (IL-2), concomitant with decreased production of interferon-gamma (IFN-γ). The extract contained isoetin, hesperidin, naringenin, kaempferol, sinapinic acid, and gallic acid.
Conclusion: These results suggest that the hydro-methanolic extract of Taraxacum mongolicum and its constituents can potentially be developed for use in the management of respiratory bacterial infections.
Keywords: Respiratory tract infection, Interleukin, Taraxacum mongolicum, Immunity, Neonatal rat
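As background on the antioxidant assays above: DPPH radical-scavenging activity is conventionally reported as percent inhibition of the radical's absorbance relative to a blank control. A minimal sketch of that standard calculation (the absorbance readings below are hypothetical, not values from the study):

```python
# Percent inhibition in a DPPH assay: the standard formula
# inhibition(%) = (A_control - A_sample) / A_control * 100,
# where A_control is the absorbance of DPPH without extract
# and A_sample the absorbance with extract added.
# The numbers below are hypothetical, for illustration only.

def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Radical-scavenging activity as percent inhibition."""
    return (a_control - a_sample) / a_control * 100.0

if __name__ == "__main__":
    a_control = 0.85   # absorbance of the DPPH blank at 517 nm
    a_sample = 0.32    # absorbance with extract present
    print(f"DPPH inhibition: {dpph_inhibition(a_control, a_sample):.1f} %")
```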
Extracting the Mass Radius of ⁴He from φ-Meson Photoproduction Data
We extract the mass radius of ⁴He, a light nucleus, from near-threshold φ-meson photoproduction data of the LEPS Collaboration. We evaluate the gravitational form factor using several functional forms, including the monopole, dipole, Gaussian, and hard-sphere forms. Our analysis reveals that the Gaussian and hard-sphere form factors more accurately describe the differential cross sections in the small-|t| range near the production threshold. The extracted mass radii based on the Gaussian and hard-sphere form factors are … fm and … fm, respectively, with the combined average being smaller than the charge radius of ⁴He. More precise measurements with wider |t|-coverage of the coherent photoproduction of vector mesons off nuclear targets are necessary to distinguish between different models of the gravitational form factor and to provide further insight into the mass radius of the nucleus.
Comment: 6 pages, 3 figures
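To make the extraction procedure concrete: for a form factor normalized to F(0) = 1, the mean-square radius follows from the slope at zero momentum transfer, ⟨r²⟩ = 6 dF/dt|_{t=0}, which holds exactly for the Gaussian form F(t) = exp(t⟨r²⟩/6) with t < 0. A minimal sketch of such a fit, using synthetic dσ/dt ∝ |F(t)|² data rather than the LEPS measurements:

```python
# Hedged sketch: extract a "mass radius" by fitting a Gaussian
# gravitational form factor to synthetic differential cross-section
# data. dσ/dt ∝ |F(t)|² with F(t) = exp(t⟨r²⟩/6) for t < 0, so the
# radius enters through the slope of F at t = 0.
import numpy as np
from scipy.optimize import curve_fit

HBARC = 0.1973  # GeV·fm, converts GeV⁻¹ to fm

def dsdt_gauss(t, norm, r2):
    """dσ/dt for a Gaussian form factor; r2 = ⟨r²⟩ in GeV⁻²."""
    return norm * np.exp(2.0 * t * r2 / 6.0)

# Synthetic data generated with r = 1.9 fm (illustrative only)
t = -np.linspace(0.05, 0.5, 10)          # momentum transfer t (GeV²)
true_r2 = (1.9 / HBARC) ** 2             # ⟨r²⟩ in GeV⁻²
y = dsdt_gauss(t, 1.0, true_r2) * np.random.default_rng(0).normal(1.0, 0.05, t.size)

popt, pcov = curve_fit(dsdt_gauss, t, y, p0=[1.0, 50.0])
r_fm = np.sqrt(popt[1]) * HBARC
print(f"extracted mass radius ≈ {r_fm:.2f} fm")
```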
Hybrid Causal Logic Methodology for Risk Assessment
Probabilistic Risk Assessment (PRA) is being increasingly used in a number of industries, such as nuclear power, aerospace, and chemical processing, to name a few. PRA characterizes risk in terms of three questions: (1) What can go wrong? (2) How likely is it? (3) What are the consequences? PRA studies answer these questions by systematically postulating and quantifying undesired scenarios in a highly integrated, top-down fashion. The PRA process for technological systems typically includes the following steps: objective and scope definition, system familiarization, identification of initiating events, scenario modeling, quantification, uncertainty analysis, sensitivity analysis, importance ranking, and data analysis.
Fault trees and event trees are widely used tools for risk scenario analysis in PRAs of technological systems. This methodology is most suitable for systems made of hardware components. A more comprehensive treatment of the risks of technical systems needs to consider the entire environment within which such systems are designed and operated. This environment includes the physical environment, the socio-economic environment, and in some cases the regulatory and oversight environment. The technical system, supported by an organization of people in charge of its operation, sits at the intersection of these environments.
In order to develop a more comprehensive risk model for these systems, an important step is to extend the modeling capabilities of the conventional PRA methodology to also include risks associated with human activities and organizational factors, in addition to hardware and software failures and adverse conditions of the physical environment. The causal modeling should also extend to the influence of regulatory and oversight functions. This research offers such a methodology. It proposes a multi-layered modeling approach so that the most appropriate techniques are applied to the different individual domains of the system. The approach is called the Hybrid Causal Logic (HCL) methodology. The main layers include: (a) a model to define the safety/risk context, using the event sequence diagram (ESD) technique, which helps define the kinds of accidents and incidents that can occur in relation to the system being considered; (b) a model that captures the behaviors of the physical system (hardware, software, and environmental factors) as possible causes or contributing factors to the accidents and incidents delineated by the event sequence diagrams, using common system modeling techniques such as fault trees (FT); and (c) a model to extend the causal chain of events to its potential human and organizational roots, using Bayesian belief networks (BBN). Bayesian belief networks are particularly useful as they do not require complete knowledge of the relation between causes and effects. The integrated model is therefore a hybrid causal model with the corresponding sets of taxonomies and analytical and computational procedures.
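A toy illustration of how the BBN and FT layers can connect (a minimal sketch; all node names, probabilities, and structure are invented for illustration and are not from this research): a small BBN supplies the probability of a human-error basic event, which is then quantified inside a fault tree alongside hardware failures.

```python
# Minimal sketch of the hybrid idea: a tiny BBN sets the probability
# of a "human error" basic event, which feeds a fault tree whose top
# event combines it with a redundant hardware failure.
# All node names and probabilities are hypothetical.

# --- BBN layer: P(human_error) depends on training quality ---
p_training_poor = 0.2                 # root-node prior
p_err_given_poor = 0.15               # conditional probability table
p_err_given_good = 0.02
p_human_error = (p_training_poor * p_err_given_poor
                 + (1 - p_training_poor) * p_err_given_good)

# --- FT layer: TOP = human_error OR (pump_A AND pump_B) ---
p_pump = 1e-3                         # independent pump failures
p_and = p_pump * p_pump               # AND gate
p_top = 1 - (1 - p_human_error) * (1 - p_and)   # OR gate

print(f"P(human error via BBN) = {p_human_error:.4f}")
print(f"P(top event)           = {p_top:.4f}")
```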
In this research, a methodology to combine fault trees, event trees or event sequence diagrams, and Bayesian belief networks has been introduced. Since such hybrid models involve significant interdependencies, the nature of these dependencies is first determined to pave the way for developing proper algorithmic solutions of the logic model. The major achievements of this work are: (1) development of the Hybrid Causal Logic model concept and quantification algorithms; (2) development and testing of a computer implementation of the algorithms (collaborative work); (3) development and implementation of algorithms for HCL-based importance measures, an uncertainty propagation method for the BBN models, and algorithms for qualitative-quantitative Bayesian belief networks; and (4) development and testing of the Integrated Risk Information System (IRIS) software based on the HCL methodology.
An Analysis of Parton Distribution Functions of the Pion and the Kaon with the Maximum Entropy Input
We present pion and kaon parton distribution functions from a global QCD analysis of the experimental data within the framework of the dynamical parton model. We use the DGLAP equations with parton-parton recombination corrections and a valence input of uniform distribution, which maximizes the information entropy. At our input scale, there are no sea quark or gluon distributions: all the sea quarks and gluons of the pion and the kaon are generated entirely from parton splitting processes. A mass-dependent parton splitting kernel is applied for the strange quark distribution in the kaon. The obtained valence quark and sea quark distributions at high Q² are compatible with existing experimental measurements. Furthermore, the asymptotic behaviours of the parton distribution functions at small and large x have been studied for both the pion and the kaon. Lastly, the first three moments of the parton distributions at a high scale are calculated, and these are consistent with other theoretical predictions.
Comment: 12 pages, 12 figures, minor modification
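To illustrate the maximum-entropy input: with only the valence number sum rule as a constraint, the flattest (entropy-maximizing) choice on x ∈ [0, 1] is the uniform distribution v(x) = 1, whose moments ⟨xⁿ⟩ = 1/(n+1) are easy to verify numerically. A minimal sketch (the uniform input is from the abstract; the numerical check is ours):

```python
# Moments <x^n> = ∫₀¹ x^n v(x) dx of the uniform valence input
# v(x) = 1, which maximizes the information entropy subject to
# the number sum rule ∫₀¹ v(x) dx = 1.
from scipy.integrate import quad

def v(x):
    """Uniform valence distribution on [0, 1]."""
    return 1.0

for n in (1, 2, 3):
    moment, _ = quad(lambda x: x**n * v(x), 0.0, 1.0)
    print(f"<x^{n}> = {moment:.4f}  (exact: {1/(n+1):.4f})")
```

Note that each valence (anti)quark then carries ⟨x⟩ = 1/2, so the two valence partons of the pion saturate the momentum sum rule at the input scale, consistent with the absence of sea quarks and gluons there.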
Statistics for Spatially Stratified Heterogeneous Data
Spatial statistics is dominated by spatial autocorrelation (SAC) based Kriging and Bayesian hierarchical models (BHM), and by spatial local heterogeneity based hotspot and geographical regression methods, appraised as the first and second laws of geography (Tobler 1970; Goodchild 2004), respectively. Spatial stratified heterogeneity (SSH), the phenomenon of a partition in which values within strata are more similar than those between strata (examples are climate zones, land-use classes, and remote sensing classifications), is prevalent in geography and has been understood since ancient Greece, yet it is surprisingly neglected in spatial statistics, probably due to the existence of hundreds of classification algorithms. In this article, we go beyond the classifications and show that SSH is a source of sample bias, statistical bias, modelling confounding, and misleading confidence intervals (CI), and we recommend robust solutions to overcome these problems. In the meantime, we elaborate four benefits of SSH: creating identical PDFs within a stratum, equivalent to random sampling; the spatial pattern within strata; the borders between strata as specific information for nonlinear causation; and general interaction by overlaying two spatial patterns. We develop the equation of SSH and discuss its context. This comprehensive investigation formulates the statistics for SSH, presenting a new principle and toolbox in spatial statistics.
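For readers unfamiliar with SSH measures: the standard statistic for SSH in this literature is the geographical detector q-statistic, q = 1 − Σ_h N_h σ_h² / (N σ²), which compares within-stratum variance to total variance (whether this is the "equation of SSH" the abstract refers to is our assumption). A minimal sketch:

```python
# q-statistic of the geographical detector: q = 1 - SSW/SST,
# where SSW = Σ_h N_h σ_h² is the within-strata sum of squares
# and SST = N σ² is the total sum of squares. q = 0 means no SSH;
# q = 1 means the strata explain the variable perfectly.
import numpy as np

def q_statistic(values, strata):
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    sst = values.size * values.var()            # total sum of squares
    ssw = sum(values[strata == h].size * values[strata == h].var()
              for h in np.unique(strata))       # within-strata part
    return 1.0 - ssw / sst

# Toy example: two strata with clearly different means -> q near 1
y = np.array([1.0, 1.2, 0.9, 5.1, 4.8, 5.0])
h = np.array([0, 0, 0, 1, 1, 1])
print(f"q = {q_statistic(y, h):.3f}")
```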
Wespeaker baselines for VoxSRC2023
This report showcases the results achieved using the wespeaker toolkit for the VoxSRC2023 Challenge. Our aim is to provide participants, especially those with limited experience, with clear and straightforward guidelines for developing their initial systems. Via well-structured recipes and strong results, we hope to offer an accessible and sufficiently good starting point for all interested individuals. In this report, we describe the results achieved on the VoxSRC2023 dev set using the pretrained models; for the results on the evaluation set, see the CodaLab evaluation server.
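As background for readers new to speaker-verification baselines of this kind: systems typically score a trial by the cosine similarity between two speaker embeddings and compare it to a threshold. A minimal sketch with random vectors standing in for real embeddings (the embedding extractor itself, e.g. a pretrained model, is outside this sketch):

```python
# Cosine scoring for a speaker-verification trial: embeddings of the
# enrollment and test utterances are compared; a score above a tuned
# threshold means "same speaker". The vectors here are random
# stand-ins for embeddings from a pretrained model.
import numpy as np

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    return float(np.dot(emb_a, emb_b)
                 / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

rng = np.random.default_rng(0)
enroll = rng.normal(size=256)      # enrollment-utterance embedding
test = rng.normal(size=256)        # test-utterance embedding

score = cosine_score(enroll, test)
threshold = 0.3                    # tuned on a dev set in practice
print(f"score = {score:.3f} -> {'accept' if score > threshold else 'reject'}")
```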
Incentive Compatibility for AI Alignment in Sociotechnical Systems: Positions and Prospects
The burgeoning integration of artificial intelligence (AI) into human society brings forth significant implications for societal governance and safety. While considerable strides have been made in addressing AI alignment challenges, existing methodologies primarily focus on technical facets, often neglecting the intricate sociotechnical nature of AI systems, which can lead to a misalignment between development and deployment contexts. To this end, we posit a new problem worth exploring: the Incentive Compatibility Sociotechnical Alignment Problem (ICSAP). We hope this can call for more researchers to explore how to leverage the principles of Incentive Compatibility (IC) from game theory to bridge the gap between technical and societal components and to maintain AI consensus with human societies in different contexts. We further discuss three classical game-theoretic problems for achieving IC: mechanism design, contract theory, and Bayesian persuasion; we examine the perspectives, potential, and challenges of each for solving ICSAP, and provide preliminary implementation conceptions.
Comment: 13 pages, 2 figures
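To make "incentive compatibility" concrete for readers outside game theory (an illustration of the general concept, not of this paper's proposal): in a second-price (Vickrey) auction, truthful bidding is a weakly dominant strategy, making it the textbook incentive-compatible mechanism. A minimal sketch:

```python
# Second-price (Vickrey) auction: the winner pays the second-highest
# bid, so no bidder can gain by misreporting their true value --
# the textbook incentive-compatible mechanism.

def vickrey_utility(my_bid, my_value, other_bids):
    """Bidder's utility: value minus price if they win, else 0."""
    highest_other = max(other_bids)
    if my_bid > highest_other:
        return my_value - highest_other   # pays the second-highest price
    return 0.0

value = 10.0
others = [6.0, 8.0]
# Deviating from the truthful bid never helps (weak dominance):
for bid in (4.0, 10.0, 15.0):             # underbid, truthful, overbid
    print(f"bid {bid:>5}: utility = {vickrey_utility(bid, value, others)}")
```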