91 research outputs found
A Theoretical Framework on the Peculiarity of Doing Business in China—An Extensive Review on HBSP China Business Cases
After reviewing 397 Asia-Pacific business case studies published by Harvard Business School Publishing (HBSP) from 2005 to 2013, and comparing the 166 China cases with the 231 non-China cases, this paper proposes a theoretical framework: the peculiarity of doing business in China. Despite their great contribution to fulfilling the urgent need for China case studies in business education, and to revealing the business-government relationship as the pivotal challenge of doing business in China, the existing cases leave a question unanswered, and indeed unacknowledged: how has this relationship been leveraged as a peculiar and decisive competitive advantage that allows indigenous businesses (with inferior resources) to outperform foreign direct investors (with superior resources) in China? The combination of three identified cognitive weaknesses has been the prevailing barrier that hinders Western scholars from acknowledging the peculiarity of doing business in China and from understanding its politically dominated, culturally oriented business environment. The result is a stereotyped application of Western management frameworks to perceiving, observing, and interpreting the pseudo-socialist business environment and behaviors of China, where business is by nature NOT market-oriented as it is in Western countries. The ignored fact is that the combination of government and the Guanxi network constitutes the backbone of this business environment, in which "what you can do depends on who you know" is the core determinant of organizational and individual behavior, supporting and protecting China's peculiarly structured chain of beneficiaries. Lastly, given the lag between business education and practice, the proposed framework may serve in a timely way to enrich the paradigm of international business management.
APIS: accurate prediction of hot spots in protein interfaces by combining protrusion index with solvent accessibility
Background: It is well known that most of the binding free energy of protein interactions is contributed by a few key hot spot residues. These residues are crucial for understanding the function of proteins and for studying their interactions. Experimental hot spot detection methods such as alanine scanning mutagenesis are not applicable on a large scale, since they are time-consuming and expensive. Therefore, reliable and efficient computational methods for identifying hot spots are greatly needed. Results: In this work, we introduce an efficient approach that uses a support vector machine (SVM) to predict hot spot residues in protein interfaces. We systematically investigate 62 features drawn from a combination of protein sequence and structure information. Then, to remove redundant and irrelevant features and improve prediction performance, feature selection is performed using the F-score method. Based on the selected features, nine individual-feature-based predictors are developed to identify hot spots using SVMs. Furthermore, a new ensemble classifier, APIS (A combined model based on Protrusion Index and Solvent accessibility), is developed to further improve prediction accuracy. Results on two benchmark datasets, ASEdb and BID, show that the proposed method yields significantly better prediction accuracy than methods previously published in the literature. In addition, we demonstrate the predictive power of our method by modelling two protein complexes, the calmodulin/myosin light chain kinase complex and the heat shock locus gene products U and V complex, for which our method identifies more hot spots than other state-of-the-art methods. Conclusion: We have developed an accurate prediction model for hot spot residues, given the structure of a protein complex. A major contribution of this study is to propose several new features based on the protrusion index of amino acid residues, which significantly improve the prediction performance for hot spots. Moreover, we identify a compact and useful feature subset with important implications for identifying hot spot residues. Our results indicate that these features are more effective than evolutionary conservation, pairwise residue potentials and other traditional features considered previously, and that combining our features with traditional ones may support the creation of a discriminative feature set for efficient prediction of hot spot residues. The data and source code are available at http://home.ustc.edu.cn/~jfxia/hotspot.html.
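The F-score criterion the abstract mentions can be sketched in a few lines. This is a minimal illustration of F-score feature ranking, not the paper's pipeline: the toy data below stand in for the real residue features, and only the ranking step is shown (the downstream SVM is omitted).

```python
# F-score feature ranking: features whose means differ strongly between the
# positive (hot spot) and negative classes, relative to within-class
# variance, get high scores. Toy data; not the paper's real features.

def f_score(pos, neg):
    """F-score of one feature given its positive/negative sample values."""
    mp = sum(pos) / len(pos)                 # mean over positive samples
    mn = sum(neg) / len(neg)                 # mean over negative samples
    m = (sum(pos) + sum(neg)) / (len(pos) + len(neg))  # overall mean
    var_p = sum((x - mp) ** 2 for x in pos) / (len(pos) - 1)
    var_n = sum((x - mn) ** 2 for x in neg) / (len(neg) - 1)
    return ((mp - m) ** 2 + (mn - m) ** 2) / (var_p + var_n)

def rank_features(X_pos, X_neg):
    """Return feature indices sorted by descending F-score, plus the scores."""
    n_feat = len(X_pos[0])
    scores = [f_score([row[i] for row in X_pos],
                      [row[i] for row in X_neg]) for i in range(n_feat)]
    return sorted(range(n_feat), key=lambda i: -scores[i]), scores
```

On a toy set where feature 0 separates the classes and feature 1 does not, feature 0 is ranked first; selecting the top-k indices before training the SVM mirrors the selection step described above.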
Self-supervised Likelihood Estimation with Energy Guidance for Anomaly Segmentation in Urban Scenes
Robust autonomous driving requires agents to accurately identify unexpected
areas in urban scenes. To this end, two critical issues remain open: how to
design a suitable metric for measuring anomalies, and how to properly generate
training samples of anomaly data. Previous efforts usually resort to
uncertainty estimation and sample synthesis from classification tasks, which
ignore context information and sometimes require auxiliary datasets with
fine-grained annotations. In contrast, in this paper we exploit the strongly
context-dependent nature of the segmentation task and design an energy-guided
self-supervised framework for anomaly segmentation, which optimizes an anomaly
head by maximizing the likelihood of self-generated anomaly pixels. To this
end, we design two estimators for anomaly likelihood: a simple task-agnostic
binary estimator, and one that casts anomaly likelihood as the residual of a
task-oriented energy model. Based on the proposed estimators, we further
incorporate a likelihood-guided mask refinement process into our framework to
extract informative anomaly pixels for model training. We conduct extensive
experiments on the challenging Fishyscapes and Road Anomaly benchmarks,
demonstrating that without any auxiliary data or synthetic models, our method
still achieves performance competitive with other state-of-the-art schemes.
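The energy-model view of anomaly likelihood can be made concrete with the standard free-energy score over per-pixel class logits. This is a hedged sketch of that general idea, not the paper's trained estimator: the logits and threshold below are illustrative placeholders.

```python
# Per-pixel free energy E = -logsumexp(logits): a pixel with little evidence
# for any known class has high energy and is flagged as anomalous.
# Illustrative values only; not the paper's model outputs.
import math

def energy_score(logits):
    """Free energy of one pixel's class logits; higher = more anomalous."""
    m = max(logits)                      # shift for numerical stability
    return -(m + math.log(sum(math.exp(l - m) for l in logits)))

def anomaly_mask(logit_map, threshold):
    """Binary anomaly mask over a 2D grid of per-pixel logit vectors."""
    return [[energy_score(px) > threshold for px in row] for row in logit_map]
```

A pixel confidently assigned to one class (logits like [10, 0]) gets much lower energy than a pixel with uniform logits, which is what lets a threshold on energy separate in-distribution from anomalous regions.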
Learning with Noisy Labels via Self-supervised Adversarial Noisy Masking
Collecting large-scale datasets is crucial for training deep models, but
annotating the data inevitably yields noisy labels, which pose challenges to
deep learning algorithms. Previous efforts tend to mitigate this problem by
identifying and removing noisy samples, or by correcting their labels
according to statistical properties (e.g., loss values) among training
samples. In this paper, we tackle the problem from a new perspective: delving
into the deep feature maps, we empirically find that models trained with
clean and mislabeled samples manifest distinguishable activation feature
distributions. From this observation, a novel robust training approach termed
adversarial noisy masking is proposed. The idea is to regularize deep features
with a label-quality-guided masking scheme, which adaptively modulates the
input data and label simultaneously, preventing the model from overfitting
noisy samples. Further, an auxiliary task is designed to reconstruct the input
data; it naturally provides noise-free self-supervised signals that reinforce
the generalization ability of deep models. The proposed method is simple and
flexible; it is tested on both synthetic and real-world noisy datasets, where
significant improvements are achieved over previous state-of-the-art methods.
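The core mechanism, masking modulated by an estimated label quality, can be sketched abstractly. This is an illustrative simplification, not the authors' exact scheme: the loss-to-quality mapping, the deterministic prefix masking, and the `base_ratio` parameter are all assumptions made for the sake of a small example.

```python
# Label-quality-guided masking sketch: samples whose loss suggests a noisy
# label are masked more aggressively, so the model cannot memorize them.
# The mapping and masking pattern here are toy stand-ins for the real scheme.

def quality_from_loss(loss, max_loss):
    """Map a sample's loss to a [0, 1] label-quality score (low loss = clean)."""
    return max(0.0, 1.0 - loss / max_loss)

def masked_input(x, quality, base_ratio=0.5):
    """Zero out a fraction of features; noisier labels get larger masks.

    A real implementation would mask random spatial patches; zeroing a
    prefix keeps this sketch deterministic and easy to inspect.
    """
    ratio = base_ratio * (1.0 - quality)      # clean sample -> ratio ~ 0
    k = int(len(x) * ratio)
    return [0.0 if i < k else v for i, v in enumerate(x)]
```

A clean sample (quality 1.0) passes through unchanged, while a sample judged fully noisy has half of its features suppressed before the forward pass.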
T-Eval: Evaluating the Tool Utilization Capability of Large Language Models Step by Step
Large language models (LLMs) have achieved remarkable performance on various
NLP tasks and are augmented by tools for broader applications. Yet how to
evaluate and analyze the tool-utilization capability of LLMs is still
under-explored. In contrast to previous works that evaluate models
holistically, we comprehensively decompose tool utilization into multiple
sub-processes: instruction following, planning, reasoning, retrieval,
understanding, and review. Based on this decomposition, we introduce T-Eval to
evaluate tool-utilization capability step by step. T-Eval disentangles the
evaluation into several sub-domains along model capabilities, facilitating an
inner understanding of both the holistic and the isolated competencies of
LLMs. We conduct extensive experiments on T-Eval and an in-depth analysis of
various LLMs. T-Eval not only exhibits consistency with outcome-oriented
evaluation but also provides a more fine-grained analysis of LLM capabilities,
offering a new perspective on evaluating the tool-utilization ability of LLMs.
The benchmark is available at https://github.com/open-compass/T-Eval.
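The step-by-step idea can be illustrated with a trivial aggregation over the six sub-processes the abstract names. The scoring and uniform weighting below are illustrative assumptions, not T-Eval's actual protocol.

```python
# Step-wise capability scoring sketch: per-case sub-process scores are
# averaged per sub-process, then combined into one holistic number.
# Sub-process names come from the abstract; the aggregation is a toy choice.
SUBPROCESSES = ["instruct", "plan", "reason", "retrieve", "understand", "review"]

def step_scores(results):
    """Average each sub-process's score over a list of per-case score dicts."""
    return {s: sum(r[s] for r in results) / len(results) for s in SUBPROCESSES}

def overall(scores):
    """Uniform mean over sub-processes as a single holistic score."""
    return sum(scores.values()) / len(scores)
```

The value of the decomposition is visible even in this sketch: two models with the same `overall` score can differ sharply on, say, `plan` versus `retrieve`, which a single end-to-end success rate would hide.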
DeepAIR: A deep learning framework for effective integration of sequence and 3D structure to enable adaptive immune receptor analysis
Structural docking between adaptive immune receptors (AIRs), including T cell receptors (TCRs) and B cell receptors (BCRs), and their cognate antigens is one of the most fundamental processes in adaptive immunity. However, current methods for predicting AIR-antigen binding largely rely on sequence-derived features of AIRs, omitting the structural features that are essential for binding affinity. In this study, we present a deep learning framework, termed DeepAIR, for the accurate prediction of AIR-antigen binding by integrating both sequence and structure features of AIRs. DeepAIR achieves a Pearson's correlation of 0.813 in predicting the binding affinity of TCRs, and median areas under the receiver-operating characteristic curve (AUC) of 0.904 and 0.942 in predicting the binding reactivity of TCRs and BCRs, respectively. Meanwhile, using TCR and BCR repertoires, DeepAIR correctly identifies every patient with nasopharyngeal carcinoma or inflammatory bowel disease in the test data. DeepAIR thus improves AIR-antigen binding prediction and facilitates the study of adaptive immunity.
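The AUC metric the abstract reports is a standard, self-contained computation. Below is a minimal implementation via the rank-sum (Mann-Whitney) formulation; the labels and scores are toy values, not DeepAIR's outputs.

```python
# ROC-AUC as the probability that a randomly chosen positive outranks a
# randomly chosen negative; tied scores count half. Toy inputs only.

def roc_auc(labels, scores):
    """AUC over binary labels (1 = binder) and real-valued scores."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Perfect separation of binders from non-binders gives AUC 1.0, and an uninformative scorer gives 0.5, which is the scale on which the reported 0.904 and 0.942 should be read.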
Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques could be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server, located at https://relishdb.ict.griffith.edu.au, is freely available for downloading the annotation data and for blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of powerful new techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
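One of the baseline families evaluated here, TF-IDF similarity, is compact enough to sketch directly. This is a minimal, self-contained illustration; the three-token "documents" stand in for PubMed titles/abstracts and the weighting is the plain TF-IDF variant, not any benchmark's exact configuration.

```python
# TF-IDF vectors + cosine similarity: documents sharing rare terms score
# higher than documents sharing nothing. Toy corpus for illustration only.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF weight dicts for a list of tokenized documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))        # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c / len(d) * idf[t] for t, c in Counter(d).items()}
            for d in docs]

def cosine(u, v):
    """Cosine similarity of two sparse weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Ranking every candidate by `cosine` against a seed article's vector is exactly the kind of decade-old baseline the benchmark is designed to let newer methods beat.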