978,814 research outputs found
From mass to structure: An aromaticity index for high-resolution mass data of natural organic matter
Recent progress in Fourier transform ion cyclotron resonance mass spectrometry (FTICRMS) has provided extensive molecular mass data for complex natural organic matter (NOM). Structural information can be deduced solely from the molecular masses for ions with extreme molecular element ratios, in particular low H/C ratios, which are abundant in thermally altered NOM (e.g. black carbon). In this communication we propose a general aromaticity index (AI) and two threshold values as unequivocal criteria for the existence of either aromatic (AI > 0.5) or condensed aromatic (AI >= 0.67) structures in NOM. AI can be calculated from molecular formulae derived from the exact molecular masses of naturally occurring compounds containing C, H, O, N, S and P, and is especially applicable to substances with aromatic cores and few alkylations. To test the validity of the model index, AI is applied to FTICRMS data of a NOM deep-water sample from the Weddell Sea (Antarctica), a fulvic acid standard, and an artificial dataset of all theoretically possible molecular formulae. For graphical evaluation, a ternary plot is suggested for four-dimensional data representation. The proposed aromaticity index is a step towards structural identification of NOM and the molecular identification of black carbon in the environment.
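The abstract states that AI is computed from element counts in a molecular formula but does not reproduce the formula itself. As a minimal sketch, assuming the commonly cited formulation AI = (1 + C − O − S − 0.5·H) / (C − O − S − N − P), the two thresholds can be applied like this (the example formulae are illustrative, not from the paper's dataset):

```python
def aromaticity_index(C, H, O, N=0, S=0, P=0):
    """Aromaticity index from element counts of a molecular formula.

    Assumes the formulation AI = DBE_AI / C_AI with
    DBE_AI = 1 + C - O - S - 0.5*H and C_AI = C - O - S - N - P.
    Returns 0.0 when either term is non-positive (no aromaticity implied).
    """
    dbe_ai = 1 + C - O - S - 0.5 * H
    c_ai = C - O - S - N - P
    if dbe_ai <= 0 or c_ai <= 0:
        return 0.0
    return dbe_ai / c_ai

def classify(ai):
    """Apply the two thresholds proposed in the abstract."""
    if ai >= 0.67:
        return "condensed aromatic"
    if ai > 0.5:
        return "aromatic"
    return "no unambiguous aromatic structure"

# Naphthalene C10H8: AI = (1 + 10 - 4) / 10 = 0.7 -> condensed aromatic
# Hexane C6H14: DBE_AI = 0 -> AI = 0.0 -> no aromatic structure implied
```

Under this definition a pure polycyclic aromatic hydrocarbon like naphthalene clears the 0.67 threshold, while saturated aliphatics fall to zero, which matches the intended use of the thresholds as conservative criteria.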
LiveBot: Generating Live Video Comments Based on Visual and Textual Contexts
We introduce the task of automatic live commenting. Live commenting, also called 'video barrage', is an emerging feature on online video sites that allows real-time comments from viewers to fly across the screen like bullets or roll at the right side of the screen. Live comments are a mixture of opinions about the video and chitchat with other commenters. Automatic live commenting requires AI agents to comprehend the videos and interact with the human viewers who also make comments, so it is a good testbed of an AI agent's ability to deal with both dynamic vision and language. In this work, we construct a large-scale live comment dataset with 2,361 videos and 895,929 live comments. We then introduce two neural models that generate live comments based on the visual and textual contexts and that outperform previous neural baselines such as the sequence-to-sequence model. Finally, we provide a retrieval-based evaluation protocol for automatic live commenting in which the model is asked to sort a set of candidate comments by log-likelihood score and is evaluated on metrics such as mean reciprocal rank. Putting it all together, we demonstrate the first 'LiveBot'.
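The retrieval-based protocol described above can be sketched in a few lines: candidate comments are sorted by their model-assigned log-likelihood scores, and the rank of the ground-truth comment contributes a reciprocal-rank term. The scores below are hypothetical placeholders, not outputs of the paper's models:

```python
def mean_reciprocal_rank(eval_cases):
    """Compute MRR over a list of (scores, gold_index) pairs.

    `scores` holds one log-likelihood per candidate comment;
    `gold_index` marks the ground-truth comment. Candidates are
    ranked by descending score, and each case contributes 1/rank
    of the ground-truth candidate.
    """
    total = 0.0
    for scores, gold_index in eval_cases:
        order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        rank = order.index(gold_index) + 1  # ranks are 1-based
        total += 1.0 / rank
    return total / len(eval_cases)

# Two hypothetical candidate sets: the gold comment ranks 1st in the
# first case (1/1) and 2nd in the second (1/2), so MRR = 0.75.
cases = [([-1.0, -0.5, -2.0], 1), ([-0.2, -3.0], 1)]
```

Ranking a fixed candidate pool sidesteps the weak correlation between n-gram overlap metrics and comment quality, which is presumably why a retrieval-style protocol is used here.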
Model Cards for Model Reporting
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: one trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similarly detailed evaluation numbers and other relevant documentation.
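As a rough illustration of the idea, a model card can be thought of as structured metadata shipped alongside a model, with evaluation results disaggregated by group. The field names below are an assumption loosely following the sections sketched in the abstract, not the authors' actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative sketch of a model card as structured metadata.

    Field names loosely mirror the reporting sections described in
    the abstract (intended use, evaluation procedure, disaggregated
    metrics); they are not the paper's official schema.
    """
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    evaluation_procedure: str = ""
    # Benchmarked metrics per group, e.g.
    # {"Fitzpatrick I-II": {"accuracy": 0.94}, "Fitzpatrick V-VI": {"accuracy": 0.88}}
    disaggregated_metrics: dict = field(default_factory=dict)

    def worst_group(self, metric):
        """Return the group with the lowest value of `metric` --
        the kind of gap a model card is meant to surface."""
        return min(self.disaggregated_metrics,
                   key=lambda g: self.disaggregated_metrics[g].get(metric, float("inf")))

# Hypothetical card for an illustrative smile detector:
card = ModelCard(
    model_name="smile-detector",
    intended_use="Detect smiling faces in consumer photos",
    out_of_scope_uses=["emotion inference", "hiring decisions"],
    disaggregated_metrics={"group A": {"accuracy": 0.94},
                           "group B": {"accuracy": 0.88}},
)
```

Keeping per-group metrics machine-readable, rather than buried in a PDF, makes it straightforward to flag the worst-performing group automatically before deployment.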
An Evaluative Study of the Implementation of Basic Education Minimum Service Standards (SPM) in Bantul District
This research aimed to evaluate the implementation of the Basic Education Minimum Service Standards (MSS) in Bantul District. The results comprise descriptions of the 27 achievement indicators of the Basic Education MSS and a report of the implementation evaluation. The research combined quantitative and qualitative methods of the evaluation-research type. The evaluation followed Robert E. Stake's Countenance Evaluation Model, which covers Antecedent (context), Transaction (process), and Outcome evaluation, while the data were analyzed with quantitative descriptive and qualitative techniques. The results showed that the 27 Achievement Indicators (AI) of the Basic Education MSS increased slightly on some indicators from the previous year. In detail: 1) at the elementary school (SD/MI) level, the indicators among AI 1-14 fulfilled at 100% were AI 1, 12, and 13, while among AI 15-27 they were AI 20 and 21; 2) at the junior high school (SMP/MTs) level, the indicators among AI 1-14 fulfilled at 100% were AI 1, 12, and 13, while among AI 15-27 only AI 20 reached 100%; 3) to close the MSS achievement gap, a roadmap for fulfilling the Basic Education Minimum Service Standards was drawn up for 2017-2018.
