Explicating peer feedback quality and its impact on feedback implementation in EFL writing
Introduction: Although it is commonly acknowledged that peer feedback quality is crucial to the success of peer review, there is no consensus on how it should be determined. More importantly, how feedback quality interacts with other factors, such as feedback features and focus, and ultimately influences peer feedback implementation remains insufficiently investigated.
Methods: The present study examined peer feedback quality and its impact on Chinese students' feedback implementation in two argumentative writing tasks. Peer feedback quality was measured with a self-designed two-dimensional scale: accuracy and revision potential.
Results: Quantitative analyses of 5,606 implementable idea units of feedback and 440 writing drafts by 110 students revealed that feedback accuracy was at a medium level and revision potential at a low level, with accuracy demonstrating the stronger predictive power for implementation; the predictive strengths of feedback accuracy and revision potential were strongest when feedback features and focus were taken into account; overall peer feedback quality was low, and medium-quality feedback was implemented most frequently; feedback quality significantly, and most strongly, predicted implementation in combination with feedback features and focus.
Discussion: The study highlights the importance of future instruction in training students to provide and implement high-quality feedback with good accuracy and high revision potential.
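As a concrete illustration of the kind of analysis reported above, the following sketch fits a logistic regression predicting whether a feedback unit is implemented from two quality ratings. It is not the authors' analysis: the ratings, scale ranges, and coefficients are all invented.

```python
# A minimal sketch (not the authors' code) of relating two quality
# dimensions -- accuracy and revision potential -- to whether a piece of
# peer feedback gets implemented, using logistic regression.
# All data here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
accuracy = rng.integers(0, 3, n)            # hypothetical 0-2 rating per idea unit
revision_potential = rng.integers(0, 3, n)  # hypothetical 0-2 rating per idea unit
# Invented ground truth: accuracy weighs more, mirroring the reported finding.
logit = 1.2 * accuracy + 0.5 * revision_potential - 2.0
implemented = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(
    np.column_stack([accuracy, revision_potential]), implemented
)
print("coefficients (accuracy, revision potential):", model.coef_[0])
```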
Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations
The local explanation provides heatmaps on images to explain how Convolutional Neural Networks (CNNs) derive their output. Due to its visual straightforwardness, the method has been one of the most popular explainable AI (XAI) techniques for diagnosing CNNs. Through our formative study (S1), however, we captured ML engineers' ambivalent perspective on local explanations: a valuable and indispensable aid in building CNNs, yet a process that exhausts them due to the heuristic nature of detecting vulnerabilities. Moreover, steering CNNs based on the vulnerabilities learned from diagnosis seemed highly challenging. To close this gap, we designed DeepFuse, the first interactive system that realizes a direct feedback loop between a user and CNNs for diagnosing and revising a CNN's vulnerabilities using local explanations. DeepFuse helps CNN engineers systematically search for "unreasonable" local explanations and annotate new boundaries for those identified as unreasonable in a labor-efficient manner. Next, it steers the model based on the given annotations so that the model does not repeat similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using DeepFuse, participants built a more accurate and "reasonable" model than the current state of the art. Participants also found that the way DeepFuse guides case-based reasoning could practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward and make XAI-driven insights more actionable.
Comment: 32 pages, 6 figures, 5 tables. Accepted for publication in the Proceedings of the ACM on Human-Computer Interaction (PACM HCI), CSCW 202
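To make the "search for unreasonable explanations" step tangible, here is a minimal, hypothetical sketch of one possible flagging criterion: compare a saliency heatmap against a user-annotated object mask and flag explanations whose mass falls mostly outside it. DeepFuse's actual interface and criteria are richer; the function names and threshold below are assumptions.

```python
# A hedged sketch of one way to flag "unreasonable" local explanations,
# in the spirit of DeepFuse's search step (the paper's actual criteria
# are richer). A saliency heatmap is compared against a user-annotated
# object mask; low overlap marks the explanation for review.
import numpy as np

def heatmap_inside_ratio(heatmap: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of total heatmap mass that falls inside the annotated mask."""
    total = heatmap.sum()
    return float((heatmap * mask).sum() / total) if total > 0 else 0.0

def flag_unreasonable(heatmap, mask, threshold=0.3):
    # If most saliency lies outside the object the label refers to,
    # treat the explanation (and possibly the model) as suspect.
    return heatmap_inside_ratio(heatmap, mask) < threshold

# Toy usage with a random heatmap and a square annotation.
hm = np.random.rand(224, 224)
mask = np.zeros((224, 224))
mask[60:160, 60:160] = 1.0
print(flag_unreasonable(hm, mask))
```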
DebateKG: Automatic Policy Debate Case Creation with Semantic Knowledge Graphs
Recent work within the Argument Mining community has shown the applicability
of Natural Language Processing systems for solving problems found within
competitive debate. One of the most important tasks within competitive debate
is for debaters to create high quality debate cases. We show that effective
debate cases can be constructed using constrained shortest path traversals on
Argumentative Semantic Knowledge Graphs. We study this potential in the context
of a type of American Competitive Debate, called Policy Debate, which already
has a large-scale dataset targeting it, called DebateSum. We significantly
improve upon DebateSum by introducing 53,180 new examples, as well as further
useful metadata for every example, to the dataset. We leverage the txtai
semantic search and knowledge graph toolchain to produce and contribute 9
semantic knowledge graphs built on this dataset. We create a unique method for
evaluating which knowledge graphs are better in the context of producing policy
debate cases. A demo that automatically generates debate cases, along with all other code and the knowledge graphs, is open-sourced and made available to the public here: https://github.com/Hellisotherpeople/DebateKG
Comment: 8 pages, knife-edge reject from EACL 2023 and workshops, System Demonstration paper
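The following sketch illustrates the general idea of a constrained shortest-path traversal over an argument graph using networkx. It is not DebateKG's implementation (which builds on txtai); the toy graph, edge weights, and the constrained_path helper are invented for illustration.

```python
# A minimal sketch (assumptions, not DebateKG's code) of building a debate
# case as a constrained shortest path over a semantic argument graph.
# Nodes are argument snippets; edge weights are semantic distances
# (e.g., 1 - cosine similarity); the "constraint" here is that the path
# must pass through a required intermediate node.
import networkx as nx

G = nx.Graph()
edges = [  # invented argument graph with semantic distances as weights
    ("resolved: ban X", "harm: economy", 0.4),
    ("harm: economy", "impact: recession", 0.3),
    ("resolved: ban X", "harm: health", 0.6),
    ("harm: health", "impact: recession", 0.9),
]
G.add_weighted_edges_from(edges)

def constrained_path(graph, source, target, must_include):
    # Enforce the constraint by concatenating two shortest paths
    # that meet at the required intermediate node.
    first = nx.shortest_path(graph, source, must_include, weight="weight")
    second = nx.shortest_path(graph, must_include, target, weight="weight")
    return first + second[1:]

print(constrained_path(G, "resolved: ban X", "impact: recession",
                       must_include="harm: economy"))
```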
A Pairwise Dataset for GUI Conversion and Retrieval between Android Phones and Tablets
With the popularity of smartphones and tablets, users have become accustomed
to using different devices for different tasks, such as using their phones to
play games and tablets to watch movies. To capture the market, an app is often
released on both smartphones and tablets. However, although an app usually has
similar graphical user interfaces (GUIs) and functionality on phone and tablet,
developers typically start from scratch when building a tablet-compatible
version of their app, which drives up development costs and wastes existing
design resources. Researchers are attempting to employ deep learning in
automated GUI development to enhance developers' productivity.
Deep learning models rely heavily on high-quality datasets. There are currently
several publicly accessible GUI page datasets for phones, but none for pairwise
GUIs between phones and tablets. This poses a significant barrier to the
employment of deep learning in automated GUI development. In this paper, we
collect and make public the Papt dataset, which is a pairwise dataset for GUI
conversion and retrieval between Android phones and tablets. The dataset
contains 10,035 phone-tablet GUI page pairs from 5,593 phone-tablet app pairs.
We describe our approach to collecting the pairwise data, present a statistical analysis of the dataset, and discuss its advantages over other current datasets. Through preliminary experiments on this dataset, we analyse the present challenges of applying deep learning to automated GUI development and find that our dataset can support the application of several deep learning models to automatic GUI development tasks.
Comment: 10 pages, 9 figures
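As a hedged sketch of one task the dataset enables, phone-to-tablet GUI retrieval can be framed as nearest-neighbor search over embeddings. The random embeddings below stand in for the output of a real GUI encoder, which the paper does not prescribe.

```python
# A hypothetical sketch of GUI retrieval over the Papt dataset: given a
# phone GUI embedding, find the closest tablet GUI by cosine similarity.
# Embeddings here are random placeholders; in practice they would come
# from a vision or layout encoder trained on the phone-tablet pairs.
import numpy as np

rng = np.random.default_rng(42)
tablet_embeddings = rng.normal(size=(10_035, 256))   # one per tablet GUI page
phone_query = rng.normal(size=256)                   # embedding of a phone GUI

def cosine_top_k(query, corpus, k=5):
    corpus_n = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    scores = corpus_n @ query_n
    return np.argsort(-scores)[:k]

print("closest tablet GUI indices:", cosine_top_k(phone_query, tablet_embeddings))
```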
MolFM: A Multimodal Molecular Foundation Model
Molecular knowledge resides within three different modalities of information
sources: molecular structures, biomedical documents, and knowledge bases.
Effective incorporation of molecular knowledge from these modalities holds
paramount significance in facilitating biomedical research. However, existing
multimodal molecular foundation models exhibit limitations in capturing
intricate connections between molecular structures and texts, and more
importantly, none of them attempt to leverage a wealth of molecular expertise
derived from knowledge graphs. In this study, we introduce MolFM, a multimodal
molecular foundation model designed to facilitate joint representation learning
from molecular structures, biomedical texts, and knowledge graphs. We propose
cross-modal attention between atoms of molecular structures, neighbors of
molecule entities and semantically related texts to facilitate cross-modal
comprehension. We provide a theoretical analysis showing that our cross-modal
pre-training captures local and global molecular knowledge by minimizing the
distance in the feature space between different modalities of the same
molecule, as well as molecules sharing similar structures or functions. MolFM
achieves state-of-the-art performance on various downstream tasks. On
cross-modal retrieval, MolFM outperforms existing models with 12.13% and 5.04%
absolute gains under the zero-shot and fine-tuning settings, respectively.
Furthermore, qualitative analysis showcases MolFM's implicit ability to provide
grounding from molecular substructures and knowledge graphs. Code and models are available at https://github.com/BioFM/OpenBioMed.
Comment: 31 pages, 15 figures, and 15 tables
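The alignment property analyzed in the paper, minimizing feature-space distance between modalities of the same molecule, is commonly realized with an InfoNCE-style contrastive loss. The sketch below shows that generic formulation; it is not MolFM's actual training code.

```python
# A minimal sketch (not MolFM's code) of cross-modal alignment: pulling
# together embeddings of the same molecule from two modalities (e.g.,
# structure and text) with an InfoNCE-style loss, so that cross-modal
# distances shrink for matched pairs.
import torch
import torch.nn.functional as F

def info_nce(struct_emb, text_emb, temperature=0.07):
    # Rows of both tensors are aligned: row i is the same molecule.
    s = F.normalize(struct_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = s @ t.T / temperature           # pairwise similarities
    targets = torch.arange(s.size(0))        # matched pairs on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(float(loss))
```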
TeamSTEPPS and Organizational Culture
Patient safety issues persist despite the many strategies developed to prevent them. While many safety initiatives bring about improvement, they are often unsustainable and short-lived. The index hospital's goal was to build an organizational culture on a foundation that improves teamwork and sustains healthcare team engagement. Teamwork influences the efficiency of patient care, patient safety, and clinical outcomes, and has been identified as an approach for enhancing collaboration, decreasing medical errors, and building a culture of safety in healthcare. The facility implemented Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS), an evidence-based framework used for team training to produce valuable and needed changes, facilitating modification of organizational culture, increasing patient safety compliance, and solving particular issues. This study aimed to identify the correlation between TeamSTEPPS enactment and improved organizational culture in the ambulatory care nursing department of a New York City public hospital.
Question Answering with distilled BERT models: A case study for Biomedical Data
In the healthcare industry today, 80% of data is unstructured (Razzak et al., 2019). This poses a challenge for healthcare providers, who rely on that unstructured data to inform their decision-making. Although Electronic Health Records (EHRs) exist to integrate patient data, healthcare providers still struggle to find information and answers contained within unstructured data. Prior NLP and deep learning research has shown that these methods can improve information extraction from unstructured medical documents. This research expands upon those studies by developing a Question Answering system using distilled BERT models. Healthcare providers can use this system on their local computers to search for and receive answers to specific questions about patients. This paper's best TinyBERT and TinyBioBERT models achieved Mean Reciprocal Ranks (MRRs) of 0.522 and 0.284, respectively. Based on these findings, this paper concludes that TinyBERT performed better than TinyBioBERT on BioASQ Task 9b data.
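For reference, the MRR figures above can be computed as follows. This is the standard metric definition, not the paper's evaluation code, and the toy rankings are invented.

```python
# A small sketch of the Mean Reciprocal Rank (MRR) metric used to score
# the QA systems: for each question, take 1/rank of the first correct
# answer in the returned ranking, then average. Data is illustrative.
def mean_reciprocal_rank(ranked_answers, gold_answers):
    total = 0.0
    for ranking, gold in zip(ranked_answers, gold_answers):
        rr = 0.0
        for rank, answer in enumerate(ranking, start=1):
            if answer in gold:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_answers)

rankings = [["a", "b", "c"], ["x", "y"], ["m"]]
golds = [{"b"}, {"x"}, {"z"}]
print(mean_reciprocal_rank(rankings, golds))  # (1/2 + 1 + 0) / 3 = 0.5
```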
Terminology and ontology development for semantic annotation: A use case on sepsis and adverse events
NeuKron: Constant-Size Lossy Compression of Sparse Reorderable Matrices and Tensors
Many real-world data are naturally represented as a sparse reorderable
matrix, whose rows and columns can be arbitrarily ordered (e.g., the adjacency
matrix of a bipartite graph). Storing a sparse matrix in conventional ways
requires an amount of space linear in the number of non-zeros, and lossy
compression of sparse matrices (e.g., Truncated SVD) typically requires an
amount of space linear in the number of rows and columns. In this work, we
propose NeuKron for compressing a sparse reorderable matrix into a
constant-size space. NeuKron generalizes Kronecker products using a recurrent
neural network with a constant number of parameters. NeuKron updates the
parameters so that a given matrix is approximated by the product and reorders
the rows and columns of the matrix to facilitate the approximation. The updates
take time linear in the number of non-zeros in the input matrix, and the
approximation of each entry can be retrieved in logarithmic time. We also
extend NeuKron to compress sparse reorderable tensors (e.g., multi-layer
graphs), which generalize matrices. Through experiments on ten real-world
datasets, we show that NeuKron is (a) Compact: requiring up to five orders of
magnitude less space than its best competitor with similar approximation
errors, (b) Accurate: giving up to 10x smaller approximation error than its
best competitors with similar size outputs, and (c) Scalable: successfully
compressing a matrix with over 230 million non-zero entries.
Comment: Accepted to WWW 2023 - The Web Conference 2023
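A minimal sketch of the core idea as described in the abstract: address each entry by its quad-tree (Kronecker) path and let a constant-size recurrent network map that O(log n) sequence to a value. The architecture details below (embedding size, LSTM, bit interleaving) are assumptions, not the paper's exact model.

```python
# A hedged sketch of the NeuKron idea: each entry (i, j) of a 2^d x 2^d
# matrix is addressed by the sequence of quadrants it falls into (a
# Kronecker/quad-tree path), and a small recurrent network with a constant
# number of parameters maps that O(log n) sequence to an approximate value.
import torch
import torch.nn as nn

class TinyNeuKron(nn.Module):
    def __init__(self, depth: int, hidden: int = 32):
        super().__init__()
        self.depth = depth
        self.embed = nn.Embedding(4, hidden)   # 4 quadrant symbols per level
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def quadrant_path(self, i: int, j: int) -> torch.Tensor:
        # Interleave the bits of (i, j), most significant first: each step
        # picks one of four sub-squares, as in a Kronecker product.
        syms = [2 * ((i >> b) & 1) + ((j >> b) & 1)
                for b in reversed(range(self.depth))]
        return torch.tensor(syms)

    def forward(self, i: int, j: int) -> torch.Tensor:
        x = self.embed(self.quadrant_path(i, j)).unsqueeze(0)
        _, (h, _) = self.rnn(x)
        return self.out(h[-1]).squeeze()       # approximate entry value

model = TinyNeuKron(depth=10)                  # addresses a 1024 x 1024 matrix
print(float(model(3, 517)))                    # retrieval in O(log n) steps
```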
Intelligent architecture to support second generation general accounting
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Information Analysis and Management.
This study aimed to innovate the world of accounting software. After so many years, accountants still face an enormous amount of work, which is not always productive, effective, or efficient for either the accountant or the company that provides the data required to carry out the accounting. Accounting software already exists with various automation processes, from ornamentation to profitability analysis and management reporting. There is also software that is updated in accordance with accounting laws, i.e., the platform changes its mechanisms as the law changes.
Despite this software, manual work remains, and the amount of information accountants face is still very large. It is difficult for accountants to do a 100% reliable job with so much information and data at hand. One of the most common problems in the accounting world is undoubtedly the miscalculation, or outright omission, of financial or non-financial data in accounting operations (income statements, balance sheets, etc.). To render accounting operations efficient, effective, productive, error-free, and 100% reliable, an intelligent architecture has been developed to support second-generation general accounting. This architecture was designed to make existing software smarter with the help of artificial intelligence.
A study was carried out on key accounting concepts and on AI and the main process-automation techniques needed to build the model, with the aim of gathering all possible requirements for the creation of the architecture. Towards the end of the thesis, the model was validated.