Recognizing Focal Liver Lesions in Contrast-Enhanced Ultrasound with Discriminatively Trained Spatio-Temporal Model
The aim of this study is to provide an automatic computational framework to
assist clinicians in diagnosing Focal Liver Lesions (FLLs) in
Contrast-Enhanced Ultrasound (CEUS). We represent FLLs in a CEUS video clip
as an ensemble of Regions of Interest (ROIs), whose locations are modeled as
latent variables in a discriminative model. Different types of FLLs are
characterized by both spatial and temporal enhancement patterns of the ROIs.
The model is learned by iteratively inferring the optimal ROI locations and
optimizing the model parameters. To efficiently search the optimal spatial and
temporal locations of the ROIs, we propose a data-driven inference algorithm by
combining effective spatial and temporal pruning. The experiments show that our
method achieves promising results on the largest dataset in the literature (to
the best of our knowledge), which we have made publicly available.
Comment: 5 pages, 1 figure
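The alternating scheme the abstract describes (infer the optimal ROI locations under the current model, then optimize the model parameters) can be sketched as follows. This is a toy illustration only: the features, the perceptron-style update, and all names are invented stand-ins, not the paper's actual discriminative model or its spatio-temporal pruning.

```python
import numpy as np

def train_latent_roi(clips, labels, n_iters=10, lr=0.1):
    """Toy alternating scheme: infer the best-scoring ROI per clip
    under the current weights, then update the weights from the
    inferred ROIs (perceptron-style). Illustrative only; the paper's
    model and inference are more elaborate."""
    dim = clips[0].shape[1]          # feature dimension of each candidate ROI
    w = np.zeros(dim)
    for _ in range(n_iters):
        # 1) Latent inference: pick the ROI that best explains each clip.
        chosen = [clip[np.argmax(clip @ w)] for clip in clips]
        # 2) Parameter update: move clip scores toward the label (+1 / -1).
        for x, y in zip(chosen, labels):
            margin = y * (x @ w)
            if margin <= 0:          # misclassified -> update
                w += lr * y * x
    return w
```

A clip is scored by its best ROI, so the latent step and the update step reinforce each other across iterations.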
Attention-Aware Face Hallucination via Deep Reinforcement Learning
Face hallucination is a domain-specific super-resolution problem with the
goal to generate high-resolution (HR) faces from low-resolution (LR) input
images. In contrast to existing methods that often learn a single
patch-to-patch mapping from LR to HR images while disregarding the
contextual interdependency between patches, we propose a novel Attention-aware
Face Hallucination (Attention-FH) framework which resorts to deep reinforcement
learning for sequentially discovering attended patches and then performing the
facial part enhancement by fully exploiting the global interdependency of the
image. Specifically, at each time step, a recurrent policy network
dynamically specifies a new attended region by incorporating what
happened in the past. The state (i.e., face hallucination result for the whole
image) can thus be exploited and updated by the local enhancement network on
the selected region. The Attention-FH approach jointly learns the recurrent
policy network and local enhancement network through maximizing the long-term
reward that reflects the hallucination performance over the whole image.
Therefore, our proposed Attention-FH adaptively personalizes an optimal
search path for each face image according to its own characteristics.
Extensive experiments show that our approach significantly surpasses
state-of-the-art methods on in-the-wild faces with large pose and illumination
variations.
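One rollout of the loop described above (policy attends to a patch, a local enhancer updates only that patch, and a whole-image reward is computed at the end) can be sketched as follows. Both the policy and the enhancer here are hand-written toy stand-ins for the learned networks, and the "quality" measure is invented for illustration.

```python
import numpy as np

def attention_fh_rollout(image, n_steps=3, patch=2):
    """Schematic Attention-FH rollout: a stub policy selects the next
    patch given the current whole-image state, a stub enhancer updates
    only that patch, and the episode reward reflects whole-image
    quality gain. The real method trains a recurrent policy network
    and a local enhancement network jointly."""
    state = image.copy()
    for _ in range(n_steps):
        # Policy stub: attend to the patch with the lowest mean intensity
        # ("most degraded" under this toy quality measure).
        h, w = state.shape
        scores = [(state[i:i + patch, j:j + patch].mean(), i, j)
                  for i in range(0, h, patch) for j in range(0, w, patch)]
        _, i, j = min(scores)
        # Local enhancement stub: brighten only the attended patch.
        state[i:i + patch, j:j + patch] += 1.0
    # Episode reward: whole-image quality gain (toy measure).
    reward = state.mean() - image.mean()
    return state, reward
```

In the paper the reward drives joint training of both networks; here it is only computed, to show where the long-term signal would come from.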
REM-Net: Recursive Erasure Memory Network for Commonsense Evidence Refinement
When answering a question, people often draw upon their rich world knowledge
in addition to the particular context. While recent works retrieve supporting
facts/evidence from commonsense knowledge bases to supply additional
information to each question, there is still ample room to improve the quality
of the evidence. This is crucial, since evidence quality is key to answering
commonsense questions and even determines the upper bound on a QA system's
performance. In this paper, we propose a recursive erasure memory network
(REM-Net) to improve evidence quality: REM-Net is equipped with a module that
refines the evidence by recursively erasing low-quality evidence that does not
help explain the answer. Moreover, instead of retrieving evidence from existing knowledge
bases, REM-Net leverages a pre-trained generative model to generate candidate
evidence customized for the question. We conduct experiments on two commonsense
question answering datasets, WIQA and CosmosQA. The results demonstrate the
performance of REM-Net and show that the refined evidence is explainable.
Comment: Accepted by AAAI 202
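The recursive-erasure idea can be sketched as a simple fixed-point loop: score each evidence piece, drop the low-quality ones, and repeat until nothing more is erased. The `score_fn` below is an invented stand-in for REM-Net's learned memory-scoring module.

```python
def refine_evidence(evidence, score_fn, threshold=0.5, max_rounds=3):
    """Toy version of REM-Net's refinement idea: repeatedly score each
    evidence piece for how well it explains the question-answer pair
    and erase the low-quality ones. `score_fn` stands in for the
    learned scoring module; names are illustrative."""
    for _ in range(max_rounds):
        kept = [e for e in evidence if score_fn(e) >= threshold]
        if len(kept) == len(evidence):   # nothing erased -> converged
            break
        evidence = kept
    return evidence
```

Erasing in rounds rather than in one pass matters when the score of a piece depends on which other pieces are still present, which is the case for a memory network.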
LEGO-Prover: Neural Theorem Proving with Growing Libraries
Despite the success of large language models (LLMs), theorem proving remains
one of the hardest reasoning tasks and is still far from fully solved. Prior
methods using language models have demonstrated promising
results, but they still struggle to prove even middle school level theorems.
One common limitation of these methods is that they assume a fixed theorem
library during the whole theorem-proving process. However, creating new useful
theorems, or even new theories, is not only helpful but necessary for
advancing mathematics and proving harder and deeper
results. In this work, we present LEGO-Prover, which employs a growing skill
library containing verified lemmas as skills to augment the capability of LLMs
used in theorem proving. By constructing the proof modularly, LEGO-Prover
enables LLMs to utilize existing skills retrieved from the library and to
create new skills during the proving process. These skills are further evolved
(by prompting an LLM) to enrich the library on another scale. Modular and
reusable skills are constantly added to the library to enable tackling
increasingly intricate mathematical problems. Moreover, the learned library
further bridges the gap between human proofs and formal proofs by making it
easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass
rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 47.1%).
During the proving process, LEGO-Prover also manages to generate over 20,000
skills (theorems/lemmas) and adds them to the growing library. Our ablation
study indicates that these newly added skills are indeed helpful for proving
theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We
also release our code and all the generated skills.
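The growing-library mechanism (only verified lemmas enter; skills are retrieved by relevance to the current goal) can be sketched as below. Retrieval here is a toy token-overlap score, and the class and method names are invented; the paper retrieves with learned representations and proves with an LLM.

```python
class SkillLibrary:
    """Minimal sketch of a growing skill library in the spirit of
    LEGO-Prover: verified lemmas are stored as skills, retrieved by
    similarity to the current goal, and newly proved lemmas are added
    back. Token overlap stands in for learned retrieval."""

    def __init__(self):
        self.skills = {}                      # name -> statement

    def add(self, name, statement, verified):
        if verified:                          # only verified lemmas enter
            self.skills[name] = statement

    def retrieve(self, goal, k=2):
        def overlap(stmt):
            return len(set(stmt.split()) & set(goal.split()))
        ranked = sorted(self.skills.items(),
                        key=lambda kv: overlap(kv[1]), reverse=True)
        return [name for name, _ in ranked[:k]]
```

The verification gate is the important design point: because every stored skill has already been checked, retrieved skills can be reused in new proofs without re-proving them.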
TRIGO: Benchmarking Formal Mathematical Proof Reduction for Generative Language Models
Automated theorem proving (ATP) has become an appealing domain for exploring
the reasoning ability of the recent successful generative language models.
However, current ATP benchmarks mainly focus on symbolic inference and rarely
involve reasoning over complex combinations of numeric terms. In this
work, we propose TRIGO, an ATP benchmark that not only requires a model to
reduce a trigonometric expression with step-by-step proofs but also evaluates a
generative LM's reasoning ability on formulas and its capability to manipulate,
group, and factor number terms. We gather trigonometric expressions and their
reduced forms from the web, annotate the simplification process manually, and
translate it into the Lean formal language system. We then automatically
generate additional examples from the annotated samples to expand the dataset.
Furthermore, we develop an automatic generator based on Lean-Gym to create
dataset splits of varying difficulties and distributions in order to thoroughly
analyze the model's generalization ability. Our extensive experiments show our
proposed TRIGO poses a new challenge for advanced generative LMs, including
GPT-4, which is pre-trained on a considerable amount of open-source formal
theorem-proving language data, and provides a new tool to study generative
LMs' ability in both formal and mathematical reasoning.
Comment: Accepted by EMNLP 2023. Code is available at
https://github.com/menik1126/TRIG
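The dataset-expansion step (generating additional examples from the annotated samples) can be illustrated as template instantiation: take an annotated (expression, reduced form) pair and re-instantiate it with fresh coefficients. The template format below is invented for this sketch; TRIGO's actual generator works over Lean proof steps via Lean-Gym.

```python
import random

def expand_samples(templates, n=4, seed=0):
    """Illustrative take on dataset expansion: instantiate annotated
    (expression, reduced-form) templates with fresh numeric
    coefficients to obtain additional training pairs. The template
    format is an assumption of this sketch, not TRIGO's."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        expr_t, reduced_t = rng.choice(templates)
        k = rng.randint(2, 9)                 # fresh coefficient
        out.append((expr_t.format(k=k), reduced_t.format(k=k)))
    return out
```

For example, a template pair like `("{k} * sin x ^ 2 + {k} * cos x ^ 2", "{k}")` yields new instances that reduce by the same proof skeleton with a different coefficient.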
Big Data in the Big D
Presentation for the 2017 International Conference on Knowledge Management. This presentation describes the use of a link-prediction algorithm based on node embeddings to map disease nodes and reveal relationships among various diseases.
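Embedding-based link prediction of the kind the presentation describes can be sketched as: embed each node from the graph's adjacency structure, then score a candidate disease-disease link by the dot product of the two node embeddings. Truncated SVD below is a simple stand-in for the embedding method (the presentation does not specify one here), and all names are illustrative.

```python
import numpy as np

def link_scores(adj, dim=2):
    """Minimal sketch of embedding-based link prediction: embed nodes
    via truncated SVD of the adjacency matrix (a stand-in for learned
    node embeddings such as node2vec) and score candidate links by
    the dot product of the endpoint embeddings."""
    u, s, _ = np.linalg.svd(adj, full_matrices=False)
    emb = u[:, :dim] * s[:dim]                # one embedding row per node
    return emb @ emb.T                        # score[i, j] ~ link likelihood
```

Nodes that share many neighbors land close together in the embedding space, so unobserved but plausible links (e.g., related diseases) receive high scores.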
Treatment of obstructive azoospermia after inguinal hernia surgery: analysis of 17 cases
Objective To summarize the clinical features and treatment of iatrogenic vas deferens injury in the inguinal region. Methods Between January 2014 and June 2018, we collected the data of 17 patients treated in our hospital for obstructive azoospermia resulting from a previous inguinal hernia surgery, and analyzed their clinical features, findings on surgical exploration, and the characteristics of the vas deferens injuries. All the patients underwent microsurgical vas deferens anastomosis, and their mid- and long-term outcomes were followed up. Results Fourteen of the 17 patients successfully underwent vasovasostomy or vasoepididymostomy. The patients were followed up for a mean of 19.6 ± 9.5 months after the microsurgery; recanalization was achieved in 13 patients, with a mean sperm count of (33.1 ± 25.6)×10⁶/L, and natural conception by the spouse was achieved in 6 cases after surgery. Conclusions Inguinal hernia surgery is one of the main causes of vas deferens obstruction resulting in obstructive azoospermia, and recanalization can be achieved in the majority of such cases by microsurgical anastomosis.