Detection and Characterization of E-Health Research: A Bibliometrics (2001–2016)
E-health is the use of ICT to improve the ability to treat patients, facilitate behavior change, and improve health. It offers many benefits, such as healthcare cost reduction, convenience for users, and health system improvement. Several literature reviews have covered one part of the field or another, but an overall review is lacking, possibly due to the field's constant evolution; an overview of E-health research is therefore needed. We selected the related E-health literature downloaded from Web of Science and PubMed as the data source and used the visualization analysis functions of CiteSpace to convert the literature information into precise knowledge domain maps. Through further analysis of these maps, we explored the theoretical framework and the research front of the E-health field. Our study shows that over the past 15 years, the USA, England, and Australia were the three countries that published the largest number of papers. Research on Internet technology, telemedicine (m-health), and healthcare lays the basis of E-health research development. In particular, m-health, health system management, and experimental intervention have emerged and formed the new study frontier in the past 3–5 years. With the advancement of E-health projects, an increasing number of scholars have been studying the commercialization of E-health.
How Multilingual is Multilingual LLM?
Large Language Models (LLMs), trained predominantly on extensive English
data, often exhibit limitations when applied to other languages. Current
research is primarily focused on enhancing the multilingual capabilities of
these models by employing various tuning strategies. Despite their
effectiveness in certain languages, the understanding of the multilingual
abilities of LLMs remains incomplete. This study endeavors to evaluate the
multilingual capacity of LLMs by conducting an exhaustive analysis across 101
languages, and classifies languages with similar characteristics into four
distinct quadrants. By delving into each quadrant, we shed light on the
rationale behind their categorization and offer actionable guidelines for
tuning these languages. Extensive experiments reveal that existing LLMs possess multilingual capabilities that surpass our expectations, and that we can significantly improve the multilingual performance of LLMs by focusing on the distinct attributes present in each quadrant.
Aspect-Aware Latent Factor Model: Rating Prediction with Ratings and Reviews
Although latent factor models (e.g., matrix factorization) achieve good
accuracy in rating prediction, they suffer from several problems including
cold-start, non-transparency, and suboptimal recommendation for local users or
items. In this paper, we employ textual review information with ratings to
tackle these limitations. Firstly, we apply a proposed aspect-aware topic model
(ATM) on the review text to model user preferences and item features from
different aspects, and estimate the aspect importance of a user towards an
item. The aspect importance is then integrated into a novel aspect-aware latent
factor model (ALFM), which learns users' and items' latent factors based on
ratings. In particular, ALFM introduces a weighted matrix to associate those
latent factors with the same set of aspects discovered by ATM, such that the
latent factors could be used to estimate aspect ratings. Finally, the overall
rating is computed via a linear combination of the aspect ratings, which are
weighted by the corresponding aspect importance. In this way, our model alleviates the data sparsity problem and gains good interpretability for recommendation. Moreover, each aspect rating is weighted by an aspect importance,
which is dependent on the targeted user's preferences and targeted item's
features. Therefore, it is expected that the proposed method can model a user's
preferences on an item more accurately for each user-item pair locally.
Comprehensive experimental studies have been conducted on 19 datasets from Amazon and the Yelp 2017 Challenge. Results show that our method achieves significant improvements over strong baseline methods, especially for users with only a few ratings. Moreover, our model can interpret the recommendation results in depth.
Comment: This paper has been accepted by the WWW 2018 Conference.
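The prediction step this abstract describes — an overall rating formed as a linear combination of aspect ratings weighted by aspect importance — can be sketched in a few lines. The aspect names and all numeric values below are illustrative assumptions, not the authors' data or implementation:

```python
import numpy as np

# Hypothetical example: three aspects (say price, quality, service) for one
# user-item pair. In ALFM, the aspect importance comes from the topic model
# (ATM) and the per-aspect ratings come from the learned latent factors.
aspect_importance = np.array([0.5, 0.3, 0.2])  # user-item specific, sums to 1
aspect_ratings = np.array([4.0, 3.5, 5.0])     # estimated per-aspect ratings

# Overall rating: importance-weighted linear combination of aspect ratings.
overall_rating = float(aspect_importance @ aspect_ratings)
print(round(overall_rating, 2))  # → 4.05
```

Because the weights are specific to the user-item pair, the same item can receive different overall ratings from users who care about different aspects, which is the locality the abstract emphasizes.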
Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning
Compressive text summarisation offers a balance between the
conciseness issue of extractive summarisation and the factual hallucination
issue of abstractive summarisation. However, most existing compressive
summarisation methods are supervised, relying on the expensive effort of
creating a new training dataset with corresponding compressive summaries. In
this paper, we propose an efficient and interpretable compressive summarisation
method that utilises unsupervised dual-agent reinforcement learning to optimise
a summary's semantic coverage and fluency by simulating human judgment on
summarisation quality. Our model consists of an extractor agent and a
compressor agent, and both agents have a multi-head attentional pointer-based
structure. The extractor agent first chooses salient sentences from a document,
and then the compressor agent compresses these extracted sentences by selecting
salient words to form a summary without using reference summaries to compute
the summary reward. To the best of our knowledge, this is the first work on unsupervised compressive summarisation. Experimental results on three widely used datasets (Newsroom, CNN/DM, and XSum) show that our model achieves promising performance and a significant improvement on Newsroom in terms of the ROUGE metric, as well as interpretability of the semantic coverage of summarisation results.
Comment: The 4th Workshop on Simple and Efficient Natural Language Processing (SustaiNLP 2023), co-located with ACL 2023.
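The extract-then-compress pipeline described above can be caricatured in a few lines. The salience scores and the reference-free reward below are crude word-overlap stand-ins for the paper's learned pointer networks and simulated-judgment reward; every function here is a hypothetical illustration:

```python
from collections import Counter

def extract_sentences(doc_sentences, k=2):
    """Extractor-agent stand-in: keep the k sentences whose words are
    most frequent in the document (a crude salience proxy)."""
    doc_words = Counter(w for s in doc_sentences for w in s.lower().split())
    def salience(sentence):
        return sum(doc_words[w] for w in set(sentence.lower().split()))
    return sorted(doc_sentences, key=salience, reverse=True)[:k]

def compress(sentence, budget=5):
    """Compressor-agent stand-in: keep up to `budget` of the longest words,
    preserving their original order."""
    words = sentence.split()
    keep = set(sorted(words, key=len, reverse=True)[:budget])
    return " ".join(w for w in words if w in keep)

def coverage_reward(summary, document):
    """Reference-free reward proxy: fraction of the document vocabulary
    covered by the summary (no gold summaries needed)."""
    doc_vocab = set(document.lower().split())
    return len(doc_vocab & set(summary.lower().split())) / max(len(doc_vocab), 1)

doc = ["the cat sat on the mat", "dogs bark", "the cat purred"]
summary = " ".join(compress(s, budget=3) for s in extract_sentences(doc, k=2))
reward = coverage_reward(summary, " ".join(doc))
```

In the actual model, both agents are trained with reinforcement learning to maximise a learned semantic-coverage and fluency reward rather than this overlap heuristic.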
Stable Score Distillation for High-Quality 3D Generation
Although Score Distillation Sampling (SDS) has exhibited remarkable
performance in conditional 3D content generation, a comprehensive understanding
of its formulation is still lacking, hindering the development of 3D
generation. In this work, we decompose SDS as a combination of three functional
components, namely mode-seeking, mode-disengaging and variance-reducing terms,
analyzing the properties of each. We show that problems such as over-smoothness
and implausibility result from the intrinsic deficiency of the first two terms
and propose a more advanced variance-reducing term than that introduced by SDS.
Based on the analysis, we propose a simple yet effective approach named Stable Score Distillation (SSD), which strategically orchestrates each term for high-quality 3D generation and can be readily incorporated into various 3D generation frameworks and 3D representations. Extensive experiments validate the efficacy of our approach, demonstrating its ability to generate high-fidelity 3D content without succumbing to issues such as over-smoothness.
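For context, the SDS gradient that this analysis decomposes takes the standard form below (notation follows common usage; the formula is background, while the three-way decomposition is the paper's contribution):

```latex
% x = g(\theta): the rendered image; \epsilon_\phi: the diffusion model's
% noise prediction given prompt y and timestep t; \epsilon: the injected
% noise; w(t): a timestep-dependent weight.
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\epsilon_\phi(x_t; y, t) - \epsilon\bigr)\,
      \frac{\partial x}{\partial \theta}
    \right].
```

The mode-seeking, mode-disengaging, and variance-reducing components named in the abstract arise from decomposing this gradient into functional terms and analyzing each in isolation.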
Study on Signal Detection of the Instantaneous Infrared Target Based on Finite Element Analysis
This paper presents a novel method to detect the signal of an instantaneous infrared target based on Finite Element Analysis (FEA). The radiation energy produced by a flame infrared target was divided into finite element sections. By studying the distribution of the unit-area energy of the flame in the detection system and the method for calculating the flame radiation received by the photosensitive surface of the infrared detector in the optical field, we set up an FEA-based detection model for infrared targets and derived a calculation formula for the output signal of the infrared flame detection system. Furthermore, the paper analyzes the factors influencing the detection effect, covering both environmental factors and the angle of incidence of the optical system, and simulates the relationship between the parameters of the established model and the characteristics of the detected object. The experimental data were consistent with the simulation results, verifying the correctness of the established infrared target detection model.
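The finite-element idea in this abstract — summing per-element radiative contributions at the detector to obtain the output signal — can be sketched with a generic Lambertian radiometric model. The geometry factor, element values, and responsivity below are illustrative assumptions, not the paper's derived formula:

```python
import math

def detector_signal(elements, detector_area, responsivity):
    """Sum per-element irradiance contributions at the detector.

    Each element is (radiant_exitance_W_per_m2, area_m2, distance_m,
    angle_rad); a Lambertian cos(theta) / (pi * r^2) geometry factor is
    assumed for every element.
    """
    irradiance = sum(
        m * a * math.cos(theta) / (math.pi * r ** 2)
        for (m, a, r, theta) in elements
    )
    # Detector output scales with collected power times responsivity.
    return responsivity * irradiance * detector_area

# Two illustrative flame elements, each 1 cm^2, viewed on-axis at 1 m.
elements = [(5.0e3, 1.0e-4, 1.0, 0.0), (5.0e3, 1.0e-4, 1.0, 0.0)]
signal = detector_signal(elements, detector_area=1.0e-6, responsivity=1.0e3)
```

Refining the mesh (more, smaller elements) makes this discrete sum approach the continuous surface integral over the flame, which is the essence of the FEA formulation.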
Terrain Diffusion Network: Climatic-Aware Terrain Generation with Geological Sketch Guidance
Sketch-based terrain generation seeks to create realistic landscapes for
virtual environments in various applications such as computer games, animation
and virtual reality. Recently, deep learning based terrain generation has
emerged, notably the ones based on generative adversarial networks (GAN).
However, these methods often struggle to fulfill the requirements of flexible
user control and maintain generative diversity for realistic terrain.
Therefore, we propose a novel diffusion-based method, namely terrain diffusion
network (TDN), which actively incorporates user guidance for enhanced
controllability, taking into account terrain features like rivers, ridges,
basins, and peaks. Instead of adhering to a conventional monolithic denoising
process, which often compromises the fidelity of terrain details or the
alignment with user control, a multi-level denoising scheme is proposed to
generate more realistic terrains by taking into account fine-grained details,
particularly those related to climatic patterns influenced by erosion and
tectonic activities. Specifically, three terrain synthesisers are designed for
structural, intermediate, and fine-grained level denoising purposes, which
allow each synthesiser to concentrate on a distinct terrain aspect. Moreover, to maximise the efficiency of our TDN, we further introduce terrain and sketch latent spaces for the synthesisers with pre-trained terrain autoencoders. Comprehensive experiments on a new dataset constructed from NASA Topology Images clearly demonstrate the effectiveness of our proposed method, achieving state-of-the-art performance. Our code and dataset will be made publicly available.
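The three-level scheme described above can be caricatured as a coarse-to-fine cascade. The placeholder synthesisers below (simple blending, smoothing, and noise injection) only illustrate the control flow — structural first, then intermediate, then fine-grained — not the paper's trained diffusion networks:

```python
import numpy as np

def structural_synthesiser(heightmap, sketch):
    """Placeholder: impose large-scale structure from the user sketch."""
    return 0.5 * heightmap + 0.5 * sketch

def intermediate_synthesiser(heightmap):
    """Placeholder: smooth mid-scale features (stand-in for a denoising step)."""
    padded = np.pad(heightmap, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] + heightmap) / 5.0

def fine_synthesiser(heightmap, rng):
    """Placeholder: add fine-grained detail (e.g., erosion-like texture)."""
    return heightmap + 0.01 * rng.standard_normal(heightmap.shape)

rng = np.random.default_rng(0)
sketch = np.zeros((16, 16))
sketch[8, :] = 1.0                        # a user-drawn ridge line
terrain = rng.standard_normal((16, 16))   # start from noise
terrain = structural_synthesiser(terrain, sketch)
terrain = intermediate_synthesiser(terrain)
terrain = fine_synthesiser(terrain, rng)
```

In TDN itself each stage is a learned denoiser operating in the pre-trained terrain/sketch latent spaces, so the cascade runs on compact latents rather than raw heightmaps.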