Understanding Governance Structures in Shaping Greenway Implementation in City-Regions: A Case Study in Central Zhejiang Province, China
Greenway implementation in city-regions is a collective action involving a complex web of relations: between regional and local agencies, between government departments at the same administrative level, and between adjacent jurisdictions. This paper explores how greenway implementation governance is structured, and why different governance structures produce different greenway implementation processes and outcomes in a city-region. We take a case study approach to a greenway project in central Zhejiang Province (CCCZ), collecting data through field reconnaissance, in-depth interviews, and document analysis. Preliminary findings reveal that the central Zhejiang Greenway Project has evolved from ‘territorially-specialized governance’ to ‘cross-scale governance’. ‘Double-hatted’ agencies comprising government leaders and professional representatives from different agencies can create cross-scale institutional linkages both vertically (across levels of government) and horizontally (across jurisdictions).
A User-Centered Evaluation of Spanish Text Simplification
We present an evaluation of text simplification (TS) in Spanish for a
production system, by means of two corpora focused on both complex-sentence and
complex-word identification. We compare the most prevalent Spanish-specific
readability scores with neural networks, and show that the latter are
consistently better at predicting user preferences regarding TS. As part of our
analysis, we find that multilingual models underperform against equivalent
Spanish-only models on the same task, yet all models focus too often on
spurious statistical features, such as sentence length. We release the corpora
in our evaluation to the broader community with the hope of advancing the
state of the art in Spanish natural language processing.
Comment: Data at https://github.com/microsoft/BrevE-CLar
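One family of Spanish-specific readability scores the abstract alludes to is the classic Fernández Huerta index, an adaptation of Flesch Reading Ease to Spanish. Below is a minimal sketch of that formula; the naive vowel-run syllable counter is my simplification, and this is not necessarily one of the exact baselines used in the paper.

```python
import re

def count_syllables_es(word):
    # Naive Spanish syllable count: each run of vowels
    # (including accented ones) is treated as one syllable.
    return max(1, len(re.findall(r"[aeiouáéíóúü]+", word.lower())))

def fernandez_huerta(text):
    """Fernández Huerta readability index (higher = easier to read).

    L = 206.84 - 0.60*P - 1.02*F, where P is syllables per 100 words
    and F is the average number of words per sentence.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[\wáéíóúüñ]+", text.lower())
    syllables = sum(count_syllables_es(w) for w in words)
    P = 100.0 * syllables / len(words)
    F = len(words) / len(sentences)
    return 206.84 - 0.60 * P - 1.02 * F
```

A score-based baseline like this only looks at surface statistics such as word and sentence length, which is exactly the kind of spurious feature the abstract reports neural models also over-relying on.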
Multi-aspect Repetition Suppression and Content Moderation of Large Language Models
Natural language generation is one of the most impactful fields in NLP, and
recent years have witnessed its evolution brought about by large language
models (LLMs). As the key instrument for writing assistance applications, they
are generally prone to replicating or extending offensive content provided in
the input. In low-resource data regimes, they can also produce repetitive
outputs (Holtzman et al., 2019) [1]. Usually, offensive content and repetitions
are mitigated with post-hoc methods, including n-gram level blocklists, top-k
and nucleus sampling. In this paper, we introduce a combination of exact and
non-exact repetition suppression, applying token- and sequence-level
unlikelihood loss and a repetition penalty during training, at inference, and
in post-processing, respectively. We further extend the multi-level
unlikelihood loss so that it endows the model with the ability to avoid
generating offensive words and phrases from the start. Finally, with
comprehensive experiments, we demonstrate that our proposed methods work
exceptionally well in controlling the repetition and content quality of LLM
outputs.
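The token-level unlikelihood objective the abstract builds on (Welleck et al.) adds, on top of the usual negative log-likelihood of the target token, a penalty on probability mass assigned to a set of negative candidates (e.g. recently generated tokens, or blocked offensive tokens). A minimal pure-Python sketch of a single step, omitting the weighting coefficient and batching of a real implementation:

```python
import math

def unlikelihood_loss(probs, target, negative_candidates):
    """Token-level NLL plus an unlikelihood term.

    probs: dict mapping token -> model probability at this decoding step.
    negative_candidates: tokens whose probability should be pushed down
    (e.g. tokens repeated from the recent context, or blocklisted words).
    """
    nll = -math.log(probs[target])
    # -log(1 - p) grows as the model puts more mass on a negative candidate.
    unlikelihood = -sum(
        math.log(1.0 - probs[tok])
        for tok in negative_candidates
        if tok != target
    )
    return nll + unlikelihood
```

Adding a candidate to the negative set strictly increases the loss whenever the model assigns it nonzero probability, which is what drives the suppression during training.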
Green Synthesis and Application of ZSM-5 Zeolite
A wide-pore ZSM-5 molecular sieve composite prepared by the solid-phase in-situ synthesis method, and a fluid catalytic cracking (FCC) catalyst additive prepared from the same ZSM-5 molecular sieve to increase light-olefin yield, were investigated. The samples were characterized by XRD, N2 adsorption/desorption, SEM, and NH3-TPD. The results showed that the ZSM-5 molecular sieve composite prepared by the solid-phase in-situ synthesis method was a pure MFI-type zeolite material. The crystallinity of the ZSM-5 molecular sieve was 59.8 wt%. The synthesized ZSM-5 molecular sieve had a higher acid content and a wide-pore structure. The average pore size was 5.9 nm, and the BET specific surface area and micropore specific surface area of the sample were 213 m2 g–1 and 124 m2 g–1, respectively. The evaluation results indicated that the FCC catalyst additive had good selectivity for LPG, propylene, and butene, increasing propylene and butene yields by 2.28 wt% and 2.15 wt%, respectively, and also showed better heavy-oil cracking capability and coke selectivity.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Energy Efficiency Maximization in IRS-Aided Cell-Free Massive MIMO System
In this paper, we consider an intelligent reflecting surface (IRS)-aided
cell-free massive multiple-input multiple-output system, where the beamforming
at access points and the phase shifts at IRSs are jointly optimized to maximize
energy efficiency (EE). To solve the EE maximization problem, we propose an
iterative optimization algorithm by using quadratic transform and Lagrangian
dual transform to find the optimum beamforming and phase shifts. However, the
proposed algorithm suffers from high computational complexity, which hinders
its application in some practical scenarios. Responding to this, we further
propose a deep learning based approach for joint beamforming and phase shifts
design. Specifically, a two-stage deep neural network is trained offline in an
unsupervised manner and then deployed online to predict the beamforming and
phase shifts. Simulation results show that
compared with the iterative optimization algorithm and the genetic algorithm,
the unsupervised learning based approach has higher EE performance and lower
running time.
Comment: 6 pages, 4 figures
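The quadratic transform used in the abstract (Shen and Yu's fractional-programming tool) turns a ratio maximization max A(x)/B(x) into alternating updates: fix an auxiliary variable y, maximize the surrogate 2·y·√A(x) − y²·B(x) over x, then update y = √A(x)/B(x). Below is a toy single-variable sketch with EE = rate/power; the grid search is my stand-in for the convex subproblem, and the single scalar power variable is a drastic simplification of the paper's joint beamforming and phase-shift design.

```python
import math

def maximize_ee(a=1.0, p0=1.0, p_max=10.0, iters=20, grid=1000):
    """Toy EE maximization via the quadratic transform.

    EE(p) = rate(p) / power(p) = log2(1 + a*p) / (p0 + p),
    with p the transmit power and p0 the circuit power.
    """
    rate = lambda p: math.log2(1.0 + a * p)
    power = lambda p: p0 + p
    p = p_max  # start from full power
    for _ in range(iters):
        y = math.sqrt(rate(p)) / power(p)  # closed-form y-update
        # Maximize the concave surrogate over p on a grid
        # (stand-in for the convex subproblem of the paper).
        p = max((i * p_max / grid for i in range(1, grid + 1)),
                key=lambda q: 2 * y * math.sqrt(rate(q)) - y * y * power(q))
    return p, rate(p) / power(p)
```

Each iteration is guaranteed not to decrease the EE objective, which is why the scheme converges to a stationary point of the original fractional program (here, p* = e − 1 for a = p0 = 1).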
In-context Autoencoder for Context Compression in a Large Language Model
We propose the In-context Autoencoder (ICAE) for context compression in a
large language model (LLM). The ICAE has two modules: a learnable encoder
adapted with LoRA from an LLM for compressing a long context into a limited
number of memory slots, and a fixed decoder which is the target LLM that can
condition on the memory slots for various purposes. We first pretrain the ICAE
using both autoencoding and language modeling objectives on massive text data,
enabling it to generate memory slots that accurately and comprehensively
represent the original context. Then, we fine-tune the pretrained ICAE on a
small amount of instruction data to enhance its interaction with various prompts
for producing desirable responses. Our experimental results demonstrate that
the ICAE learned with our proposed pretraining and fine-tuning paradigm can
effectively produce memory slots with context compression, which can
be well conditioned on by the target LLM to respond to various prompts. The
promising results suggest that the ICAE, with its novel approach to the
long-context problem, has the potential to reduce the computation and memory
overheads of LLM inference in practice, motivating further research on context
management for LLMs. Our code and data will be released shortly.
Comment: Work in progress
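The ICAE interface can be pictured as a function from a long sequence of token embeddings to a fixed number k of "memory slots" that the decoder then conditions on. A minimal illustrative stand-in, where each slot is simply the mean of one contiguous chunk; in the actual ICAE this mapping is a LoRA-adapted LLM encoder learned with autoencoding and language-modeling objectives, not a pooling rule:

```python
def compress_to_slots(embeddings, k):
    """Map n token embeddings to k fixed memory slots.

    Here each slot is the mean of one contiguous chunk of the input;
    this is only a placeholder for the learned ICAE encoder, but it
    shows the shape contract: n vectors in, k vectors out (k << n).
    """
    n = len(embeddings)
    slots = []
    for j in range(k):
        chunk = embeddings[j * n // k : (j + 1) * n // k]
        dim = len(chunk[0])
        slots.append([sum(v[d] for v in chunk) / len(chunk) for d in range(dim)])
    return slots
```

Whatever the encoder, the decoder only ever sees the k slots, which is where the context-compression savings in computation and memory come from.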
An Unsupervised Three-way Decisions Framework of Overload Preference Based on Adjusted Weight Multi-attribute Decision-making Model
In the process of traffic control, law-enforcement officials must accurately evaluate the probability of a freight driver's overloading behavior. This study establishes a model of overloading-preference assessment based on freight drivers' individual variation. After selecting indexes, an equal-weight and an AHP-based adjusted-weight decision-making model are used, respectively, to evaluate a freight driver's overload preference. Synthesizing the results of the two models, we present a three-way decisions model to make the final judgment.
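The three-way decisions idea partitions drivers into accept, reject, and a deferred boundary region instead of forcing a binary call. A minimal sketch with a weighted attribute score; the attribute names, thresholds alpha/beta, and normalization to [0, 1] are my assumptions, not the paper's calibrated values:

```python
def three_way_decision(attributes, weights, alpha=0.7, beta=0.3):
    """Three-way classification of a freight driver's overload preference.

    attributes: indicator values normalized to [0, 1];
    weights: equal weights or AHP-adjusted weights (should sum to 1).
    Returns ACCEPT (likely overloader) if the weighted score >= alpha,
    REJECT if it is <= beta, and DEFER (needs inspection) otherwise.
    """
    score = sum(w * x for w, x in zip(weights, attributes))
    if score >= alpha:
        return "ACCEPT"
    if score <= beta:
        return "REJECT"
    return "DEFER"
```

The DEFER region is the practical payoff: officials spend inspection effort only on drivers whose evidence is genuinely ambiguous.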
An Evaluation on Large Language Model Outputs: Discourse and Memorization
We present an empirical evaluation of various outputs generated by nine of
the most widely-available large language models (LLMs). Our analysis is done
with off-the-shelf, readily-available tools. We find a correlation between
percentage of memorized text, percentage of unique text, and overall output
quality, when measured with respect to output pathologies such as
counterfactual and logically-flawed statements, and general failures like not
staying on topic. Overall, 80.0% of the outputs evaluated contained memorized
data, but outputs containing the most memorized content were also more likely
to be considered of high quality. We discuss and evaluate mitigation
strategies, showing that they reduce the rate at which memorized text is
output in the models evaluated. We conclude with a discussion of potential
implications around what it means to learn, to memorize, and to evaluate
quality text.
Comment: Preprint. Under review
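A common way to operationalize "percentage of memorized text", in the spirit of the off-the-shelf tooling the abstract mentions, is the fraction of an output's n-grams that occur verbatim in a reference corpus. A minimal sketch; the 8-token window default is my assumption, not the threshold used in the paper:

```python
def corpus_to_ngrams(corpus_tokens, n=8):
    """Precompute the set of n-grams present in the reference corpus."""
    return {tuple(corpus_tokens[i:i + n])
            for i in range(len(corpus_tokens) - n + 1)}

def memorized_fraction(output_tokens, corpus_ngrams, n=8):
    """Share of the output's n-grams found verbatim in the corpus."""
    ngrams = [tuple(output_tokens[i:i + n])
              for i in range(len(output_tokens) - n + 1)]
    if not ngrams:
        return 0.0
    hits = sum(1 for g in ngrams if g in corpus_ngrams)
    return hits / len(ngrams)
```

With a statistic like this in hand, one can correlate the memorized share against unique-text share and human quality judgments, which is the analysis the abstract reports.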