Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research
Traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods, but all of them have shortcomings. This paper analyzes existing traffic flow prediction algorithms and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. The method first analyzes the transfer probability of the roads upstream of the target road and then predicts the traffic flow at the next time step using a traffic flow equation; the Newton interior-point method is used to obtain the optimal parameter values. Finally, the fitted model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model shows good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, making it suitable for real-time traffic flow prediction.
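Since the abstract only sketches the approach, here is a minimal, hypothetical Python illustration of the transfer-probability idea: upstream flows at time t are combined through estimated transfer probabilities to predict the target road's flow at the next time step. The function names and the use of SciPy's generic bounded optimizer (standing in for the Newton interior-point method named in the abstract) are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical illustration of the transfer-probability idea: the flow entering
# a target road at time t+1 is modelled as a weighted sum of the flows on its
# upstream roads at time t, where the weights are transfer probabilities
# estimated from historical data.

def predict_flow(upstream_flows, transfer_probs):
    """Predict next-step flow on the target road from upstream flows."""
    return float(np.dot(upstream_flows, transfer_probs))

def fit_transfer_probs(hist_upstream, hist_target):
    """Estimate transfer probabilities by bounded least squares.

    hist_upstream: (T, K) array of upstream flows at times 1..T
    hist_target:   (T,)  array of target-road flows at times 2..T+1
    The paper uses a Newton interior-point method; SciPy's generic bounded
    optimizer stands in for it here.
    """
    k = hist_upstream.shape[1]

    def loss(p):
        return np.sum((hist_upstream @ p - hist_target) ** 2)

    bounds = [(0.0, 1.0)] * k          # probabilities lie in [0, 1]
    p0 = np.full(k, 1.0 / k)           # uniform initial guess
    res = minimize(loss, p0, bounds=bounds)
    return res.x

# Toy usage: two upstream roads feeding one target road.
hist_up = np.array([[120, 80], [100, 90], [130, 70]], dtype=float)
hist_tgt = np.array([150, 140, 145], dtype=float)
p = fit_transfer_probs(hist_up, hist_tgt)
print(predict_flow(np.array([110.0, 85.0]), p))
```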
Model Compression and Efficient Inference for Large Language Models: A Survey
Transformer-based large language models have achieved tremendous success. However, the significant memory and computational costs incurred during inference make it challenging to deploy large models on resource-constrained devices. In this paper, we investigate compression and efficient inference methods for large language models from an algorithmic perspective. Regarding taxonomy, as with smaller models, compression and acceleration algorithms for large language models can still be categorized into quantization, pruning, distillation, compact architecture design, and dynamic networks. However, large language models have two prominent characteristics compared to smaller models: (1) Most compression algorithms require finetuning or even retraining the model after compression, and the most notable aspect of large models is the very high cost of such finetuning or training; therefore, many algorithms for large models, such as quantization and pruning, have started to explore tuning-free approaches. (2) Large models emphasize versatility and generalization rather than performance on a single task; hence, many algorithms, such as knowledge distillation, focus on how to preserve versatility and generalization after compression. Since these two characteristics were not very pronounced in early large models, we further distinguish large language models into medium models and ``real'' large models. Additionally, we provide an introduction to some mature frameworks for efficient inference of large models, which support basic compression or acceleration algorithms and greatly facilitate model deployment for users.
Comment: 47 pages, reviews 380 papers. The work is ongoing.
Spatiotemporal-Enhanced Network for Click-Through Rate Prediction in Location-based Services
In Location-Based Services (LBS), user behavior has a strong natural dependence on spatiotemporal information; that is, user click behavior changes significantly across geographical locations and times. Appropriate spatiotemporal enhancement modeling of user click behavior and large-scale sparse attributes is key to building an LBS model. Although most existing methods have proved effective, they are difficult to apply to takeaway scenarios due to insufficient modeling of spatiotemporal information. In this paper, we address this challenge by explicitly modeling the timing and location of interactions and proposing a Spatiotemporal-Enhanced Network, namely StEN. In particular, StEN applies a Spatiotemporal Profile Activation module to capture common spatiotemporal preferences through attribute features. A Spatiotemporal Preference Activation module is further applied to model in detail the personalized spatiotemporal preferences embodied in behaviors. Moreover, a Spatiotemporal-aware Target Attention mechanism is adopted to generate different parameters for target attention at different locations and times, thereby improving the personalized spatiotemporal awareness of the model. Comprehensive experiments are conducted on three large-scale industrial datasets, and the results demonstrate the state-of-the-art performance of our methods. In addition, we have released an industrial dataset for the takeaway industry to make up for the lack of public datasets in this community.
Comment: accepted by CIKM workshop 202
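As a rough illustration of the spatiotemporal-aware target attention idea, the sketch below uses a small meta-network to generate the query-projection weights from location and time embeddings, so attention over the behavior sequence varies by place and time. The module structure, dimensions, and names are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of spatiotemporal-aware target attention: instead of a
# single fixed attention projection, a meta-network generates the query
# projection from location and time context, so attention over the user's
# behavior sequence differs by place and time.

class SpatiotemporalTargetAttention(nn.Module):
    def __init__(self, d_model: int, n_locations: int, n_time_buckets: int):
        super().__init__()
        self.d_model = d_model
        self.loc_emb = nn.Embedding(n_locations, d_model)
        self.time_emb = nn.Embedding(n_time_buckets, d_model)
        # Meta-network: maps spatiotemporal context to a flattened
        # d_model x d_model query-projection matrix.
        self.weight_gen = nn.Linear(2 * d_model, d_model * d_model)

    def forward(self, target, behaviors, loc_id, time_id):
        # target:    (B, d)     embedding of the candidate item
        # behaviors: (B, L, d)  embeddings of the user's behavior sequence
        ctx = torch.cat([self.loc_emb(loc_id), self.time_emb(time_id)], dim=-1)
        w_q = self.weight_gen(ctx).view(-1, self.d_model, self.d_model)
        query = torch.bmm(target.unsqueeze(1), w_q)            # (B, 1, d)
        scores = torch.bmm(query, behaviors.transpose(1, 2))   # (B, 1, L)
        attn = torch.softmax(scores / self.d_model ** 0.5, dim=-1)
        return torch.bmm(attn, behaviors).squeeze(1)           # (B, d)

# Toy usage
m = SpatiotemporalTargetAttention(d_model=16, n_locations=100, n_time_buckets=24)
out = m(torch.randn(2, 16), torch.randn(2, 5, 16),
        torch.tensor([3, 7]), torch.tensor([10, 22]))
print(out.shape)  # torch.Size([2, 16])
```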
Oxygenated Aromatic Compounds are Important Precursors of Secondary Organic Aerosol in Biomass Burning Emissions
Biomass burning is the largest combustion-related source of volatile organic compounds (VOCs) to the atmosphere. We describe the development of a state-of-the-science model to simulate the photochemical formation of secondary organic aerosol (SOA) from biomass-burning emissions observed in dry (RH < 20%) environmental chamber experiments. The modeling is supported by (i) new oxidation chamber measurements, (ii) detailed concurrent measurements of SOA precursors in biomass-burning emissions, and (iii) development of SOA parameters for heterocyclic and oxygenated aromatic compounds based on historical chamber experiments. We find that oxygenated aromatic compounds, including phenols and methoxyphenols, account for slightly less than 60% of the SOA formed and help our model explain the variability in the organic aerosol mass (R² = 0.68) and O/C (R² = 0.69) enhancement ratios observed across 11 chamber experiments. Despite abundant emissions, heterocyclic compounds, including furans, contribute ∼20% of the total SOA. Using pyrolysis-temperature-based or averaged emission profiles to represent SOA precursors, rather than profiles specific to each fire, provides similar results to within 20%. Our findings demonstrate the necessity of accounting for oxygenated aromatics from biomass-burning emissions and their SOA formation in chemical mechanisms.
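For readers unfamiliar with how precursor-resolved SOA contributions are typically estimated, the toy calculation below shows the general yield-based form used in SOA box models: SOA from each precursor class is its emitted mass times the fraction reacted times an assumed mass yield. All class names and numbers are placeholders, not values from this study.

```python
# Toy, hypothetical yield-based SOA calculation of the general form used in
# precursor-resolved box models. All values are placeholders for illustration.

precursors = {
    # class: (emitted mass [ug/m3], fraction reacted, SOA mass yield)
    "oxygenated_aromatics": (10.0, 0.8, 0.30),
    "heterocyclics_furans": (15.0, 0.7, 0.08),
    "other_vocs":           (30.0, 0.5, 0.03),
}

soa = {k: emitted * reacted * y for k, (emitted, reacted, y) in precursors.items()}
total = sum(soa.values())
for k, v in soa.items():
    print(f"{k}: {v:.2f} ug/m3 ({100 * v / total:.0f}% of modeled SOA)")
```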