A Formal Framework for Precise Parametric WCET Formulas
Parametric worst-case execution time (WCET) formulas are a valuable tool to estimate the impact of input data properties on the WCET at design time, or to guide scheduling decisions at runtime. Previous approaches to parametric WCET analysis either provide only informal ad-hoc solutions or tend to be rather pessimistic, as they do not take flow constraints other than simple loop bounds into account. We develop a formal framework around path- and frequency expressions, which allow us to reason about execution frequencies of program parts. Starting from a reducible control flow graph and a set of (parametric) constraints, we show how to obtain frequency expressions and refine them by means of sound approximations, which account for
more sophisticated flow constraints. Finally, we obtain closed-form parametric WCET formulas by means of partial evaluation. We developed a prototype, implementing our solution to parametric WCET analysis, and compared existing approaches within our setting. As our framework
supports fine-grained transformations to improve the precision of parametric formulas, it allows us to focus on important flow relations in order to avoid intractably large formulas.
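To make the idea of a closed-form parametric WCET formula concrete, here is a minimal sketch (not the paper's actual algorithm; all costs and names are hypothetical): per-block cycle costs are known constants at analysis time, so partial evaluation folds them into a closed-form function of the remaining parameter, here a loop bound n.

```python
# Hypothetical illustration of partial evaluation producing a
# parametric WCET formula. The constant block costs are folded at
# "analysis time"; only the loop bound n stays symbolic.

def make_wcet_formula(c_header, c_body, c_exit):
    # Partial evaluation step: combine the parameter-independent
    # costs once, returning a closed-form function of n.
    base = c_header + c_exit
    def wcet(n):
        # WCET(n) = base + n * c_body for a single loop with bound n.
        return base + n * c_body
    return wcet

wcet = make_wcet_formula(c_header=8, c_body=12, c_exit=4)
print(wcet(10))  # 8 + 4 + 10*12 = 132
```

At runtime, a scheduler could evaluate `wcet(n)` cheaply once the actual input-dependent bound n is known, which is the use case the abstract describes.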
CellTradeMap: Delineating trade areas for urban commercial districts with cellular networks
Understanding customer mobility patterns to commercial districts is crucial for urban planning, facility management, and business strategies. Trade areas are a widely applied measure to quantify where the visitors are from. Traditional trade area analysis is limited to small-scale or store-level studies because information such as visits to competitor commercial entities and place of residence is collected by labour-intensive questionnaires or heavily biased location-based social media data. In this paper, we propose CellTradeMap, a novel district-level trade area analysis framework using mobile flow records (MFRs), a type of fine-grained cellular network data. CellTradeMap extracts robust location information from the irregularly sampled, noisy MFRs, adapts the generic trade area analysis framework to incorporate cellular data, and enhances the original trade area model with cellular-based features. We evaluate CellTradeMap on a large-scale cellular network dataset covering 3.5 million mobile phone users in a metropolis in China. Experimental results show that the trade areas extracted by CellTradeMap are aligned with domain knowledge and CellTradeMap can model trade areas with a high predictive accuracy.
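The core notion of a trade area (where a district's visitors come from) can be sketched as a simple aggregation; this toy example is an assumption-laden illustration, not CellTradeMap's actual pipeline, and all record fields and zone names are hypothetical:

```python
from collections import Counter

# Toy sketch: a trade area for a district as the distribution of
# visitors' home zones, estimated from (user, home_zone, district)
# visit records. Real MFR data would require the denoising and
# localization steps the abstract describes.
records = [
    ("u1", "zoneA", "district1"),
    ("u2", "zoneA", "district1"),
    ("u3", "zoneB", "district1"),
    ("u4", "zoneB", "district2"),
]

def trade_area(records, district):
    # Count visitors to `district` by home zone, then normalize
    # to a probability distribution over origin zones.
    homes = Counter(home for _, home, d in records if d == district)
    total = sum(homes.values())
    return {zone: n / total for zone, n in homes.items()}

print(trade_area(records, "district1"))  # zoneA ~ 0.67, zoneB ~ 0.33
```

The framework's contribution lies upstream of this step: recovering reliable home and visit locations from irregular, noisy cellular records at metropolis scale.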
Towards an Expressive and Scalable Framework for expressing Join Point Models
Join point models are one of the key features in aspect-oriented programming languages and tools. They provide software engineers with the means to pinpoint the exact locations in programs (join points) to weave in advices. Our experience in modularizing concerns in a large embedded system showed that existing join point models and their underlying program representations are not expressive enough. This prevents the selection of some join points of our interest. In this paper, we motivate the need for more fine-grained join point models within more expressive source code representations. We propose a new program representation called a program graph, over which more fine-grained join point models can be defined. In addition, we present a simple language to manipulate program graphs to perform source code transformations. This language thus can be used for specifying complex weaving algorithms over program graphs.
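As a rough intuition for what "a more expressive program representation" buys, consider a program graph whose nodes are fine-grained program elements and whose join points are selected by predicates over node attributes. This is a hypothetical sketch under our own simplifying assumptions, not the paper's program-graph formalism:

```python
# Toy program graph: nodes are fine-grained program elements
# (below statement level), edges are control-flow successors.
nodes = {
    1: {"kind": "call", "name": "log"},
    2: {"kind": "assign", "target": "x"},
    3: {"kind": "call", "name": "send"},
}
edges = [(1, 2), (2, 3)]

def select(nodes, pred):
    # A pointcut modeled as a predicate over node attributes:
    # returns the ids of all matching join points.
    return [nid for nid, attrs in nodes.items() if pred(attrs)]

# Select every call node as a join point, e.g. to weave tracing advice.
calls = select(nodes, lambda a: a["kind"] == "call")
print(calls)  # [1, 3]
```

Because nodes here can represent sub-statement elements, a pointcut can target locations that a coarser, statement-level model could not name at all.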
Second-order Temporal Pooling for Action Recognition
Deep learning models for video-based action recognition usually generate
features for short clips (consisting of a few frames); such clip-level features
are aggregated to video-level representations by computing statistics on these
features. Typically zero-th (max) or the first-order (average) statistics are
used. In this paper, we explore the benefits of using second-order statistics.
Specifically, we propose a novel end-to-end learnable feature aggregation
scheme, dubbed temporal correlation pooling that generates an action descriptor
for a video sequence by capturing the similarities between the temporal
evolution of clip-level CNN features computed across the video. Such a
descriptor, while being computationally cheap, also naturally encodes the
co-activations of multiple CNN features, thereby providing a richer
characterization of actions than their first-order counterparts. We also
propose higher-order extensions of this scheme by computing correlations after
embedding the CNN features in a reproducing kernel Hilbert space. We provide
experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained
datasets such as MPII Cooking activities and JHMDB, as well as the recent
Kinetics-600. Our results demonstrate the advantages of higher-order pooling
schemes, which, when combined with hand-crafted features (as is standard
practice), achieve state-of-the-art accuracy.
Comment: Accepted in the International Journal of Computer Vision (IJCV)