
    Click Carving: Segmenting Objects in Video with Point Clicks

    We present a novel form of interactive video object segmentation in which a few clicks by the user help the system produce a full spatio-temporal segmentation of the object of interest. Whereas conventional interactive pipelines take the user's initialization as a starting point, we show the value in the system taking the lead even in initialization. In particular, for a given video frame, the system precomputes a ranked list of thousands of possible segmentation hypotheses (also referred to as object region proposals) using image and motion cues. Then, the user looks at the top-ranked proposals and clicks on the object boundary to carve away erroneous ones. This process iterates (typically 2-3 times), with the system revising the top-ranked proposal set each time, until the user is satisfied with the resulting segmentation mask. Finally, the mask is propagated across the video to produce a spatio-temporal object tube. On three challenging datasets, we provide extensive comparisons with both existing work and simpler alternative methods. In all, the proposed Click Carving approach strikes an excellent balance of accuracy and human effort. It outperforms all similarly fast methods, and is competitive with or better than those requiring 2 to 12 times the effort. Comment: A preliminary version of the material in this document was filed as University of Texas technical report no. UT AI16-0
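
    The carving step can be pictured as re-ranking proposals by how close their boundaries pass to the user's clicks. Below is a minimal sketch in Python, assuming proposals are binary masks; the helper names (boundary, carve) and the distance-based demotion rule are illustrative, not the paper's implementation:

        import numpy as np

        def boundary(mask):
            """Foreground pixels with at least one 4-neighbor in the background."""
            p = np.pad(mask, 1)
            core = p[1:-1, 1:-1] == 1
            bg = ((p[:-2, 1:-1] == 0) | (p[2:, 1:-1] == 0) |
                  (p[1:-1, :-2] == 0) | (p[1:-1, 2:] == 0))
            return core & bg

        def carve(proposals, scores, click, tol=3.0):
            """Demote proposals whose boundary does not pass near the click."""
            r, c = click
            rescored = []
            for mask, s in zip(proposals, scores):
                ys, xs = np.nonzero(boundary(mask))
                if ys.size == 0:
                    rescored.append(float('-inf'))
                    continue
                d = np.sqrt((ys - r) ** 2 + (xs - c) ** 2).min()
                rescored.append(s if d <= tol else s - d)
            order = np.argsort(rescored)[::-1]
            return [proposals[i] for i in order], [rescored[i] for i in order]

    Iterating carve() over 2-3 clicks mirrors the interaction loop: each click prunes hypotheses whose boundaries disagree with the user's evidence, and the top-ranked survivor becomes the mask that is propagated through the video.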

    Experiments with a Convex Polyhedral Analysis Tool for Logic Programs

    Convex polyhedral abstractions of logic programs have been found very useful in deriving numeric relationships between program arguments in order to prove program properties, and in other areas such as termination and complexity analysis. We present a tool for constructing polyhedral analyses of (constraint) logic programs. The aim of the tool is to make available, with a convenient interface, state-of-the-art techniques for polyhedral analysis such as delayed widening, narrowing, "widening up-to", and enhanced automatic selection of widening points. The tool is accessible on the web, permits user programs to be uploaded and analysed, and is integrated with related program transformations such as size abstractions and query-answer transformation. We then report some experiments using the tool, showing how it can be conveniently used to analyse transition systems arising from models of embedded systems, and an emulator for a PIC microcontroller which is used, for example, in wearable computing systems. We discuss issues including scalability, tradeoffs of precision and computation time, and other program transformations that can enhance the results of analysis. Comment: Paper presented at the 17th Workshop on Logic-based Methods in Programming Environments (WLPE2007)
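
    To make the widening idea concrete, here is a minimal sketch in Python of the interval domain, the one-variable special case of convex polyhedra, analysing the loop i = 0; while i < n: i = i + 1. The delay parameter illustrates delayed widening; a real polyhedral analysis would use a dedicated library rather than bare tuples:

        def widen(old, new, delay_left):
            """Interval widening: unstable bounds jump to infinity.
            Delayed widening keeps exact joins for the first few iterations."""
            if delay_left > 0:
                return (min(old[0], new[0]), max(old[1], new[1]))
            lo = old[0] if old[0] <= new[0] else float('-inf')
            hi = old[1] if old[1] >= new[1] else float('inf')
            return (lo, hi)

        state, delay = (0, 0), 2              # abstract value of i at the loop head
        while True:
            post = (state[0] + 1, state[1] + 1)           # effect of i = i + 1
            joined = (min(state[0], post[0]), max(state[1], post[1]))
            new = widen(state, joined, delay)
            delay -= 1
            if new == state:
                break
            state = new
        print(state)    # (0, inf): the analysis proves i >= 0 at the loop head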

    Formation of Black Holes from Collapsed Cosmic String Loops

    The fraction of cosmic string loops which collapse to form black holes is estimated using a set of realistic loops generated by loop fragmentation. The smallest radius sphere into which each cosmic string loop may fit is obtained by monitoring the loop through one period of oscillation. For a loop with invariant length $L$ which contracts to within a sphere of radius $R$, the minimum mass-per-unit-length $\mu_{\rm min}$ necessary for the cosmic string loop to form a black hole according to the hoop conjecture is $\mu_{\rm min} = R/(2GL)$. Analyzing 25,576 loops, we obtain the empirical estimate $f_{\rm BH} = 10^{4.9\pm 0.2}\,(G\mu)^{4.1\pm 0.1}$ for the fraction of cosmic string loops which collapse to form black holes as a function of the mass-per-unit-length $\mu$ in the range $10^{-3} \lesssim G\mu \lesssim 3\times 10^{-2}$. We use this power law to extrapolate to $G\mu \sim 10^{-6}$, obtaining the fraction $f_{\rm BH}$ of physically interesting cosmic string loops which collapse to form black holes within one oscillation period of formation. Comparing this fraction with the observational bounds on a population of evaporating black holes, we obtain the limit $G\mu \le 3.1(\pm 0.7)\times 10^{-6}$ on the cosmic string mass-per-unit-length. This limit is consistent with all other observational bounds. Comment: uuencoded, compressed postscript; 20 pages including 7 figures
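
    Plugging the central values of the fit into the quoted power law shows why the extrapolated fraction is so small (a back-of-the-envelope check that ignores the quoted uncertainties):

        \[
        f_{\rm BH}\big|_{G\mu = 10^{-6}}
          = 10^{4.9}\,(10^{-6})^{4.1}
          = 10^{4.9 - 24.6}
          = 10^{-19.7}
          \approx 2 \times 10^{-20}.
        \]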

    A Formal Model For Real-Time Parallel Computation

    The imposition of real-time constraints on a parallel computing environment, specifically high-performance cluster-computing systems, introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we briefly motivate the need for such a system, and we introduce an automaton-based method for performing such formal verification. We define the concept of a consistent parallel timing system: a hybrid system consisting of a set of timed automata (specifically, timed Büchi automata as well as a timed variant of standard finite automata), intended to model the timing properties of a well-behaved real-time parallel system. Finally, we give a brief case study to demonstrate the concepts in the paper: a parallel matrix multiplication kernel which operates within provable upper time bounds. We give the algorithm used, a corresponding consistent parallel timing system, and empirical results showing that the system operates under the specified timing constraints. Comment: In Proceedings FTSCS 2012, arXiv:1212.657
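
    The empirical side of such a check can be sketched in a few lines of Python: run a parallel matrix multiplication kernel and compare the measured time against an asserted upper bound. This is only an illustration of the kind of bound being validated; the worker count and deadline below are hypothetical placeholders, not values from the paper:

        import time
        from concurrent.futures import ThreadPoolExecutor

        def matmul_rows(A, B, rows):
            """Compute the given rows of A @ B (pure Python for clarity)."""
            k, n = len(B), len(B[0])
            return [[sum(A[r][t] * B[t][c] for t in range(k)) for c in range(n)]
                    for r in rows]

        def timed_parallel_matmul(A, B, workers=4, deadline_s=1.0):
            """Split rows across workers and check the upper time bound."""
            chunks = [range(i, len(A), workers) for i in range(workers)]
            start = time.monotonic()
            with ThreadPoolExecutor(max_workers=workers) as ex:
                parts = list(ex.map(lambda rs: matmul_rows(A, B, rs), chunks))
            elapsed = time.monotonic() - start
            assert elapsed <= deadline_s, f"timing bound violated: {elapsed:.3f}s"
            C = [None] * len(A)                  # reassemble rows in original order
            for rows, part in zip(chunks, parts):
                for r, row in zip(rows, part):
                    C[r] = row
            return C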

    Scalable and deterministic timing-driven parallel placement for FPGAs

    On Timing Model Extraction and Hierarchical Statistical Timing Analysis

    In this paper, we investigate the challenges of applying Statistical Static Timing Analysis (SSTA) in a hierarchical design flow, where modules supplied by IP vendors are used to hide design details for IP protection and to reduce the complexity of design and verification. For the three basic circuit types, combinational, flip-flop-based and latch-controlled, we propose methods to extract timing models which contain interfacing as well as compressed internal constraints. Using these compact timing models, the runtime of full-chip timing analysis can be reduced, while circuit details from IP vendors are not exposed. We also propose a method to reconstruct the correlation between modules during full-chip timing analysis. This correlation cannot be incorporated into the timing models because it depends on the layout of the corresponding modules in the chip. In addition, we investigate how to apply the extracted timing models with the reconstructed correlation to evaluate the performance of the complete design. Experiments demonstrate that, using the extracted timing models and reconstructed correlation, full-chip timing analysis can be several times faster than analyzing the flattened circuit directly, while the accuracy of statistical timing analysis is still well maintained.
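
    The statistical core of combining module-level timing models is the Gaussian max operation, and this is exactly where a reconstructed inter-module correlation enters. A minimal sketch in Python using Clark's classic approximation for the first two moments of max(X, Y); this shows the standard SSTA primitive, not the paper's model-extraction procedure:

        from math import erf, exp, pi, sqrt

        def pdf(x):
            return exp(-x * x / 2) / sqrt(2 * pi)

        def cdf(x):
            return 0.5 * (1 + erf(x / sqrt(2)))

        def gaussian_max(mu1, s1, mu2, s2, rho=0.0):
            """Mean and std of max(X, Y) for jointly Gaussian arrival times."""
            theta = sqrt(max(s1 * s1 + s2 * s2 - 2 * rho * s1 * s2, 1e-12))
            a = (mu1 - mu2) / theta
            m = mu1 * cdf(a) + mu2 * cdf(-a) + theta * pdf(a)
            v = ((mu1 ** 2 + s1 ** 2) * cdf(a) + (mu2 ** 2 + s2 ** 2) * cdf(-a)
                 + (mu1 + mu2) * theta * pdf(a) - m * m)
            return m, sqrt(max(v, 0.0))

        # Merging two module arrival times: ignoring their correlation (rho = 0)
        # versus using a reconstructed rho visibly changes the full-chip estimate.
        print(gaussian_max(10.0, 1.0, 10.5, 1.2, rho=0.0))
        print(gaussian_max(10.0, 1.0, 10.5, 1.2, rho=0.7))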

    Bounding Worst-Case Data Cache Behavior by Analytically Deriving Cache Reference Patterns

    While caches have become invaluable for higher-end architectures due to their ability to hide, in part, the gap between processor speed and memory access times, caches (and particularly data caches) limit the timing predictability of data accesses that may reside in memory or in cache. This is a significant problem for real-time systems. The objective of our work is to provide accurate predictions of the data cache behavior of scalar and non-scalar references whose reference patterns are known at compile time. Such knowledge about cache behavior provides the basis for significant improvements in bounding the worst-case execution time (WCET) of real-time programs, particularly for hard-to-analyze data caches. We exploit the power of the Cache Miss Equations (CME) framework but lift a number of limitations of traditional CME to generalize the analysis to more arbitrary programs. We further devised a transformation, coined “forced” loop fusion, which facilitates the analysis across sequential loops. Our contributions result in exact data cache reference patterns, in contrast to the approximate cache miss behavior of prior work. Experimental results indicate improvements in the accuracy of worst-case data cache behavior of up to two orders of magnitude over the original approach. In fact, our results closely bound and sometimes even exactly match those obtained by trace-driven simulation for worst-case inputs. The resulting WCET bounds of timing analysis confirm these findings in terms of providing tight bounds. Overall, our contributions lift analytical approaches for predicting data cache behavior to a level suitable for efficient static timing analysis and, subsequently, real-time schedulability of tasks with predictable WCET.
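
    For reference patterns known at compile time, the exact hit/miss string such an analysis derives can be checked against a tiny cache model. A minimal sketch in Python of a direct-mapped data cache (the CME framework obtains this pattern analytically rather than by simulation; the cache geometry below is illustrative):

        def classify_refs(addresses, cache_lines=64, line_bytes=32):
            """Mark each reference 'H' (hit) or 'M' (miss) in a direct-mapped cache."""
            tags = [None] * cache_lines
            pattern = []
            for addr in addresses:
                block = addr // line_bytes
                idx = block % cache_lines
                pattern.append('H' if tags[idx] == block else 'M')
                tags[idx] = block
            return ''.join(pattern)

        # Reference stream of `for i in range(32): s += a[i]` with 8-byte elements:
        print(classify_refs([0x1000 + 8 * i for i in range(32)]))
        # 'MHHHMHHH...': one cold miss per 32-byte line, then three hits

    Knowing this exact pattern, rather than assuming a conservative "always miss" bound, is what tightens the resulting WCET estimate.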