17,656 research outputs found

    Multiple serial episode matching

    Get PDF
    In a previous paper we generalized the Knuth-Morris-Pratt (KMP) pattern matching algorithm, defined a non-conventional kind of RAM, the MP-RAMs (RAMs equipped with extra operations), and designed an $O(n)$ on-line algorithm for solving the serial episode matching problem on MP-RAMs when there is only a single episode. Here we give two extensions of this algorithm to the case where we search for several patterns simultaneously, and we compare them. More precisely, given $q+1$ strings (a text $t$ of length $n$ and $q$ patterns $m_1,\ldots,m_q$) and a natural number $w$, the {\em multiple serial episode matching problem} consists in finding the number of size-$w$ windows of text $t$ which contain the patterns $m_1,\ldots,m_q$ as subsequences, i.e. for each $m_i$, if $m_i=p_1,\ldots,p_k$, the letters $p_1,\ldots,p_k$ occur in the window, in the same order as in $m_i$, but not necessarily consecutively (they may be interleaved with other letters). The main contribution is an algorithm solving this problem on-line in time $O(nq)$.
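
    As an illustration of the problem statement above (not of the paper's $O(nq)$ MP-RAM algorithm), the following Python sketch counts the size-$w$ windows of a text that contain every pattern as a possibly interleaved subsequence by brute force; all names are our own.

        def is_subsequence(pattern: str, window: str) -> bool:
            """True if pattern occurs in window as a (not necessarily contiguous) subsequence."""
            it = iter(window)
            return all(ch in it for ch in pattern)

        def count_matching_windows(t: str, patterns: list[str], w: int) -> int:
            """Count the size-w windows of t that contain all patterns as subsequences."""
            return sum(
                all(is_subsequence(m, t[i:i + w]) for m in patterns)
                for i in range(len(t) - w + 1)
            )

        # Example: the windows "abca" and "cabc" of "abcabc" contain both "ac" and "bc".
        print(count_matching_windows("abcabc", ["ac", "bc"], 4))   # prints 2

    This naive scan re-checks each window from scratch and so costs roughly O(n * w * q) time, which is exactly the overhead the on-line $O(nq)$ algorithm avoids.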

    Television and serial fictions

    Get PDF
    Written in 1974, Raymond Williams's Television: Technology and Cultural Form was to become for many academics, and particularly for academics who approached popular culture from the perspective of the humanities, one of the foundational texts of the study of television, the first and even the only book on reading lists, the book which introduced the concept of ‘flow’ as a way of identifying ‘the defining characteristic of broadcasting’. While, almost forty years later, many of its formulations have worn thin with over-use, Williams's observation on the centrality of televisual dramatic fiction to modern experience still has the force of defamiliarisation: it is still surprising to consider, as if for the first time, how much of our time is spent with, how many of our references are drawn from, or how much the structure of contemporary feeling is shaped by television dramatic fiction in its various forms. ‘It seems probable’, says Williams, that in societies like Britain and the United States more drama is watched in a week or a weekend, by the majority of viewers, than would have been watched in a year or in some cases a lifetime in any previous historical period. It is not uncommon for the majority of viewers to see, regularly, as much as two or three hours of drama, of various kinds, every day. The implications of this have scarcely begun to be considered. It is clearly one of the unique characteristics of advanced industrial societies that drama as an experience is now an intrinsic part of everyday life, at a quantitative level which is so very much greater than any precedent as to seem a fundamental qualitative change. Whatever the social and cultural reasons may finally be, it is clear that watching dramatic simulation of a wide range of experiences is now an essential part of our modern cultural pattern. Or, to put it categorically, most people spend more time watching various kinds of drama than in preparing and eating food.

    Faster subsequence recognition in compressed strings

    Full text link
    Computation on compressed strings is one of the key approaches to processing massive data sets. We consider local subsequence recognition problems on strings compressed by straight-line programs (SLP), a compression scheme closely related to Lempel-Ziv compression. For an SLP-compressed text of length $\bar m$ and an uncompressed pattern of length $n$, Cégielski et al. gave an algorithm for local subsequence recognition running in time $O(\bar m n^2 \log n)$. We improve the running time to $O(\bar m n^{1.5})$. Our algorithm can also be used to compute the longest common subsequence between a compressed text and an uncompressed pattern in time $O(\bar m n^{1.5})$; the same problem with a compressed pattern is known to be NP-hard.
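
    For concreteness, here is a hedged Python sketch of (global) subsequence recognition directly on an SLP-compressed text, without decompressing it: for each grammar rule we tabulate how far a left-to-right match of the pattern advances across that rule's expansion, and concatenation rules compose these tables. This is a simple $O(\bar m n)$ baseline in the same spirit, not the paper's $O(\bar m n^{1.5})$ local-recognition algorithm, and the grammar encoding is our own assumption.

        def subsequence_in_slp(rules, pattern):
            """rules[i] is either a single character or a pair (j, k) of earlier
            rule indices; the text is the expansion of the last rule. Returns
            True if pattern occurs in that text as a subsequence."""
            n = len(pattern)
            tables = []  # tables[i][s] = pattern position reached from s across rule i
            for r in rules:
                if isinstance(r, str):   # terminal: advance by one on a match
                    tables.append([s + 1 if s < n and pattern[s] == r else s
                                   for s in range(n + 1)])
                else:                    # concatenation: compose the two sub-tables
                    left, right = tables[r[0]], tables[r[1]]
                    tables.append([right[left[s]] for s in range(n + 1)])
            return tables[-1][0] == n

        # Grammar expanding to "ababab": rule 2 = "ab", rule 3 = "abab", rule 4 = "ababab".
        rules = ["a", "b", (0, 1), (2, 2), (3, 2)]
        print(subsequence_in_slp(rules, "abbb"))   # True
        print(subsequence_in_slp(rules, "bbbb"))   # False

    The point of the bottom-up tabulation is that the work is proportional to the number of grammar rules rather than to the (possibly exponentially larger) length of the expanded text.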

    Massively parallel support for a case-based planning system

    Get PDF
    Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases
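
    The retrieval idea described above can be sketched in a few lines of Python: cases live in a flat, unindexed memory, and every query simply scores all of them against the target problem's features. The feature representation and scoring below are illustrative assumptions of ours, not CaPER's PARKA frame language, which parallelizes exactly this kind of exhaustive match.

        from dataclasses import dataclass

        @dataclass
        class Case:
            name: str
            features: dict        # e.g. {"goal": "deliver", "vehicle": "truck"}
            plan: list            # the stored plan steps

        def score(case: Case, target: dict) -> int:
            """Count how many of the target's features the stored case satisfies."""
            return sum(1 for k, v in target.items() if case.features.get(k) == v)

        def retrieve(memory: list[Case], target: dict, k: int = 3) -> list[Case]:
            """Brute-force scan of the whole (unindexed) case memory."""
            return sorted(memory, key=lambda c: score(c, target), reverse=True)[:k]

        memory = [
            Case("c1", {"goal": "deliver", "vehicle": "truck"}, ["load", "drive", "unload"]),
            Case("c2", {"goal": "deliver", "vehicle": "plane"}, ["load", "fly", "unload"]),
            Case("c3", {"goal": "survey", "vehicle": "drone"}, ["launch", "scan", "land"]),
        ]
        print([c.name for c in retrieve(memory, {"goal": "deliver", "vehicle": "truck"}, k=2)])
        # ['c1', 'c2']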

    Taking the bite out of automated naming of characters in TV video

    No full text
    We investigate the problem of automatically labelling appearances of characters in TV or film material with their names. This is tremendously challenging due to the huge variation in imaged appearance of each character and the weakness and ambiguity of available annotation. However, we demonstrate that high precision can be achieved by combining multiple sources of information, both visual and textual. The principal novelties that we introduce are: (i) automatic generation of time stamped character annotation by aligning subtitles and transcripts; (ii) strengthening the supervisory information by identifying when characters are speaking. In addition, we incorporate complementary cues of face matching and clothing matching to propose common annotations for face tracks, and consider choices of classifier which can potentially correct errors made in the automatic extraction of training data from the weak textual annotation. Results are presented on episodes of the TV series “Buffy the Vampire Slayer”.
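
    A hedged sketch of the subtitle/transcript alignment idea in (i): subtitles carry time stamps but no speaker names, transcripts carry speaker names but no times, so matching their text transfers the times onto the names. The data and the word-overlap score below are illustrative assumptions of ours; a real system would align the two word sequences globally rather than matching each line independently.

        from difflib import SequenceMatcher

        subtitles = [   # (start_sec, end_sec, text) -- time-stamped, anonymous
            (12.0, 14.5, "we have to stop him tonight"),
            (15.0, 17.0, "not without a plan we don't"),
        ]
        transcript = [  # (speaker, text) -- named, but without times
            ("BUFFY", "We have to stop him tonight."),
            ("GILES", "Not without a plan, we don't."),
        ]

        def similarity(a: str, b: str) -> float:
            """Word-level overlap between two lines of dialogue."""
            return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

        # Each transcript line inherits the times of its best-matching subtitle,
        # yielding time-stamped character annotation.
        for speaker, line in transcript:
            start, end, _ = max(subtitles, key=lambda s: similarity(s[2], line))
            print(f"{start:5.1f}-{end:5.1f}  {speaker}: {line}")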