201,434 research outputs found

    Classification of near-normal sequences

    Get PDF
    We introduce a canonical form for near-normal sequences NN(n), and using it we enumerate the equivalence classes of such sequences for even n up to 30. These sequences are needed for Yang multiplication in the construction of longer T-sequences from base sequences. Comment: 13 pages, 1 table (over 5 pages long). Minor changes implemented
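    For context (a hedged addition, not taken from the abstract): near-normal sequences NN(n) form a special class of base sequences, and base sequences (A;B;C;D) are quadruples of {+1,-1} sequences whose nonperiodic autocorrelation functions sum to zero at every positive shift. In LaTeX, with the notation N_X assumed here rather than quoted from the paper:

        % Standard zero-autocorrelation condition for base sequences,
        % of which near-normal sequences are a special case.
        \[
          N_A(s) + N_B(s) + N_C(s) + N_D(s) = 0 \quad \text{for all } s \ge 1,
          \qquad
          N_X(s) = \sum_{i=1}^{|X|-s} x_i\, x_{i+s}.
        \]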

    An automatic technique for visual quality classification for MPEG-1 video

    Get PDF
    The Centre for Digital Video Processing at Dublin City University developed Fischlar [1], a web-based system for recording, analysis, browsing and playback of digitally captured television programs. One major issue for Fischlar is the automatic evaluation of video quality, in order to avoid processing and storing corrupted data. In this paper we propose an automatic classification technique that detects video content quality and thereby provides a decision criterion for the processing and storage stages.

    Outstanding Issues in Our Understanding of L, T, and Y Dwarfs

    Get PDF
    Since the discovery of the first L dwarf 19 years ago and the discovery of the first T dwarf 7 years after that, we have amassed a large list of these objects, now numbering almost six hundred. Despite making headway in understanding the physical chemistry of their atmospheres, some important issues remain unexplained. Three of these are the subject of this paper: (1) What is the role of "second parameters" such as gravity and metallicity in shaping the emergent spectra of L and T dwarfs? Can we establish a robust classification scheme so that objects with unusual values of log(g) or [M/H], unusual dust content, or unresolved binarity are easily recognized? (2) Which physical processes drive the unusual behavior at the L/T transition? Which observations can be obtained to better confine the problem? (3) What will objects cooler than T8 look like? How will we know a Y dwarf when we first observe one? Comment: 11 pages including 5 figures. To appear in the conference proceedings for Cool Stars 1

    On the base sequence conjecture

    Get PDF
    Let BS(m,n) denote the set of base sequences (A;B;C;D), with A and B of length m and C and D of length n. The base sequence conjecture (BSC) asserts that BS(n+1,n) exist (i.e., are non-empty) for all n. This is known to be true for n <= 36 and when n is a Golay number. We show that it is also true for n=37 and n=38. It is worth pointing out that BSC is stronger than the famous Hadamard matrix conjecture. In order to demonstrate the abundance of base sequences, we have previously attached to BS(n+1,n) a graph Gamma_n and computed the Gamma_n for n <= 27. We now extend these computations and determine the Gamma_n for n=28,...,35. We also propose a conjecture describing these graphs in general. Comment: 19 pages, 10 tables. To appear in Discrete Mathematics
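    As a hedged illustration of the defining property of BS(m,n) (the zero-sum nonperiodic autocorrelation condition noted above; function and variable names are ours, not the paper's), a membership check fits in a few lines of Python:

        # Sketch: verify that (A; B; C; D) satisfies the base-sequence condition,
        # i.e. the nonperiodic autocorrelations of the four {+1,-1} sequences
        # sum to zero for every positive shift.
        def npaf(seq, s):
            """Nonperiodic autocorrelation of a {+1,-1} sequence at shift s."""
            return sum(seq[i] * seq[i + s] for i in range(len(seq) - s))

        def is_base_sequence_quad(A, B, C, D):
            """Check whether (A; B; C; D) is a base-sequence quadruple."""
            max_shift = max(len(A), len(B), len(C), len(D))
            for s in range(1, max_shift):
                total = sum(npaf(X, s) for X in (A, B, C, D) if s < len(X))
                if total != 0:
                    return False
            return True

        # Trivially small member of BS(2,1): A, B of length n+1 = 2, C, D of length n = 1.
        print(is_base_sequence_quad([1, 1], [1, -1], [1], [1]))  # True

    The quadruple used in the call is included only to show the call pattern, not as an example from the paper.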

    Spatiotemporal Stacked Sequential Learning for Pedestrian Detection

    Full text link
    Pedestrian classifiers decide which image windows contain a pedestrian. In practice, such classifiers give relatively high responses at neighboring windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. Analogous reasoning applies to image sequences: if a pedestrian appears in one frame, the same pedestrian is expected to appear close to the same location in neighboring frames. Such a location is therefore likely to receive high classification scores over several frames, while false positives are expected to be more spurious. In this paper we propose to exploit these correlations to improve the accuracy of base pedestrian classifiers. In particular, we propose two-stage classifiers that rely not only on the image descriptors required by the base classifiers but also on the responses of those base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm. We evaluate our proposal at different frame rates on a new pedestrian dataset acquired from a car, and we also test on the well-known Caltech dataset. The results show that our SSL proposal boosts detection accuracy significantly with minimal impact on computational cost. Interestingly, SSL improves accuracy most in the most dangerous situations, i.e. when a pedestrian is close to the camera. Comment: 8 pages, 5 figures, 1 table
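    A minimal sketch of the two-stage idea the abstract describes, assuming a linear SVM as the base classifier and a simple index-offset spatiotemporal neighborhood (both our assumptions, not details from the paper):

        # Second stage is fed the original descriptor plus the base classifier's
        # responses over a neighborhood of windows/frames (stacked sequential learning).
        import numpy as np
        from sklearn.svm import LinearSVC

        def ssl_features(descriptors, base_clf, neighborhood):
            """Append base-classifier scores of neighboring windows/frames.

            descriptors  : (num_windows, dim) array, ordered so consecutive rows
                           are spatiotemporal neighbors (an assumption for brevity).
            neighborhood : relative offsets, e.g. [-2, -1, 0, 1, 2].
            """
            scores = base_clf.decision_function(descriptors)
            stacked = []
            for i, d in enumerate(descriptors):
                ctx = [scores[min(max(i + o, 0), len(scores) - 1)] for o in neighborhood]
                stacked.append(np.concatenate([d, ctx]))
            return np.asarray(stacked)

        def train_ssl(X_train, y_train, neighborhood=(-2, -1, 0, 1, 2)):
            """Two-stage training: base classifier first, then the stacked classifier."""
            base_clf = LinearSVC().fit(X_train, y_train)
            # In practice the base scores would be produced with cross-validation,
            # so the second stage does not overfit to training-set responses.
            X_stacked = ssl_features(X_train, base_clf, list(neighborhood))
            stacked_clf = LinearSVC().fit(X_stacked, y_train)
            return base_clf, stacked_clf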

    Robust 3D Action Recognition through Sampling Local Appearances and Global Distributions

    Full text link
    3D action recognition has broad applications in human-computer interaction and intelligent surveillance. However, recognizing similar actions remains challenging since previous literature fails to capture motion and shape cues effectively from noisy depth data. In this paper, we propose a novel two-layer Bag-of-Visual-Words (BoVW) model, which suppresses noise disturbances and jointly encodes both motion and shape cues. First, background clutter is removed by a background modeling method designed for depth data. Then, motion and shape cues are jointly used to generate robust and distinctive spatial-temporal interest points (STIPs): motion-based STIPs and shape-based STIPs. In the first layer of our model, a multi-scale 3D local steering kernel (M3DLSK) descriptor is proposed to describe local appearances of cuboids around motion-based STIPs. In the second layer, a spatial-temporal vector (STV) descriptor is proposed to describe the spatial-temporal distributions of shape-based STIPs. Using the BoVW model, motion and shape cues are combined to form a fused action representation. Our model performs favorably compared with common STIP detection and description methods. Thorough experiments verify that our model is effective in distinguishing similar actions and is robust to background clutter, partial occlusions and pepper noise.
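    A minimal sketch of the fusion step as the abstract describes it, assuming k-means codebooks and hard assignment (standard BoVW choices, not details confirmed by the paper; all names are illustrative):

        # Each cue gets its own visual vocabulary; the per-video histograms of the
        # motion channel and the shape channel are concatenated into one representation.
        import numpy as np
        from sklearn.cluster import KMeans

        def build_codebook(descriptors, k=256, seed=0):
            """Cluster local descriptors (num_desc x dim) into a visual vocabulary."""
            return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(descriptors)

        def bovw_histogram(descriptors, codebook):
            """Hard-assignment BoVW histogram, L1-normalized."""
            words = codebook.predict(descriptors)
            hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
            return hist / max(hist.sum(), 1.0)

        def fused_representation(motion_desc, shape_desc, motion_cb, shape_cb):
            """Concatenate the motion-cue and shape-cue BoVW histograms."""
            return np.concatenate([bovw_histogram(motion_desc, motion_cb),
                                   bovw_histogram(shape_desc, shape_cb)])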