
    Dimensionality Reduction and Classification feature using Mutual Information applied to Hyperspectral Images: A Filter strategy based algorithm

    Hyperspectral image (HSI) classification is a high-level remote sensing task. The goal is to produce a thematic map that is compared against a reference ground truth (GT) map constructed by inspecting the region. An HSI contains more than a hundred two-dimensional measures of the same region, called bands (or simply images), taken at adjacent frequencies. Unfortunately, some bands contain redundant information, others are affected by noise, and the high dimensionality of the features lowers classification accuracy. The problem is how to find the bands that are good for classifying the pixels of the region. Some methods use Mutual Information (MI) and a threshold to select relevant bands, without treating redundancy. Others control and eliminate redundancy by selecting the band with the top-ranked MI; if its neighbors have essentially the same MI with the GT, they are considered redundant and discarded. This is the main drawback of such methods, because it forfeits the advantage of hyperspectral images: precious information can be discarded. In this paper we accept useful redundancy: a band contains useful redundancy if it contributes to producing an estimated reference map that has higher MI with the GT. To control redundancy, we introduce a complementary threshold added to the last retained MI value. This process is a filter strategy; it achieves good classification accuracy at low cost, but is less effective than a wrapper strategy.
    Comment: 11 pages, 5 figures, journal paper
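
    To make the filter strategy concrete, here is a minimal Python sketch in the same spirit, assuming numpy and scikit-learn; the mean of the selected bands serves as a crude stand-in for the paper's estimated reference map, and the function names and thresholds are illustrative assumptions rather than the authors' exact procedure.

        import numpy as np
        from sklearn.metrics import mutual_info_score

        def discretize(img, n_bins=32):
            """Quantize an image into integer codes so that discrete MI is defined."""
            edges = np.histogram_bin_edges(img, bins=n_bins)
            return np.digitize(img.reshape(-1), edges)

        def select_bands(cube, gt, mi_threshold, gain_threshold, n_bins=32):
            """Greedy filter-strategy selection: rank bands by MI(band; GT), then
            keep a band -- even a redundant one -- if it improves the MI between a
            crude fused estimate (here, the mean of the selected bands) and the GT
            by more than gain_threshold (the complementary threshold)."""
            labels = gt.reshape(-1)
            n_bands = cube.shape[2]
            mi = np.array([mutual_info_score(labels, discretize(cube[:, :, b], n_bins))
                           for b in range(n_bands)])

            selected, best = [], 0.0
            for b in np.argsort(mi)[::-1]:              # highest-MI bands first
                if mi[b] < mi_threshold:                # relevance cut-off
                    break
                trial = selected + [int(b)]
                fused = cube[:, :, trial].mean(axis=2)  # stand-in for the estimated map
                gain = mutual_info_score(labels, discretize(fused, n_bins)) - best
                if gain > gain_threshold:               # "useful redundancy" test
                    selected, best = trial, best + gain
            return selected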

    Temporal video transcoding from H.264/AVC-to-SVC for digital TV broadcasting

    Mobile digital TV environments demand flexible video compression such as scalable video coding (SVC) because of varying bandwidths and devices. Since existing infrastructures rely heavily on H.264/AVC video compression, network providers could adapt currently H.264/AVC-encoded video to SVC. This adaptation needs to be done efficiently to reduce processing power and operational cost. This paper proposes two techniques to convert a non-scalable H.264/AVC bitstream in the Baseline (P-picture-based) and Main (B-picture-based) profiles to a scalable bitstream with temporal scalability, as part of a framework for low-complexity video adaptation for digital TV broadcasting. Our approaches accelerate interprediction, focusing on reducing the coding complexity of the mode decision and motion estimation tasks of the encoder stage by using information available after the H.264/AVC decoding stage. The results show that when our techniques are applied, complexity is reduced by 98% while coding efficiency is maintained.
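
    The core idea, reusing information from the decoding stage to prune the encoder's search, can be sketched as follows. This is a hypothetical Python illustration: the MBInfo fields, the candidate tables, and the rd_cost callback are invented for exposition and do not reproduce the paper's actual mappings.

        from dataclasses import dataclass

        @dataclass
        class MBInfo:
            """Side information collected while decoding the incoming AVC stream."""
            mode: str    # e.g. "SKIP", "16x16", "8x8", "INTRA"
            mv: tuple    # motion vector the original encoder chose

        # Hypothetical candidate tables: which modes are worth re-evaluating in
        # the SVC encoder given the mode already chosen for the co-located
        # macroblock in the decoded H.264/AVC stream.
        CANDIDATES = {
            "SKIP":  ("SKIP", "16x16"),
            "16x16": ("SKIP", "16x16", "16x8", "8x16"),
            "8x8":   ("8x8", "16x8", "8x16"),
            "INTRA": ("INTRA",),
        }

        def fast_mode_decision(avc_mb, rd_cost):
            """Evaluate only a reduced candidate set instead of an exhaustive
            search, seeding motion estimation with the decoded motion vector."""
            best_mode, best_cost = None, float("inf")
            for mode in CANDIDATES[avc_mb.mode]:
                cost = rd_cost(mode, mv_seed=avc_mb.mv)  # refine around the AVC vector
                if cost < best_cost:
                    best_mode, best_cost = mode, cost
            return best_mode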

    Maximizing the Probability of Delivery of Multipoint Relay Broadcast Protocol in Wireless Ad Hoc Networks with a Realistic Physical Layer

    It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that the lognormal shadowing model better matches experimental observations. Previous work on realistic scenarios has focused on unicast, while broadcast requirements are fundamentally different and cannot be derived from the unicast case. Broadcast protocols must therefore be adapted to remain efficient under realistic assumptions. In this paper, we study the well-known multipoint relay (MPR) protocol, in which each node chooses a set of neighbors to act as relays in order to cover its whole 2-hop neighborhood. We give experimental results showing that the original relay-selection method performs poorly under the realistic model, and we propose three new heuristics whose performance demonstrates that they are better suited to it. The first maximizes the probability of correct reception between the node and each considered relay, multiplied by the relay's coverage of the 2-hop neighborhood. The second replaces the coverage term with the average probability of correct reception between the considered neighbor and the 2-hop neighbors it covers. The third keeps the same concept as the second but tries to maximize the coverage level of the 2-hop neighborhood: a 2-hop neighbor is still considered uncovered as long as its coverage level does not exceed a given threshold, so several relays may be selected to cover the same 2-hop neighbor.
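
    The first heuristic can be sketched as a greedy selection; the comments note the substitutions that yield the other two. A minimal Python sketch, assuming precomputed link-reception probabilities; all names and the data layout are illustrative.

        def select_relays(p1, p2, two_hop, coverage_threshold=1):
            """Greedy MPR selection sketch under a probabilistic link model.

            p1[v]      : prob. of correct reception from the node to neighbor v
            p2[v][w]   : prob. of correct reception from neighbor v to 2-hop node w
                         (used by heuristic 2, see comment below)
            two_hop[v] : set of 2-hop neighbors reachable through neighbor v
            """
            cover_count = {w: 0 for v in two_hop for w in two_hop[v]}
            relays = set()
            while any(c < coverage_threshold for c in cover_count.values()):
                def score(v):
                    uncovered = [w for w in two_hop[v]
                                 if cover_count[w] < coverage_threshold]
                    if not uncovered:
                        return 0.0
                    # Heuristic 1: reception probability times coverage. Swapping
                    # len(uncovered) for the mean of p2[v][w] over the uncovered
                    # nodes gives heuristic 2; raising coverage_threshold above 1
                    # gives heuristic 3.
                    return p1[v] * len(uncovered)
                candidates = [v for v in two_hop if v not in relays]
                best = max(candidates, key=score, default=None)
                if best is None or score(best) == 0.0:
                    break                    # remaining 2-hop nodes unreachable
                relays.add(best)
                for w in two_hop[best]:
                    cover_count[w] += 1
            return relays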

    Generalized Points-to Graphs: A New Abstraction of Memory in the Presence of Pointers

    Flow- and context-sensitive points-to analysis is difficult to scale: for top-down approaches, the problem centers on repeated analysis of the same procedure; for bottom-up approaches, the abstractions used to represent procedure summaries have not scaled while preserving precision. We propose a novel abstraction called the Generalized Points-to Graph (GPG), which views points-to relations as memory updates and generalizes them using counts of indirection levels, leaving the unknown pointees implicit. This allows us to construct GPGs as compact representations of bottom-up procedure summaries in terms of memory updates and the control flow between them. Their compactness is ensured by the following optimizations: strength reduction reduces the indirection levels, redundancy elimination removes redundant memory updates and minimizes control flow (without over-approximating data dependence between memory updates), and call inlining enhances the opportunities for these optimizations. We devise novel operations and data flow analyses for these optimizations. Our quest for scalability of points-to analysis leads to the following insight: the real killer of scalability in program analysis is not the amount of data but the amount of control flow that it may be subjected to in search of precision. The effectiveness of GPGs lies in the fact that they discard as much control flow as possible without losing precision (i.e., by preserving data dependence without over-approximation). This is why GPGs are very small even for main procedures that contain the effect of the entire program, and it allows our implementation to scale to 158 kLoC of C programs.
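
    The notion of a memory update generalized by indirection levels, and the strength reduction that composes updates, admits a small illustration. Here is a minimal Python sketch under a deliberately simplified notation; the GPU class and compose function are illustrative, not the paper's exact definitions.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class GPU:
            """A generalized points-to update: indlev_lhs dereferences of lhs are
            assigned indlev_rhs dereferences of rhs. So x = &y is (x,1) <- (y,0),
            x = y is (x,1) <- (y,1), and *x = y is (x,2) <- (y,1)."""
            lhs: str
            indlev_lhs: int
            rhs: str
            indlev_rhs: int

        def compose(consumer, producer):
            """Strength-reduction sketch: if the producer defines what the consumer
            reads through its rhs, substitute it to lower the consumer's
            indirection level. Returns None when composition does not apply."""
            if consumer.rhs == producer.lhs and consumer.indlev_rhs >= producer.indlev_lhs:
                extra = consumer.indlev_rhs - producer.indlev_lhs
                return GPU(consumer.lhs, consumer.indlev_lhs,
                           producer.rhs, producer.indlev_rhs + extra)
            return None

        # Example: producer y = &z, i.e. (y,1) <- (z,0); consumer x = y, i.e.
        # (x,1) <- (y,1). Composition yields x = &z, i.e. (x,1) <- (z,0).
        print(compose(GPU("x", 1, "y", 1), GPU("y", 1, "z", 0)))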

    Refinement Types for Logical Frameworks and Their Interpretation as Proof Irrelevance

    Refinement types sharpen systems of simple and dependent types by offering expressive means to more precisely classify well-typed terms. We present a system of refinement types for LF in the style of recent formulations where only canonical forms are well-typed. Both the usual LF rules and the rules for type refinements are bidirectional, leading to a straightforward proof of decidability of typechecking even in the presence of intersection types. Because we insist on canonical forms, structural rules for subtyping can now be derived rather than being assumed as primitive. We illustrate the expressive power of our system with examples and validate its design by demonstrating a precise correspondence with traditional presentations of subtyping. Proof irrelevance provides a mechanism for selectively hiding the identities of terms in type theories. We show that LF refinement types can be interpreted as predicates using proof irrelevance, establishing a uniform relationship between two previously studied concepts in type theory. The interpretation and its correctness proof are surprisingly complex, lending support to the claim that refinement types are a fundamental construct rather than just a convenient surface syntax for certain uses of proof irrelevance.
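
    The bidirectional discipline the paper relies on can be illustrated with a toy checker, far simpler than LF; the tuple encodings below are invented for exposition, but they show the two modes and how checking against an intersection type stays syntax-directed.

        # Toy bidirectional checker (not LF): types are nested tuples such as
        # ("base", "nat"), ("arrow", A, B) or ("inter", A, B); terms are
        # ("var", x), ("lam", x, body), ("app", f, a) or ("anno", e, T).

        def check(ctx, term, ty):
            """Checking mode: the type is an input."""
            if term[0] == "lam" and ty[0] == "arrow":
                _, x, body = term
                return check({**ctx, x: ty[1]}, body, ty[2])
            if ty[0] == "inter":                 # check against both refinements
                return check(ctx, term, ty[1]) and check(ctx, term, ty[2])
            return synth(ctx, term) == ty        # mode switch: synthesize, compare

        def synth(ctx, term):
            """Synthesis mode: the type is an output (None signals failure)."""
            if term[0] == "var":
                return ctx.get(term[1])
            if term[0] == "anno":
                return term[2] if check(ctx, term[1], term[2]) else None
            if term[0] == "app":
                f = synth(ctx, term[1])
                if f and f[0] == "arrow" and check(ctx, term[2], f[1]):
                    return f[2]
            return None

        # The identity function checks against an intersection of two arrow types:
        NAT, BOOL = ("base", "nat"), ("base", "bool")
        ID = ("lam", "x", ("var", "x"))
        print(check({}, ID, ("inter", ("arrow", NAT, NAT), ("arrow", BOOL, BOOL))))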

    08161 Abstracts Collection -- Scalable Program Analysis

    From April 13 to April 18, 2008, the Dagstuhl Seminar 08161 "Scalable Program Analysis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Redundancy Elimination for LF

    We present a type system extending the dependent type theory LF, whose terms are more amenable to compact representation. This is achieved by carefully omitting certain subterms that are redundant in the sense that they can be recovered from the types of other subterms. This system is capable of omitting more redundant information than previous work in the same vein because of its uniform treatment of higher-order and first-order terms. Moreover, the 'recipe' for reconstructing omitted information is encoded directly into annotations on the types in a signature. This brings to light connections between bidirectional (synthesis vs. checking) typing algorithms of the object language on the one hand, and the bidirectional flow of information in the ambient encoding language on the other. The resulting system is a compromise that seeks to retain both the effectiveness of full unification-based term reconstruction, as found in implementation practice, and the logical simplicity of pure LF.
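
    The omission/reconstruction recipe can be illustrated with a deliberately tiny Python sketch, not LF itself; the "-" annotation syntax and the pair example are invented for exposition.

        # Toy sketch: a signature annotation (the hypothetical "-") marks
        # arguments of a constant as omissible because they are recoverable from
        # the synthesized types of the explicit arguments, e.g.
        #   pair : {A : type}- {B : type}- A -> B -> prod A B

        def reconstruct_pair(x, y, synth_type):
            """Recover the omitted type arguments A and B by synthesis on x, y."""
            A, B = synth_type(x), synth_type(y)
            return ("pair", A, B, x, y)        # the full, redundant application

        # Using Python's own types as a stand-in for type synthesis:
        print(reconstruct_pair(1, "s", lambda v: type(v).__name__))
        # -> ('pair', 'int', 'str', 1, 's')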