3,043 research outputs found

    Shape representation and coding of visual objects in multimedia applications — An overview

    Emerging multimedia applications have created the need for new functionalities in digital communications. Whereas existing compression standards only deal with the audio-visual scene at a frame level, it is now necessary to handle individual objects separately, thus allowing scalable transmission as well as interactive scene recomposition by the receiver. The future MPEG-4 standard aims at providing compression tools addressing these functionalities. Unlike existing frame-based standards, the corresponding coding schemes need to encode shape information explicitly. This paper reviews existing solutions to the problem of shape representation and coding. Region and contour coding techniques are presented and their performance is discussed, considering coding efficiency and rate-distortion control capability, as well as flexibility to application requirements such as progressive transmission, low-delay coding, and error robustness.
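    As a rough illustration of the contour-based techniques this survey covers, the sketch below implements Freeman chain coding, one of the classical contour representations. It is a minimal example, not the paper's method; the function names and the 4x4 square are illustrative only.

```python
# Minimal sketch of Freeman chain coding, a classical contour-based shape
# representation. Directions are indexed 0-7 counter-clockwise from "east"
# in image orientation (north up); steps are (row, col) deltas.

DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(contour):
    """Encode an ordered list of (row, col) contour points as an
    8-connected Freeman chain code."""
    codes = []
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        codes.append(DIRECTIONS.index((r1 - r0, c1 - c0)))
    return codes

def decode(start, codes):
    """Reconstruct the contour points from a start point and chain code."""
    points = [start]
    for code in codes:
        dr, dc = DIRECTIONS[code]
        r, c = points[-1]
        points.append((r + dr, c + dc))
    return points

# A unit square traversed clockwise in image coordinates:
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
codes = chain_code(square)            # [0, 6, 4, 2]
assert decode((0, 0), codes) == square
```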

    Object-based video representations: shape compression and object segmentation

    Object-based video representations are considered to be useful for easing the process of multimedia content production and enhancing user interactivity in multimedia productions. Object-based video presents several new technical challenges, however. Firstly, as with conventional video representations, compression of the video data is a requirement. For object-based representations, it is necessary to compress the shape of each video object as it moves in time. This amounts to the compression of moving binary images. This is achieved by the use of a technique called context-based arithmetic encoding. The technique is utilised by applying it to rectangular pixel blocks and as such it is consistent with the standard tools of video compression. The block-based application also facilitates the exploitation of temporal redundancy in the sequence of binary shapes. For the first time, context-based arithmetic encoding is used in conjunction with motion compensation to provide inter-frame compression. The method, described in this thesis, has been thoroughly tested throughout the MPEG-4 core experiment process and, due to favourable results, it has been adopted as part of the MPEG-4 video standard. The second challenge lies in the acquisition of the video objects. Under normal conditions, a video sequence is captured as a sequence of frames and there is no inherent information about what objects are in the sequence, not to mention information relating to the shape of each object. Some means of segmenting semantic objects from general video sequences is required. For this purpose, several image analysis tools may be of help and, in particular, it is believed that video object tracking algorithms will be important. A new tracking algorithm is developed based on piecewise polynomial motion representations and statistical estimation tools, e.g. the expectation-maximisation method and the minimum description length principle.
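    The sketch below illustrates the context-modelling idea behind context-based arithmetic encoding of binary shapes. MPEG-4 CAE builds a context from already-decoded neighbouring pixels and drives a binary arithmetic coder with per-context adaptive probabilities; this simplified version uses a 4-pixel causal context (the real standard uses larger intra and inter contexts, the latter including motion-compensated pixels) and reports the ideal code length instead of running a full arithmetic coder.

```python
# Simplified context model for binary shape coding: each pixel is predicted
# from a 4-bit causal context (W, NW, N, NE neighbours), with adaptive
# Laplace-smoothed counts per context. The ideal code length -log2 p is
# summed in place of an actual binary arithmetic coder.

import math

def context(img, r, c):
    """4-bit causal context from W, NW, N, NE neighbours (0 off-image)."""
    def px(rr, cc):
        if 0 <= rr < len(img) and 0 <= cc < len(img[0]):
            return img[rr][cc]
        return 0
    return (px(r, c - 1) << 3) | (px(r - 1, c - 1) << 2) \
         | (px(r - 1, c) << 1) | px(r - 1, c + 1)

def code_length_bits(img):
    """Sum of -log2 p(pixel | context) with adaptive per-context counts."""
    counts = [[1, 1] for _ in range(16)]   # [zeros, ones] per context
    bits = 0.0
    for r in range(len(img)):
        for c in range(len(img[0])):
            ctx, bit = context(img, r, c), img[r][c]
            zeros, ones = counts[ctx]
            p = (ones if bit else zeros) / (zeros + ones)
            bits += -math.log2(p)
            counts[ctx][bit] += 1          # adapt the model as we code
    return bits

shape = [[0, 1, 1, 0],
         [1, 1, 1, 1],
         [1, 1, 1, 1],
         [0, 1, 1, 0]]
print(f"~{code_length_bits(shape):.1f} bits for 16 pixels")
```

    Because the counts adapt per context, structured shapes cost far fewer bits than the 1 bit/pixel of a memoryless coder, which is the redundancy the block-based CAE scheme exploits.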

    The CAP cancer protocols – a case study of caCORE based data standards implementation to integrate with the Cancer Biomedical Informatics Grid

    BACKGROUND: The Cancer Biomedical Informatics Grid (caBIGℱ) is a network of individuals and institutions, creating a world wide web of cancer research. An important aspect of this informatics effort is the development of consistent practices for data standards development, using a multi-tier approach that facilitates semantic interoperability of systems. The semantic tiers include (1) information models, (2) common data elements, and (3) controlled terminologies and ontologies. The College of American Pathologists (CAP) cancer protocols and checklists are an important reporting standard in pathology, for which no complete electronic data standard is currently available. METHODS: In this manuscript, we provide a case study of Cancer Common Ontologic Representation Environment (caCORE) data standard implementation of the CAP cancer protocols and checklists model – an existing and complex paper-based standard. We illustrate the basic principles, goals and methodology for developing caBIGℱ models. RESULTS: Using this example, we describe the process required to develop the model, the technologies and data standards on which the process and models are based, and the results of the modeling effort. We address difficulties we encountered and modifications to caCORE that will address these problems. In addition, we describe four ongoing development projects that will use the emerging CAP data standards to achieve integration of tissue banking and laboratory information systems. CONCLUSION: The CAP cancer checklists can be used as the basis for an electronic data standard in pathology using the caBIGℱ semantic modeling methodology.
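    The following is a hypothetical sketch (not the actual caCORE API) of how the three semantic tiers named in this abstract fit together: an information-model class whose attributes are common data elements (CDEs), each constrained to permissible values drawn from a controlled terminology. All identifiers and values are placeholders.

```python
# Hypothetical illustration of the semantic tiers: information model ->
# common data elements -> controlled-terminology permissible values.

from dataclasses import dataclass

@dataclass(frozen=True)
class CommonDataElement:
    public_id: str                 # registry identifier (placeholder)
    name: str
    permissible_values: frozenset  # codes from a controlled terminology

    def validate(self, value: str) -> str:
        if value not in self.permissible_values:
            raise ValueError(f"{value!r} not permissible for {self.name}")
        return value

# Illustrative CDE: histologic grade with terminology-coded values.
HISTOLOGIC_GRADE = CommonDataElement(
    public_id="CDE-0000",          # placeholder, not a real registry ID
    name="HistologicGrade",
    permissible_values=frozenset({"G1", "G2", "G3", "GX"}),
)

@dataclass
class CancerChecklistItem:
    """One entry of a CAP-style checklist, bound to a CDE (illustrative)."""
    cde: CommonDataElement
    value: str

    def __post_init__(self):
        self.cde.validate(self.value)

item = CancerChecklistItem(HISTOLOGIC_GRADE, "G2")   # accepted
# CancerChecklistItem(HISTOLOGIC_GRADE, "bad")       # would raise ValueError
```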

    æ·±ć±€ć­Šçż’ă«ćŸșă„ăç”»ćƒćœ§çžźăšć“èłȘè©•äŸĄ

    æ—©ć€§ć­Šäœèš˜ç•Șć·:新8427早çšČ田性

    Integrated modeling and analysis methodologies for architecture-level vehicle design.

    In order to satisfy customer expectations, a ground vehicle must be designed to meet a broad range of performance requirements. A satisfactory vehicle design process implements a set of requirements reflecting necessary, but perhaps not sufficient, conditions for assuring success in a highly competitive market. An optimal architecture-level vehicle design configuration is one of the most important of these requirements. A basic layout that is efficient and flexible permits significant reductions in the time needed to complete the product development cycle, with commensurate reductions in cost. Unfortunately, architecture-level design is the most abstract phase of the design process. The high-level concepts that characterize these designs do not lend themselves to traditional analyses normally used to characterize, assess, and optimize designs later in the development cycle. This research addresses the need for architecture-level design abstractions that can be used to support ground vehicle development. The work begins with a rigorous description of hierarchical function-based abstractions representing not the physical configuration of the elements of a vehicle, but their function within the design space. The hierarchical nature of the abstractions lends itself to object orientation, convenient for software implementation purposes, as well as description of components, assemblies, feature groupings based on non-structural interactions, and eventually, full vehicles. Unlike the traditional early-design abstractions, the completeness of our function-based hierarchical abstractions, including their interactions, allows their use as a starting point for the derivation of analysis models. The scope of the research in this dissertation includes development of meshing algorithms for abstract structural models, a rigid-body analysis engine, and a fatigue analysis module. It is expected that the results obtained in this study will move systematic design and analysis to the earliest phases of the vehicle development process, leading to more highly optimized architectures, and eventually, better ground vehicles. This work shows that architecture-level abstractions in many cases are better suited for life cycle support than geometric CAD models. Finally, substituting modeling, simulation, and optimization for intuition and guesswork will do much to mitigate the risk inherent in large projects by minimizing the possibility of incorporating irrevocably compromised architecture elements into a vehicle design that no amount of detail-level reengineering can undo.
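    A minimal sketch of the kind of hierarchical, function-based, object-oriented abstraction this abstract describes: components expose functions rather than geometry, assemblies compose them, and non-structural interactions are grouped explicitly. The class and method names are illustrative, not the dissertation's actual implementation.

```python
# Illustrative function-based hierarchy: elements carry functions (not
# geometry), assemblies aggregate them, interactions capture non-structural
# couplings such as thermal or NVH paths.

class FunctionElement:
    """A leaf node in the function hierarchy."""
    def __init__(self, name, functions=()):
        self.name = name
        self.functions = set(functions)   # e.g. {"generate torque"}

    def provides(self):
        return set(self.functions)

class Assembly(FunctionElement):
    """Composes children; provides the union of their functions."""
    def __init__(self, name, children):
        super().__init__(name)
        self.children = list(children)

    def provides(self):
        out = set(self.functions)
        for child in self.children:
            out |= child.provides()
        return out

class Interaction:
    """A non-structural coupling between elements (thermal, NVH, ...)."""
    def __init__(self, kind, elements):
        self.kind, self.elements = kind, tuple(elements)

engine = FunctionElement("engine", {"generate torque"})
gearbox = FunctionElement("gearbox", {"transform torque"})
powertrain = Assembly("powertrain", [engine, gearbox])
heat_path = Interaction("thermal", [engine, gearbox])

assert "generate torque" in powertrain.provides()
```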

    Identification through Finger Bone Structure Biometrics


    Proceedings of the 2021 Symposium on Information Theory and Signal Processing in the Benelux, May 20-21, TU Eindhoven


    Finger Vein Verification with a Convolutional Auto-encoder


    Towards Intelligent Runtime Framework for Distributed Heterogeneous Systems

    Scientific applications strive for increased memory and computing performance, requiring massive amounts of data and time to produce results. Applications utilize large-scale, parallel computing platforms with advanced architectures to accommodate their needs. However, developing performance-portable applications for modern, heterogeneous platforms requires substantial effort and expertise in both the application and systems domains. This is more relevant for unstructured applications whose workflow is not statically predictable due to their heavily data-dependent nature. One possible solution for this problem is the introduction of an intelligent Domain-Specific Language (iDSL) that transparently helps to maintain correctness, hides the idiosyncrasies of low-level hardware, and scales applications. An iDSL includes domain-specific language constructs, a compilation toolchain, and a runtime providing task scheduling, data placement, and workload balancing across and within heterogeneous nodes. In this work, we focus on the runtime framework. We introduce a novel design and extension of a runtime framework, the Parallel Runtime Environment for Multicore Applications. In response to the ever-increasing intra/inter-node concurrency, the runtime system supports efficient task scheduling and workload balancing at both levels while allowing the development of custom policies. Moreover, the new framework provides abstractions supporting the utilization of heterogeneous distributed nodes consisting of CPUs and GPUs and is extensible to other devices. We demonstrate that by utilizing this work, an application (or the iDSL) can scale its performance on heterogeneous exascale-era supercomputers with minimal effort. A future goal for this framework (out of the scope of this thesis) is to be integrated with machine learning to improve its decision-making and performance further. As a bridge to this goal, since the framework is under development, we experiment with data from Nuclear Physics Particle Accelerators and demonstrate the significant improvements achieved by utilizing machine learning in the hit-based track reconstruction process.
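    A hypothetical sketch of the intra-node scheduling idea this abstract describes: per-device worker queues (here one CPU and one GPU worker) with work stealing for load balance. This is illustrative only, not the runtime framework's actual API; real schedulers would also weigh data placement and device affinity.

```python
# Toy per-device workers with work stealing. Tasks are plain callables;
# each worker drains its own queue and steals from peers when idle.

import queue
import threading

class Worker(threading.Thread):
    def __init__(self, name, peers):
        super().__init__(daemon=True)
        self.name, self.tasks, self.peers = name, queue.Queue(), peers

    def steal(self):
        """Try to take a task from another worker's queue."""
        for peer in self.peers:
            if peer is not self:
                try:
                    return peer.tasks.get_nowait()
                except queue.Empty:
                    continue
        return None

    def run(self):
        while True:
            try:
                task = self.tasks.get(timeout=0.05)
            except queue.Empty:
                task = self.steal()
                if task is None:
                    return                # no work anywhere: finish
            task(self.name)

workers = []
cpu, gpu = Worker("cpu0", workers), Worker("gpu0", workers)
workers.extend([cpu, gpu])

for i in range(8):                        # enqueue everything on the CPU...
    cpu.tasks.put(lambda dev, i=i: print(f"task {i} ran on {dev}"))

for w in workers:                         # ...stealing shifts work to the GPU
    w.start()
for w in workers:
    w.join()
```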
    • 
