    Picture coding in viewdata systems

    Viewdata systems in commercial use at present offer the facility for transmitting alphanumeric text and graphic displays via the public switched telephone network. An enhancement to the system would be to transmit true video images instead of graphics. Such a system, under development in Britain at present, uses Differential Pulse Code Modulation (DPCM) at a transmission rate of 1200 bits/sec. Error protection is achieved by the use of error protection codes, which increases the channel requirement. In this thesis, error detection and correction of DPCM-coded video signals without the use of channel error protection is studied. The scheme operates entirely at the receiver, examining the local statistics of the received data to determine the presence of errors. Error correction is then undertaken by interpolation from adjacent correct or previously corrected data.

    DPCM coding of pictures has the inherent disadvantages of a slow build-up of the displayed picture at the receiver and difficulties with image size manipulation. In order to fit the pictorial information into a viewdata page, its size has to be reduced. Unitary transforms, typically the discrete Fourier transform (DFT), the discrete cosine transform (DCT) and the Hadamard transform (HT), enable lowpass filtering and decimation to be carried out in a single operation in the transform domain. Size reductions of different orders are considered and the merits of the DFT, DCT and HT are investigated.

    With limited channel capacity, it is desirable to remove the redundancy present in the source picture in order to reduce the bit rate. Orthogonal transformation decorrelates the spatial sample distribution and packs most of the image energy into the low-order coefficients. This property is exploited in bit-reduction schemes that adapt to the local statistics of the different source pictures used. In some cases, bit rates of less than 1.0 bit/pel are achieved with satisfactory received picture quality. Unlike DPCM systems, transform coding has the advantage of being able to display a low-resolution picture rapidly by initial inverse transformation of the low-order coefficients only. Picture resolution is then progressively built up as more coefficients are received and decoded. Different sequences of picture update are investigated to find the one that achieves the best subjective quality with the fewest possible coefficients transmitted.
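    The single-step lowpass-and-decimate operation and the progressive coarse-to-fine display described above can be illustrated with a short sketch. The following Python code is a minimal illustration under assumed tooling (NumPy and SciPy), not the thesis's implementation; the names dct_downscale and progressive_decode are hypothetical. It truncates the 2D DCT of an image to filter and decimate in one transform-domain step, and reconstructs a low-resolution preview from the low-order coefficients alone.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_downscale(image, factor=2):
            """Lowpass filter and decimate in a single transform-domain step:
            keep only the low-order DCT coefficients and inverse-transform
            at the smaller size."""
            h, w = image.shape
            nh, nw = h // factor, w // factor
            coeffs = dctn(image.astype(float), norm="ortho")
            # Rescale so mean brightness survives the smaller inverse transform.
            low = coeffs[:nh, :nw] * np.sqrt((nh * nw) / (h * w))
            return idctn(low, norm="ortho")

        def progressive_decode(coeffs, k):
            """Decode only the k x k low-order coefficients at full size:
            a blurred preview that sharpens as more coefficients arrive."""
            partial = np.zeros_like(coeffs)
            partial[:k, :k] = coeffs[:k, :k]
            return idctn(partial, norm="ortho")

    Calling progressive_decode with increasing k mimics the picture update the abstract describes: a recognisable low-resolution image appears after only a few coefficients and is refined as the remainder are received and decoded.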

    Advanced Information Systems and Technologies

    This book comprises the proceedings of the V International Scientific Conference "Advanced Information Systems and Technologies" (AIST-2017). The papers cover issues related to system analysis and modeling, project management, information system engineering, intelligent data processing, computer networking and telecommunications. They will be useful for students, graduate students and researchers interested in computer science.

    Temporal integration of loudness as a function of level

    The contour tree image encoding technique and file format

    The process of contourization is presented, which converts a raster image into a discrete set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method.

    Simplification of the contour tree has been undertaken by merging contour tree nodes, thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimises noticeable artifacts in the simplified image. The contour merging technique offers a complementary lossy compression system to the QDCT (Quantised Discrete Cosine Transform). The artifacts introduced by the two methods are very different: QDCT produces a general blurring and adds extra highlights in the form of overshoots, whereas contour merging sharpens edges, reduces highlights and introduces a degree of false contouring.

    A format based on the contourization technique which caters for most image types is defined, called the contour tree image format. Image operations directly on this compressed format have been studied; for certain manipulations these can offer significant operational speed increases over using a standard raster image format. A couple of examples of operations specific to the contour tree format are presented, showing some of the features of the new format.

    Science and Engineering Research Council
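    The plateau decomposition on which contourization rests can be sketched briefly. The Python code below is a minimal illustration assuming NumPy and SciPy; plateaux is a hypothetical helper, not the thesis's coder. It labels each 4-connected region of constant grey level; the contour tree proper would then be built by recording which plateau spatially encloses which.

        import numpy as np
        from scipy import ndimage

        # 4-connectivity: a plateau grows through edge-adjacent pixels only.
        FOUR_CONNECTED = np.array([[0, 1, 0],
                                   [1, 1, 1],
                                   [0, 1, 0]])

        def plateaux(image):
            """Decompose a raster image into plateaux (maximal 4-connected
            regions of equal grey level). Returns a label image and a dict
            mapping each plateau id to its grey level."""
            labels = np.zeros(image.shape, dtype=np.int64)
            levels = {}
            next_id = 1
            for v in np.unique(image):
                comp, n = ndimage.label(image == v, structure=FOUR_CONNECTED)
                mask = comp > 0
                labels[mask] = comp[mask] + (next_id - 1)
                for c in range(n):
                    levels[next_id + c] = int(v)
                next_id += n
            return labels, levels

    Merging a plateau into its enclosing neighbour (adopting the neighbour's grey level) is then the node-merging step that lowers the tree's entropy, at the cost of the false contouring noted above.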

    Space Communications: Theory and Applications. Volume 3: Information Processing and Advanced Techniques. A Bibliography, 1958 - 1963

    Annotated bibliography on information processing and advanced communication techniques: theory and applications of space communication.

    Composing Institutions and Institutionalising Composers: Value and Discipline in Contemporary English University Composition

    Academics in the field of composition are under numerous pressures and influences. This thesis is based on extensive interviews with composer-academics and examines the field of university composition as the intersection of art worlds, an academic discipline, and an organisationally managed form of work. In so doing, it theorises the particular ways in which the field is formed, maintained, and reformed by social and organisational pressures. This particularly concerns the theorisation of conflict, looking at how contradictions between different pressures have the potential to change the practices of composer-academics. Two key themes are placed at the centre of this theorisation: the value of authenticity and the discipline of rationalisation. The former attributes legitimacy to composers whose work is seen as autonomous, and the latter extends the managerialism of the university organisation through the key rationalising technic of grammatisation. Throughout, these two ideas are in play, at every juncture setting up a tension between freedom and control.

    The first part lays out the field by considering its composition and the ways in which it may usefully be categorised. Part Two examines the practices of research (composing) and teaching and looks at how the university organisation interacts with these. With organisational rationalism taken as the constant, the ideal-typical institutions laid out suggest an affinity for certain forms of experimental composition and a banking approach to teaching. The final part flips this, taking the ideal type of autonomy as the constant and theorising what a university that is not in conflict with this fundamental artistic (and academic) ideal could look like, in the form of the 'Liminal University'.