Engaging Citizens with Televised Election Debates through Online Interactive Replays
In this paper we tackle the crisis of political trust and public engagement with politics by investigating new methods and tools for watching and taking part in televised political debates. The paper presents relevant research at the intersection of citizenship, technologies and government/democracy, and describes the motivation, requirements and design of Democratic Replay, an online interactive video replay platform that offers a persistent, customisable digital space for: (a) members of the public to express their views as they watch online videos of political events; and (b) enabling a richer collective understanding of what goes on in these complex media events.
An availability study for a SME
A case study of an availability analysis for a small commercial company is presented. The analysis was carried out to meet a customer requirement for the availability of an electronic ground-based system in a benign environment. Availability calculations were based on the failure data provided, and the methodology, together with the problems encountered and how they were dealt with, is discussed. The methodology includes failure classification according to MIL-HDBK-781A and how it may be used to promote and develop internal processes. A commentary on the background to reliability/availability specification is provided, and a number of recommendations for monitoring reliability and availability are given.
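As a rough illustration of the kind of calculation involved (not the study's actual data or model), a minimal sketch of the standard steady-state availability estimate from MTBF and MTTR follows:

```python
# Minimal sketch of a steady-state (inherent) availability calculation.
# The figures are illustrative only, not the failure data from the study.

def steady_state_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """A = MTBF / (MTBF + MTTR), the usual inherent availability estimate."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    # Example: a unit with a 5,000 h mean time between failures and a
    # 4 h mean time to repair.
    a = steady_state_availability(5000.0, 4.0)
    print(f"Availability: {a:.5f}")   # ~0.99920
```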
A statistical bit error generator for emulation of complex forward error correction schemes
Forward error correction (FEC) schemes are generally used in wireless communication systems to maintain an acceptable quality of service. Various models have been proposed in the literature to predict the end-to-end quality of wireless video systems. However, most of these models utilize simplistic error generators which do not accurately represent any practical wireless channel. A more accurate way is to evaluate the quality of a video system using Monte Carlo techniques; however, these necessitate huge computational times, making such methods impractical. This paper proposes an alternative method that can be used to model complex communication systems with minimal computational time. The proposed three random variable method was used to model two FEC schemes adopted by the digital video broadcasting (DVB) standard. Simulation results confirm that this method closely matches the performance of the considered communication systems in both bit error rate (BER) and peak signal-to-noise ratio (PSNR).
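For context, the sketch below emulates bursty channel errors with a simple two-state Gilbert-Elliott generator; this is a generic stand-in, not the paper's three random variable method, and all parameter values are illustrative:

```python
import numpy as np

# Sketch of a simple statistical bit-error generator (two-state
# Gilbert-Elliott model). Shown only as a generic stand-in; it is NOT the
# paper's three-random-variable method, whose parameters are not given here.

def gilbert_elliott_errors(n_bits, p_gb=1e-4, p_bg=1e-1,
                           ber_good=1e-6, ber_bad=1e-2, rng=None):
    """Return a boolean error mask of length n_bits."""
    rng = np.random.default_rng() if rng is None else rng
    errors = np.zeros(n_bits, dtype=bool)
    bad = False
    for i in range(n_bits):
        # State transition: good <-> bad channel conditions.
        if bad:
            bad = rng.random() >= p_bg      # stay bad unless we recover
        else:
            bad = rng.random() < p_gb       # occasionally enter a burst
        ber = ber_bad if bad else ber_good
        errors[i] = rng.random() < ber
    return errors

mask = gilbert_elliott_errors(100_000)
print("Simulated BER:", mask.mean())
```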
Improving motion vector prediction using linear regression
Motion vectors take up a large portion of the H.264/AVC encoded bitstream. This video coding standard employs predictive coding to minimize the amount of motion vector information to be transmitted. However, motion vectors still account for around 40% of the transmitted bitstream, which motivates further research in this area. This paper presents an algorithm which employs a feature selection process to select the neighboring motion vectors that are most suitable to predict the motion vector mv being encoded. The selected motion vectors are then used to approximate mv using Linear Regression. Simulation results indicate a reduction in Mean Squared Error (MSE) of around 22%, which reduces the residual error of the predictively coded motion vectors. This suggests that higher compression efficiencies can be achieved using the proposed Linear Regression based motion vector predictor.
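A minimal sketch of the underlying idea, fitting a least-squares predictor for one motion-vector component from a few neighbouring blocks, is given below; the synthetic data and the choice of neighbours are assumptions for illustration, not the paper's feature-selection procedure:

```python
import numpy as np

# Minimal sketch of a linear-regression motion-vector predictor: fit weights
# that map neighbouring MVs (e.g. left, top, top-right blocks) to the MV of
# the current block, one component at a time. The data below is synthetic and
# the choice of neighbours is illustrative, not the paper's selected set.

rng = np.random.default_rng(0)
n = 500
neighbours = rng.normal(size=(n, 3))                      # one MV component from 3 neighbours
true_w = np.array([0.5, 0.3, 0.2])
mv = neighbours @ true_w + rng.normal(scale=0.1, size=n)  # current-block MV component

# Least-squares fit (with intercept) on a training split.
X = np.hstack([neighbours, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(X[:400], mv[:400], rcond=None)

pred = X[400:] @ w
median_pred = np.median(neighbours[400:], axis=1)         # H.264/AVC-style median predictor
print("LR     MSE:", np.mean((mv[400:] - pred) ** 2))
print("Median MSE:", np.mean((mv[400:] - median_pred) ** 2))
```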
Depth coding using depth discontinuity prediction and in-loop boundary reconstruction filtering
This paper presents a depth coding strategy that employs K-means clustering to segment the sequence of depth images into K clusters. The resulting clusters are losslessly compressed and transmitted as supplemental enhancement information to aid the decoder in predicting macroblocks containing depth discontinuities. This method further employs an in-loop boundary reconstruction filter to reduce distortions at the edges. The proposed algorithm was integrated within both the H.264/AVC and H.264/MVC video coding standards. Simulation results demonstrate that the proposed scheme outperforms state-of-the-art depth coding schemes, where rendered Peak Signal-to-Noise Ratio (PSNR) gains between 0.1 dB and 0.5 dB were observed.
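The K-means segmentation step alone could look roughly like the sketch below; the synthetic depth map and the quantile initialisation are illustrative assumptions, and the in-loop boundary reconstruction filter and codec integration are not shown:

```python
import numpy as np

# Sketch of the K-means step only: cluster a depth image into K depth levels
# so that cluster boundaries roughly follow depth discontinuities.

def kmeans_depth(depth, k=4, iters=20):
    """Cluster depth values into k levels; return a label map."""
    vals = depth.reshape(-1, 1).astype(float)
    # Initialise centroids from quantiles of the depth values (an assumption).
    centroids = np.quantile(vals, np.linspace(0.1, 0.9, k)).reshape(-1, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(vals - centroids.T), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = vals[labels == j].mean()
    return labels.reshape(depth.shape)

# Synthetic ramp-like depth map with a little noise, for illustration only.
depth = np.clip(np.add.outer(np.arange(64), np.arange(64)) +
                np.random.default_rng(1).normal(scale=3, size=(64, 64)), 0, 255)
labels = kmeans_depth(depth, k=4)
print("Cluster sizes:", np.bincount(labels.ravel()))
```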
Cultural and economic complementarities of spatial agglomeration in the British television broadcasting industry: Some explorations.
This paper considers the processes supporting agglomeration in the British television broadcasting industry. It compares and contrasts the insights offered by the cultural turn in geography and more conventionally economic approaches. It finds that culture and institutions are fundamental to the constitution of production and exchange relationships and also that they solve fundamental economic problems of coordinating resources under conditions of uncertainty and limited information. Processes at a range of spatial scales are important, from highly local to global, and conventional economics casts some light on which firms are most active and successful
On fusion algebra of chiral models
We discuss the algebraic setting of chiral models in terms of the statistical dimensions of their fields. In particular, the conformal dimensions and the central charge of the chiral models are calculated from their braid matrices. Furthermore, at level K=2, we present the characteristic polynomials of their fusion matrices in a factored form.
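For background, the standard fusion-algebra relations referred to here can be summarised as follows (general facts about rational models, not the paper's specific level K=2 results):

```latex
% Standard fusion-algebra background (not the paper's level K=2 polynomials):
% the fusion matrices (N_a)_b{}^c = N_{ab}{}^c furnish the regular
% representation of the fusion ring,
\begin{align}
  \phi_a \times \phi_b &= \sum_c N_{ab}{}^{c}\,\phi_c ,
  &
  N_a N_b &= \sum_c N_{ab}{}^{c}\, N_c .
\end{align}
% By the Verlinde formula they are simultaneously diagonalised by the modular
% S-matrix, with eigenvalues S_{aj}/S_{0j}; the statistical (quantum)
% dimension of \phi_a is the largest such eigenvalue,
\begin{equation}
  d_a \;=\; \frac{S_{a0}}{S_{00}} ,
\end{equation}
% so d_a appears as the largest root of the characteristic polynomial of N_a.
```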
Digital switchover in Europe
This article discusses the political, economic, technological and human aspects of the digital switchover in Europe and explores various policies for managing the process. The article first examines the advantages and drawbacks of digital switchover, and identifies a number of challenges and policy dilemmas in making switchover an achievable objective. It goes on to look at digital television adoption across Europe and assesses the effectiveness of free-to-air digital television in accelerating take-up. Finally, the article examines EU initiatives as well as national plans for digital switchover and proposes various measures for encouraging the take-up of digital services and thereby bringing forward the likely date of analogue switch-off.
Improved rate-adaptive codes for distributed video coding
The research work is partially funded by the STEPS (Malta) scholarship, which is partly financed by the European Union - European Social Fund (ESF 1.25). Distributed Video Coding (DVC) is a coding paradigm which shifts the major computationally intensive tasks from the encoder to the decoder. Temporal correlation is exploited at the decoder by predicting the Wyner-Ziv (WZ) frames from the adjacent key frames. Compression is then achieved by transmitting just the parity information required to correct the predicted frame and recover the original frame. This paper proposes an algorithm which identifies most of the unreliable bits in the predicted bit planes by considering the discrepancies in the previously decoded bit plane. The design of the Low Density Parity Check (LDPC) codes used is then biased to provide better protection to the unreliable bits. Simulation results show that, for the same target quality, the proposed scheme can reduce the WZ bit rate by up to 7% compared to traditional schemes.
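A minimal sketch of the unreliable-bit identification idea is shown below; the window size and threshold are illustrative assumptions, and the biased LDPC code design itself is not reproduced:

```python
import numpy as np

# Sketch: flag side-information bits as "unreliable" where the previously
# decoded bit plane disagreed with its side information in the same
# neighbourhood. Window size and threshold are illustrative assumptions.

def unreliable_mask(prev_si_plane, prev_decoded_plane, window=5, thresh=1):
    """Return a boolean mask marking positions likely to be wrong in the next plane."""
    errors = (prev_si_plane != prev_decoded_plane).astype(int)
    h, w = errors.shape
    pad = window // 2
    padded = np.pad(errors, pad)
    # Count discrepancies in a window around each position.
    counts = np.zeros_like(errors)
    for dy in range(window):
        for dx in range(window):
            counts += padded[dy:dy + h, dx:dx + w]
    return counts >= thresh

rng = np.random.default_rng(2)
si = rng.integers(0, 2, size=(72, 88))
decoded = si.copy()
decoded[rng.random(si.shape) < 0.03] ^= 1      # 3% of bits were corrected
mask = unreliable_mask(si, decoded)
print("Fraction flagged unreliable:", mask.mean())
```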
Adaptive rounding operator for efficient Wyner-Ziv video coding
The research work disclosed in this publication is partially funded by the Strategic Educational Pathways Scholarship Scheme (Malta). The scholarship is part-financed by the European Union - European Social Fund (ESF 1.25). The Distributed Video Coding (DVC) paradigm can theoretically reach the same coding efficiencies as predictive block-based video coding schemes, such as H.264/AVC. However, current DVC architectures are still far from this ideal performance. This is mainly attributed to inaccuracies in the Side Information (SI) predicted at the decoder. The work in this paper presents a coding scheme which tries to avoid mismatch in the SI predictions caused by small variations in light intensity. Using the appropriate rounding operator for every coefficient, the proposed method significantly reduces the correlation noise between the Wyner-Ziv (WZ) frame and the corresponding SI, achieving higher coding efficiencies. Experimental results demonstrate that the average Peak Signal-to-Noise Ratio (PSNR) is improved by up to 0.56 dB relative to the DISCOVER codec.
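An idealised illustration of a per-coefficient rounding choice follows; it assumes the WZ/SI mismatch is observable when the operator is selected, which a real codec could only approximate, so it is not the proposed codec's actual rule:

```python
import numpy as np

# Idealised illustration: for each coefficient, quantise the side information
# with either floor or round-to-nearest and keep whichever operator leaves the
# smaller residual against the Wyner-Ziv coefficient. All values synthetic.

rng = np.random.default_rng(4)
wz = rng.integers(0, 256, size=5000).astype(float)         # WZ coefficients
si = wz + rng.normal(loc=0.4, scale=0.6, size=wz.shape)    # SI with a small intensity shift

floor_si = np.floor(si)
round_si = np.round(si)
adaptive_si = np.where(np.abs(round_si - wz) < np.abs(floor_si - wz),
                       round_si, floor_si)

for name, est in [("floor", floor_si), ("round", round_si), ("adaptive", adaptive_si)]:
    print(f"{name:8s} correlation-noise MSE: {np.mean((wz - est) ** 2):.4f}")
```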