Low-Complexity and Hardware-Friendly H.265/HEVC Encoder for Vehicular Ad-Hoc Networks
Real-time video streaming over vehicular ad-hoc networks (VANETs) is a critical challenge for road safety applications. The purpose of this paper is to reduce the computational complexity of the High Efficiency Video Coding (HEVC) encoder for VANETs. First, a coding tree unit depth decision algorithm is presented that controls the depth search range based on a novel spatiotemporal neighborhood set. Second, a Bayesian classifier is used for the prediction unit decision in inter-prediction, with the prior probability calculated by a Gibbs random field model. Simulation results show that the overall algorithm significantly reduces encoding time with a reasonably low loss in coding efficiency. Compared to the HEVC reference software HM16.0, the encoding time is reduced by up to 63.96%, while the Bjontegaard delta bit rate increases by only 0.76–0.80% on average. Moreover, the proposed encoder is low-complexity and hardware-friendly for video codecs that reside on mobile vehicles in VANETs.
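The Bayesian prediction-unit decision described above can be pictured as a posterior comparison between candidate modes. Everything in the sketch below is illustrative: the single feature, the class-conditional statistics, and the priors (which merely stand in for the Gibbs-random-field prior the paper derives from the spatiotemporal neighborhood) are invented for the example, not taken from the paper.

```python
import math

# Hypothetical class-conditional statistics (mean, std) for one feature,
# e.g. the RD cost of the co-located block, estimated offline.
CLASS_STATS = {
    "SKIP":  (10.0, 4.0),
    "INTER": (25.0, 8.0),
}

def gaussian_pdf(x, mean, std):
    """Univariate Gaussian likelihood p(x | class)."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))

def decide_pu_mode(rd_cost, priors):
    """Pick the PU mode with the largest posterior p(class | rd_cost).

    `priors` plays the role of the neighborhood-derived prior
    probability; here it is just a plain dict.
    """
    posteriors = {
        c: priors[c] * gaussian_pdf(rd_cost, *CLASS_STATS[c])
        for c in CLASS_STATS
    }
    return max(posteriors, key=posteriors.get)

# Low RD cost plus a neighborhood that mostly chose SKIP -> SKIP.
print(decide_pu_mode(9.0, {"SKIP": 0.7, "INTER": 0.3}))   # SKIP
print(decide_pu_mode(30.0, {"SKIP": 0.3, "INTER": 0.7}))  # INTER
```

Skipping the full inter-prediction search whenever the posterior favors SKIP is what saves encoding time.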
Maximum-Entropy-Model-Enabled Complexity Reduction Algorithm in Modern Video Coding Standards
Symmetry considerations play a key role in modern science: any differentiable symmetry of the action of a physical system has a corresponding conservation law, and symmetry may be regarded as a reduction of entropy. This work focuses on reducing the computational complexity of modern video coding standards by using the maximum entropy principle. The high computational complexity of the coding unit (CU) size decision in modern video coding standards is a critical challenge for real-time applications. This problem is solved with a novel approach that treats the CU termination, skip, and normal decisions as a three-class decision problem. The maximum entropy model (MEM) is formulated for the CU size decision problem to optimize the conditional entropy, and the improved iterative scaling (IIS) algorithm is used to solve this optimization problem. The classification features consist of spatio-temporal information about the CU, including the rate-distortion (RD) cost, coded block flag (CBF), and depth. As a case study, the proposed method is applied to the High Efficiency Video Coding (H.265/HEVC) standard. The experimental results demonstrate that the proposed method significantly reduces the computational complexity of the H.265/HEVC encoder. Compared with the H.265/HEVC reference model, the proposed method reduces the average encoding time by 53.27% and 56.36% under the low-delay and random-access configurations, while the Bjontegaard delta bit rates (BD-BRs) are only 0.72% and 0.93% on average.
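A maximum entropy model of this kind is equivalent to a conditional log-linear (softmax) classifier over the features. The sketch below is illustrative only: the weights are hand-picked stand-ins for parameters that would be fitted with improved iterative scaling, and the feature values are invented.

```python
import math

CLASSES = ("terminate", "skip", "normal")

# Hypothetical weights lambda[y][i] over features [rd_cost, cbf, depth];
# in the paper these would be learned with IIS on training sequences.
WEIGHTS = {
    "terminate": [-0.08, 1.5, -0.6],
    "skip":      [-0.02, 0.8,  0.1],
    "normal":    [ 0.05, -1.0, 0.4],
}

def maxent_posterior(features):
    """p(y | x) = exp(sum_i lambda_{y,i} f_i(x)) / Z(x)."""
    scores = {y: math.exp(sum(w * f for w, f in zip(WEIGHTS[y], features)))
              for y in CLASSES}
    z = sum(scores.values())
    return {y: s / z for y, s in scores.items()}

def cu_decision(rd_cost, cbf, depth):
    """Three-class CU size decision: pick the most probable class."""
    post = maxent_posterior([rd_cost, cbf, depth])
    return max(post, key=post.get)

print(cu_decision(5.0, 1, 0))   # terminate
print(cu_decision(40.0, 0, 2))  # normal
```

The normalized exponential form is exactly the distribution that maximizes conditional entropy subject to feature-expectation constraints, which is why the MEM and softmax views coincide.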
Efficient VVC Intra Prediction Based on Deep Feature Fusion and Probability Estimation
The ever-growing multimedia traffic has underscored the importance of effective multimedia codecs. Among them, the up-to-date lossy video coding standard, Versatile Video Coding (VVC), has been attracting the attention of the video coding community. However, the gain of VVC is achieved at the cost of significant encoding complexity, which creates the need for a fast encoder with comparable rate-distortion (RD) performance. In this paper, we propose to reduce VVC complexity in intra-frame prediction with a two-stage framework of deep feature fusion and probability estimation. At the first stage, we employ a deep convolutional network to extract the spatial-temporal neighboring coding features, and fuse all reference features obtained by different convolutional kernels to determine an optimal intra coding depth. At the second stage, we employ a probability-based model and the spatial-temporal coherence to select the candidate partition modes within the optimal coding depth. Finally, these selected depths and partitions are executed while unnecessary computations are excluded. Experimental results on a standard database demonstrate the superiority of the proposed method, especially for High Definition (HD) and Ultra-HD (UHD) video sequences.
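The second stage, selecting candidate partition modes from spatial-temporal coherence, can be illustrated with a frequency-ranking sketch. The mode names and the ranking rule below are assumptions for illustration, not the paper's actual probability model.

```python
from collections import Counter

def candidate_modes(neighbor_modes, all_modes, keep=3):
    """Rank partition modes by how often they were chosen in the
    spatial-temporal neighborhood and keep the `keep` most likely;
    the pruned modes are never evaluated, which is the saving."""
    freq = Counter(neighbor_modes)  # missing modes count as 0
    return sorted(all_modes, key=lambda m: -freq[m])[:keep]

# Hypothetical VVC-style split modes observed in neighboring blocks:
neighbors = ["QT", "QT", "BT_H", "TT_V"]
modes = ["QT", "BT_H", "BT_V", "TT_H", "TT_V"]
print(candidate_modes(neighbors, modes, keep=2))  # ['QT', 'BT_H']
```

In the real framework the ranking would come from an estimated probability model rather than raw counts, but the pruning principle is the same.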
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
Rich probabilistic models for semantic labeling
The goal of this monograph is to explore the methods and applications of semantic labeling. Our contributions to this rapidly evolving topic concern particular aspects of modelling and inference in probabilistic models, and their applications in the interdisciplinary fields of computer vision, medical image processing, and remote sensing.
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios in future wireless networks.
High performance latent dirichlet allocation for text mining
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Latent Dirichlet Allocation (LDA), a fully generative probabilistic model, is a three-level Bayesian model. LDA computes the latent topic structure of the data and thereby extracts the significant information in documents. However, traditional LDA has several limitations in practical applications. LDA cannot be used directly for classification because it is an unsupervised learning model; it must be embedded into appropriate classification algorithms. As a generative model, LDA may generate latent topics in categories to which the target documents do not belong, introducing deviations into the computation and reducing classification accuracy. The number of topics greatly influences the learning of the model parameters, and noise samples in the training data also affect the final classification result; the quality of LDA-based classifiers depends to a great extent on the quality of the training samples. Although parallel LDA algorithms have been proposed to deal with huge amounts of data, balancing computing loads in a computer cluster poses another challenge. This thesis presents a text classification method that combines the LDA model with the Support Vector Machine (SVM) classification algorithm for improved accuracy while reducing the dimension of the datasets. Based on Density-Based Spatial Clustering of Applications with Noise (DBSCAN), the algorithm automatically optimizes the number of topics to be selected, which reduces the number of iterations in the computation. Furthermore, this thesis presents a noise-reduction scheme to process noisy data; even when the noise ratio in the training data set is large, the scheme maintains a high level of classification accuracy.
Finally, the thesis parallelizes LDA using the MapReduce model, the de facto computing standard for data-intensive applications. A genetic-algorithm-based load balancing algorithm is designed to balance workloads among the computers in a heterogeneous MapReduce cluster, in which the computers vary in CPU speed, memory space, and hard disk space.
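The LDA-plus-SVM pipeline can be sketched with scikit-learn (assumed available). The corpus, topic count, and classifier settings below are invented for illustration and do not reproduce the thesis's DBSCAN-based topic selection, noise-reduction scheme, or MapReduce parallelization.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

docs = [
    "goal match team player score",
    "team player league match win",
    "stock market shares trading price",
    "market price investor shares fund",
]
labels = [0, 0, 1, 1]  # 0 = sport, 1 = finance

# Step 1: bag-of-words counts.
counts = CountVectorizer().fit_transform(docs)

# Step 2: LDA reduces each document to a low-dimensional topic mixture,
# which is the dimensionality reduction the thesis exploits.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_features = lda.fit_transform(counts)

# Step 3: the SVM is trained on topic mixtures instead of raw counts.
clf = LinearSVC().fit(topic_features, labels)
print(clf.predict(topic_features))
```

Each document is thus represented by a 2-dimensional topic vector rather than a vocabulary-sized count vector before classification.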
Inference Algorithms and Sensorimotor Representations in Brains and Machines
Animals function in a 3D world in which survival depends on robust, well-controlled actions. Historically, researchers in Artificial Intelligence (AI) and neuroscience have explored sensory and motor systems independently, but a growing body of literature in both fields suggests that they actually work in tandem. While there has been a great deal of work on vision and audition as sensory modalities, one could argue that a more fundamental modality in biology is haptics, or the sense of touch. In this thesis, we build computational models that integrate tactile sensing with other sensory modalities to perform manipulation-like tasks in robots and discrimination tasks in mice. We also explore the problem of inference through the lens of Markov chain Monte Carlo (MCMC) methods. We elaborate on these ideas in the introduction presented in Chapter 1. A challenging problem one often faces when applying probabilistic models to the study of sensorimotor systems, and to other problems involving learning and inference, is sampling. Hamiltonian Monte Carlo (HMC) algorithms can efficiently draw representative samples from complex probabilistic models. Most MCMC methods rely on detailed balance to ensure that we sample from the correct distribution; this constraint can be relaxed in continuous state spaces such as those employed by HMC-type methods. In Chapter 2, we study HMC methods without detailed balance to explore faster convergence. Markov jump processes are stochastic processes on a discrete state space that evolve in continuous time. In Chapter 3, we use Markov jump processes to simulate waiting times together with generalized detailed balance; we show that this waiting time helps generate samples faster. Most MCMC methods are plagued by slow simulation times on discrete computing systems.
In Chapter 4, we explore HMC in analog circuits, where the problem of generating samples from a distribution is mapped to the problem of sampling charge in a capacitor. The second half of this dissertation focuses on the role of haptics in perception and action. Manipulation is a fundamental problem for artificial and biological agents, and high-dimensional actuators (e.g., fingers, trunks) are hard to control. In Chapter 5, we present an approach to learn to actuate dexterous manipulators to grasp objects in simulation. Haptics as a sensory modality is critical to many manipulation tasks, yet employing haptics in high-dimensional dexterous actuators is challenging. In Chapter 6, we explore how intrinsic curiosity and haptics can be used to learn exploration strategies for discriminating objects with dexterous hands. A key component in making tactile sensing practical is the availability of cheap, efficient, scalable hardware; Chapter 7 presents results for tactile servoing using a physical GelSight sensor. Traditional neuroscience texts delineate sensory and motor systems as two independent systems, yet recent results suggest that this picture may not be complete: there is evidence that representations in the cortex are more distributed than is generally accepted. Finally, in Chapter 8, we build a computational model of spiking neural data collected from both the barrel and motor cortices during free and active whisking. These works contribute toward understanding sensorimotor representations in the context of haptics and high-dimensional control. We conclude with a discussion of future directions in Chapter 9.
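A minimal 1-D Hamiltonian Monte Carlo sampler, with the usual detailed-balance-preserving Metropolis correction, gives a concrete picture of the sampling machinery the thesis builds on. The step size, trajectory length, and standard-normal target are arbitrary choices for this sketch, not anything from the thesis.

```python
import math
import random

def hmc_sample(x0, n_samples, step=0.2, n_leapfrog=20):
    """Minimal 1-D HMC targeting a standard normal, i.e. potential
    U(x) = x^2/2 with gradient dU/dx = x."""
    def U(x):
        return 0.5 * x * x
    samples, x = [], x0
    for _ in range(n_samples):
        p = random.gauss(0.0, 1.0)        # resample momentum
        x_new, p_new = x, p
        # leapfrog integration of the Hamiltonian dynamics
        p_new -= 0.5 * step * x_new       # initial half step in momentum
        for _ in range(n_leapfrog - 1):
            x_new += step * p_new
            p_new -= step * x_new
        x_new += step * p_new
        p_new -= 0.5 * step * x_new       # final half step in momentum
        # Metropolis accept/reject corrects the discretization error
        dh = (U(x) + 0.5 * p * p) - (U(x_new) + 0.5 * p_new * p_new)
        if math.log(random.random()) < dh:
            x = x_new
        samples.append(x)
    return samples

random.seed(0)
draws = hmc_sample(0.0, 4000)
mean = sum(draws) / len(draws)   # should be close to 0 for N(0, 1)
```

The relaxations studied in the thesis modify exactly the accept/reject step above, trading strict detailed balance for faster exploration.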
BNAIC 2008: Proceedings of BNAIC 2008, the twentieth Belgian-Dutch Artificial Intelligence Conference
Signal separation of musical instruments: simulation-based methods for musical signal decomposition and transcription
This thesis presents techniques for the modelling of musical signals, with particular regard to monophonic and polyphonic pitch estimation. Musical signals are modelled as a set of notes, each comprising a set of harmonically related sinusoids. A hierarchical model is presented that is very general and applicable to any signal that can be decomposed as a sum of basis functions. Parameter estimation is posed within a Bayesian framework, allowing for the incorporation of prior information about model parameters. The resulting posterior distribution is of variable dimension, and so reversible jump MCMC simulation techniques are employed for the parameter estimation task. The extension of the model to time-varying signals with high posterior correlations between model parameters is described. The parameters and hyperparameters of several frames of data are estimated jointly to achieve a more robust detection. A general model for the description of time-varying homogeneous and heterogeneous multiple-component signals is developed and then applied to the analysis of musical signals. The importance of high-level musical and perceptual psychological knowledge in the formulation of the model is highlighted, and attention is drawn to the limitations of pure signal processing techniques for dealing with musical signals. Gestalt psychological grouping principles motivate the hierarchical signal model, and component identifiability is considered in terms of perceptual streaming, where each component establishes its own context. A major emphasis of this thesis is the practical application of MCMC techniques, which are generally deemed too slow for many applications. Through the design of efficient transition kernels highly optimised for harmonic models, and by careful choice of assumptions and approximations, implementations approaching the order of real time are viable.
Engineering and Physical Sciences Research Council
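The core signal model, a note as a sum of harmonically related sinusoids, can be sketched together with a crude fundamental-frequency score. The projection score below is only a stand-in for the likelihood a full Bayesian (reversible jump MCMC) treatment would evaluate; the sample rate, amplitudes, and candidate grid are invented for the example.

```python
import math

SR = 8000  # sample rate in Hz, chosen arbitrarily for the sketch

def harmonic_note(f0, amps, n):
    """Synthesize a note as a sum of harmonically related sinusoids."""
    return [sum(a * math.sin(2 * math.pi * (k + 1) * f0 * t / SR)
                for k, a in enumerate(amps))
            for t in range(n)]

def f0_score(signal, f0, n_harm=4):
    """Energy of the signal projected onto sinusoids at f0, 2*f0, ...;
    a crude proxy for how well a candidate pitch explains the data."""
    score = 0.0
    for k in range(1, n_harm + 1):
        c = sum(x * math.cos(2 * math.pi * k * f0 * t / SR)
                for t, x in enumerate(signal))
        s = sum(x * math.sin(2 * math.pi * k * f0 * t / SR)
                for t, x in enumerate(signal))
        score += c * c + s * s
    return score

note = harmonic_note(220.0, [1.0, 0.5, 0.25], 1024)
best = max([110.0, 220.0, 330.0, 440.0], key=lambda f: f0_score(note, f))
print(best)  # 220.0
```

Note how the sub-octave 110 Hz scores nearly as well because its harmonic grid contains 220 and 440 Hz; resolving such ambiguities is one motivation for the priors and perceptual grouping knowledge discussed in the thesis.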