We propose a novel approach to concentration for non-independent random
variables. The main idea is to ``pretend'' that the random variables are
independent and pay a multiplicative price measuring how far they are from
actually being independent. This price is encapsulated in the Hellinger
integral between the joint distribution and the product of the marginals, which is
then upper bounded by leveraging tensorisation properties. Our bounds represent a
natural generalisation of concentration inequalities in the presence of
dependence: we recover exactly the classical bounds (McDiarmid's inequality)
when the random variables are independent. Furthermore, in a ``large
deviations'' regime, we obtain the same decay in the probability as for the
independent case, even when the random variables display non-trivial
dependencies. To illustrate this, we consider several applications of interest.
First, we provide a bound for Markov chains with finite state space. Then, we
consider the Simple Symmetric Random Walk, which is a non-contracting Markov
chain, and a non-Markovian setting in which the stochastic process depends on
its entire past. To conclude, we propose an application to Markov Chain Monte
Carlo methods, where our approach leads to an improved lower bound on the
minimum burn-in period required to reach a given accuracy. In all of these
settings, we exhibit a regime of parameters in which our bound outperforms the
state of the art.
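As a rough illustration of the change-of-measure step that this idea suggests (the notation $\mathbb{P}$, $\overline{\mathbb{P}}$, $H_\alpha$ and the exponent $\alpha$ are introduced here purely for exposition and need not match the statements in the paper), write $\mathbb{P}$ for the joint law of $X_1,\dots,X_n$, $\overline{\mathbb{P}}$ for the product of its marginals, and
$H_\alpha(\mathbb{P}\,\Vert\,\overline{\mathbb{P}}) = \int \bigl(\mathrm{d}\mathbb{P}/\mathrm{d}\overline{\mathbb{P}}\bigr)^{\alpha}\,\mathrm{d}\overline{\mathbb{P}}$ for the Hellinger integral of order $\alpha>1$, assuming $\mathbb{P}\ll\overline{\mathbb{P}}$. H\"older's inequality then gives, for any event $A$,
\[
\mathbb{P}(A)
= \int \mathbf{1}_{A}\,\frac{\mathrm{d}\mathbb{P}}{\mathrm{d}\overline{\mathbb{P}}}\,\mathrm{d}\overline{\mathbb{P}}
\;\le\; H_\alpha\bigl(\mathbb{P}\,\Vert\,\overline{\mathbb{P}}\bigr)^{1/\alpha}\,
\overline{\mathbb{P}}(A)^{(\alpha-1)/\alpha},
\]
so that, for a deviation event $A$ of $f(X_1,\dots,X_n)$, the factor $\overline{\mathbb{P}}(A)$ can be bounded exactly as in the independent case (e.g.\ via McDiarmid's inequality), at the multiplicative price $H_\alpha(\mathbb{P}\,\Vert\,\overline{\mathbb{P}})^{1/\alpha}$.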