Maximum entropy models (MEM) have been widely used in the last 10 years for modelling, explaining and predicting the statistics of networks of spiking neurons. However, as the network size increases, the number of model parameters grows rapidly, hindering interpretation and fast computation. These parameters, though, are not necessarily independent of each other; when some of them are related by hidden dependencies, their number can be reduced, allowing the MEM to be mapped onto a lower-dimensional space. Here, we present a novel framework for MEM dimensionality reduction that uses the geometrical properties of MEM to find the subset of dimensions that best captures the network's high-order statistics, without fitting the model to data. This allows us to define a parameter that quantifies the degree of compressibility of the neural code. The method was tested on synthetic data, where the underlying statistics are known, and on retinal ganglion cell (RGC) data recorded using multi-electrode arrays (MEA) under different stimuli. We found that MEM dimensionality reduction depends on the interdependencies within the network activity, the density of the raster, and the number of observed events. For RGC data, we found that the activity is highly interdependent, with a dimensionality reduction of almost 50% compared to a random raster, showing that the network activity is highly compressible, possibly due to network redundancies. This dimensionality reduction depends on the stimulus statistics, supporting the idea that sensory networks adapt to stimulus statistics by modifying their level of redundancy, i.e., their coding strategy.
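For context, a MEM over spike patterns $\omega$ takes the standard Gibbs form (the specific observables constrained by our framework are not listed in this abstract; firing rates and pairwise correlations are given below only as a typical example):

\[
P(\omega) = \frac{1}{Z}\exp\Big(\sum_{k} \lambda_k \, \mathcal{O}_k(\omega)\Big),
\]

where the $\mathcal{O}_k$ are the constrained observables (e.g., single-neuron firing rates and pairwise correlations), the $\lambda_k$ are the model parameters, and $Z$ is the normalizing partition function. For $N$ neurons with pairwise constraints, the number of parameters already scales as $N(N+1)/2$, illustrating the rapid growth with network size mentioned above.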