Big displays and ultrawalls are increasingly present in today's environments (e.g., city spaces, buildings, means of transportation, classrooms, operating rooms, and convention centers), and they are also widely used as tools for collaborative work, monitoring, and control in many other contexts. How to make interaction with big displays more natural and fluid is still an open challenge. This paper presents a system for multimodal interaction based on pointing and speech recognition. The system makes it possible for the user to control the big display through a combination of pointing gestures and a set of control commands built on a predefined vocabulary. The system has already been prototyped and is being used for service demonstrations in different applications.
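
The following is a minimal, illustrative sketch (not taken from the paper) of the kind of multimodal fusion described above: a pointing event on the display is paired with a spoken command drawn from a predefined vocabulary, and the pair is turned into an action on the pointed-at target. All class names, field names, and the time window are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of pointing + speech fusion against a fixed vocabulary.
# None of these names come from the paper's actual implementation.

from dataclasses import dataclass
import time

# Assumed predefined command vocabulary.
VOCABULARY = {"open", "close", "zoom in", "zoom out", "move here"}

@dataclass
class PointingEvent:
    x: float          # normalized horizontal position on the display (0..1)
    y: float          # normalized vertical position on the display (0..1)
    timestamp: float  # seconds since epoch

@dataclass
class SpeechEvent:
    command: str      # recognized utterance
    timestamp: float  # seconds since epoch

def fuse(pointing: PointingEvent, speech: SpeechEvent, max_gap: float = 1.0):
    """Pair a pointing gesture with a spoken command if the command belongs to
    the predefined vocabulary and both events occur close enough in time."""
    if speech.command not in VOCABULARY:
        return None  # out-of-vocabulary utterances are ignored
    if abs(speech.timestamp - pointing.timestamp) > max_gap:
        return None  # events too far apart to count as one interaction
    return {"action": speech.command, "target": (pointing.x, pointing.y)}

if __name__ == "__main__":
    now = time.time()
    p = PointingEvent(x=0.42, y=0.17, timestamp=now)
    s = SpeechEvent(command="zoom in", timestamp=now + 0.3)
    print(fuse(p, s))  # {'action': 'zoom in', 'target': (0.42, 0.17)}
```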