In this paper, we describe the first realisation of an evaluation environment for Performance Trees, a recently proposed formalism for the specification of performance properties and measures. In particular, we present details of the architecture and implementation of this environment, which comprises a client-side model and performance query specification tool and a server-side distributed evaluation engine supported by a dedicated computing cluster. The evaluation engine combines the analytic capabilities of a number of distributed tools for steady-state, passage-time and transient analysis, and also incorporates a caching mechanism that avoids redundant calculations. We demonstrate, in the context of a case study, how this analysis pipeline allows remote users to design their models and performance queries in a sophisticated yet easy-to-use framework, and subsequently to evaluate them by harnessing the computing power of a Grid cluster back-end.