Recently, there has been considerable interest in models, algorithms, and methodologies specifically targeted towards designing hardware and software for streaming applications. Such applications process potentially infinite streams of audio/video data or network packets and are found in a wide range of devices, ranging from mobile phones to set-top boxes. Given a streaming application and an architecture, the timing analysis problem is to determine the timing properties of the processed data stream from the timing properties of the input stream. Most previous work on estimating or optimizing these timing properties takes a high-level view of the architecture and neglects microarchitectural features such as caches. In this paper, we show that an accurate estimation of a streaming application’s timing properties, however, relies heavily on an appropriate modeling of the processor microarchitecture, such as its instruction cache. Towards this, we present a novel framework for timing analysis of stream processing applications. Our framework accurately models the evolution of the instruction cache of the underlying processor as a stream is processed, and captures the fact that the execution time for processing any data item depends on all the previous data items in the stream. We have implemented a prototype of this framework partly in C and partly in Mathematica, and we plan to integrate it into a design-space exploration tool for system-level design of hardware-software architectures for streaming applications.