ABSTRACT

This study focuses on which competence aims are actually measured in the written exam in English in Norwegian upper secondary schools and, further, on students' understanding of the texts and tasks in the exam sets. English is a mandatory subject for first-year students in the general studies programmes in upper secondary school. In the vocational study programmes the subject is compulsory as well, but it spans two years, and students sit for the exam at the end of their second year. The written examination is prepared and graded centrally by the Norwegian Directorate of Education. The study is based on an analysis of six exam sets from spring 2010 to fall 2013. In addition to identifying which competence aims are tested, the study investigates to what extent students understand the texts and tasks they are given, and thus whether they are able to demonstrate a broad spectrum of their competence. Furthermore, surveys and interviews with students from both general and vocational study programmes (building and construction) were carried out. As this exam is considered a high-stakes test, high validity and reliability are essential; these are also investigated. The analysis of the exam sets showed considerable differences in which, and how many, competence aims were tested in each set. Consequently, students who sit for the same examination are not assessed on the same competence aims. What is measured further depends on the tasks the student chooses, the content of the text they write, and whether or not they are aware of what they should write about in order to demonstrate a broad spectrum of their competence. The findings also show that the vocational students were not very motivated to read the texts in the preparation booklet. The majority found the texts difficult, tedious, and unrelated to their field of study. Students from general studies were more positive towards the preparation booklet and saw its benefits.
Regarding the prompts, the vocational students were generally better at choosing the writing tasks that suited their study programme and in which they were best able to demonstrate their competence. The study concludes that the ENG1002/1003 exam does not measure a broad spectrum of competences: the competence aims are complex and difficult to measure, the texts in the preparation booklet are neither interesting nor useful for all students, and the prompts are unclear. Consequently, the exam, in its present form, is invalid and unreliable.