The aims of the present paper are: (a) to examine the theoretical and methodological issues pertaining to the use of grammaticality judgment tasks in linguistic theory; (b) to design and administer a grammaticality judgment task that avoids the problems commonly associated with such tasks; and (c) to introduce FACETS as a novel way to analyze grammaticality judgments in order to determine (i) which participants should be excluded from the analyses, (ii) which test items should be revised, and (iii) whether the grammaticality judgments are internally consistent. First, the paper discusses the concept of grammaticality and addresses validity issues pertaining to the use of grammaticality judgment tasks in linguistic theory. Second, it tackles methodological issues concerning the creation of test items, the specification of procedures, and the analysis and interpretation of the results. A grammaticality judgment task is then administered to 20 native speakers of American English, and FACETS is introduced as a means to analyze the judgments and assess their internal consistency. The results reveal a general tendency among the participants to judge both grammatical and ungrammatical items as grammatical. The FACETS analysis indicates that the grammaticality judgments of (at least) two participants are not internally consistent. It also shows that two of the test items received between six and eight unexpected judgments. Despite these results, the analysis also indicates that, overall, the grammaticality judgments obtained on each sentence type and on grammatical versus ungrammatical items were internally consistent. In light of the results and of the efficiency of the program, the implementation of FACETS is recommended in the analysis of grammaticality judgments in linguistic theory.
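The kind of misfit detection attributed to FACETS above can be illustrated with a minimal sketch. FACETS itself fits a many-facet Rasch model; the hypothetical code below uses only two facets (participants and items) in a dichotomous Rasch model, estimated by a few rounds of simple joint maximum-likelihood updates, and then computes outfit mean-square statistics to flag internally inconsistent response strings. The data, function names, and update scheme are illustrative assumptions, not the procedure used in the paper.

```python
import numpy as np

def rasch_fit(X, n_iter=50, lr=0.5):
    """Estimate person abilities (theta) and item difficulties (beta)
    for a 0/1 judgment matrix X (rows = participants, columns = items).
    Simple gradient-style joint maximum-likelihood updates; illustrative only."""
    n_persons, n_items = X.shape
    theta = np.zeros(n_persons)
    beta = np.zeros(n_items)
    for _ in range(n_iter):
        # Model probability of an "expected" judgment for each cell
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        theta += lr * (X - P).sum(axis=1) / n_items
        beta -= lr * (X - P).sum(axis=0) / n_persons
        beta -= beta.mean()  # identification constraint: difficulties sum to zero
    return theta, beta

def outfit(X, theta, beta):
    """Outfit mean-square per person and per item; values well above 1
    indicate misfitting (internally inconsistent) judgments."""
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
    P = np.clip(P, 1e-6, 1 - 1e-6)          # guard against division by zero
    Z2 = (X - P) ** 2 / (P * (1 - P))       # squared standardized residuals
    return Z2.mean(axis=1), Z2.mean(axis=0)

# Toy data: 6 hypothetical participants x 8 items; the last participant
# responds at random, so their outfit should tend to be comparatively high.
rng = np.random.default_rng(0)
ability = np.array([1.5, 1.0, 0.5, 0.0, -0.5, 0.0])
difficulty = np.linspace(-1.5, 1.5, 8)
P_true = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
X = (rng.random((6, 8)) < P_true).astype(float)
X[5] = rng.integers(0, 2, 8).astype(float)  # the noisy responder

theta, beta = rasch_fit(X)
person_fit, item_fit = outfit(X, theta, beta)
print(np.round(person_fit, 2))
```

In a full FACETS analysis, additional facets (e.g., sentence type or grammatical vs. ungrammatical status) enter the model the same way, and the analogous fit statistics single out the participants to exclude and the items to revise.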