Open-vocabulary semantic segmentation models associate vision and text to label pixels from an undefined set of classes using textual queries, providing versatile performance on novel datasets. However, large shifts between training and test domains degrade their performance, requiring fine-tuning for effective real-world applications. We introduce Semantic Library Adaptation (SemLA), a novel framework for training-free, test-time domain adaptation. SemLA leverages a library of LoRA-based adapters indexed with CLIP embeddings, dynamically merging the most relevant adapters based on proximity to the target domain in the embedding space. This approach constructs an ad-hoc model tailored to each specific input without additional training. Our method scales efficiently, enhances explainability by tracking adapter contributions, and inherently protects data privacy, making it ideal for sensitive applications. Comprehensive experiments on a 20-domain benchmark built over 10 standard datasets demonstrate SemLA's superior adaptability and performance across diverse settings, establishing a new standard in domain adaptation for open-vocabulary semantic segmentation.
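The retrieval-and-merge step described above can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's exact formulation: the function name, the library's data layout, and the softmax weighting over cosine similarities are all hypothetical, chosen only to show how nearest adapters might be selected by CLIP-embedding proximity and combined, while also exposing per-adapter contributions for explainability.

```python
import numpy as np

def retrieve_and_merge(query_embedding, library, top_k=3):
    """Hypothetical sketch of SemLA-style adaptation: pick the top-k
    LoRA adapters whose index embeddings are closest (cosine) to the
    query's CLIP embedding, then combine their weight deltas with
    similarity-derived coefficients."""
    keys = np.stack([entry["embedding"] for entry in library])
    # Cosine similarity between the query and each adapter's index key.
    sims = keys @ query_embedding / (
        np.linalg.norm(keys, axis=1) * np.linalg.norm(query_embedding)
    )
    top = np.argsort(sims)[::-1][:top_k]
    # Softmax over the top-k similarities gives merge coefficients.
    coeffs = np.exp(sims[top])
    coeffs /= coeffs.sum()
    merged = sum(c * library[i]["lora_delta"] for c, i in zip(coeffs, top))
    # Returning the coefficients lets callers track adapter contributions.
    contributions = dict(zip(top.tolist(), coeffs.tolist()))
    return merged, contributions

# Usage: a toy library of 5 adapters with 4-d index embeddings.
rng = np.random.default_rng(0)
library = [
    {"embedding": rng.normal(size=4), "lora_delta": rng.normal(size=(2, 2))}
    for _ in range(5)
]
merged, contributions = retrieve_and_merge(rng.normal(size=4), library)
```

Because the merge is a weighted sum of precomputed deltas, no gradient step runs at test time, matching the training-free claim.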