The objective of personalization and stylization in text-to-image (T2I) generation is to
instruct a pre-trained diffusion model to learn new concepts introduced by
users and render them in desired styles. Recently, parameter-efficient
fine-tuning (PEFT) approaches have been widely adopted for this task and
have greatly propelled the development of the field. Despite their popularity,
existing efficient fine-tuning methods still struggle to achieve effective
personalization and stylization in T2I generation. To address this issue, we
propose block-wise Low-Rank Adaptation (LoRA), which performs fine-grained
fine-tuning on different blocks of Stable Diffusion (SD), so that the model can
generate images that are faithful to the input prompt and the target identity
while exhibiting the desired style. Extensive experiments demonstrate the
effectiveness of the proposed method.
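To make the idea of block-wise LoRA concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a standard LoRA wrapper around frozen linear layers, plus a helper that assigns a different rank to each named block of a model. The block names (`down`, `mid`, `up`) and the toy model are hypothetical stand-ins for the corresponding SD U-Net blocks.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen nn.Linear plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad_(False)
        self.rank = rank
        self.scale = alpha / rank
        # A is initialized small, B to zero, so the wrapped layer starts
        # as an exact copy of the base layer.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())


def apply_blockwise_lora(model: nn.Module, block_ranks: dict):
    """Wrap every nn.Linear inside each named block with a LoRALinear,
    using a per-block rank (the essence of block-wise fine-tuning)."""
    for block_name, rank in block_ranks.items():
        block = getattr(model, block_name)
        for name, child in block.named_children():
            if isinstance(child, nn.Linear):
                setattr(block, name, LoRALinear(child, rank))
    return model


class ToyBlock(nn.Module):
    """Stand-in for one U-Net block."""

    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(d, d)

    def forward(self, x):
        return self.proj(x)


class ToyUNet(nn.Module):
    """Stand-in for the SD U-Net with down/mid/up blocks."""

    def __init__(self, d=8):
        super().__init__()
        self.down = ToyBlock(d)
        self.mid = ToyBlock(d)
        self.up = ToyBlock(d)

    def forward(self, x):
        return self.up(self.mid(self.down(x)))
```

In this sketch only the low-rank matrices remain trainable, and each block can receive a different rank (e.g. a lower rank for the mid block), which is the kind of per-block control that block-wise fine-tuning provides.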