6 research outputs found

    Disabled Friendly Facility between Feasibility and Legality

    Most hotels in Lebanon, built before 2011, do not provide access for disabled persons as required by Law 220/2000. This is due to: 1) a misconception that a Disabled Friendly Facility (DFF) would come at the expense of hotel guests’ satisfaction and, consequently, would reduce the hotel’s popularity and revenue; 2) a fear that the uncertain demand for DFFs would be offset by expenses and, even in the best case, would not generate enough profit to pay back the initial investment. In brief, hotel-business investors are not sure about the convenience of a DFF, nor about the number of DFFs to provide in light of Law 7194/2011. The objective of this paper is, on the one hand, to demonstrate the financial feasibility and the economic convenience of a DFF and, on the other hand, to test its impact on the satisfaction of hotel guests, in other words, on the popularity of the hotel.

    PØDA: Prompt-driven Zero-shot Domain Adaptation

    Domain adaptation has been vastly investigated in computer vision but still requires access to target images at train time, which might be intractable in some uncommon conditions. In this paper, we propose the task of "Prompt-driven Zero-shot Domain Adaptation", where we adapt a model trained on a source domain using only a single general textual description of the target domain, i.e., a prompt. First, we leverage a pretrained contrastive vision-language model (CLIP) to optimize affine transformations of source features, steering them towards target text embeddings, while preserving their content and semantics. Second, we show that augmented features can be used to perform zero-shot domain adaptation for semantic segmentation. Experiments demonstrate that our method significantly outperforms CLIP-based style transfer baselines on several datasets for the downstream task at hand. Our prompt-driven approach even outperforms one-shot unsupervised domain adaptation on some datasets, and gives comparable results on others. Our code is available at https://github.com/astra-vision/PODA. Project page: https://astra-vision.github.io/PODA
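    A minimal sketch of the idea described in the abstract: learn per-channel affine style parameters for source features so that the resulting image embedding moves towards the CLIP text embedding of the target prompt. This is an illustration only, not the authors' released code; `feature_head` is a hypothetical stand-in for the part of the CLIP image encoder that maps low-level features to the embedding space, and `target_text_emb` is assumed to be the normalized CLIP text embedding of the prompt.

```python
# Hedged sketch of prompt-driven feature stylization (assumptions noted above).
import torch
import torch.nn.functional as F

def stylize(feat, mu, sigma, eps=1e-5):
    # AdaIN-style affine transform: normalize source features per channel,
    # then re-style them with the learnable target statistics (mu, sigma).
    f_mu = feat.mean(dim=(2, 3), keepdim=True)
    f_sigma = feat.std(dim=(2, 3), keepdim=True) + eps
    return sigma[None, :, None, None] * (feat - f_mu) / f_sigma + mu[None, :, None, None]

def optimize_style(source_feat, target_text_emb, feature_head, steps=100, lr=1.0):
    # Initialize the style from source statistics, then steer it towards the
    # target text embedding by maximizing cosine similarity; only (mu, sigma)
    # are optimized, the features and encoders stay frozen.
    mu = source_feat.mean(dim=(0, 2, 3)).detach().clone().requires_grad_(True)
    sigma = source_feat.std(dim=(0, 2, 3)).detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([mu, sigma], lr=lr)
    for _ in range(steps):
        img_emb = F.normalize(feature_head(stylize(source_feat, mu, sigma)), dim=-1)
        loss = (1.0 - (img_emb * target_text_emb).sum(dim=-1)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mu.detach(), sigma.detach()
```

    The returned statistics can then be used to re-style other source features before training the segmentation head, which is how the abstract's "augmented features" would enter the downstream task.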

    A Simple Recipe for Language-guided Domain Generalized Segmentation

    Generalization to new domains not seen during training is one of the long-standing goals and challenges in deploying neural networks in real-world applications. Existing generalization techniques necessitate substantial data augmentation, potentially sourced from external datasets, and aim at learning invariant representations by imposing various alignment constraints. Large-scale pretraining has recently shown promising generalization capabilities, along with the potential of bridging different modalities. For instance, the recent advent of vision-language models like CLIP has opened the doorway for vision models to exploit the textual modality. In this paper, we introduce a simple framework for generalizing semantic segmentation networks by employing language as the source of randomization. Our recipe comprises three key ingredients: i) the preservation of the intrinsic CLIP robustness through minimal fine-tuning, ii) language-driven local style augmentation, and iii) randomization by locally mixing the source and augmented styles during training. Extensive experiments report state-of-the-art results on various generalization benchmarks. The code will be made available. Project page: https://astra-vision.github.io/FAMix
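    A rough sketch of the third ingredient, local style mixing, under stated assumptions: `style_bank` is a hypothetical list of per-channel (mu, sigma) styles mined from prompts (e.g., with the optimization sketched above), the feature map is partitioned into a regular grid, and each patch blends its own statistics with a randomly sampled augmented style. The grid partitioning and the random interpolation coefficient are illustrative choices, not necessarily the authors' exact recipe.

```python
# Hedged sketch of language-driven local style mixing (assumptions noted above).
import random
import torch

def local_style_mix(feat, style_bank, grid=4, eps=1e-5):
    # feat: (B, C, H, W) source features, H and W assumed divisible by `grid`.
    # style_bank: list of (mu, sigma) tensor pairs, each of shape (C,).
    B, C, H, W = feat.shape
    out = feat.clone()
    ph, pw = H // grid, W // grid
    for i in range(grid):
        for j in range(grid):
            mu_t, sigma_t = random.choice(style_bank)
            patch = feat[:, :, i*ph:(i+1)*ph, j*pw:(j+1)*pw]
            mu_s = patch.mean(dim=(2, 3), keepdim=True)
            sigma_s = patch.std(dim=(2, 3), keepdim=True) + eps
            lam = random.random()  # random mixing coefficient per patch
            mu_mix = lam * mu_s + (1 - lam) * mu_t[None, :, None, None]
            sigma_mix = lam * sigma_s + (1 - lam) * sigma_t[None, :, None, None]
            # Re-normalize the patch, then apply the mixed style statistics.
            out[:, :, i*ph:(i+1)*ph, j*pw:(j+1)*pw] = (
                sigma_mix * (patch - mu_s) / sigma_s + mu_mix
            )
    return out
```

    Applied on the fly during training, this exposes the segmentation network to locally varying, language-derived styles while the underlying content of the features is preserved.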

    PØDA: Prompt-driven Zero-shot Domain Adaptation

    Domain adaptation has been vastly investigated in computer vision but still requires access to target images at train time, which might be intractable in some conditions, especially for long-tail samples. In this paper, we propose the task of "Prompt-driven Zero-shot Domain Adaptation", where we adapt a model trained on a source domain using only a general textual description of the target domain, i.e., a prompt. First, we leverage a pretrained contrastive vision-language model (CLIP) to optimize affine transformations of source features, bringing them closer to target text embeddings, while preserving their content and semantics. Second, we show that augmented features can be used to perform zero-shot domain adaptation for semantic segmentation. Experiments demonstrate that our method significantly outperforms CLIP-based style transfer baselines on several datasets for the downstream task at hand. Our prompt-driven approach even outperforms one-shot unsupervised domain adaptation on some datasets, and gives comparable results on others. The code is available at https://github.com/astra-vision/PODA. Project page: https://astra-vision.github.io/PODA/