Zero-shot learning in image classification refers to the setting where images from certain novel classes are absent from the training data, but other information about those classes, such as natural language descriptions or attribute vectors, is available. This setting is important in the real world since one may not be able to obtain images of all possible classes at training time. While previous approaches have modeled the relationship between the class-attribute space and the image space via some form of transfer function, in order to predict image-space representations for unseen classes, we take a different approach: we generate samples directly from the given attributes using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. Through extensive experiments on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at test time along with the novel classes.
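To make the attribute-conditioned generation idea concrete, below is a minimal PyTorch sketch (not the authors' released code) of a conditional VAE that encodes an image feature together with its class-attribute vector, and whose decoder is later sampled, conditioned on an unseen class's attributes, to synthesize pseudo training features. The dimensions, layer sizes, and helper names (`CVAE`, `loss_fn`, `synthesize`) are illustrative assumptions, not values or APIs from the paper.

```python
import torch
import torch.nn as nn

FEAT_DIM, ATTR_DIM, LATENT_DIM = 2048, 85, 64  # assumed sizes, not from the paper

class CVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: image feature concatenated with class attributes -> latent Gaussian
        self.enc = nn.Sequential(nn.Linear(FEAT_DIM + ATTR_DIM, 512), nn.ReLU())
        self.mu = nn.Linear(512, LATENT_DIM)
        self.logvar = nn.Linear(512, LATENT_DIM)
        # Decoder: latent code concatenated with class attributes -> reconstructed feature
        self.dec = nn.Sequential(
            nn.Linear(LATENT_DIM + ATTR_DIM, 512), nn.ReLU(),
            nn.Linear(512, FEAT_DIM))

    def forward(self, x, a):
        h = self.enc(torch.cat([x, a], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(torch.cat([z, a], dim=1)), mu, logvar

def loss_fn(x_hat, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior
    recon = ((x_hat - x) ** 2).sum(dim=1).mean()
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return recon + kl

def synthesize(model, unseen_attrs, n_per_class=300):
    # Sample latent codes from the prior and decode them conditioned on each
    # unseen class's attribute vector, yielding pseudo (feature, label) pairs.
    model.eval()
    feats, labels = [], []
    with torch.no_grad():
        for cls, a in enumerate(unseen_attrs):
            z = torch.randn(n_per_class, LATENT_DIM)
            a_rep = a.unsqueeze(0).expand(n_per_class, -1)
            feats.append(model.dec(torch.cat([z, a_rep], dim=1)))
            labels.append(torch.full((n_per_class,), cls, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)
```

Under these assumptions, the synthesized pairs can then train any ordinary supervised classifier (e.g. a softmax layer or an SVM) over the unseen classes, which is what reduces zero-shot classification to a standard classification problem.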