Vehicle light detection, association, and localization are essential for
important downstream tasks in safe autonomous driving, such as predicting a
vehicle's light state to determine whether the vehicle is changing lanes or
turning. Currently, most vehicle light detection methods rely on single-stage
detectors that predict bounding boxes to identify a vehicle light, decoupled
from vehicle instances. In this paper, we present a method for detecting a
vehicle light given an upstream vehicle detection and an approximation of the
visible light's center. Our method predicts four approximate corners associated
with each vehicle light. We experiment with CNN architectures, data
augmentation, and contextual preprocessing methods designed to reduce
surrounding-vehicle confusion. We achieve an average distance error of 4.77
pixels from the ground-truth corners, approximately 16.33% of the average
vehicle light size. We train and evaluate our model on the LISA Lights Dataset,
whose wide variety of vehicle light shapes and lighting conditions enables a
thorough evaluation of our vehicle light corner detection model. We propose
that this model can be integrated into a pipeline with vehicle detection and
vehicle light center detection to form a complete vehicle light detection
network, valuable for identifying trajectory-informative signals in driving
scenes.
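
To make the reported metric concrete, the following is a minimal sketch, not the authors' evaluation code, of how an average corner distance error and its size-normalized counterpart might be computed. The function names, array shapes, and the use of the ground-truth corner bounding-box diagonal as the "light size" are illustrative assumptions.

```python
import numpy as np

def mean_corner_error(pred_corners: np.ndarray, gt_corners: np.ndarray) -> float:
    """Average Euclidean distance in pixels between predicted and
    ground-truth corners; both arrays have shape (N, 4, 2)."""
    return float(np.linalg.norm(pred_corners - gt_corners, axis=-1).mean())

def mean_relative_error(pred_corners: np.ndarray, gt_corners: np.ndarray) -> float:
    """Corner error as a fraction of each light's size, approximating the
    light size by the diagonal of the ground-truth corner bounding box
    (an assumed proxy, since the paper's exact definition is not given here)."""
    dists = np.linalg.norm(pred_corners - gt_corners, axis=-1)    # (N, 4)
    extents = gt_corners.max(axis=1) - gt_corners.min(axis=1)     # (N, 2)
    light_size = np.linalg.norm(extents, axis=-1, keepdims=True)  # (N, 1)
    return float((dists / light_size).mean())
```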