Radio frequency (RF) signal mapping, which is the process of analyzing and
predicting the RF signal strength and distribution across specific areas, is
crucial for cellular network planning and deployment. Traditional approaches to
RF signal mapping rely on statistical models constructed from measurement
data, which offer low complexity but often lack accuracy, or on ray tracing tools,
which provide enhanced precision for the target area but suffer from increased
computational complexity. Recently, machine learning (ML) has emerged as a
data-driven method for modeling RF signal propagation, leveraging models
trained on synthetic datasets to perform RF signal mapping in "unseen" areas.
In this paper, we present Geo2SigMap, an ML-based framework for efficient and
high-fidelity RF signal mapping using geographic databases. First, we develop
an automated framework that seamlessly integrates three open-source tools:
OpenStreetMap (geographic databases), Blender (computer graphics), and Sionna
(ray tracing), enabling the efficient generation of large-scale 3D building
maps and ray tracing models. Second, we propose a cascaded U-Net model, which
is pre-trained on synthetic datasets and employed to generate detailed RF
signal maps, leveraging environmental information and sparse measurement data.
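To make the cascaded idea concrete, the toy sketch below shows the two-stage structure in plain Python: a first stage maps environmental features to a coarse signal map, and a second stage refines it using sparse measurements. This is only an illustrative stand-in under assumed toy inputs; the actual framework uses two pre-trained U-Nets, and the functions, features, and numbers here are hypothetical.

```python
# Toy sketch of a two-stage cascade (NOT the paper's actual U-Net model):
# stage 1 predicts a coarse signal map from environmental features;
# stage 2 refines it using sparse measurements.

def stage1_coarse_map(building_heights, tx_power_dbm=40.0):
    """Hypothetical stand-in for the first stage: predict a coarse
    signal map from building heights (taller -> more blockage)."""
    return [tx_power_dbm - 0.5 * h for h in building_heights]

def stage2_refine(coarse_map, sparse_measurements):
    """Hypothetical stand-in for the second stage: shift the coarse
    map by the mean residual observed at the measured locations."""
    residuals = [sparse_measurements[i] - coarse_map[i]
                 for i in sparse_measurements]
    bias = sum(residuals) / len(residuals)
    return [v + bias for v in coarse_map]

heights = [0.0, 10.0, 20.0, 5.0]   # toy building-height feature map
measured = {1: 33.0, 3: 36.0}      # sparse measurements at two locations
refined = stage2_refine(stage1_coarse_map(heights), measured)
```

The design point the sketch captures is that the second stage never has to learn propagation from scratch: it only corrects the first stage's output where ground-truth data exists.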
Finally, we evaluate the performance of Geo2SigMap via a real-world measurement
campaign, where three types of user equipment (UE) collect over 45,000 data
points related to cellular information from six LTE cells operating in the
citizens broadband radio service (CBRS) band. Our results show that Geo2SigMap
achieves an average root-mean-square error (RMSE) of 6.04 dB for predicting the
reference signal received power (RSRP) at the UE, representing an average RMSE
improvement of 3.59 dB over existing methods.
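For reference, the RMSE figure reported above is the standard root-mean-square error over per-location prediction errors, computed in dB. A minimal sketch, with illustrative values that are not from the measurement campaign:

```python
import math

def rmse_db(predicted_rsrp, measured_rsrp):
    """RMSE between predicted and measured RSRP values (both in dBm);
    the result is expressed in dB."""
    errs = [(p - m) ** 2 for p, m in zip(predicted_rsrp, measured_rsrp)]
    return math.sqrt(sum(errs) / len(errs))

# Illustrative values only (not real measurement data):
pred = [-85.0, -92.0, -78.5]
meas = [-88.0, -90.0, -80.5]
error_db = rmse_db(pred, meas)  # ~2.38 dB for this toy example
```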