    Evaluating indoor positioning systems in a shopping mall : the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, for both real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed, consisting of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. A total of 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using wearable, lightweight sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
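    The scoring described above can be sketched as follows: the per-landmark error combines the horizontal positioning error with a penalty for each mis-detected floor, and the reported score is the 75th percentile of those combined errors. Note that the 15 m per-floor penalty below is an assumption for illustration; the exact value is defined in the official IPIN competition rules.

    ```python
    import numpy as np

    def accuracy_score(horizontal_errors_m, floor_errors, floor_penalty_m=15.0):
        """Third quartile (75th percentile) of the combined error metric.

        horizontal_errors_m : horizontal error (m) at each ground-truth landmark
        floor_errors        : signed number of floors off at each landmark
        floor_penalty_m     : penalty per wrongly detected floor (assumed value)
        """
        combined = (np.asarray(horizontal_errors_m, dtype=float)
                    + floor_penalty_m * np.abs(np.asarray(floor_errors, dtype=float)))
        # The accuracy score is the third quartile of the combined errors
        return float(np.percentile(combined, 75))
    ```

    For example, a system with horizontal errors of 1, 2, 3 and 4 m that misses the floor at the third landmark would be scored on combined errors of 1, 2, 18 and 4 m.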

    CorrFractal: High-Resolution Correspondence Method Using Fractal Affinity on Self-Supervised Learning

    Existing supervised learning-based methods perform high-resolution visual correspondence using a decoder module. In self-supervised learning-based methods, however, it is difficult to use a decoder module, which is easily influenced by labels. This paper introduces a self-supervised learning-based visual correspondence method that produces high-resolution representations without a decoder module. To this end, the paper proposes four modules; each module produces an output at the original resolution, distributing the role of the decoder module across the network. The first is the pattern-boosted quantization module, which learns pattern information along with color information to create high-resolution pseudo-labels. The second is the backbone module, created by applying aggregation to the backbone network to handle semantic features and high-resolution features simultaneously. The third is the appearance module, which learns appearance information using the features of the high-resolution embedding space. The fourth is the correspondence module, which gradually reconstructs a high-resolution visual correspondence from low-resolution input. Subtraction images confirm that the proposed method improves the representation of thin objects and object boundaries. Video segmentation performance was evaluated on the DAVIS-2017 val dataset using the J&F mean, yielding 65.4%.
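    Self-supervised correspondence methods of this kind are typically evaluated by propagating reference-frame labels to a target frame through a feature affinity matrix. The sketch below shows that generic propagation step (cosine affinity, top-k filtering, softmax weighting); it is not the CorrFractal architecture itself, and the `temperature` and `topk` values are illustrative.

    ```python
    import numpy as np

    def propagate_labels(feat_ref, feat_tgt, labels_ref, temperature=0.07, topk=5):
        """Propagate reference-frame labels to a target frame via feature affinity.

        feat_ref, feat_tgt : (N, C) per-location feature vectors
        labels_ref         : (N, K) one-hot or soft labels for the reference frame
        Returns (N, K) soft labels for the target frame.
        """
        # Cosine affinity between every target and reference location
        fr = feat_ref / np.linalg.norm(feat_ref, axis=1, keepdims=True)
        ft = feat_tgt / np.linalg.norm(feat_tgt, axis=1, keepdims=True)
        aff = ft @ fr.T  # (N_tgt, N_ref)
        # Keep only the top-k most similar reference locations per target location
        kth = np.partition(aff, -topk, axis=1)[:, -topk][:, None]
        aff = np.where(aff >= kth, aff, -np.inf)
        # Softmax over the retained affinities, then mix the reference labels
        w = np.exp(aff / temperature)
        w = w / w.sum(axis=1, keepdims=True)
        return w @ labels_ref
    ```

    With orthonormal features and `topk=1`, each target location copies the label of its single best-matching reference location, which is the degenerate case of this weighting.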