
    D-Tags Design by Combining Bluetooth Router, IoT, and Mobile Phone to Track Personal Items

    Losing personal items such as a wallet or room keys is distressing, and the problem worsens when clues to the item's location are scarce or nonexistent. Of 102 people who answered a questionnaire about how often they lose their belongings, 76% had experienced it, and recalling where an item was last placed is often difficult. People therefore need a tool that pairs a transmitter with a mobile phone to help them find their items easily. Previous research showed that a frequency-based transmitter had a detection range of only 5 meters. To address this weakness, the authors propose D-Tags, an item-tracking device built on an Internet of Things-based Bluetooth transmitter and receiver system that combines Bluetooth routers, IoT, and mobile phones. The system is designed for both indoor and outdoor areas. In Bluetooth testing, the device detected items at distances of up to 7.43 meters without wall obstacles. The system reports location information such as "Living Room" or "Bedroom" indoors, and coordinates when outside. In terms of search time, detecting a single item took between 15.13 and 15.60 seconds, faster than searching for two items simultaneously. Outdoors, the device tracked items up to 31.8 meters from the item's last known position. The full tracking history can be viewed in the web application. The experimental results show that D-Tags can track items, indicating their location with a relatively short search duration.
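    The abstract does not describe how the detection distances are derived. A common approach for Bluetooth tag trackers is to estimate range from signal strength with the log-distance path-loss model; the sketch below is illustrative only, and the function name and default parameters are assumptions, not from the paper.

    ```python
    def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
        """Estimate distance (meters) from a Bluetooth RSSI reading using the
        log-distance path-loss model. tx_power_dbm is the expected RSSI at 1 m;
        the path-loss exponent grows with obstacles such as walls."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

    # At the 1-meter reference power, the model returns exactly 1 meter.
    print(rssi_to_distance(-59.0))  # → 1.0
    ```

    Weaker readings (more negative RSSI) map to larger estimated distances, which is why obstacles such as walls shorten the usable detection range.
    
    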

    Bayesian dense inverse searching algorithm for real-time stereo matching in minimally invasive surgery

    This paper reports a CPU-level real-time stereo matching method for surgical images (10 Hz on 640 × 480 images with a single core of an i5-9400). The proposed method builds on the fast "dense inverse searching" algorithm, which estimates the disparity of the stereo images. Overlapping image patches (arbitrary squared image segments) from the images at different scales are aligned based on the photometric-consistency assumption. We propose a Bayesian framework to evaluate the probability of the optimized patch disparity at different scales. Moreover, we introduce a spatial Gaussian mixed probability distribution to model the pixel-wise probability within the patch. In-vivo and synthetic experiments show that our method can handle ambiguities resulting from textureless surfaces and the photometric inconsistency caused by non-Lambertian reflectance. Our Bayesian method correctly balances the probability of the patch for stereo images at different scales. Experiments indicate that the estimated depth has higher accuracy and fewer outliers than the baseline methods in the surgical scenario.
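    The abstract's core idea is to score each patch's optimized disparity with a probability and fuse estimates across scales. A minimal sketch of that idea, assuming a Gaussian likelihood on the photometric residual and probability-weighted fusion (function names and the `sigma` parameter are illustrative, not from the paper):

    ```python
    import numpy as np

    def patch_disparity_probability(left_patch, right_patch, sigma=10.0):
        """Score how well a candidate disparity aligns two patches: the mean
        photometric residual is mapped to a Gaussian likelihood in (0, 1]."""
        residual = np.mean((left_patch.astype(float) - right_patch.astype(float)) ** 2)
        return float(np.exp(-residual / (2.0 * sigma ** 2)))

    def fuse_scales(disparities, probabilities):
        """Probability-weighted fusion of per-scale disparity estimates:
        scales whose patches align well contribute more."""
        w = np.asarray(probabilities, dtype=float)
        w = w / w.sum()
        return float(np.dot(w, np.asarray(disparities, dtype=float)))
    ```

    Under this scheme a textureless patch yields a flat residual landscape and hence a weak probability, so better-constrained scales dominate the fused disparity.
    
    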

    Selective-Stereo: Adaptive Frequency Information Selection for Stereo Matching

    Stereo matching methods based on iterative optimization, like RAFT-Stereo and IGEV-Stereo, have evolved into a cornerstone in the field of stereo matching. However, these methods struggle to simultaneously capture high-frequency information in edges and low-frequency information in smooth regions due to the fixed receptive field. As a result, they tend to lose details, blur edges, and produce false matches in textureless areas. In this paper, we propose the Selective Recurrent Unit (SRU), a novel iterative update operator for stereo matching. The SRU module can adaptively fuse hidden disparity information at multiple frequencies for edge and smooth regions. To perform adaptive fusion, we introduce a new Contextual Spatial Attention (CSA) module to generate attention maps as fusion weights. The SRU empowers the network to aggregate hidden disparity information across multiple frequencies, mitigating the risk of vital hidden disparity information loss during iterative processes. To verify SRU's universality, we apply it to representative iterative stereo matching methods, collectively referred to as Selective-Stereo. Our Selective-Stereo ranks 1st on KITTI 2012, KITTI 2015, ETH3D, and Middlebury leaderboards among all published methods. Code is available at https://github.com/Windsrain/Selective-Stereo.
    Comment: Accepted to CVPR 202
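    The fusion described above, attention weights blending high- and low-frequency hidden disparity information, can be sketched as follows. This is a toy stand-in, not the paper's SRU/CSA implementation; the sigmoid gating and all names are assumptions for illustration.

    ```python
    import numpy as np

    def contextual_spatial_attention(context):
        """Toy stand-in for the CSA module: squash a per-pixel context map
        into fusion weights in (0, 1) with a sigmoid."""
        return 1.0 / (1.0 + np.exp(-np.asarray(context, dtype=float)))

    def selective_fuse(hidden_high, hidden_low, context):
        """Blend high-frequency (edge) and low-frequency (smooth-region)
        hidden disparity information using the attention weights."""
        a = contextual_spatial_attention(context)
        return a * np.asarray(hidden_high, float) + (1.0 - a) * np.asarray(hidden_low, float)
    ```

    Near edges the context should push the weight toward the high-frequency branch, while in smooth regions it favors the low-frequency branch, which is the adaptive receptive-field behavior the abstract motivates.
    
    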

    ELFNet: Evidential Local-global Fusion for Stereo Matching

    Although existing stereo matching models have achieved continuous improvement, they often face issues related to trustworthiness due to the absence of uncertainty estimation. Additionally, effectively leveraging multi-scale and multi-view knowledge of stereo pairs remains unexplored. In this paper, we introduce the Evidential Local-global Fusion (ELF) framework for stereo matching, which endows both uncertainty estimation and confidence-aware fusion with trustworthy heads. Instead of predicting the disparity map alone, our model estimates an evidential-based disparity considering both aleatoric and epistemic uncertainties. With the normal inverse-Gamma distribution as a bridge, the proposed framework realizes intra evidential fusion of multi-level predictions and inter evidential fusion between cost-volume-based and transformer-based stereo matching. Extensive experimental results show that the proposed framework exploits multi-view information effectively and achieves state-of-the-art overall performance both on accuracy and cross-domain generalization. The codes are available at https://github.com/jimmy19991222/ELFNet.
    Comment: ICCV 202
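    The normal inverse-Gamma (NIG) parameterization mentioned above admits closed-form aleatoric and epistemic uncertainties, as in deep evidential regression. A minimal sketch of those formulas (the function name is illustrative; the abstract does not spell out ELFNet's exact head):

    ```python
    def nig_uncertainties(gamma, nu, alpha, beta):
        """Given Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta)
        with alpha > 1, return the prediction plus its aleatoric and
        epistemic uncertainties:
            prediction = gamma
            aleatoric  = E[sigma^2]        = beta / (alpha - 1)
            epistemic  = Var[mu]           = beta / (nu * (alpha - 1))
        """
        aleatoric = beta / (alpha - 1.0)
        epistemic = beta / (nu * (alpha - 1.0))
        return gamma, aleatoric, epistemic

    print(nig_uncertainties(5.0, 2.0, 3.0, 4.0))  # → (5.0, 2.0, 1.0)
    ```

    Because epistemic uncertainty shrinks as the evidence parameter `nu` grows, such uncertainties can serve directly as confidence weights when fusing the cost-volume-based and transformer-based predictions.
    
    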

    Uncertainty Guided Adaptive Warping for Robust and Efficient Stereo Matching

    Correlation-based stereo matching has achieved outstanding performance by computing a cost volume between two feature maps. Unfortunately, current methods with a fixed model do not work uniformly well across various datasets, greatly limiting their real-world applicability. To tackle this issue, this paper proposes a new perspective to dynamically calculate correlation for robust stereo matching. A novel Uncertainty Guided Adaptive Correlation (UGAC) module is introduced to robustly adapt the same model to different scenarios. Specifically, a variance-based uncertainty estimation is employed to adaptively adjust the sampling area during the warping operation. Additionally, we improve the traditional non-parametric warping with learnable parameters, such that position-specific weights can be learned. We show that by empowering the recurrent network with the UGAC module, stereo matching can be exploited more robustly and effectively. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the ETH3D, KITTI, and Middlebury datasets when employing the same fixed model across these datasets without any retraining procedure. To target real-time applications, we further design a lightweight model based on UGAC, which also outperforms other methods on KITTI benchmarks with only 0.6M parameters.
    Comment: Accepted by ICCV202
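    The variance-guided adaptation above can be sketched in a few lines: the variance of local disparity candidates acts as the uncertainty proxy, and higher variance widens the sampling window used for the warping-based correlation. This is a simplified illustration, not the paper's UGAC module; the function name and parameters are assumptions.

    ```python
    import numpy as np

    def adaptive_sampling_radius(disparity_candidates, base_radius=1.0, scale=1.0):
        """Variance-based uncertainty sketch: confident (low-variance) pixels
        keep a tight sampling window, while uncertain (high-variance) pixels
        get a wider one for the warping-based correlation."""
        variance = float(np.var(np.asarray(disparity_candidates, dtype=float)))
        return base_radius + scale * variance
    ```

    Tying the search radius to per-pixel uncertainty is what lets one fixed model cope with datasets of very different disparity statistics without retraining.
    
    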