687 research outputs found

    Spectroscopic Analysis in the Virtual Observatory Environment with SPLAT-VO

    SPLAT-VO is a powerful graphical tool for displaying, comparing, modifying and analyzing astronomical spectra, as well as for searching and retrieving spectra from services around the world using Virtual Observatory (VO) protocols and services. The development of SPLAT-VO started in 1999, as part of the Starlink StarJava initiative, some time before that of the VO, so initial support for the VO was necessarily added once VO standards and services became available. Further development was supported by the Joint Astronomy Centre, Hawaii until 2009. Since the end of 2011, development of SPLAT-VO has been continued by the German Astrophysical Virtual Observatory and the Astronomical Institute of the Academy of Sciences of the Czech Republic. Since then several new features have been added, including support for the latest VO protocols, along with new visualization and spectrum-storing capabilities. This paper presents the history of SPLAT-VO, its capabilities, recent additions and future plans, as well as a discussion of the motivations and lessons learned up to now.
    Comment: 15 pages, 6 figures, accepted for publication in Astronomy & Computing
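
    SPLAT-VO itself is a Java application, but the kind of Simple Spectral Access (SSA) query it issues against VO services can be sketched in Python with the pyvo library. This is only an illustrative sketch: the service URL is a placeholder, and the position and search diameter are assumed example values.

```python
# Minimal sketch of an SSA (Simple Spectral Access) query, the VO
# protocol SPLAT-VO uses to find spectra. The URL below is a
# hypothetical placeholder, not a real service endpoint.
import pyvo

service = pyvo.dal.SSAService("https://example.org/ssa")  # placeholder URL

# Search for spectra near an assumed sky position (RA/Dec in degrees)
results = service.search(pos=(10.68, 41.27), diameter=0.1)

for rec in results:
    print(rec.title)  # each record describes one retrievable spectrum
```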

    VR/Urban: spread.gun - design process and challenges in developing a shared encounter for media façades

    Designing novel interaction concepts for urban environments is not only a technical challenge in terms of scale, safety, portability and deployment, but also a challenge of designing for social configurations and spatial settings. To outline what it takes to create a consistent and interactive experience in urban space, we describe the concept and multidisciplinary design process of VR/Urban's media intervention tool called Spread.gun, which was created for the Media Façade Festival 2008 in Berlin. The main design aims were the anticipation of urban space, situational system configuration and embodied interaction. This case study also reflects on the specific technical, organizational and infrastructural challenges encountered when developing media façade installations.

    MatrixVT: Efficient Multi-Camera to BEV Transformation for 3D Perception

    This paper proposes an efficient multi-camera to Bird's-Eye-View (BEV) transformation method for 3D perception, dubbed MatrixVT. Existing view transformers either suffer from poor transformation efficiency or rely on device-specific operators, hindering the broad application of BEV models. In contrast, our method generates BEV features efficiently using only convolutions and matrix multiplications (MatMul). Specifically, we propose describing the BEV feature as the MatMul of the image feature and a sparse Feature Transporting Matrix (FTM). A Prime Extraction module is then introduced to compress the dimension of the image features and reduce the FTM's sparsity. Moreover, we propose the Ring & Ray Decomposition to replace the FTM with two matrices and reformulate our pipeline to further reduce computation. Compared to existing methods, MatrixVT enjoys faster speed and a smaller memory footprint while remaining deploy-friendly. Extensive experiments on the nuScenes benchmark demonstrate that our method is highly efficient yet obtains results on par with the SOTA methods in object detection and map segmentation tasks.
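
    The core idea, that view transformation reduces to one matrix multiplication against a sparse transport matrix, can be sketched in a few lines. All shapes, names and the toy one-hot transport pattern below are illustrative assumptions, not the authors' code.

```python
# Sketch of the MatrixVT idea: BEV features as a single MatMul between
# flattened image features and a sparse Feature Transporting Matrix.
import torch

C = 64             # feature channels (assumed)
W_img = 44 * 16    # flattened image-feature columns (assumed)
N_bev = 128 * 128  # number of BEV grid cells (assumed)

# Image features after a backbone, flattened along spatial dims
img_feat = torch.randn(C, W_img)

# Feature Transporting Matrix: mostly zeros, since each BEV cell
# receives contributions from only a few image columns
ftm = torch.zeros(W_img, N_bev)
idx = torch.randint(0, W_img, (N_bev,))
ftm[idx, torch.arange(N_bev)] = 1.0  # toy one-hot transport pattern

# View transformation as a plain MatMul: (C, W_img) @ (W_img, N_bev)
bev_feat = (img_feat @ ftm).view(C, 128, 128)
print(bev_feat.shape)  # torch.Size([64, 128, 128])
```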

    BroadBEV: Collaborative LiDAR-camera Fusion for Broad-sighted Bird's Eye View Map Construction

    Recent sensor fusion in a Bird's Eye View (BEV) space has shown its utility in various tasks such as 3D detection, map segmentation, etc. However, the approach struggles with inaccurate camera BEV estimation and with perceiving distant areas, due to the sparsity of LiDAR points. In this paper, we propose a broad BEV fusion (BroadBEV) that addresses these problems with a cross-modality spatial synchronization approach. Our strategy aims to enhance camera BEV estimation for broad-sighted perception while simultaneously improving the completion of LiDAR's sparsity in the entire BEV space. Toward that end, we devise Point-scattering, which scatters the LiDAR BEV distribution to the camera depth distribution. The method boosts the camera branch's depth estimation and induces accurate localization of dense camera features in BEV space. For an effective BEV fusion between the spatially synchronized features, we suggest ColFusion, which applies the self-attention weights of the LiDAR and camera BEV features to each other. Our extensive experiments demonstrate that BroadBEV provides broad-sighted BEV perception with remarkable performance gains.
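
    The mutual-weighting idea behind ColFusion, where each modality's BEV map weights the other's, can be illustrated with a toy cross-weighting module. The exact attention form in BroadBEV is not reproduced here; the sigmoid gating below is an assumed stand-in.

```python
# Toy sketch of mutual cross-weighting between LiDAR and camera BEV
# features, in the spirit of ColFusion (assumed simplified form).
import torch
import torch.nn as nn

C, H, W = 64, 128, 128  # assumed BEV feature shape
lidar_bev = torch.randn(1, C, H, W)
cam_bev = torch.randn(1, C, H, W)

# Each branch produces a spatial weight map from its own features
to_weight = nn.Sequential(nn.Conv2d(C, 1, 1), nn.Sigmoid())
w_lidar = to_weight(lidar_bev)  # (1, 1, H, W)
w_cam = to_weight(cam_bev)

# Weights are applied to the *other* branch, so each modality can
# guide the other across the whole BEV space before fusion
fused = w_lidar * cam_bev + w_cam * lidar_bev
print(fused.shape)  # torch.Size([1, 64, 128, 128])
```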

    FB-BEV: BEV Representation from Forward-Backward View Transformations

    The View Transformation Module (VTM), where transformations happen between multi-view image features and the Bird's-Eye-View (BEV) representation, is a crucial step in camera-based BEV perception systems. Currently, the two most prominent VTM paradigms are forward projection and backward projection. Forward projection, represented by Lift-Splat-Shoot, leads to sparsely projected BEV features without post-processing. Backward projection, with BEVFormer being an example, tends to generate false-positive BEV features from incorrect projections due to the lack of depth utilization. To address the above limitations, we propose a novel forward-backward view transformation module. Our approach compensates for the deficiencies of both existing methods, allowing them to enhance each other and mutually yield higher-quality BEV representations. We instantiate the proposed module as FB-BEV, which achieves a new state-of-the-art result of 62.4% NDS on the nuScenes test set. The code will be released at https://github.com/NVlabs/FB-BEV.
    Comment: Accepted to ICCV 2023
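
    The forward/backward distinction can be made concrete with a toy sketch: forward projection scatters image features into BEV cells, backward projection samples image features from BEV queries, and the two results are combined. The cell assignments, sampling grid, and the simple additive fusion below are all illustrative assumptions, not FB-BEV's actual module.

```python
# Toy sketch of forward (lift-splat style) and backward (query/sample
# style) projection into BEV, fused by a simple sum (assumed stand-in).
import torch
import torch.nn.functional as F

B, C, H, W = 1, 32, 16, 44  # assumed image-feature shape
bev_h = bev_w = 64          # assumed BEV grid size
img_feat = torch.randn(B, C, H, W)

# Forward projection: scatter image features into BEV cells. Real
# systems derive the cells from depth and camera geometry; here the
# assignment is random for illustration.
flat = img_feat.flatten(2)                       # (B, C, H*W)
tgt = torch.randint(0, bev_h * bev_w, (H * W,))  # toy cell assignment
fwd_bev = torch.zeros(B, C, bev_h * bev_w)
fwd_bev.index_add_(2, tgt, flat)                 # sparse result

# Backward projection: every BEV cell samples back into the image
# plane; here via grid_sample at toy normalized coordinates.
grid = torch.rand(B, bev_h, bev_w, 2) * 2 - 1
bwd_bev = F.grid_sample(img_feat, grid, align_corners=False)  # dense

# Fusion: the dense backward features are refined by the depth-aware
# forward ones; a plain sum stands in for the learned module.
bev = bwd_bev + fwd_bev.view(B, C, bev_h, bev_w)
print(bev.shape)  # torch.Size([1, 32, 64, 64])
```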

    Development of wireless network planning software for rural community use

    Rural New Zealand has poor access to broadband Internet. The CRCnet project at the University of Waikato identified point-to-point wireless technology as an appropriate solution, and built networks for rural communities. The project identified viable solutions using low-cost wireless technologies and commodity hardware, allowing it to establish general construction guidelines for planning rural wireless networks. The CRCnet researchers speculated that these general construction guidelines had simplified the wireless network problem to the point at which it seemed feasible to embed the guidelines within a software tool. A significant observation by the CRCnet researchers was that community members are collectively aware of much of the local information required in the planning process. Bringing these two ideas together, this thesis hypothesises that a software tool could be designed to enable members of rural communities to plan their own wireless networks. To investigate this hypothesis, a wireless network planning system (WiPlan) was developed. WiPlan includes a tutorial that takes the unique approach of teaching the user the process rather than the detail of network planning. WiPlan was evaluated using a novel evaluation technique structured as a role-playing game, with a study design that provided participants with local knowledge appropriate to their planning roles. In two trials, WiPlan was found to support participants in successfully planning feasible networks, soliciting local knowledge as needed throughout the planning process; participants in both trials applied the techniques introduced by the tutorial and planned feasible wireless networks within budget. This thesis thus explores the feasibility of designing a wireless network planning tool that can assist members of rural communities with no expertise in wireless network planning, and provides reasonable evidence that such a tool is feasible.
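
    To give a sense of the kind of feasibility check such a point-to-point planning tool must perform, here is a toy link-budget calculation using the standard free-space path-loss formula. All the radio parameters are generic assumed values, not CRCnet's or WiPlan's.

```python
# Toy point-to-point link feasibility check: free-space path loss
# plus a simple link budget. All parameter values are assumptions.
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def link_margin_db(distance_km, freq_mhz, tx_power_dbm=20,
                   tx_gain_dbi=19, rx_gain_dbi=19, rx_sensitivity_dbm=-85):
    """Received margin above sensitivity; positive means a viable link."""
    rx_dbm = (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
              - fspl_db(distance_km, freq_mhz))
    return rx_dbm - rx_sensitivity_dbm

# e.g. a 10 km hop at 2.4 GHz with 19 dBi dishes on both ends
print(round(link_margin_db(10, 2400), 1))  # ~23.0 dB margin -> feasible
```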