
    Design guidelines: The relationship between well-being, universal design and landscape architecture : A dissertation submitted in partial fulfilment of the requirements for the Degree of Master at Lincoln University

    Well-being and mental health have become increasingly important concerns, especially as the world experiences the COVID-19 pandemic. It is well established that natural environments can help restore mental health. Existing design guidelines offer criteria and suggestions that help designers promote mental health and well-being through therapeutic gardens and gardens in general health facilities. However, these guidelines are either too broad or too specific to apply readily, and there is a lack of guidance aimed at promoting well-being and mental health in universal public spaces. Universal design guidelines, on the other hand, can be applied to all kinds of design; they are treated as a basic consideration in landscape design, yet little attention has been paid to their potential for improving well-being. By identifying and understanding the relationship between universal design guidelines, landscape architecture and well-being, this dissertation proposes new frameworks for future study.

    Efficient Convolutional Neural Network for FMCW Radar Based Hand Gesture Recognition

    FMCW radar can detect an object's range, speed and Angle-of-Arrival; its advantages include robustness to bad weather, good range resolution and good speed resolution. In this paper, we consider the FMCW radar as a novel interaction interface on a laptop. We merge sequences of the object's range, speed and azimuth information into a single input, then feed it to a convolutional neural network to learn spatial and temporal patterns. Our model achieved 96% accuracy on the test set and in real-time testing.
    Comment: Poster in Ubicomp 201
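    As a rough sketch of the kind of pipeline the abstract describes, the snippet below stacks per-frame range, speed and azimuth profiles into one multi-channel tensor and classifies the gesture with a small CNN; the shapes, layer sizes and class count are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch (not the authors' code): merge range, speed (Doppler)
# and azimuth profiles over time into a single 3-channel input and classify
# the gesture with a small CNN.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, num_frames=32, num_bins=64, num_classes=8):
        super().__init__()
        # Input: (batch, 3, num_frames, num_bins); the 3 channels are the
        # range, speed and azimuth profiles stacked over the frame sequence.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (num_frames // 4) * (num_bins // 4), num_classes)

    def forward(self, x):
        x = self.features(x)            # spatial/temporal patterns
        return self.classifier(x.flatten(1))

model = GestureCNN()
dummy = torch.randn(4, 3, 32, 64)       # a batch of 4 merged radar "images"
print(model(dummy).shape)               # torch.Size([4, 8]) gesture logits
```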

    Medial septal cholinergic neurons modulate isoflurane anesthesia.

    BACKGROUND: Cholinergic drugs are known to modulate the response to general anesthesia. However, the sensitivity to isoflurane or other volatile anesthetics after selective lesion of septal cholinergic neurons that project to the hippocampus is not known. METHODS: Male Long Evans rats had 192 immunoglobulin G-saporin infused into the medial septum (n = 10) in order to selectively lesion cholinergic neurons, whereas control, sham-lesioned rats were infused with saline (n = 12). Two weeks after septal infusion, the hypnotic properties of isoflurane and ketamine were measured using a behavioral endpoint of loss of righting reflex (LORR). Septal lesion was assessed by counting choline acetyltransferase-immunoreactive cells and parvalbumin-immunoreactive cells. RESULTS: Rats with 192 immunoglobulin G-saporin lesion, as compared with control rats with sham lesion, showed an 85% decrease in choline acetyltransferase-immunoreactive, but not parvalbumin-immunoreactive, neurons in the medial septal area. Lesioned rats, as compared with controls, showed increased isoflurane sensitivity, characterized by a leftward shift of the curve plotting cumulative LORR percentage against isoflurane dose. However, lesioned and control rats did not differ in their LORR sensitivity to ketamine. When administered 1.375% isoflurane, LORR induction time was shorter, whereas emergence time was longer, in lesioned as compared with control rats. Hippocampal 62-100 Hz gamma power in the electroencephalogram decreased with isoflurane dose, and the decrease was greater in lesioned (n = 5) than in control (n = 5) rats. CONCLUSIONS: These findings suggest a role for septal cholinergic neurons in modulating sensitivity to isoflurane anesthesia, affecting both induction and emergence. The sensitivity of hippocampal gamma power to isoflurane appears to indicate anesthesia (LORR) sensitivity.

    CVTNet: A Cross-View Transformer Network for Place Recognition Using LiDAR Data

    LiDAR-based place recognition (LPR) is one of the most crucial components of autonomous vehicles to identify previously visited places in GPS-denied environments. Most existing LPR methods use mundane representations of the input point cloud without considering different views, which may not fully exploit the information from LiDAR sensors. In this paper, we propose a cross-view transformer-based network, dubbed CVTNet, to fuse the range image views (RIVs) and bird's eye views (BEVs) generated from the LiDAR data. It extracts correlations within the views themselves using intra-transformers and between the two different views using inter-transformers. Based on that, our proposed CVTNet generates a yaw-angle-invariant global descriptor for each laser scan end-to-end online and retrieves previously seen places by descriptor matching between the current query scan and the pre-built database. We evaluate our approach on three datasets collected with different sensor setups and environmental conditions. The experimental results show that our method outperforms the state-of-the-art LPR methods with strong robustness to viewpoint changes and long-time spans. Furthermore, our approach has a good real-time performance that can run faster than the typical LiDAR frame rate. The implementation of our method is released as open source at: https://github.com/BIT-MJY/CVTNet
    Comment: accepted by IEEE Transactions on Industrial Informatics 202
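    The intra-/inter-transformer fusion and descriptor matching the abstract mentions could look roughly like the sketch below. It is a hypothetical illustration, not the released CVTNet code; the feature dimensions, pooling choice and similarity search are all assumptions.

```python
# Rough illustration: self-attention within the RIV and BEV feature
# sequences (intra-view), cross-attention between them (inter-view), and
# pooling into one global descriptor compared against a database.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.intra_riv = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.intra_bev = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, riv_feats, bev_feats):
        # riv_feats, bev_feats: (batch, seq_len, dim) per-view feature sequences
        riv, _ = self.intra_riv(riv_feats, riv_feats, riv_feats)   # intra-view
        bev, _ = self.intra_bev(bev_feats, bev_feats, bev_feats)
        fused, _ = self.inter(riv, bev, bev)                       # inter-view
        # Pool over the sequence so the column order contributes less.
        descriptor = fused.max(dim=1).values
        return F.normalize(descriptor, dim=-1)                     # (batch, dim)

# Retrieval by descriptor matching against a pre-built database.
fuser = CrossViewFusion()
db = F.normalize(torch.randn(1000, 256), dim=-1)                   # stored place descriptors
query = fuser(torch.randn(1, 128, 256), torch.randn(1, 128, 256))
best_match = torch.argmax(db @ query.squeeze(0))                   # index of most similar place
```

    Max pooling over the sequence dimension is one simple way to reduce sensitivity to a circular (yaw) shift of the range-image columns; the paper's actual invariance mechanism may differ.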

    LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition

    Place recognition is one of the most crucial modules for autonomous vehicles to identify places that were previously visited in GPS-invalid environments. Sensor fusion is considered an effective method to overcome the weaknesses of individual sensors. In recent years, multimodal place recognition fusing information from multiple sensors has gathered increasing attention. However, most existing multimodal place recognition methods only use limited field-of-view camera images, which leads to an imbalance between features from different modalities and limits the effectiveness of sensor fusion. In this paper, we present a novel neural network named LCPR for robust multimodal place recognition, which fuses LiDAR point clouds with multi-view RGB images to generate discriminative and yaw-rotation invariant representations of the environment. A multi-scale attention-based fusion module is proposed to fully exploit the panoramic views from different modalities of the environment and their correlations. We evaluate our method on the nuScenes dataset, and the experimental results show that our method can effectively utilize multi-view camera and LiDAR data to improve the place recognition performance while maintaining strong robustness to viewpoint changes. Our open-source code and pre-trained models are available at https://github.com/ZhouZijie77/LCPR
    Comment: Accepted by IEEE Robotics and Automation Letters (RAL) 202
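    To make the multi-scale attention-based fusion idea concrete, here is an illustrative-only sketch (not the authors' implementation) in which LiDAR features attend to camera features at two spatial scales and the result is pooled into a global descriptor; the feature maps, sizes and number of scales are assumptions.

```python
# Hypothetical multi-scale attention fusion of panoramic LiDAR and camera
# feature maps into one place descriptor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    def __init__(self, dim=128, heads=4, scales=(1, 2)):
        super().__init__()
        self.scales = scales
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in scales]
        )
        self.proj = nn.Linear(dim * len(scales), dim)

    def forward(self, lidar_feat, cam_feat):
        # lidar_feat, cam_feat: (batch, dim, H, W) panoramic feature maps
        fused = []
        for scale, attn in zip(self.scales, self.attn):
            l = F.avg_pool2d(lidar_feat, scale).flatten(2).transpose(1, 2)  # (B, HW, dim)
            c = F.avg_pool2d(cam_feat, scale).flatten(2).transpose(1, 2)
            out, _ = attn(l, c, c)               # LiDAR queries attend to camera features
            fused.append(out.max(dim=1).values)  # pool each scale to (B, dim)
        return F.normalize(self.proj(torch.cat(fused, dim=-1)), dim=-1)

fusion = MultiScaleFusion()
desc = fusion(torch.randn(2, 128, 16, 64), torch.randn(2, 128, 16, 64))
print(desc.shape)  # torch.Size([2, 128]) global descriptors
```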

    SeqOT: A Spatial-Temporal Transformer Network for Place Recognition Using Sequential LiDAR Data

    Place recognition is an important component for autonomous vehicles to achieve loop closing or global localization. In this paper, we tackle the problem of place recognition based on sequential 3D LiDAR scans obtained by an onboard LiDAR sensor. We propose a transformer-based network named SeqOT to exploit the temporal and spatial information provided by sequential range images generated from the LiDAR data. It uses multi-scale transformers to generate a global descriptor for each sequence of LiDAR range images in an end-to-end fashion. During online operation, our SeqOT finds similar places by matching such descriptors between the current query sequence and those stored in the map. We evaluate our approach on four datasets collected with different types of LiDAR sensors in different environments. The experimental results show that our method outperforms the state-of-the-art LiDAR-based place recognition methods and generalizes well across different environments. Furthermore, our method operates online faster than the frame rate of the sensor. The implementation of our method is released as open source at: https://github.com/BIT-MJY/SeqOT
    Comment: Submitted to IEEE Transactions on Industrial Electronics
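    A minimal, hypothetical sketch of the sequence-matching idea (not the released SeqOT code) is shown below: each range image of a sequence is encoded, a transformer mixes information across the sequence, the result is pooled to one descriptor, and that descriptor is matched against descriptors stored in the map. Network sizes and shapes are illustrative assumptions.

```python
# Hypothetical sequence-to-descriptor pipeline for LiDAR range images.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqDescriptor(nn.Module):
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        # Per-frame encoder for a single-channel range image.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
        encoder_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=layers)

    def forward(self, seq):
        # seq: (batch, T, 1, H, W) sequence of range images
        b, t = seq.shape[:2]
        frames = self.frame_encoder(seq.flatten(0, 1)).view(b, t, -1)  # (B, T, dim)
        mixed = self.temporal(frames)                                  # attention across the sequence
        return F.normalize(mixed.mean(dim=1), dim=-1)                  # one descriptor per sequence

model = SeqDescriptor()
query = model(torch.randn(1, 5, 1, 32, 360))          # 5-scan query sequence
map_db = F.normalize(torch.randn(2000, 256), dim=-1)  # descriptors stored in the map
loop_candidate = torch.argmax(map_db @ query.squeeze(0))
```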