7,291 research outputs found

    Deep Learning for Scene Flow Estimation on Point Clouds: A Survey and Prospective Trends

    Aiming at obtaining structural information and 3D motion of dynamic scenes, scene flow estimation has long been a research interest in computer vision and computer graphics. It is also a fundamental task for applications such as autonomous driving. Compared to previous methods that rely on image representations, much recent research builds on the power of deep learning and focuses on point cloud representations to conduct 3D flow estimation. This paper comprehensively reviews the pioneering literature on scene flow estimation based on point clouds. It examines the learning paradigms in detail and presents insightful comparisons between state-of-the-art deep learning methods for scene flow estimation. Furthermore, it investigates various higher-level scene understanding tasks, including object tracking and motion segmentation, and concludes with an overview of foreseeable research trends for scene flow estimation.
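As a concrete (if naive) point of reference for the learning-based methods this survey covers, scene flow between two point clouds can be approximated by a nearest-neighbor displacement baseline. The sketch below is purely illustrative and is not drawn from any of the surveyed papers; all names and the rigid-translation test case are assumptions.

```python
import numpy as np

def nn_scene_flow(p1, p2):
    """Estimate per-point flow from cloud p1 (N, 3) to cloud p2 (M, 3)
    as the displacement to each point's nearest neighbor in p2."""
    # Pairwise squared distances, shape (N, M)
    d2 = ((p1[:, None, :] - p2[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)        # index of nearest neighbor in p2
    return p2[nn] - p1            # flow vectors, shape (N, 3)

# A rigidly translated cloud (small shift) is recovered exactly,
# since each point's nearest neighbor is its true correspondence.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
shift = np.array([0.001, 0.0, -0.002])
flow = nn_scene_flow(cloud, cloud + shift)
```

Real scenes break this baseline (occlusions, non-rigid motion, noise), which is exactly the gap the deep methods in the survey address.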

    An aluminum optical clock setup and its evaluation using Ca+

    This thesis reports on the progress of the aluminum ion clock set up at the German national metrology institute, Physikalisch-Technische Bundesanstalt (PTB), in Braunschweig. All known relevant systematic frequency shifts are discussed. The systematic shifts were measured on the co-trapped logic ion 40Ca+, which is advantageous due to its higher sensitivity to external fields compared to 27Al+. The observation of the clock transition of 27Al+ and an analysis of the detection error are described.
    DFG/DQ-mat/Project-ID 274200144 – SFB 1227/E

    Review of Methodologies to Assess Bridge Safety During and After Floods

    This report summarizes a review of technologies used to monitor bridge scour, with an emphasis on techniques appropriate for testing during and immediately after design flood conditions. The goal of this study is to identify potential technologies and strategies for the Illinois Department of Transportation that may be used to enhance the reliability of bridge safety monitoring during floods at local to state levels. The research team conducted a literature review of technologies that have been explored by state departments of transportation (DOTs) and national agencies, as well as state-of-the-art technologies that have not been extensively employed by DOTs. This review included informational interviews with representatives from DOTs and relevant industry organizations. Recommendations include considering (1) acquisition of tethered kneeboard- or surf ski-mounted single-beam sonars for rapid deployment by local agencies, (2) acquisition of remote-controlled vessels mounted with single-beam and side-scan sonars for statewide deployment, (3) development of large-scale particle image velocimetry systems using remote-controlled drones for stream velocity and direction measurement during floods, (4) physical modeling to develop Illinois-specific hydrodynamic loading coefficients for Illinois bridges during flood conditions, and (5) development of holistic risk-based bridge assessment tools that incorporate structural, geotechnical, hydraulic, and scour measurements to provide rapid feedback for bridge closure decisions.
    IDOT-R27-SP50

    Machine learning enabled millimeter wave cellular system and beyond

    Millimeter-wave (mmWave) communication, with its abundant bandwidth and immunity to interference, has been deemed a promising technology for the next generation network and beyond. With the help of mmWave, the requirements envisioned for the future mobile network could be met, such as accommodating the massive growth in coverage, capacity, and traffic, providing a better quality of service and experience to users, supporting ultra-high data rates and reliability, and ensuring ultra-low latency. However, characteristics of mmWave such as short transmission distance, high sensitivity to blockage, and large propagation path loss pose challenges for mmWave cellular network design. In this context, to enjoy the benefits of mmWave networks, the architecture of the next generation cellular network will be more complex, and a more complex network brings more complex problems. The plethora of possibilities makes planning and managing such a system difficult. Specifically, to provide better Quality of Service and Quality of Experience for users in such a network, efficient and effective handover for mobile users is essential. The probability of a handover trigger will increase significantly in the next generation network due to dense small cell deployment, and since the resources in each base station (BS) are limited, handover management will be a great challenge. Further, to achieve the maximum transmission rate for users, the line-of-sight (LOS) channel would be the main transmission channel. However, owing to the characteristics of mmWave and the complexity of the environment, a LOS channel is not always available; non-line-of-sight channels should be explored and used as backup links to serve users. With the problems becoming complex and nonlinear, and data traffic dramatically increasing, conventional methods are no longer effective or efficient.
    In this case, solving these problems in the most efficient manner becomes important, and new concepts as well as novel technologies need to be explored. Among them, one promising solution is the use of machine learning (ML) in the mmWave cellular network. On the one hand, with the aid of ML approaches, the network can learn from mobile data, allowing the system to use adaptable strategies while avoiding unnecessary human intervention. On the other hand, when ML is integrated into the network, complexity and workload can be reduced while the huge number of devices and volume of data are efficiently managed. Therefore, in this thesis, ML techniques that assist in optimizing different areas of the mmWave cellular network are explored, in terms of non-line-of-sight (NLOS) beam tracking, handover management, and beam management. To be specific, first, a procedure is proposed to predict the angle of arrival (AOA) and angle of departure (AOD), in both azimuth and elevation, in NLOS mmWave communications using a deep neural network. Along with the AOA and AOD prediction, trajectory prediction is employed based on the dynamic window approach (DWA). The simulation scenario is built with ray tracing technology and used to generate data. Based on the generated data, two deep neural networks (DNNs) predict the AOA/AOD in azimuth (AAOA/AAOD) and the AOA/AOD in elevation (EAOA/EAOD). Furthermore, under the assumption that the UE mobility and precise location are unknown, the UE trajectory is predicted and fed into the trained DNNs as a parameter to predict the AAOA/AAOD and EAOA/EAOD, showing the performance under a realistic assumption. The robustness of both procedures is evaluated in the presence of errors, and the results show that a DNN is a promising tool for predicting AOA and AOD in an NLOS scenario.
    Second, a novel handover scheme is designed to optimize overall system throughput and total system delay while guaranteeing the quality of service (QoS) of each user equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates a reinforcement learning (RL) algorithm with optimization theory. The RL algorithm, multi-agent proximal policy optimization (MAPPO), determines the handover trigger conditions, and an optimization problem formulated in conjunction with MAPPO selects the target base station and determines beam selection. It evaluates and optimizes the system's total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made. Third, a multi-agent RL-based beam management scheme is proposed, in which multi-agent deep deterministic policy gradient (MADDPG) is applied at each small-cell base station (SCBS) to maximize system throughput while guaranteeing quality of service. With MADDPG, beam management can serve the UEs more efficiently and accurately: the mobility of UEs causes dynamic changes in the network environment, the MADDPG algorithm learns from these changes, and the beam management at each SCBS is optimized according to the reward or penalty received when serving different UEs. This approach improves overall system throughput and delay compared with traditional beam management methods. The works presented in this thesis demonstrate the potential of ML for addressing problems in the mmWave cellular network and provide specific solutions for optimizing NLOS beam tracking, handover management, and beam management. For the NLOS beam tracking part, simulation results show that the prediction errors of the AOA and AOD can be kept within an acceptable range of ±2°.
    Further, for the handover optimization part, numerical results show that system throughput and delay are improved by 10% and 25%, respectively, compared with two typical RL algorithms, deep deterministic policy gradient (DDPG) and deep Q-learning (DQL). Lastly, for the intelligent beam management part, numerical results reveal the convergence of MADDPG and its superiority in improving system throughput compared with other typical RL algorithms and the traditional beam management method.
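The DNN-based AOA/AOD prediction described in this abstract can be caricatured as a small feed-forward regressor mapping trajectory features to the four angles (AAOA, AAOD, EAOA, EAOD). The sketch below is a hypothetical illustration with random, untrained weights; the layer sizes and feature choice are assumptions, not the thesis's actual network.

```python
import numpy as np

rng = np.random.default_rng(42)

def init_mlp(n_in=6, n_hidden=32, n_out=4):
    """Random (untrained) weights for a two-layer regressor.
    n_in: trajectory features (e.g. recent UE positions);
    n_out: the four angles AAOA, AAOD, EAOA, EAOD.
    All sizes here are illustrative assumptions."""
    return {
        "W1": rng.normal(0, 0.1, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.1, (n_hidden, n_out)), "b2": np.zeros(n_out),
    }

def predict_angles(params, features):
    """Forward pass: features (batch, n_in) -> angles (batch, 4)."""
    h = np.tanh(features @ params["W1"] + params["b1"])  # hidden layer
    return h @ params["W2"] + params["b2"]               # linear output

params = init_mlp()
batch = rng.normal(size=(8, 6))   # 8 hypothetical trajectory snapshots
angles = predict_angles(params, batch)
```

In the thesis's pipeline the weights would instead be fitted to the ray-traced training data, with the DWA-predicted trajectory supplying the input features.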

    Simultaneous Localization And Mapping for robots: an experimental analysis using ROS

    The field of robotics has seen major improvements in the past few decades. One of the most important problems researchers around the world have tried to solve is how to make a mobile robot completely autonomous. An important step toward this goal is to create robots that can navigate an unknown environment and, using several sensors, build a map of it while locating themselves on that map. This problem is known as Simultaneous Localization And Mapping (SLAM), and it is very important for scenarios such as a mobile robot navigating an indoor environment, where GPS localization cannot perform well. Theoretically, this problem can be considered solved, since several solutions have been proposed in the literature, but in practice these solutions perform sufficiently well only under particular conditions, such as when the environment is static and its dimensions are limited. In the real world, by contrast, the environment can change, objects can be moved, and external factors can modify the appearance of a place, making the localization of the robot very uncertain. Therefore, in practice, long-term SLAM is an unsolved problem and an open field of research. A practical problem for which a definitive solution has not yet been proposed is Loop Closure Detection (LCD), the ability of the robot to recognize places it has previously visited, which is necessary to achieve truly long-term SLAM. Many solutions have been proposed in the literature, but it is very challenging for a robot to recognize the same place at different times of day, in different seasons, or when a particular location has not been visited for a long time. Over the years, several practical SLAM solutions have been implemented, but truly long-term SLAM has not yet been achieved. In this thesis, two mature SLAM approaches are compared, highlighting their criticalities and possible improvements in view of long-term SLAM.
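Loop closure detection, the unsolved problem this abstract highlights, is often cast as matching a current place descriptor against previously stored ones. The minimal cosine-similarity matcher below is an illustrative sketch, not code from either of the SLAM systems compared in the thesis; the descriptors and threshold are toy assumptions.

```python
import numpy as np

def loop_closure_candidates(query, database, threshold=0.9):
    """Return indices of stored place descriptors whose cosine
    similarity with the query descriptor meets the threshold."""
    db = np.asarray(database, dtype=float)
    q = np.asarray(query, dtype=float)
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.flatnonzero(sims >= threshold)

# Toy descriptors: place 0 matches the query exactly, place 1 is
# orthogonal (different place), place 2 is a slightly changed view
# of the same place (e.g. new lighting or season).
matches = loop_closure_candidates([1.0, 0.0],
                                  [[1.0, 0.0], [0.0, 1.0], [0.95, 0.31]])
```

The long-term SLAM difficulty is precisely that real appearance changes push true matches like place 2 below any fixed threshold.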

    Full stack development toward a trapped ion logical qubit

    Quantum error correction is a key step toward the construction of a large-scale quantum computer: it prevents small infidelities in quantum gates from accumulating over the course of an algorithm. Detecting and correcting errors is achieved by using multiple physical qubits to form a smaller number of robust logical qubits. The physical implementation of a logical qubit requires multiple qubits on which high-fidelity gates can be performed. The project aims to realize a logical qubit based on ions confined in a microfabricated surface trap. Each physical qubit will be a microwave dressed-state qubit based on 171Yb+ ions. Gates are intended to be realized through RF and microwave radiation in combination with magnetic field gradients. The project vertically integrates the software stack down to the hardware compilation layers in order to deliver, in the near future, a fully functional small device demonstrator. This thesis presents novel results on multiple layers of a full stack quantum computer model. On the hardware level, a robust quantum gate is studied and ion displacement over the X-junction geometry is demonstrated. The experimental organization is optimized through automation and compressed waveform data transmission. A new quantum assembly language dedicated purely to trapped ion quantum computers is introduced. The demonstrator is aimed at testing implementations of quantum error correction codes while preparing for larger scale iterations.

    Single Frame Atmospheric Turbulence Mitigation: A Benchmark Study and A New Physics-Inspired Transformer Model

    Image restoration algorithms for atmospheric turbulence are known to be much more challenging to design than traditional ones, such as deblurring or denoising, because the distortion caused by turbulence is an entanglement of spatially varying blur, geometric distortion, and sensor noise. Existing CNN-based restoration methods built upon convolutional kernels with static weights are insufficient to handle the spatially dynamic atmospheric turbulence effect. To address this problem, in this paper we propose a physics-inspired transformer model for imaging through atmospheric turbulence. The proposed network utilizes the power of transformer blocks to jointly extract a dynamic turbulence distortion map and restore a turbulence-free image. In addition, recognizing the lack of a comprehensive dataset, we collect and present two new real-world turbulence datasets that allow for evaluation with both classical objective metrics (e.g., PSNR and SSIM) and a new task-driven metric based on text recognition accuracy. Both real testing sets and all related code will be made publicly available.
    Comment: This paper is accepted as a poster at ECCV 202
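PSNR, one of the classical objective metrics named in this abstract, follows a standard definition that is simple to compute. The sketch below implements that textbook formula for 8-bit images; it is not code from the paper, and the example image is a made-up test case.

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    ref = np.asarray(reference, dtype=float)
    out = np.asarray(restored, dtype=float)
    mse = np.mean((ref - out) ** 2)
    # Identical images have zero error, i.e. infinite PSNR.
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((32, 32), 100.0)
value = psnr(ref, ref + 16.0)   # every pixel off by 16 -> MSE = 256
```

Higher is better; the paper pairs such pixel-level metrics with a task-driven one (text recognition accuracy) because PSNR alone can miss geometric distortion.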

    Novel strategies for the modulation and investigation of memories in the hippocampus

    Disruptions of the memory systems in the brain are linked to the manifestation of many neuropsychiatric diseases such as Alzheimer’s disease, depression, and post-traumatic stress disorder. The limited efficacy of current treatments necessitates the development of more effective therapies. Neuromodulation has proven effective in a variety of neurological diseases and could be an attractive solution for memory disorders. However, the application of neuromodulation requires a more detailed understanding of the network dynamics associated with memory formation and recall. In this work, we applied a combination of optical and computational tools to develop a novel strategy for the modulation of memories, and have expanded its application to the interrogation of the hippocampal circuitry underlying memory processing in mice. First, we developed a closed-loop optogenetic stimulation platform to activate neurons implicated in memory processing (engram neurons) with high temporal resolution. We applied this platform to modulate the activity of engram neurons and assess memory processing with respect to synchronous network activity. The results of our investigation support the proposal that encoding new information and recalling stored memories occur during distinct epochs of hippocampal network-wide oscillations. Having established the high efficacy of modulating engram neurons' activity in a closed-loop fashion, we sought to combine it with two-photon imaging to enable high-spatial-resolution interrogation of hippocampal circuitry. We developed a behavioral apparatus for head-fixed engram modulation and the assessment of memory recall in immobile animals. Moreover, through the optimization of dual-color two-photon imaging, we improved the ability to monitor the activity of neurons in the subfields of the hippocampus with cellular specificity.
    The platform created here will be applied to investigate the effects of engram reactivation on downstream projection targets with high spatial and cell-subtype specificity. Following these lines of investigation will enhance our understanding of memory modulation and could lead to novel neuromodulation treatments for neurological disorders associated with memory malfunction.

    Episodic outflow feedback in low-mass star formation

    Protostellar outflows are a ubiquitous signpost of star formation. Even the youngest and most embedded sources launch outflows that entrain ambient core material, significantly altering the whole accretion phase of protostars. Outflows thereby reduce the star formation efficiency and determine the final stellar mass. By extracting angular momentum, outflows allow stars to accrete mass from their surrounding accretion discs. In the case of low-mass star formation, outflows are considered the most important feedback mechanism. Observations of long chains of outflow bullets show that outflow feedback is episodic rather than continuous. How episodic outflow feedback impacts the evolution and outcome of star formation is still not fully understood. This thesis contains three publications addressing the impact of episodic outflow feedback on the star formation process and the fossil information carried by the outflows. Using an episodic sub-grid outflow model, a total of 111 numerical smoothed particle hydrodynamics simulations are performed to follow the star formation process through its early stages. These simulations comprise a resolution and parameter study showing that episodic outflow feedback is highly self-regulating. Episodic protostellar outflows entrain about ten times their initially ejected mass, thereby approximately halving the star formation efficiency and shifting the stellar initial mass function. Protostellar outflows affect how stars accrete by promoting disc accretion over radial accretion. The promoted disc accretion enhances the fraction of equal-mass twin binaries to a level in good agreement with observations. Simulations without outflow feedback form more stars and more higher-order multiple systems, which predominantly break apart into binary systems. Outflow feedback enhances the stability of higher-order multiple systems such that the resulting multiplicity statistics are in good agreement with observations.
    Since the accretion of gas and the launching of outflows are highly connected, protostellar outflows carry fossil records of the launching protostar's accretion history. Hubble wedges in position-velocity diagrams correspond to episodically ejected outflow bullets that have not yet interacted with the cavity wall. Using the kinematic information carried by the outflow, and especially by the bullets, it is possible to estimate stellar accretion rates. Dynamical ages of outflows and individual bullets give an estimate of the protostellar age and a history of outburst events. The outflow opening angle and activity help to differentiate between evolutionary stages. Combined, this information allows a reconstruction of the launching protostar's accretion history. Episodic outflows significantly shape the evolution and morphology of the star formation process and should therefore be considered when studying star formation.
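The dynamical-age estimate mentioned in this abstract reduces to dividing a lobe (or bullet) length by its characteristic velocity. The back-of-the-envelope version below uses illustrative numbers, not results from the thesis, and ignores projection effects and deceleration.

```python
# Dynamical age t_dyn = projected length / characteristic outflow velocity.
PC_IN_KM = 3.0857e13        # kilometres per parsec
SECONDS_PER_YEAR = 3.156e7  # seconds per year (approx.)

def dynamical_age_yr(length_pc, velocity_km_s):
    """Crude dynamical age of an outflow lobe or bullet, in years."""
    return length_pc * PC_IN_KM / velocity_km_s / SECONDS_PER_YEAR

# A 0.1 pc lobe driven at 30 km/s implies an age of a few thousand years,
# and per-bullet ages spaced along the flow trace individual outbursts.
age = dynamical_age_yr(0.1, 30.0)
```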

    Image-Based Rendering Of Real Environments For Virtual Reality
