
    N-body simulations of gravitational dynamics

    We describe the astrophysical and numerical basis of N-body simulations, both of collisional stellar systems (dense star clusters and galactic centres) and collisionless stellar dynamics (galaxies and large-scale structure). We explain and discuss the state-of-the-art algorithms used for these quite different regimes, attempt to give a fair critique, and point out possible directions of future improvement and development. We briefly touch upon the history of N-body simulations and their most important results.
    Comment: invited review (28 pages), to appear in European Physics Journal Plus
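
    The review surveys force solvers and integrators rather than giving code. As a concrete reference point, here is a minimal sketch (not from the paper) of the simplest scheme it builds on: direct-summation O(N²) forces with a kick-drift-kick leapfrog integrator, in G = 1 units; the softening length `eps` is an illustrative parameter.

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation gravitational accelerations, O(N^2).
    eps is a softening length that avoids singular forces at close encounters."""
    diff = pos[None, :, :] - pos[:, None, :]  # r_j - r_i for all pairs, shape (N, N, 3)
    dist2 = (diff ** 2).sum(-1) + eps ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)             # no self-interaction
    return (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

def leapfrog(pos, vel, mass, dt, n_steps):
    """Kick-drift-kick leapfrog: symplectic and second order, the workhorse
    integrator for collisionless N-body runs. Works on copies of the inputs."""
    pos, vel = pos.astype(float), vel.astype(float)
    acc = accelerations(pos, mass)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc        # half kick
        pos += dt * vel              # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc        # half kick
    return pos, vel
```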

    RoboDepth: Robust Out-of-Distribution Depth Estimation under Corruptions

    Depth estimation from monocular images is pivotal for real-world visual perception systems. While current learning-based depth estimation models train and test on meticulously curated data, they often overlook out-of-distribution (OoD) situations. Yet, in practical settings -- especially safety-critical ones like autonomous driving -- common corruptions can arise. Addressing this oversight, we introduce a comprehensive robustness test suite, RoboDepth, encompassing 18 corruptions spanning three categories: i) weather and lighting conditions; ii) sensor failures and movement; and iii) data processing anomalies. We subsequently benchmark 42 depth estimation models across indoor and outdoor scenes to assess their resilience to these corruptions. Our findings underscore that, in the absence of a dedicated robustness evaluation framework, many leading depth estimation models may be susceptible to typical corruptions. We delve into design considerations for crafting more robust depth estimation models, touching upon pre-training, augmentation, modality, model capacity, and learning paradigms. We anticipate our benchmark will establish a foundational platform for advancing robust OoD depth estimation.
    Comment: NeurIPS 2023; 45 pages, 25 figures, 13 tables; Code at https://github.com/ldkong1205/RoboDepth
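
    The corruption suite and metrics themselves are not reproduced here, but the evaluation recipe the abstract implies is easy to sketch: perturb the input, run the model, and compare a standard depth metric on clean versus corrupted images. The severity-to-noise mapping and the `model` callable below are illustrative assumptions, not RoboDepth's actual definitions (those live in the linked repository).

```python
import numpy as np

def gaussian_noise(img, severity=3):
    """One example corruption in the 'sensor failure' spirit: additive
    Gaussian noise whose strength grows with an integer severity level (1-5).
    The severity-to-sigma mapping is illustrative. img is HxWx3 in [0, 1]."""
    sigma = [0.02, 0.04, 0.06, 0.10, 0.15][severity - 1]
    noisy = img + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def abs_rel(pred, gt):
    """Absolute relative error, a standard depth-estimation metric."""
    mask = gt > 0                                  # ignore invalid ground truth
    return np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])

def robustness_gap(model, img, gt, severity=3):
    """Robustness probe: how much worse does `model` (any callable mapping
    an image to an HxW depth map) get under the corruption?"""
    clean_err = abs_rel(model(img), gt)
    corr_err = abs_rel(model(gaussian_noise(img, severity)), gt)
    return corr_err - clean_err
```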

    On the Synergies between Machine Learning and Binocular Stereo for Depth Estimation from Images: a Survey

    Stereo matching is one of the longest-standing problems in computer vision, with close to 40 years of study and research. Over the years the paradigm has shifted from local, pixel-level decisions, through various forms of discrete and continuous optimization, to data-driven, learning-based methods. More recently, the rise of machine learning and the rapid proliferation of deep learning have enhanced stereo matching with exciting new trends and applications that were unthinkable until a few years ago. Interestingly, the relationship between these two worlds is two-way. While machine learning, and especially deep learning, advanced the state of the art in stereo matching, stereo itself enabled ground-breaking new methodologies such as self-supervised monocular depth estimation based on deep networks. In this paper, we review recent research in the field of learning-based depth estimation from single and binocular images, highlighting the synergies, the successes achieved so far, and the open challenges the community will face in the immediate future.
    Comment: Accepted to TPAMI. Paper version of our CVPR 2019 tutorial: "Learning-based depth estimation from stereo and monocular images: successes, limitations and future challenges" (https://sites.google.com/view/cvpr-2019-depth-from-image/home)
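
    For readers new to the field, the "local, pixel-level" end of the paradigm the survey describes can be illustrated with classic winner-take-all block matching; this sketch (not from the survey) scores candidate disparities by the sum of absolute differences (SAD) over a small window.

```python
import numpy as np

def block_matching(left, right, max_disp=64, window=5):
    """Classic local stereo: for each pixel in the left image, slide over
    candidate disparities and keep the one minimising the SAD cost over a
    small window. left/right are rectified grayscale images in [0, 1]."""
    h, w = left.shape
    half = window // 2
    pad = lambda im: np.pad(im, half, mode="edge")
    L, R = pad(left), pad(right)
    disparity = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            patch_l = L[y:y + window, x:x + window]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):      # match lies to the left
                patch_r = R[y:y + window, x - d:x - d + window]
                cost = np.abs(patch_l - patch_r).sum()
                if cost < best:
                    best, best_d = cost, d
            disparity[y, x] = best_d                   # winner-take-all choice
    return disparity
```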

    Content-Aware Multimedia Communications

    The demand for fast, economical, and reliable dissemination of multimedia information is growing steadily within our society. While people and the economy increasingly rely on communication technologies, engineers still struggle with their growing complexity. Complexity in multimedia communication originates from several sources. The most prominent is the unreliability of packet networks like the Internet. Recent advances in scheduling and error control mechanisms for streaming protocols have shown that the quality and robustness of multimedia delivery can be improved significantly when protocols are aware of the content they deliver. However, the proposed mechanisms require close cooperation between transport systems and application layers, which increases overall system complexity. Current approaches also require expensive metrics and focus only on specific encoding formats. A general and efficient model has been missing so far. This thesis presents efficient, format-independent solutions to support cross-layer coordination in system architectures. In particular, the first contribution of this work is a generic dependency model that enables transport layers to access content-specific properties of media streams, such as dependencies between data units and their importance. The second contribution is the design of a programming model for streaming communication and its implementation as a middleware architecture. The programming model hides the complexity of protocol stacks behind simple programming abstractions, but exposes cross-layer control and monitoring options to application programmers. For example, our interfaces allow programmers to choose appropriate failure semantics at design time while refining error protection and the visibility of low-level errors at run time. Using several examples, we show how our middleware simplifies the integration of stream-based communication into large-scale application architectures. An important result of this work is that despite cross-layer cooperation, neither application nor transport protocol designers experience an increase in complexity. Application programmers can even reuse existing streaming protocols, which effectively increases system robustness.
    Our society's demand for affordable and reliable communication grows steadily. While we make ourselves ever more dependent on modern communication technologies, the engineers of these technologies must both satisfy the demand for the rapid introduction of new products and master the growing complexity of their systems. The transmission of multimedia content such as video and audio data in particular is not trivial. One of the most prominent reasons is the unreliability of today's networks, such as the Internet. Packet losses and fluctuating transmission delays can massively degrade presentation quality. As recent developments in streaming protocols show, however, the quality and robustness of a transmission can be controlled efficiently when streaming protocols exploit information about the content of the transported data. Existing approaches that describe the content of multimedia data streams, though, are mostly specialised to individual compression schemes and use computationally expensive metrics, which clearly reduces their practical value. Moreover, this information exchange requires close cooperation between applications and transport layers. Since the interfaces of current system architectures are not prepared for this, either the interfaces must be extended or alternative architectural concepts must be created; the danger of both variants is that they may further increase a system's complexity. The central goal of this dissertation is therefore to achieve cross-layer coordination while simultaneously reducing complexity. Here the work makes two contributions to the current state of research. First, it defines a universal model for describing content attributes, such as importance values and dependency relationships within a data stream; transport layers can use this knowledge for efficient error control. Second, the work describes the Noja programming model for multimedia middleware. Noja defines abstractions for transmitting and controlling multimedia streams that enable the coordination of streaming protocols with applications. For example, programmers can select appropriate failure semantics and communication topologies and then refine and control the concrete error protection at run time.
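
    The abstract does not spell out the dependency model, but its core idea, exposing per-unit importance and inter-unit dependencies to the transport layer, can be sketched briefly. The `DataUnit` fields and the importance-propagation rule below are illustrative assumptions, not the thesis's actual definitions.

```python
from dataclasses import dataclass, field

@dataclass
class DataUnit:
    """A media data unit as seen by the transport layer: an opaque payload
    plus the content-specific properties a dependency model could expose."""
    seq: int
    importance: float                                # e.g. I-frame > P-frame > B-frame
    depends_on: list = field(default_factory=list)   # seq numbers this unit needs

def effective_importance(units, unit):
    """A unit matters not only for itself but for every unit that
    (transitively) depends on it; sum importance over that closure."""
    by_seq = {u.seq: u for u in units}
    dependents = {unit.seq}
    changed = True
    while changed:                                   # fixed-point over the dependency graph
        changed = False
        for u in units:
            if u.seq not in dependents and any(d in dependents for d in u.depends_on):
                dependents.add(u.seq)
                changed = True
    return sum(by_seq[s].importance for s in dependents)

# A content-aware transport scheduler could retransmit lost units in
# decreasing effective importance instead of plain sequence order.
stream = [DataUnit(0, 1.0), DataUnit(1, 0.4, [0]), DataUnit(2, 0.2, [1])]
print(effective_importance(stream, stream[0]))       # 1.6: the I-frame carries its dependents
```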

    Real-Time Quantum Noise Suppression In Very Low-Dose Fluoroscopy

    Fluoroscopy provides real-time X-ray screening of a patient's organs and of various radiopaque objects, which makes it an invaluable tool for many interventional procedures. For this reason, the number of fluoroscopy screenings has grown consistently in recent decades. However, this trend has raised many concerns about the increase in X-ray exposure, as even low-dose procedures turned out not to be as safe as previously assumed, demanding rigorous monitoring of the X-ray dose delivered to patients and to the exposed medical staff. In this context, the use of very low-dose protocols would be extremely beneficial. Nonetheless, this would result in very noisy images, which need to be suitably denoised in real time to support interventional procedures. Simple smoothing filters tend to produce blurring effects that undermine the visibility of object boundaries, which is essential for the human eye to understand the imaged scene. Therefore, some denoising strategies embed criteria based on noise statistics to improve their denoising performance. This dissertation focuses on the Noise Variance Conditioned Average (NVCA) algorithm, which exploits a priori knowledge of quantum noise statistics to reduce noise while preserving edges; it has already outperformed many state-of-the-art methods in denoising images corrupted by quantum noise, while also being suitable for real-time hardware implementation. Several issues that currently limit the actual use of very low-dose protocols in clinical practice are addressed, e.g. the evaluation of the actual performance of denoising algorithms in very low-dose conditions, the optimization of tuning parameters to obtain the best denoising performance, the design of an index to properly measure the quality of X-ray images, and the assessment of an a priori noise-characterization approach that accounts for time-varying noise statistics due to changes in X-ray tube settings. An improved NVCA algorithm is also presented, along with its real-time hardware implementation on a Field Programmable Gate Array (FPGA). The novel algorithm achieves more effective noise reduction, including for low-contrast moving objects, thus relaxing the trade-off between noise reduction and edge preservation, while further reducing hardware complexity, which keeps logic-resource usage low even on small FPGA platforms. The results presented in this dissertation provide the means for future studies aimed at embedding the NVCA algorithm in commercial fluoroscopic devices to accomplish real-time denoising of very low-dose X-ray images, which would foster their actual use in clinical practice.
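
    The dissertation's NVCA algorithm is not reproduced in this abstract, so the following is only a simplified reading of its premise: quantum (Poisson-like) noise has signal-dependent variance, so a pixel can be averaged with exactly those neighbours that are statistically compatible with it, preserving edges. The `gain` and `k` parameters are illustrative assumptions.

```python
import numpy as np

def nvca_like_filter(img, gain=0.01, k=2.0, window=3):
    """Edge-preserving, noise-statistics-conditioned average over a small
    window (a simplified reading of the NVCA premise, not the published
    algorithm). img is a grayscale float image in [0, 1]."""
    half = window // 2
    padded = np.pad(img, half, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            center = img[y, x]
            # Quantum (Poisson-like) noise: variance grows with the signal,
            # so the acceptance threshold adapts to the local intensity.
            sigma = np.sqrt(gain * max(center, 1e-6))
            # Average only neighbours statistically compatible with the
            # centre; pixels across an edge fail the test and are excluded.
            keep = np.abs(patch - center) <= k * sigma
            out[y, x] = patch[keep].mean()
    return out
```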

    Expanding Navigation Systems by Integrating It with Advanced Technologies

    Navigation systems provide an optimized route from one location to another. They rely mainly on external technologies such as the Global Positioning System (GPS) and other satellite-based radio navigation systems. GPS has many advantages, including high accuracy, near-global availability, reliability, and self-calibration. However, GPS is limited to outdoor operation. Combining different sources of data to improve the overall outcome is common practice in many domains. GIS is already integrated with GPS to provide visualization and context for a given location. The Internet of Things (IoT) is a growing domain in which embedded sensors are connected to the Internet, so IoT can improve existing navigation systems and expand their capabilities. This chapter proposes a framework based on the integration of GPS, GIS, IoT, and mobile communications to provide a comprehensive and accurate navigation solution. In the next section we outline the limitations of GPS, and then describe the integration of GIS, smartphones, and GPS to enable its use in mobile applications. In the rest of the chapter, we introduce various navigation implementations using alternative technologies integrated with GPS or operated as standalone devices.
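
    The chapter's framework is architectural, but the core handover logic it implies, prefer GPS outdoors and fall back to IoT-based positioning where GPS is unavailable or degraded, can be sketched briefly; the `Fix` type and the accuracy threshold below are illustrative assumptions, not the chapter's actual design.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fix:
    """A position estimate from any source, with its reported accuracy."""
    lat: float
    lon: float
    accuracy_m: float
    source: str

def best_fix(gps: Optional[Fix], iot: Optional[Fix], max_gps_err=25.0) -> Optional[Fix]:
    """Prefer GPS outdoors; when it is missing or degraded (indoors, urban
    canyons), fall back to an IoT-derived fix such as BLE beacons or Wi-Fi
    positioning. The 25 m threshold is an illustrative choice."""
    if gps is not None and gps.accuracy_m <= max_gps_err:
        return gps
    return iot or gps   # a degraded GPS fix is still better than nothing

# Indoors: no GPS fix, so the IoT beacon estimate wins.
print(best_fix(None, Fix(40.7128, -74.0060, 8.0, "ble-beacon")))
```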