132 research outputs found

    Refinement of Visual Hulls for Human Performance Capture


    Pre-Trained Driving in Localized Surroundings with Semantic Radar Information and Machine Learning

    Following the signal processing chain from radar detections to vehicle control, this thesis discusses a semantic radar segmentation, a radar SLAM built on top of it, and an autonomous parking function realized from their combination. Segmentation of the (static) environment is achieved with a radar-specific neural network, RadarNet. This segmentation enables the development of the semantic radar graph-SLAM SERALOC. Based on the semantic radar SLAM map, an exemplary autonomous parking functionality is implemented in a real test vehicle. Along a recorded reference path, the function parks solely on the basis of radar perception, with previously unmatched positioning accuracy. In a first step, a dataset of 8.2 · 10^6 point-wise semantically labelled radar point clouds is generated over a distance of 2507.35 m. No comparable datasets of this annotation level and radar specification are publicly available. Supervised training of the semantic segmentation network RadarNet reaches 28.97% mIoU on six classes. In addition, an automated radar labelling framework, SeRaLF, is presented, which supports radar labelling multimodally by means of reference cameras and LiDAR. For coherent mapping, a radar signal pre-filter based on an activation map is designed, which suppresses noise and other dynamic multipath reflections. A graph-SLAM front end specifically adapted to radar, with radar odometry edges between submaps and semantically separate NDT registration, assembles the pre-filtered semantic radar scans into a consistent metric map. Mapping accuracy and data association are thereby improved, and the first semantic radar graph-SLAM for arbitrary static environments is realized. Integrated into a real test vehicle, the interplay of the live RadarNet segmentation and the semantic radar graph-SLAM is evaluated by means of a purely radar-based autonomous parking functionality. Averaged over 42 autonomous parking manoeuvres (∅ 3.73 km/h) with a mean manoeuvre length of ∅ 172.75 m, a median absolute pose error of 0.235 m and a final pose error of 0.2443 m are achieved, surpassing comparable radar localization results by ≈ 50%. The map accuracy for changed, newly mapped locations over a mapping distance of ∅ 165 m yields a map consistency of ≈ 56% with a deviation of ∅ 0.163 m. For autonomous parking, an existing trajectory planner and controller approach was used.
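
    The reported 28.97% mIoU is the mean intersection-over-union over the six semantic classes. Below is a minimal sketch of that metric; the confusion-matrix values are random illustrative counts, not data from the thesis.

        import numpy as np

        def mean_iou(conf: np.ndarray) -> float:
            """Mean IoU from a (num_classes x num_classes) confusion matrix
            whose entry [t, p] counts points of true class t predicted as class p."""
            tp = np.diag(conf).astype(float)
            fp = conf.sum(axis=0) - tp          # predicted as c, but true class differs
            fn = conf.sum(axis=1) - tp          # true class c, but predicted otherwise
            iou = tp / np.maximum(tp + fp + fn, 1e-9)
            return float(iou.mean())

        # illustrative 6-class confusion matrix (random counts, not thesis data)
        rng = np.random.default_rng(0)
        conf = rng.integers(0, 1000, size=(6, 6))
        print(f"mIoU = {mean_iou(conf):.4f}")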

    Shadow segmentation and tracking in real-world conditions

    Visual information, in the form of images and video, comes from the interaction of light with objects. Illumination is a fundamental element of visual information. Detecting and interpreting illumination effects is part of our everyday visual experience. Shading, for instance, allows us to perceive the three-dimensional nature of objects. Shadows are particularly salient cues for inferring depth information. However, we do not make any conscious or unconscious effort to avoid them as if they were an obstacle when we walk around. Moreover, when humans are asked to describe a picture, they generally omit the presence of illumination effects, such as shadows, shading, and highlights, and instead give a list of objects and their relative positions in the scene. Processing visual information in a way that is close to what the human visual system does, and thus being aware of illumination effects, represents a challenging task for computer vision systems. Illumination phenomena in fact interfere with fundamental tasks in image analysis and interpretation applications, such as object extraction and description. On the other hand, illumination conditions are an important element to be considered when creating new and richer visual content that combines objects from different sources, both natural and synthetic. When taken into account, illumination effects can play an important role in achieving realism. Among illumination effects, shadows are often an integral part of natural scenes and one of the elements contributing to the naturalness of synthetic scenes. In this thesis, the problem of extracting shadows from digital images is discussed. A new analysis method for the segmentation of cast shadows in still and moving images, without the need for human supervision, is proposed. The problem of separating moving cast shadows from moving objects in image sequences is particularly relevant for an ever wider range of applications, ranging from video analysis to video coding, and from video manipulation to interactive environments. Therefore, particular attention has been dedicated to the segmentation of shadows in video. The validity of the proposed approach is however also demonstrated through its application to the detection of cast shadows in still color images. Shadows are a difficult phenomenon to model. Their appearance changes with changes in the appearance of the surface they are cast upon. It is therefore important to exploit multiple constraints derived from the analysis of the spectral, geometric and temporal properties of shadows to develop effective techniques for their extraction. The proposed method combines an analysis of color information and of photometric invariant features with a spatio-temporal verification process. With regard to the use of color information for shadow analysis, a complete picture of the existing solutions is provided, which points out the fundamental assumptions, the adopted color models and the links to research problems such as computational color constancy and color invariance. The proposed spatial verification makes no assumptions about scene geometry or object shape. The temporal analysis is based on a novel shadow tracking technique. On the basis of the tracking results, a temporal reliability estimation of shadows is proposed, which allows shadows that lack temporal coherence to be discarded. The proposed approach is general and can be applied to a wide class of applications and input data.
The proposed cast shadow segmentation method has been evaluated on a number of different video sequences representing indoor and outdoor real-world environments. The obtained results have confirmed the validity of the approach, in particular its ability to deal with different types of content and its robustness to different physically important independent variables, and have demonstrated an improvement over the state of the art. Examples of application of the proposed shadow segmentation tool to the enhancement of video object segmentation, tracking and description operations, and to video composition, have demonstrated the advantages of shadow-aware video processing.
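
    As a rough illustration of the kind of photometric cue such a method can build on (the color model, thresholds and parameter names below are assumptions made for this sketch, not the thesis' actual formulation): a cast shadow darkens a surface while leaving its chromaticity roughly unchanged, so pixels whose luminance drops with respect to a background model while their normalized chromaticity stays stable are plausible shadow candidates.

        import numpy as np

        def shadow_candidates(frame: np.ndarray, background: np.ndarray,
                              lum_drop: float = 0.3, chroma_tol: float = 0.05) -> np.ndarray:
            """Boolean mask of cast-shadow candidates.
            frame, background: float RGB images in [0, 1] with identical shape."""
            eps = 1e-6
            lum_f = frame.sum(axis=2) + eps
            lum_b = background.sum(axis=2) + eps
            # normalized chromaticity: r = R/(R+G+B), g = G/(R+G+B)
            chroma_f = frame[..., :2] / lum_f[..., None]
            chroma_b = background[..., :2] / lum_b[..., None]
            darker = lum_f < (1.0 - lum_drop) * lum_b              # luminance dropped noticeably
            same_chroma = np.abs(chroma_f - chroma_b).max(axis=2) < chroma_tol
            return darker & same_chroma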

    Yield sensing technologies for perennial and annual horticultural crops: a review

    Yield maps provide a detailed account of crop production and potential revenue of a farm. This level of detail enables a range of possibilities, from improving input management and conducting on-farm experimentation to generating profitability maps, thus creating value for farmers. While this technology is widely available for field crops such as maize, soybean and grain, few yield sensing systems exist for horticultural crops such as berries, field vegetables or orchards. Nevertheless, a wide range of techniques and technologies have been investigated as potential means of sensing crop yield for horticultural crops. This paper reviews yield monitoring approaches, which can be divided into proximal measurement principles, either direct or indirect, and remote ones. It also reviews remote sensing as a way to estimate and forecast yield prior to harvest. For each approach, the basic principles are explained, along with examples of application in horticultural crops and the reported success rates. The different approaches provide either a deterministic result (for instance, a direct measurement of weight) or an empirical one (for instance, capacitance measurements correlated to weight), which may impact transferability. The discussion also covers the level of precision required for different tasks, as well as trends and future perspectives. This review demonstrated the need for more commercial solutions to map yield of horticultural crops. It also showed that several approaches have demonstrated high success rates and that combining technologies may be the best way to provide sufficient accuracy and robustness for future commercial systems.
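
    The deterministic versus empirical distinction can be made concrete with a small calibration sketch: an empirical yield sensor needs a regression from its raw signal to the quantity of interest, whereas a load cell reports weight directly. The capacitance readings and weights below are made up for illustration and are not from the review.

        import numpy as np

        # hypothetical calibration data: sensor capacitance (pF) vs. measured fruit weight (kg)
        capacitance = np.array([12.1, 15.4, 18.9, 22.3, 26.0, 30.2])
        weight_kg   = np.array([0.95, 1.30, 1.62, 1.98, 2.41, 2.80])

        # least-squares fit of the empirical model: weight = a * capacitance + b
        a, b = np.polyfit(capacitance, weight_kg, deg=1)

        # predict yield for a new reading; accuracy depends entirely on the calibration
        new_reading = 24.5
        print(f"estimated weight: {a * new_reading + b:.2f} kg")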

    Modeling and Simulation in Engineering

    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the design process of products in various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to make real-time simulation possible) without altering the precision of the results.

    Quantifying atherosclerosis in vasculature using ultrasound imaging

    Cerebrovascular disease accounts for approximately 30% of the global burden associated with cardiovascular diseases [1]. According to the World Stroke Organisation, there are approximately 13.7 million new stroke cases annually, and just under six million people will die from stroke each year [2]. The underlying cause of this disease is atherosclerosis, a vascular pathology characterised by thickening and hardening of blood vessel walls. When fatty substances such as cholesterol accumulate on the inner linings of an artery, they cause a progressive narrowing of the lumen referred to as a stenosis. Localisation and grading of the severity of a stenosis is important for practitioners to assess the risk of rupture, which leads to stroke. Ultrasound imaging is popular for this purpose. It is low cost, non-invasive, and permits a quick assessment of vessel geometry and stenosis by measuring the intima-media thickness. Research is showing that 3D monitoring of plaque progression may provide a better indication of sites which are at risk of rupture. Various metrics have been proposed. Among these, the quantification of plaques by measuring vessel wall volume (VWV) using the segmented media-adventitia boundaries (MAB) and lumen-intima boundaries (LIB) has been shown to be sensitive to temporal changes in carotid plaque burden. Thus, methods to segment these boundaries are required to help generate VWV measurements with high accuracy, less user interaction and increased robustness to variability in different user acquisition protocols. This work proposes three novel methods to address these requirements and ultimately produce a highly accurate, fully automated segmentation algorithm which works on intensity-invariant data. The first method proposed was that of generating a novel, intensity-invariant representation of ultrasound data by creating phase-congruency maps from raw, unprocessed radio-frequency ultrasound information. Experiments carried out showed that this representation retained the necessary anatomical structural information to facilitate segmentation, while concurrently being invariant to changes in amplitude introduced by the user. The second method proposed was the novel application of Deep Convolutional Networks (DCN) to carotid ultrasound images to achieve fully automatic delineation of the MAB boundaries, in addition to the use of a novel fusion of amplitude and phase-congruency data as the image source. Experiments carried out showed that the DCN produces highly accurate and automated results, and that the fusion of amplitude and phase yields superior results to either one alone. The third method proposed was a new geometrically constrained objective function for the network's stochastic gradient descent optimisation, tuning it to the segmentation problem at hand, while also developing the network further to concurrently delineate both the MAB and LIB to produce vessel wall contours. Experiments carried out here also show that the novel geometric constraints improve the segmentation results on both MAB and LIB contours. In conclusion, the presented work provides significant novel contributions to the field of carotid ultrasound segmentation, and with future work, this could lead to implementations which facilitate plaque progression analysis for the end user.
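
    A minimal sketch of how vessel wall volume can be obtained from segmented MAB and LIB contours, assuming the contours are available as closed polygons on parallel cross-sectional slices with a known inter-slice spacing; the exact protocol used in the thesis may differ.

        import numpy as np

        def polygon_area(xy: np.ndarray) -> float:
            """Area of a closed 2D polygon via the shoelace formula; xy has shape (n, 2)."""
            x, y = xy[:, 0], xy[:, 1]
            return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

        def vessel_wall_volume(mab_contours, lib_contours, slice_spacing_mm: float) -> float:
            """VWV in mm^3: per-slice wall area (MAB area minus LIB area),
            summed over slices and multiplied by the inter-slice spacing."""
            wall_areas = [polygon_area(m) - polygon_area(l)
                          for m, l in zip(mab_contours, lib_contours)]
            return float(np.sum(wall_areas) * slice_spacing_mm)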

    A Concept For Surface Reconstruction From Digitised Data

    Reverse engineering, and in particular the reconstruction of surfaces from digitized data, is an important task in industry. With the development of new digitizing technologies such as laser scanning or photogrammetry, real objects can be measured or digitized quickly and cost-effectively. The result of the digitizing process is a set of discrete 3D sample points. These sample points have to be converted into a mathematical, continuous surface description, which can be further processed in different computer applications. The main goal of this work is to develop a concept for such a computer-aided surface generation tool that supports the new scanning technologies and meets the requirements in industry towards such a product. Therefore, first the requirements to be met by a surface reconstruction tool are determined. This marketing study has been done by analysing different departments of several companies. As a result, a catalogue of requirements is developed. The number of tasks and applications shows the importance of a fast and precise computer-aided reconstruction tool in industry. The main result of the analysis is that many important applications, such as stereolithography and copy milling, are based on triangular meshes or are able to handle such polygonal surfaces. Secondly, the digitizers currently available on the market and used in industry are analysed. Every scanning system has its strengths and weaknesses. A typical problem in digitizing is that some areas of a model cannot be digitized due to occlusion or obstruction. The systems also differ in terms of accuracy, flexibility, etc. The analysis of the systems leads to a second catalogue of requirements and tasks which have to be solved in order to provide a complete and effective software tool. The analysis also shows that the reconstruction problem cannot be solved fully automatically due to the many limitations of the scanning technologies. Based on these two catalogues of requirements, a concept for a software tool to process digitized data is developed and presented. The concept is restricted to the generation of polygonal surfaces. It combines automatic processes, such as the generation of triangular meshes from digitized data, with user-interactive tools, such as the reconstruction of sharp corners or the compensation of the scanning probe radius in tactilely measured data. The most difficult problem in this reconstruction process is the automatic generation of a surface from discrete measured sample points. Hence, an algorithm for generating triangular meshes from digitized data has been developed. The algorithm is based on the principle of multiple view combination. The proposed approach is able to handle large numbers of data points (examples with up to 20 million data points were processed). Two pre-processing algorithms for triangle decimation and surface smoothing are also presented and form part of the mesh generation process. Several practical examples, which show the effectiveness, robustness and reliability of the algorithm, are presented.
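
    The abstract mentions surface smoothing as one of the mesh-processing steps; the thesis' actual algorithm is not specified here, but a common baseline is Laplacian smoothing, where each vertex is moved toward the centroid of its neighbours. A minimal sketch under that assumption:

        import numpy as np

        def laplacian_smooth(vertices: np.ndarray, faces: np.ndarray,
                             iterations: int = 10, lam: float = 0.5) -> np.ndarray:
            """Simple Laplacian smoothing of a triangle mesh.
            vertices: (n, 3) float array; faces: (m, 3) int array of vertex indices."""
            # build vertex adjacency from the triangle faces
            neighbours = [set() for _ in range(len(vertices))]
            for a, b, c in faces:
                neighbours[a].update((b, c))
                neighbours[b].update((a, c))
                neighbours[c].update((a, b))
            v = vertices.astype(float).copy()
            for _ in range(iterations):
                centroids = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                                      for i, nb in enumerate(neighbours)])
                v += lam * (centroids - v)   # move each vertex toward its neighbour centroid
            return v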