
    Working Capital Management in Value-Based Corporate Management at Siemens Transformers

    Based on the discounted cash flow model, an increase in operating cash flow leads directly to an increase in enterprise value. The practice of value-based management, however, predominantly measures corporate performance using Economic Value Added (EVA). Under this concept, cash flow improvements are reflected only indirectly, through a reduced capital charge. Siemens AG recognized the importance of linking cash flow, capital efficiency, and value creation early on and refined its concept of value-based management in 2007. Since then, cash flow effects have been reflected directly through the cash conversion rate. Using the example of working capital management at Siemens Transformers, this paper analyses the new management concept with respect to management incentives and the execution of operational measures.
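    A minimal numeric sketch may help contrast the two metrics. The EVA formula below is the standard textbook definition, the cash conversion rate follows the free-cash-flow-over-income definition commonly attributed to Siemens, and all figures are hypothetical:

        # Illustrative sketch; the formulas are standard definitions, and the
        # input figures are hypothetical, not taken from Siemens' reporting.

        def economic_value_added(nopat, wacc, capital_employed):
            """EVA = NOPAT minus the capital charge (WACC x capital employed)."""
            return nopat - wacc * capital_employed

        def cash_conversion_rate(free_cash_flow, net_income):
            """CCR = free cash flow / net income; values near 1 mean earnings
            are largely backed by cash."""
            return free_cash_flow / net_income

        # A working-capital measure that frees cash shows up in EVA only via a
        # lower capital charge, but is visible immediately in the CCR.
        print(economic_value_added(nopat=120.0, wacc=0.08, capital_employed=1000.0))  # 40.0
        print(cash_conversion_rate(free_cash_flow=105.0, net_income=120.0))           # 0.875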

    Saliency-guided Adaptive Seeding for Supervoxel Segmentation

    We propose a new saliency-guided method for generating supervoxels in 3D space. Rather than using an evenly distributed spatial seeding procedure, our method uses visual saliency to guide the process of supervoxel generation. This results in densely distributed, small, and precise supervoxels in salient regions, which often contain objects, and larger supervoxels in less salient regions, which often correspond to background. Our approach largely improves the quality of the resulting supervoxel segmentation in terms of boundary recall and under-segmentation error on publicly available benchmarks.
    Comment: 6 pages, accepted to IROS201
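    As a rough illustration of saliency-guided seeding (shown in 2D for brevity; the function name and the sampling scheme are assumptions, not the authors' exact procedure), seed positions can be drawn with probability proportional to saliency:

        import numpy as np

        def saliency_guided_seeds(saliency, n_seeds, rng=None):
            # Draw seed coordinates with probability proportional to saliency,
            # so salient regions receive denser seeding than the background.
            rng = rng or np.random.default_rng(0)
            probs = saliency.ravel() / saliency.sum()
            flat = rng.choice(saliency.size, size=n_seeds, replace=False, p=probs)
            return np.stack(np.unravel_index(flat, saliency.shape), axis=1)

        saliency = np.random.rand(64, 64) ** 3                # toy map with sparse peaks
        seeds = saliency_guided_seeds(saliency, n_seeds=50)   # (50, 2) seed coordinates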

    Semantic segmentation priors for object discovery

    Reliable object discovery in realistic indoor scenes is a necessity for many computer vision and service robot applications. Semantic segmentation methods for such scenes have made huge advances in recent years. These methods can provide useful prior information for object discovery by removing false positives and by delineating object boundaries. We propose a novel method that combines bottom-up object discovery and semantic priors to produce generic object candidates in RGB-D images. We use a deep learning method for semantic segmentation to classify colour and depth superpixels into meaningful categories. Separately for each category, we use saliency to estimate the location and scale of objects, and superpixels to find their precise boundaries. Finally, object candidates of all categories are combined and ranked. We evaluate our approach on the NYU Depth V2 dataset and show that we outperform other state-of-the-art object discovery methods in terms of recall.
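    A hedged sketch of the per-category candidate generation described above; the superpixel records, category names, and ranking criterion are placeholders, and the real system uses a deep segmentation network rather than pre-labelled inputs:

        from collections import defaultdict

        def rank_object_candidates(superpixels):
            # superpixels: dicts with "category" and "saliency" keys (placeholders)
            per_category = defaultdict(list)
            for sp in superpixels:
                if sp["category"] == "background":  # semantic prior removes false positives
                    continue
                per_category[sp["category"]].append(sp)

            candidates = []
            for category, sps in per_category.items():
                # within each category, salient superpixels seed object candidates
                sps.sort(key=lambda s: s["saliency"], reverse=True)
                candidates.extend((category, s) for s in sps)

            # finally, candidates of all categories are combined and ranked
            return sorted(candidates, key=lambda c: c[1]["saliency"], reverse=True)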

    Small, but important: Traffic light proposals for detecting small traffic lights and beyond

    Traffic light detection is a challenging problem in the context of self-driving cars and driver assistance systems. While most existing systems produce good results on large traffic lights, detecting small and tiny ones is often overlooked. A key problem here is the inherent downsampling in CNNs, which leads to low-resolution features for detection. To mitigate this problem, we propose a new traffic light detection system comprising a novel traffic light proposal generator that utilizes findings from general object proposal generation, fine-grained multi-scale features, and attention for efficient processing. Moreover, we design a new detection head for classifying and refining our proposals. We evaluate our system on three challenging, publicly available datasets and compare it against six methods. The results show substantial improvements of at least 12.6% on small and tiny traffic lights, as well as strong results across all sizes of traffic lights.
    Comment: Accepted at ICVS 202
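    To illustrate why downsampling hurts small objects and one common remedy, here is an FPN-style fusion of fine-grained early features with deep, semantically strong features; this shows the general multi-scale idea only and is not the authors' proposal generator:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MultiScaleFusion(nn.Module):
            def __init__(self, c_early=64, c_deep=256, c_out=128):
                super().__init__()
                self.lat_early = nn.Conv2d(c_early, c_out, kernel_size=1)
                self.lat_deep = nn.Conv2d(c_deep, c_out, kernel_size=1)

            def forward(self, early, deep):
                # At stride 32, a 10-px traffic light is smaller than one feature
                # cell, so deep features are upsampled back to the early map's size.
                deep_up = F.interpolate(self.lat_deep(deep),
                                        size=early.shape[-2:], mode="nearest")
                return self.lat_early(early) + deep_up

        fusion = MultiScaleFusion()
        early = torch.randn(1, 64, 128, 256)   # stride-4, fine-grained features
        deep = torch.randn(1, 256, 16, 32)     # stride-32, semantic features
        out = fusion(early, deep)              # high-res, semantically enriched map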

    Audio-Visual Speech Enhancement with Score-Based Generative Models

    This paper introduces an audio-visual speech enhancement system that leverages score-based generative models, also known as diffusion models, conditioned on visual information. In particular, we exploit audio-visual embeddings obtained from a self-supervised learning model that has been fine-tuned on lipreading. The layer-wise features of its transformer-based encoder are aggregated, time-aligned, and incorporated into the noise conditional score network. Experimental evaluations show that the proposed audio-visual speech enhancement system yields improved speech quality and reduces generative artifacts such as phonetic confusions with respect to the audio-only equivalent. The latter is supported by the word error rate of a downstream automatic speech recognition model, which decreases noticeably, especially at low input signal-to-noise ratios.
    Comment: Submitted to ITG Conference on Speech Communication
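    A minimal sketch of the conditioning path described above: layer-wise encoder features are aggregated with learned weights and time-aligned (here by linear interpolation) to the score network's frame rate. Names and the exact aggregation scheme are assumptions, not the paper's code:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class LayerwiseAggregator(nn.Module):
            def __init__(self, n_layers):
                super().__init__()
                self.layer_logits = nn.Parameter(torch.zeros(n_layers))

            def forward(self, layer_feats, target_len):
                # layer_feats: (n_layers, batch, time, dim) from the AV encoder
                w = torch.softmax(self.layer_logits, dim=0).view(-1, 1, 1, 1)
                agg = (w * layer_feats).sum(dim=0)          # (batch, time, dim)
                # align the AV frame rate to the score network's audio frames
                agg = F.interpolate(agg.transpose(1, 2), size=target_len,
                                    mode="linear", align_corners=False)
                return agg.transpose(1, 2)                  # (batch, target_len, dim)

        agg = LayerwiseAggregator(n_layers=12)
        feats = torch.randn(12, 2, 50, 768)   # e.g. 12 transformer layers of features
        cond = agg(feats, target_len=200)     # matched to 200 audio frames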