5,337 research outputs found

    Mirroring or misting: On the role of product architecture, product complexity, and the rate of product component change

    This paper contributes to the literature on the within-firm and across-firm mirroring hypothesis – the assumed architectural mapping between firms’ strategic choices of product architecture and firm architecture, and between firms’ architectural choices and the industry structures that emerge. Empirical evidence is both limited and mixed, and there is evidently a need for a more nuanced theory that addresses not only whether the mirroring hypothesis holds, but also under what product-architecture and component-level conditions it may or may not hold. We invoke an industrial economics perspective to develop a stylised product architecture typology and hypothesise how the combined effects of product architecture type, product complexity, and the rate of product component change may be associated with phases of mirroring or misting. Our framework helps to reconcile much of the existing mixed evidence and provides a foundation for further empirical research.

    Investigating the IT Silo problem: From Strict to Adaptive mirroring between IT Architecture and Organisational Health Services

    A crucial problem reducing efficient information flow within healthcare is the presence of siloed IT architectures. Siloed IT architectures cause disruptive and disconnected information flow within and between health institutions, and complicate the delivery of high-quality health services to practitioners and citizens. In this paper, we analyze this challenge through a mirroring lens. Our research question is: how can we establish a supportive IT architecture that reduces the IT silo problem? Our empirical evidence comes from a case in Norway, where we analyzed a transformation initiative at the national, regional, and local levels. Our investigation into the IT silo problem contributes to the literature on information flow and IT architecture within healthcare in two ways. First, we find that strict mirroring, which leads to sub-optimization and silofication, is a major cause of the presence of IT silos. Second, we demonstrate how adaptive mirroring – a modular strategy for combining global and local requirements in IT architecture – improves the changeability and manageability of IT architectures.

    High throughput spatial convolution filters on FPGAs

    Digital signal processing (DSP) on field-programmable gate arrays (FPGAs) has long been appealing because of the inherent parallelism in these computations, which can be readily exploited to accelerate such algorithms. FPGAs have evolved significantly to further enhance the mapping of these algorithms, including additional hard blocks such as the DSP blocks found in modern FPGAs. Although these DSP blocks can offer more efficient mapping of DSP computations, they are primarily designed for 1-D filter structures. We present a study of spatial convolution filter implementations on FPGAs, optimized around the structure of the DSP blocks to offer high throughput while maintaining the coefficient flexibility that other published architectures usually sacrifice. We show that it is possible to implement large filters for large, 4K-resolution image frames at frame rates of 30–60 FPS while maintaining functional flexibility.
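
    A small software model may help make the 1-D mapping concrete. The sketch below is a minimal NumPy illustration (our own, not the paper's hardware design; the function name and shapes are assumptions): it computes a KxK "valid" 2-D convolution as K accumulated 1-D row convolutions, the kind of decomposition that maps naturally onto chains of 1-D DSP blocks.

```python
import numpy as np

def conv2d_via_1d_rows(image, kernel):
    """2-D 'valid' convolution built from 1-D row convolutions.

    Each kernel row acts as a 1-D FIR filter; on an FPGA, each such
    filter could occupy one cascade of DSP blocks, with the K partial
    results accumulated to form the 2-D output.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(kh):
        # Flip the row index vertically so the result matches true
        # 2-D convolution (kernel flipped in both dimensions).
        src = kh - 1 - r
        for y in range(out.shape[0]):
            out[y] += np.convolve(image[y + src], kernel[r], mode="valid")
    return out

# Hypothetical usage: a 5x5 filter over one HD frame.
frame = np.random.default_rng(0).standard_normal((1080, 1920))
taps = np.random.default_rng(1).standard_normal((5, 5))
result = conv2d_via_1d_rows(frame, taps)  # shape (1076, 1916)
```

    Note that the kernel taps remain run-time data rather than baked-in constants, which is the kind of coefficient flexibility the paper emphasises; only the accumulation structure is fixed.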

    Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks

    It is desirable to train convolutional networks (CNNs) to run more efficiently during inference. In many cases, however, the computational budget available for inference cannot be known during training, or the budget depends on changing real-time resource availability. It is therefore inadequate to train only inference-efficient CNNs whose inference costs are fixed and cannot adapt to varied inference budgets. We propose a novel approach for cost-adjustable inference in CNNs: Stochastic Downsampling Point (SDPoint). During training, SDPoint applies feature-map downsampling at a random point in the layer hierarchy, with a random downsampling ratio. The resulting stochastic downsampling configurations, known as SDPoint instances (of the same model), have different computational costs from one another while being trained to minimize the same prediction loss. Sharing network parameters across the different instances provides a significant regularization boost. During inference, one may handpick the SDPoint instance that best fits the inference budget. The effectiveness of SDPoint, as both a cost-adjustable inference approach and a regularizer, is validated through extensive experiments on image classification.
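
    The training-time mechanism described above can be sketched compactly. The following Python/PyTorch code is a minimal illustration reconstructed from the abstract alone: the block sizes, the candidate downsampling points, and the ratio set (0.5, 0.75) are our assumptions, not the authors' implementation.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDPointCNN(nn.Module):
    """Minimal SDPoint-style CNN sketch (assumed architecture).

    At each training step, one point in the layer hierarchy and one
    downsampling ratio are drawn at random, and the feature map is
    bilinearly downsampled there. Because interpolation has no
    parameters, every (point, ratio) instance shares the same weights.
    """

    def __init__(self, num_classes=10, ratios=(0.5, 0.75)):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                          nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
            for c_in, c_out in [(3, 32), (32, 64), (64, 128)]
        ])
        self.ratios = ratios
        self.head = nn.Linear(128, num_classes)

    def forward(self, x, sdpoint=None):
        # sdpoint = (block_index, ratio): sampled randomly in training,
        # hand-picked at inference to fit the available compute budget.
        if sdpoint is None and self.training:
            sdpoint = (random.randrange(1, len(self.blocks)),  # feature maps only
                       random.choice(self.ratios))
        for i, block in enumerate(self.blocks):
            if sdpoint is not None and i == sdpoint[0]:
                x = F.interpolate(x, scale_factor=sdpoint[1],
                                  mode="bilinear", align_corners=False)
            x = block(x)
        # Global average pooling keeps the classifier head agnostic to
        # spatial size, so every downsampling configuration is valid.
        return self.head(x.mean(dim=(2, 3)))

model = SDPointCNN()
model.eval()
cheap = model(torch.randn(1, 3, 32, 32), sdpoint=(1, 0.5))  # low-budget instance
full = model(torch.randn(1, 3, 32, 32))                     # full-cost instance
```

    Fixing a single (point, ratio) pair at deployment, as in the last two lines, corresponds to the abstract's step of handpicking the instance that fits the budget.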