
    Managing design variety, process variety and engineering change: a case study of two capital good firms

    Many capital good firms deliver products that are not strictly one-off, but instead share a certain degree of similarity with other deliveries. In delivering the product, they aim to balance stability and variety in their product designs and processes. The issue of engineering change plays an important role in how they manage to do so. Our aim is to gain more understanding of how capital good firms manage engineering change, design variety and process variety, and of the role of the product delivery strategies they use. Product delivery strategies are defined by the type of engineering work that is done independent of an order and by the specification freedom the customer has in the remaining part of the design. Based on within-case and cross-case analyses of two capital good firms, several mechanisms for managing engineering change, design variety and process variety are distilled. It was found that there exist different ways of (1) managing generic design information, (2) isolating large engineering changes, (3) managing process variety, and (4) designing and executing engineering change processes. Together with different product delivery strategies, these mechanisms can be placed within an archetype framework of engineering change management. On one side of the spectrum, capital good firms operate according to open product delivery strategies, have some practices in place to investigate design reuse potential, isolate discontinuous engineering changes in the first deliveries of the product, employ ‘probe and learn’ process management principles that allow evolving insights to be accurately executed, and have informal engineering change processes. On the other side of the spectrum, capital good firms operate according to a closed product delivery strategy, focus on the prevention of engineering changes based on design standards, need no isolation mechanisms for discontinuous engineering changes, have formal process management practices in place, and make use of closed and formal engineering change procedures. The framework should help managers to (1) analyze existing configurations of product delivery strategies, product and process designs, and engineering change management, and (2) reconfigure any of these elements when a ‘misfit’ is derived from the framework. Since this is one of the few in-depth empirical studies of engineering change management in the capital good sector, our work adds to the understanding of the various ways in which engineering change can be dealt with.

    On-board processing concepts for future satellite communications systems

    The initial definition of on-board processing for an advanced satellite communications system to service domestic markets in the 1990s is discussed. An exemplar system with both RF on-board switching and demodulation/remodulation baseband processing is used to identify important issues related to system implementation, cost, and technology development. Analyses of spectrum-efficient modulation, coding, and system control techniques are summarized. Implementations for an RF switch and a baseband processor are described. Among the major conclusions listed is the need for high-gain satellites capable of handling tens of simultaneous beams for efficient reuse of the 2.5 GHz bandwidth allocated in the 30/20 GHz band. Several scanning beams are recommended in addition to the fixed beams. Low-power solid-state 20 GHz GaAs FET power amplifiers in the 5 W range and a general-purpose digital baseband processor with gigahertz logic speeds and megabits of memory are also recommended.

    SPL needs an automatic holistic model for software reasoning with feature models

    The number of features and their relations in a Software Product Line (SPL) may lead to SPLs with a large number of potential products, which can be difficult to manage. This number of potential products increases widely if, in addition to functional features, extra-functional features are taken into account. There are several questions an SPL engineer would like to ask of an SPL model, such as: is it a valid model? How many potential products does the SPL have? Is there any product fulfilling the customer's needs? And so forth. These types of questions are error-prone to answer without automatic support. The work reported in this position paper glimpses some misconceptions of previous related proposals: we uphold the need for a holistic product line model where no distinction is made between functional and extra-functional features, and we propose a model based on a formalism strong enough to support both types of features: constraint programming. Ministerio de Ciencia y Tecnología TIC2003-02737-C02-0
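    As a minimal illustration of the kind of reasoning the paper argues for, the Python sketch below enumerates the products of an invented toy feature model in which a cost bound (an extra-functional constraint) is treated uniformly with the functional constraints. The features, costs and constraints are assumptions made up for this sketch, not from the paper, and brute-force enumeration stands in for the constraint solver a real tool would use.

        from itertools import product

        # Toy feature model (invented for illustration): "gui" is mandatory,
        # "ssl" requires "net", and a cost cap is checked alongside the
        # functional constraints, in the spirit of the holistic model.
        FEATURES = ["gui", "net", "ssl"]
        COST = {"gui": 10, "net": 5, "ssl": 3}
        MAX_COST = 20

        def valid(cfg):
            f = dict(zip(FEATURES, cfg))
            return (f["gui"]                          # mandatory root feature
                    and (not f["ssl"] or f["net"])    # ssl requires net
                    and sum(COST[k] for k in f if f[k]) <= MAX_COST)

        products = [c for c in product([False, True], repeat=len(FEATURES))
                    if valid(c)]
        print("valid (non-void) model:", bool(products))
        print("number of potential products:", len(products))
        print("a product with ssl exists:",
              any(c[FEATURES.index("ssl")] for c in products))

    The three print statements answer the three example questions from the abstract; a constraint solver answers them without enumerating all 2^n configurations.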

    An Adaptive Design Methodology for Reduction of Product Development Risk

    Embedded systems' interaction with the environment inherently complicates the understanding of requirements and their correct implementation, and product uncertainty is highest during the early stages of development. Design verification is an essential step in the development of any system, especially embedded systems. This paper introduces a novel adaptive design methodology which incorporates step-wise prototyping and verification. With each adaptive step, the product-realization level is enhanced while the level of product uncertainty decreases, thereby reducing the overall costs. The backbone of this framework is the development of a Domain Specific Operational (DOP) Model and the associated Verification Instrumentation for Test and Evaluation, developed on the basis of the DOP model. Together they generate functionally valid test sequences for carrying out prototype evaluation. The application of the method is sketched with the help of a case study, a 'Multimode Detection Subsystem'. Design methodologies can be compared by defining and computing a generic performance criterion such as the Average design-cycle Risk. For the case study, computing the Average design-cycle Risk shows that the adaptive method reduces product development risk for a small increase in total design-cycle time.
    Comment: 21 pages, 9 figures
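    The abstract does not give the formula for Average design-cycle Risk, so the Python sketch below only illustrates the underlying idea under an assumed definition: per-cycle risk as remaining uncertainty times the cost at stake in that cycle, averaged over cycles. All numbers and the weighting are hypothetical.

        # Assumed definition for illustration only, not the paper's formula:
        # per-cycle risk = remaining uncertainty * cost committed that cycle.
        def average_design_cycle_risk(uncertainty, cost_at_stake):
            risks = [u * c for u, c in zip(uncertainty, cost_at_stake)]
            return sum(risks) / len(risks)

        cost = [10, 20, 40, 80]  # hypothetical cost committed per cycle
        # Adaptive flow: each verified prototype cuts uncertainty early.
        adaptive = average_design_cycle_risk([0.9, 0.5, 0.2, 0.1], cost)
        # Conventional flow: uncertainty stays high until late verification.
        conventional = average_design_cycle_risk([0.9, 0.8, 0.7, 0.6], cost)
        print(f"adaptive risk {adaptive:.2f} vs conventional {conventional:.2f}")

    Because the expensive late cycles carry little residual uncertainty in the adaptive flow, its average risk comes out lower, which is the qualitative claim the paper makes.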

    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check (LDPC) decoders is approached at both the algorithmic and the architectural level. LDPC codes are a promising coding scheme for future communication standards due to their outstanding error correction performance. This work proposes a methodology for analyzing the effects of finite-precision arithmetic on error correction performance and hardware complexity; the methodology is employed throughout the co-design of the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^-8 and complexity savings of up to 59% with respect to other recent works. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed requiring as little as 20% of the memory of traditional two-phase flooding decoding; additionally, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings in the order of 40% in both area and power consumption, while the implementation loss is smaller than 0.05 dB.
    Most modern communication standards employ Orthogonal Frequency Division Multiplexing (OFDM) as part of their physical layer. The core of OFDM is the Fast Fourier Transform (FFT) and its inverse, in charge of symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware FFT implementation, while the ubiquity of the FFT suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited for the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies encompassing all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and for commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy).
    The final part of this dissertation focuses on the Network-on-Chip (NoC) design paradigm, whose goal is building scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link enables looser skew constraints in clock tree synthesis, frequency speed-up, power consumption reduction and faster back-end turnarounds. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an Object-Oriented framework, integrated within a commercial tool for IP packaging (Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
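    The accuracy-driven configuration idea described above can be sketched in a few lines of Python: quantize the datapath to a candidate bit-width, measure the error against a double-precision reference, and grow the width until the user's accuracy budget is met. This is a deliberately simplified illustration assuming a plain fixed-point model and quantization of the inputs only; the actual engine profiles every internal operand and covers all three arithmetic models.

        import numpy as np

        def quantize(x, bits):
            """Round to a signed fixed-point grid with `bits` fractional bits."""
            step = 2.0 ** -bits
            return np.round(x / step) * step

        def error_db(signal, bits):
            """Error-to-signal ratio (dB) of an FFT fed with quantized inputs."""
            ref = np.fft.fft(signal)
            err = np.fft.fft(quantize(signal, bits)) - ref
            return 10 * np.log10(np.mean(np.abs(err) ** 2) /
                                 np.mean(np.abs(ref) ** 2))

        rng = np.random.default_rng(0)
        x = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
        budget_db = -60.0  # user-supplied accuracy budget (assumed value)
        for bits in range(4, 24):  # closed-loop search for the minimum width
            if error_db(x, bits) <= budget_db:
                print(f"minimum fractional bits for {budget_db} dB: {bits}")
                break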

    DECIMAL: A requirements engineering tool for product families

    Today, many software organizations are utilizing product families as a way of improving productivity, improving quality and reducing development time. When a new member is added to a product family, there must be a way to verify whether the new member's specific requirements are met within the reuse constraints of its product family. The contribution of this paper is to demonstrate such a verification process by describing a requirements engineering tool called DECIMAL. DECIMAL is an interactive, automated, GUI-driven verification tool that automatically checks for completeness (checking to see if all commonalities are satisfied) and consistency (checking to see if dependencies between variabilities are satisfied) of the new member's requirements with the product family's requirements. DECIMAL also checks that variabilities are within the range and data type specified for the product family. The approach is to perform the verification using a database as the underlying analysis engine. A pilot study of a virtual reality device driver product family is also described which investigates the feasibility of this approach by evaluating the tool.
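    A toy Python version of the completeness and range/data-type checks conveys the flavour of this verification, with plain dictionaries standing in for the database engine DECIMAL actually uses. The family specification and member values below are invented, and the consistency checks between dependent variabilities are omitted for brevity.

        # Invented family specification; a real tool stores this in a database.
        COMMONALITIES = {"reports_errors": True}       # must hold for all members
        VARIABILITIES = {
            "resolution_hz": {"type": int, "range": (60, 240)},
            "buffer_kb":     {"type": int, "range": (8, 64)},
        }

        def verify_member(member):
            problems = []
            for name, required in COMMONALITIES.items():   # completeness
                if member.get(name) != required:
                    problems.append(f"commonality '{name}' not satisfied")
            for name, spec in VARIABILITIES.items():       # range and data type
                value = member.get(name)
                if not isinstance(value, spec["type"]):
                    problems.append(f"'{name}' has wrong data type")
                elif not spec["range"][0] <= value <= spec["range"][1]:
                    problems.append(f"'{name}' out of range {spec['range']}")
            return problems

        print(verify_member({"reports_errors": True,
                             "resolution_hz": 120, "buffer_kb": 256}))
        # reports that 'buffer_kb' is out of range (8, 64)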

    Water productivity: methodologies and management

    Keywords: Irrigation programs; Irrigation efficiency; Productivity; Crop production; Water stress; Water table; Soil moisture; Irrigation scheduling; Case studies

    Water productivity: methodologies and management

    Keywords: Irrigated farming; Irrigation programs; Water conservation; River basin management; Irrigation efficiency; Productivity; Crop production; Case studies

    F as in Fat: How Obesity Policies Are Failing in America, 2005

    Examines national and state obesity rates and government policies. Challenges the research community to focus on the major research questions that inform policy decisions, and policymakers to pursue actions to combat the obesity crisis.