
    New product entry success: an examination of variable, dimensional and model evolution

    This thesis examines the evolution of antecedents, dimensions and initial screening models which discriminate between new product success and failure. It advances on previous empirical new product success/failure comparative studies by developing a discrete simulation procedure in which participating new product managers supply judgements retrospectively on new product strategies and orientations for two distinct time periods in the new product program: (1) the initial screening stage and (2) a period approximately 1 year after market entry. Unique linear regression functions are derived for each event and offer different, but complementary, temporally appropriate sets of determining factors. Model predictive accuracy rises over time, and conditional process moderators alter success factors at both time periods. Whilst the work validates and synthesises much from the new product development literature, it exposes probable measurement timing error when single retrospective models assess success dimension rank at the initial screen. Six of seven hypotheses are accepted and demonstrate that: 1. Many antecedents of success and measures of objective attainment are perceived by NPD (new product development) managers to differ significantly over time. 2. Reactive strategy, NPD multigenerational history and a superior product are the most important dimensions of success through one year post launch. 3. Current linear screening models constructed using retrospective methods produce average prescriptive dimensions which exhibit measurement timing error when used at the initial screen. 4. Success dimensions evolve from somewhat deterministic to more stochastic over time, with model forecasting accuracy rising as launch approaches owing to better data availability. 5. Product market PiLC (the life expectancy of an introduction before modification is necessary, calculated in years and months), its order of entry and its level of innovation alter aggregate success model accuracy and dimension rank. 6. Proper initial dimensional alignment and intra-process realignment based on changing environments are critical to a successful project through one year post launch. The work cautions practitioners not to wait for better models to be developed but immediately to: (1) benchmark reasons for their current product market success, failure and kill historical "batting average"; (2) enhance and/or replace contributing/offending processes and systems based on these history lessons; (3) choose or reject aggregate or conditional success/failure models based on team forecasting ability; (4) concentrate on the selected model's time-specific dimensions of success; and (5) provide/reserve adequate resources to adapt strategically over time to both internal and external antecedent changes in the NPD environment. Finally, it recommends new research into temporal, conditional and strategic tradeoffs in internal and external antecedents/dimensions of success. Best results should come from using both linear and curvilinear methods to validate more complex yet statistically elegant NPD simulations.
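    As a rough illustration of the two-period modelling approach described above (not the thesis's actual variables, data or coefficients), the sketch below fits a separate linear regression for each judgement point and compares their fit; every name and number is a hypothetical placeholder.

        # Hypothetical sketch: one regression per judgement period, echoing the
        # two-event design described above. All data here is random placeholder data.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n_projects, n_factors = 120, 8
        X_screen = rng.normal(size=(n_projects, n_factors))  # judgements at the initial screen
        X_launch = rng.normal(size=(n_projects, n_factors))  # judgements ~1 year after entry
        success = rng.normal(size=n_projects)                # degree-of-success measure

        model_screen = LinearRegression().fit(X_screen, success)
        model_launch = LinearRegression().fit(X_launch, success)

        # The thesis finds predictive accuracy rising as launch approaches; here each
        # model's R^2 on its own period's data stands in for that comparison.
        print(model_screen.score(X_screen, success))
        print(model_launch.score(X_launch, success))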

    Improvement of Internet of Things maturity in a medium-sized technology company

    The purpose of this research is to find out how Internet of Things maturity models can be used in organizations to determine the current and target levels of Internet of Things maturity. By defining the current level and the target level, it is possible to identify the development steps the organization should take to reach the desired level. In addition, the research aims to identify the benefits of the Internet of Things for the business operations of organizations. With the help of such models, the maturity of an organization can be assessed across several different subject areas of the Internet of Things. In this case study, an Internet of Things maturity model is customized for the target organization. The model evaluates the maturity of the organization in the dimensions of governance, technology and connectivity, data-analytics-based decision making, people and processes. These dimensions are divided into smaller sub-dimensions, the maturity of which is assessed separately. The research also investigates the current and target levels of the target organization's Internet of Things maturity through interviews conducted within the organization. Based on these definitions of the current level and the target level, a road map is created for the target organization. The target organization's current maturity is between levels 2–3 in each dimension, and levels 3–4 were initially defined as the target. The research also revealed that, to develop the target organization's maturity, improving the governance dimension is critical to raising the maturity of the other dimensions. A 6-phase plan was drawn up for the target organization to reach the target level.
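    The gap analysis the study performs can be pictured with a small sketch. The dimension names follow the abstract; the numeric levels, their grouping and the assumed 1-5 scale are illustrative placeholders, not the study's reported results.

        # Illustrative gap analysis; dimension names are taken from the abstract,
        # the (current, target) levels are placeholders on an assumed 1-5 scale.
        maturity = {
            "governance": (2, 4),
            "technology and connectivity": (3, 4),
            "data-analytics based decision making": (2, 3),
            "people and processes": (3, 4),
        }

        # Order dimensions by remaining gap; the study identifies governance as the
        # enabler whose improvement raises the maturity of the other dimensions.
        for dim, (current, target) in sorted(maturity.items(),
                                             key=lambda kv: kv[1][0] - kv[1][1]):
            print(f"{dim}: level {current} -> {target} (gap {target - current})")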

    Haptic manipulation of virtual linkages with closed loops

    A Holistic Usability Framework For Distributed Simulation Systems

    This dissertation develops a holistic usability framework for distributed simulation systems (DSSs). The framework is developed in light of relevant research in human-computer interaction, computer science, technical writing, engineering, management, and psychology. The methodology consists of three steps: (1) framework development, (2) surveys of users to validate and refine the framework and to determine attribute weights, and (3) application of the framework to two real-world systems. The concept of a holistic usability framework for DSSs arose during a project to improve the usability of the Virtual Test Bed, a prototypical DSS, and the framework is partly a result of that project. In addition, DSSs at Ames Research Center were studied for additional insights. The framework has six dimensions: end user needs, end user interface(s), programming, installation, training, and documentation. The categories of participants in this study include managers, researchers, programmers, end users, trainers, and trainees. The first survey was used to obtain qualitative and quantitative data to validate and refine the framework; attributes that failed the validation test were dropped from the framework. A second survey was used to obtain attribute weights. The refined framework was then used to evaluate two existing DSSs and measure their holistic usability. Meeting the needs of the variety of users who interact with the system during design, development, and use is important to launching a successful system. Adequate consideration of system usability along the several dimensions in the framework will not only ensure system success but also increase productivity, lower life cycle costs, and result in a more pleasurable working experience for the people who work with the system.
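    A weighted scoring of this kind can be sketched as follows. The six dimensions are those named in the abstract, but the weights (which the second survey determined) and the per-dimension ratings are invented placeholders, not the dissertation's figures.

        # Illustrative holistic usability score: survey-derived weights combined with
        # per-dimension ratings for one evaluated DSS (all numbers are placeholders).
        weights = {
            "end user needs": 0.25, "end user interface(s)": 0.25, "programming": 0.15,
            "installation": 0.10, "training": 0.15, "documentation": 0.10,
        }
        scores = {   # hypothetical ratings of one DSS on a 0-10 scale
            "end user needs": 7, "end user interface(s)": 6, "programming": 5,
            "installation": 8, "training": 6, "documentation": 4,
        }
        holistic_usability = sum(weights[d] * scores[d] for d in weights)
        print(f"holistic usability: {holistic_usability:.2f} / 10")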

    An Analog VLSI Deep Machine Learning Implementation

    Machine learning systems provide automated data processing and see a wide range of applications. Direct processing of raw high-dimensional data such as images and video by machine learning systems is impractical, both due to prohibitive power consumption and due to the “curse of dimensionality,” which makes learning tasks exponentially more difficult as dimension increases. Deep machine learning (DML) mimics the hierarchical presentation of information in the human brain to achieve robust automated feature extraction, reducing the dimension of such data. However, the computational complexity of DML systems limits large-scale implementations in standard digital computers. Custom analog signal processing (ASP) can yield much higher energy efficiency than digital signal processing (DSP), presenting a means of overcoming these limitations. The purpose of this work is to develop an analog implementation of a DML system. First, an analog memory is proposed as an essential component of the learning system. It uses the charge trapped on a floating gate to store an analog value in a non-volatile way. The memory is compatible with a standard digital CMOS process and allows random-accessible bi-directional updates without the need for an on-chip charge pump or high-voltage switch. Second, architecture and circuits are developed to realize an online k-means clustering algorithm in analog signal processing. It achieves automatic recognition of underlying data patterns and online extraction of data statistical parameters. This unsupervised learning system constitutes the computation node in the deep machine learning hierarchy. Third, a 3-layer, 7-node analog deep machine learning engine is designed featuring online unsupervised trainability and non-volatile floating-gate analog storage. It utilizes a massively parallel reconfigurable current-mode analog architecture to realize efficient computation, and algorithm-level feedback is leveraged to provide robustness to circuit imperfections in analog signal processing. At a processing speed of 8300 input vectors per second, it achieves a peak energy efficiency of 1×10^12 operations per second per watt. In addition, an ultra-low-power tunable bump circuit is presented to provide similarity measures in analog signal processing. It incorporates a novel wide-input-range tunable pseudo-differential transconductor. The circuit demonstrates tunability of bump center, width and height with a power consumption significantly lower than previous works.
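    The behaviour of the clustering computation node can be pictured with a small software sketch of online (streaming) k-means. The update rule below is the textbook incremental-mean version; the cluster count, input dimension and data are illustrative and are not taken from the chip described above.

        # Software sketch of online k-means: assign each incoming vector to its nearest
        # centroid and nudge that centroid toward it (incremental mean update).
        import numpy as np

        def online_kmeans(stream, k=4, dim=8, seed=0):
            rng = np.random.default_rng(seed)
            centroids = rng.normal(size=(k, dim))
            counts = np.zeros(k)
            for x in stream:
                j = np.argmin(np.linalg.norm(centroids - x, axis=1))  # nearest centroid
                counts[j] += 1
                centroids[j] += (x - centroids[j]) / counts[j]        # running mean of cluster
            return centroids

        # Placeholder input stream of 1000 random 8-dimensional vectors.
        centroids = online_kmeans(np.random.default_rng(1).normal(size=(1000, 8)))
        print(centroids)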

    Forecasting manufacturing variation using historical process capability data : applications for random assembly, selective assembly, and serial processing

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2003. Includes bibliographical references (p. 337-340).
    In today's competitive marketplace, companies are under increased pressure to produce products that have a low cost and high quality. Product cost and quality are influenced by many factors. One factor that strongly influences both is manufacturing variation. Manufacturing variation is the range of values that a product's dimensions assume. Variation exists because no production process is perfect. Often, controlling this variation is attempted during production, when substantial effort and resources, e.g., time, money, and manpower, are required. The effort and resources could be reduced if the manufacturing variation could be forecast and managed during the design of the product. Traditionally, several barriers have made forecasting and managing variation during the design process very challenging. The first barrier is the effort required of a design engineer to know the company's process capability, which makes it difficult to specify tolerances that can be manufactured reliably. The second barrier is the difficulty associated with understanding how a single manufacturing process or series of processes affects the variation of a product. This barrier impedes the analysis of tradeoffs among processes, the quantifying of the impact incoming stock variation has on final product variation, and the identification of sources of variation within the production system. The third barrier is understanding how selective assembly influences the final variation of a product, which results in selective assembly not being utilized efficiently. In this thesis, tools and methods to overcome the aforementioned barriers are presented. A process capability database is developed to connect engineers to manufacturing data to assist with detailing a design. A theory is introduced that models a production process with two mathematical functions, which are constructed using process capability data. These two functions are used to build closed-form equations that calculate the mean and standard deviation of parts exiting a process. The equations are used to analyze tradeoffs among processes, to compute the impact incoming variation has on output, and to identify sources of variation. Finally, closed-form equations are created that compute the variation of a product resulting from a selective assembly operation. Using these tools, forecasting and managing manufacturing variation is possible for a wide variety of products and production systems.
    by Daniel C. Kern. Ph.D.
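    The abstract does not reproduce the closed-form equations themselves. As a hedged illustration of the kind of relationship they capture, the sketch below propagates the mean and standard deviation through one serial processing step, under the standard assumption that the process's contribution is independent of the incoming dimension; the function name and numbers are illustrative.

        # Illustrative only: mean and standard deviation of parts exiting one serial
        # processing step whose contribution is independent of the incoming stock.
        import math

        def serial_process_output(mu_in, sigma_in, mu_proc, sigma_proc):
            """Mean and standard deviation of parts exiting a serial processing step."""
            mu_out = mu_in + mu_proc                            # means add
            sigma_out = math.sqrt(sigma_in**2 + sigma_proc**2)  # independent variances add
            return mu_out, sigma_out

        # Hypothetical numbers: 10.00 mm stock with sigma 0.05 mm; the process adds
        # 2.00 mm with sigma 0.02 mm.
        print(serial_process_output(10.00, 0.05, 2.00, 0.02))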

    Thermal parameter optimisation for accurate finite element based simulation of machine tools

    The need for high-speed/high-precision machine tools is swiftly increasing in response to the growth of production technology that necessitates high-precision parts and high productivity. Thermally induced errors in machine tools can have a much greater influence on the dimensional tolerances of the final products than geometric and cutting force errors. Therefore, maintaining the high accuracy of a machine tool requires an accurate method of thermal error control or compensation using a detailed model. The thermal errors of machine tools are induced by the propagation of heat through the structure of the machine due to excitation of internal and external heat sources such as belt drives, motors and bearings. There has been significant research effort to model thermal errors of machine tools in recent decades. The utilised techniques have proved their capabilities with excellent thermal prediction and compensation results, but they often involve significant effort for effective implementation, with constraints on complexity, robustness, and cost. One of the most significant drawbacks of modelling machine behaviour using Finite Element Analysis (FEA) is the difficulty of accurately obtaining the characteristics of heat transfer, such as the heat power of machine tool heat sources and the various boundary conditions. The aim of this research is to provide reliable techniques for obtaining the heat transfer coefficients of machine tools in order to improve the accuracy of FEA simulations. FEA is used to simulate the thermal characteristics of the spindle system of a small Vertical Machining Centre (VMC) using SolidWorks Simulation software. Most FEA models of machine tools employ a general prediction technique, based on formulae provided by OEMs, to identify many of the boundary conditions associated with simulating thermal error in machine tools. This formulae method was used to identify the heat transfer coefficients of a small VMC feed drive system, and employing these values allowed FEA to be used to simulate the thermal characteristics of the feed drive model. In addition, an alternative, efficient methodology based on energy balance calculations and thermal imaging was used to obtain the heat transfer coefficients of the same feed drive system. The parameters obtained were then applied to the FEA model of the system and validated against experimental results. The residual thermal error was reduced to just 20 % when the energy balance method was employed, compared with a residual of 30 % when the formulae method was employed. The energy balance method was also used to obtain the heat transfer coefficients of the headslide of a small VMC based on thermal imaging data, and an FEA model of the headslide was then created and simulated. There was a significant reduction in the thermal error, but significant uncertainties in the method were identified, suggesting that further improvements could be made. An additional, novel Two Dimensional (2D) optimisation technique based on thermal imaging data was created and used to calibrate the calculated heat transfer coefficients of the headslide of a small machine tool. In order to optimise the heat power of the various heat sources, a 2D model of the surface temperature of the headslide was created in Matlab software and compared against the experimental data, both spatially across a plane and over time, so as to take into account time-varying heat loads.
    The effectiveness of the technique was proved using FEA models of the machine and comparison with test data from the machine tool. Significant improvement was achieved, with a correlation of 85 % between the simulated thermal characteristics and the experimental data.
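    As a hedged sketch of the kind of energy-balance calculation the abstract refers to, the snippet below estimates a convective heat transfer coefficient from a measured heat power, the wetted surface area and the surface-to-ambient temperature difference (as would be taken from thermal images). The function name and all numbers are illustrative, not values from the thesis.

        # Steady-state energy balance: h = Q / (A * dT), with Q the dissipated heat
        # power, A the convecting surface area and dT the surface-to-ambient rise.
        def convective_htc(heat_power_w, area_m2, surface_temp_c, ambient_temp_c):
            """Convective heat transfer coefficient in W/(m^2*K)."""
            return heat_power_w / (area_m2 * (surface_temp_c - ambient_temp_c))

        # e.g. 150 W dissipated over 0.4 m^2 with a 12 degC rise above ambient
        print(convective_htc(150.0, 0.4, 32.0, 20.0), "W/(m^2*K)")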

    Self-organising an indoor location system using a paintable amorphous computer

    This thesis investigates new methods for self-organising a precisely defined pattern of intertwined number sequences which may be used in the rapid deployment of a passive indoor positioning system's infrastructure. A future hypothetical scenario is used where computing particles are suspended in paint and covered over a ceiling. A spatial pattern is then formed over the covered ceiling. Any small portion of the spatial pattern may be decoded, by a simple camera-equipped device, to provide a unique location to support location-aware pervasive computing applications. Such a pattern is established from the interactions of many thousands of locally connected computing particles that are disseminated randomly and densely over a surface, such as a ceiling. Each particle initially has no knowledge of its location or network topology and shares no synchronous clock or memory with any other particle. The challenge addressed within this thesis is how such a network of computing particles that begin in such an initial state of disarray and ignorance can, without outside intervention or expensive equipment, collaborate to create a relative coordinate system. It shows how the coordinate system can be created to be coherent, even in the face of obstacles, and to closely represent the actual shape of the networked surface itself. The precision errors incurred during the propagation of the coordinate system are identified, and the distributed algorithms used to avoid this error are explained and demonstrated through simulation. A new perimeter detection algorithm is proposed that discovers network edges and other obstacles without the use of any existing location knowledge. A new distributed localisation algorithm is demonstrated to propagate a relative coordinate system throughout the network while remaining free of the error introduced by the network perimeter that is normally seen in non-convex networks. This localisation algorithm operates without prior configuration or calibration, allowing the coordinate system to be deployed without expert manual intervention or on networks that are otherwise inaccessible. The painted ceiling's spatial pattern, when based on the proposed localisation algorithm, is discussed in the context of an indoor positioning system.
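    The thesis's perimeter-aware localisation algorithm is not reproduced in the abstract. As a rough illustration of the basic building block used by many amorphous-computing localisation schemes, the sketch below derives a hop-count gradient (a coarse distance estimate) from a seed particle by purely local message passing over a randomly and densely scattered particle network; the layout, radio range and seed choice are all made-up placeholders.

        # Rough illustration only: hop-count gradient over a random, dense particle
        # network, the local-communication primitive behind many relative coordinate
        # systems in amorphous computing.
        from collections import deque
        import random

        random.seed(0)
        particles = [(random.random(), random.random()) for _ in range(800)]  # random scatter
        radius = 0.08                                                          # local radio range

        def neighbours(i):
            xi, yi = particles[i]
            return [j for j, (xj, yj) in enumerate(particles)
                    if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2]

        def hop_gradient(seed):
            hops = {seed: 0}
            frontier = deque([seed])
            while frontier:                      # breadth-first flood, one hop at a time
                i = frontier.popleft()
                for j in neighbours(i):
                    if j not in hops:
                        hops[j] = hops[i] + 1
                        frontier.append(j)
            return hops

        hops = hop_gradient(seed=0)              # gradients from several seeds yield coordinates
        print(max(hops.values()), "hops across the network from the seed")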