Feature Models as Support for Business Model Implementation of Cyber-Physical Systems
From a business perspective, Cyber-Physical Systems (CPS) can contribute to process innovation, product innovation, or business model innovation. In this paper, the focus is on business model innovation based on CPS, i.e. we take the perspective of enterprises using CPS as a basis for new customer services. In order to create viable CPS solutions, stakeholders from different enterprise functions should be involved, covering both the business perspective and the technical perspective. However, the business-related stakeholders often do not understand the technical possibilities, and the technology-related stakeholders often do not understand the business opportunities. The paper proposes to use feature models as mediation support between business-oriented and technology-oriented stakeholders. Feature models are conventionally used for controlling variability, i.e. as a means for engineers to plan and design features for configuration and implementation. We propose to use them as a way to identify value propositions based on features. The main contributions of the paper are (a) identifying the potential of feature models for aligning business- and technology-related stakeholders, (b) proposing feature model “slices” as support for business model development of CPS, and (c) an industrial case illustrating the feasibility and utility of the approach.
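The idea of a feature model “slice” can be pictured as pruning a feature tree down to the features relevant to one value proposition. The sketch below is a hypothetical illustration, not the paper’s method: the `Feature` class, the CPS feature names, and the slicing rule (keep a feature if it is selected or has a selected descendant) are all assumptions made for this example.

```python
# Hypothetical sketch of feature-model slicing; names and structure are
# illustrative assumptions, not taken from the paper.

class Feature:
    def __init__(self, name, mandatory=True, children=None):
        self.name = name
        self.mandatory = mandatory
        self.children = children or []

    def slice(self, keep):
        """Return a pruned copy containing only features in `keep`,
        plus any ancestors needed to reach them."""
        kept = [c.slice(keep) for c in self.children]
        kept = [c for c in kept if c is not None]
        if self.name in keep or kept:
            copy = Feature(self.name, self.mandatory)
            copy.children = kept
            return copy
        return None

    def names(self):
        result = [self.name]
        for c in self.children:
            result.extend(c.names())
        return result

# A toy CPS feature model for a connected machine
model = Feature("CPS", children=[
    Feature("Sensing", children=[
        Feature("Vibration"),
        Feature("Temperature", mandatory=False)]),
    Feature("Connectivity", children=[
        Feature("MQTT"),
        Feature("OPC-UA", mandatory=False)]),
    Feature("Analytics", mandatory=False, children=[
        Feature("PredictiveMaintenance", mandatory=False)]),
])

# Slice for a "predictive maintenance" value proposition
s = model.slice({"Vibration", "MQTT", "PredictiveMaintenance"})
print(s.names())
```

The slice retains only the features backing the chosen value proposition together with their ancestors, which is the kind of reduced view a business-oriented stakeholder could discuss without seeing the full engineering model.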
Intrinsic Functions for Securing CMOS Computation: Variability, Modeling and Noise Sensitivity
A basic premise behind modern secure computation is the demand for lightweight cryptographic primitives, such as identifier or key generators. From a circuit perspective, the development of cryptographic modules has also been driven by the aggressive scaling of complementary metal-oxide-semiconductor (CMOS) technology. As designs advance into the nanometer regime, one significant characteristic of today's CMOS design is the random nature of process variability, which limits nominal circuit design. With the continuous scaling of CMOS technology, leveraging this physical variability, instead of mitigating it, becomes a promising approach. A well-known product adhering to this double-edged-sword philosophy is the Physically Unclonable Function (PUF), which extracts secret keys from uncontrollable manufacturing variability on integrated circuits (ICs). However, since PUFs take advantage of microscopic process variations, many specialized issues including variability, modeling attacks, and noise sensitivity need to be considered and addressed.
In this dissertation, we present our recent work on PUF-based secure computation from three aspects: variability, modeling, and noise sensitivity, which are deemed the foundations of our study. Moreover, we found that the three factors interact with each other: for example, modeling techniques can be used to improve the unsatisfactory reliability caused by noise sensitivity, quantifying the variability can effectively eliminate the impact of noise, and modeling can help characterize the physical variability precisely.
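The interplay of variability, modeling, and noise sensitivity can be illustrated with the linear-additive delay model commonly used in the PUF-modeling literature for arbiter PUFs: per-stage delay differences come from process variability, the response is the sign of a weighted sum of parity features of the challenge, and measurement noise makes responses near the decision threshold unreliable. The sketch below is an assumption-laden toy simulation, not the dissertation's construction; the stage count, noise level, and Gaussian parameters are made up.

```python
# Toy simulation of a linear-additive arbiter-PUF delay model.
# All parameters (64 stages, unit-variance weights, noise sigma) are
# illustrative assumptions, not values from the dissertation.
import random

random.seed(0)
N = 64  # number of challenge bits / delay stages

# Per-stage delay differences drawn from manufacturing process variability
w = [random.gauss(0, 1) for _ in range(N + 1)]

def parity_features(challenge):
    # Standard parity transform: phi_i is the product of (1 - 2*c_j) for j >= i
    phi = []
    prod = 1
    for c in reversed(challenge):
        prod *= (1 - 2 * c)
        phi.append(prod)
    phi.reverse()
    return phi + [1]

def respond(challenge, noise_sigma=0.0):
    phi = parity_features(challenge)
    delay = sum(wi * pi for wi, pi in zip(w, phi))
    delay += random.gauss(0, noise_sigma)  # evaluation noise -> unreliability
    return 1 if delay > 0 else 0

challenge = [random.randint(0, 1) for _ in range(N)]
clean = respond(challenge)  # noise-free response is deterministic
# With noise, repeated evaluations of the same challenge can flip
noisy = [respond(challenge, noise_sigma=0.5) for _ in range(100)]
flip_rate = sum(r != clean for r in noisy) / 100
print(clean, flip_rate)
```

Because the model is linear in the parity features, it also shows why machine-learning modeling attacks on such PUFs are feasible, and why the same modeling machinery can be turned around to predict and compensate for noise-induced unreliability.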
A strategy for achieving manufacturing statistical process control within a highly complex aerospace environment
This paper presents a strategy to achieve process control and overcome the previously mentioned industry constraints by shifting the company focus to the process as opposed to the product. The strategy strives to achieve process control by identifying and controlling the process parameters that influence process capability, followed by the implementation of a process control framework that marries statistical methods with lean business process and change management principles. The reliability of the proposed strategy is appraised using a case study methodology in a state-of-the-art manufacturing facility on multi-axis CNC machine tools.
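The statistical core of such a process control framework is typically Shewhart-style control limits plus a capability index relating process spread to the specification limits. The sketch below is a minimal, generic illustration of those two calculations; the sample data, specification limits, and the choice of Cpk are assumptions for the example, not taken from the paper's case study.

```python
# Minimal sketch of the statistical side of SPC: 3-sigma control limits and
# a Cpk capability index for one monitored process parameter.
# The measurements and spec limits below are made up for illustration.
import statistics

# Measured values of a key process parameter (e.g. a machined diameter, mm)
samples = [10.02, 9.98, 10.01, 9.99, 10.03, 10.00, 9.97, 10.02, 10.01, 9.99]

mean = statistics.mean(samples)
sigma = statistics.stdev(samples)

# Shewhart-style 3-sigma control limits around the process mean
ucl = mean + 3 * sigma
lcl = mean - 3 * sigma

# Capability against hypothetical upper/lower specification limits
usl, lsl = 10.10, 9.90
cpk = min(usl - mean, mean - lsl) / (3 * sigma)

print(f"mean={mean:.3f} UCL={ucl:.3f} LCL={lcl:.3f} Cpk={cpk:.2f}")
```

Points falling outside the control limits flag special-cause variation, while Cpk summarizes whether the controlled process can actually meet the specification, which is the distinction between process control and process capability that the strategy relies on.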
Critical flow – towards a construction flow theory
This paper introduces the concept of Construction Physics as a more comprehensive way of understanding the construction process from a flow perspective. It establishes a preliminary definition of the term and briefly reviews present knowledge, flow models, and methods for their management. From this it argues that the state of the art does not fully cover the whole process and proposes a holistic view of the flow of all prerequisites feeding the process. It introduces the key term Critical Flow and concludes by recommending areas that should be investigated as a joint IGLC research, development, and testing programme.
The gaps between healthcare service and building design : a state of the art review
Healthcare buildings are designed to achieve diverse objectives, ranging from providing appropriate environments where care can be delivered to communities to increasing operational efficiency and improving patient flows and the patient experience. Improvements in operational efficiency should result from state-of-the-art buildings, more appropriate layouts, departmental adjacencies, efficient clinical and business processes, and enhanced information systems. However, complexities around requirements and stakeholder management may prevent the achievement of such objectives. The aim of this article is to identify and understand how healthcare services (re)design and building design can be integrated to facilitate increased performance, both in terms of service delivery and future changes. Findings indicate that current approaches and innovation are restricted due to functional barriers in the design process, and that there is a need to support the development of operations-driven design through time (e.g. flexible and durable) that satisfies diverse needs.
Understanding Internet topology: principles, models, and validation
Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that for the Internet, an improved understanding of its physical infrastructure is possible by viewing the physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates the incorporation of additional objectives of network design (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.
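The annotated-graph view can be sketched very simply: nodes carry a router-technology constraint (an aggregate bandwidth budget) and edges carry link bandwidth, and a candidate topology is feasible only if no router's total attached bandwidth exceeds its budget. The toy example below is purely illustrative; the router names, bandwidths, and capacities are invented and bear no relation to the Abilene or Rocketfuel data.

```python
# Toy annotated router-level graph: edges carry bandwidth, nodes carry a
# router-technology capacity constraint. All names/numbers are invented
# for illustration, not drawn from the paper's datasets.

# Edges: (router_a, router_b, link bandwidth in Gb/s)
edges = [
    ("core1", "core2", 40),
    ("core1", "edge1", 10),
    ("core1", "edge2", 10),
    ("core2", "edge3", 10),
]

# Router technology constraint: total bandwidth each router can terminate
capacity = {"core1": 60, "core2": 60, "edge1": 10, "edge2": 10, "edge3": 10}

def feasible(edges, capacity):
    """Check the per-router bandwidth budget for every node."""
    load = {r: 0 for r in capacity}
    for a, b, bw in edges:
        load[a] += bw
        load[b] += bw
    return {r: load[r] <= capacity[r] for r in capacity}

print(feasible(edges, capacity))
```

Even this tiny check captures the paper's core point: which router-level graphs are realizable is governed by technology and economic constraints, not by degree statistics alone.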