
    A Smart Products Lifecycle Management (sPLM) Framework - Modeling for Conceptualization, Interoperability, and Modularity

    Autonomy and intelligence have been built into many of today’s mechatronic products, taking advantage of low-cost sensors and advanced data analytics technologies. Designing product intelligence (enabled by analytics capabilities) is no longer a trivial or optional add-on in product development. This research aims to address the challenges raised by the new data-driven design paradigm for smart products, in which the physical product and its smartness must be carefully co-constructed. A smart product can be seen as a specific composition and configuration of the physical components that form its body and the analytics models that implement its intelligence, evolving along its lifecycle stages. Based on this view, the contribution of this research is to expand the Product Lifecycle Management (PLM) concept, traditionally applied to physical products, to data-based products. As a result, a Smart Products Lifecycle Management (sPLM) framework is conceptualized based on a high-dimensional Smart Product Hypercube (sPH) representation and its decomposition. First, the sPLM addresses interoperability issues by developing a Smart Component data model to uniformly represent and compose physical component models created by engineers and analytics models created by data scientists. Second, the sPLM implements an NPD3 process model that incorporates a formal data analytics process into the new product development (NPD) process model, in order to support transdisciplinary information flows and team interactions between engineers and data scientists. Third, the sPLM addresses issues related to product definition, modular design, product configuration, and lifecycle management of analytics models by adapting theoretical frameworks and methods from traditional product design and development. An sPLM proof-of-concept platform has been implemented to validate the concepts and methodologies developed throughout the research.
The sPLM platform provides a shared data repository to manage the product-, process-, and configuration-related knowledge for smart products development. It also provides a collaborative environment to facilitate transdisciplinary collaboration between product engineers and data scientists.
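The Smart Component idea of uniformly composing engineer-authored physical models with data-scientist-authored analytics models could be sketched as follows. This is a minimal illustration only; all class and field names here are our own assumptions, not the sPLM schema.

```python
# Hypothetical sketch of a Smart Component record: one entity composing a
# physical component reference with the analytics models that implement its
# intelligence, tagged by lifecycle stage. Names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class AnalyticsModel:
    name: str
    algorithm: str          # e.g. "random_forest" (invented example)
    lifecycle_stage: str    # e.g. "design", "use", "service"


@dataclass
class SmartComponent:
    part_number: str
    cad_reference: str                              # engineer-authored geometry model
    analytics: list = field(default_factory=list)   # data-scientist-authored models

    def models_for_stage(self, stage):
        """Select the analytics models relevant to one lifecycle stage."""
        return [m for m in self.analytics if m.lifecycle_stage == stage]


# Example composition: a motor with a remaining-useful-life model for the use stage.
motor = SmartComponent("M-100", "cad/motor_v3.step",
                       [AnalyticsModel("bearing_rul", "random_forest", "use")])
print(len(motor.models_for_stage("use")))  # -> 1
```

The point of the sketch is only the composition: both kinds of model live in one record, so configuration and lifecycle management can treat them uniformly.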

    Supply chain risk analysis

    A new decision support system is proposed and developed to help sustain business in a high-risk environment. The system is developed as a web application to better integrate supply chain entities and to provide a common platform for performing risk analysis in a supply chain. The system performs a risk analysis and calculates a risk factor for each activity in the supply chain, considering its interrelationships with other activities. Bayesian networks along with fault tree structures are embedded in the system, and logical rules are used to perform a qualitative fault tree analysis, since the data required to calculate frequencies of occurrence is rarely available. The developed system guides the risk assessment process from asset identification to consequence analysis before estimating the risk factor associated with each activity in the supply chain. The system is tested with a sample case study on a highly explosive product. Results show that the system is capable of identifying high-risk threats. The system still needs further development to add a safeguard analysis module and to enable automatic data extraction from enterprise resource planning and legacy databases. It is expected that, once fully developed and inducted, the system will help supply chain managers manage business risks and operations more efficiently and effectively by providing a complete picture of the risk environment and the safeguards required to reduce the risk level.
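Qualitative fault tree logic of the kind described above is often reduced to ordinal gate rules; the following is an illustrative simplification under assumed ordinal likelihood and consequence scales, not the system's actual Bayesian-network implementation.

```python
# Toy qualitative fault tree: leaf events carry ordinal likelihood levels,
# an OR-gate takes the max of its inputs, an AND-gate the min, and the
# risk factor is likelihood level times consequence level. All scales and
# example events are invented for illustration.

def or_gate(levels):
    """An OR-gate output is as likely as its most likely input."""
    return max(levels)

def and_gate(levels):
    """An AND-gate needs all inputs, so it is limited by the least likely."""
    return min(levels)

def risk_factor(likelihood_level, consequence_level):
    """Ordinal risk factor: likelihood x consequence."""
    return likelihood_level * consequence_level

# Example: theft (level 3) OR sabotage (level 1), combined with
# inadequate guarding (level 2), for a consequence rated 4 out of 5.
top_event_level = and_gate([or_gate([3, 1]), 2])
print(risk_factor(top_event_level, 4))  # -> 8
```

Logical rules like these let the tree be evaluated without the frequency-of-occurrence data the abstract notes is rarely available.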

    Data Fusion for Materials Location Estimation in Construction

    Effective automated tracking and locating of the thousands of materials on construction sites improves material distribution and project performance and thus has a significant positive impact on construction productivity. Many locating technologies and data sources have therefore been developed, and deploying a cost-effective, scalable, and easy-to-implement materials location sensing system on actual construction sites has very recently become both technically and economically feasible. However, considerable opportunity still exists to improve the accuracy, precision, and robustness of such systems. The quest for fundamental methods that can take advantage of the relative strengths of each individual technology and data source motivated this research, which has led to the development of new data fusion methods for improving materials location estimation. In this study, a data fusion model is used to generate an integrated solution for the automated identification, location estimation, and relocation detection of construction materials. The developed model is a modified functional data fusion model. Particular attention is paid to noisy environments where low-cost RFID tags are attached to all materials, which are sometimes moved repeatedly around the site. Part of the work focuses on relocation detection because it is closely coupled with location estimation and because it can be used to detect the multi-handling of materials, which is a key indicator of inefficiency. This research has successfully addressed the challenges of fusing data from multiple sources of information in a very noisy and dynamic environment. The results indicate the potential of the proposed model to improve location estimation and movement detection as well as to automate the calculation of the incidence of multi-handling.
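One textbook way to combine noisy location reads from several sources, in the spirit of the fusion model described above, is inverse-variance weighting. This is a sketch under that assumption; the thesis's modified functional data fusion model is more elaborate, and the numbers below are invented.

```python
# Inverse-variance fusion of 2-D location estimates: each source reports
# (x, y, variance), and more precise sources get proportionally more weight.
# A minimal illustration, not the thesis's model.

def fuse_positions(estimates):
    """Fuse (x, y, variance) estimates from several sources into one position."""
    weight_sum = sum(1.0 / var for _, _, var in estimates)
    x = sum(px / var for px, _, var in estimates) / weight_sum
    y = sum(py / var for _, py, var in estimates) / weight_sum
    return x, y

# Two noisy RFID-based reads and one more precise read of the same pallet:
fused = fuse_positions([(10.0, 5.0, 4.0), (12.0, 5.0, 4.0), (11.0, 6.0, 1.0)])
print(fused)  # -> (11.0, 5.666...)
```

The precise third read dominates, pulling the fused estimate toward it; this is the "relative strengths of each source" idea in its simplest form.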

    First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)

    Several topics relating to automation and robotics technology are discussed, including: automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems.

    Fault management via dynamic reconfiguration for integrated modular avionics

    The purpose of this research is to investigate fault management methodologies within Integrated Modular Avionics (IMA) systems and to develop techniques by which dynamic reconfiguration can be used to restore higher levels of system redundancy in the event of a system fault. A proposed concept of dynamic reconfiguration has been implemented on a test facility that allows controlled injection of common faults into a representative IMA system. This facility allows not only observation of how the systems management activities respond to manage the fault, but also analysis of real-time data across the network to ensure distributed control activities are maintained. IMA technologies have evolved as a feasible direction for the next generation of avionic systems. Although federated systems are logical to design, certify and implement, they have inherent limitations that are not cost-beneficial to the customer over the long life-cycles of complex systems. The fundamental modular design, i.e. common processors running modular software functions, provides a flexibility in terms of configuration, implementation and upgradability that cannot be matched by well-established federated avionic system architectures. For example, rapid advances in computing technology mean that dedicated hardware can be outmoded by component obsolescence, which almost inevitably makes replacements unavailable during the normal life-cycles of most avionic systems. Replacing an obsolete part with a newer design involves a costly re-design and re-certification of any functions that are relevant to or interact with that unit. As such, aircraft often go through expensive mid-life updates to upgrade all avionics systems. In contrast, a higher frequency of small capability upgrades would maximise product performance, including cost of development and procurement, in constantly changing platform deployment environments.
IMA is by no means a new concept, and work has been carried out globally to mature the capability. There are even examples where this technology has been implemented as subsystems on in-service aircraft. However, the flexible configuration properties of IMA are yet to be exploited to their full extent; it is feasible that identification of faults or failures within the system could trigger dynamic reconfiguration to maintain high levels of redundancy in the event of component failure. It is also conceivable to install redundant components such that an IMA system can go through a process of graceful degradation, whereby the system accommodates a number of active failures but still maintains appropriate levels of reliability and service. This property extends the average maintenance-free operating period, ensuring that the platform has considerably less unscheduled down time and therefore increased availability. This research involved a number of key activities to investigate the feasibility of the issues outlined above. The first was the creation of a representative IMA system and the development of a systems management capability that performs the required configuration controls. The second was the development of a hardware test rig to facilitate a tangible demonstration of the IMA capability. A representative IMA system was created using the LabVIEW Embedded Tool Suite (ETS) real-time operating system for minimal PC systems. Although this required further code to be written to perform IMA middleware functions, and does not meet the stringent air-safety requirements, it provided a suitable test bed to demonstrate systems management capabilities. The overall IMA system was demonstrated with a 100 kg scale Maglev vehicle as a test subject.
This platform provides a challenging real-time control problem, analogous to an aircraft flight control system, requiring the calculation of parallel control loops at a high sampling rate in order to maintain magnetic suspension. Although the dynamic properties of the test rig are not as complex as those of a modern aircraft, it has much less stringent operating requirements and therefore substantially less risk associated with failure to provide service. The main research contributions of the PhD are:
1. A solution to the dynamic reconfiguration problem of assigning required system functions (namely a distributed, real-time control function with redundant processing channels) to available computing resources while protecting the functional concurrency and time-critical needs of the control actions.
2. A systems management strategy that utilises the dynamic reconfiguration properties of an IMA system to restore high levels of redundancy in the presence of failures.
The conclusion summarises the level of success of the implemented system in terms of an appropriate dynamic reconfiguration in response to a fault signal. In addition, it highlights the issues with using an IMA system as a solution to the operational goals of the target hardware, in terms of design and build complexity, overhead and resources.
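The core of the first contribution, reassigning functions from a failed processor to surviving computing resources, can be sketched as a best-fit reassignment. This is a toy illustration under assumed capacity/load bookkeeping; the thesis's solution additionally protects concurrency and timing constraints, which this sketch omits.

```python
# Toy dynamic reconfiguration: move every function hosted on a failed
# processor onto a surviving processor with enough spare capacity, using a
# best-fit rule. All data structures and numbers are illustrative assumptions.

def reconfigure(assignment, capacities, loads, failed):
    """Return a new {function: processor} assignment after `failed` is lost.

    assignment: {function: processor}; capacities: {processor: capacity};
    loads: {function: load}. Returns None if some function cannot be rehosted
    (graceful degradation would then decide which functions to shed).
    """
    new = dict(assignment)
    used = {p: 0 for p in capacities if p != failed}
    for func, proc in new.items():                 # tally surviving load
        if proc != failed:
            used[proc] += loads[func]
    for func in [f for f, p in assignment.items() if p == failed]:
        candidates = [p for p in used if capacities[p] - used[p] >= loads[func]]
        if not candidates:
            return None                            # cannot restore this function
        # Best fit: the surviving processor with the least spare headroom
        # that still accommodates the function.
        target = min(candidates, key=lambda p: capacities[p] - used[p])
        new[func] = target
        used[target] += loads[func]
    return new

assignment = {"ctrl": "p1", "nav": "p2"}
capacities = {"p1": 10, "p2": 10, "p3": 10}
loads = {"ctrl": 6, "nav": 4}
print(reconfigure(assignment, capacities, loads, "p1"))  # -> {'ctrl': 'p2', 'nav': 'p2'}
```

Best fit keeps large processors free for future failures, one plausible way to preserve headroom for further graceful degradation.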

    Hybrid approaches based on computational intelligence and semantic web for distributed situation and context awareness

    2011 - 2012. The research work focuses on the topics of Situation Awareness and Context Awareness. Specifically, Situation Awareness involves being aware of what is happening in the vicinity in order to understand how information, events, and one’s own actions will impact goals and objectives, both immediately and in the near future. Situation Awareness is thus especially important in application domains where the information flow can be quite high and poor decision making may lead to serious consequences. Context Awareness, on the other hand, is considered a process to support user applications in adapting interfaces, tailoring the set of application-relevant data, increasing the precision of information retrieval, discovering services, making the user interaction implicit, or building smart environments. Despite being slightly different, Situation and Context Awareness involve common problems, such as: the lack of support for the acquisition and aggregation of dynamic environmental information from the field (i.e. sensors, cameras, etc.); the lack of formal approaches to knowledge representation (i.e. contexts, concepts, relations, situations, etc.) and processing (reasoning, classification, retrieval, discovery, etc.); and the lack of automated, distributed systems with considerable computing power to support reasoning over the huge quantity of knowledge extracted from sensor data. The thesis therefore investigates new approaches for distributed Context and Situation Awareness and proposes to apply them to related research objectives such as knowledge representation, semantic reasoning, pattern recognition and information retrieval. The research work starts from the study and analysis of the state of the art in terms of techniques, technologies, tools and systems to support Context/Situation Awareness. The main aim is to develop a new contribution in this field by integrating techniques from the fields of Semantic Web, Soft Computing and Computational Intelligence.
From an architectural point of view, several frameworks are defined according to the multi-agent paradigm. Furthermore, preliminary experimental results have been obtained in application domains such as Airport Security, Traffic Management, Smart Grids and Healthcare. Finally, future work will proceed in the following directions: semantic modeling of fuzzy control, temporal issues, automatic ontology elicitation, extension to other application domains, and further experiments. [edited by author]
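The Computational Intelligence side of such hybrid approaches often rests on fuzzy membership functions; the following is a minimal sketch with hypothetical thresholds and an invented example domain, not the thesis's models.

```python
# Trapezoidal fuzzy membership function, a common Soft Computing primitive:
# membership rises linearly on [a, b], is 1 on [b, c], and falls on [c, d].
# Thresholds and the example scenario below are invented for illustration.

def trapezoid(x, a, b, c, d):
    """Degree (0..1) to which x belongs to a trapezoidal fuzzy set."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical: degree to which an airport checkpoint queue of 30 people
# counts as a 'congested' situation.
print(trapezoid(30, 10, 25, 60, 80))  # -> 1.0
print(trapezoid(15, 10, 25, 60, 80))  # -> 0.333...
```

Degrees like these, rather than crisp thresholds, are what a semantic reasoner would consume when classifying a situation.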

    An overview of the Copernicus C4I architecture

    The purpose of this thesis is to provide the reader with an overview of the U.S. Navy's Copernicus C4I architecture. The acronym "C4I" emphasizes the intimate relationship between command, control, communications and intelligence, as well as their significance to the modern-day warrior. Never in the history of the U.S. Navy has the importance of an extremely flexible C4I architecture been made more apparent than in the last decade. Included are discussions of the Copernicus concept, its command and control doctrine, its architectural goals and components, and Copernicus-related programs. Also included is a discussion of joint service efforts and the initiatives being conducted by the U.S. Marine Corps, the U.S. Air Force and the U.S. Army. Finally, a discussion of the Copernicus Phase I Requirements Definition Document's compliance with the acquisition process as required by DoD Instruction 5000.2 is presented. http://archive.org/details/overviewofcopern00dear
Lieutenant, United States Navy. Approved for public release; distribution is unlimited.

    Towards semantics-driven modelling and simulation of context-aware manufacturing systems

    Systems modelling and simulation are two important facets of thoroughly and effectively analysing manufacturing processes. The ever-growing complexity of the latter, the increasing amount of knowledge, and the use of Semantic Web techniques attaching meaning to data have led researchers to explore and combine methodologies, exploiting their best features, with the purpose of supporting modelling and simulation applications for manufacturing systems. In the past two decades, the use of ontologies has proven to be highly effective for context modelling and knowledge management. Nevertheless, ontologies are not meant for model simulation of any kind. The latter, instead, can be achieved by using a well-known workflow-oriented mathematical modelling language such as the Petri Net (PN), which brings modelling and analytical features suitable for creating a digital copy of an industrial system (also known as a "digital twin"). The theoretical framework presented in this dissertation aims to exploit W3C standards, such as the Semantic Web Rule Language (SWRL) and the Web Ontology Language (OWL), to transform each piece of knowledge about a manufacturing system into Petri Net modelling primitives. In so doing, it supports the semantics-driven instantiation, analysis and simulation of what we call semantically-enriched PN-based manufacturing system digital twins. The approach proposed by this exploratory research is therefore based on exploiting the best features of state-of-the-art W3C standards for Linked Data, such as OWL and SWRL, together with a multipurpose graphical and mathematical modelling tool, the Petri Net. The former are used for gathering, classifying and properly storing industrial data, and therefore enhance our PN-based digital copy of an industrial system with advanced reasoning features.
This makes both the system modelling and analysis phases more effective and, above all, paves the way towards a completely new field, where semantically-enriched PN-based manufacturing system digital twins represent one of the drivers of the digital transformation already under way in companies facing the industrial revolution. As a result, it has been possible to outline a list of indications that will guide future efforts in the application of complex digital-twin support solutions built on semantically-enriched manufacturing information systems. Through the application cases, five key topics have been tackled, namely: (i) semantic enrichment of industrial data using the most recent ontological models, in order to enhance its value and enable new uses; (ii) context-awareness, or context-adaptiveness, aiming to enable the system to capture and use information about the context of operations; (iii) reusability, a core concept through which we emphasize the importance of reusing existing assets in some form within the industrial modelling process, such as industrial process knowledge, process data, system modelling primitives, and the like; (iv) the ultimate goal of semantic interoperability, which can be accomplished by adding data about the metadata, linking each data element to a controlled, shared vocabulary; and finally, (v) the impact on modelling and simulation applications, which shows how we could automate the translation of industrial knowledge into a digital manufacturing system and empower it with quantitative and qualitative analytical techniques.
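The Petri Net target of the OWL/SWRL translation can be illustrated with a minimal token-game core. This is our own sketch of standard PN semantics (places, transitions, markings); the dissertation's mapping from semantic knowledge to PN primitives is far richer, and the example process is invented.

```python
# Minimal Petri Net: a marking maps places to token counts; a transition has
# input and output places with arc weights; firing consumes input tokens and
# produces output tokens. The machining example below is illustrative only.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# A machining step: one raw part plus a free machine yields a finished part
# and releases the machine.
net = PetriNet({"raw": 2, "machine_free": 1})
net.add_transition("machine_part",
                   {"raw": 1, "machine_free": 1},
                   {"finished": 1, "machine_free": 1})
net.fire("machine_part")
print(net.marking)  # -> {'raw': 1, 'machine_free': 1, 'finished': 1}
```

Each SWRL rule over the ontology would, in the dissertation's approach, contribute structures of exactly this kind (places, transitions, initial tokens) to the instantiated digital twin.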

    Integrated helicopter survivability

    A high level of survivability is important to protect military personnel and equipment and is central to UK defence policy. Integrated Survivability is the systems engineering methodology to achieve optimum survivability at an affordable cost, enabling a mission to be completed successfully in the face of a hostile environment. “Integrated Helicopter Survivability” is an emerging discipline that applies this systems engineering approach within the helicopter domain. Philosophically, the overall survivability objective is ‘zero attrition’, even though this is unobtainable in practice. The research question was: “How can helicopter survivability be assessed in an integrated way so that the best possible level of survivability can be achieved within the constraints, and how will the associated methods support the acquisition process?” The research found that principles from safety management could be applied to the survivability problem, in particular reducing survivability risk to as low as reasonably practicable (ALARP). A survivability assessment process was developed to support this approach and was linked into the military helicopter life cycle. This process positioned the survivability assessment methods and the associated input data derivation activities. The system influence diagram method was effective at defining the problem and capturing the wider survivability interactions, including those with the defence lines of development (DLOD). Influence diagrams and Quality Function Deployment (QFD) methods were effective visual tools to elicit stakeholder requirements and to improve communication across organisational and domain boundaries. The semi-quantitative nature of the QFD method yields numbers that are indicative rather than real quantities. These results are suitable for helping to prioritise requirements early in the helicopter life cycle, but they cannot provide the quantifiable estimate of risk needed to demonstrate ALARP.
The probabilistic approach implemented within the Integrated Survivability Assessment Model (ISAM) was developed to provide a quantitative estimate of ‘risk’ to support the approach of reducing survivability risks to ALARP. Limitations in the available input data for the rate of encountering threats mean that the computed probability of survival is not an absolute figure that can be used to assess actual loss rates. However, the method does support an assessment across platform options, provided that the ‘test environment’ remains consistent throughout the assessment. The survivability assessment process and ISAM have been applied to an acquisition programme, where they have been tested in support of survivability decision making and the design process. The survivability ‘test environment’ is an essential element of the survivability assessment process and is required by integrated survivability tools such as ISAM. This test environment, comprising threatening situations that span the complete spectrum of helicopter operations, requires further development. The ‘test environment’ would be used throughout the helicopter life cycle, from the selection of design concepts through to the test and evaluation of delivered solutions, and would be updated as part of the through life capability management (TLCM) process. A framework of survivability analysis tools requires development that can provide probabilistic input data into ISAM and allow the derivation of confidence limits. This systems-level framework would be capable of informing more detailed survivability design work later in the life cycle and could be enabled through a MATLAB® based approach. Survivability is an emerging system property that influences the whole system capability. There is a need for holistic capability-level analysis tools that quantify survivability along with other influencing capabilities such as: mobility (payload/range), lethality, situational awareness, sustainability and other mission capabilities.
It is recommended that an investigation of capability-level analysis methods across defence be undertaken to ensure a coherent and compliant approach to systems engineering that adopts best practice from across the domains. Systems dynamics techniques should be considered for further use by Dstl and the wider MOD, particularly within the survivability and operational analysis domains. This would improve understanding of the problem space, promote a more holistic approach and enable a better balance of capability, within which survivability is one essential element. There would also be value in considering accidental losses within a more comprehensive ‘survivability’ analysis. This approach would enable a better balance to be struck between safety and survivability risk mitigations and would lead to an improved, more integrated overall design.
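Probabilistic survivability assessments of the kind ISAM performs are commonly built on a chain of conditional probabilities per threat encounter. The sketch below assumes independent encounters and uses invented probabilities; it illustrates the general technique, not ISAM's actual model.

```python
# Classic survivability chain: for each threat encounter, the probability of
# loss is P(encounter) * P(hit | encounter) * P(kill | hit); mission
# survivability is the product of per-encounter survival probabilities,
# assuming independence. All numbers below are invented for illustration.

def mission_survivability(encounters):
    """P(survive mission) over independent (p_enc, p_hit, p_kill) encounters."""
    p_survive = 1.0
    for p_enc, p_hit, p_kill in encounters:
        p_survive *= 1.0 - p_enc * p_hit * p_kill
    return p_survive

# Two hypothetical threat types spanning one sortie in a 'test environment':
p = mission_survivability([(0.2, 0.5, 0.4),   # threat A
                           (0.1, 0.3, 0.5)])  # threat B
print(round(p, 4))  # -> 0.9456
```

The abstract's caveat applies directly here: the output is only as meaningful as the encounter-rate inputs, which is why a consistent 'test environment' matters more than the absolute figure.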
