
    Flexibility-Driven Planning Of Flow-Based Mixed-Model Assembly Structures

    Trends such as mass customization, changing customer preferences and the resulting output fluctuations increasingly challenge the production industry. Mixed-model assembly lines are affected by the rising product variety, which ultimately leads to widening cycle time spreads and efficiency losses. Matrix assembly addresses these challenges by decoupling workstations and dissolving cycle time constraints while maintaining the flow. Both matrix and line assembly are flow-based assembly structures characterized by assembly objects moving along the stations. In assembly system planning, competing assembly structures are developed and the one best meeting the use case's requirements is selected for realization. When assessing requirements and selecting the superior assembly structure, the systematic consideration of flexibility is often not ensured within the planning approach. As a result, the preferred assembly structure may lack the flexibility required for a use case. A systematic and data-driven assessment of required and provided flexibility in assembly system planning is therefore necessary. This paper presents an assessment model that matches a use case's requirements with the flexibility of flow-based assembly structures based on production program and process data. On the one hand, requirements are defined by flexibility criteria that evaluate representative product mixes and process time heterogeneity. On the other hand, the flexibility provided by flow-based assembly structures is assessed in a level-based classification. A method for comparing the requirements with the classification's levels is developed to prioritize assembly structures for application in a given case. The flexibility requirements and assembly structure of an exemplary use case are determined and discussed in light of the planning project's insights to evaluate the developed model. This work contributes to the objective and data-driven selection of assembly structures by utilizing use case-specific data available during the structural planning phase to meet flexibility requirements and ensure their consideration throughout the assembly planning process.
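
    The matching step can be pictured with a small sketch. The code below is illustrative only, not the paper's model: the level names, thresholds and candidate structures are assumptions, and the required level is derived from process-time heterogeneity (coefficient of variation) and product-mix variety as stand-ins for the paper's flexibility criteria.

```python
# Illustrative sketch (hypothetical names and thresholds): derive a required
# flexibility level from production program / process data and prioritize
# flow-based assembly structures whose provided level meets or exceeds it.
from statistics import mean, pstdev

def required_level(process_times, n_variants):
    """Map process-time heterogeneity (coefficient of variation) and
    product-mix variety to a required flexibility level (1..3)."""
    cv = pstdev(process_times) / mean(process_times)
    if cv < 0.1 and n_variants <= 3:
        return 1   # low heterogeneity: a paced line would suffice
    if cv < 0.3:
        return 2   # moderate: line with flexible takt / open stations
    return 3       # high heterogeneity and variety: matrix assembly

# Provided flexibility of candidate structures in a level-based classification.
PROVIDED = {"paced line": 1, "unpaced line": 2, "matrix assembly": 3}

def prioritize(process_times, n_variants):
    need = required_level(process_times, n_variants)
    feasible = {s: lvl for s, lvl in PROVIDED.items() if lvl >= need}
    # Prefer the least flexible structure that still meets the requirement,
    # since flexibility typically comes at an efficiency cost.
    return sorted(feasible, key=feasible.get)

if __name__ == "__main__":
    print(prioritize([52, 61, 48, 95, 70, 110], n_variants=8))
```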

    Data-driven model based design and analysis of antenna structures

    Data-driven models, or metamodels, offer an efficient way to mimic the behaviour of computation-intensive simulators. The use of such computationally cheap metamodels is consequently indispensable in the design of contemporary antenna structures, where computation-intensive simulations are often performed at a large scale. Although metamodels offer sufficient flexibility and speed, they often suffer from an exponential growth in the number of required training samples as the dimensionality of the problem increases. To alleviate this issue, a Gaussian process based approach, known as Gradient-Enhanced Kriging (GEK), is proposed in this work to achieve cost-efficient modelling of antenna structures. The GEK approach incorporates adjoint-based sensitivity data in addition to function data obtained from electromagnetic simulations. The approach is illustrated using a dielectric resonator antenna and an ultra-wideband antenna structure. The method demonstrates significant accuracy improvement with fewer training samples than the Ordinary Kriging (OK) approach, which utilises function data only. The discussed technique also compares favourably with OK in terms of computational cost.
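
    The sketch below illustrates the core idea of gradient-enhanced surrogate modelling in one dimension: a zero-mean Gaussian process is conditioned on both function values and their derivatives, which is the mechanism GEK exploits (full GEK additionally carries a regression trend, and the paper's sensitivities come from adjoint electromagnetic simulations rather than the synthetic function used here).

```python
# Simplified 1-D sketch of gradient-enhanced surrogate modelling: a zero-mean
# Gaussian process conditioned on function values and derivatives.
# All data below are synthetic and for illustration only.
import numpy as np

def gek_predict(x_train, y, dy, x_new, theta=5.0, nugget=1e-10):
    """Predict f(x_new) from samples y and gradients dy at x_train,
    using a Gaussian kernel k(a, b) = exp(-theta * (a - b)**2)."""
    d = x_train[:, None] - x_train[None, :]
    k = np.exp(-theta * d**2)
    k_ff = k                                          # Cov(f_i, f_j)
    k_fd = 2 * theta * d * k                          # Cov(f_i, f'_j)
    k_dd = 2 * theta * (1 - 2 * theta * d**2) * k     # Cov(f'_i, f'_j)
    K = np.block([[k_ff, k_fd], [k_fd.T, k_dd]])
    K += nugget * np.eye(K.shape[0])                  # numerical stabilisation
    w = np.linalg.solve(K, np.concatenate([y, dy]))

    ds = x_new[:, None] - x_train[None, :]
    ks = np.exp(-theta * ds**2)
    k_star = np.hstack([ks, 2 * theta * ds * ks])     # Cov(f(x*), [f, f'])
    return k_star @ w

if __name__ == "__main__":
    xt = np.array([0.0, 0.4, 1.0])
    f = lambda x: np.sin(6 * x)                       # stand-in for an EM simulation
    xs = np.linspace(0, 1, 5)
    print(gek_predict(xt, f(xt), 6 * np.cos(6 * xt), xs))
```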

    Structure-type classification and flexibility-based detection of earthquake-induced damage in full-scale RC buildings

    Detecting early damage in civil structures is highly desirable. In the area of vibration-based damage detection, modal flexibility (MF)-based methods have proven to be promising tools for promptly identifying changes in the global structural behavior. Many of these methods have been developed for specific types of structures, giving rise to different approaches and damage-sensitive features (DSFs). Although structure-type classification is an important part of the damage detection process, it has received little attention in the literature and often relies on a priori engineering knowledge. Moreover, experimental validations are generally performed only on small-scale laboratory structures with controlled artificial damage (e.g., imposed stiffness reductions). This paper proposes data-driven criteria for structure-type classification, usable in the framework of MF-based damage identification methods, to select the most appropriate algorithms and DSFs for detecting and localizing structural anomalies. The paper also tests the applicability of the proposed classification criteria and the damage identification methods on full-scale reinforced concrete (RC) structures that have experienced earthquake-induced damage. The considered structures are a seven-story RC wall building and a five-story RC frame building, both of which were tested on the large-scale University of California, San Diego-Network for Earthquake Engineering Simulation (UCSD-NEES) shaking table.
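
    As background for the MF-based methods discussed above, the sketch below shows how a modal flexibility matrix is assembled from identified modes and how a simple flexibility-change feature can point toward the damaged storey. The mode shapes, frequencies and the specific DSF are illustrative assumptions, not the paper's algorithms.

```python
# Minimal sketch of modal-flexibility (MF) quantities: with mass-normalised
# mode shapes phi_i and natural frequencies omega_i (rad/s), the flexibility
# matrix is approximated from the first m identified modes. Data are synthetic;
# real phi/omega would come from operational modal analysis.
import numpy as np

def modal_flexibility(phi, omega):
    """F ~= sum_i phi_i phi_i^T / omega_i^2  (phi: n_dof x n_modes)."""
    return (phi / omega**2) @ phi.T

def flexibility_dsf(F_ref, F_dam):
    """One common damage-sensitive feature: column-wise maximum change in
    flexibility; peaks suggest the DOF/storey where stiffness was lost."""
    return np.max(np.abs(F_dam - F_ref), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phi_ref = rng.standard_normal((5, 3))             # 5 DOFs, 3 identified modes
    w_ref = np.array([8.0, 21.0, 35.0])               # rad/s
    phi_dam, w_dam = phi_ref, w_ref * np.array([0.85, 0.95, 0.98])  # softened modes
    dsf = flexibility_dsf(modal_flexibility(phi_ref, w_ref),
                          modal_flexibility(phi_dam, w_dam))
    print(dsf)
```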

    Towards Data-driven Simulation Modeling for Mobile Agent-based Systems

    Simulation modeling provides insight into how dynamic systems work. Current simulation modeling approaches are primarily knowledge-driven, which involves converting expert knowledge into models and simulating them to understand more about the system. Knowledge-driven models are useful for exploring the dynamics of systems, but they are handcrafted, which means that they are expensive to develop and reflect the bias and limited knowledge of their creators. To address the limitations of knowledge-driven simulation modeling, this dissertation develops a framework towards data-driven simulation modeling that discovers simulation models in an automated way based on data or behavior patterns extracted from the systems under study. By using data, simulation models can be discovered automatically and with less bias than through knowledge-driven methods. Additionally, multiple models can be discovered that replicate the desired behavior. Each of these models can be thought of as a hypothesis about how the real system generates the observed behavior. The framework was developed based on the application of mobile agent-based systems and is composed of three components: 1) model space specification; 2) search method; and 3) framework measurement metrics. The model space specification provides a formal specification of the general model structure from which various models can be generated. The search method is used to efficiently search the model space for candidate models that exhibit the desired behavior. Five framework measurement metrics, namely flexibility, comprehensibility, controllability, composability, and robustness, are developed to evaluate the overall framework. Furthermore, to incorporate knowledge into the data-driven simulation modeling framework, a method was developed that uses System Entity Structures (SESs) to specify incomplete knowledge to be used by the model search process. This is significant because knowledge-driven modeling requires a complete understanding of a system before it can be modeled, whereas the framework can find a model with incomplete knowledge. The developed framework has been applied to mobile agent-based systems, and the results demonstrate that it is possible to discover a variety of interesting models using the framework.
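
    A toy version of the framework's search loop is sketched below. Everything in it (rule names, the stand-in simulator, the tolerance) is invented for illustration; the point is only that candidate models drawn from a model space specification are simulated and ranked against a behavior pattern extracted from data.

```python
# Toy sketch of a model-space search (all names invented): candidate agent
# models are generated from a model space specification, simulated, and kept
# when their emergent behaviour matches a target pattern from data.
import itertools
import random

MOVE_RULES = ["random_walk", "follow_gradient", "avoid_crowding"]
INTERACT_RULES = ["ignore", "align", "repel"]

def simulate(move, interact, steps=200):
    """Stand-in simulator returning a scalar behaviour pattern
    (think: mean dispersion of the mobile agents)."""
    rng = random.Random(f"{move}|{interact}")
    bias = MOVE_RULES.index(move) + 0.5 * INTERACT_RULES.index(interact)
    return sum(rng.gauss(bias, 0.3) for _ in range(steps)) / steps

def search_models(target_pattern, tolerance=0.25):
    """Exhaustive search over the (small) model space; any search method,
    e.g. an evolutionary one, could rank candidates the same way."""
    candidates = itertools.product(MOVE_RULES, INTERACT_RULES)
    return [(m, i) for m, i in candidates
            if abs(simulate(m, i) - target_pattern) < tolerance]

if __name__ == "__main__":
    # Every returned model is a hypothesis about how the observed behaviour arises.
    print(search_models(target_pattern=1.5))
```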

    A VLSI Architecture for Concurrent Data Structures

    Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures. Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many constituent objects and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built. The balanced cube is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary n-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in O(log N) time. Because it does not have the root bottleneck that limits all tree-based data structures to O(1) concurrency, the balanced cube achieves O(N/log N) concurrency. Considering graphs as concurrent data structures, graph algorithms are presented for the shortest path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms. A message-passing concurrent architecture is developed that exploits the characteristics of VLSI technology to support concurrent data structures. Interconnection topologies are compared on the basis of dimension, and it is shown that minimum latency is achieved with a very low-dimensional network. A deadlock-free routing strategy is developed for this class of networks, and a prototype VLSI chip implementing this strategy is described. A message-driven processor complements the network by responding to messages with very low latency. The processor directly executes messages, eliminating a level of interpretation. To take advantage of the performance offered by specialization while retaining flexibility, processing elements can be specialized to operate on a single class of objects. These object experts accelerate the performance of all applications using that class.
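
    The Gray-code property that VW search relies on is easy to demonstrate: consecutive codewords differ in exactly one bit, so neighbouring partitions of the ordered set map to physically adjacent nodes of the binary n-cube. The short sketch below (written in Python rather than CST) checks this adjacency property.

```python
# Check of the Gray-code adjacency property the balanced cube exploits:
# consecutive binary-reflected Gray codewords differ in exactly one bit.
def gray(i: int) -> int:
    """Binary-reflected Gray code of i."""
    return i ^ (i >> 1)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    n = 4                                    # a binary 4-cube with 16 nodes
    codes = [gray(i) for i in range(2 ** n)]
    # Neighbouring positions of the ordered partition land on neighbouring
    # cube nodes (Hamming distance 1).
    assert all(hamming(codes[i], codes[i + 1]) == 1 for i in range(2 ** n - 1))
    print([format(c, f"0{n}b") for c in codes])
```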

    Barriers to and facilitators of implementing complex workplace dietary interventions: Process evaluation results of a cluster controlled trial

    Background: Ambiguity exists regarding the effectiveness of workplace dietary interventions, and rigorous process evaluation is vital to understanding this uncertainty. This study was conducted as part of the Food Choice at Work trial, which assessed the comparative effectiveness of a workplace environmental dietary modification intervention and an educational intervention, both alone and in combination, versus a control workplace. Effectiveness was assessed in terms of employees’ dietary intakes, nutrition knowledge and health status in four large manufacturing workplaces. The study aimed to examine barriers to and facilitators of implementing complex workplace interventions from the perspectives of key workplace stakeholders and the researchers involved in implementation. Methods: A detailed process evaluation monitored and evaluated intervention implementation. Interviews were conducted at baseline (27 interviews) and at 7–9 month follow-up (27 interviews) with a purposive sample of workplace stakeholders (managers and participating employees). Topic guides explored factors which facilitated or impeded implementation. Researchers involved in recruitment and data collection participated in focus groups at baseline and at 7–9 month follow-up to explore their perceptions of intervention implementation. Data were imported into NVivo software and analysed using a thematic framework approach. Results: Four major themes emerged: perceived benefits of participation, negotiation and flexibility of the implementation team, viability and intensity of interventions, and workplace structures and cultures. The latter three themes either positively or negatively affected implementation, depending on context. The implementation team included managers involved in coordinating and delivering the interventions and the researchers who collected data and delivered intervention elements. Stakeholders’ perceptions of the benefits of participating, which facilitated implementation, included managers’ desire to improve company image and employees seeking health improvements. Other facilitators included stakeholder buy-in, organisational support and stakeholder cohesiveness with regard to the level of support provided to the intervention. Anticipation of employee resistance towards menu changes, workplace restructuring and target-driven workplace cultures impeded intervention implementation. Conclusions: Contextual factors such as workplace structures and cultures need to be considered in the implementation of future workplace dietary interventions. Negotiation and flexibility of key workplace stakeholders play an integral role in overcoming the barriers of workplace cultures, structures and resistance to change.

    Designing Reusable Systems that Can Handle Change - Description-Driven Systems: Revisiting Object-Oriented Principles

    In the age of the Cloud and so-called Big Data, systems must be increasingly flexible, reconfigurable and adaptable to change, in addition to being developed rapidly. As a consequence, designing systems to cater for evolution is becoming critical to their success. To be able to cope with change, systems must have the capability of reuse and the ability to adapt as and when necessary to changes in requirements. Allowing systems to be self-describing is one way to facilitate this. To address the issues of reuse in designing evolvable systems, this paper proposes a so-called description-driven approach to systems design. This approach enables new versions of data structures and processes to be created alongside the old, thereby providing a history of changes to the underlying data models and enabling the capture of provenance data. The efficacy of the description-driven approach is exemplified by the CRISTAL project. CRISTAL is based on description-driven design principles; it uses versions of stored descriptions to define various versions of data, which can be stored in diverse forms. This paper discusses the need for capturing a holistic system description when modelling large-scale distributed systems.
    Comment: 8 pages, 1 figure and 1 table. Accepted by the 9th Int Conf on the Evaluation of Novel Approaches to Software Engineering (ENASE'14), Lisbon, Portugal, April 2014.
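
    The sketch below illustrates the description-driven idea in miniature: descriptions of data structures are themselves versioned, stored items record which description version produced them, and new versions coexist alongside old ones. It is a hypothetical illustration, not CRISTAL's actual API.

```python
# Hypothetical miniature of a description-driven store (not CRISTAL's API):
# descriptions are versioned data, items record the description that shaped
# them, and new description versions coexist with old ones (provenance).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Description:
    name: str
    version: int
    fields: tuple                    # the data structure being described

@dataclass
class Item:
    described_by: Description        # provenance: which description version was used
    values: dict
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

registry = {}                        # (name, version) -> Description; never overwritten

def register(desc):
    registry.setdefault((desc.name, desc.version), desc)

if __name__ == "__main__":
    v1 = Description("Sample", 1, ("id", "weight"))
    v2 = Description("Sample", 2, ("id", "weight", "operator"))  # evolves alongside v1
    register(v1); register(v2)
    old = Item(v1, {"id": 7, "weight": 3.2})
    new = Item(v2, {"id": 8, "weight": 2.9, "operator": "A"})
    print(old.described_by.version, new.described_by.version)    # -> 1 2
```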

    Bioengineered Textiles and Nonwovens – the convergence of bio-miniaturisation and electroactive conductive polymers for assistive healthcare, portable power and design-led wearable technology

    Today, there is an opportunity to bring together creative design activities to exploit the responsive and adaptive ‘smart’ materials that are the result of rapid developments in electro- and photo-active polymers, or OFEDs (organic thin film electronic devices), and bio-responsive hydrogels, integrated into MEMS/NEMS devices and systems respectively. Some of these integrated systems are summarised in this paper, highlighting their use to create enhanced functionality in textiles, fabrics and non-woven large-area thin films. By understanding the characteristics and properties of OFEDs and biopolymers, and how they can be transformed into implementable physical forms, innovative products and services can be developed, with wide implications. The paper outlines some of these opportunities and applications, in particular an ambient living platform dealing with human-centred needs: people at work, people at home and people at play. The innovative design affords the accelerated development of intelligent materials (interactive, responsive and adaptive) for a new product & service design landscape, encompassing assistive healthcare (smart bandages and digital theranostics), ambient living, renewable energy (organic PV and solar textiles), interactive consumer products, interactive personal & beauty care (e-Scent) and a more intelligent built environment.