
    Optimal Design of Experiments for Functional Responses

    abstract: Functional or dynamic responses are prevalent in experiments in the fields of engineering, medicine, and the sciences, but proposals for optimal designs are still sparse for this type of response. Experiments with dynamic responses yield multiple responses taken over a spectrum variable, so the design matrix for a dynamic response has a more complicated structure. In the literature, the optimal design problem for some functional responses has been solved using genetic algorithms (GA) and approximate design methods. The goal of this dissertation is to develop fast computer algorithms for calculating exact D-optimal designs. First, we demonstrated how the traditional exchange methods could be improved to give a computationally efficient algorithm for finding G-optimal designs. The proposed two-stage algorithm, called the cCEA, uses a clustering-based approach to restrict the set of possible candidates for the PEA, and then improves the G-efficiency using the CEA. The second major contribution of this dissertation is the development of fast algorithms for constructing D-optimal designs that determine the optimal sequence of stimuli in fMRI studies. The update formula for the determinant of the information matrix was improved by exploiting the sparseness of the information matrix, leading to faster computation times. The proposed algorithm outperforms the genetic algorithm in both computational efficiency and D-efficiency. The third contribution is a study of optimal experimental designs for more general functional response models. First, a B-spline system is proposed as the non-parametric smoother of the response function, and an algorithm is developed to determine D-optimal sampling points of a spectrum variable. Second, we proposed a two-step algorithm for finding the optimal design for both sampling points and experimental settings.
In the first step, the matrix of experimental settings is held fixed while the algorithm optimizes the determinant of the information matrix for a mixed effects model to find the optimal sampling times. In the second step, the optimal sampling times obtained from the first step are held fixed while the algorithm iterates on the information matrix to find the optimal experimental settings. The designs constructed by this approach yield superior performance over other designs found in the literature.
Doctoral Dissertation, Industrial Engineering, 201
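    The exchange-based optimization described above can be illustrated on a much smaller problem. The sketch below is not the dissertation's cCEA or fMRI algorithm; it is a generic coordinate-exchange search for an exact D-optimal design under an assumed quadratic model, maximizing det(XᵀX) by repeatedly swapping one design point at a time against a candidate grid:

```python
# Illustrative coordinate-exchange search for an exact D-optimal design.
# Assumed model (not from the dissertation): y = b0 + b1*x + b2*x^2 on [-1, 1].
import numpy as np

def model_matrix(points):
    x = np.asarray(points, dtype=float)
    return np.column_stack([np.ones_like(x), x, x ** 2])

def d_criterion(points):
    X = model_matrix(points)
    return np.linalg.det(X.T @ X)

def d_optimal_exchange(candidates, n_runs, max_passes=50, seed=0):
    rng = np.random.default_rng(seed)
    design = list(rng.choice(candidates, size=n_runs, replace=True))
    best = d_criterion(design)
    for _ in range(max_passes):
        improved = False
        for i in range(n_runs):          # exchange one run at a time
            for c in candidates:
                trial = design.copy()
                trial[i] = c
                d = d_criterion(trial)
                if d > best + 1e-12:     # keep strictly improving swaps
                    design, best = trial, d
                    improved = True
        if not improved:                 # no single swap helps: stop
            break
    return sorted(design), best

design, det_val = d_optimal_exchange(np.linspace(-1, 1, 21), n_runs=6)
```

    For a quadratic model on [-1, 1], the known D-optimal support is {-1, 0, 1}, and with six runs the search splits them evenly, two at each support point.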

    A method and application of machine learning in design

    This thesis addresses the issue of developing machine learning techniques for the acquisition and organization of design knowledge to be used in knowledge-based design systems. It presents a general method of developing machine learning tools in the design domain. An identification tree is introduced to distinguish different approaches and strategies of machine learning in design. Three existing approaches are identified: the knowledge-oriented, the learner-oriented, and the design-oriented approach. The learner-oriented approach, which focuses on the development of new machine learning tools for design knowledge acquisition, is the critical one. Four strategies suitable for this approach are specialization, generalization, integration and exploration. A general method of developing machine learning techniques in the design domain, called MLDS (Machine Learning in Design with 5 steps), is presented. It consists of the following steps: 1) identify source data and target knowledge; 2) determine source representation and target representation; 3) identify the background knowledge available; 4) identify the features of data, knowledge and domain; and 5) develop (specialize, generalize, integrate or explore) a machine learning tool. The method is elaborated step by step and the dependencies between the components are illustrated with a corresponding framework. To assist in characterising the data, knowledge and domain, a set of formal measures is introduced, including density of dataset, size of description space, homogeneity of dataset, complexity of domain, difficulty of domain, stability of domain, and usage of knowledge. Design knowledge is partitioned into two main types: empirical and causal. Empirical knowledge is modelled as empirical associations within categories of design attributes or empirical mappings between these meaningful categories. Eight types of empirical mappings are distinguished.
Among them, the mappings from one multidimensional space to another are recognized as the most important for both knowledge-based design systems and machine learning in design. The MLDS method is applied to the preliminary design of a learning model for the integration of design cases and design prototypes. Both source and target representations use the framework of design prototypes. The function-behaviour-structure categorization of design prototypes is used as background knowledge to improve both supervised and unsupervised learning in this task. Many-to-many mappings and time- or order-dependent data are found to be the most important characteristics of the design domain for machine learning. Multiple-attribute prediction and the capture of design concept ‘drift’ are identified as challenging tasks for machine learning in design. After the possibilities and limitations of solving the problem by modifying existing learning methods (both supervised and unsupervised) are considered, a learning model is created by integrating several learning techniques. The basic scheme of this model is that of goal-driven concept formation, which consists of flexible categorization, extensive generalization, temporary suspension, and cognitively-based sequence prediction in design.
The learning process is described as follows: each time, one category of attributes is treated as the predictive feature set and the remaining attributes as the predicted feature set; a conceptual hierarchy or decision tree is constructed incrementally according to the predictive features of design cases (while statistical information is generalized with both feature sets); whenever the predictive or the predicted feature set of a node becomes homogeneous, the construction process at that branch is temporarily suspended until a new case arrives and breaks this homogeneity; and frequency-based prediction at indeterminate nodes is replaced with a cognitively-based sequence prediction, which allows the more recent cases to have a stronger influence on the determination of the default or predicted values. An advantage of this scheme is that, with the single learning algorithm, all the types of empirical mappings between function, behaviour and structure, or between design problem specification and design solution description, can be generalized from design cases. To enrich the indexing facilities in a conceptual hierarchy and improve its case retrieval ability, memory organizations based on extensive generalization are investigated as alternatives for concept formation. An integration of the above learning techniques reduces the memory requirement of some existing extensive generalization models to a level applicable to practical problems in the design domain. The MLDS method is particularly useful in the preliminary design of a learning system, for identifying a learning problem and the strategies suitable for solving it in the domain. Although the MLDS method is developed and demonstrated in the context of design, it is independent of any particular design problems and is applicable to some other domains as well.
The cognitive model of sequence-based prediction developed with this method can be integrated with general concept formation methods to improve their performance in those domains where concepts drift or knowledge changes quickly, and where the degree of indeterminacy is high.
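    One way to realize the recency effect in that prediction step can be sketched briefly. This is an assumed exponential-decay weighting, not the thesis's exact cognitive model, and the attribute values are hypothetical:

```python
# Frequency-based vs recency-weighted prediction of a default value at an
# indeterminate node. The decay scheme and example values are assumptions.
from collections import defaultdict

def frequency_predict(cases):
    counts = defaultdict(int)
    for value in cases:
        counts[value] += 1
    return max(counts, key=counts.get)   # most frequent value wins

def recency_predict(cases, decay=0.5):
    scores = defaultdict(float)
    for age, value in enumerate(reversed(cases)):
        scores[value] += decay ** age    # most recent case has age 0
    return max(scores, key=scores.get)

history = ["steel", "steel", "steel", "timber", "timber"]
frequency_predict(history)   # "steel": three occurrences out of five
recency_predict(history)     # "timber": the two most recent cases dominate
```

    With the exponential weighting, a value observed in the last few cases can override a historically more frequent one, which suits domains where design concepts drift quickly.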

    Human behaviour in office environments. Finding patterns of activity and spatial configuration in large workplace datasets

    The study of human behaviour in office spaces has a long and varied history, from the 1970s to a recent resurgence of interest today, examining elements of collaboration and activity and how those elements are affected by the design and configuration of space. These studies, however, produced scattered and sometimes contradictory results, owing to the lack of larger datasets and common methodologies. This thesis examines one such large dataset using a newly developed unified framework that includes a common structure for all data, a spatial model to represent configuration at multiple scales, and a set of statistical methods to extract meaningful information. The dataset covers around 40 companies in the UK, most of which are based around London, at different scales, from single-floor workplaces to large multiple-campus offices. A complete workflow for working with this dataset is described, covering existing metrics from Visibility Graph Analysis in extensive detail as well as newly developed ones such as 'Travel Concentration', a metric meant to capture attractor-driven effects. The analysis focuses on examining spatial configuration against movement and interaction at three scales, the floor level (macro), the room level (meso) and the location level (micro), allowing insights to emerge for the various parts of the design process. A variety of statistical models is presented, with different levels of predictive strength; the choice among them depends on the purpose at hand, but also on the size of the dataset and on how biased an approach is acceptable. The results show that, in general, movement was more predictable than interaction, but also that the latter was a much more complex activity that became more predictable when broken down into sub-types, such as visiting and chatting interactions.
More specifically, it was found that at the larger scales both activities were mainly affected by the seat density of the workplaces, while at the smaller scales the attractor-driven nature of movement became more apparent. Interaction, on the other hand, was found to relate strongly to the availability of space, and thus of potential people to interact with, as it happened mainly in workspaces. The thesis presents these results both as characteristics of spaces that tend to attract each activity (as predicted by each statistical model) and as actionable insights that a designer might use in the design process.
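    As a toy illustration of the kind of statistical model involved (the observations below are invented, not taken from the thesis's dataset), an ordinary least-squares fit can relate an observed activity to spatial metrics such as seat density and visual integration:

```python
# Hypothetical per-location data: columns are seat density and a visibility
# metric ("integration"); the response is an observed movement count.
import numpy as np

X_raw = np.array([[0.2, 1.1],
                  [0.5, 0.9],
                  [0.8, 1.8],
                  [0.4, 1.4],
                  [0.9, 2.0]])
movement = np.array([3.0, 4.5, 9.0, 5.5, 10.0])

X = np.column_stack([np.ones(len(X_raw)), X_raw])   # add an intercept column
coef, *_ = np.linalg.lstsq(X, movement, rcond=None)

predicted = X @ coef
ss_res = np.sum((movement - predicted) ** 2)
ss_tot = np.sum((movement - movement.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                          # predictive strength
```

    Comparing r2 across competing models, e.g. with movement versus interaction as the response, is one simple way to quantify the observation that movement is the more predictable activity.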

    Bioprocess economics and optimization of continuous and pre-packed disposable chromatography

    The biotech sector is facing increasing pressure to design more cost-efficient, robust and flexible manufacturing processes. Standard batch chromatography (BATCH) is an established but expensive approach to removing the impurities associated with both E. coli and mammalian cell expression systems. This study uses a computational framework to investigate whether the application of continuous chromatography (CONTI) and disposable technologies can provide a competitive alternative to BATCH and reusable equipment. A set of general assumptions is presented on how some of the key downstream processing characteristics, such as chromatography operating conditions, resin properties and equipment requirements, vary as a function of the chromatography mode adopted, BATCH vs CONTI, and the column type used, self-packed glass (SP GLASS) vs pre-packed disposable (PP DISPO). These assumptions are then used within the framework, which comprises a detailed process economics model, to explore switching points between the two chromatography modes and column types for different upstream configurations and resin properties, focusing on a single chromatography step. Following this, an evolutionary optimization algorithm is linked to the framework to optimize the setup of an entire antibody purification train consisting of multiple chromatography steps. Alongside the chromatography mode and column type, the framework also optimizes critical decisions relating to the chromatography sequence, the equipment sizing strategy and the operating conditions adopted for each chromatography step, subject to multiple demand- and process-related (resin requirement) constraints. The framework is validated for different production scales, including early phase, phase III and commercial scale. To facilitate decision making, methods are provided for visualizing the switching points and the trade-offs exhibited by the optimal purification processes found by the framework.
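    The evolutionary search over discrete purification decisions can be caricatured in a few lines. The decision options, cost figures and parameters below are invented for illustration; they are not the study's process economics model:

```python
# Toy evolutionary optimization over discrete chromatography decisions.
# All costs are assumed placeholder numbers, not results from the study.
import random

MODES = ["BATCH", "CONTI"]
COLUMNS = ["SP GLASS", "PP DISPO"]
DIAMETERS = [10, 20, 30, 45, 60]                  # cm; hypothetical options

def mock_cost(candidate):
    mode, column, diameter = candidate
    cost = 100.0 if mode == "BATCH" else 80.0     # assumed resin savings
    cost += 15.0 if column == "SP GLASS" else 5.0 # assumed packing labour
    return cost + 0.5 * abs(diameter - 30)        # assumed ideal sizing

def mutate(candidate, rng):
    pools = (MODES, COLUMNS, DIAMETERS)
    gene = rng.randrange(3)                       # resample one decision
    new = list(candidate)
    new[gene] = rng.choice(pools[gene])
    return tuple(new)

def evolve(generations=60, pop_size=20, seed=1):
    rng = random.Random(seed)
    population = [(rng.choice(MODES), rng.choice(COLUMNS), rng.choice(DIAMETERS))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=mock_cost)
        survivors = population[: pop_size // 2]   # elitist truncation selection
        population = survivors + [mutate(p, rng) for p in survivors]
    return min(population, key=mock_cost)

best = evolve()   # cheapest mode/column/diameter combination under mock_cost
```

    In the study itself, the fitness function is the detailed process economics model and the decision vector also covers the chromatography sequence, sizing strategy and operating conditions, under demand and resin-requirement constraints.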

    A semiotic approach to the use of metaphor in human-computer interfaces

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 20/9/1999.
    Although metaphors are common in computing, particularly in human-computer interfaces, opinion is divided on their usefulness to users and little evidence is available to help the designer in choosing or implementing them. Effective use of metaphors depends on understanding their role in the computer interface, which in turn means building a model of the metaphor process. This thesis examines some of the approaches which might be taken in constructing such a model before choosing one and testing its applicability to interface design. Earlier research into interface metaphors used experimental psychology techniques which proved useful in showing the benefits or drawbacks of specific metaphors, but did not give a general model of the metaphor process. A cognitive approach based on mental models has proved more successful in offering an overall model of the process, although this thesis questions whether the researchers tested it adequately. Other approaches which have examined the metaphor process (though not in the context of human-computer interaction) have come from linguistic fields, most notably semiotics, which extends linguistics to non-verbal communication and thus could cover graphical user interfaces (GUIs). The main work described in this thesis was the construction of a semiotic model of human-computer interaction. The basic principle of this is that even the simplest element of the user interface will signify many simultaneous meanings to the user. Before building the model, a set of assertions and questions was developed to check the validity of the principles on which the model was based. Each of these was then tested by a technique appropriate to the type of issue raised. Rhetorical analysis was used to establish that metaphor is commonplace in command-line languages, in addition to its more obvious use in GUIs.
A simple semiotic analysis, or deconstruction, of the Macintosh user interface was then used to establish the validity of viewing user interfaces as semiotic systems. Finally, an experiment was carried out to test a mental model approach proposed by previous researchers. By extending their original experiment to more realistically complex interfaces and tasks and using a more typical user population, it was shown that users do not always develop mental models of the type proposed in the original research. The experiment also provided evidence to support the existence of multiple layers of signification. Based on the results of the preliminary studies, a simple means of testing the semiotic model's relevance to interface design was developed, using an interview technique. The proposed interview technique was then used to question two groups of users about a simple interface element. Two independent researchers then carried out a content analysis of the responses. The mean number of significations in each interview, as categorised by the researchers, was 15. The levels of signification were revealed rapidly, with the mean time for each interview being under two minutes, providing effective evidence that interfaces signify many meanings to users, a substantial number of which are easily retrievable. It is proposed that the interview technique could provide a practical and valuable tool for systems analysts and interface designers. Finally, areas for further research are proposed, in particular to ascertain how the model and the interview technique could be integrated with other design methods.