30 research outputs found

    Interactive optimisation for high-lift design.

    Interactivity always involves two entities, one of which by default is a human user. The specialised subject of human factors is introduced in the context of computational aerodynamics and optimisation, specifically a high-lift aerofoil. The trial-and-error nature of a design process hinges on the designer's knowledge, skill and intuition. A basic, important assumption of a man-machine system is that, in solving a problem, there are some steps in which the computer has an advantageous edge while in others the human has dominance. Computational technologies are now an indispensable part of aerospace technology, and algorithms involving significant user interaction, either during the process of generating solutions or as a component of post-optimisation evaluation where human decision making is involved, are becoming increasingly popular; multi-objective particle swarm is one such optimiser. Several design optimisation problems in engineering are by nature multi-objective: the designer's interest lies in simultaneous optimisation against two or more objectives which are usually in conflict. Interactive optimisation allows the designer to understand trade-offs between the various objectives, and is generally used as a tool for decision making. The solution to a multi-objective problem is a Pareto set: a set of solutions in which improvement in one objective comes only with the deterioration of at least one other objective. There are multiple solutions to a problem and multiple improvement ideas for an already existing design. The final responsibility for identifying an optimal solution or idea rests with the design engineers, and decision making is based on quantitative metrics, displayed as numbers or graphs. However, visualisation, ergonomics and human factors influence and impact this decision-making process. A visual, graphical depiction of the Pareto front is often used as a design aid for purposes of decision making, although chances of errors and fallacies fundamentally exist in engineering design. An effective visualisation tool benefits complex engineering analyses by providing the decision-maker with a good imagery of the most important information. Two high-lift aerofoil data-sets have been used as test-case examples; the module comprises a multi-element solver, an optimiser based on a swarm intelligence technique, and visual techniques which include parallel co-ordinates, heat map, scatter plot, self-organising map and radial coordinate visualisation. Factors that affect optima and various evaluation criteria have been studied in light of the human user. This research enquires into interactive optimisation by adapting three interactive approaches: information trade-off, reference point and classification, and investigates selected visualisation techniques which act as chief aids in the context of high-lift design trade studies. Human-in-the-loop engineering and man-machine interaction and interface are considered, along with influencing factors, reliability, validation and verification in the presence of design uncertainty. The research structure, choice of optimiser and visual aids adapted in this work are influenced by and streamlined to fit with the parallel on-going development work on Airbus' Python-based tool. Results and analysis, together with a literature survey, are presented in this report. The words human, user, engineer, aerodynamicist, designer, analyst and decision-maker (DM) are synonymous and are used interchangeably in this research.
    In a virtual engineering setting, a suitable visualisation tool is a crucial prerequisite for an efficient interactive optimisation task. The underlying premise of this work is that various optimisation design tools and methods are most useful when combined with a human engineer's insight; questions such as why, what and how might help aid aeronautical technical innovation.
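    The Pareto-set notion described in this abstract is easy to make concrete. The following is a minimal Python sketch, not the Airbus tool referenced above, that filters a set of candidate designs down to the non-dominated front for two minimisation objectives; the design values are hypothetical.

```python
import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Return a boolean mask marking the non-dominated rows.

    `objectives` is an (n_designs, n_objectives) array where every
    objective is to be minimised. A design is dominated if another
    design is no worse in all objectives and strictly better in one.
    """
    n = objectives.shape[0]
    non_dominated = np.ones(n, dtype=bool)
    for i in range(n):
        if not non_dominated[i]:
            continue
        # Design j dominates i if j <= i everywhere and j < i somewhere.
        dominates_i = (
            np.all(objectives <= objectives[i], axis=1)
            & np.any(objectives < objectives[i], axis=1)
        )
        if np.any(dominates_i):
            non_dominated[i] = False
    return non_dominated

# Hypothetical two-objective designs: (drag count, negated lift coefficient).
designs = np.array([[120.0, -2.1], [115.0, -1.9], [130.0, -2.3], [118.0, -2.1]])
mask = pareto_front(designs)
print(designs[mask])  # the non-dominated trade-off set a designer would inspect
```

    A visual front like those discussed in the abstract is then just a scatter plot of the rows selected by this mask.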

    Using Particle Swarm Optimization for Market Timing Strategies

    Market timing is the issue of deciding when to buy or sell a given asset on the market. As one of the core issues of algorithmic trading systems, designers of such systems have turned to computational intelligence methods to aid them in this task. In this thesis, we explore the use of Particle Swarm Optimization (PSO) within the domain of market timing. PSO is a search metaheuristic that was first introduced in 1995 [28] and is based on the behavior of birds in flight. Since its inception, the PSO metaheuristic has seen extensions to adapt it to a variety of problems including single objective optimization, multiobjective optimization, niching and dynamic optimization problems. Although popular in other domains, PSO has seen limited application to the issue of market timing. The current incumbent algorithm within the market timing domain is the Genetic Algorithm (GA), based on the volume of publications as noted in [40] and [84]. In this thesis, we use PSO to compose market timing strategies using technical analysis indicators. Our first contribution is to use a formulation that considers both the selection of components and the tuning of their parameters in a simultaneous manner, approaching market timing as a single objective optimization problem. Current approaches only consider one of those aspects at a time: either selecting from a set of components with fixed values for their parameters, or tuning the parameters of a preset selection of components. Our second contribution is to propose a novel training and testing methodology that explicitly exposes candidate market timing strategies to numerous price trends, to reduce the likelihood of overfitting to a particular trend and to give a better approximation of performance under various market conditions. Our final contribution is to consider market timing as a multiobjective optimization problem, optimizing five financial metrics and comparing the performance of our PSO variants against a well-established multiobjective optimization algorithm. These algorithms address unexplored research areas in the context of PSO algorithms to the best of our knowledge, and are therefore original contributions. The computational results over a range of datasets show that the proposed PSO algorithms are competitive with GAs using the same formulation. Additionally, the multiobjective variant of our PSO algorithm achieves statistically significant improvements over NSGA-II.
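    As context for how the optimiser family works, below is a minimal global-best PSO sketch in Python using the common inertia-weight update. It minimises a toy objective in place of a market-timing fitness function; all parameter values are illustrative assumptions, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    """Toy objective to minimise; a market-timing fitness would go here."""
    return np.sum(x**2, axis=-1)

def pso(objective, dim=5, n_particles=30, iters=200,
        w=0.72, c1=1.49, c2=1.49, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                      # each particle's best position
    pbest_val = objective(pos)
    gbest = pbest[np.argmin(pbest_val)].copy()  # swarm's best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Inertia + cognitive pull to personal best + social pull to global best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = objective(pos)
        improved = val < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(sphere)
print(best_f)  # should approach 0
```

    In the thesis's formulation, each particle would instead encode both the chosen indicator components and their parameter values, with fitness computed by backtesting the resulting strategy.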

    Comparative Study of Dimension Reduction Approaches With Respect to Visualization in 3-Dimensional Space

    In the present big data era, there is a need to process large amounts of unlabeled data and to find patterns in the data that can be put to further use. If the data has many dimensions, it is very hard to gain any insight into it. It is possible to convert high-dimensional data to low-dimensional data using different techniques; this dimension reduction is important and makes tasks such as classification, visualization, communication and storage much easier. The loss of information should be small while mapping data from the high-dimensional space to the low-dimensional space. Dimension reduction has been a significant problem in many fields, as it requires discarding unimportant features and discovering only the representations that are needed; this motivates our interest in the problem and forms the basis of the research. We consider different techniques prevailing for dimension reduction, such as PCA (Principal Component Analysis), SVD (Singular Value Decomposition), DBN (Deep Belief Networks) and stacked auto-encoders. This thesis is ultimately intended to show which technique performs best for dimension reduction, with the help of the experiments studied.
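    As a minimal illustration of one of the techniques compared, the sketch below reduces a hypothetical data set to three dimensions with PCA computed via SVD, using only NumPy; it is a generic example under that assumption, not the thesis's experimental code.

```python
import numpy as np

def pca_3d(X: np.ndarray) -> np.ndarray:
    """Project data onto its first three principal components via SVD."""
    Xc = X - X.mean(axis=0)                    # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:3].T                       # coordinates in 3-D space

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                 # hypothetical 20-D data set
Z = pca_3d(X)
print(Z.shape)                                 # (500, 3): ready for a 3-D scatter plot
```

    The same reduced coordinates could then be compared against those produced by an auto-encoder or DBN to judge which mapping preserves structure best in 3-D space.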

    From visual data exploration to visual data mining: a survey.

    We survey work on the different uses of graphical mapping and interaction techniques for visual data mining of large data sets represented as table data. Basic terminology related to data mining, data sets, and visualization is introduced. Previous work on information visualization is reviewed in light of different categorizations of techniques and systems. The role of interaction techniques is discussed, in addition to work addressing the question of selecting and evaluating visualization techniques. We review some representative work on the use of information visualization techniques in the context of mining data. This includes both visual data exploration and visually expressing the outcome of specific mining algorithms. We also review recent innovative approaches that attempt to integrate visualization into the DM/KDD process, using it to enhance user interaction and comprehension.

    Analyzing textual data by multiple word clouds

    Word clouds are a good way to represent textual data with meta information. However, they are difficult to use when it comes to analyzing multiple data sources, owing to their poor comparability. The proposed RadCloud merges multiple standard clouds into a single one while retaining the information about the origin of the words. Using separate clouds as well as this new visualization technique, a tool for textual data analysis is created: MuWoC. The resulting software is then evaluated by using it with multiple data sources, ranging from encyclopedia articles and literature to custom CSV files.
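    To make the underlying bookkeeping concrete, here is a small Python sketch of per-source word counting with each word's origin retained in a merged structure, which is the kind of information a RadCloud-style merged view needs. The sources are hypothetical and this is not MuWoC's actual code.

```python
from collections import Counter
import re

def word_frequencies(text: str) -> Counter:
    """Lowercase and tokenise the text, then count word occurrences;
    a cloud renderer would map counts to font sizes."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Hypothetical sources to compare side by side.
sources = {
    "article": "swarm optimisation of swarm behaviour",
    "thesis": "visualisation of optimisation results",
}
per_source = {name: word_frequencies(text) for name, text in sources.items()}

# A merged cloud must keep each word's origin, as RadCloud does.
merged = {}
for name, counts in per_source.items():
    for word, count in counts.items():
        merged.setdefault(word, {})[name] = count

print(per_source["article"]["swarm"])  # 2
print(merged["optimisation"])          # {'article': 1, 'thesis': 1}
```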

    A constraint-based approach for assessing the capabilities of existing designs to handle product variation

    All production machinery is designed with an inherent capability to handle slight variations in the product. This is initially achieved by simply providing adjustments to allow, for example, changes that occur in pack sizes to be accommodated through user settings or complete sets of change parts. By the appropriate use of these capabilities, most variations in product can be handled. However, when extreme setup conditions, such as major changes in product size and configuration, are considered, there is no guarantee that the existing machines are able to cope. The problem is even more difficult to deal with when completely new product families are proposed to be made on an existing product line. Such changes in product range are becoming more common as producers respond to demands for ever increasing customization and product differentiation. An issue exists due to the lack of knowledge of the capabilities of the machines being employed. This often forces the producer to undertake a series of practical product trials. These, however, can only be undertaken once the product form has been decided and produced in sufficient numbers. There is then little opportunity to make changes that could greatly improve the potential output of the line and reduce waste. There is thus a need for a supportive modelling approach that allows the effect of variation in products to be analyzed together with an understanding of the manufacturing machine capability. Only through their analysis and interaction can the capabilities be fully understood and refined to make production possible. This thesis presents a constraint-based approach that offers a solution to the problems above. While employing this approach it has been shown that a generic process can be formed to identify the limiting factors (constraints) of the variant products to be processed. These identified constraints can be mapped to form the potential limits of performance for the machine. The limits of performance of a system (performance envelopes) can be employed to assess the design's capability to cope with product variation. The approach is successfully demonstrated on three industrial case studies.
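    A minimal sketch of the performance-envelope idea described above: express a machine's limits as per-parameter ranges and test whether a proposed product variant falls inside them. The parameter names and limits here are hypothetical illustrations, not taken from the thesis.

```python
# Hypothetical performance envelope: per-parameter (min, max) limits
# derived from a machine's identified constraints.
ENVELOPE = {
    "pack_width_mm":  (50.0, 220.0),
    "pack_height_mm": (30.0, 180.0),
    "line_speed_ppm": (20.0, 120.0),   # packs per minute
}

def violated_constraints(product: dict) -> list:
    """Return descriptions of the envelope constraints a variant breaks."""
    failures = []
    for name, (lo, hi) in ENVELOPE.items():
        value = product[name]
        if not lo <= value <= hi:
            failures.append(f"{name}={value} outside [{lo}, {hi}]")
    return failures

# A proposed variant can be screened before any physical product trial.
variant = {"pack_width_mm": 240.0, "pack_height_mm": 90.0, "line_speed_ppm": 60.0}
print(violated_constraints(variant) or "within envelope")
```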

    Iterative Visual Analytics and its Applications in Bioinformatics

    Indiana University-Purdue University Indianapolis (IUPUI). You, Qian. Ph.D., Purdue University, December 2010. Iterative Visual Analytics and its Applications in Bioinformatics. Major Professors: Shiaofen Fang and Luo Si. Visual analytics is a new and developing field that addresses the challenges of knowledge discovery from the massive amounts of available data. It supports humans' reasoning capabilities with interactive visual interfaces for exploratory data analysis tasks, where automatic data mining methods fall short due to the lack of pre-defined objective functions. Analyzing large volumes of data for biological discoveries raises similar challenges. The domain knowledge of biologists and bioinformaticians is critical in hypothesis-driven discovery tasks, yet the development of visual analytics frameworks for bioinformatic applications is still in its infancy. In this dissertation, we propose a general visual analytics framework – Iterative Visual Analytics (IVA) – to address some of the challenges in current research. The framework consists of three progressive steps for exploring data sets of increasing complexity: Terrain Surface Multi-dimensional Data Visualization, a new multi-dimensional technique that highlights global patterns from the profile of a large-scale network and can lead users' attention to characteristic regions for discovering otherwise hidden knowledge; Correlative Multi-level Terrain Surface Visualization, a new visual platform that provides an overview of, and boosts the major signals in, the numeric correlations among nodes in interconnected networks of different contexts, enabling users to gain critical insights and perform data analytical tasks in the context of multiple correlated networks; and the Iterative Visual Refinement Model, an innovative process that treats users' perceptions as the objective functions and guides the users to form the optimal hypothesis by improving the desired visual patterns. The latter is a formalized model for interactive explorations that converge to optimal solutions. We also showcase our approach with bio-molecular data sets and demonstrate its effectiveness in several biomarker discovery applications.
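    The Iterative Visual Refinement Model is described as treating the user's perception as the objective function. As a purely schematic Python illustration of that control flow, and not the dissertation's actual model, a hill-climbing loop over visualization parameters might look like the following; every function here is a hypothetical placeholder.

```python
def iterative_visual_refinement(initial_view, render, user_score, perturb,
                                iterations=10):
    """Schematic human-in-the-loop refinement: the user's rating of each
    rendered view plays the role of the objective function."""
    best_view = initial_view
    best_score = user_score(render(best_view))
    for _ in range(iterations):
        candidate = perturb(best_view)          # propose a refined view
        score = user_score(render(candidate))   # human judges the rendering
        if score > best_score:                  # keep only improvements
            best_view, best_score = candidate, score
    return best_view

# Toy stand-ins: a "view" is one number and the user prefers values near 3.
best = iterative_visual_refinement(
    initial_view=0.0,
    render=lambda v: v,
    user_score=lambda img: -abs(img - 3.0),
    perturb=lambda v: v + 0.5,
)
print(best)  # converges toward 3.0
```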