2,964 research outputs found
Optimization as a design strategy. Considerations based on building simulation-assisted experiments about problem decomposition
In this article the most fundamental decomposition-based optimization method - block coordinate search, based on the sequential decomposition of problems into subproblems - and building performance simulation programs are used to reason about a building design process at the micro-urban scale, and strategies are defined to make the search more efficient. Cyclic overlapping block coordinate search is considered here in its double nature of optimization method and surrogate model (and metaphor) of a sequential design process. Heuristic indicators suited to supporting the design of search structures for that method are developed from building-simulation-assisted computational experiments aimed at choosing the form and position of a small building in a plot. Those indicators link the sharing of structure between subspaces ("commonality") to recursive recombination, measured as the freshness of the search wake and the novelty of the search moves. The aim of these indicators is to measure the relative effectiveness of decomposition-based design moves and to create efficient block searches. Implications of a possible use of these indicators in genetic algorithms are also highlighted.
Comment: 48 pages, 12 figures, 3 tables
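The abstract above centres on cyclic (overlapping) block coordinate search: the variables are partitioned into blocks, possibly sharing variables, and the search repeatedly sweeps over the blocks, optimizing one subproblem at a time. A minimal sketch of that scheme follows; the function and parameter names are illustrative assumptions, not the authors' code, and the candidate enumeration stands in for the simulation-driven evaluation used in the paper.

```python
def cyclic_block_coordinate_search(objective, x, blocks, candidates, sweeps=10):
    """Minimize `objective` over a dict of variables `x` by cyclically
    optimizing one block of variables at a time.

    `blocks` is a list of variable-name lists; blocks may overlap, which
    mirrors the "cyclic overlapping" variant discussed in the abstract.
    `candidates(block)` yields trial assignments (dicts) for that block.
    All names here are hypothetical, for illustration only.
    """
    best = objective(x)
    for _ in range(sweeps):
        for block in blocks:                  # one cyclic pass = one sweep
            for values in candidates(block):  # enumerate settings for this block
                trial = dict(x)
                trial.update(values)
                f = objective(trial)
                if f < best:                  # keep strictly improving moves
                    best, x = f, trial
    return x, best


# Example: minimize (a - 1)^2 + (b + 2)^2 with two single-variable blocks.
def grid(block):
    for v in range(-5, 6):
        yield {block[0]: v}

x, best = cyclic_block_coordinate_search(
    lambda d: (d["a"] - 1) ** 2 + (d["b"] + 2) ** 2,
    {"a": 0, "b": 0}, [["a"], ["b"]], grid, sweeps=3)
# converges to a=1, b=-2 with objective 0
```

Because each block is optimized while the others are frozen, the quality of the decomposition (which variables share a block) drives convergence, which is exactly the "commonality" question the indicators in the abstract address.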
Quality measures for ETL processes: from goals to implementation
Extraction transformation loading (ETL) processes play an increasingly important role in the support of modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and there is a need for a more human-centric approach to bring them closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model for ETL process quality characteristics and quantitative measures for each characteristic, based on existing literature. Our model shows dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, where we employ a goal model that includes quantitative components (i.e., indicators) for the evaluation and analysis of alternative design decisions.
Data generator for evaluating ETL process quality
Obtaining the right set of data for evaluating the fulfillment of different quality factors in the extract-transform-load (ETL) process design is rather challenging. First, the real data might be out of reach due to different privacy constraints, while manually providing a synthetic set of data is known to be a labor-intensive task that needs to take various combinations of process parameters into account. More importantly, a single dataset usually does not represent the evolution of data throughout the complete process lifespan, hence missing the plethora of possible test cases. To facilitate such a demanding task, in this paper we propose an automatic data generator (i.e., Bijoux). Starting from a given ETL process model, Bijoux extracts the semantics of data transformations, analyzes the constraints they imply over input data, and automatically generates testing datasets. Bijoux is highly modular and configurable, enabling end-users to generate datasets for a variety of interesting test scenarios (e.g., evaluating specific parts of an input ETL process design, with different input dataset sizes, different distributions of data, and different operation selectivities). We have developed a running prototype that implements the functionality of our data generation framework, and here we report our experimental findings showing the effectiveness and scalability of our approach.
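The core idea in the abstract above is generating input data that respects the constraints an ETL operation implies, with a controllable operation selectivity. A toy illustration of that constraint-aware generation follows; it is a sketch in the spirit of Bijoux, not the tool's actual API, and the schema, predicate, and function names are assumptions.

```python
import random

def generate_rows(schema, predicate, n, selectivity, seed=0):
    """Generate `n` synthetic rows so that a `selectivity` fraction of them
    satisfies `predicate` (e.g., the condition of an ETL filter operation).

    `schema` maps column names to (lo, hi) integer ranges. Rejection
    sampling steers the passing/failing mix toward the target selectivity.
    Hypothetical names, for illustration only.
    """
    rng = random.Random(seed)
    rows = []
    while len(rows) < n:
        row = {col: rng.randint(lo, hi) for col, (lo, hi) in schema.items()}
        # Decide whether the next accepted row should pass the predicate.
        want_pass = sum(1 for r in rows if predicate(r)) < selectivity * n
        if predicate(row) == want_pass:
            rows.append(row)
    return rows


# Example: a filter `age >= 18` tested at 50% selectivity on 10 rows.
rows = generate_rows({"age": (0, 100)}, lambda r: r["age"] >= 18,
                     n=10, selectivity=0.5)
# exactly 5 of the 10 rows satisfy the filter condition
```

A real generator would derive the predicates automatically from the process model rather than take them as Python lambdas, but the sampling loop conveys how selectivity becomes a tunable test parameter.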
CROSS-DB: a feature-extended multidimensional data model for statistical and scientific databases
Statistical and scientific computing applications exhibit characteristics that are fundamentally different from classical database system application domains. The CROSS-DB data model presented in this paper is optimized for use in such applications by providing advanced data modelling methods and application-oriented query facilities, thus providing a framework for optimized data management procedures. CROSS-DB (which stands for Classification-oriented, Redundancy-based Optimization of Statistical and Scientific DataBases) is based on a multidimensional data view. The model differs from other approaches by offering two complementary mechanisms for structuring qualifying information: classification and feature description. Using these mechanisms results in a normalized, low-dimensional database schema which ensures both modelling uniqueness and understandability, while providing enhanced modelling flexibility.
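The two mechanisms named in the abstract above can be made concrete with a toy example: classification arranges category values into a dimension hierarchy used for aggregation, while feature description attaches descriptive attributes to categories without inflating the dimensionality. The sketch below is purely illustrative; the data, names, and functions are assumptions, not part of CROSS-DB.

```python
# Classification: a dimension hierarchy (child -> parent) used to roll up facts.
classification = {
    "Munich": "Bavaria",
    "Nuremberg": "Bavaria",
    "Bavaria": "Germany",
}

# Feature description: attributes of categories that are NOT hierarchy levels,
# so they add no extra dimensions to the schema.
features = {
    "Munich": {"population": 1_500_000},
    "Nuremberg": {"population": 500_000},
}

# Facts keyed by (region, year); the measure is some observed quantity.
facts = {("Munich", 2020): 42.0, ("Nuremberg", 2020): 17.0}

def roll_up(region):
    """Aggregate the measure over all direct children of `region`."""
    members = [r for r, parent in classification.items() if parent == region]
    return sum(v for (r, _year), v in facts.items() if r in members)

def per_capita(region, year):
    """Combine a fact with a descriptive feature of its category."""
    return facts[(region, year)] / features[region]["population"]
```

Keeping `population` as a feature rather than a dimension is the point of the second mechanism: queries can still use it (as in `per_capita`), but the cube itself stays low-dimensional.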
Which heuristics can aid financial-decision-making?
© 2015 Elsevier Inc. We evaluate the contribution of Nobel Prize-winner Daniel Kahneman, often in association with his late co-author Amos Tversky, to the development of our understanding of financial decision-making and the evolution of behavioural finance as a school of thought within Finance. Whilst a general evaluation of the work of Kahneman would be a massive task, we constrain ourselves to a narrower discussion of his vision of financial decision-making compared to a possible alternative advanced by Gerd Gigerenzer along with numerous co-authors. Both Kahneman and Gigerenzer agree on the centrality of heuristics in decision-making. However, for Kahneman heuristics often appear as a fallback when the standard von Neumann-Morgenstern axioms of rational decision-making do not describe investors' choices. In contrast, for Gigerenzer heuristics are simply a more effective way of evaluating choices in the rich and changing decision-making environment investors must face. Gigerenzer challenges Kahneman to move beyond substantiating the presence of heuristics towards a more tangible, testable description of their use and disposal within the ever-changing decision-making environment financial agents inhabit. Here we see the emphasis placed by Gigerenzer on how context and cognition interact to form new schemata for fast and frugal reasoning as offering a productive vein of new research. We illustrate how the interaction between cognition and context already characterises much empirical research, and it appears the fast and frugal reasoning perspective of Gigerenzer can provide a framework to enhance our understanding of how financial decisions are made.
Rationality, Behavior, Institutional and Economic Change in Schumpeter
In 1940 Schumpeter wrote a paper entitled "The Meaning of Rationality in the Social Sciences", which was intended for one of the meetings of a seminar he took the initiative to start, including Talcott Parsons, Wassily Léontief, Paul Sweezy and other Harvard scholars. In this paper Schumpeter thoroughly develops his own conception of rationality in economics. First, the paper is interesting in itself because it is based on a sophisticated methodological analysis. Schumpeter interestingly anticipates some important debates concerning the problem of rationality and behavior in economics and presents arguments that make his ideas very topical. Second, Schumpeter's conception of rationality is linked to his methodological background (both individualistic and holistic), which is rooted in his economic sociology and explains the relationships he stresses between individual behavior and collective entities. In this contribution we present the arguments developed by Schumpeter in his 1940 paper and analyze the reasons why his notion of rationality can be seen as a key component of his conception of economic and institutional change.
- …