
    Performance Assessment Strategies

    Using engineering performance evaluations to explore design alternatives during the conceptual phase of architectural design helps to understand the relationships between form and performance, and is crucial for developing well-performing final designs. Computer-aided conceptual design has the potential to aid the design team in discovering and highlighting these relationships, especially by means of procedural and parametric geometry to support the generation of geometric designs, and building performance simulation tools to support performance assessments. However, current tools and methods for computer-aided conceptual design in architecture neither explicitly reveal nor allow backtracking of the relationships between the performance and the geometry of the design. They currently support post-engineering rather than early design decisions and the design exploration process. Focusing on large roofs, this research aims to develop a computational design approach that supports designers in performance-driven explorations. The approach is meant to facilitate multidisciplinary integration and the learning process of the designer, not to constrain the process within precompiled procedures or hard engineering formulations, nor to automate it by delegating design creativity to computational procedures. The main output of the research is the PAS (Performance Assessment Strategies) method. It consists of a framework including guidelines and an extensible library of procedures for parametric modelling, and it is structured in three parts. Pre-PAS provides guidelines for defining a design strategy in preparation for the parameterization process. Model-PAS provides guidelines, procedures, and scripts for building the parametric models. Explore-PAS supports the assessment of solutions based on numeric evaluations and performance simulations, until a suitable design solution is identified. PAS has been developed through action research. Several case studies have focused on each step of PAS and on their interrelationships, highlighting the relations between the knowledge available in pre-PAS and the challenges of exploring the solution space in explore-PAS. To facilitate the explore-PAS phase in the case of large solution spaces, the support of genetic algorithms has been investigated and the existing ParaGen method has been further developed. Final case studies have focused on the potential of ParaGen to identify well-performing solutions, to extract knowledge during explore-PAS, and to allow interventions by the designer as an alternative to generations driven solely by coded criteria. Both the use of PAS and its recommended future developments are addressed in the thesis.
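
    The explore-PAS phase outlined above couples parametric variation with performance evaluation and, for large solution spaces, genetic-algorithm search via ParaGen. As a rough illustration of such a loop only (not of PAS or ParaGen themselves), the minimal sketch below searches a toy roof parameter space with a simple genetic algorithm; the parameter names, bounds, and surrogate fitness function are invented for the example.

```python
# Minimal sketch of a genetic-algorithm exploration loop over roof parameters.
# Parameter names, bounds, and the surrogate fitness are invented for this
# example; they are not taken from PAS or ParaGen.
import random

BOUNDS = {"span_m": (20.0, 80.0), "rise_m": (2.0, 15.0), "thickness_m": (0.1, 0.6)}

def random_candidate():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def fitness(c):
    # Hypothetical surrogate: penalise flexible (high deflection proxy) and
    # material-heavy roofs; a real workflow would call performance simulations.
    deflection_proxy = c["span_m"] ** 2 / (c["rise_m"] * c["thickness_m"] * 1e3)
    material_proxy = c["span_m"] * c["thickness_m"]
    return -(deflection_proxy + material_proxy)  # higher is better

def crossover(a, b):
    # Uniform crossover: pick each parameter from one of the two parents.
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(c, rate=0.2):
    return {k: random.uniform(lo, hi) if random.random() < rate else c[k]
            for k, (lo, hi) in BOUNDS.items()}

def explore(pop_size=30, generations=40):
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 3]           # keep the best third
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = explore()
    print("best candidate:", {k: round(v, 2) for k, v in best.items()})
```

    In a PAS-style workflow the fitness call would instead trigger structural or energy simulations on the parametric model, and the designer could inspect intermediate generations and intervene rather than rely on coded criteria alone.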

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society, and the CSE community is at the core of this transformation. However, a combination of disruptive developments, including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers, is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. (Comment: major revision, to appear in SIAM Review.)

    An insider Action Research study focusing on synergy realizations during post-merger integration phase between competing organizations

    Mergers and acquisitions are gaining a lot of prominence in global corporate dynamics as a strategic way for organizations to grow and diversify rapidly. The significance of post-merger integration cannot be overstated (Shrivastava, 1986). Although the main purpose behind organizational mergers and acquisitions is "one plus one makes three", most post-merger integration studies focus on failures (Dutta, Dutta, and Das, 2011). This research study is based in the context of post-merger integration between two competing firms and presents an empirical Action Research study focusing on synergy realization. It builds upon Clayton's (2010) work on Complex Adaptive Systems (CAS) (Stacey, 2011) for realizing synergies amidst post-merger integration. CAS has been complemented by Complexity Leadership Theory (Uhl-Bien, Marion and McKelvey, 2007), which provides some control over the otherwise unpredictable nature of CAS. This study also attempts to utilize proven methodologies and methods oriented around organizational behavior, change management, emergence, co-evolution, and other leadership concepts that are anchored in Mode 2 research. The research methods, as well as the issues related to the research context, evolved continuously while conducting this research study, owing to reflections offered by the double-loop learning process. Although the primary focus of this study was to identify synergy realizations during the merger integration phase, the study also explored the underpinning issues, problems, and challenges faced by organizational members while adjusting to or reconciling the different ways of functioning and behaving that were affecting synergy realizations. It therefore also includes findings on merger-related concerns such as interpersonal issues, human resources, knowledge management, communication, organizational management, leadership, and organizational ethos. This study makes three main contributions. First, it presents innovative insights toward resolving some of the mysteries attached to organizational mergers, by focusing on positive merger objectives through synergy realizations and heeding Clayton's (2010) appeal for scholars and practitioners to go beyond traditional M&A methodologies. Second, it presents an empirical account of Mode 2 knowledge creation concepts such as Action Research, CAS, CLT, SSM, and LiC, which has the potential to inspire similar experimentation in organizational learning and management research. Third, it demonstrates how researching practitioners can make an impact on successful adaptation to organizational change management situations such as those presented by organizational mergers, by bridging the gap between theory and practice and building upon research-oriented knowledge through AR and professional doctorate programmes.

    E-Learning and Digital Education in the Twenty-First Century

    E-learning and digital education approaches are evolving and changing the landscape of teaching and learning at all levels of education throughout the world. Innovation in emerging learning technologies is helping e-learning and digital education meet the needs of the twenty-first century. Due to the digital transformation of everyday practice, the process of learning and education has become more self-paced and accessible at any time from anywhere. The new generations of digital natives are growing up with a set of skills acquired through their engagement with the digital world. In this context, this book includes a collection of chapters intended to facilitate continuous improvement, including flexibility and accessibility, in e-learning and digital education by exploring the challenges and opportunities of innovative approaches through the lenses of current theories, policies, and practices.

    Towards Practical and Secure Channel Impulse Response-based Physical Layer Key Generation

    The current trend toward "smart" devices brings with it a multitude of Internet-enabled, connected devices. The communication of these devices necessarily has to be secured by suitable measures in order to meet the privacy and security requirements on the transmitted information. However, the large number of security incidents involving "smart" devices and the Internet of Things shows that this protection of communication is currently implemented only insufficiently. The causes are manifold: essential security measures are sometimes not considered during the design process or are dropped because of cost pressure. In addition, the nature of the deployed devices complicates the application of classical security mechanisms; solutions in this context are typically highly tailored to specific use cases and, owing to the hardware used, usually have only limited computing and energy resources at their disposal. Here, physical layer security (PLS) approaches can offer an alternative to classical cryptography. In wireless communication, the properties of the transmission channel between two legitimate communication partners can be exploited to implement security primitives and thereby realize security goals. Concretely, reciprocal channel properties can be used to generate a trust anchor in the form of a shared symmetric secret. This procedure is called channel reciprocity based key generation (CRKG). Because of its wide availability, this scheme is mostly realized using the received signal strength indicator (RSSI) as the channel property. This has the drawback that all physical channel properties are reduced to a single value, so that a large part of the available information is neglected. The alternative is to use the full channel state information (CSI). Recent technical developments increasingly make this information available in everyday devices and thus usable for PLS. In this thesis we analyze the questions that arise from a shift toward CSI as key material. Specifically, we study CSI in the form of ultra-wideband channel impulse responses (CIRs). We initially carried out extensive measurements to analyze to what extent the fundamental assumptions of PLS and CRKG are fulfilled and whether CIRs are suitable for key generation at all. We show that the CIRs of the legitimate communication partners exhibit a higher similarity than those observed by an attacker, and that there is therefore an advantage over the attacker at the physical layer that can be exploited for key generation. Based on the results of this initial study, we then present fundamental procedures that are necessary to improve the similarity of the legitimate measurements and thereby enable key generation. Concretely, we present methods that remove the temporal offset between reciprocal measurements and thus increase their similarity, as well as methods that remove the noise inevitably present in the measurements. At the same time, we examine to what extent the underlying security assumptions hold from an attacker's point of view. To this end, we present, implement, and analyze several practical attack methods. These include approaches that attempt to predict the legitimate CIRs with the help of deterministic channel models or ray tracing, as well as machine learning approaches that aim to infer the legitimate CIRs directly from an attacker's observations. Especially with the latter approach we can show that large parts of the CIRs are deterministically predictable. From this we conclude that CIRs should not be used as input to security primitives without adequate preprocessing. Building on these findings, we finally design and implement schemes that are resistant to the presented attacks. The first solution builds on the insight that the attacks are possible because of predictable parts within the CIRs; we therefore propose a classical preprocessing approach that removes these deterministically predictable parts and thereby secures the input material. We implement and analyze this solution and demonstrate its effectiveness as well as its resistance against the proposed attacks. In a second solution, we harness machine learning by incorporating it into the system design itself. Building on its strong performance in pattern recognition, we develop, implement, and analyze a solution that learns to extract from the raw CIRs the random parts that define channel reciprocity and to discard all other, deterministic parts. This secures not only the key material but also the reconciliation of the key material, since differences between the legitimate observations are efficiently removed by the feature extraction. All presented solutions completely avoid exchanging information between the legitimate communication partners, which inherently avoids the associated information leakage as well as energy consumption.
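
    As a rough illustration of the core CRKG idea discussed in this thesis, the sketch below quantizes two noisy observations of the same channel impulse response into bit strings and reports their disagreement rate. The channel model, sizes, and median-threshold quantizer are illustrative assumptions; the thesis's actual preprocessing (time alignment, denoising, removal of predictable CIR components) and the later reconciliation steps are omitted.

```python
# Minimal sketch of channel reciprocity based key generation (CRKG) from a
# channel impulse response (CIR). The channel model, sizes, and the simple
# median-threshold quantizer are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def measure_cir(true_cir, snr_db=20.0):
    """Simulate one noisy observation of the reciprocal channel."""
    noise_power = np.mean(np.abs(true_cir) ** 2) / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(true_cir.shape)
                                        + 1j * rng.standard_normal(true_cir.shape))
    return true_cir + noise

def quantize(cir):
    """One bit per tap: 1 if the tap magnitude exceeds the median, else 0."""
    mags = np.abs(cir)
    return (mags > np.median(mags)).astype(np.uint8)

# A shared multipath channel, observed independently by Alice and Bob.
taps = 64
true_cir = (rng.standard_normal(taps) + 1j * rng.standard_normal(taps)) \
           * np.exp(-np.arange(taps) / 12.0)       # decaying power-delay profile
key_alice = quantize(measure_cir(true_cir))
key_bob = quantize(measure_cir(true_cir))

print(f"bit disagreement rate: {np.mean(key_alice != key_bob):.2%}")
```

    A residual bit disagreement between the two parties is expected; in practice it is removed by reconciliation of the key material before use, a step the sketch omits.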

    Physics-Based Earthquake Ground Shaking Scenarios in Large Urban Areas

    With the ongoing progress of computing power, made available not only by large supercomputer facilities but also by relatively common workstations and desktops, physics-based source-to-site 3D numerical simulations of seismic ground motion will likely become the leading and most reliable tool for constructing ground shaking scenarios for future earthquakes. This paper provides an overview of recent progress on this subject, drawing on the experience gained during a recent research contract between Politecnico di Milano, Italy, and Munich RE, Germany, with the objective of constructing ground shaking scenarios for hypothetical earthquakes in large urban areas worldwide. Within this contract, the SPEED computer code was developed, based on a spectral element formulation enhanced by the Discontinuous Galerkin approach to treat non-conforming meshes. After illustrating the SPEED code, different case studies are overviewed, and the construction of shaking scenarios in the Po river plain, Italy, is considered in more detail. For this case study, the comparison with strong-motion records allows some interesting considerations to be drawn on the strengths and present limitations of such an approach.
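
    For readers unfamiliar with physics-based ground-motion simulation, the toy sketch below propagates a Ricker-wavelet source through a homogeneous 1D medium with a simple finite-difference scheme and records a synthetic seismogram at a receiver. It only illustrates the source, medium, and receiver ingredients; it is not the 3D spectral element / Discontinuous Galerkin formulation used in SPEED, and all numbers are made up.

```python
# Toy 1D acoustic wave propagation with finite differences: source, medium,
# receiver. Illustrative only; not the SEM/DG formulation of SPEED.
import numpy as np

nx, nt = 400, 1200
dx, dt, c = 10.0, 1e-3, 2000.0           # grid step [m], time step [s], wave speed [m/s]
assert c * dt / dx < 1.0                 # CFL stability condition

src_ix, rec_ix = 50, 150                 # source and receiver grid indices
t = np.arange(nt) * dt
f0 = 5.0                                 # Ricker wavelet peak frequency [Hz]
arg = (np.pi * f0 * (t - 1.0 / f0)) ** 2
source = (1 - 2 * arg) * np.exp(-arg)

u_prev = np.zeros(nx)
u = np.zeros(nx)
seismogram = np.zeros(nt)

for it in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u_next = 2 * u - u_prev + (c * dt) ** 2 * lap
    u_next[src_ix] += dt**2 * source[it]         # inject the source term
    u_prev, u = u, u_next
    seismogram[it] = u[rec_ix]                   # record synthetic "ground motion"

print("peak amplitude at receiver:", float(np.abs(seismogram).max()))
```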

    Technical Evaluation Report for Symposium AVT-147: Computational Uncertainty in Military Vehicle Design

    The complexity of modern military systems, as well as the cost and difficulty associated with experimentally verifying system and subsystem designs, makes the use of high-fidelity simulation a future alternative for design and development. The predictive ability of simulations such as computational fluid dynamics (CFD) and computational structural mechanics (CSM) has matured significantly. However, for numerical simulations to be used with confidence in design and development, quantitative measures of uncertainty must be available. The AVT-147 Symposium has been established to compile state-of-the-art methods of assessing computational uncertainty, to identify future research and development needs associated with these methods, and to present examples of how these needs are being addressed and how the methods are being applied. Papers were solicited that address uncertainty estimation associated with high-fidelity, physics-based simulations. The solicitation included papers that identify sources of error and uncertainty in numerical simulation from either the industry perspective or from the disciplinary or cross-disciplinary research perspective. Examples of the industry perspective were to include how computational uncertainty methods are used to reduce system risk at various stages of design or development.
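
    One widely used example of the kind of quantitative uncertainty measure this symposium addresses is Roache's Grid Convergence Index (GCI), which estimates discretization uncertainty from solutions on three systematically refined grids. The sketch below is illustrative only and is not drawn from the symposium papers; the sample drag-coefficient values are made up.

```python
# Illustrative sketch of Roache's Grid Convergence Index (GCI) for estimating
# discretization uncertainty from three systematically refined grids.
import math

def gci(f_fine, f_medium, f_coarse, r=2.0, fs=1.25):
    """Return the observed order of accuracy and the fine-grid GCI."""
    # Observed order from the three solutions (assumes a constant refinement ratio r).
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    rel_err = abs((f_medium - f_fine) / f_fine)
    return p, fs * rel_err / (r ** p - 1.0)

# Hypothetical drag-coefficient values on coarse, medium, and fine grids.
p, gci_fine = gci(f_fine=0.02834, f_medium=0.02851, f_coarse=0.02920)
print(f"observed order p = {p:.2f}, fine-grid GCI = {gci_fine:.2%}")
```

    The observed order p also acts as a sanity check: values far from the nominal order of the scheme suggest that the grids are not yet in the asymptotic range and that the uncertainty estimate should be treated with caution.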