    Application of shape grammar theory to underground rail station design and passenger evacuation

    This paper outlines the development of a computer design environment that generates station ‘reference’ plans for analysis by designers at the project feasibility stage. The program uses the theoretical concept of shape grammar, based on the principles of recognising and replacing particular shapes, to generate station layouts. The novel shape grammar rules developed here produce multiple plans of accurately sized infrastructure faster than traditional means. A finite set of station infrastructure elements, together with a finite set of connection possibilities between them directed by regulations and the logic of station usage, allows increasingly complex composite shapes to be produced automatically, some of which are credible station layouts at ‘reference’ block plan level. The method of generating shape grammar plans is aligned with London Underground standards, in particular the Station Planning Standards and Guidelines, 5th edition (SPSG5 2007), and the BS 7974 fire safety engineering process. Quantitative testing is performed with existing evacuation modelling software. The prototype system, named SGEvac, has both the scope and the potential to be adapted to any other country’s design legislation.
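
    The recognition-and-replacement idea behind shape grammars can be sketched in a few lines. The block labels and rules below are illustrative assumptions, not taken from SGEvac: a "shape" is a tuple of labelled blocks, and each rule recognises a label and replaces it with a more detailed composition.

```python
# Minimal, hypothetical shape-grammar sketch: labels and rules are
# illustrative only (not SGEvac's actual rule set).
RULES = {
    # recognised label -> first replacement is applied
    "station": [("entrance", "circulation", "platform")],
    "circulation": [("ticket_hall", "stairs"), ("ticket_hall", "escalator")],
}

def expand(shape, rules):
    """Apply the first applicable rule to the leftmost matching block."""
    for i, block in enumerate(shape):
        if block in rules:
            replacement = rules[block][0]
            return shape[:i] + replacement + shape[i + 1:]
    return shape  # no rule applies: the shape is terminal

layout = ("station",)
while True:
    new_layout = expand(layout, RULES)
    if new_layout == layout:
        break
    layout = new_layout

print(layout)  # ('entrance', 'ticket_hall', 'stairs', 'platform')
```

    Enumerating every replacement choice (rather than always the first) is what yields the multiple candidate layouts the abstract describes; constraints from standards such as SPSG5 would then prune non-credible compositions.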

    iStarJSON : a lightweight data-format for i* models

    JSON is one of the most widely used data-interchange formats. Although many open tools exist for modelling with i*, none of them supports JSON. In this paper we propose iStarJSON, a JSON-based language for interchanging i* models. We also present open source software that transforms XML-based models into JSON models and exposes a set of web services for mining iStarJSON models.
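
    The XML-to-JSON transformation at the core of such a tool can be sketched with the Python standard library. The element and attribute names below are assumptions for illustration; the real istarml and iStarJSON schemas may differ.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical i* XML fragment; element/attribute names are illustrative.
XML_MODEL = """
<istarml>
  <actor id="a1" name="Traveller">
    <goal id="g1" name="Trip planned"/>
    <task id="t1" name="Book ticket"/>
  </actor>
</istarml>
"""

def to_json(xml_text):
    """Convert an XML-based i* model into a JSON representation."""
    root = ET.fromstring(xml_text)
    model = {"actors": []}
    for actor in root.findall("actor"):
        model["actors"].append({
            "id": actor.get("id"),
            "name": actor.get("name"),
            "intentionalElements": [
                {"id": el.get("id"), "name": el.get("name"), "type": el.tag}
                for el in actor
            ],
        })
    return json.dumps(model, indent=2)

print(to_json(XML_MODEL))
```

    A lightweight JSON form like this is what web services can conveniently serve and mine, since JSON maps directly onto the data structures of most web clients.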

    Designing Web-enabled services to provide damage estimation maps caused by natural hazards

    The availability of building stock inventory data and demographic information is an important requirement for risk assessment studies when attempting to predict and estimate losses due to natural hazards such as earthquakes, storms, floods or tsunamis. The better this information is provided, the more accurately damage to structures and lifelines can be predicted and the better the expected impact on the population can be estimated. When a disaster strikes, a map is often one of the first requirements for answering questions related to location, casualties and damage zones caused by the event. Maps of appropriate scale that represent relative and absolute damage distributions may be of great importance for rescuing lives and property, and for providing relief. However, such maps are often difficult to obtain during the first hours or even days after a natural disaster occurs. The Open Geospatial Consortium Web Services (OWS) specifications enable access to datasets and services through shared, distributed and interoperable web-enabled environments. In view of these advantages, this paper proposes the use of OWS as a possible solution to the problem of acquiring suitable datasets for risk assessment studies. The design of web-enabled services was carried out using the municipality of Managua (Nicaragua), and the development of earthquake damage and loss estimation maps, as a first case study. Four organizations located in different places are involved in this proposal and connected through web services, each with a specific role.
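
    As a concrete illustration, a damage map served through OWS can be fetched with a standard WMS GetMap request. The request parameters follow the OGC WMS 1.3.0 specification, but the endpoint, layer name and bounding box below are placeholder assumptions, not the paper's actual services.

```python
from urllib.parse import urlencode

# Placeholder WMS endpoint and layer; only the parameter names follow
# the OGC WMS 1.3.0 specification.
BASE_URL = "https://example.org/geoserver/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "managua:damage_estimate",   # hypothetical damage layer
    "CRS": "EPSG:4326",
    "BBOX": "11.9,-86.4,12.3,-86.1",       # approximate box near Managua
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}

getmap_url = BASE_URL + "?" + urlencode(params)
print(getmap_url)  # any WMS client or browser can render this request
```

    Because every participating organization only needs to publish and consume such standard requests, each can keep its own data store while still contributing its role to the shared damage-estimation workflow.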

    Precoding in multigateway multibeam satellite systems

    This paper considers a multigateway multibeam satellite system with multiple feeds per beam. In these systems, each gateway serves a set of beams (a cluster), so the overall data traffic is generated at different geographical areas. Full frequency reuse among beams is considered, so interference mitigation techniques are mandatory. Specifically, this paper aims at designing the precoding scheme, which, in contrast to single-gateway schemes, entails two main challenges. First, the precoding matrix must be separated into feed groups assigned to each gateway. Second, complete channel state information (CSI) is required at each gateway, leading to a large communication overhead. To solve these problems, a design based on a regularized singular value block decomposition of the channel matrix is presented, so that both intercluster (i.e., between beams of different clusters) and intracluster (i.e., between beams of the same cluster) interference is minimized. In addition, different gateway cooperative schemes are analyzed in order to keep the inter-gateway communication low. Furthermore, the impact of feeder link interference (i.e., interference between different feeder links) is analyzed, and it is shown both numerically and analytically that system performance is severely reduced whenever this interference occurs, even when precoding is applied to counteract it. Finally, multicast transmission is also considered. Numerical simulations are run against the latest fixed broadband communication standard, DVB-S2X, so that the effect of quantized feedback is evaluated. The proposed precoding technique achieves performance close to single-gateway operation even when cooperation among gateways is low.
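
    The role of regularization in such a precoder can be illustrated with a generic regularized (MMSE-style) linear precoder, W = Hᵀ(HHᵀ + αI)⁻¹. This is not the paper's regularized singular value block decomposition split across gateways; the tiny real-valued 2x2 channel below is an illustrative assumption only, showing how regularization drives the effective channel HW toward a diagonal (interference-free) matrix.

```python
# Generic regularized linear precoder sketch on a toy 2x2 real channel.
# NOT the paper's multigateway block design -- illustration only.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def regularized_precoder(H, alpha):
    """W = H^T (H H^T + alpha*I)^(-1): regularized channel inversion."""
    HHt = matmul(H, transpose(H))
    HHt[0][0] += alpha
    HHt[1][1] += alpha
    return matmul(transpose(H), inv2(HHt))

H = [[1.0, 0.3], [0.2, 0.9]]          # toy channel with cross-beam leakage
W = regularized_precoder(H, alpha=0.1)
HW = matmul(H, W)                      # effective channel: near-diagonal
print(HW)
```

    The regularizer α trades residual interference (off-diagonal terms of HW) against noise amplification in W; in the multigateway setting the extra difficulty is that the columns of W are partitioned among gateways, each with imperfect CSI.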

    Transformations of High-Level Synthesis Codes for High-Performance Computing

    Specialized hardware architectures promise a major step in performance and energy efficiency over the traditional load/store devices currently employed in large-scale computing systems. The adoption of high-level synthesis (HLS) from languages such as C/C++ and OpenCL has greatly increased programmer productivity when designing for such platforms. While this has enabled a wider audience to target specialized hardware, the optimization principles known from traditional software design are no longer sufficient to implement high-performance codes, so fast and efficient codes for reconfigurable platforms remain challenging to design. To alleviate this, we present a set of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications. Our work provides a toolbox for developers, in which we systematically identify classes of transformations, the characteristic effects of each on the HLS code and the resulting hardware (e.g., increased data reuse or resource consumption), and the objectives each transformation can target (e.g., resolving interface contention or increasing parallelism). We show how these can be used to efficiently exploit pipelining, on-chip distributed fast memory, and on-chip streaming dataflow, allowing for massively parallel architectures. To quantify the effect of our transformations, we use them to optimize a set of throughput-oriented FPGA kernels, demonstrating that our enhancements are sufficient to scale up parallelism within the hardware constraints. With the transformations covered, we hope to establish a common framework for performance engineers, compiler developers, and hardware developers to tap into the performance potential offered by specialized hardware architectures using HLS.
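
    One transformation from this class is splitting an accumulation into K interleaved partial sums: in HLS this removes the one-cycle loop-carried dependency on the accumulator, so a deeply pipelined adder can accept a new input every cycle. The sketch below (plain Python, as a stand-in for the HLS C++ rewrite) only demonstrates that the rewrite is numerically equivalent; K = 4 stands in for an assumed adder pipeline depth.

```python
# Pipelining-enabling rewrite of a reduction, shown in software form.

def naive_sum(xs):
    acc = 0.0
    for x in xs:              # each iteration waits on the previous acc
        acc += x
    return acc

def interleaved_sum(xs, k=4):
    partials = [0.0] * k      # in hardware: k independent registers
    for i, x in enumerate(xs):
        partials[i % k] += x  # dependency distance is now k iterations
    return sum(partials)      # final reduction tree after the loop

data = [float(i) for i in range(16)]
print(naive_sum(data), interleaved_sum(data))  # 120.0 120.0
```

    In floating point the two versions can differ by rounding, since the summation order changes; for integer-valued data as here they agree exactly, which is why HLS tools require the programmer (not the compiler) to apply this transformation.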

    Support for collaborative component-based software engineering

    Collaborative system composition during design has been poorly supported by traditional CASE tools (which have usually concentrated on supporting individual projects) and almost exclusively focused on static composition. Little support has been developed for maintaining large distributed collections of heterogeneous software components across a number of projects. The CoDEEDS project addresses the collaborative determination, elaboration, and evolution of design spaces that describe both static and dynamic compositions of software components from sources such as component libraries, software service directories, and reuse repositories. The GENESIS project has focused, in the development of OSCAR, on the creation and maintenance of large software artefact repositories. The most recent extensions explicitly address the provision of cross-project global views of large software collections and historical views of individual artefacts within a collection. The long-term benefits of such support can only be realised if OSCAR and CoDEEDS are widely adopted, and steps to facilitate this are described.

    This book continues to provide a forum, which the recent book Software Evolution with UML and XML started, where expert insights are presented on the subject. In that book, initial efforts were made to link together three current phenomena: software evolution, UML, and XML. This book focuses on the practical side of linking them, that is, how UML and XML and their related methods and tools can assist software evolution in practice. Considering that nowadays software starts evolving before it is delivered, an apparent feature of software evolution is that it happens over all stages and all aspects, and therefore all possible techniques should be explored. This book explores techniques based on UML and XML, and combinations of them with other techniques, from theory to tools.

    Software evolution happens at all stages: chapters in this book describe software evolution issues present at the stages of software architecting, modelling/specifying, assessing, coding, validating, design recovery, program understanding, and reuse. Software evolution happens in all aspects: chapters illustrate that software evolution issues arise in web applications, embedded systems, software repositories, component-based development, object models, development environments, software metrics, UML use case diagrams, system models, legacy systems, safety-critical systems, user interfaces, software reuse, evolution management, and variability modelling. Software evolution needs to be facilitated with all possible techniques: chapters demonstrate techniques such as formal methods, program transformation, empirical study, tool development, standardisation, and visualisation, used to control system changes to meet organisational and business objectives in a cost-effective way. On the journey of the grand challenge posed by software evolution, the contributory authors of this book have already made further advances.