4,651 research outputs found

    A comparison of integration architectures

    This paper presents GenSIF, a Generic Systems Integration Framework. GenSIF features a pre-planned, domain-wide development process and facilitates system integration and project coordination for very large, complex and distributed systems. Domain analysis, integration architecture design and infrastructure design are identified as the three main components of GenSIF. We then map Bellcore's OSCA interoperability architecture, ANSA, IBM's SAA and Bull's DCM into GenSIF and, using the GenSIF concepts, compare each of these architectures. GenSIF serves as a general framework to evaluate and position specific architectures. The OSCA architecture is used to discuss the impact of vendor architectures on application development. All opinions expressed in this paper, especially with regard to the OSCA architecture, are the opinions of the author and do not necessarily reflect the point of view of any of the mentioned companies.

    MegSDF Mega-system development framework

    A framework for developing large, complex software systems, called Mega-Systems, is specified. The framework incorporates engineering, managerial, and technological aspects of development, concentrating on an engineering process. MegSDF proposes developing Mega-Systems as open distributed systems, pre-planned to be integrated with other systems, and designed for change. At the management level, MegSDF divides the development of a Mega-System into multiple coordinated projects, distinguishing between a meta-management for the whole development effort, responsible for long-term, global objectives, and local managements for the smaller projects, responsible for local, temporary objectives. At the engineering level, MegSDF defines a process model which specifies the tasks required for developing Mega-Systems, including their deliverables and interrelationships. The engineering process emphasizes the coordination required to develop the constituent systems. The process is active for the lifetime of the Mega-System and compatible with different approaches for performing its tasks. The engineering process consists of System, Mega-System, Mega-System Synthesis, and Meta-Management tasks. System tasks develop constituent systems. Mega-System tasks provide a means for engineering coordination, including Domain Analysis, Mega-System Architecture Design, and Infrastructure Acquisition tasks. Mega-System Synthesis tasks assemble Mega-Systems from the constituent systems. The Meta-Management task plans and controls the entire process. The domain analysis task provides a general, comprehensive, non-constructive domain model, which is used as a common basis for understanding the domain. MegSDF builds the domain model by integrating multiple significant perceptions of the domain. It recommends using a domain modeling schema to facilitate modeling and integrating the multiple perceptions. 
The Mega-System architecture design task specifies a conceptual architecture and an application architecture. The conceptual architecture specifies common design and implementation concepts and is defined using multiple views. The application architecture maps the domain model into an implementation and defines the overall structure of the Mega-System, its boundaries, components, and interfaces. The infrastructure acquisition task addresses the technological aspects of development. It is responsible for choosing, developing or purchasing, validating, and supporting an infrastructure. The infrastructure integrates the enabling technologies into a unified platform which is used as a common solution for handling technologies. The infrastructure facilitates portability of systems and incorporation of new technologies. It is implemented as a set of services, divided into separate service groups which correspond to the views identified in the conceptual architecture.
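The task structure the abstract describes can be sketched as a simple dependency graph. The encoding below is an illustrative reading of the prose, not the paper's formal process model; the dependency edges are assumptions drawn from the order in which the abstract introduces the tasks.

```python
# Illustrative encoding of the MegSDF engineering-process tasks and an
# assumed set of precedence relationships (not the paper's formal notation).
megsdf_tasks = {
    "Meta-Management":                 [],  # plans and controls the whole process
    "Domain Analysis":                 ["Meta-Management"],
    "Mega-System Architecture Design": ["Domain Analysis"],
    "Infrastructure Acquisition":      ["Mega-System Architecture Design"],
    "System":                          ["Mega-System Architecture Design",
                                        "Infrastructure Acquisition"],
    "Mega-System Synthesis":           ["System"],
}

def ordering(tasks):
    """Topological order: every task appears after its prerequisites."""
    done, order = set(), []
    def visit(t):
        if t in done:
            return
        for dep in tasks[t]:
            visit(dep)
        done.add(t)
        order.append(t)
    for t in tasks:
        visit(t)
    return order

print(ordering(megsdf_tasks))
```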

    Architectures and Key Technical Challenges for 5G Systems Incorporating Satellites

    Satellite Communication systems are a promising solution to extend and complement terrestrial networks in unserved or under-served areas. This aspect is reflected by recent commercial and standardisation endeavours. In particular, 3GPP recently initiated a Study Item for New Radio-based, i.e., 5G, Non-Terrestrial Networks aimed at deploying satellite systems either as a stand-alone solution or as an integration to terrestrial networks in mobile broadband and machine-type communication scenarios. However, typical satellite channel impairments, such as large path losses, delays, and Doppler shifts, pose severe challenges to the realisation of a satellite-based NR network. In this paper, based on the architecture options currently being discussed in the standardisation fora, we discuss and assess the impact of the satellite channel characteristics on the physical and Medium Access Control layers, both in terms of transmitted waveforms and procedures for enhanced Mobile BroadBand (eMBB) and NarrowBand-Internet of Things (NB-IoT) applications. The proposed analysis shows that the main technical challenges are related to the PHY/MAC procedures, in particular Random Access (RA), Timing Advance (TA), and Hybrid Automatic Repeat reQuest (HARQ) and, depending on the considered service and architecture, different solutions are proposed. Comment: Submitted to Transactions on Vehicular Technologies, April 201
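The scale of the delay and Doppler impairments the abstract mentions can be illustrated with rough numbers. The sketch below is a back-of-the-envelope calculation, not taken from the paper; the orbit altitude and carrier frequency are assumed example values, and the Doppler figure is an upper bound that assumes the orbital velocity is fully aligned with the line of sight.

```python
import math

C = 299_792_458.0    # speed of light, m/s
R_EARTH = 6_371e3    # mean Earth radius, m
MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def one_way_delay_s(altitude_m: float) -> float:
    """Minimum one-way propagation delay (satellite at zenith)."""
    return altitude_m / C

def max_doppler_hz(altitude_m: float, carrier_hz: float) -> float:
    """Worst-case Doppler shift for a circular orbit, approximating the
    satellite's orbital speed as fully radial to the user (upper bound)."""
    v_orbital = math.sqrt(MU / (R_EARTH + altitude_m))  # circular-orbit speed
    return carrier_hz * v_orbital / C

# Assumed example: LEO at 600 km altitude, S-band carrier at 2 GHz
delay_ms = one_way_delay_s(600e3) * 1e3
doppler_khz = max_doppler_hz(600e3, 2e9) / 1e3
print(f"one-way delay ~ {delay_ms:.1f} ms, max Doppler ~ {doppler_khz:.0f} kHz")
```

Even these optimistic zenith-pass numbers dwarf terrestrial NR design assumptions, which is why the RA, TA, and HARQ procedures need adaptation.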

    Increasing the value of research: a comparison of the literature on critical success factors for projects, IT projects and enterprise resource planning projects

    Since the beginning of modern project management in the 1960s, academic researchers have sought to identify a definitive list of Critical Success Factors (CSFs), the key things that project managers must get right in order to deliver a successful product. With the advent of Information Technology (IT) projects and, more recently, projects to deliver Enterprise Resource Planning (ERP) systems, attention has turned to identifying definitive lists of CSFs for these more specific project types. The purpose of this paper is to take stock of this research effort by examining how thinking about each type of project has evolved over time, before producing a consolidated list of CSFs for each as a basis for comparison. This process reveals a high degree of similarity, leading to the conclusion that the goal of identifying a generic list of CSFs for project management has been achieved. Therefore, rather than continuing to describe lists of CSFs, researchers could increase the value of their contribution by taking a step forward and focusing on why, despite this apparent knowledge of how to ensure their success, ERP projects continue to fail.

    Improving interface management on a mega construction project

    ABSTRACT Interface management has in recent years become a key area of focus within the construction sector as the industry undertakes more complex projects. These mega projects are characterised by their complexity, huge scale, high cost and long duration. This study was conducted in an attempt to understand interface management in its entirety and its role within mega construction projects, with the aim of developing a workflow to be used for the management of interfaces on mega projects. To address the objectives of this study a case study method was adopted and questionnaires were utilized to gather data. A total of 50 questionnaires were sent out to ten specialist contractors on the selected mega project, and 36 questionnaires were returned. Through the process of content analysis the results were as follows: Firstly, a number of different types of interfaces were found to exist within the project environment, including design interfaces, design-construction interfaces, systems interfaces, contractual interfaces, organizational interfaces and construction interfaces. Secondly, a number of issues exist within the project environment which give rise to interface challenges. These root causes were found to include, to name a few, poor scope definition, different contracting strategies, poor co-ordination, scope gaps, access delays, poor planning, lack of communication and lack of interface management strategies. These issues can therefore be regarded as catalysts for interface problems within the project environment. To meet the third objective, a number of improvements to the current interface management strategies were noted. These improvements included using software such as building information modelling, efficient scheduling methods, an interface management team, an interface management procedure, contractually identified interfaces, proper communication, better resource planning and better stakeholder management. 
Through the study of these improvements, this study proposed a stage-gate workflow process for the management of interfaces on a mega construction project so as to eliminate possible interface risks. Each stage gate introduces an interface management workflow and the items to evaluate at that particular stage gate, to ensure that interfaces are addressed collectively throughout the project. Keywords: Mega construction project, Project complexity, Interfaces, Interface management.

    Buildings Temporary Yet Efficient

    With the aim of valorising and spreading the environmental legacy of Expo Milano 2015, the Italian Ministry for the Environment has produced a technical-educational publication in cooperation with Expo 2015 SpA, Politecnico di Milano and IEFE – Università Bocconi. “The EXPO we learned. The legacy of a mega-event in a circular economy perspective” is a reasoned evaluation of the sustainability achievements obtained thanks to the “best practices” applied during the Event and the “lessons learned”: from the construction of temporary buildings with energy-efficiency and material-reuse measures, to the prescription of green procurement requirements, to waste management.

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power through the increase of cores within a digital processor, neuromorphic engineers and scientists can complement this need by building processor architectures where memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems which implement more biological-like models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to the ones found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems. Comment: Submitted to Proceedings of IEEE, review of recently proposed neuromorphic computing platforms and systems
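The neuron models such architectures implement are often variants of the leaky integrate-and-fire (LIF) neuron. A minimal software sketch of that dynamic is shown below; this is a generic textbook model for illustration, not an implementation from the survey, and all parameter values are assumed.

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron driven by a list of input
    currents (one sample per time step). Returns the spike time indices."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # membrane potential leaks toward rest and integrates the input
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:  # threshold crossing: emit a spike, then reset
            spikes.append(t)
            v = v_rest
    return spikes

# A constant drive produces regular, periodic spiking
print(simulate_lif([0.1] * 100))
```

In a neuromorphic processor this update is not executed as a sequential loop: each neuron circuit holds its own membrane state locally, which is exactly the co-location of memory and processing the abstract contrasts with von Neumann designs.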

    Chinese Experience with Global G3 Standard-Setting

    China’s growth strategy as set out in the 11th 5-year plan in 2005 called for upgrading of product quality, the development of an innovation society, and reduced reliance on foreign intellectual property with high license fees. Consistent with this policy, China has been involved in recent years with the development of a Chinese standard in third generation (3G) mobile phone technology, both in negotiating the standard and seeing it through to commercialization. This is the first case of a developing country both originating and successfully negotiating a telecommunications standard, and this experience raises issues for China’s future development strategy based on product and process upgrading in manufacturing. We argue that, while precedent-setting from an international negotiating point of view, the experience has thus far been commercially unproven. But the lessons learned will benefit future related efforts in follow-on technologies if similar Chinese efforts are made. This paper documents Chinese standard-setting efforts from proposal submission to the ITU through to the current large-scale trial network deployment in China and overseas trial network deployments. We discuss the underlying objectives for this initiative, evaluate its effectiveness, and assess its broader implications for Chinese development policy.

    Evaluation and Analysis of Node Localization Power Cost in Ad-Hoc Wireless Sensor Networks with Mobility

    One of the key concerns with location-aware Ad-hoc Wireless Sensor Networks (AWSNs) is how sensor nodes determine their position. The inherent power limitations of an AWSN along with the requirement for long network lifetimes makes achieving fast and power-efficient localization vital. This research examines the cost (in terms of power) of network irregularities on communications and localization in an AWSN. The number of data bits transmitted and received are significantly affected by varying levels of mobility, node degree, and network shape. The concurrent localization approach, used by the APS-Euclidean algorithm, has significantly more accurate position estimates with a higher percentage of nodes localized, while requiring 50% less data communications overhead, than the Map-Growing algorithm. Analytical power models capable of estimating the power required to localize are derived. The average amount of data communications required by either of these algorithms in a highly mobile network with a relatively high degree consumes less than 2.0% of the power capacity of an average 560 mA-hr battery. This is less than expected and contrary to the common perception that localization algorithms consume a significant amount of a node's power.
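The battery-fraction claim can be sanity-checked with simple energy arithmetic. The sketch below is illustrative only: the per-bit radio energy, bit count, and cell voltage are assumed example values, not the paper's measured figures or its derived power model.

```python
def localization_battery_fraction(bits_exchanged: float,
                                  energy_per_bit_j: float,
                                  battery_mah: float = 560.0,
                                  battery_v: float = 3.0) -> float:
    """Fraction of battery capacity consumed by exchanging `bits_exchanged`
    bits of localization traffic, given a per-bit radio energy cost
    (transmit plus receive combined)."""
    battery_j = battery_mah * 1e-3 * 3600.0 * battery_v  # mAh -> joules
    return bits_exchanged * energy_per_bit_j / battery_j

# Assumed example: 10 million bits at 1 microjoule/bit on a 560 mA-hr, 3 V cell
frac = localization_battery_fraction(10e6, 1e-6)
print(f"{frac:.2%} of battery capacity")
```

With these assumed numbers the drain comes out well under the 2.0% bound quoted in the abstract, which makes the paper's conclusion plausible at a glance.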

    GFZ Wireless Seismic Array (GFZ-WISE), a Wireless Mesh Network of Seismic Sensors: New Perspectives for Seismic Noise Array Investigations and Site Monitoring

    Over the last few years, the analysis of seismic noise recorded by two dimensional arrays has been confirmed to be capable of deriving the subsoil shear-wave velocity structure down to several hundred meters depth. In fact, using just a few minutes of seismic noise recordings and combining this with the well known horizontal-to-vertical method, it has also been shown that it is possible to investigate the average one dimensional velocity structure below an array of stations in urban areas with a sufficient resolution to depths that would be prohibitive with active source array surveys, while in addition reducing the number of boreholes required to be drilled for site-effect analysis. However, the high cost of standard seismological instrumentation limits the number of sensors generally available for two-dimensional array measurements (i.e., of the order of 10), limiting the resolution in the estimated shear-wave velocity profiles. Therefore, new themes in site-effect estimation research by two-dimensional arrays involve the development and application of low-cost instrumentation, which potentially allows the performance of dense-array measurements, and the development of dedicated signal-analysis procedures for rapid and robust estimation of shear-wave velocity profiles. In this work, we present novel low-cost wireless instrumentation for dense two-dimensional ambient seismic noise array measurements that allows the real-time analysis of the surface-wavefield and the rapid estimation of the local shear-wave velocity structure for site response studies. We first introduce the general philosophy of the new system, as well as the hardware and software that forms the novel instrument, which we have tested in laboratory and field studies.
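The horizontal-to-vertical method mentioned above has a simple core computation: the ratio of the horizontal to the vertical amplitude spectra of a three-component noise record. The sketch below is a minimal, simplified version on synthetic data (the windowing and smoothing choices are cruder than standard practice, and the demo signal is invented for illustration, not a GFZ-WISE recording):

```python
import numpy as np

def hv_ratio(north, east, vertical, fs):
    """Horizontal-to-vertical spectral ratio of a three-component
    ambient-noise record sampled at `fs` Hz (single-window version)."""
    def amp_spectrum(x):
        return np.abs(np.fft.rfft(x * np.hanning(len(x))))
    n, e, v = amp_spectrum(north), amp_spectrum(east), amp_spectrum(vertical)
    h = np.sqrt((n**2 + e**2) / 2.0)         # quadratic mean of horizontals
    freqs = np.fft.rfftfreq(len(north), d=1.0 / fs)
    return freqs, h / np.maximum(v, 1e-12)   # guard against division by zero

# Synthetic demo: the horizontal components carry an amplified 2 Hz signal,
# mimicking a site resonance peak
fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(0)
v = rng.normal(size=t.size)
n = rng.normal(size=t.size) + 3.0 * np.sin(2 * np.pi * 2.0 * t)
e = rng.normal(size=t.size) + 3.0 * np.sin(2 * np.pi * 2.0 * t)
freqs, hv = hv_ratio(n, e, v, fs)
print("H/V peak near", freqs[np.argmax(hv)], "Hz")
```

In practice the spectra are averaged over many windows and smoothed before taking the ratio; the frequency of the H/V peak is then interpreted in terms of the local resonance and, with array methods, the shear-wave velocity structure.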