56 research outputs found

    Development of mobile agent framework in wireless sensor networks for multi-sensor collaborative processing

    Get PDF
    Recent advances in processor, memory and radio technology have enabled the production of tiny, low-power, low-cost sensor nodes capable of sensing, communication and computation. Although a single node is resource constrained, with limited power, computation and communication bandwidth, these nodes deployed in large numbers form a new type of network called the wireless sensor network (WSN). One of the challenges brought by WSNs is finding an efficient computing paradigm that supports the distributed nature of the applications built on these networks while respecting the resource limitations of the sensor nodes. Collaborative processing between multiple sensor nodes is essential to generate fault-tolerant, reliable information from the densely sensed spatial phenomenon. The typical model used in distributed computing is the client/server model; however, this computing model is not appropriate in the context of sensor networks. This thesis develops an energy-efficient, scalable and real-time computing model for collaborative processing in sensor networks called the mobile agent computing paradigm. In this paradigm, instead of each sensor node sending data or results to a central server, as is typical in the client/server model, the information processing code is moved to the nodes using mobile agents. These agents carry the execution code and migrate from one node to another, integrating results at each node. This thesis develops the mobile agent framework on top of an energy-efficient routing protocol called directed diffusion. The framework has been mapped to a collaborative target classification application, which was tested in three field demos conducted at Twentynine Palms, CA; BAE Austin, TX; and BBN Waltham, MA.
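
    To make the contrast with the client/server model concrete, the following is a minimal sketch of the mobile-agent idea: an agent object carries its processing code and partial result from node to node instead of every node shipping raw data to a central sink. The class names, the itinerary handling, and the aggregation rule (a running maximum) are hypothetical illustrations only; the thesis' actual framework runs on top of directed diffusion and targets collaborative target classification.

```python
# Minimal sketch of the mobile-agent paradigm: the agent visits nodes in an
# itinerary, runs its processing code locally, and carries the integrated
# result with it. Names and the aggregation rule are illustrative only.

class SensorNode:
    def __init__(self, node_id, reading):
        self.node_id = node_id
        self.reading = reading          # locally sensed value

    def sense(self):
        return self.reading


class MobileAgent:
    def __init__(self, itinerary):
        self.itinerary = itinerary      # ordered list of nodes to visit
        self.result = None              # integrated result carried by the agent

    def process(self, local_reading):
        # Local collaborative-processing step: fuse the node's reading into
        # the carried result (here: keep the maximum confidence/value).
        if self.result is None or local_reading > self.result:
            self.result = local_reading

    def migrate(self):
        # "Migration": execute the agent's code at each node in turn instead
        # of shipping every node's raw data to a central server.
        for node in self.itinerary:
            self.process(node.sense())
        return self.result


if __name__ == "__main__":
    nodes = [SensorNode(i, reading) for i, reading in enumerate([0.2, 0.9, 0.5])]
    agent = MobileAgent(nodes)
    print("integrated result:", agent.migrate())   # -> 0.9
```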

    The Digital Transformation of Automotive Businesses: THREE ARTEFACTS TO SUPPORT DIGITAL SERVICE PROVISION AND INNOVATION

    Get PDF
    Digitalisation and increasing competitive pressure drive original equipment manufacturers (OEMs) to switch their focus towards the provision of digital services and to open up towards increased collaboration and customer integration. This shift implies a significant transformational change from product to product-service provider, in which OEMs realign themselves along strategic, business and procedural dimensions. Thus, OEMs must manage digital transformation (DT) processes in order to stay competitive and remain adaptable to changing customer demands. However, OEMs aspiring to become participants or leaders in their domain struggle to initiate activities, as there is a lack of applicable instruments that can guide and support them during this process. Compared to the practical importance of DT, empirical studies are not comprehensive. This study proposes three artefacts, validated within case companies, intended to support automotive OEMs in digital service provisioning. Artefact one, a layered conceptual model for a digital automotive ecosystem, was developed by means of 26 expert interviews. It can serve as a useful instrument for decision makers to strategically plan and outline digital ecosystems. Artefact two is a conceptual reference framework for automotive service systems. It was developed based on an extensive literature review and a mapping of the business model canvas to the service system domain, and is intended to assist OEMs in the efficient conception of digital services under consideration of relevant stakeholders and the necessary infrastructures. Finally, artefact three proposes a methodology by which to transform software readiness assessment processes to fit the agile software development approach while taking the existing operational infrastructure into account. Overall, the findings contribute to the empirical body of knowledge about the digital transformation of manufacturing industries. The results suggest that value creation for digital automotive services occurs in networks among interdependent stakeholders in which customers play an integral role during the services' life-cycle. The findings further indicate that the artefacts are useful instruments; however, success depends on the integration and collaboration of all contributing departments.
    Table of Contents: Bibliographic Description; Acknowledgment; Table of Contents; List of Figures; List of Tables; List of Abbreviations; 1 Introduction (1.1 Motivation and Problem Statement; 1.2 Objective and Research Questions; 1.3 Research Methodology; 1.4 Contributions; 1.5 Outline); 2 Background (2.1 From Interdependent Value Creation to Digital Ecosystems: 2.1.1 Digitalisation Drives Collaboration, 2.1.2 Pursuing an Ecosystem Strategy, 2.1.3 Research Gaps and Strategy Formulation Obstacles; 2.2 From Products to Product-Service Solutions: 2.2.1 Digital Service Fulfilment Requires Co-Creational Networks, 2.2.2 Enhancing Business Models with Digital Services, 2.2.3 Research Gaps and Service Conception Obstacles; 2.3 From Linear Development to Continuous Innovation: 2.3.1 Digital Innovation Demands Digital Transformation, 2.3.2 Assessing Digital Products, 2.3.3 Research Gaps and Implementation Obstacles); 3 Artefact 1: Digital Automotive Ecosystems (3.1 Meta Data; 3.2 Summary; 3.3 Designing a Layered Conceptual Model of a Digital Ecosystem); 4 Artefact 2: Conceptual Reference Framework (4.1 Meta Data; 4.2 Summary; 4.3 On the Move Towards Customer-Centric Automotive Business Models); 5 Artefact 3: Agile Software Readiness Assessment Procedures (5.1 Meta Data; 5.2 Meta Data; 5.3 Summary; 5.4 Adding Agility to Software Readiness Assessment Procedures; 5.5 Continuous Software Readiness Assessments for Agile Development); 6 Conclusion and Future Work (6.1 Contributions: 6.1.1 Strategic Dimension: Artefact 1, 6.1.2 Business Dimension: Artefact 2, 6.1.3 Process Dimension: Artefact 3, 6.1.4 Synthesis of Contributions; 6.2 Implications: 6.2.1 Scientific Implications, 6.2.2 Managerial Implications, 6.2.3 Intelligent Parking Service Example (ParkSpotHelp); 6.3 Concluding Remarks: 6.3.1 Threats to Validity, 6.3.2 Outlook and Future Research Recommendations); Appendix; Bibliography; Wissenschaftlicher Werdegang; Selbständigkeitserklärung.

    An Adaptive Integration Architecture for Software Reuse

    Get PDF
    The problem of building large, reliable software systems in a controlled, cost-effective way, the so-called software crisis problem, is one of computer science's great challenges. From the very outset of computing as a science, software reuse has been touted as a means to overcome the software crisis. Over three decades later, the software community is still grappling with the problem of building large, reliable software systems in a controlled, cost-effective way; the software crisis problem is alive and well. Today, many computer scientists still regard software reuse as a very powerful vehicle to improve the practice of software engineering. The advantage of amortizing software development cost through reuse continues to be a major objective in the art of building software, even though the tools, methods, languages, and overall understanding of software engineering have changed significantly over the years. Our work is primarily focused on the development of an Adaptive Application Integration Architecture Framework. Without good integration tools and techniques, reuse is difficult and will probably not happen to any significant degree. In the development of the adaptive integration architecture framework, the primary enabling concept is object-oriented design supported by the Unified Modeling Language. The concepts of software architecture, design patterns, and abstract data views are used in a structured and disciplined manner to establish a generic framework. This framework is applied to solve the Enterprise Application Integration (EAI) problem in the telecommunications operations support system (OSS) enterprise marketplace. The proposed adaptive application integration architecture framework facilitates application reusability and flexible business process re-engineering. The architecture addresses the need for modern businesses to continuously redefine themselves to address changing market conditions in an increasingly competitive environment. We have developed a number of Enterprise Application Integration design patterns to enable the implementation of an EAI framework in a definite and repeatable manner. The design patterns allow for the integration of commercial off-the-shelf applications into a unified enterprise framework, facilitating true application portfolio interoperability. The notion of treating application services as infrastructure services and using business processes to combine them arbitrarily provides a natural way of thinking about adaptable and reusable software systems. We present a mathematical formalism for the specification of design patterns. This specification constitutes an extension of the basic concepts of many-sorted algebra. In particular, the notion of signature is extended to that of a vector, consisting of a set of linearly independent signatures. The approach can be used to reason about various properties, including the effort for component reuse, and to facilitate complex large-scale software development by providing the developer with design alternatives and support for automatic program verification.
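
    The idea of treating application services as infrastructure services that business processes combine arbitrarily can be sketched as follows. This is a hypothetical Python illustration of that composition style under assumed names (ServiceRegistry, BusinessProcess); it is not the thesis' UML-based framework, its abstract data views, or its algebraic formalism.

```python
# Sketch of treating application services as infrastructure services and
# letting a business process combine them. A registry hides each application
# behind a common callable interface, so processes can be re-wired without
# touching the applications themselves.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, callable_service):
        self._services[name] = callable_service

    def lookup(self, name):
        return self._services[name]


class BusinessProcess:
    """A business process as an ordered composition of registered services."""
    def __init__(self, registry, steps):
        self.registry = registry
        self.steps = steps              # list of (service_name, kwargs) pairs

    def run(self, payload):
        for name, kwargs in self.steps:
            service = self.registry.lookup(name)
            payload = service(payload, **kwargs)   # each service transforms the payload
        return payload


if __name__ == "__main__":
    registry = ServiceRegistry()
    registry.register("validate_order", lambda order: {**order, "valid": True})
    registry.register("provision", lambda order, region: {**order, "region": region})

    # Re-engineering the process means changing this step list, not the services.
    process = BusinessProcess(registry, [("validate_order", {}), ("provision", {"region": "EU"})])
    print(process.run({"id": 42}))
```

    Re-engineering a business process then amounts to editing the step list, which is the reuse and flexibility argument the abstract makes.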

    Identifying Nearest Fog Nodes With Network Coordinate Systems

    Full text link
    Identifying the closest fog node is crucial for mobile clients to benefit from fog computing. Relying on geographical location alone is insufficient for this, as it ignores the actually observed client access latency. In this paper, we analyze the performance of the Meridian and Vivaldi network coordinate systems in identifying nearest fog nodes. To that end, we simulate a dense fog environment with mobile clients. We find that while network coordinate systems do find fog nodes in close network proximity, a purely latency-oriented identification approach ignores the larger problem of balancing load across fog nodes.
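
    Of the two systems, Vivaldi is the simpler to sketch: each node maintains synthetic coordinates and nudges them after every latency sample so that the Euclidean distance between coordinates predicts round-trip time. The snippet below is a minimal sketch of that update rule, assuming 2-D coordinates without a height component; the constants and class name are illustrative and not taken from the paper.

```python
# Minimal sketch of a Vivaldi coordinate update (2-D Euclidean coordinates,
# no height vector). Constants and names are illustrative.
import math

class VivaldiNode:
    def __init__(self, dim=2, ce=0.25, cc=0.25):
        self.coords = [0.0] * dim   # start at the origin
        self.error = 1.0            # local error estimate in [0, 1]
        self.ce = ce                # error-adaptation constant
        self.cc = cc                # coordinate-adaptation constant

    def _distance(self, other_coords):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(self.coords, other_coords)))

    def update(self, rtt, other_coords, other_error):
        """Adjust our coordinates after measuring `rtt` (seconds) to a peer."""
        predicted = self._distance(other_coords)
        # Weight balances our confidence against the peer's confidence.
        w = self.error / (self.error + other_error + 1e-9)
        # Relative error of this sample drives the local error estimate.
        sample_error = abs(predicted - rtt) / max(rtt, 1e-9)
        self.error = sample_error * self.ce * w + self.error * (1 - self.ce * w)
        # Move along the unit vector away from (or toward) the peer.
        delta = self.cc * w
        if predicted > 0:
            direction = [(a - b) / predicted for a, b in zip(self.coords, other_coords)]
        else:
            direction = [1.0] + [0.0] * (len(self.coords) - 1)  # arbitrary direction
        self.coords = [a + delta * (rtt - predicted) * d
                       for a, d in zip(self.coords, direction)]


if __name__ == "__main__":
    client, fog = VivaldiNode(), VivaldiNode()
    for _ in range(20):                     # repeated RTT samples of 30 ms
        client.update(0.030, fog.coords, fog.error)
    print("client coords:", client.coords, "estimated error:", round(client.error, 3))
```

    Identifying the nearest fog node then reduces to computing the coordinate distance from the client to every candidate fog node and taking the minimum, which is exactly the purely latency-oriented criterion whose limitations the paper points out.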

    Architecture for intelligent power systems management, optimization, and storage.

    Get PDF
    The management of power and the optimization of systems generating and using power are critical technologies. A new architecture is developed to advance the current state of the art by providing an intelligent and autonomous solution for power systems management. The architecture is two-layered and implements a decentralized approach by defining software objects, similar to software agents, which provide for local optimization of power devices such as power generating, storage, and load devices. These software device objects also provide an interface to a higher level of optimization. This higher level implements the second layer in a centralized approach by coordinating the individual software device objects with an intelligent expert system, resulting in an architecture for total system power management. In this way, the architecture acquires the benefits of both the decentralized and centralized approaches. The architecture is designed to be portable, scalable, simple, and autonomous with respect to devices and missions, and metrics for evaluating these characteristics are also defined. Decentralization achieves scalability and simplicity through modularization, using software device objects that can be added and deleted as modules depending on the devices of the power system being optimized. Centralization coordinates these software device objects to bring autonomy and intelligence to the whole power system and mission. The centralization approach is generic since it always coordinates software device objects; it therefore becomes another modular component of the architecture. Three example implementations illustrate the evolution of this power management system architecture. The first is a coal-fired power generating station that utilized a neural network optimization for the reduction of nitrogen oxide emissions; it illustrates the limitations of this type of black-box optimization and serves as motivation for developing a more functional architecture. The second is a hydro-generating power station, where a white-box, software-agent approach illustrates some of the benefits and provides initial justification for moving towards the proposed architecture. The third applies the architecture to a vehicle-to-grid application, where the previous hydro-generating application is ported and a new hybrid vehicle application is defined. This demonstrates portability and scalability in the architecture, and linking these two applications demonstrates autonomy. The simplicity of building this application is also evaluated.
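
    The two-layer structure can be illustrated with a small sketch: decentralized software device objects expose a common interface for local optimization and state reporting, and a centralized coordinator composes whatever objects are registered. The interface, the Battery example, and the even-split coordination rule below are hypothetical; the dissertation's central layer uses an intelligent expert system rather than this toy rule.

```python
# Sketch of the two-layer idea: device objects optimize locally and report a
# state; a central coordinator reasons over all reported states and hands back
# setpoints. Adding or removing a device object leaves the coordinator unchanged.

from abc import ABC, abstractmethod

class DeviceObject(ABC):
    """Decentralized layer: one object per generating, storage, or load device."""

    @abstractmethod
    def local_optimize(self):
        """Optimize the device using purely local information."""

    @abstractmethod
    def report(self):
        """Return a state dict for the central layer, e.g. {'power_kw': ...}."""

    @abstractmethod
    def apply_setpoint(self, setpoint_kw):
        """Accept a coordinated setpoint from the central layer."""


class Battery(DeviceObject):
    def __init__(self, capacity_kwh):
        self.capacity_kwh = capacity_kwh
        self.power_kw = 0.0

    def local_optimize(self):
        pass                              # e.g. limit charge rate to protect cells

    def report(self):
        return {"type": "storage", "power_kw": self.power_kw}

    def apply_setpoint(self, setpoint_kw):
        self.power_kw = setpoint_kw


class Coordinator:
    """Centralized layer: coordinates device objects (an expert system in the dissertation)."""
    def __init__(self, devices):
        self.devices = list(devices)

    def step(self, net_demand_kw):
        for d in self.devices:
            d.local_optimize()
        # Toy coordination rule: split the net demand evenly across devices.
        share = net_demand_kw / max(len(self.devices), 1)
        for d in self.devices:
            d.apply_setpoint(share)
        return [d.report() for d in self.devices]


if __name__ == "__main__":
    coordinator = Coordinator([Battery(10.0), Battery(20.0)])
    print(coordinator.step(net_demand_kw=4.0))
```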

    A novel smart energy management as a service over a cloud computing platform for nanogrid appliances

    Get PDF
    The world faces a future shortfall of electrical energy due to the exponentially increasing demand of a rapidly growing population. With the development of the Internet of Things (IoT), more smart appliances will be integrated into homes in smart cities and will actively participate in the electricity market through demand response programs to manage energy efficiently and meet this increasing demand. Motivated by this, an energy management strategy using a price-based demand response program is developed for IoT-enabled residential buildings. We propose a new energy management system (EMS) for IoT-enabled smart homes that schedules smart devices to minimize the cost of electricity, reduce the peak-to-average ratio, correct the power factor, protect appliances automatically, and maximize user comfort. In this method, every home appliance is interfaced with an IoT entity (a data acquisition module) with a specific IP address, which results in a wide wireless system of devices. The proposed system has two components: software and hardware. The hardware is composed of a base station unit (BSU) and many terminal units (TUs); the software comprises the Wi-Fi network programming as well as the system protocol. In this study, a Message Queuing Telemetry Transport (MQTT) broker was installed on the boards of the BSU and TUs. We present a low-cost platform for monitoring and supporting decision making about different areas in a neighbouring community for efficient management and maintenance, using information and communication technologies. The experiments demonstrated the feasibility and viability of the proposed method for energy management in various modes. The proposed method increases effective energy utilization, which in turn increases the sustainability of IoT-enabled homes in smart cities. The proposed strategy responds automatically to power factor correction, appliance protection, and price-based demand response signals, addressing a major limitation of demand response programs: consumers' limited knowledge of how to respond when demand response signals are received. The proposed schedule controller achieved daily savings of 6.347 kWh of real power, 7.282 kWh of apparent power, and $2.3228388 in electricity cost.
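
    The BSU/TU split maps naturally onto MQTT's publish/subscribe model: each terminal unit publishes appliance measurements to a topic, and the base station unit subscribes to them and publishes schedule commands back. The sketch below uses the paho-mqtt client (1.x-style callbacks assumed); the broker address, topic names, payload format, and the threshold rule are invented for illustration and are not the paper's scheduling algorithm.

```python
# Minimal sketch of the TU -> broker -> BSU message flow over MQTT.
# Assumes a broker reachable at BROKER_HOST and paho-mqtt 1.x callback style;
# topic names and payload fields are invented for illustration.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.1.10"                  # hypothetical broker on the BSU board
TELEMETRY_TOPIC = "nanogrid/+/telemetry"      # every TU publishes here
COMMAND_TOPIC = "nanogrid/{appliance}/command"

def on_connect(client, userdata, flags, rc):
    client.subscribe(TELEMETRY_TOPIC)         # BSU listens to all terminal units

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    appliance = msg.topic.split("/")[1]
    # Placeholder decision rule: defer the appliance if real power is too high.
    if reading.get("real_power_w", 0) > 2000:
        client.publish(COMMAND_TOPIC.format(appliance=appliance),
                       json.dumps({"action": "defer", "minutes": 30}))

bsu = mqtt.Client()
bsu.on_connect = on_connect
bsu.on_message = on_message
bsu.connect(BROKER_HOST, 1883, keepalive=60)
bsu.loop_forever()
```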

    Modelling and Co-simulation of Multi-Energy Systems: Distributed Software Methods and Platforms

    Get PDF
    The abstract is provided in the attachment.

    Concept of a distribution and infrastructure model for mobile applications development across multiple mobile platforms

    Get PDF
    The mobile application market continues to grow drastically, driven by the explosion in sales of mobile devices. One of the drivers behind this increase is the development and penetration of application stores provided by different stakeholders in the mobile space, especially handset manufacturers, operating system developers, and network operators. Handsets nowadays run competing operating systems and development platforms and differ in physical characteristics and network infrastructures, which has created a complex ecosystem. This diversity leads to a large degree of uncertainty for mobile application developers on a strategic, technological, and demand level. Developers currently need to decide which platform to develop and distribute for; decision factors include, among others, the target market, compatibility issues, development time, hardware requirements, and scalability. This work provides an overview of the existing mobile application and app store market, investigating the business models, processes, and infrastructures needed to develop and distribute mobile applications across multiple platforms. As the goal is to find an aggregated model for the distribution of cross-platform applications, a top-down approach is used to identify the existing distribution and infrastructure landscape, based on a review of the literature, internet sources (such as application store developer sites), the specialized press, and expert interviews. The identified business processes are modelled with the ADONIS® Business Process Management Toolkit, and the infrastructure models are built with the ADOit® IT Architecture- & Service Management Toolkit. The resulting models are analysed, compared against the characteristics of an "ideal" model, and consolidated into an aggregated model. The final part of the thesis describes the development of a sample application using the WAC meta-platform environment and tests its compatibility on different platforms.

    Quality assessment framework for business processes as a service in a heterogeneous cloud environment

    Get PDF
    A business process is an activity, or a set of activities, that fulfils a particular objective of an organization. Business process management (BPM) is a methodological way of improving those processes. Due to increased competition in the market, companies are moving their business processes online using sophisticated BPM tools and methods. The focus of this thesis is the design and implementation of an initial testing system for business processes published in a heterogeneous Cloud environment. The thesis documents the state of the art of business process testing (BPT), covering a number of testing techniques, and selects the method best suited to testing business processes, which is responsible for the quality of the system. It also covers the state of the art of testing the Cloud, focusing on different methodologies for testing SaaS, PaaS, and IaaS; the main objective here is to understand how testing a Cloud environment works, which in turn leads to an understanding of how to test business processes as a service. Additionally, the thesis explains the general architecture of TTCN-3, and a design of a TTCN-3-based test system for business processes is presented. A CloudSocket case study was examined, and based on its requirements we introduce initial work on testing BPaaS in a heterogeneous Cloud environment. This initial test system was implemented and validated in the CloudSocket Marketplace.
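
    Although the thesis builds its test system on TTCN-3, the shape of such a black-box test for a business process exposed as a service can be sketched in ordinary Python: send a stimulus to the BPaaS endpoint and assign a pass/fail verdict from the response, independently of the underlying Cloud layer. The endpoint URL, payload, and expected fields below are hypothetical; this is only an analogue of a TTCN-3 test case, not part of the CloudSocket implementation.

```python
# Sketch of a black-box conformance check for a business process exposed as a
# service. Endpoint, payload, and expected outcome are invented for
# illustration; a TTCN-3 test case would express the same stimulus/verdict idea.
import requests

BPAAS_ENDPOINT = "https://example.org/bpaas/invoice-approval"   # hypothetical

def run_test_case():
    stimulus = {"invoice_id": "INV-001", "amount": 120.0, "currency": "EUR"}
    response = requests.post(BPAAS_ENDPOINT, json=stimulus, timeout=10)

    # Verdict assignment, in the spirit of TTCN-3 pass/fail verdicts.
    if response.status_code != 200:
        return "fail: unexpected status %d" % response.status_code
    body = response.json()
    if body.get("state") == "approved":
        return "pass"
    return "fail: unexpected process state %r" % body.get("state")

if __name__ == "__main__":
    print(run_test_case())
```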