
    A Survey on Service Composition Middleware in Pervasive Environments

    The development of pervasive computing has brought to light a challenging problem: how to dynamically compose services in heterogeneous and highly changing environments? We propose a survey that defines service composition as a sequence of four steps: translation, generation, evaluation, and finally execution. With this simple yet powerful model, we describe the major service composition middleware systems. We then classify these middleware systems according to pervasive requirements - interoperability, discoverability, adaptability, context awareness, QoS management, security, spontaneous management, and autonomous management. The classification highlights what has been done and what remains to be done to develop service composition in pervasive environments.
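
    The four-step model lends itself to a small illustration. The Python sketch below is purely illustrative: all class and function names are hypothetical and do not correspond to any middleware surveyed in the paper; it only shows how a translation, generation, evaluation, and execution pipeline could fit together.

```python
# Minimal sketch of the four-step composition model described above.
# All names are hypothetical; they illustrate the
# translation -> generation -> evaluation -> execution pipeline,
# not any specific middleware's API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class AbstractPlan:
    """User request translated into an abstract, technology-neutral plan."""
    tasks: List[str]


@dataclass
class ConcretePlan:
    """Abstract tasks bound to concrete services discovered in the environment."""
    bindings: dict          # task -> service endpoint
    score: float = 0.0      # filled in by the evaluation step


def translate(user_request: str) -> AbstractPlan:
    # Step 1: map a user-level goal onto abstract tasks.
    return AbstractPlan(tasks=user_request.split(" then "))


def generate(plan: AbstractPlan, registry: dict) -> List[ConcretePlan]:
    # Step 2: enumerate candidate compositions from currently available services.
    return [ConcretePlan(bindings={t: registry[t] for t in plan.tasks if t in registry})]


def evaluate(candidates: List[ConcretePlan],
             qos: Callable[[ConcretePlan], float]) -> ConcretePlan:
    # Step 3: rank candidates, e.g. by QoS, context fit, or user preference.
    for c in candidates:
        c.score = qos(c)
    return max(candidates, key=lambda c: c.score)


def execute(plan: ConcretePlan) -> None:
    # Step 4: invoke the selected services; in a pervasive setting this step
    # must also monitor failures and trigger re-generation when devices leave.
    for task, endpoint in plan.bindings.items():
        print(f"invoking {endpoint} for task '{task}'")


if __name__ == "__main__":
    registry = {"capture photo": "camera-service", "print": "printer-service"}
    best = evaluate(generate(translate("capture photo then print"), registry),
                    qos=lambda c: len(c.bindings))
    execute(best)
```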

    User-centric Composable Services: A New Generation of Personal Data Analytics

    Machine Learning (ML) techniques, such as neural networks, are widely used in today's applications. However, there is still a large gap between current ML systems and users' requirements. ML systems focus on improving model performance during training, while individual users care more about response time and the expressiveness of the tool. Much existing research, and many products, have begun to move computation towards edge devices. Based on the numerical computing system Owl, we propose to build the Zoo system to support the construction, composition, and deployment of ML models on edge and local devices.
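
    Zoo itself is built on top of Owl in OCaml, so the short Python sketch below is only an illustration of the composition idea - chaining small model "services" and running the result locally; none of the names are Zoo's actual API.

```python
# Illustrative sketch only: Zoo is built on Owl (OCaml), so none of these
# names come from the real system. The point is composing small model
# "services" into a pipeline that can run locally on an edge device.
from typing import Callable, List


class Service:
    """Wraps a model function so services can be chained like functions."""

    def __init__(self, name: str, fn: Callable):
        self.name = name
        self.fn = fn

    def __call__(self, x):
        return self.fn(x)

    def compose(self, other: "Service") -> "Service":
        # self runs first, then other - mirrors building larger services
        # out of smaller ones.
        return Service(f"{self.name}>>{other.name}", lambda x: other.fn(self.fn(x)))


def deploy_locally(service: Service, inputs: List) -> List:
    # On an edge device, "deployment" is just running the composed pipeline
    # close to the data, which keeps response time low for the user.
    return [service(x) for x in inputs]


if __name__ == "__main__":
    resize = Service("resize", lambda img: img[:8])            # placeholder preprocessing
    classify = Service("classify", lambda img: sum(img) > 20)  # placeholder model
    pipeline = resize.compose(classify)
    print(deploy_locally(pipeline, [[1] * 16, [5] * 16]))
```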

    Large AI Model Empowered Multimodal Semantic Communications

    Multimodal signals, including text, audio, image and video, can be integrated into Semantic Communication (SC) to provide an immersive experience with low latency and high quality at the semantic level. However, multimodal SC faces several challenges, including data heterogeneity, semantic ambiguity, and signal fading. Recent advances in large AI models, particularly Multimodal Language Models (MLMs) and Large Language Models (LLMs), offer potential solutions for these issues. To this end, we propose a Large AI Model-based Multimodal SC (LAM-MSC) framework, in which we first present MLM-based Multimodal Alignment (MMA), which utilizes the MLM to enable the transformation between multimodal and unimodal data while preserving semantic consistency. Then, a personalized LLM-based Knowledge Base (LKB) is proposed, which allows users to perform personalized semantic extraction or recovery through the LLM; this effectively addresses semantic ambiguity. Finally, we apply Conditional Generative adversarial network-based channel Estimation (CGE) to obtain Channel State Information (CSI), which effectively mitigates the impact of fading channels in SC. Simulations demonstrate the superior performance of the LAM-MSC framework.
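
    To make the three stages concrete, here is a toy end-to-end sketch that assumes nothing about the authors' implementation: the MLM, LLM, and conditional GAN are replaced by trivial placeholder functions, so only the MMA, LKB, and CGE flow is illustrated.

```python
# Hedged sketch of the LAM-MSC flow described above. The three stages
# (MMA, LKB, CGE) come from the framework, but every function body and
# name below is a placeholder standing in for large models and a wireless
# channel, not the authors' implementation.
import random


def mma_to_text(signal: dict) -> str:
    # MMA: map image/audio/video to a unimodal (textual) description via an MLM.
    return f"{signal['modality']} of {signal['content']}"


def lkb_extract(text: str, user_profile: dict) -> str:
    # LKB: personalized semantic extraction with an LLM - keep only what this
    # user cares about, which resolves semantic ambiguity.
    keep = [w for w in text.split() if w not in user_profile.get("irrelevant", set())]
    return " ".join(keep)


def cge_equalize(symbols: list, snr_db: float) -> list:
    # CGE: estimate channel state information and compensate for fading.
    # Here a single random gain stands in for the estimated channel.
    h = random.uniform(0.5, 1.0)
    noisy = [h * s + random.gauss(0, 10 ** (-snr_db / 20)) for s in symbols]
    return [y / h for y in noisy]  # equalize with the estimated CSI


if __name__ == "__main__":
    src = {"modality": "image", "content": "a red car parked outside"}
    semantics = lkb_extract(mma_to_text(src), {"irrelevant": {"parked"}})
    tx = [float(ord(c)) for c in semantics]
    rx = cge_equalize(tx, snr_db=20)
    print("".join(chr(round(v)) for v in rx))
```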

    Community-of-Interest (COI) Model-Based Languages Enabling Composable Net-Centric Services

    Net-centric services shall be designed to collaborate with other services used within the supported Community of Interest (COI). This requires that such services not only be integratable on the technical level and interoperable on the implementation level, but also composable in the sense that they are semantically and pragmatically consistent and able to exchange information in a consistent and unambiguous way. In order to support Command-and-Control with Composable Net-centric Services, human-machine interoperation must be supported as well as machine-machine interoperation. This paper shows that techniques of computational linguistics can support the human-machine interface by structuring human-oriented representations into machine-oriented regular expressions that implement the unambiguous data exchange between machines. Distinguishing between these two domains is essential, as some requirements are mutually exclusive. In order to get the best of both worlds, an aligned approach based on a COI model is needed. This COI model starts with the partners and their respective services and business processes, identifies the resulting infrastructure components, and derives the information exchange requirements. Model-based Data Engineering leads to the configuration of data exchange specifications between the services in the form of an artificial language comprising regular expressions for machine-machine communication. Computational linguistics methods are applied to accept and generate human-oriented representations, which potentially extend the information exchange specifications to capture new information not represented in the system requirements. The paper presents the framework, which was partially applied for homeland security applications and in support of the joint rapid scenario generation activities of US Joint Forces Command.
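
    The split between a human-oriented representation and a regular-expression-constrained machine message can be sketched in a few lines. The message format, field names, and parsing rules below are invented for illustration and are not drawn from the COI model described in the paper.

```python
# Hedged sketch: machine-machine exchange is constrained by a regular
# expression derived from the information exchange requirements, while a
# small linguistic step maps a human-oriented report into that
# machine-oriented form. The format and fields are invented for illustration.
import re

# Invented machine-oriented specification: UNIT;LOCATION;STATUS
MACHINE_MSG = re.compile(
    r"^(?P<unit>[A-Z0-9-]+);(?P<loc>\d{2}[A-Z]{3}\d{4});(?P<status>READY|MOVING|ENGAGED)$"
)


def human_to_machine(sentence: str) -> str:
    # Very small stand-in for structuring a human-oriented report into the
    # unambiguous machine representation required by the exchange specification.
    m = re.search(r"unit (\S+) at (\S+) is (\w+)", sentence.lower())
    if not m:
        raise ValueError("cannot structure report")
    unit, loc, status = m.group(1).upper(), m.group(2).upper(), m.group(3).upper()
    msg = f"{unit};{loc};{status}"
    if not MACHINE_MSG.match(msg):
        raise ValueError(f"report violates exchange specification: {msg}")
    return msg


if __name__ == "__main__":
    print(human_to_machine("Unit ALPHA-1 at 18SUJ2339 is ready"))
```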

    Semantic Service Description Framework for Efficient Service Discovery and Composition

    Web services have been widely adopted as a new distributed system technology by industries in the areas of enterprise application integration, business process management, and virtual organisation. However, the lack of semantics in current Web services standards has been a major barrier to further improvement of service discovery and composition. Over the last decade, Semantic Web Services have become an important research topic for enriching the semantics of Web services. The key objective of Semantic Web Services is to achieve automatic or semi-automatic Web service discovery, invocation, and composition. Several semantic Web service description frameworks exist, such as OWL-S, WSDL-S, and WSMF. However, existing frameworks have several issues - insufficient service usage context information, the need for precisely specified requirements to locate services, a lack of information about inter-service relationships, and insufficient or incomplete information handling - that make the process of service discovery and composition less efficient than it should be. To address these problems, a context-based semantic service description framework is proposed in this thesis. This framework captures not only the capabilities of Web services but also their usage context information, which we consider an important factor in efficient service discovery and composition. Based on this framework, an enhanced service discovery mechanism is proposed. It gives service users more flexibility to search for services in natural ways rather than only by the technical specifications of the required services. The service discovery mechanism also demonstrates how the features provided by the framework can facilitate the service discovery and composition processes. Together with the framework, a transformation method is provided to transform existing service descriptions into descriptions based on the new framework. The framework is evaluated through a scenario-based analysis in comparison with OWL-S and a prototype-based performance evaluation in terms of query response time, precision and recall ratios, and system scalability.
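
    As a rough illustration of context-aware service description and discovery, the sketch below pairs capability terms with usage-context terms and scores matches against an imprecise query. The descriptor fields and scoring are invented and are not the thesis' actual framework or OWL-S/WSDL-S syntax.

```python
# Minimal sketch of context-based service matching as motivated above.
# Fields and scoring are invented for illustration only.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class ServiceDescription:
    name: str
    capabilities: Set[str]                 # what the service does
    usage_context: Set[str]                # where/why it is typically used
    related_services: List[str] = field(default_factory=list)  # inter-service links


def discover(query_terms: Set[str], registry: List[ServiceDescription]) -> List[ServiceDescription]:
    # Match against capabilities AND usage context, so imprecise, natural
    # queries can still find relevant services.
    def score(s: ServiceDescription) -> int:
        return len(query_terms & s.capabilities) * 2 + len(query_terms & s.usage_context)

    return sorted((s for s in registry if score(s) > 0), key=score, reverse=True)


if __name__ == "__main__":
    registry = [
        ServiceDescription("FlightBooking", {"book", "flight"}, {"travel", "holiday"}, ["HotelBooking"]),
        ServiceDescription("HotelBooking", {"book", "hotel"}, {"travel", "holiday"}),
        ServiceDescription("PayrollExport", {"export", "payroll"}, {"hr"}),
    ]
    for s in discover({"plan", "holiday", "flight"}, registry):
        print(s.name)
```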

    Choosy and Picky: Configuration of Language Product Lines

    Although most programming languages naturally share several language features, they are typically implemented as monolithic products. Language features cannot be plugged into or unplugged from a language and reused in another. Some modular approaches to language construction do exist, but composing language features requires a deep understanding of their implementation, which hampers their use. The choose-and-pick approach from software product lines provides an easy way to compose a language out of a set of language features. However, current approaches to language product lines are not sufficient to cope with the complexity and evolution of real-world programming languages. In this work, we propose a general, lightweight, bottom-up approach to automatically extract a feature model from a set of tagged language components. We applied this approach to the Neverlang language development framework and developed the AiDE tool to guide language developers towards a valid language composition. The approach has been evaluated on a decomposed version of JavaScript to highlight the benefits of such a language product line.
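
    The bottom-up extraction and the validity check on a composition can be sketched as follows; the components, tags, and dependency rules are made up for illustration and do not reflect Neverlang's or AiDE's actual data model.

```python
# Hedged sketch of the bottom-up idea above: language components carry tags
# (here: provided and required features), a feature model is derived from
# them, and a requested composition is checked for validity. Everything is
# illustrative; it is not Neverlang's or AiDE's API.
from typing import Dict, Set

# Each language component declares the features it provides and requires.
COMPONENTS: Dict[str, Dict[str, Set[str]]] = {
    "Expressions": {"provides": {"expressions"}, "requires": set()},
    "Statements":  {"provides": {"statements"},  "requires": {"expressions"}},
    "Closures":    {"provides": {"closures"},    "requires": {"expressions", "statements"}},
    "Prototypes":  {"provides": {"prototypes"},  "requires": {"expressions"}},
}


def derive_feature_model(components: Dict[str, Dict[str, Set[str]]]) -> Dict[str, Set[str]]:
    # Bottom-up extraction: feature -> features it depends on.
    model: Dict[str, Set[str]] = {}
    for spec in components.values():
        for f in spec["provides"]:
            model[f] = set(spec["requires"])
    return model


def is_valid_composition(selected: Set[str], model: Dict[str, Set[str]]) -> bool:
    # A selection is valid when every selected feature's dependencies are selected too.
    return all(model.get(f, set()) <= selected for f in selected)


if __name__ == "__main__":
    model = derive_feature_model(COMPONENTS)
    print(is_valid_composition({"expressions", "statements", "closures"}, model))  # True
    print(is_valid_composition({"closures"}, model))                               # False: missing deps
```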

    Exploring software practitioners' perceptions and experience in requirements reuse: a survey in Malaysia

    In Software Product Lines (SPL) development, the reuse process is planned ahead of time, while in traditional software development reuse can occur opportunistically: unplanned or in an ad hoc manner. Although many research efforts in SPL focus on issues related to architecture, design, and code reuse, research on requirements reuse has received slightly less attention from researchers and practitioners. Requirements Reuse (RR) in SPL is the process of systematically reusing previously defined and validated requirements from an earlier software product and applying them to a new and slightly different product within a similar domain. This paper presents a survey pertaining to RR practice that was conducted in Malaysia with two objectives: a) to identify the factors influencing software practitioners in RR, and b) to assess the factors hindering software practitioners from reusing requirements in software development. The survey results confirmed seven factors that can influence RR practice in Malaysia. The survey also revealed three main impediments to RR practice in Malaysia: the unavailability of RR tools or frameworks to select requirements for reuse, the condition of existing requirements to be reused (incomplete, poorly structured, or not kept up to date), and the lack of awareness and RR education among software practitioners pertaining to systematic RR.