
    Open semantic service networks

    Online service marketplaces will soon be part of the economy to scale the provision of specialized multi-party services through automation and standardization. Current research, such as the *-USDL service description language family, is already defining the basic building blocks to model the next generation of business services. Nonetheless, these developments do not aim to interconnect services via service relationships. Without the concept of a relationship, marketplaces will remain mere functional silos containing service descriptions. Yet, in real economies, all services are related and connected. Therefore, to address this gap we introduce the concept of the open semantic service network (OSSN), concerned with the establishment of rich relationships between services. These networks will provide valuable knowledge on the global service economy, which can be exploited for many socio-economic and scientific purposes such as service network analysis, management, and control.
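    The abstract does not specify a concrete data model for such a network. As a minimal sketch of how services and typed relationships between them could be represented and analysed, assuming Python with networkx (all service names, providers, and relationship types below are hypothetical and not taken from the *-USDL work):

        # Minimal sketch: services as nodes, typed relationships as edges.
        # Assumes networkx; every name and relation type here is hypothetical.
        import networkx as nx

        ossn = nx.DiGraph()

        # Services annotated with descriptive metadata.
        ossn.add_node("OrderService", provider="ShopCo", domain="retail")
        ossn.add_node("PaymentService", provider="AcmePay", domain="finance")
        ossn.add_node("ShippingService", provider="FastShip", domain="logistics")

        # Rich, typed relationships between services.
        ossn.add_edge("OrderService", "PaymentService", relation="depends_on")
        ossn.add_edge("OrderService", "ShippingService", relation="composes")

        # Simple service network analysis, e.g. which services are most depended upon.
        print(sorted(ossn.in_degree(), key=lambda kv: kv[1], reverse=True))

    Aggregating such graphs across many marketplaces is what would enable the network-level analysis, management, and control mentioned above.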

    Impact and Challenges of Software in 2025: Collected Papers

    Today (2014), software is the key ingredient of most products and services. Software generates innovation and progress in many modern industries. Software is an indispensable element of evolution, of quality of life, and of our future. Software development is (slowly) evolving from a craft to an industrial discipline. Software, and the ability to efficiently produce and evolve high-quality software, is the single most important success factor for many highly competitive industries. Software technology, development methods and tools, and applications in more and more areas are rapidly evolving. The impact of software in 2025 on nearly all areas of life, work, relationships, culture, and society is expected to be massive. The question of the future of software is therefore important but, like all predictions, quite difficult to answer. Some market forces, industrial developments, social needs, and technology trends are visible today. How will they develop and influence the software we will have in 2025? The collected papers are: Impact of Heterogeneous Processor Architectures and Adaptation Technologies on the Software of 2025 (Kay Bierzynski); Facing Future Software Engineering Challenges by Means of Software Product Lines (David Gollasch); Capabilities of Digital Search and Impact on Work and Life in 2025 (Christina Korger); Transparent Components for Software Systems (Paul Peschel); Functionality, Threats and Influence of Ubiquitous Personal Assistants with Regard to the Society (Jonas Rausch); Evolution-driven Changes of Non-Functional Requirements and Their Architecture (Hendrik Schön).

    Towards Exascale Scientific Metadata Management

    Advances in technology and computing hardware are enabling scientists from all areas of science to produce massive amounts of data using large-scale simulations or observational facilities. In this era of data deluge, effective coordination between the data production and the analysis phases hinges on the availability of metadata that describe the scientific datasets. Existing workflow engines have been capturing a limited form of metadata to provide provenance information about the identity and lineage of the data. However, much of the data produced by simulations, experiments, and analyses still needs to be annotated manually in an ad hoc manner by domain scientists. Systematic and transparent acquisition of rich metadata becomes a crucial prerequisite to sustain and accelerate the pace of scientific innovation. Yet, a ubiquitous and domain-agnostic metadata management infrastructure that can meet the demands of extreme-scale science is conspicuous by its absence. To address this gap in scientific data management research and practice, we present our vision for an integrated approach that (1) automatically captures and manipulates information-rich metadata while the data is being produced or analyzed and (2) stores metadata within each dataset so that it permeates metadata-oblivious processes and can be queried through established and standardized data access interfaces. We motivate the need for the proposed integrated approach using applications from plasma physics, climate modeling, and neuroscience, and then discuss research challenges and possible solutions.
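    As a rough illustration of point (2), metadata embedded in the dataset itself and queried through a standard data access interface, here is a sketch using HDF5 via h5py; the dataset name, attribute keys, and values are hypothetical and not taken from the paper:

        # Sketch: metadata stored inside the dataset so it travels with the data.
        # Assumes h5py and NumPy; all names and values are illustrative only.
        import numpy as np
        import h5py

        with h5py.File("plasma_run_001.h5", "w") as f:
            dset = f.create_dataset("electron_density", data=np.random.rand(64, 64))
            # Rich metadata captured at production time and embedded in the dataset,
            # so metadata-oblivious tools copy it along with the data.
            dset.attrs["producing_code"] = "hypothetical-PIC-solver"
            dset.attrs["timestep"] = 1200
            dset.attrs["units"] = "m^-3"

        # Later, the metadata is queried through the same standard interface as the data.
        with h5py.File("plasma_run_001.h5", "r") as f:
            print(dict(f["electron_density"].attrs))

    Because the attributes live in the same file and are read through the same HDF5 calls as the data, any tool that copies or serves the dataset carries the metadata along with it.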

    Management and Visualisation of Non-linear History of Polygonal 3D Models

    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, among other things, ensuring the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes in remote locations. However, the existing web-based technologies are not yet fully exploiting the modern design patterns for access to and management of alternative shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented Not Only SQL (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of the HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it demonstrates the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool, therefore, infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies which suggest that they are usable by end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of domain-specific 3D version control.
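    To make the idea of "polymorphic documents composed of key-value pairs" concrete, here is a minimal sketch of storing heterogeneous scene-graph nodes and a revision record in MongoDB via pymongo; the field names and revision layout are illustrative assumptions, not 3D Repo's actual schema:

        # Sketch: scene-graph nodes as schemaless documents plus a revision record.
        # Assumes pymongo and a MongoDB instance at localhost; the schema is hypothetical.
        from pymongo import MongoClient
        from uuid import uuid4

        client = MongoClient("mongodb://localhost:27017")
        db = client["example_3d_repo"]

        # A mesh node and a transformation node live in the same collection even
        # though their key-value structures differ (polymorphic documents).
        mesh_id = uuid4().hex
        db.scene.insert_one({
            "_id": mesh_id,
            "type": "mesh",
            "vertices": [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
            "faces": [[0, 1, 2]],
        })
        db.scene.insert_one({
            "_id": uuid4().hex,
            "type": "transformation",
            "matrix": [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
            "children": [mesh_id],
        })

        # A revision document records which node versions belong to this commit,
        # supporting the delta-based, non-linear history described above.
        db.revisions.insert_one({
            "author": "alice",
            "message": "add triangle",
            "parents": [],
            "current": [mesh_id],
        })

    Because documents in a collection need not share a schema, mesh, material, and transformation nodes can coexist, which is what makes per-node deltas and a non-linear history practical.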

    Generative Artificial Intelligence for Software Engineering -- A Research Agenda

    Generative Artificial Intelligence (GenAI) tools have become increasingly prevalent in software development, offering assistance to various managerial and technical project activities. Notable examples of these tools include OpenAI's ChatGPT, GitHub Copilot, and Amazon CodeWhisperer. Although many recent publications have explored and evaluated the application of GenAI, a comprehensive understanding of the current development, applications, limitations, and open challenges is still lacking. In particular, we do not have an overall picture of the current state of GenAI technology in practical software engineering usage scenarios. We conducted a literature review and focus groups over a period of five months to develop a research agenda on GenAI for Software Engineering. We identified 78 open Research Questions (RQs) in 11 areas of Software Engineering. Our results show that GenAI adoption can be explored for partial automation and decision-making support in all software development activities. While the current literature is skewed toward software implementation, quality assurance and software maintenance, other areas, such as requirements engineering, software design, and software engineering education, need further research attention. Common considerations when implementing GenAI include industry-level assessment, dependability and accuracy, data accessibility, transparency, and sustainability aspects associated with the technology. GenAI is bringing significant changes to the field of software engineering. Nevertheless, the state of research on the topic remains immature. We believe that this research agenda holds significance and practical value for informing both researchers and practitioners about current applications and guiding future research.

    AI-native Interconnect Framework for Integration of Large Language Model Technologies in 6G Systems

    The evolution towards 6G architecture promises a transformative shift in communication networks, with artificial intelligence (AI) playing a pivotal role. This paper delves deep into the seamless integration of Large Language Models (LLMs) and Generative Pre-trained Transformers (GPTs) within 6G systems. Their ability to grasp intent, strategize, and execute intricate commands will be pivotal in redefining network functionalities and interactions. Central to this is the AI Interconnect framework, designed to facilitate AI-centric operations within the network. Building on the continuously evolving state of the art, we present a new architectural perspective for the upcoming generation of mobile networks in which LLMs and GPTs collaboratively take center stage alongside traditional pre-generative AI and machine learning (ML) algorithms. This union melds tried-and-tested methods with transformative AI technologies. Along with providing a conceptual overview of this evolution, we delve into the nuances of practical applications arising from such an integration. Through this paper, we envisage a symbiotic integration where AI becomes the cornerstone of the next-generation communication paradigm, offering insights into the structural and functional facets of an AI-native 6G network.

    How to Create an Innovation Accelerator

    Too many policy failures are fundamentally failures of knowledge. This has become particularly apparent during the recent financial and economic crisis, which is questioning the validity of mainstream scholarly paradigms. We propose to pursue a multi-disciplinary approach and to establish new institutional settings which remove or reduce obstacles impeding efficient knowledge creation. We provide suggestions on (i) how to modernize and improve the academic publication system, and (ii) how to support scientific coordination, communication, and co-creation in large-scale multi-disciplinary projects. Both constitute important elements of what we envision to be a novel ICT infrastructure called "Innovation Accelerator" or "Knowledge Accelerator".
    Comment: 32 pages, Visioneer White Paper, see http://www.visioneer.ethz.c

    Vision Science and Technology at NASA: Results of a Workshop

    A broad review is given of vision science and technology within NASA. The subject is defined and its applications in both NASA and the nation at large are noted. A survey of current NASA efforts is given, noting strengths and weaknesses of the NASA program.