
    Forecasting the Spreading of Technologies in Research Communities

    Technologies such as algorithms, applications, and formats are an important part of the knowledge produced and reused in the research process. Typically, a technology is expected to originate in the context of a research area and then spread and contribute to several other fields. For example, Semantic Web technologies have been successfully adopted by a variety of fields, e.g., Information Retrieval, Human Computer Interaction, Biology, and many others. Unfortunately, the spreading of technologies across research areas may be a slow and inefficient process, since it is easy for researchers to be unaware of potentially relevant solutions produced by other research communities. In this paper, we hypothesise that it is possible to learn typical technology propagation patterns from historical data and to exploit this knowledge (i) to anticipate where a technology may be adopted next and (ii) to alert relevant stakeholders about emerging and relevant technologies in other fields. To do so, we propose the Technology-Topic Framework, a novel approach which uses a semantically enhanced technology-topic model to forecast the propagation of technologies to research areas. A formal evaluation of the approach on a set of technologies in the Semantic Web and Artificial Intelligence areas has produced excellent results, confirming the validity of our solution.
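
    The core idea, learning propagation patterns from historical adoption data and using them to rank candidate research areas, can be illustrated with a minimal sketch. The adoption histories, topic names, and scoring rule below are hypothetical simplifications for illustration, not the actual Technology-Topic Framework.

        from collections import Counter, defaultdict

        # Hypothetical historical record: for each technology, the ordered list of
        # research areas (topics) that adopted it, by year of first adoption.
        history = {
            "semantic_web": ["information_retrieval", "hci", "biology"],
            "deep_learning": ["computer_vision", "information_retrieval", "biology"],
        }

        # Learn simple propagation patterns: how often topic B followed topic A
        # across all technologies' adoption sequences.
        transitions = defaultdict(Counter)
        for areas in history.values():
            for src, dst in zip(areas, areas[1:]):
                transitions[src][dst] += 1

        def forecast_next(adopted_so_far, top_k=3):
            """Rank not-yet-adopted topics by how often they historically
            followed the topics that already adopted the technology."""
            scores = Counter()
            for topic in adopted_so_far:
                for nxt, count in transitions[topic].items():
                    if nxt not in adopted_so_far:
                        scores[nxt] += count
            return scores.most_common(top_k)

        # A new technology currently adopted only by Information Retrieval:
        print(forecast_next(["information_retrieval"]))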

    Toward a Unified Description of Battery Data

    Battery research initiatives and giga-scale production generate an abundance of diverse data spanning myriad fields of science and engineering. Modern battery development is driven by the confluence of traditional domains of natural science with emerging fields like artificial intelligence and the vast engineering and logistical knowledge needed to sustain the global reach of battery Gigafactories. Despite the unprecedented volume of dedicated research targeting affordable, high-performance, and sustainable battery designs, these endeavours are held back by the lack of common battery data and vocabulary standards, as well as machine-readable tools to support interoperability. An ontology is a data model that represents domain knowledge as a map of concepts and the relations between them. A battery ontology offers an effective means to unify battery-related activities across different fields, accelerate the flow of knowledge in both human- and machine-readable formats, and support the integration of artificial intelligence in battery development. Furthermore, a logically consistent and expansive ontology is essential to support battery digitalization and standardization efforts, such as the battery passport. This review summarizes the current state of ontology development, the need for an ontology in the battery field, and current activities to meet this need.
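
    As a concrete illustration of "a map of concepts and the relations between them", the sketch below builds a toy battery vocabulary as RDF triples with the rdflib library. The class names and namespace are invented for illustration and do not reflect any published battery ontology.

        from rdflib import Graph, Namespace, RDF, RDFS
        from rdflib.namespace import OWL

        # Hypothetical namespace; real standardization efforts define their own
        # authoritative vocabularies.
        BATT = Namespace("http://example.org/battery#")

        g = Graph()
        g.bind("batt", BATT)

        # Concepts (classes) ...
        for cls in (BATT.Cell, BATT.Electrode, BATT.Cathode, BATT.Anode, BATT.Electrolyte):
            g.add((cls, RDF.type, OWL.Class))

        # ... and the relations between them.
        g.add((BATT.Cathode, RDFS.subClassOf, BATT.Electrode))
        g.add((BATT.Anode, RDFS.subClassOf, BATT.Electrode))
        g.add((BATT.hasComponent, RDF.type, OWL.ObjectProperty))
        g.add((BATT.hasComponent, RDFS.domain, BATT.Cell))
        g.add((BATT.hasComponent, RDFS.range, BATT.Electrode))

        print(g.serialize(format="turtle"))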

    Automating Security Risk and Requirements Management for Cyber-Physical Systems

    Cyber-Physical Systems enable various modern use cases and business models such as connected vehicles, the Smart (power) Grid, or the Industrial Internet of Things. Their key characteristics of complexity, heterogeneity, and longevity make the long-term protection of these systems a demanding but indispensable task. In the physical world, the laws of physics provide a constant frame for risks and their treatment. In cyberspace, on the other hand, there is no comparable constant to counteract the erosion of security features. As a result, existing security risks can change continuously and new ones can arise. To prevent damage caused by malicious acts, it is necessary to identify high and unknown risks early and to counter them appropriately. Considering the numerous dynamic security-relevant factors requires a new level of automation in the management of security risks and requirements, one that goes beyond the current state of the art. Only in this way can an appropriate, comprehensive, and consistent level of security be achieved in the long term. This work addresses the pressing need for an automation methodology for the analysis of security risks as well as the generation and management of security requirements for Cyber-Physical Systems. The presented framework comprises three components: (1) a model-based methodology for identifying and assessing security risks; (2) methods to unify, derive, and manage security requirements; and (3) a set of tools and procedures to detect and respond to security-relevant situations. The need for protection and the appropriate rigor are determined by a security risk assessment based on graphs and security-specific modeling. Based on the model and the assessed risks, well-founded security requirements for protecting the overall system and its functionality are systematically derived and formulated in a uniform, machine-readable structure. This machine-readable structure makes it possible to propagate security requirements automatically along the supply chain and to reconcile existing capabilities efficiently with external security requirements from regulations, processes, and business partners. Despite all measures taken, a residual risk of compromise always remains and must be responded to appropriately. This residual risk is addressed by tools and processes that improve both the local and the large-scale detection, classification, and correlation of incidents. Integrating the findings from such incidents into the model often leads to updated assessments and new requirements, and improves further analyses. Finally, the presented framework is demonstrated on a current application example from the automotive domain.
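
    A minimal sketch of the model-based idea, representing the system as a graph and deriving risk scores and machine-readable requirements from it, is given below. The component names, the scoring scheme (risk = likelihood x impact, with exposure propagated along connections), and the threshold are illustrative assumptions, not the methodology developed in this work.

        import networkx as nx

        # Hypothetical system model: nodes are components, edges are data flows.
        model = nx.DiGraph()
        model.add_edge("telematics_unit", "gateway")
        model.add_edge("gateway", "brake_ecu")

        # Assumed per-component threat likelihood (0..1) and impact (1..5).
        likelihood = {"telematics_unit": 0.6, "gateway": 0.3, "brake_ecu": 0.1}
        impact = {"telematics_unit": 2, "gateway": 3, "brake_ecu": 5}

        def assess(graph):
            """Score each component, letting exposure propagate downstream:
            a component is treated as at least as exposed as its most
            exposed upstream neighbour."""
            exposure = dict(likelihood)
            for node in nx.topological_sort(graph):
                for succ in graph.successors(node):
                    exposure[succ] = max(exposure[succ], exposure[node])
            return {n: exposure[n] * impact[n] for n in graph.nodes}

        risks = assess(model)
        # Derive a machine-readable requirement for every high-risk component.
        requirements = [
            {"target": n, "requirement": "mutual authentication on all interfaces", "risk": r}
            for n, r in risks.items() if r >= 1.5
        ]
        print(requirements)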

    Contemporary analysis and architecture for a generic cloud-based sensor data management platform.

    An increasing volume of data is being generated by sensors and smart devices deployed in different areas, often far from computing facilities such as data centres. These data can be difficult to gather and process using local computing infrastructure, owing to cost and limited resources. Cloud computing provides scalable resources that are capable of addressing such problems. However, platform-independent methods of gathering and transmitting sensor data to Clouds are not widely available. This paper presents a state-of-the-art analysis of Cloud-based sensor monitoring and data gathering platforms. It discusses their strengths and weaknesses and reviews the current trends in this area. Informed by the analysis, the paper further proposes a generic conceptual architecture for achieving a platform-neutral Cloud-based sensor monitoring and data gathering platform. We also discuss the objectives, design decisions and implementation considerations for the conceptual architecture.
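
    The platform-neutral idea, wrapping readings in a generic, self-describing envelope before transmission to whichever Cloud back end is in use, might look like the sketch below. The envelope fields and the endpoint URL are assumptions for illustration, not the architecture proposed in the paper.

        import json
        import time
        import urllib.request

        def make_envelope(sensor_id, metric, value, unit):
            """Wrap a raw reading in a generic, Cloud-agnostic envelope."""
            return {
                "sensor_id": sensor_id,
                "metric": metric,
                "value": value,
                "unit": unit,
                "timestamp": time.time(),        # seconds since epoch
                "schema": "generic-sensor/1.0",  # lets any back end interpret the payload
            }

        def publish(envelope, endpoint="https://example.org/ingest"):
            """POST the envelope as JSON; the endpoint is a placeholder."""
            body = json.dumps(envelope).encode("utf-8")
            req = urllib.request.Request(
                endpoint, data=body, headers={"Content-Type": "application/json"}
            )
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.status

        if __name__ == "__main__":
            reading = make_envelope("farm-17-node-3", "soil_moisture", 0.31, "m3/m3")
            print(json.dumps(reading, indent=2))  # call publish(reading) once an endpoint exists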

    What Does this Device Do?


    Memos and Mega Projects: Applying Planners’ Perceptions of Their Software to a Framework for the Future of Planning

    Software powers the modern urban planning department. However, the majority of academic attention on software in the planning profession has focused on highly specialized land use models, ignoring the importance of the common applications that most planners rely upon throughout their workdays. For example, email's impact on planning has gone largely undiscussed in the literature despite being among the software planners use most. This report has a twofold purpose: 1) create a protocol for interviewing planners about the software they use routinely; 2) synthesize the needs and expectations of planners gathered during interviews with relevant literature on planning technologies into a framework for the future of planning software. The framework presented in this report unifies, for the first time, disparate fields of research on software related to urban planning into a single set of guidelines for developing the future of software for public agencies. It provides a research agenda for urban planning software systems that mutually strengthen one another, and a valuable conceptual overview of the diverse information systems involved in the planning profession. Eleven interviews were conducted with mid- and senior-level planners in local governments across Santa Clara County, better known around the world as Silicon Valley. Santa Clara County was selected as the study area for two reasons: well-resourced governments in the area can invest in modern planning software, and the study could test whether the stereotype of the area's technological leadership extends to its local governments. Planners were interviewed in a semi-structured format, with each interview adjusted based on a short survey about the software most used in the individual's professional role. Key findings from the interviews that informed the framework include:
    - Planners in local governments in Silicon Valley are transitioning to modern software tools, such as electronic plan review and permit management systems; there is no special technological advantage among the area's public agencies.
    - Planners were eager to fully implement and adopt the software features available to them, particularly features that would improve communication about project status with applicants.
    - Planners were unafraid of software automation. The limited automation features available in electronic plan review systems were yet to be fully implemented, and planners embraced their time-saving potential.
    - The volume of email burdened interviewees, drawing attention to the significance of generalized productivity software in the practice of planning.
    - Planners had no immediate need for “big data,” despite its recognized importance in the urban planning technology literature.

    Perceptions from planners about the software they use informed key problems and set goals for the framework developed here. Extensive research into emerging software targeting the construction and engineering trades with relevance to planners, as well as software designed to assist creative knowledge workers, informed the development of the framework. Features of the framework include:
    - A planning data model that underpins land use codes, development guidelines, and planning department procedures, providing machine-readable logic for rule-based systems in email, project tracking, permit management, electronic plan review, and staff reports (a minimal sketch follows this abstract);
    - Template-based and data-type-aware word processing that encodes standardized practices for writing documents and requires numeric data to be stored and represented as such;
    - Electronic plan review systems that assist in checking both objective zoning codes and subjective design guidelines using a generalized, adaptable rule language;
    - Integrated BIM-GIS supporting both the plan review and permit management processes by organizing and visualizing spatial and physical data about the built environment; and
    - Predictable, structured times to respond to email from applicants and the public, and process-integrated calendars that recover time for focusing on long-term planning efforts.

    The generalized productivity software that planners have been using for over thirty years is inadequate for the predicted era of big data generated by networked urban environments. Excel is not designed to support real-time analytics, Word is not designed to assist in describing or associating analytics with textual information, and no application has yet been designed to visualize or organize such data for engaging the public. This framework gives planners and researchers of planning technology insight into the range of software used by planners and a basis for developing an innovative class of software fit for stewarding the cities of the coming century.
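
    One element of the framework, a machine-readable planning data model that rule-based plan-review tools can check against, can be sketched as below. The zone names, limits, and project values are invented for illustration and are not drawn from the report.

        # A toy machine-readable zoning rule set and an automated check against it.
        ZONING_RULES = {
            "R-1": {"max_height_m": 9.0, "min_front_setback_m": 6.0, "max_far": 0.45},
            "C-2": {"max_height_m": 18.0, "min_front_setback_m": 3.0, "max_far": 2.0},
        }

        def review(project):
            """Return a list of objective code violations for a proposed project."""
            rules = ZONING_RULES[project["zone"]]
            findings = []
            if project["height_m"] > rules["max_height_m"]:
                findings.append(f"height {project['height_m']} m exceeds {rules['max_height_m']} m limit")
            if project["front_setback_m"] < rules["min_front_setback_m"]:
                findings.append(f"front setback below {rules['min_front_setback_m']} m minimum")
            if project["far"] > rules["max_far"]:
                findings.append(f"FAR {project['far']} exceeds {rules['max_far']}")
            return findings

        proposal = {"zone": "R-1", "height_m": 10.5, "front_setback_m": 6.5, "far": 0.4}
        print(review(proposal) or ["no objective violations found"])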

    Intelligent Management and Efficient Operation of Big Data

    This chapter details how Big Data can be used and implemented in networking and computing infrastructures. Specifically, it addresses three main aspects: the timely extraction of relevant knowledge from heterogeneous, and very often unstructured, large data sources; the enhancement of the performance of processing and networking (cloud) infrastructures, which are the most important foundational pillars of Big Data applications and services; and novel ways to efficiently manage network infrastructures with high-level composed policies for supporting the transmission of large amounts of data with distinct requirements (video vs. non-video). A case study involving an intelligent management solution to route data traffic with diverse requirements in a wide-area Internet Exchange Point is presented, discussed in the context of Big Data, and evaluated. (In: Handbook of Research on Trends and Future Directions in Big Data and Web Intelligence, IGI Global, 201)
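
    A minimal sketch of the high-level policy idea, classifying flows (video vs. non-video) and mapping each class to a forwarding decision, is given below. The policy table, port-based classifier, and link names are invented for illustration and do not reproduce the chapter's Internet Exchange Point solution.

        # Hypothetical high-level policy: traffic class -> preferred outgoing link.
        POLICY = {
            "video":     {"link": "high-capacity-path", "min_bandwidth_mbps": 50},
            "non-video": {"link": "best-effort-path",   "min_bandwidth_mbps": 1},
        }

        # Very rough port-based classifier, purely for illustration.
        VIDEO_PORTS = {1935, 554, 8554}

        def classify(flow):
            return "video" if flow["dst_port"] in VIDEO_PORTS else "non-video"

        def route(flow):
            """Pick a forwarding decision from the composed policy."""
            decision = POLICY[classify(flow)]
            return {"flow": flow["id"], **decision}

        flows = [
            {"id": "f1", "dst_port": 1935, "bytes": 10**9},  # RTMP video stream
            {"id": "f2", "dst_port": 443,  "bytes": 10**6},  # generic HTTPS
        ]
        for f in flows:
            print(route(f))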

    DRIVER Technology Watch Report

    This report is part of the Discovery Workpackage (WP4) and is the third of four deliverables. The objective of this report is to give an overview of the latest technical developments in the world of digital repositories, digital libraries and beyond, in order to serve as theoretical and practical input for the technical DRIVER developments, especially those focused on enhanced publications. The report consists of two main parts: one part focuses on interoperability standards for enhanced publications; the other comprises three subchapters that give a landscape picture of current and emerging technologies and communities crucial to DRIVER, covering GRID, CRIS and LTP. Every chapter contains a theoretical explanation, followed by case studies and the outcomes and opportunities for DRIVER in this field.
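
    Repository interoperability of the kind surveyed here typically builds on simple HTTP protocols such as OAI-PMH. The sketch below assembles a standard ListRecords request; the repository base URL is a placeholder, and the snippet is only a generic illustration under that assumption, not part of the DRIVER infrastructure.

        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        # Placeholder endpoint; any OAI-PMH-compliant repository would work here.
        BASE_URL = "https://repository.example.org/oai"

        def list_records(metadata_prefix="oai_dc", set_spec=None):
            """Build and issue a standard OAI-PMH ListRecords request."""
            params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
            if set_spec:
                params["set"] = set_spec
            url = BASE_URL + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url, timeout=10) as resp:
                return ET.fromstring(resp.read())  # XML envelope defined by OAI-PMH

        if __name__ == "__main__":
            # Contacting the placeholder repository would fail, so only print the URL.
            print(BASE_URL + "?" + urllib.parse.urlencode(
                {"verb": "ListRecords", "metadataPrefix": "oai_dc"}))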

    The interaction of lean and building information modeling in construction

    Lean construction and Building Information Modeling are quite different initiatives, but both are having profound impacts on the construction industry. A rigorous analysis of the myriad specific interactions between them indicates that a synergy exists which, if properly understood in theoretical terms, can be exploited to improve construction processes beyond the degree to which they might be improved by applying either of these paradigms independently. Using a matrix that juxtaposes BIM functionalities with prescriptive lean construction principles, fifty-six interactions have been identified, all but four of which represent constructive interaction. Although evidence for the majority of these has been found, the matrix is not considered complete, but rather a framework for research to explore the degree of validity of the interactions. Construction executives, managers, designers and developers of IT systems for construction can also benefit from the framework as an aid to recognizing the potential synergies when planning their lean and BIM adoption strategies.
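
    The interaction matrix at the heart of the analysis can be represented in a straightforward machine-readable form, as in the sketch below; the three example entries are invented placeholders, not the paper's actual fifty-six interactions.

        # Sparse matrix of (BIM functionality, lean principle) -> interaction sign.
        # +1 = constructive interaction, -1 = negative interaction (placeholders only).
        interactions = {
            ("4D scheduling visualisation", "reduce cycle time"): +1,
            ("clash detection", "reduce rework and variability"): +1,
            ("single detailed model", "delay commitment"): -1,
        }

        constructive = sum(1 for sign in interactions.values() if sign > 0)
        negative = sum(1 for sign in interactions.values() if sign < 0)
        print(f"{constructive} constructive, {negative} negative interactions recorded")

        # Look up all lean principles a given BIM functionality touches.
        func = "clash detection"
        print([principle for (f, principle) in interactions if f == func])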
