
    Cybersecurity in Motion: A Survey of Challenges and Requirements for Future Test Facilities of CAVs

    The way we travel is changing rapidly, and Cooperative Intelligent Transportation Systems (C-ITSs) are at the forefront of this evolution. However, the adoption of C-ITSs introduces new risks and challenges, making cybersecurity a top priority for ensuring safety and reliability. Building on this premise, this paper introduces an envisaged Cybersecurity Centre of Excellence (CSCE) designed to bolster the research, testing, and evaluation of the cybersecurity of C-ITSs. We explore the design, functionality, and challenges of the CSCE's testing facilities, outlining the technological, security, and societal requirements. Through a thorough survey and analysis, we assess the effectiveness of these systems in detecting and mitigating potential threats, highlighting their flexibility to adapt to future C-ITSs. Finally, we identify currently unresolved challenges in various C-ITS domains, with the aim of motivating further research into the cybersecurity of C-ITSs.

    A Process Model for Continuous Public Service Improvement: Demonstrated in Local Government Context for Smart Cities.

    The new era of the smart city is accompanied by Information and Communication Technology (ICT) and many other technologies that improve the quality of life for citizens of the modern city; this, in turn, has brought immense opportunities as well as challenges for governments and organizations. Local authorities provide multiple services to citizens across different domains (e.g. transport, health, environment, housing). Citizens are involved during different stages of smart city services and provide feedback across those domains. Existing smart city initiatives provide various technological platforms for gathering citizens' feedback in order to improve the quality of the services provided to them. Even though technological developments have resulted in a higher degree of digitalization, the services provided by municipalities still need improvement. Multiple engagement platforms exist to obtain citizens' feedback for the improvement of smart city services and the transformation of public services. However, few studies consider the challenges faced by practitioners at the local level when incorporating that feedback into further service improvement. As a result, city services fail to fulfil the needs of citizens and do not meet the goals set by existing engagement platforms. Technology-oriented solutions in the public sector require a logical and structured approach to the transformation and digitalization of public services. Enterprise Architecture (EA) can provide this structured approach by offering a medium to manage change and to respond to the needs of multiple stakeholders, including citizens. Thus, this research proposes a process model, based on the guidelines of EA and collaboration with practitioners, that would assist local authorities in providing improved services to citizens and fulfilling their needs.

    Effectiveness and safety of Levofloxacin containing regimen in the treatment of Isoniazid mono-resistant pulmonary Tuberculosis: a systematic review

    Background: We aimed to determine the effectiveness and safety of the Levofloxacin-containing regimen that the World Health Organization currently recommends for the treatment of Isoniazid mono-resistant pulmonary Tuberculosis.
    Methods: Our eligibility criteria for inclusion were: randomized controlled trials or cohort studies that focused on adults with Isoniazid mono-resistant Tuberculosis (HrTB) treated with a Levofloxacin-containing regimen alongside first-line anti-tubercular drugs; a control group treated with first-line drugs without Levofloxacin; and reporting of treatment success rate, mortality, recurrence, and progression to multidrug-resistant Tuberculosis. We performed the search in MEDLINE, EMBASE, Epistemonikos, Google Scholar, and clinical trials registries. Two authors independently screened the titles/abstracts and the full texts retained after the initial screening; a third author resolved disagreements.
    Results: Our search found 4,813 records after excluding duplicates. We excluded 4,768 records after screening titles and abstracts, retaining 44 records. Subsequently, 36 articles were excluded after full-text screening, and eight appeared to have partially fulfilled the inclusion criteria. We contacted the respective authors, and none responded positively. Hence, no articles were included in the meta-analysis.
    Conclusion: We found no "quality" evidence currently available on the effectiveness and safety of Levofloxacin in treating HrTB.
    Systematic review registration: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022290333, identifier CRD42022290333.

    Towards a Digital Capability Maturity Framework for Tertiary Institutions

    Background: The Digital Capability (DC) of an institution is the extent to which the institution's culture, policies, and infrastructure enable and support digital practices (Killen et al., 2017), and maturity is the continuous improvement of those capabilities. As technology continues to evolve, it is likely to give rise to constant changes in teaching and learning, potentially disrupting Tertiary Education Institutions (TEIs) and making existing organisational models less effective. An institution's ability to adapt to continuously changing technology depends on the change in culture and leadership decisions within the individual institution. Change without structure leads to inefficiencies, as is evident across the Nigerian TEI landscape; these inefficiencies can be attributed mainly to a lack of clarity and agreement on a development structure.
    Objectives: This research aims to design a structure with a pathway to maturity, to support the continuous improvement of DC in TEIs in Nigeria and consequently improve the success of digital education programmes.
    Methods: I started by conducting a Systematic Literature Review (SLR) investigating the body of knowledge on DC, its composition, the relationships between its elements, and their respective impact on the maturity of TEIs. Findings from the review led me to investigate further the key roles instrumental in developing Digital Capability Maturity in Tertiary Institutions (DCMiTI). The results of these investigations formed the initial ideas and constructs upon which the proposed structure was built. I then explored a combination of quantitative and qualitative methods to substantiate the initial constructs and gain a deeper understanding of the relationships between elements and sub-elements. Next, I used triangulation to extend the validity of the findings by replicating the methods in a case study of TEIs in Nigeria. Finally, after using the validated constructs and knowledge base to propose a structure based on CMMI concepts, I conducted an expert panel workshop to test the model's validity.
    Results: I consolidated the body of knowledge from the SLR into a universal classification of 10 elements, each comprising sub-elements, and went on to propose a classification for DCMiTI. The elements and sub-elements in the classification indicate the success factors for digital maturity, which were also found to positively impact the ability to design, deploy, and sustain digital education. These findings were confirmed in a UK university and triangulated in a case study of Northwest Nigeria. The case study confirmed the literature findings on the status of DCMiTI in Nigeria and provided sufficient evidence to suggest that a maturity structure would be a well-suited solution to supporting DCM in the region. I therefore scoped, designed, and populated a domain-specific framework for DCMiTI, configured to support the educational landscape in Northwest Nigeria.
    Conclusion: The proposed DCMiTI framework enables TEIs to assess their maturity level across the various capability elements and to report on DCM as a whole, and it provides guidance on the criteria that must be satisfied to achieve higher levels of digital maturity. The framework received expert validation: domain experts agreed that it is well suited to developing DCMiTI and would be a valuable tool to support TEIs in delivering successful digital education. Recommendations were made for further iterations of testing, deploying the proposed framework in TEIs to confirm the extent of its generalisability and acceptability.
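    The abstract does not specify how maturity is scored, but CMMI-style frameworks are commonly operationalised as a staged assessment: each capability element receives a discrete level, and overall maturity is capped by the weakest element. The Python sketch below illustrates that general idea only; the element names and the min-based aggregation are illustrative assumptions, not the DCMiTI framework's actual scoring rules.

    ```python
    # A minimal sketch of a staged, CMMI-style maturity assessment.
    # Assumptions: the element names and the min-based aggregation are
    # hypothetical, not the DCMiTI framework's actual specification.

    MATURITY_LEVELS = {1: "Initial", 2: "Managed", 3: "Defined",
                       4: "Quantitatively Managed", 5: "Optimising"}

    def overall_maturity(element_scores: dict[str, int]) -> int:
        """Overall maturity is capped by the weakest capability element,
        mirroring the staged representation used in CMMI."""
        if any(level not in MATURITY_LEVELS for level in element_scores.values()):
            raise ValueError("every element must be scored on levels 1-5")
        return min(element_scores.values())

    scores = {"ICT Infrastructure": 3, "Leadership and Culture": 2,
              "Digital Pedagogy": 3, "Staff Digital Skills": 2}
    level = overall_maturity(scores)
    print(level, MATURITY_LEVELS[level])  # -> 2 Managed
    ```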

    Federated Data Modeling for Built Environment Digital Twins

    The digital twin (DT) approach is an enabler of data-driven decision making in architecture, engineering, construction, and operations. Various open data models that can potentially support DT developments, at different scales and in different application domains, can be found in the literature. However, many implementations are based on organization-specific information management processes and proprietary data models, hindering interoperability. This article presents the process and information management approaches developed to generate a federated open data model supporting DT applications. Business process modeling notation and transaction and interaction modeling techniques are applied to formalize the federated DT data modeling framework, which is organized in three main phases: requirements definition, federation, and validation and improvement. The proposed framework is developed following cross-disciplinary and multiscale principles, and it is validated through the development of the federated building-level DT data model for the West Cambridge Campus DT research facility. The federated data model is used to enable DT-based asset management applications at the building and built-environment levels.
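    In practice, federation of this kind usually means that each domain model keeps its own schema while agreeing on shared identifiers and a minimal common vocabulary, so that building-level applications can join records across sources instead of forcing everything into one proprietary model. The Python sketch below illustrates that principle only; the record types and fields are hypothetical and are not taken from the West Cambridge data model.

    ```python
    # Minimal sketch of federating two domain-specific records about the
    # same physical asset via a shared identifier. All names are
    # hypothetical; this is not the West Cambridge Campus DT schema.
    from dataclasses import dataclass

    @dataclass
    class BIMRecord:              # design/construction domain
        asset_id: str
        ifc_type: str

    @dataclass
    class SensorRecord:           # operations domain
        asset_id: str
        last_reading_c: float

    def federate(bim: list[BIMRecord], ops: list[SensorRecord]) -> dict:
        """Join domain models on the shared asset identifier, keeping
        each source schema intact rather than merging them into one
        proprietary model."""
        by_id: dict[str, dict] = {r.asset_id: {"bim": r} for r in bim}
        for r in ops:
            by_id.setdefault(r.asset_id, {})["ops"] = r
        return by_id

    twin = federate([BIMRecord("AHU-01", "IfcUnitaryEquipment")],
                    [SensorRecord("AHU-01", 21.4)])
    print(twin["AHU-01"])
    ```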

    Towards a human-centric data economy

    Spurred by the widespread adoption of artificial intelligence and machine learning, "data" is becoming a key production factor, comparable in importance to capital, land, or labour in an increasingly digital economy. In spite of an ever-growing demand for third-party data in the B2B market, firms are generally reluctant to share their information. This is due to the unique characteristics of "data" as an economic good (a freely replicable, non-depletable asset holding a highly combinatorial and context-specific value), which move digital companies to hoard and protect their "valuable" data assets, and to integrate across the whole value chain seeking to monopolise the provision of innovative services built upon them. As a result, most of those valuable assets remain unexploited in corporate silos today. This situation is shaping the so-called data economy around a small number of champions, and it is hampering the benefits of a global, large-scale data exchange. Some analysts have estimated the potential value of the data economy at US$2.5 trillion globally by 2025. Not surprisingly, unlocking the value of data has become a central policy of the European Union, which estimated the size of the data economy at €827 billion for the EU27 over the same period. Within the scope of the European Data Strategy, the European Commission is also steering initiatives aimed at identifying relevant cross-industry use cases involving different verticals, and at enabling sovereign data exchanges to realise them.

    Among individuals, the massive collection and exploitation of personal data by digital firms in exchange for services, often with little or no consent, has raised general concern about privacy and data protection. Apart from spurring recent legislative developments in this direction, this concern has prompted warnings about the unsustainability of the existing digital economics (few digital champions, potential negative impact on employment, growing inequality), with some proposing that people be paid for their data in a sort of worldwide data labour market as a potential solution to this dilemma [114, 115, 155]. From a technical perspective, we are far from having the technology and algorithms required to enable such a human-centric data economy. Even its scope is still blurry, and the question of the value of data is, at the least, controversial. Research works from different disciplines have studied the data value chain, different approaches to the value of data, how to price data assets, and novel data marketplace designs. At the same time, complex legal and ethical issues with respect to the data economy have arisen around privacy, data protection, and ethical AI practices.

    In this dissertation, we start by exploring the data value chain and how entities trade data assets over the Internet. We carry out what is, to the best of our understanding, the most thorough survey of commercial data marketplaces to date. In this work, we have catalogued and characterised ten different business models, including those of personal information management systems, companies born in the wake of recent data protection regulations that aim to empower end users to take control of their data. We have also identified the challenges faced by different types of entities, and the kinds of solutions and technology they use to provide their services. Then we present a first-of-its-kind measurement study that sheds light on the prices of data in the market using a novel methodology.
    We study how ten commercial data marketplaces categorise and classify data assets, and which categories of data command higher prices. We also develop classifiers for comparing data products across different marketplaces, and we study the characteristics of the most valuable data assets and the features that specific vendors use to set the price of their data products. Based on this information, and adding data products offered by another 33 data providers, we develop a regression analysis to reveal the features that correlate with the prices of data products. As a result, we also implement the basic building blocks of a novel data pricing tool capable of providing a hint of the market price of a new data product using just its metadata as input. Such a tool would bring more transparency to the prices of data products in the market, which will help in pricing data assets and in avoiding the inherent price fluctuation of nascent markets.

    Next we turn to topics related to data marketplace design. In particular, we study how buyers can select and purchase suitable data for their tasks without requiring a priori access to such data in order to make a purchase decision, and how marketplaces can distribute the payoff of a data transaction that combines data from different sources among the corresponding providers, be they individuals or firms. The difficulty of both problems is further exacerbated in a human-centric data economy, where buyers have to choose among the data of thousands of individuals, and where marketplaces have to distribute payoffs to the thousands of people contributing personal data to a specific transaction.

    Regarding the selection process, we compare different purchase strategies depending on the level of information available to data buyers at the time of making decisions. A first methodological contribution of our work is proposing a data evaluation stage prior to datasets being selected and purchased by buyers in a marketplace. We show that buyers can significantly improve the performance of the purchasing process just by being provided with a measurement of the performance of their models when trained by the marketplace with each individual eligible dataset. We design purchase strategies that exploit this functionality in an algorithm we call Try Before You Buy, and our work demonstrates over synthetic and real datasets that it can lead to near-optimal data purchasing with only O(N) model evaluations, instead of the exponential O(2^N) execution time needed to calculate the optimal purchase.

    With regard to the payoff distribution problem, we focus on computing the relative value of spatio-temporal datasets combined in marketplaces for predicting transportation demand and travel time in metropolitan areas. Using large datasets of taxi rides from Chicago, Porto and New York, we show that the value of data is different for each individual and cannot be approximated by its volume. Our results reveal that even more elaborate approaches, such as those based on the "leave-one-out" value, are inaccurate. Instead, richer and well-established notions of value from economics and game theory, such as the Shapley value, need to be employed if one wishes to capture the complex effects of mixing different datasets on the accuracy of forecasting algorithms. However, the Shapley value entails serious computational challenges: its exact calculation requires repetitively training and evaluating every combination of data sources, and hence O(N!) or O(2^N) computational time, which is unfeasible for complex models or thousands of individuals.
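    As a concrete illustration of the selection idea, here is a minimal sketch of an O(N) Try Before You Buy-style strategy as described above: the marketplace scores each eligible dataset once by training the buyer's model on it alone, and the buyer purchases the top scorers within budget. The `evaluate_single` callback is an assumed interface standing in for the marketplace-side evaluation stage, not the dissertation's actual API.

    ```python
    # Hedged sketch of an O(N) "Try Before You Buy"-style strategy.
    # `evaluate_single(dataset)` stands in for the marketplace training
    # and scoring the buyer's model on one eligible dataset; this
    # interface is an assumption, not the dissertation's actual API.

    def try_before_you_buy(datasets, evaluate_single, budget):
        """Score each of the N eligible datasets once (O(N) model
        evaluations), then buy the top scorers within budget, instead
        of evaluating all 2^N bundles to find the optimal purchase."""
        ranked = sorted(datasets, key=evaluate_single, reverse=True)
        return ranked[:budget]

    # Toy usage: the score is the model's accuracy when trained on that
    # dataset alone (numbers made up for illustration).
    accuracy = {"rides_A": 0.71, "rides_B": 0.64, "rides_C": 0.82}
    print(try_before_you_buy(list(accuracy), accuracy.get, budget=2))
    # -> ['rides_C', 'rides_A']
    ```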
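    On the payoff side, the standard way around this cost is Monte Carlo permutation sampling, which yields an unbiased Shapley estimate from a fixed number of sampled orderings. The sketch below shows the generic estimator, not the dissertation's specific implementation; `utility(coalition)` is an assumed stand-in for training and scoring the forecasting model on a given subset of data sources.

    ```python
    import random

    # Generic Monte Carlo (permutation-sampling) Shapley estimator -- a
    # standard technique, not necessarily the dissertation's method.
    # `utility(coalition)` is an assumed stand-in for training and
    # scoring the forecasting model on that subset of data sources.

    def shapley_estimates(sources, utility, num_permutations=200):
        """Average each source's marginal contribution over random
        orderings; costs O(num_permutations * N) utility evaluations
        instead of the O(N!) / O(2^N) exact computation."""
        values = {s: 0.0 for s in sources}
        for _ in range(num_permutations):
            order = random.sample(sources, len(sources))  # random ordering
            coalition = set()
            prev = utility(frozenset(coalition))          # empty coalition
            for s in order:
                coalition.add(s)
                cur = utility(frozenset(coalition))
                values[s] += cur - prev                   # marginal gain
                prev = cur
        return {s: v / num_permutations for s, v in values.items()}
    ```

    Each sampled permutation still costs N utility evaluations, i.e. N model trainings; the entropy and similarity heuristics discussed next aim to avoid even that cost by predicting the Shapley value from cheap per-source statistics.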
    Moreover, our work paves the way to new methods of measuring the value of spatio-temporal data. We identify heuristics, such as entropy or similarity to the average, that show a significant correlation with the Shapley value and can therefore be used to overcome the significant computational challenges posed by Shapley approximation algorithms in this specific context.

    We conclude with a number of open issues and propose further research directions that leverage the contributions and findings of this dissertation. These include monitoring data transactions to better measure data markets, and complementing market data with actual transaction prices to build a more accurate data pricing tool. A human-centric data economy would also require that the contributions of thousands of individuals to machine learning tasks be calculated daily. For that to be feasible, we need to further optimise the efficiency of the data purchasing and payoff calculation processes in data marketplaces. In that direction, we also point to alternatives to repetitively training and evaluating a model when selecting data with Try Before You Buy and when approximating the Shapley value. Finally, we discuss the challenges and potential technologies that could help to build a federation of standardised data marketplaces.

    The data economy will develop fast in the upcoming years, and researchers from different disciplines will work together to unlock the value of data and make the most of it. Maybe the proposal that people get paid for their data and their contribution to the data economy will finally fly, or maybe other proposals, such as the robot tax, will ultimately be used to balance the power between individuals and tech firms in the digital economy. Still, we hope our work sheds light on the value of data and contributes to making the price of data more transparent and, eventually, to moving towards a human-centric data economy.

    This work has been supported by IMDEA Networks Institute. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Thesis committee: President: Georgios Smaragdakis; Secretary: Ángel Cuevas Rumín; Member: Pablo Rodríguez Rodríguez.

    Interoperability framework of virtual factory and business innovation

    Interoperability framework of virtual factory and business innovation.
    Task T5.1: Design a common schema and schema-evolution framework for supporting interoperability.
    Task T5.2: Design an interoperability framework for supporting data/information transformation, service composition, and business process cooperation among partners.
    A draft version is envisioned for month 44; it will be updated to reflect incremental changes driven by the other work packages for deliverable 7 at month 72.