1,441 research outputs found

    Service scheduling strategy for microservice and heterogeneous multi-cores-based edge computing apparatus in smart grids with high renewable energy penetration

    The microservice-based organization of smart grid services (SGSs) and the heterogeneous multi-cores-based supply of computing resources are the development direction of edge computing in smart grids with high penetration of renewable energy sources and a high degree of market orientation. However, their application also challenges service scheduling for the edge computing apparatus (ECA), the physical carrier of edge computing. In traditional SGS scheduling strategies, an SGS usually corresponds to an independent application or component, and the heterogeneous multi-core computing environment is not considered, making it difficult to cope with these challenges. In this paper, we propose an SGS scheduling strategy for the ECA. Specifically, we first present an SGS scheduling framework for the ECA and identify the essential elements of SGS scheduling. Then, considering the deadline and importance attributes of SGSs, we propose a microservice scheduling prioritization module. On this basis, the inset-based method is used to allocate microservice tasks to the heterogeneous multi-cores to utilize computing resources efficiently and reduce service response time. Furthermore, we design a scheduling-unit dividing module to balance the delay requirements between services with early arrival times and services with high importance in high-concurrency scenarios. An emergency mechanism (EM) is also presented for the timely completion of urgent SGSs. Finally, the effectiveness of the proposed service scheduling strategy is verified in a typical SGS scenario in a smart distribution transformer area.
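The deadline-and-importance prioritization and the emergency mechanism described above can be sketched as follows. The scoring formula, the weight `w`, and the task names are illustrative assumptions for this sketch, not the paper's actual design.

```python
import heapq

def priority(deadline, importance, w=0.5):
    # Illustrative score: an earlier deadline and a higher importance both
    # raise priority (lower value = served first). The paper's actual
    # prioritizing formula is not reproduced here.
    return w * deadline - (1 - w) * importance

def schedule(tasks):
    # tasks: (name, deadline, importance, urgent) tuples
    heap, order = [], []
    for name, deadline, importance, urgent in tasks:
        if urgent:
            # emergency mechanism: urgent SGSs bypass the normal queue
            order.append(name)
        else:
            heapq.heappush(heap, (priority(deadline, importance), name))
    while heap:
        order.append(heapq.heappop(heap)[1])
    return order

ran = schedule([
    ("metering",   10, 1, False),
    ("protection",  2, 5, True),   # urgent: runs first
    ("forecast",    5, 3, False),
])
```

Here the urgent task runs first, and the remaining tasks follow in order of their combined deadline/importance score.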

    Insights into software development approaches: mining Q &A repositories

    © 2023 The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/. Context: Software practitioners adopt approaches like DevOps, Scrum, and Waterfall for high-quality software development. However, limited research has been conducted on exploring software development approaches concerning practitioners' discussions on Q&A forums. Objective: We conducted an empirical study to analyze developers' discussions on Q&A forums to gain insights into software development approaches in practice. Method: We analyzed 13,903 developers' posts across Stack Overflow (SO), Software Engineering Stack Exchange (SESE), and Project Management Stack Exchange (PMSE) forums. A mixed-method approach, consisting of a topic modeling technique (Latent Dirichlet Allocation (LDA)) and qualitative analysis, was used to identify frequently discussed topics of software development approaches, trends (popular and difficult topics), and the challenges faced by practitioners in adopting different software development approaches. Findings: We identified 15 frequently mentioned software development approach topics on Q&A sites and observed an increase in trends for the top-3 most difficult topics, which require more attention. Finally, our study identified 49 challenges faced by practitioners while deploying various software development approaches, and we subsequently created a thematic map to represent these findings. Conclusions: The study findings serve as a useful resource for practitioners to overcome challenges, stay informed about current trends, and ultimately improve the quality of the software products they develop. Peer reviewed.
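The LDA step of the mixed-method pipeline can be illustrated with a stdlib-only collapsed Gibbs sampler. The study used standard LDA tooling; this miniature, with invented example documents, is only a sketch of the underlying technique.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, k, iters=200, alpha=0.1, beta=0.01, seed=0):
    # docs: list of token lists; k: number of topics.
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})            # vocabulary size
    ndk = [[0] * k for _ in docs]                    # doc-topic counts
    nkw = [defaultdict(int) for _ in range(k)]       # topic-word counts
    nk = [0] * k                                     # topic totals
    z = []                                           # assignment per token
    for di, d in enumerate(docs):                    # random initialization
        zd = []
        for w in d:
            t = rng.randrange(k)
            zd.append(t)
            ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
        z.append(zd)
    for _ in range(iters):                           # collapsed Gibbs sweeps
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]
                ndk[di][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # full conditional p(topic | everything else)
                ps = [(ndk[di][j] + alpha) * (nkw[j][w] + beta)
                      / (nk[j] + V * beta) for j in range(k)]
                r, acc = rng.random() * sum(ps), 0.0
                for j, p in enumerate(ps):
                    acc += p
                    if r <= acc:
                        t = j
                        break
                z[di][wi] = t
                ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return z, nkw

docs = [["sprint", "backlog", "scrum", "sprint"],
        ["pipeline", "deploy", "build", "pipeline"],
        ["scrum", "standup", "sprint"]]
assignments, topic_words = lda_gibbs(docs, k=2)
```

In the actual study, the resulting topic-word distributions were inspected and labeled qualitatively to name the 15 topics.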

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Spectrum auctions: designing markets to benefit the public, industry and the economy

    Access to the radio spectrum is vital for modern digital communication. It is an essential component for smartphone capabilities, the Cloud, the Internet of Things, autonomous vehicles, and multiple other new technologies. Governments use spectrum auctions to decide which companies should use which parts of the radio spectrum. Successful auctions can fuel rapid innovation in products and services, unlock substantial economic benefits, build comparative advantage across all regions, and create billions of dollars of government revenue. Poor auction strategies can leave bandwidth unsold and delay innovation, sell national assets to firms too cheaply, or create uncompetitive markets with high mobile prices and patchy coverage that stifle economic growth. Corporate bidders regularly complain that auctions raise their costs, while government critics argue that insufficient revenues are raised. The cross-national record shows many examples of both highly successful auctions and miserable failures. Drawing on experience from the UK and other countries, senior regulator Geoffrey Myers explains how to optimise the regulatory design of auctions, from initial planning to final implementation. Spectrum Auctions offers unrivalled expertise for regulators and economists engaged in practical auction design or company executives planning bidding strategies. For applied economists, teachers, and advanced students, this book provides unrivalled insights into market design and public management. Providing clear analytical frameworks, case studies of auctions, and stage-by-stage advice, it is essential reading for anyone interested in designing successful, public-interest spectrum auctions.
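A much-simplified sketch of one common spectrum-auction format is a uniform-price ascending clock auction: the price rises each round until aggregate demand no longer exceeds the number of licences on offer. The bidders and demand schedules below are invented, and real designs (e.g. SMRA or combinatorial clock auctions) are far richer.

```python
def clock_auction(demand, supply, start=1.0, step=1.0):
    # demand: {bidder: fn(price) -> units demanded at that price}
    # Assumes each demand schedule is non-increasing in price and that
    # total demand eventually fits the supply, so the loop terminates.
    price = start
    while True:
        bids = {b: f(price) for b, f in demand.items()}
        if sum(bids.values()) <= supply:
            return price, bids   # clearing price and final demands
        price += step            # excess demand: raise the clock price

# Two invented bidders competing for 3 identical licences.
price, allocation = clock_auction(
    {"A": lambda p: 2 if p < 3 else 1,
     "B": lambda p: 2 if p < 5 else 1},
    supply=3,
)
```

Here bidder A drops to one licence at price 3, at which point demand equals supply and the clock stops.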

    Efficient concurrent data structure access parallelism techniques for increasing scalability

    Multi-core processors have revolutionised the way data structures are designed by bringing parallelism to mainstream computing. Key to exploiting the hardware parallelism available in multi-core processors are concurrent data structures. However, some concurrent data structure abstractions are inherently sequential and incapable of harnessing the parallelism performance of multi-core processors. Designing and implementing concurrent data structures to harness hardware parallelism is challenging due to the requirements of correctness, efficiency, and practicability under various application constraints. In this thesis, our research contribution is towards improving concurrent data structure access parallelism to increase data structure performance. We propose new design frameworks that improve the access parallelism of already existing concurrent data structure designs. Also, we propose new concurrent data structure designs with significant performance improvements. To give an insight into the interplay between hardware and concurrent data structure access parallelism, we give a detailed analysis and model the performance scalability with varying parallelism. In the first part of the thesis, we focus on data structure semantic relaxation. By relaxing the semantics of a data structure, a bigger design space, one that allows weaker synchronization and more useful parallelism, is unveiled. Investigating new data structure designs capable of trading semantics for better performance in a monotonic way is a major challenge in the area. We algorithmically address this challenge in this part of the thesis. We present an efficient, lock-free, concurrent data structure design framework for out-of-order semantic relaxation. We introduce a new two-dimensional algorithmic design that uses multiple instances of a given data structure to improve access parallelism.
In the second part of the thesis, we propose an efficient priority queue that improves access parallelism by reducing the number of synchronization points for each operation. Priority queues are fundamental abstract data types, often used to manage limited resources in parallel systems. Typical parallel priority queue implementations are based on heaps or skip lists. In recent literature, skip lists have been shown to be the most efficient design choice for implementing priority queues. Though numerous intricate implementations of skip-list-based queues have been proposed in the literature, their performance is constrained by the high number of global atomic updates per operation and high memory consumption, both of which are proportional to the number of sub-lists in the queue. In this part of the thesis, we propose an alternative approach for designing lock-free linearizable priority queues that significantly improves memory efficiency and throughput by reducing the number of global atomic updates and the memory consumption compared to skip-list-based queues. To achieve this, our new design combines two structures, a search tree and a linked list, forming what we call a Tree Search List Queue (TSLQueue). Subsequently, we analyse and introduce a model for lock-free concurrent data structure access parallelism. The major impediment to scaling concurrent data structures is memory contention when accessing shared data structure access points, leading to thread serialisation and hindering parallelism. Aiming to address this challenge, a significant amount of work in the literature has proposed multi-access techniques that improve concurrent data structure parallelism. However, there is little work on analysing and modelling the execution behaviour of concurrent multi-access data structures, especially in a shared memory setting.
In this part of the thesis, we analyse and model the general execution behaviour of concurrent multi-access data structures in the shared memory setting. We study and analyse the behaviour of the two popular random access patterns, shared (remote) and exclusive (local) access, and the behaviour of the two atomic primitives most commonly used for designing lock-free data structures: Compare-and-Swap and Fetch-and-Add.
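The multi-instance, contention-spreading idea behind the two-dimensional design can be caricatured in a few lines: a relaxed queue that spreads operations over several sub-queues, trading strict FIFO order for fewer collisions on any single access point. This is a single-threaded, illustrative sketch, not the thesis's lock-free algorithm; a concurrent version would replace the sub-queues with lock-free queues and the length reads with approximate counters.

```python
import random
from collections import deque

class RelaxedQueue:
    # Out-of-order relaxed FIFO sketch: items may be dequeued slightly out
    # of order in exchange for spreading accesses across `width` sub-queues.
    def __init__(self, width=4, seed=0):
        self.subs = [deque() for _ in range(width)]
        self.rng = random.Random(seed)

    def enqueue(self, item):
        # power-of-two-choices: append to the shorter of two random sub-queues
        a, b = self.rng.sample(range(len(self.subs)), 2)
        target = a if len(self.subs[a]) <= len(self.subs[b]) else b
        self.subs[target].append(item)

    def dequeue(self):
        # take from the longer of two random sub-queues to bound the lag
        a, b = self.rng.sample(range(len(self.subs)), 2)
        src = a if len(self.subs[a]) >= len(self.subs[b]) else b
        if not self.subs[src]:
            # fall back to any non-empty sub-queue
            src = next(i for i, q in enumerate(self.subs) if q)
        return self.subs[src].popleft()

q = RelaxedQueue()
for i in range(100):
    q.enqueue(i)
drained = [q.dequeue() for _ in range(100)]
```

Every enqueued item is eventually dequeued, but the output order is only approximately FIFO; that bounded reordering is exactly the semantic slack the framework trades for parallelism.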

    Security considerations in the open source software ecosystem

    Open source software plays an important role in the software supply chain, allowing stakeholders to utilize open source components as building blocks in their software, tooling, and infrastructure. But relying on the open source ecosystem introduces unique challenges, both in terms of security and trust and in terms of supply chain reliability. In this dissertation, I investigate the approaches, considerations, and encountered challenges of stakeholders in the context of security, privacy, and trustworthiness of the open source software supply chain. Overall, my research aims to empower and support software experts with the knowledge and resources necessary to achieve a more secure and trustworthy open source software ecosystem. In the first part of this dissertation, I describe a research study investigating security and trust practices in open source projects. I interviewed 27 owners, maintainers, and contributors from a diverse set of projects to explore their behind-the-scenes processes, guidance and policies, incident handling, and encountered challenges, finding that participants' projects are highly diverse in terms of their deployed security measures and trust processes, as well as their underlying motivations. On the consumer side of the open source software supply chain, I investigated the use of open source components in industry projects by interviewing 25 software developers, architects, and engineers to understand their projects' processes, decisions, and considerations in the context of external open source code. I found that open source components play an important role in many of the industry projects, and that most projects have some form of company policy or best practice for including external code. On the side of end-user-focused software, I present a study investigating the use of software obfuscation in Android applications, a recommended practice to protect against plagiarism and repackaging.
The study leveraged a multi-pronged approach including a large-scale measurement, a developer survey, and a programming experiment, finding that only 24.92% of apps are obfuscated by their developers, that developers do not fear theft of their own apps, and that they have difficulties obfuscating their own apps. Lastly, to involve end users themselves, I describe a survey with 200 users of cloud office suites investigating their security and privacy perceptions and expectations, with findings suggesting that users are generally aware of basic security implications but lack the technical knowledge to envision some threat models. The key findings of this dissertation are as follows. Open source projects have highly diverse security measures, trust processes, and underlying motivations, and their security and trust needs are likely best met in ways that consider their individual strengths, limitations, and project stage, especially for smaller projects with limited access to resources. Open source components play an important role in industry projects, and those projects often have some form of company policy or best practice for including external code, but developers wish for more resources to better audit included components. This dissertation emphasizes the importance of collaboration and shared responsibility in building and maintaining the open source software ecosystem, with developers, maintainers, end users, researchers, and other stakeholders alike ensuring that the ecosystem remains a secure, trustworthy, and healthy resource for everyone to rely on.

    Intelligent interface agents for biometric applications

    This thesis investigates the benefits of applying the intelligent agent paradigm to biometric identity verification systems. Multimodal biometric systems, despite their additional complexity, hold the promise of providing a higher degree of accuracy and robustness. Multimodal biometric systems are examined in this work leading to the design and implementation of a novel distributed multi-modal identity verification system based on an intelligent agent framework. User interface design issues are also important in the domain of biometric systems and present an exceptional opportunity for employing adaptive interface agents. Through the use of such interface agents, system performance may be improved, leading to an increase in recognition rates over a non-adaptive system while producing a more robust and agreeable user experience. The investigation of such adaptive systems has been a focus of the work reported in this thesis. The research presented in this thesis is divided into two main parts. Firstly, the design, development and testing of a novel distributed multi-modal authentication system employing intelligent agents is presented. The second part details design and implementation of an adaptive interface layer based on interface agent technology and demonstrates its integration with a commercial fingerprint recognition system. The performance of these systems is then evaluated using databases of biometric samples gathered during the research. The results obtained from the experimental evaluation of the multi-modal system demonstrated a clear improvement in the accuracy of the system compared to a unimodal biometric approach. The adoption of the intelligent agent architecture at the interface level resulted in a system where false reject rates were reduced when compared to a system that did not employ an intelligent interface. 
The results obtained from both systems clearly demonstrate the benefits of combining an intelligent agent framework with a biometric system to provide a more robust and flexible application.
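The accuracy gain from multimodal verification can be illustrated with a minimal score-level fusion rule. The modalities, weights, and threshold below are invented for illustration and are not the thesis's actual fusion scheme.

```python
def fused_decision(scores, weights, threshold=0.5):
    # Weighted-sum score-level fusion across biometric modalities.
    # scores: per-modality match scores in [0, 1].
    total = sum(weights.values())
    fused = sum(weights[m] * scores[m] for m in weights) / total
    return fused, fused >= threshold

# A borderline face score alone (0.45) would be rejected at threshold 0.5,
# but a strong corroborating fingerprint score pushes the fused score over.
fused, accepted = fused_decision(
    {"face": 0.45, "fingerprint": 0.80},
    {"face": 0.5, "fingerprint": 0.5},
)
```

This is the basic mechanism by which a second modality can rescue a genuine user whom a single modality would falsely reject, lowering the false reject rate.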

    Managing Data Replication and Distribution in the Fog with FReD

    The heterogeneous, geographically distributed infrastructure of fog computing poses challenges in data replication, data distribution, and data mobility for fog applications. Fog computing is still missing the necessary abstractions to manage application data, and fog application developers need to re-implement data management for every new piece of software. Proposed solutions are limited to certain application domains, such as the IoT, are not flexible in regard to network topology, or do not provide the means for applications to control the movement of their data. In this paper, we present FReD, a data replication middleware for the fog. FReD serves as a building block for configurable fog data distribution and enables low-latency, high-bandwidth, and privacy-sensitive applications. FReD is a common data access interface across heterogeneous infrastructure and network topologies, provides transparent and controllable data distribution, and can be integrated with applications from different domains. To evaluate our approach, we present a prototype implementation of FReD and show the benefits of developing with FReD using three case studies of fog computing applications.
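The replication-middleware idea can be sketched with a toy in-memory model: updates to a replicated data set are pushed to every node that replicates it, and a newly added replica catches up from an existing one. The class names and catch-up behaviour here are illustrative assumptions, not FReD's actual API.

```python
class Node:
    # One fog node with its local key-value store.
    def __init__(self, name):
        self.name = name
        self.store = {}

class ReplicaSet:
    # Sketch of middleware-managed replication: the application reads and
    # writes through the replica set, which keeps the member nodes in sync.
    def __init__(self, name):
        self.name = name
        self.replicas = []

    def add_replica(self, node):
        if self.replicas:
            node.store.update(self.replicas[0].store)  # catch-up copy on join
        self.replicas.append(node)

    def put(self, key, value):
        for node in self.replicas:   # eager push to all replicas
            node.store[key] = value

    def get(self, key):
        return self.replicas[0].store[key]  # read from any replica

edge, cloud = Node("edge-1"), Node("cloud-1")
rs = ReplicaSet("sensor-data")
rs.add_replica(edge)
rs.put("temp", 21.5)
rs.add_replica(cloud)    # cloud-1 catches up on join
rs.put("humidity", 0.4)
```

The point of pushing this logic into middleware is exactly the paper's motivation: the application reads and writes through one interface while replica placement stays configurable.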

    Serverless Automated Assessment of Programming Assignments

    This thesis explores the use of serverless technology for automating the assessment of programming assignments (AAPA). The aim of this study is to investigate the effectiveness of serverless solutions for AAPA and to explore the potential benefits and challenges of using serverless technology in learning management systems. The research questions addressed in this study are: (1) How do serverless solutions for AAPA affect the response time of grading? (2) How much does the serverless solution impact infrastructure cost? (3) What are the environmental aspects of moving from locally hosted VM-based solutions to cloud-based serverless solutions? To answer these research questions, a design science research methodology was used to design and implement a prototype solution for serverless AAPA. The solution was implemented using AWS API Gateway, SQS, DynamoDB, and Lambda, and it was evaluated through experiments with real-world programming exercises. The results of the experiments showed that the serverless solution was able to significantly improve the response time of grading programming assignments in bursts, compared to a locally hosted VM-based solution. Additionally, the serverless solution was potentially able to reduce infrastructure costs, as it only used resources when needed, and it was able to scale automatically to handle varying levels of traffic. The environmental impact of serverless technology for programming exercise assessment was also explored. The findings suggest that serverless technology can have a positive impact on the environment by reducing energy consumption, carbon emissions, and hardware lifecycle management. However, there are also limitations and challenges to using serverless technology for programming exercise assessment, such as data privacy and security.
Overall, this thesis provides a framework for future research and development in the field of serverless AAPA and highlights the potential benefits and challenges of using serverless technology in learning management systems.
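The grading path of such a solution can be sketched as a single Lambda-style handler. The event shape, the `entrypoint` convention, and the use of `exec` are illustrative assumptions; a real deployment would sandbox untrusted code and sit behind API Gateway and SQS as the thesis describes.

```python
import json

def grade_handler(event, context=None):
    # Lambda-style handler: run the submitted function against visible
    # test cases and return a score. Executing untrusted code with exec
    # is for illustration only; a real grader must sandbox submissions.
    ns = {}
    exec(event["code"], ns)
    fn = ns[event["entrypoint"]]
    results = [fn(*args) == expected for args, expected in event["tests"]]
    return {"statusCode": 200,
            "body": json.dumps({"passed": sum(results),
                                "total": len(results)})}

response = grade_handler({
    "code": "def add(a, b):\n    return a + b",
    "entrypoint": "add",
    "tests": [((1, 2), 3), ((2, 2), 4), ((0, 0), 1)],
})
```

Because each invocation is independent and stateless, many such handlers can run in parallel during submission bursts, which is the scaling property the experiments measured.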