Microservices and serverless functions – lifecycle, performance, and resource utilisation of edge-based real-time IoT analytics
Edge Computing harnesses resources close to the data sources to reduce end-to-end latency and allow real-time process automation for verticals such as Smart City, Healthcare and Industry 4.0. Edge resources are limited when compared to traditional Cloud data centres; hence the choice of proper resource management strategies in this context becomes paramount. Microservice and Function as a Service architectures support modular and agile patterns, compared to a monolithic design, through lightweight containerisation, continuous integration / deployment and scaling. The advantages brought about by these technologies may initially seem obvious, but we argue that their usage at the Edge deserves a more in-depth evaluation. By analysing both the software development and deployment lifecycle, along with performance and resource utilisation, this paper explores microservices and two alternative types of serverless functions to build edge real-time IoT analytics. In the experiments comparing these technologies, microservices generally exhibit slightly better end-to-end processing latency and resource utilisation than serverless functions. One of the serverless functions and the microservices excel at handling larger data streams with auto-scaling. Whilst serverless functions natively offer this feature, the choice of container orchestration framework may determine its availability for microservices. The other serverless function, while supporting a simpler lifecycle, is more suitable for low-invocation scenarios and faces challenges with parallel requests and inherent overhead, making it less suitable for real-time processing in demanding IoT settings.
Valuing Low Carbon Energy - Insights for Fusion Commercialisation
The proponents of nuclear fusion believe that a small modular approach has the potential to achieve a viable source of energy in timescales shorter than those projected for the large-scale multinational ITER/DEMO programme. If the numerous technical challenges can be overcome, the question still remains as to whether fusion small modular reactors (SMRs) will be commercially viable. This thesis aims to provide insight into this question and to identify whether approaches other than the generation of electricity to the grid have the potential to increase the value of a fusion SMR or a fleet of SMRs to a developer.
The work has three main components. Firstly, the Net Present Value (NPV) of a fusion SMR supplying electricity for sale to the grid in the UK was evaluated. This showed that there are combinations of electricity prices, capital cost and discount rates that will result in positive NPVs.
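The NPV evaluation described above can be sketched in a few lines. All figures used here (capital cost, revenue, operating cost, discount rate, plant lifetime) are hypothetical placeholders for illustration, not values from the thesis:

```python
def npv(capital_cost, annual_revenue, annual_opex, discount_rate, lifetime_years):
    """Net Present Value: discounted net cash flows minus the upfront capital cost."""
    discounted = sum((annual_revenue - annual_opex) / (1 + discount_rate) ** t
                     for t in range(1, lifetime_years + 1))
    return discounted - capital_cost

# Hypothetical plant: positive NPV at an 8% discount rate...
base_case = npv(capital_cost=2_000, annual_revenue=400, annual_opex=150,
                discount_rate=0.08, lifetime_years=30)
# ...but the same project turns negative at 15%.
high_rate = npv(capital_cost=2_000, annual_revenue=400, annual_opex=150,
                discount_rate=0.15, lifetime_years=30)
```

Holding cash flows fixed and varying only the discount rate can flip the sign of the NPV, which mirrors the sensitivity to discount-rate and capital-cost combinations the thesis examines.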
In the second component of the work, an existing approach to engineering flexibilities / real options has been extended and applied to the production of hydrogen from methane with carbon capture and storage. The results of this work demonstrate that the application of engineering flexibilities / real options has the potential to increase the value of a project.
In the final stage of the thesis, an engineering flexibility / real options approach has been combined with a portfolio approach to a fleet of fusion SMRs. This demonstrated that this approach has the potential to increase the value of a fleet of fusion SMRs to a developer.
The thesis has demonstrated that fusion SMRs may be commercially viable. It has also demonstrated that the use of techniques such as engineering flexibilities and portfolio theory has the potential to increase the value to a developer of a fleet of fusion SMRs based on a tokamak design.
Accountability for Misbehavior in Threshold Decryption via Threshold Traitor Tracing
A t-out-of-n threshold decryption system assigns key shares to n parties so that any t of them can decrypt a well-formed ciphertext. Existing threshold decryption systems are not secure when these parties are rational actors: an adversary can offer to pay the parties for their key shares. The problem is that a quorum of t parties, working together, can sell the adversary a decryption key that reveals nothing about the identity of the traitor parties. This provides a risk-free profit for the parties since there is no accountability for their misbehavior --- the information they sell to the adversary reveals nothing about their identity. This behavior can result in a complete break in many applications of threshold decryption, such as encrypted mempools, private voting, and sealed-bid auctions.
In this work we show how to add accountability to threshold decryption systems to deter this type of risk-free misbehavior. Suppose a quorum of t or more parties construct a decoder algorithm D that takes as input a ciphertext and outputs the corresponding plaintext or ⊥. They sell D to the adversary. Our threshold decryption systems are equipped with a tracing algorithm that can trace D to members of the quorum that created it. The tracing algorithm is only given blackbox access to D and will identify some members of the misbehaving quorum. The parties can then be held accountable, which may discourage them from selling the decoder in the first place.
Our starting point is standard (non-threshold) traitor tracing, where n parties each hold a secret key. Every party can decrypt a well-formed ciphertext on its own. However, if a subset J of parties collude to create a pirate decoder D that can decrypt well-formed ciphertexts, then it is possible to trace D to at least one member of J using only blackbox access to the decoder D. Traitor tracing has received much attention over the years and multiple schemes have been developed.
In this work we develop the theory of traitor tracing for threshold decryption, where now only a subset of t or more parties can collude to create a pirate decoder D. This problem has recently become quite important due to the real-world deployment of threshold decryption in encrypted mempools, as we explain in the paper. While there are several non-threshold traitor tracing schemes that we can leverage, adapting these constructions to the threshold decryption setting requires new cryptographic techniques. We present a number of constructions for traitor tracing for threshold decryption, and note that much work remains to explore the large design space.
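The t-out-of-n key-share structure underlying threshold decryption can be illustrated with a minimal Shamir secret-sharing sketch. This is the generic textbook construction, not the paper's traceable scheme; the field prime and all parameters below are arbitrary choices for illustration:

```python
# Minimal t-out-of-n Shamir secret sharing over a prime field: the standard
# building block behind threshold decryption. The paper's contribution is an
# additional tracing layer on top of schemes like this, not shown here.
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t shares recover the secret exactly, while fewer than t reveal essentially nothing about it; the accountability problem the paper addresses is that a quorum of t parties can pool such shares into a decoder without this act identifying them.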
Head in the BitCloud: A Discussion on the Copyrightability and Ownership Rights in Generative Digital Art and Non-Fungible Tokens
This Comment discusses three major copyright questions raised by non-fungible tokens (NFTs) creation and distribution in the digital art world. First, how does employing AI in the creation of generative and derivative digital art and NFTs affect the copyright requirements of authorship? Second, who is the rightful owner of an NFT image pre- and post-purchase? Finally, how does the first sale doctrine apply to NFT image purchases and are those protections enough to resolve future copyright-specific NFT claims? In Part I, an introductory example is laid out to showcase the complex issues generative and derivative digital art and NFT images create within copyright law. Part II provides a foundational knowledge of key topics. This section explains what blockchain technology, smart contracts, and cryptocurrency are, and how they relate to the creation and sale of NFTs and NFT images. It goes on to address key NFT concepts such as on-chain and off-chain transactions, while outlining why there is real value in purchasing NFT images. This section concludes by discussing the future application of NFTs to other professional industries. Part III discusses how copyright law and NFTs interact. It explores the arguments on both sides of the copyright debate discussing AI-generated creative works and the authorship requirement. It then determines that most original and generated NFT images should be copyrightable because of the amount of creativity and planning that goes into creating the NFT projects and the AI that generates these NFT images. This portion also discusses who the rightful owner should be when generative NFT image projects spawn derivative NFT images, determining the artist who created the derivative NFT project is likely the default owner of the resulting works. Part IV then turns to the question of how the current first sale doctrine applies to NFT images. 
It briefly discusses how past legislative amendments have failed to address default rights in digital works while arguing the current doctrine does not adequately protect NFT image artists or purchasers. It then discusses how amending the first sale doctrine is the most efficient way to ensure an NFT image’s copyright is not diluted, while subsequently protecting both NFT artists and purchasers. It proposes an amendment to the first sale doctrine that would grant NFT image purchasers specific default rights in the NFT images they are purchasing. It distinguishes NFT images from former digital goods seeking first sale protection by addressing long-held concerns Congress has wrestled with when discussing digital asset ownership in the past. This section focuses on how the transparency and immutability of blockchain and smart contracts has nullified the issues of increased piracy risk and former inadequate asset tracking protocols. Finally, Part V summarizes why this type of amendment would be the best way to ensure NFTs continue providing value to the artistic community without diluting the creative rights they were created to track and protect
LIPIcs, Volume 251, ITCS 2023, Complete Volume
A Critical Review Of Post-Secondary Education Writing During A 21st Century Education Revolution
Educational materials are effective instruments that provide information and report new discoveries uncovered by researchers in specific areas of academia. Higher education, like other education institutions, relies on instructional materials to inform its practice of educating adult learners. In post-secondary education, developmental English programs are tasked with meeting the needs of dynamic populations, thus there is a continuous need for research in this area to support its changing landscape. However, the majority of scholarly thought in this area centers on K-12 reading and writing. This paucity presents a challenge to the post-secondary community. This research study uses a qualitative content analysis to examine peer-reviewed journals from 2003-2017, developmental online websites, and a government issued document directed toward reforming post-secondary developmental education programs. These highly relevant sources aid educators in discovering informational support to apply best practices for student success. Developmental education serves the purpose of addressing literacy gaps for students transitioning to college-level work. The findings here illuminate the dearth of material offered to developmental educators. This study suggests the field of literacy research is fragmented and highlights an apparent blind spot in scholarly literature with regard to English writing instruction. This poses a quandary for post-secondary literacy researchers in the 21st century and establishes the necessity for the literacy research community to commit future scholarship toward equipping college educators teaching writing instruction to underprepared adult learners.
Threshold Encrypted Mempools: Limitations and Considerations
Encrypted mempools are a class of solutions aimed at preventing or reducing
negative externalities of MEV extraction using cryptographic privacy. Mempool
encryption aims to hide information related to pending transactions until a
block including the transactions is committed, targeting the prevention of
frontrunning and similar behaviour. Among the various methods of encryption,
threshold schemes are particularly interesting for the design of MEV mitigation
mechanisms, as their distributed nature and minimal hardware requirements
harmonize with a broader goal of decentralization.
This work looks beyond the formal and technical cryptographic aspects of
threshold encryption schemes to focus on the market and incentive implications
of implementing encrypted mempools as MEV mitigation techniques. In particular,
this paper argues that the deployment of such protocols without proper
consideration and understanding of market impact invites several undesired
outcomes, with the ultimate goal of stimulating further analysis of this class
of solutions outside of pure cryptographic considerations. Included in the paper
is an overview of a series of problems, various candidate solutions in the form
of mempool encryption techniques with a focus on threshold encryption,
potential drawbacks to these solutions, and Osmosis as a case study. The paper
targets a broad audience and remains agnostic to blockchain design where
possible while drawing mostly from financial examples.
Online reverse auctions research in marketing versus SCM: A review and future directions
An online reverse auction (ORA) is a dynamic procurement mechanism that allows suppliers to compete in real time via a platform to gain a buyer’s business. The ORA is a technological tool introduced in the late 1990s, gaining proponents and detractors among practitioners and academics. Remarkably, while practitioner interest in ORAs has grown, related marketing and supply chain management (SCM) research has declined. This contradiction between theory and practice suggests the need to conduct a systematic review to provide readers with a state-of-the-art understanding of ORAs and recommend fruitful avenues for further research. We focus on the marketing literature and contrast the findings with the SCM literature, stressing practical relevance throughout the analysis. Our study offers three main contributions: (1) integration of the cumulative marketing knowledge on ORAs in the 2002–2020 period, (2) development of a three-layer framework of the ORA domain (i.e., conceptualization, ORA as a process, and research setting), and (3) construction of a new research agenda to deal with scholarly challenges and emerging trends.
Learning in Repeated Multi-Unit Pay-As-Bid Auctions
Motivated by Carbon Emissions Trading Schemes, Treasury Auctions, and
Procurement Auctions, which all involve the auctioning of homogeneous multiple
units, we consider the problem of learning how to bid in repeated multi-unit
pay-as-bid auctions. In each of these auctions, a large number of (identical)
items are to be allocated to the largest submitted bids, where the price of
each of the winning bids is equal to the bid itself. The problem of learning
how to bid in pay-as-bid auctions is challenging due to the combinatorial
nature of the action space. We overcome this challenge by focusing on the
offline setting, where the bidder optimizes their vector of bids while only
having access to the past submitted bids by other bidders. We show that the
optimal solution to the offline problem can be obtained using a polynomial time
dynamic programming (DP) scheme. We leverage the structure of the DP scheme to
design online learning algorithms with polynomial time and space complexity
under full information and bandit feedback settings. We achieve upper bounds on
regret in both settings, stated in terms of the number of units demanded by the
bidder, the total number of auctions, and the size of the discretized bid
space. We accompany these results with a regret lower bound that matches the
linear dependency on the number of demanded units. Our numerical results suggest
that when all agents behave according to our proposed no regret learning
algorithms, the resulting market dynamics mainly converge to a welfare
maximizing equilibrium where bidders submit uniform bids. Lastly, our
experiments demonstrate that the pay-as-bid auction consistently generates
significantly higher revenue compared to its popular alternative, the uniform
price auction.
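The auction format itself is easy to pin down in code. The sketch below clears a toy multi-unit pay-as-bid auction as described in the abstract: the m highest submitted bids win, and each winner pays their own bid. Bidder names and bid values are made up for illustration:

```python
def clear_pay_as_bid(bids, m):
    """bids: list of (bidder_id, bid) pairs competing for m identical units.
    Returns {bidder_id: [winning bids]}; each winner pays their own bid."""
    winners = sorted(bids, key=lambda b: b[1], reverse=True)[:m]
    allocation = {}
    for bidder, bid in winners:
        allocation.setdefault(bidder, []).append(bid)
    return allocation

# Two units, three bidders: the two highest bids (10 and 9) win.
result = clear_pay_as_bid([("A", 10), ("A", 7), ("B", 9), ("C", 4)], m=2)
revenue = sum(sum(winning) for winning in result.values())  # 10 + 9 = 19
```

In this toy instance the pay-as-bid revenue is 19, whereas a uniform-price variant charging every winner the lowest winning bid would collect 18; the paper's experiments study this revenue gap systematically under learned bidding behaviour.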