67 research outputs found

    Electronic Voting: 6th International Joint Conference, E-Vote-ID 2021, Virtual Event, October 5–8, 2021: proceedings

    Get PDF
    This volume contains the papers presented at E-Vote-ID 2021, the Sixth International Joint Conference on Electronic Voting, held during October 5–8, 2021. Due to the extraordinary situation brought about by the COVID-19 pandemic, the conference was held online for the second consecutive edition, instead of in its traditional venue in Bregenz, Austria. The E-Vote-ID conference is the result of the merger of the EVOTE and Vote-ID conferences, with the first EVOTE conference taking place in Austria 17 years earlier. Since that conference in 2004, over 1,000 experts have attended, including scholars, practitioners, authorities, electoral managers, vendors, and PhD students. The conference focuses on the most relevant debates on the development of electronic voting, from aspects relating to security and usability through to practical experiences and applications of voting systems, as well as legal, social, and political aspects, among others, and has become an important global point of reference on the subject.

    Sixth International Joint Conference on Electronic Voting E-Vote-ID 2021. 5-8 October 2021

    Get PDF
    This volume contains the papers presented at E-Vote-ID 2021, the Sixth International Joint Conference on Electronic Voting, held during October 5-8, 2021. Due to the extraordinary situation caused by the COVID-19 pandemic, the conference was held online for the second consecutive edition, instead of in its traditional venue in Bregenz, Austria. The E-Vote-ID conference resulted from the merger of the EVOTE and Vote-ID conferences, and 2021 marked 17 years since the first EVOTE conference in Austria. Since that conference in 2004, over 1,000 experts have attended, including scholars, practitioners, authorities, electoral managers, vendors, and PhD students. The conference collects the most relevant debates on the development of electronic voting, from aspects relating to security and usability through to practical experiences and applications of voting systems, as well as legal, social, and political aspects, among others, and has become an important global point of reference on the subject. Also, this year the conference consisted of:
    • Security, Usability and Technical Issues Track
    • Administrative, Legal, Political and Social Issues Track
    • Election and Practical Experiences Track
    • PhD Colloquium, Poster and Demo Session on the day before the conference
    E-Vote-ID 2021 received 49 submissions, each of which was reviewed by 3 to 5 program committee members in a double-blind review process. As a result, 27 papers were accepted for presentation at the conference. The selected papers cover a wide range of topics connected with electronic voting, including experiences and reviews of real-world uses of e-voting systems and the corresponding electoral processes. We would also like to thank the German Informatics Society (Gesellschaft für Informatik), with its ECOM working group, and KASTEL for their partnership over many years. Further, we would like to thank the Swiss Federal Chancellery and the Regional Government of Vorarlberg for their kind support. The E-Vote-ID 2021 conference was kindly supported through the European Union's Horizon 2020 projects ECEPS (grant agreement 857622) and mGov4EU (grant agreement 959072). Special thanks go to the members of the international program committee for their hard work in reviewing, discussing, and shepherding papers; they ensured the high quality of these proceedings with their knowledge and experience.

    Crashworthy Code

    Get PDF
    Code crashes. Yet for decades, software failures have escaped scrutiny for tort liability. Those halcyon days are numbered: self-driving cars, delivery drones, networked medical devices, and other cyber-physical systems have rekindled interest in understanding how tort law will apply when software errors lead to loss of life or limb. Even after all this time, however, no consensus has emerged. Many feel strongly that victims should not bear financial responsibility for decisions that are entirely automated, while others fear that cyber-physical manufacturers must be shielded from crushing legal costs if we want such companies to exist at all. Some insist the existing liability regime needs no modernist cure, and that the answer for all new technologies is patience. This Article observes that no consensus is imminent as long as liability is pegged to a standard of “crashproof” code. The added prospect of cyber-physical injury has not changed the underlying complexities of software development. Imposing damages based on failure to prevent code crashes will not improve software quality, but it will impede the rollout of cyber-physical systems. This Article offers two lessons from the “crashworthy” doctrine, a novel tort theory pioneered in the late 1960s in response to a rising epidemic of automobile accidents, which held automakers accountable for unsafe designs that injured occupants during car crashes. The first is that tort liability can be metered on the basis of mitigation, not just prevention. When code crashes are statistically inevitable, cyber-physical manufacturers may be held to have a duty to provide for safer code crashes, rather than no code crashes at all. Second, the crashworthy framework teaches courts to segment their evaluation of code and make narrower findings of liability based solely on whether cyber-physical manufacturers have incorporated adequate software fault tolerance into their designs. Requiring all code to be perfect is impossible, but expecting code to be crashworthy is reasonable.
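    To make the Article's key technical notion concrete, the following minimal sketch (in Python) illustrates what software fault tolerance as mitigation, rather than prevention, might look like in a cyber-physical control loop. Everything here is a hypothetical illustration; the function names and numbers are assumptions, not drawn from the Article.

```python
"""Illustrative sketch only: a "crashworthy" control step that contains a
controller fault and degrades to a safe state instead of propagating the
crash. Hypothetical example; not from the Article."""

import logging

def compute_steering_angle(sensor_reading: float) -> float:
    """Primary controller; may crash on unexpected input."""
    if sensor_reading != sensor_reading:  # NaN from a faulty sensor
        raise ValueError("invalid sensor reading")
    return max(-30.0, min(30.0, sensor_reading * 0.5))

def safe_fallback() -> float:
    """Degraded but safe behaviour: hold a neutral steering angle."""
    return 0.0

def crashworthy_step(sensor_reading: float) -> float:
    """Mitigation, not prevention: catch the crash, fail to a safe state."""
    try:
        return compute_steering_angle(sensor_reading)
    except Exception:
        logging.exception("controller fault contained; entering safe state")
        return safe_fallback()

print(crashworthy_step(20.0))          # normal operation -> 10.0
print(crashworthy_step(float("nan")))  # fault contained  -> 0.0
```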

    Feasibility Analysis of Various Electronic Voting Systems for Complex Elections

    Get PDF

    Designing Data Spaces

    Get PDF
    This open access book provides a comprehensive view of data ecosystems and platform economics, from methodological and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, “Foundations and Contexts”, provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, “Data Space Technologies”, subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the use of blockchain technologies, and semantic data integration and interoperability. Next, Part III describes “Use Cases and Data Ecosystems” from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV then offers an overview of several “Solutions and Applications”, including, e.g., products and experiences from companies such as Google, SAP, Huawei, T-Systems, Innopay, and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook on future developments. In doing so, it aims to advance the vision of a social data market economy based on data spaces that embrace trust and data sovereignty.

    Teaching to Clients: Quality Assurance in Higher Education and the Construction of the Invisible Student at Philipps-Universität Marburg and Universidad Centroamericana in Managua

    Get PDF
    The widespread applicability of quality assurance processes has induced a re-labeling of students as clients (see, for example, OECD, 1998), as well as the imposition of compatible evaluation and training processes for teachers. Quality assurance, a now globalised practice in higher education institutions, is an instance of the “audit culture” (Power, 1997, 2010; Strathern, 2000a) and has come to signify good government in universities. Its “rituals of verification” (Power, 1997) are now hegemonic and widespread practices. Quality assurance is also an intrinsic element of academic capitalism (Slaughter & Leslie, 1997; Slaughter & Rhoades, 2004), and is deployed through the same mechanisms. The phenomenon of quality assurance has created a technology (Foucault, 1988) in the practices of evaluation and accreditation, which largely ignores evident differences of context and culture that emerge in situ, and focuses on creating “virtual” (Miller, 1998) similarities through a “tyranny of transparency” (Strathern, 2000c) that, instead of revealing, conceals important issues of the teaching/learning experience, fetishizing the classroom session. Through quality assurance, universities present themselves to the public – and to each other – through a common language and common goals. The language of quality assurance, which I define as the ‘talk of quality’, describes quality as a summation of continuously changing and externally defined criteria that an institution must fulfil in order to be positively perceived by the public. This ‘talk of quality’ seeps into everyday decisions and transactions, generates alliances or competition, and continuously reinforces an imagined hierarchy of universities. Given the pervasiveness of this discourse, its visibility and repetitiveness, and above all its use in day-to-day “rituals of verification” in which teachers and students are directly involved, analysing higher education transformations requires more than looking at policies, funding schemes, numbers of staff and students, facilities, research production, or ranking achievements. For this reason, I analyse quality assurance practices and their discourse as they are applied in specific contexts.
    Turning to the results and discussion: the analysis revealed that the ‘talk of quality’ present in the two universities displays almost identical concepts and notions and supports the development of specialised managerial capacity. Evaluation and accreditation processes are conducted in both universities and promote the enforcement of other “rituals of verification”, specifically teacher evaluation, which constitutes a technology (Foucault, 1988) for the subjectification of teachers, the effects of which have been described by several researchers. A fixed notion of good teaching has been defined in both universities through specific indicators. The results from each application of the process generate ‘truths’ about teachers, supported by neutral-sounding pedagogical concepts. Alongside the constant evaluation of teaching, both universities have also launched teacher training programmes and incentive – and punishment – systems tied to evaluation results. The transformation of students into clients emerges as a necessity for this technology to function. In order to present teacher evaluation as a simple and effective guide to better teaching, based on honest feedback from students, the questionnaire relies on the assumption that students respond as clients genuinely concerned with completing it in the intended way.
    The empirical analysis revealed that, instead, students at both universities have their own criteria for judging teaching, which, rather than relying on standardised and specific indicators like those of the questionnaire, rely on shared ideas about how teachers make them feel, how teachers relate to them, how they perceive the course in question, and how they define knowledge in general or university life. Students also approach the answering of the questionnaire – which they largely perceive as a power tool applied by the management – through their own strategies of “college management” and “professor management” (Nathan, 2005), which allow them to shape the university’s choices to their own schemes. As evidenced by the empirical analysis, the ‘student-centred’ approach of quality assurance, which relies on the idea of the student as a demanding client and the teacher as a service provider, produces a management-centred higher education in which important elements are concealed by the very process that is meant to reveal them.

    The creation of public value through e-government in the Sultanate of Oman

    Get PDF
    Public value (PV) is a debated topic, with different definitions across the public administration and e-government literatures. Public value is seen as the latest paradigm of both public administration and e-government studies, redefining e-government itself, along with its aims, success indicators, and evaluation frameworks. Existing implementations are typically biased towards the realisation of efficiency and service effectiveness, with far less attention paid to the delivery of PV. In addition, PV-related e-government studies have not presented a comprehensive and holistic framework for investigating e-government PV. This study seeks to address this gap by investigating how e-government facilitates the creation of PV. [Continues.]

    Imbalanced Cryptographic Protocols

    Get PDF
    Efficiency is paramount when designing cryptographic protocols: heavy mathematical operations often increase computation time, even for modern computers, and they produce large amounts of data that need to be sent over (often limited) network connections. Therefore, many research efforts are invested in improving efficiency, sometimes leading to imbalanced cryptographic protocols. We define three types of imbalanced protocols: computationally, communicationally, and functionally imbalanced protocols. Computationally imbalanced cryptographic protocols arise when a protocol is optimised for one party that has significantly more computing power. In communicationally imbalanced cryptographic protocols, the messages flow mainly from one party to the others. Finally, in functionally imbalanced cryptographic protocols, the functional requirements of one party differ strongly from those of the other parties. We start our study by looking into laconic cryptography, which fits both the computational and the communicational category. The emerging area of laconic cryptography involves the design of two-party protocols between a sender and a receiver, where the receiver’s input is large. The key efficiency requirement is that the protocol’s communication complexity must be independent of the receiver’s input size. We show a new way to build laconic OT based on the new notion of Set Membership Encryption (SME) – a new member of the area of laconic cryptography. SME allows a sender to encrypt to one recipient from a universe of receivers while using only a small digest of a large subset of receivers; a recipient can decrypt the message if and only if it is part of that subset. As another example of a communicationally imbalanced protocol, we look at NIZKs: we consider the problem of proving in zero knowledge the existence of exploits in executables compiled to run on real-world processors. Finally, we investigate the problem of constructing law enforcement access systems that mitigate the possibility of unauthorized surveillance, as a functionally imbalanced cryptographic protocol. We present two main constructions. The first enables prospective access, allowing surveillance only if encryption occurs after a warrant has been issued and activated. The second allows retrospective access to communications that occurred prior to a warrant’s issuance.
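    As an illustration of the SME functionality described above, the following toy sketch (in Python) models only the interface and correctness property: a short digest of a large receiver subset, per-recipient encryption, and decryption succeeding exactly for subset members. It is a functional mock-up under our own naming assumptions and provides no cryptographic security; the dissertation's actual construction is not shown here.

```python
"""Toy functional model of Set Membership Encryption (SME).
Illustrates interface and correctness only; NOT cryptographically secure.
All names are illustrative assumptions, not from the dissertation."""

import hashlib
from typing import Iterable, Optional

def digest_subset(subset: Iterable[str]) -> str:
    """Compress a (possibly large) receiver subset into a short digest."""
    h = hashlib.sha256()
    for member in sorted(subset):
        h.update(member.encode())
    return h.hexdigest()

def encrypt(subset_digest: str, recipient: str, message: bytes) -> dict:
    """Sender side: ciphertext size is independent of the subset size.
    Demo only; messages up to 32 bytes."""
    pad = hashlib.sha256(f"{subset_digest}|{recipient}".encode()).digest()
    body = bytes(m ^ p for m, p in zip(message, pad))
    return {"recipient": recipient, "body": body}

def decrypt(subset: Iterable[str], recipient: str, ct: dict) -> Optional[bytes]:
    """Receiver side: succeeds iff the recipient is in the subset."""
    members = set(subset)
    if recipient not in members or ct["recipient"] != recipient:
        return None  # membership check fails -> no decryption
    pad = hashlib.sha256(f"{digest_subset(members)}|{recipient}".encode()).digest()
    return bytes(c ^ p for c, p in zip(ct["body"], pad))

# Correctness: a subset member recovers the message; a non-member does not.
subset = {f"user{i}" for i in range(1000)}  # large receiver subset
d = digest_subset(subset)                   # short digest (32 bytes)
ct = encrypt(d, "user42", b"hello")
assert decrypt(subset, "user42", ct) == b"hello"
assert decrypt(subset, "outsider", ct) is None
```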

    CPA's handbook of fraud and commercial crime prevention

    Get PDF