118 research outputs found

    Practical application of distributed ledger technology in support of digital evidence integrity verification processes

    After its birth in cryptocurrencies, distributed ledger (blockchain) technology rapidly grew in popularity in other technology domains. Alternative applications of this technology range from digitizing the bank guarantee process for commercial property leases (ANZ and IBM, 2017) to tracking the provenance of high-value physical goods (Everledger Ltd., 2017). As a whole, distributed ledger technology has acted as a catalyst for many innovative solutions to existing problems, mostly associated with trust and integrity. In this research, a niche application of the technology is proposed for digital forensics: a mechanism for the transparent and irrefutable verification of digital evidence, ensuring its integrity, since established blockchains serve as an ideal medium against which to store and validate arbitrary data. Candidate technologies in this domain are identified and evaluated against a set of requirements derived from previous work in the field (Weilbach, 2014). OpenTimestamps (Todd, 2016b) is chosen as the foundation of further work for its robust architecture, transparent nature and multi-platform support. A thorough evaluation and discussion of OpenTimestamps is performed to establish why it can be trusted as both an implementation and a protocol. An OpenTimestamps integration is designed for the popular open source forensic tool Autopsy, and an Autopsy module is subsequently developed and released to the public. OpenTimestamps is tested at scale and found to have insignificant error rates for the verification of timestamps. Through practical implementation and extensive testing, it is shown that OpenTimestamps has the potential to significantly advance the practice of digital evidence integrity verification. The work concludes by discussing some of the limitations of OpenTimestamps in terms of accuracy and error rates: although an OpenTimestamps attestation makes very specific timing claims, with a near-zero error rate, it is in practice accurate only to within a day. Potential avenues for future work are then proposed.
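As an illustration of the workflow the abstract describes, the sketch below hashes an evidence file and drives the OpenTimestamps client to stamp it and later verify the proof. This is a minimal sketch, assuming the opentimestamps-client command-line tool (`ots`) is installed and on the PATH; the evidence file name is hypothetical, and the released Autopsy module itself is not reproduced here.

```python
import hashlib
import subprocess
from pathlib import Path

EVIDENCE = Path("evidence/disk_image.dd")  # hypothetical evidence file


def sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def stamp(path: Path) -> None:
    """Ask the ots client to create <file>.ots, a timestamp proof that is
    later anchored in the Bitcoin blockchain via the calendar servers."""
    subprocess.run(["ots", "stamp", str(path)], check=True)


def verify(path: Path) -> bool:
    """Verify the stored .ots proof against the original file.

    A freshly created proof is usually still pending; `ots upgrade` must be
    run once the attestation has been anchored before verification succeeds.
    """
    result = subprocess.run(["ots", "verify", f"{path}.ots"],
                            capture_output=True, text=True)
    return result.returncode == 0


if __name__ == "__main__":
    print("SHA-256:", sha256(EVIDENCE))  # record the digest in the case notes
    stamp(EVIDENCE)                      # later: verify(EVIDENCE)
```

In the Autopsy module described above, the same stamp-and-verify logic would presumably be driven from the module's ingest and report hooks rather than from a standalone script.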

    A Grand Challenges-Based Research Agenda for Scholarly Communication and Information Science [MIT Grand Challenge PubPub Participation Platform]

    Identifying Grand Challenges: A global and multidisciplinary community of stakeholders came together in March 2018 to identify, scope, and prioritize a common vision for specific grand research challenges related to the fields of information science and scholarly communications. The participants included domain researchers in academia, practitioners, and those who are aiming to democratize scholarship. An explicit goal of the summit was to identify research needs related to barriers in the development of scalable, interoperable, socially beneficial, and equitable systems for scholarly information, and to explore the development of non-market approaches to governing the scholarly knowledge ecosystem. To spur discussion and exploration, grand challenge provocations were suggested by participants and framed into one of three tracks: scholarly discovery, digital curation and preservation, and open scholarship. A few people participated in all three tracks, but most attended discussions around a single topic. To create the guest list for the three workshop tracks, we invited participants whose expertise provided diversity across several facets: in addition to expertise in the specific focus area, we aimed for the participants in each track to be diverse across sectors, disciplines, and regions of the world. Each track had approximately 20-25 people from different parts of the world, including the United States, European Union, South Africa, and India. Domain researchers brought perspectives from a range of scientific disciplines, while practitioners brought perspectives from different roles (drawn from commercial, non-profit, and governmental sectors). Even so, we were constrained by our social networks and by the location of the workshop in Cambridge, Massachusetts, and most of the participants were affiliated with US and European institutions. During our discussions, it quickly became clear that the grand challenges themselves cannot be neatly categorized into discovery, curation and preservation, and open scholarship, or even, for that matter, limited to library and information science. Several cross-cutting themes emerged, such as a strong need to include underrepresented voices and communities outside of mainstream publishing and academic institutions, a need to identify incentives that will motivate people to change their own approaches and processes toward a more open and trusted framework, and a need to identify collaborators and partners from multiple disciplines in order to build strong programs. The discussions were full of energy, insights, and enthusiasm for inclusive participation, and concluded with a desire for a global call to action to spark changes that will enable more equitable and open scholarship. Some important and productive tensions surfaced in our discussions, particularly around the best paths forward on the challenges we identified. On many core topics, however, there was widespread agreement among participants, especially on the urgent need to address the exclusion of so many people around the globe from knowledge production and access, and the troubling overrepresentation in the scholarly record of white, male, English-language voices. Ultimately, all agreed that we have an obligation to better enrich and greatly expand this space so that our communities can be catalysts for change.
    Towards a more inclusive, open, equitable, and sustainable scholarly knowledge ecosystem: Vision; Broadest impacts; Recommendations for broad impact.
    Research landscape: Challenges, threats, and barriers; Challenges to participation in the research community; Restrictions on forms of knowledge; Threats to integrity and trust; Threats to the durability of knowledge; Threats to individual agency; Incentives to sustain a scholarly knowledge ecosystem that is inclusive, equitable, trustworthy, and sustainable; Grand Challenges research areas; Recommendations for research areas and programs.
    Targeted research questions and research challenges: Legal, economic, policy, and organizational design for enduring, equitable, open scholarship; Measuring, predicting, and adapting to use and utility across scholarly communities; Designing and governing algorithms in the scholarly knowledge ecosystem to support accountability, credibility, and agency; Integrating oral and tacit knowledge into the scholarly knowledge ecosystem.
    Integrating research, practice, and policy: The need for leadership to coordinate research, policy, and practice initiatives; Role of libraries and archives as advocates and collaborators; Incorporating values of openness, sustainability, and equity into scholarly infrastructure and practice; Funders, catalysts, and coordinators; Recommendations for integrating research, practice, and policy.

    Legal obstacles to the uptake of distributed ledger technology: an analysis based on the principles of technology neutrality and functional equivalence [Hajusraamatutehnoloogia kasutuselevõtu õiguslikud takistused: tehnoloogia neutraalsuse ja funktsionaalse samaväärsuse põhimõtetele tuginev analüüs]

    The electronic version of the dissertation does not contain the publications. This dissertation focuses on the treatment of distributed ledger technology (DLT) applications under the existing regulation in Estonia and the EU, based on the analysis of specific use cases. The existing regulatory frameworks in most jurisdictions were built for centralized infrastructures, not for distributed ones such as those built on DLT. Consequently, current legal frameworks may inhibit the use of DLT due to either apparent or non-apparent biases written into the regulation. DLT, on the other hand, is a "general-purpose technology" with an abundance of applications, its best-known examples being blockchain and Bitcoin. The discrepancy between old rules and new tools is nothing new, as the development of the digital world in comparison to the physical world led to the same problem. The research problem addressed in the dissertation is therefore not specific to DLT, but linked to the uptake of any new technology. With the aim of exploring the potentially inhibiting effect of existing regulation, specific DLT use cases are investigated: (i) bitcoin exchange-service provision; (ii) DLT-based shareholder ledger maintenance; and (iii) use of DLT-based electronic signatures and hybrid smart-contract agreements. In this exploration, the principle of technology neutrality and its sub-principle of functional equivalence are used as benchmarks for the identification of biases. The aim of these principles is to prohibit regulators from favouring some technologies while discriminating against others.
    The use case analyses show that some of the existing regulation is not technology-neutral due to an inbuilt bias towards centralized solutions. Furthermore, effects equivalence is not granted by existing regulation to functionally equivalent DLT-based solutions. Against this background, the dissertation discusses the reasons for these biases and regulatory strategies for resolving them in a sustainable manner. The dissertation is especially relevant considering the goal of the proposed EU regulations of the Digital Finance Package, introduced in late 2020, to promote the use of DLT in the EU.
    https://www.ester.ee/record=b542731

    Reconstructing Data Provenance from Log Files

    Data provenance describes the derivation history of data, capturing details such as the entities involved and the relationships between entities. Knowledge of data provenance can be used to address issues such as data quality assurance, data auditing and system security. However, current computer systems are usually not equipped with means to acquire data provenance, and modifying underlying systems or introducing new monitoring software for provenance logging may be too invasive for production systems. As a result, data provenance may not always be available. This thesis investigates the completeness and correctness of data provenance reconstructed from log files with respect to the actual derivation history. To accomplish this, we designed and tested a solution that first extracts and models information from log files into provenance relations and then reconstructs the data provenance from those relations. The reconstructed output is then evaluated against the ground-truth provenance. The thesis also details the methodology used for constructing a dataset for provenance reconstruction research. Experimental results revealed that data provenance which completely captures the ground truth can be reconstructed from system-layer log files. However, the outputs are susceptible to errors generated during event logging and errors induced by program dependencies. Results also show that using log files of different granularities collected from the system can help resolve the logging errors described. Experiments with removing suspected program dependencies using approaches such as blacklisting and clustering have shown that the number of errors can be reduced by a factor of one hundred. Conclusions drawn from this research contribute towards the work on using reconstruction as an alternative approach for acquiring data provenance from computer systems.
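To make the reconstruction step concrete, the sketch below derives file-level "derived from" relations from a simplified, hypothetical system-layer log and applies the kind of blacklisting mentioned above to suppress suspected program-dependency noise. The log format, process names, and blacklist entries are invented for illustration and do not reproduce the thesis's actual dataset or pipeline.

```python
import re
from collections import defaultdict

# Hypothetical log line format: "<timestamp> <process> <READ|WRITE> <path>"
LOG_LINE = re.compile(
    r"^(?P<ts>\S+)\s+(?P<proc>\S+)\s+(?P<op>READ|WRITE)\s+(?P<path>\S+)$"
)

# Processes suspected of adding dependency noise rather than real derivations.
BLACKLIST = {"antivirus", "indexer"}  # hypothetical process names


def reconstruct(lines):
    """Build file-level provenance edges (derived file -> source files).

    A WRITE by a process links every file that process has previously READ
    to the written file, approximating 'was derived from' relations.
    """
    reads = defaultdict(set)       # process -> files it has read so far
    provenance = defaultdict(set)  # derived file -> set of source files
    for line in lines:
        match = LOG_LINE.match(line)
        if not match or match["proc"] in BLACKLIST:
            continue  # skip unparsable lines and blacklisted processes
        if match["op"] == "READ":
            reads[match["proc"]].add(match["path"])
        else:  # WRITE
            provenance[match["path"]] |= reads[match["proc"]]
    return provenance


if __name__ == "__main__":
    log = [
        "t1 report_gen READ /data/raw.csv",
        "t2 report_gen WRITE /data/report.pdf",
        "t3 indexer READ /data/report.pdf",  # filtered out by the blacklist
    ]
    print(dict(reconstruct(log)))  # {'/data/report.pdf': {'/data/raw.csv'}}
```

The reconstructed edges would then be compared against ground-truth provenance, as in the evaluation the abstract describes.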