    Beyond Possession of Lot 3/384: Visual Art Process as Agency in Understanding the Place of My Australian Settlerhood

    Beyond Possession of Lot 3/384: Visual Art Process as Agency in Understanding the Place of My Australian Settlerhood is a project that considers the potential of the processes of visual art and researched writing to reveal fresh insights into the place of my lived settler experience. With a focus on the re-emergence of scholarship and art production surrounding these issues, the project also examines ways in which the hierarchical tropes of abstract, manipulable time inherent to modernity are able to unsettle ontological understandings of place. Against the backdrop of a broad Western historical timeline that reviews features of cultural, judicial and philosophical organizations of place, space and time, the relationship between art process and place is examined in the specific context of this thesis: Lot 3/384, a parcel of land I own in the Southern Highlands of New South Wales. The artwork produced for the thesis is situated within a framework of art practitioners whose interest in the materiality, politics and substance of place aligns with my own. Also considered is the use of prescriptive cultural motifs within the traditions of representational landscape art that tend to endorse hegemonic discourses. To disrupt these narrow practices, the concept of methexis is explored as a possible arena of performative and interactive art process that clarifies and expands current settler understandings of place.

    Assessing the evidential value of artefacts recovered from the cloud

    Cloud computing offers users low-cost access to computing resources that are scalable and flexible. However, it is not without its challenges, especially in relation to security. Cloud resources can be leveraged for criminal activities, and the architecture of the ecosystem makes digital investigation difficult in terms of evidence identification, acquisition and examination. However, these same resources can be leveraged for the purposes of digital forensics, providing facilities for evidence acquisition, analysis and storage. Alternatively, existing forensic capabilities can be used in the Cloud as a step towards achieving forensic readiness: tools can be added to the Cloud which can recover artefacts of evidential value. This research investigates whether artefacts that have been recovered from the Xen Cloud Platform (XCP) using existing tools have evidential value. To determine this, the research is broken into three distinct areas: adding existing tools to a Cloud ecosystem, recovering artefacts from that system using those tools, and then determining the evidential value of the recovered artefacts. From these experiments, three key steps for adding existing tools to the Cloud were determined: identification of the specific Cloud technology being used, identification of existing tools, and the building of a testbed. Stemming from this, three key components of artefact recovery are identified: the user, the audit log and the Virtual Machine (VM), along with two methodologies for artefact recovery in XCP. In terms of evidential value, this research proposes a set of criteria for the evaluation of digital evidence, stating that it should be authentic, accurate, reliable and complete. In conclusion, this research demonstrates the use of these criteria in the context of digital investigations in the Cloud and how each is met, and shows that it is possible to recover artefacts of evidential value from XCP.
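    The four evaluation criteria lend themselves to a simple checklist. The sketch below is a minimal illustration of that idea only; the Artefact class, its fields, and the all-four-criteria rule are assumptions made for this example, not the tooling or tests used in the research.

```python
# Minimal sketch of evaluating a recovered artefact against the four
# proposed criteria: authentic, accurate, reliable, complete.
# The class and field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Artefact:
    name: str        # e.g. an entry recovered from the XCP audit log
    authentic: bool  # provably originates from the system under investigation
    accurate: bool   # content verified, e.g. against cryptographic hashes
    reliable: bool   # produced by a repeatable, documented recovery process
    complete: bool   # no material part of the artefact is missing


def has_evidential_value(artefact: Artefact) -> bool:
    """Treat an artefact as evidentially valuable only if all criteria hold."""
    return all([artefact.authentic, artefact.accurate,
                artefact.reliable, artefact.complete])


if __name__ == "__main__":
    log_entry = Artefact("XCP audit log entry", True, True, True, False)
    print(has_evidential_value(log_entry))  # False: the artefact is incomplete
```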

    Accelerating Equitable Grantmaking: Seizing the Moment to Norm Multiyear, Flexible Funding

    Over the course of the pandemic, more than 60 percent of foundations loosened restrictions and lightened reporting for grants. Now is the time to normalize these good practices across the sector, because flexible, multiyear grants, anchored in trust, are not only the right way to build grantee partnerships but also the smart way for grantmakers to create more impact, advance equity, and strengthen grantee organizations for the long run, as strong evaluations show. In October–November 2021, MilwayPLUS interviewed, surveyed, and conducted focus groups with 30 global funders and nonprofits that had significantly increased their percentage of multiyear, flexible funding over the past decade. We surfaced five common barriers to making the shift and practical ways that boards, CEOs, and program officers have broken through, which nonprofits felt helped to share power. We also discovered five effective accelerators of change that both funders and grantees rated as more significant than the barriers and that could speed the transition. Among the accelerators was the adoption of an equity lens on grantmaking. To help funders and nonprofits harness this positive momentum for change, we offer this tool kit of tactics, resources, examples, and starting points. We seek to equip trustees, CEOs, program officers, and grantees themselves to overcome board biases and other barriers, to accelerate the shift to multiyear, flexible funding, and to embrace practices that create the greatest impact and strongest partnerships with their grantees.

    The Development of Digital Forensics Workforce Competency on the Example of Estonian Defence League

    On 3 July 2014, Government of the Republic Regulation No. 108 was introduced, regulating the conditions and procedure for involving the Estonian Defence League (EDL) Cyber Defence Unit (CDU) in ensuring cyber security. The CDU can therefore be called in to support various bodies, such as the Estonian Information System Authority (RIA), the information system supervisory authority, or the Ministry of Defence and the authorities of its area of government, within the scope of their tasks, for example to ensure the continuity of information and communication technology infrastructure and to handle and resolve security incidents, applying both active and passive measures. In January 2018 the EDL CDU's Digital Evidence Handling Group had to be re-organized and therefore presented a proposal for an internal curriculum to further instruct digital evidence specialists. While mapping the CDU's tasks, it was noted that the CDU's partner institutions and organizations have not mapped their specialists' current competencies, and that there is no overview of the competencies needed in the digital forensics (DF) community. With this in mind, we set out to create a comprehensive picture of the needs and constraints (taking into account the standards shaping the DF community) in order to develop a DF competency framework that supports the development of CDU professionals from hire to retirement. We therefore studied the current situation of the CDU and its existing training program and decided which features need to be considered and explored for further development. To produce comparable results and to achieve this goal, the model had to be able to solve the following five tasks: 1. competency mapping, 2. goal setting and reassessment, 3. scheduling the training plan, 4. accelerating the recruitment process, and 5. promoting the continuous development of professionals. The framework was developed on the basis of the National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework (NICE Framework), revised to meet the needs of DF specialists and, in this case, of the EDL CDU. Additions were made in terms of levels, specializations, and task descriptions, taking into account the DF limitations and standards introduced in the work. The end result presented to the CDU comprises a Digital Forensics Competency ontology, a proposed EDL CDU structure change, suggested instructional strategies for digital forensics mapped to the levels of a revised Bloom's Taxonomy, a new DF standard subdivision (Unmanned Systems Forensics), and a Digital Forensics Competency Model Framework. The list of tasks and skills was compiled from internationally recognized certification organizations and curricula focused on DF specialist competencies. The mini-Delphi, or Estimate-Talk-Estimate (ETE), technique was applied to evaluate the proposed model: an initial estimate of competencies and priorities was given to the EDL CDU's partner institutions for expert advice and evaluation. Taking this feedback into account, improvements were made to the model and a proposal, together with a future work plan, was put forward to the CDU. In general, the proposed competency framework describes the scope of competence expected of a DF specialist in the EDL CDU in order to enhance the unit's role as a rapid response team. The framework helps to define the expected competencies and capabilities of digital forensics in practice and guides experts in their choice of specialization. The proposed model takes the long-term effect (hire to retire) into account. Due to the complexity of the model, the framework has a long implementation phase: the maximum time frame for achieving the full effect for the organization is expected to be five years. These proposals have been approved by the EDL CDU, and the proposed plan was first launched in April 2019.
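    As a rough illustration of the first two tasks the model must support (competency mapping, and goal setting and reassessment), the sketch below compares a specialist's current proficiencies against a target role and returns the shortfall a training plan would need to close. The role, skill names, and proficiency scale are hypothetical placeholders, not the EDL CDU's catalogue or the NICE Framework's actual content.

```python
# Hypothetical competency-gap analysis: required proficiency levels per role
# versus a specialist's current levels. All names and levels are placeholders.

ROLE_REQUIREMENTS = {
    "Digital Evidence Specialist": {
        "evidence acquisition": 3,        # required proficiency, scale 1-5
        "file system analysis": 3,
        "report writing": 2,
        "unmanned systems forensics": 1,  # new subdivision proposed in the work
    },
}


def competency_gaps(role, specialist_levels):
    """Return each required skill the specialist falls short on, with the gap size.

    The result is the raw input for goal setting and an individual training plan.
    """
    required = ROLE_REQUIREMENTS[role]
    return {
        skill: level - specialist_levels.get(skill, 0)
        for skill, level in required.items()
        if specialist_levels.get(skill, 0) < level
    }


if __name__ == "__main__":
    current = {"evidence acquisition": 3, "file system analysis": 2}
    print(competency_gaps("Digital Evidence Specialist", current))
    # {'file system analysis': 1, 'report writing': 2, 'unmanned systems forensics': 1}
```

    Re-running the same comparison after each training cycle would cover the reassessment step, and aggregating gaps across the unit would indicate where recruitment or further training should be prioritized.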

    Co-designing reliability and performance for datacenter memory

    Memory is one of the key components that affects the reliability and performance of datacenter servers. Memory in today's servers is organized and shared in several ways to provide the most performant and efficient access to data: for example, cache hierarchies in multi-core chips to reduce access latency, non-uniform memory access (NUMA) in multi-socket servers to improve scalability, and disaggregation to increase memory capacity. In all these organizations, hardware coherence protocols are used to maintain memory consistency of this shared memory and implicitly move data to the requesting cores. This thesis aims to provide fault-tolerance against newer models of failure in the organization of memory in datacenter servers. While designing for improved reliability, this thesis explores solutions that can also enhance the performance of applications. The solutions build on modern coherence protocols to achieve these properties. First, we observe that DRAM memory system failure rates have increased, demanding stronger forms of memory reliability. To combat this, the thesis proposes Dvé, a hardware-driven replication mechanism where data blocks are replicated across two different memory controllers in a cache-coherent NUMA system. Data blocks are accompanied by a code with strong error detection capabilities so that when an error is detected, correction is performed using the replica. Dvé's organization offers two independent points of access to data, which enables: (a) strong error correction that can recover from a range of faults affecting any of the components in the memory and (b) higher performance by providing another, nearer point of memory access. Dvé's coherent replication keeps the replicas in sync for reliability and also provides coherent access to read replicas during fault-free operation for improved performance. Dvé can flexibly provide these benefits on demand at runtime. Next, we observe that the coherence protocol itself must be hardened against failures. Memory in datacenter servers is being disaggregated from the compute servers into dedicated memory servers, driven by standards like CXL. CXL specifies the coherence protocol semantics for compute servers to access and cache data from a shared region in the disaggregated memory. However, the CXL specification lacks the level of fault-tolerance necessary to operate at an inter-server scale within the datacenter. Compute servers can fail or be unresponsive in the datacenter, and it is therefore important that the coherence protocol remain available in the presence of such failures. The thesis proposes Āpta, a CXL-based, shared disaggregated memory system for keeping cached data consistent without compromising availability in the face of compute server failures. Āpta architects a high-performance, fault-tolerant, object-granular memory server that significantly improves performance for stateless function-as-a-service (FaaS) datacenter applications.
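    As a software analogy for the Dvé read path described above (an illustration of the idea, not the hardware design), the sketch below keeps each block on two "controllers" under an error-detecting code and falls back to the replica when the primary copy fails its check. The class, the method names, and the use of CRC32 are assumptions made for the example.

```python
# Conceptual sketch of replicated memory with per-block error detection:
# write to both copies, read from the primary, and correct a detected
# error by falling back to the replica. Purely illustrative.
import zlib


class ReplicatedMemory:
    def __init__(self):
        self.primary = {}  # "controller A": addr -> (data, detection code)
        self.replica = {}  # "controller B": addr -> (data, detection code)

    def write(self, addr: int, data: bytes) -> None:
        entry = (data, zlib.crc32(data))  # code detects, but cannot correct, errors
        self.primary[addr] = entry        # coherent replication keeps both copies
        self.replica[addr] = entry        # in sync on every write

    def read(self, addr: int) -> bytes:
        data, code = self.primary[addr]
        if zlib.crc32(data) == code:
            return data                    # fault-free fast path
        data, code = self.replica[addr]    # error detected: recover from the replica
        if zlib.crc32(data) != code:
            raise RuntimeError("both copies corrupted: uncorrectable error")
        return data


if __name__ == "__main__":
    mem = ReplicatedMemory()
    mem.write(0x10, b"cache line contents")
    print(mem.read(0x10))
```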

    The Hermeneutics Of The Hard Drive: Using Narratology, Natural Language Processing, And Knowledge Management To Improve The Effectiveness Of The Digital Forensic Process

    In order to protect the safety of our citizens and to ensure a civil society, we ask our law enforcement, judiciary and intelligence agencies, under the rule of law, to seek probative information which can be acted upon for the common good. This information may be used in court to prosecute criminals, or it can be used to conduct offensive or defensive operations to protect our national security. As the citizens of the world store more and more information in digital form, and as they live an ever-greater portion of their lives online, law enforcement, the judiciary and the Intelligence Community will continue to struggle with finding, extracting and understanding the data stored on computers. But this trend also affords greater opportunity for law enforcement. This dissertation describes how several disparate approaches (knowledge management, content analysis, narratology, and natural language processing) can be combined in an interdisciplinary way to address the growing difficulty of developing useful, actionable intelligence from the ever-increasing corpus of digital evidence. After exploring how these techniques might apply to the digital forensic process, I suggest two new theoretical constructs, the Hermeneutic Theory of Digital Forensics and the Narrative Theory of Digital Forensics, linking existing theories of forensic science, knowledge management, content analysis, narratology, and natural language processing together in order to identify and extract narratives from digital evidence. An experimental approach is described and prototyped. The results of these experiments demonstrate the potential of natural language processing techniques for digital forensics.
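    To make the narrative-extraction idea concrete, the sketch below pulls candidate actors, places, and times, plus rough subject-verb-object events, from a snippet of recovered text. It is not the dissertation's prototype: spaCy, the model name, and the example sentence are stand-ins chosen only for illustration.

```python
# Illustrative narrative-element extraction from recovered text using spaCy.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

recovered_text = (
    "On March 3rd, Alice emailed Bob from Chicago about moving the "
    "shipment to the warehouse before the audit."
)
doc = nlp(recovered_text)

# Named entities supply candidate actors, places and times for a narrative.
for ent in doc.ents:
    print(ent.label_, "->", ent.text)

# Subject-verb-object triples approximate the events linking those entities.
for token in doc:
    if token.pos_ == "VERB":
        subjects = [c.text for c in token.children if c.dep_ == "nsubj"]
        objects = [c.text for c in token.children if c.dep_ == "dobj"]
        if subjects and objects:
            print(subjects, token.lemma_, objects)
```

    Applying this kind of extraction across the text recovered from an entire drive, and organizing the results with knowledge-management tooling, is the sort of pipeline the proposed theoretical constructs are intended to frame.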

    The Sixth Alumni Conference of the International Space University

    These proceedings cover the sixth alumni conference of the International Space University (ISU), coordinated by the ISU U.S. Alumni Organization, which was held at Rice University in Houston, Texas, on July 11, 1997. The alumni conference gives graduates of the International Space University's interdisciplinary, international, and intercultural program a forum in which they may present and exchange technical ideas and keep abreast of the wide variety of work in which the ever-growing body of alumni is engaged. The diversity that is characteristic of ISU is reflected in the subject matter of the papers published here. These proceedings preserve the order of the alumni presentations given at the 1997 ISU Alumni Conference. As in previous years, a special effort was made to solicit papers with a strong connection to the two ISU 1997 Summer Session Program design projects: (1) Transfer of Technology, Spin-Offs, Spin-Ins; and (2) Strategies for the Exploration of Mars. Papers in the remaining ten sessions cover the departmental areas traditional to the ISU summer session program.

    Engineering planetary lasers for interstellar communication

    Transmitting large amounts of data efficiently among neighboring stars will vitally support any eventual contact with extrasolar intelligence, whether alien or human. Laser carriers are particularly suitable for high-quality, targeted links. Space laser transmitter systems designed in this work, based on both demonstrated and imminent advanced space technology, could achieve reliable data transfer rates as high as 1 kb/s to matched receivers as far away as 25 pc, a distance encompassing over 700 approximately solar-type stars. The centerpiece of this demonstration study is a fleet of automated spacecraft incorporating adaptive neural-net optical processing, active structures, nuclear electric power plants, annular momentum control devices, and ion propulsion. Together the craft sustain, condition, modulate, and direct to stellar targets an infrared laser beam extracted from the natural mesospheric, solar-pumped, stimulated CO2 emission recently discovered at Venus. For a culture already supported by mature interplanetary industry, the cost of building planetary or high-power space laser systems for interstellar communication would be marginal, making such projects relevant for the next human century. Links using high-power lasers might support data transfer rates as high as optical frequencies could ever allow. A nanotechnological society such as we might become would inevitably use transmission at 10^20 b/yr to promote its own evolutionary expansion out of the galaxy.
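    For a rough sense of scale (a back-of-envelope conversion, not a figure taken from the paper), the quoted annual volume corresponds to a sustained average rate on the order of terabits per second:

```latex
% One year is roughly 3.16 x 10^7 seconds, so an annual volume of 10^20 bits
% averages out to a few terabits per second:
\[
  \frac{10^{20}\ \text{b/yr}}{3.16 \times 10^{7}\ \text{s/yr}}
  \approx 3.2 \times 10^{12}\ \text{b/s} \approx 3.2\ \text{Tb/s},
\]
% which is over nine orders of magnitude beyond the 1 kb/s link budgeted
% here for a 25 pc range.
```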