111 research outputs found

    Synthetic steganography: Methods for generating and detecting covert channels in generated media

    Issues of privacy in communication are becoming increasingly important. For many people and businesses, the use of strong cryptographic protocols is sufficient to protect their communications. However, the overt use of strong cryptography may be prohibited, or individual entities may be prohibited from communicating directly. In these cases, a secure alternative to the overt use of strong cryptography is required. One promising alternative is to hide the use of cryptography by transforming ciphertext into innocuous-seeming messages to be transmitted in the clear.
    In this dissertation, we consider the problem of synthetic steganography: generating and detecting covert channels in generated media. We start by demonstrating how to generate synthetic time series data that not only mimic an authentic source of the data, but also hide data at any of several different locations in the reversible generation process. We then design a steganographic context-sensitive tiling system capable of hiding secret data in a variety of procedurally generated multimedia objects. Next, we show how to securely hide data in the structure of a Huffman tree without affecting the length of the codes. We then present a method for hiding data in Sudoku puzzles, both in the solved board and in the clue configuration. Finally, we present a general framework for exploiting steganographic capacity in structured interactions such as online multiplayer games, network protocols, auctions, and negotiations. Recognizing that structured interactions represent a vast field of novel media for steganography, we also design and implement an open-source, extensible software testbed for analyzing steganographic interactions and use it to measure the steganographic capacity of several classic games.
    We analyze the steganographic capacity and security of each method that we present and show that existing steganalysis techniques cannot accurately detect the use of these covert channels. We develop targeted steganalysis techniques that improve detection accuracy and then use the insights gained from those methods to improve the security of the steganographic systems. We find that secure synthetic steganography, and accurate steganalysis thereof, depend on having access to an accurate model of the cover media.
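
    One of the constructions above hides data in the shape of a Huffman tree without changing any codeword length. A standard way to obtain that property, sketched below purely as an illustration and not necessarily the dissertation's exact construction, is to note that swapping the two children of an internal node permutes which codeword each symbol receives but leaves every codeword length, and hence the compressed size, unchanged; each internal node can therefore carry one hidden bit relative to an agreed canonical child ordering. The function names build_huffman and embed_bits are hypothetical.

        import heapq
        from collections import Counter

        def build_huffman(freqs):
            # Leaves are (symbol, None); internal nodes are (left_subtree, right_subtree).
            heap = [(f, i, (sym, None)) for i, (sym, f) in enumerate(freqs.items())]
            heapq.heapify(heap)
            tiebreak = len(heap)
            while len(heap) > 1:
                f1, _, left = heapq.heappop(heap)
                f2, _, right = heapq.heappop(heap)
                tiebreak += 1
                heapq.heappush(heap, (f1 + f2, tiebreak, (left, right)))
            return heap[0][2]

        def embed_bits(node, bits, pos=0):
            # Embed one bit per internal node: bit 1 means "swap the children".
            # Swapping never changes any codeword length, only which code a symbol gets,
            # so the stego tree compresses exactly as well as the canonical tree.
            if node[1] is None:                      # leaf: (symbol, None)
                return node, pos
            left, right = node
            if pos < len(bits) and bits[pos] == 1:
                left, right = right, left
            pos += 1
            left, pos = embed_bits(left, bits, pos)
            right, pos = embed_bits(right, bits, pos)
            return (left, right), pos

        # Example: the receiver rebuilds the canonical tree from the (public) symbol
        # frequencies and reads the hidden bits back off by comparing child order.
        freqs = Counter("synthetic steganography")
        stego_tree, bits_used = embed_bits(build_huffman(freqs), [1, 0, 1, 1, 0])

    A Huffman tree over n symbols has n - 1 internal nodes, so a tree of this kind can carry n - 1 hidden bits with no change in compressed size, which is one way to see where the capacity analyzed in the dissertation comes from.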

    Seeing the God of New Mexico: Mary Austin's Starry Adventure and the Optic of Enchantment

    Get PDF
    This thesis examines 20th-century American writer Mary Austin's last novel, Starry Adventure (1931), a work unjustly ignored by most Austin scholars, yet touted by the photographer Ansel Adams (in a letter to Austin) as "the greatest thing I have ever read." This thesis is particularly concerned with the concept of vision in the novel and the connections between Austin's fiction and the New Mexican modernism/primitivism movement in the visual arts. I explore what I call Austin's "optic of enchantment," a visual experience of divinity that is uniquely tied to the New Mexican landscape. I break down this optic of enchantment into three distinct and definitive facets. First, a visual understanding of the landscape which is directly informed by the then-contemporary movements of visual artists in Taos and Santa Fe, and more widely, in New York and Europe. Second, I discuss how the visual experience of the landscape is derived from a primitivist sense of indigenous experience: Indian artistic culture and its deep, non-lingual understanding of the land, and the ritual and mysticism of Penitente and Chicano culture. Finally, I complicate literary studies of Austin with theories of modern visual culture, relying on the work of visual studies critics and modernist art historians to illustrate how American vision was changing at the turn of the 20th century, and how the Southwestern landscape played a role in determining national concepts of modernity and modernism in art, letters, and beyond.

    The Cord Weekly (October 26, 2005)


    Casco Bay Weekly : 5 December 1991


    Combatting AI’s Protectionism & Totalitarian-Coded Hypnosis: The Case for AI Reparations & Antitrust Remedies in the Ecology of Collective Self-Determination

    Artificial Intelligence’s (AI) global race for comparative advantage has the world spinning, while leaving people of color and the poor rushing to reinvent AI imagination in less racist, destructive ways. In repurposing AI technology, we can look to close the national racial gaps in academic achievement, healthcare, housing, income, and fairness in the criminal justice system to conceive what AI reparations can fairly look like. AI can create a fantasy world, realizing goods we previously thought impossible. However, if AI does not close these national gaps, it no longer has foreseeable or practical social utility value compared to its foreseeable and actual grave social harm. The hypothetical promises of AI’s beneficial use as an equality machine without the requisite action and commitment to address the inequality it already causes now is fantastic propaganda masquerading as merit for a Silicon Valley that has yet to diversify its own ranks or undo the harm it is already causing. Care must be taken that fanciful imagining yields to practical realities that, in many cases, AI no longer has foreseeable practical social utility when compared to the harm it poses to democracy, privacy, equality, personhood, and global warming.
    Until we can accept as a nation that the Sherman Antitrust Act of 1890 and the Clayton Antitrust Act of 1914 are not up to the task of breaking up tech companies; until we can acknowledge DOJ and FTC regulators are constrained from using their power because of a framework of permissibility implicit in the “consumer welfare standard” of antitrust law; until a conservative judiciary inclined to defer to that paradigm ceases its enabling of big tech, then workers, students, and all natural persons will continue to be harmed by big tech’s anticompetitive and inhumane activity. Accordingly, AI should be vigorously subject to antitrust monopolistic protections and corporate, contractual, and tort liability explored herein, such as strict liability or a new AI prima facie tort that can pierce the corporate and technological veil of algorithmic proprietary secrecy in the interest of justice. And when appropriate, AI implementation should be phased out for a later time when we have better command and control of how to eliminate its harmful impacts that will only exacerbate existing inequities.
    Fourth Amendment jurisprudence of a totalitarian tenor—greatly helped by Terry v. Ohio—has opened the door to expansive police power through AI’s air superiority and proliferation of surveillance in communities of color. This development is further exacerbated by AI companies’ protectionist actions. AI rests in a protectionist ecology including, inter alia, the notion of black boxes, deep neural network learning, Section 230 of the Communications Decency Act, and partnerships with law enforcement that provide cover under the auspices of police immunity. These developments should discourage a “safe harbor” protecting tech companies from liability unless and until there is a concomitant safe harbor for Blacks and people of color to be free of the impact of harmful algorithmic spell casting. As a society, we should endeavor to protect the sovereign soul’s choice to decide which actions it will implicitly endorse with its own biometric property.
    Because we do not morally consent to give the right to use our biometrics to accuse, harass, or harm another in a lineup, arrest, or worse, these concerns should be seen as the lawful exercise of our right to remain a conscientious objector under the First Amendment. Our biometrics should not bear false witness against our neighbors in violation of our First Amendment right to the free exercise of religious belief, sincerely held convictions, and conscientious objections thereto. Accordingly, this Article suggests a number of policy recommendations for legislative interventions that have informed the work of the author as a Commissioner on the Massachusetts Commission on Facial Recognition Technology, which has now become the framework for the recently proposed federal legislation—The Facial Recognition Technology Act of 2022. It further explores what AI reparations might fairly look like, and the collective social movements of resistance that are needed to bring them to fruition. It imagines a collective ecology of self-determination to counteract the expansive scope of AI’s protectionism, surveillance, and discrimination. This movement of self-determination seeks: (1) Black, Brown, and race-justice-conscious progressives to have majority participatory governance over all harmful tech applied disproportionately to those of us already facing both social death and contingent violence in our society by resorting to means of legislation, judicial activism, entrepreneurial influential pressure, algorithmically enforced injunctions, and community organization; (2) a prevailing reparations mindset infused in coding, staffing, governance, and antitrust accountability within all industry sectors of AI product development and services; (3) the establishment of our own counter AI tech, as well as tech, law, and social enrichment educational academies, technological knowledge exchange programs, victim compensation funds, and the establishment of our own ISPs, CDNs, cloud services, domain registrars, and social media platforms provided on our own terms to facilitate positive social change in our communities; and (4) personal daily divestment from AI companies’ ubiquitous technologies, to the extent practicable, to avoid their hypnotic and addictive effects and to deny further profits to dehumanizing AI tech practices. AI requires a more just imagination. In this way, we can continue to define ourselves for ourselves and submit to an inside-out, heart-centered mindfulness perspective that informs our coding work and advocacy. Recognizing we are engaged in a battle of the mind and soul of AI, the nation, and ourselves is all the more imperative since we know that algorithms are not just programmed—they program us and the world in which we live. The need for public education, the cornerstone institution for creating an informed civil society, is now greater than ever, but it too is insidiously infected by algorithms as the digital codification of the old Jim Crow laws, promoting the same racial profiling, segregative tracking, and stigma labeling many public school students like myself had to overcome. For those of us who stand successful in defiance of these predictive algorithms, we stand simultaneously as the living embodiment of the promise inherent in all of us and the endemic fallacies of erroneous predictive code.
    A need thus arises for a counter-disruptive narrative in which our victory as survivors over coded inequity disrupts the false psychological narrative of technological objectivity and promise for equality.

    Journal of Telecommunications and Information Technology, 2009, no. 4

    Quarterly journal.

    Repeating the past: Experimental and empirical methods in system and software security

    I propose a new method of analyzing intrusions: instead of analyzing evidence and deducing what must have happened, I find the intrusion-causing circumstances by a series of automatic experiments. I first capture processes' system calls, and when an intrusion has been detected, I use these system calls to replay some of the captured processes in order to find the intrusion-causing processes—the cause-effect chain that led to the intrusion. I extend this approach to also find the inputs to those processes that cause the intrusion—the attack signature. Intrusion analysis is a minimization problem—how to find a minimal set of circumstances that makes the intrusion happen. I develop several efficient minimization algorithms and show their theoretical properties, such as worst-case running times, as well as empirical evidence comparing their average running times. My evaluation shows that the approach is correct and practical; it finds the 3 processes out of 32 that are responsible for a proof-of-concept attack in about 5 minutes, and it finds the 72 out of 168 processes responsible for a large, complicated, and difficult-to-detect multi-stage attack involving Apache and suidperl in about 2.5 hours. I also extract attack signatures for proof-of-concept attacks in reasonable time. I have also considered the problem of predicting, before deployment, which components in a software system are most likely to contain vulnerabilities. I present empirical evidence that vulnerabilities are connected to a component's imports. In a case study on Mozilla, I correctly predicted one half of all vulnerable components, while more than two thirds of my predictions were correct.
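
    The abstract frames intrusion analysis as minimizing the set of replayed circumstances that still triggers the intrusion. One classic search strategy for that kind of problem is delta debugging (ddmin); the sketch below illustrates the general framing only and is not the author's algorithm. It assumes a hypothetical oracle, intrusion_reproduces(processes), that replays the recorded system calls of the given processes and reports whether the intrusion recurs.

        def minimize(candidates, intrusion_reproduces, granularity=2):
            # Delta-debugging-style reduction: repeatedly try to reproduce the intrusion
            # with one chunk of the candidate processes, or with everything except a chunk,
            # and keep whichever smaller set still triggers it.
            while len(candidates) >= 2:
                chunk = max(1, len(candidates) // granularity)
                subsets = [candidates[i:i + chunk] for i in range(0, len(candidates), chunk)]
                reduced = False
                for subset in subsets:
                    if intrusion_reproduces(subset):                 # one chunk suffices
                        candidates, granularity, reduced = subset, 2, True
                        break
                    complement = [p for p in candidates if p not in subset]
                    if len(subsets) > 2 and intrusion_reproduces(complement):  # drop a chunk
                        candidates, granularity, reduced = complement, max(granularity - 1, 2), True
                        break
                if not reduced:
                    if granularity >= len(candidates):
                        break                                         # cannot split more finely
                    granularity = min(len(candidates), granularity * 2)
            return candidates

        # E.g., starting from 32 recorded processes identified by PID (names hypothetical):
        # culprits = minimize(recorded_pids, intrusion_reproduces)

    In the best case such a search needs only a logarithmic number of replays; in the worst case it is quadratic in the number of candidate processes, which is why the efficient minimization algorithms and worst-case bounds the dissertation develops matter in practice.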