
    DeAR: A Deep-learning-based Audio Re-recording Resilient Watermarking

    Audio watermarking is widely used to trace the source of leaks. The robustness of the watermark determines the traceability of the algorithm. With the development of digital technology, audio re-recording (AR) has become an efficient and covert means of stealing secrets. The AR process can drastically destroy the watermark signal while preserving the original information. This puts forward a new requirement for audio watermarking at this stage: robustness to AR distortions. Unfortunately, none of the existing algorithms can effectively resist AR attacks due to the complexity of the AR process. To address this limitation, this paper proposes DeAR, a deep-learning-based audio re-recording-resistant watermarking scheme. Inspired by DNN-based image watermarking, we pioneer a deep-learning framework for audio carriers, based on which the watermark signal can be effectively embedded and extracted. Meanwhile, to resist AR attacks, we carefully analyze the distortions that occur in the AR process and design a corresponding distortion layer to cooperate with the proposed watermarking framework. Extensive experiments show that the proposed algorithm can resist not only common electronic-channel distortions but also AR distortions. Under the premise of high-quality embedding (SNR = 25.86 dB) and at a common re-recording distance (20 cm), the algorithm achieves an average bit recovery accuracy of 98.55%.
    Comment: Accepted by AAAI202
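
    To make the pipeline concrete, below is a minimal PyTorch-style sketch of an embed / simulated-re-recording / extract loop of the kind the abstract describes. It is not the DeAR architecture: every module, layer size, and the noise-plus-smoothing stand-in for the re-recording distortion layer is an illustrative assumption.

```python
# Hypothetical sketch of an embed -> simulated re-recording -> extract pipeline.
# Shapes, layer sizes, and the distortion model are illustrative assumptions.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Adds a small, learned perturbation carrying the message to the audio."""
    def __init__(self, msg_bits=32, hidden=64):
        super().__init__()
        self.msg_proj = nn.Linear(msg_bits, hidden)
        self.net = nn.Sequential(
            nn.Conv1d(1 + hidden, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=9, padding=4),
        )

    def forward(self, audio, msg):                # audio: (B, 1, T), msg: (B, msg_bits)
        m = self.msg_proj(msg)                    # (B, hidden)
        m = m.unsqueeze(-1).expand(-1, -1, audio.shape[-1])  # broadcast over time
        residual = self.net(torch.cat([audio, m], dim=1))
        return audio + 0.01 * residual            # keep the embedding low-power (high SNR)

class SimulatedReRecording(nn.Module):
    """Differentiable stand-in for re-recording distortions (noise + band-limiting)."""
    def forward(self, audio):
        noisy = audio + 0.005 * torch.randn_like(audio)
        smooth = nn.functional.avg_pool1d(noisy, kernel_size=4, stride=1, padding=2)
        return smooth[..., : audio.shape[-1]]     # crop back to the original length

class Extractor(nn.Module):
    """Recovers message bits from (possibly distorted) watermarked audio."""
    def __init__(self, msg_bits=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(hidden, msg_bits),
        )

    def forward(self, audio):
        return self.net(audio)                    # one logit per bit

# One illustrative training step: fidelity loss + message-recovery loss.
embedder, channel, extractor = Embedder(), SimulatedReRecording(), Extractor()
audio = torch.randn(8, 1, 16000)                  # one second of 16 kHz audio per item
msg = torch.randint(0, 2, (8, 32)).float()
watermarked = embedder(audio, msg)
logits = extractor(channel(watermarked))
loss = nn.functional.mse_loss(watermarked, audio) + \
       nn.functional.binary_cross_entropy_with_logits(logits, msg)
loss.backward()
```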

    CopyRNeRF: Protecting the CopyRight of Neural Radiance Fields

    Neural Radiance Fields (NeRF) have the potential to become a major representation of media. Since training a NeRF has never been an easy task, protection of its model copyright should be a priority. In this paper, by analyzing the pros and cons of possible copyright protection solutions, we propose to protect the copyright of NeRF models by replacing the original color representation in NeRF with a watermarked color representation. A distortion-resistant rendering scheme is then designed to guarantee robust message extraction in 2D renderings of the NeRF. Our proposed method can directly protect the copyright of NeRF models while maintaining high rendering quality and bit accuracy compared with alternative solutions.
    Comment: 11 pages, 6 figures, accepted by ICCV 2023 (non-camera-ready version)
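
    A minimal sketch of the core idea as stated in the abstract: condition a NeRF color head on a message code so every rendering carries the watermark, and recover the bits from rendered 2D patches with a small decoder. All names, dimensions, and the renderer-free usage at the end are assumptions, not the CopyRNeRF implementation.

```python
# Hypothetical sketch of a watermarked color representation for NeRF-like models.
# Shapes and module names are illustrative assumptions.
import torch
import torch.nn as nn

class WatermarkedColorHead(nn.Module):
    """Maps per-sample NeRF features (plus a message code) to RGB."""
    def __init__(self, feat_dim=256, msg_bits=48, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + msg_bits, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, feats, msg):                 # feats: (N, feat_dim), msg: (msg_bits,)
        msg = msg.expand(feats.shape[0], -1)       # same message for every sample point
        return self.mlp(torch.cat([feats, msg], dim=-1))

class MessageDecoder2D(nn.Module):
    """Recovers the embedded bits from a rendered RGB patch."""
    def __init__(self, msg_bits=48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, msg_bits),
        )

    def forward(self, patch):                      # patch: (B, 3, H, W)
        return self.net(patch)

# Illustrative use: color the ray samples (volume rendering itself is omitted),
# then check bit recovery on a stand-in rendered patch.
head, decoder = WatermarkedColorHead(), MessageDecoder2D()
feats = torch.randn(4096, 256)                     # features for 4096 ray samples
msg = torch.randint(0, 2, (48,)).float()
rgb = head(feats, msg)                             # (4096, 3) watermarked colors
patch = torch.rand(1, 3, 64, 64)                   # stand-in for a rendered patch
bits = (torch.sigmoid(decoder(patch)) > 0.5).float()
```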

    Building Universal Digital Libraries: An Agenda for Copyright Reform

    This article proposes a series of copyright reforms to pave the way for digital library projects like Project Gutenberg, the Internet Archive, and Google Print, which promise to make much of the world's knowledge easily searchable and accessible from anywhere. Existing law frustrates digital library growth and development by granting overlapping, overbroad, and near-perpetual copyrights in books, art, audiovisual works, and digital content. Digital libraries would benefit from an expanded public domain, revitalized fair use doctrine and originality requirement, rationalized systems for copyright registration and transfer, and a new framework for compensating copyright owners for online infringement without imposing derivative copyright liability on technologists. This article's case for reform begins with rolling back the copyright term extensions of recent years, which were upheld by the Supreme Court in Eldred v. Reno. Indefinitely renewable copyrights threaten to marginalize Internet publishing and online libraries by entangling them in endless disputes regarding the rights to decades- or centuries-old works. Similarly, digital library projects are becoming unnecessarily complicated and expensive to undertake due to the assertion by libraries and copyright holding companies of exclusive rights over unoriginal reproductions of public domain works, and the demands of authors that courts block all productive digital uses of their already published but often out-of-print works. Courts should refuse to allow the markets in digital reproductions to be monopolized in this way, and Congress must introduce greater certainty into copyright licensing by requiring more frequent registration and recordation of rights. Courts should also consider the digitizing of copyrighted works for the benefit of the public to be fair use, particularly where only excerpts of the works are posted online for public perusal. A digital library like Google Print needs a degree of certainty - which existing law does not provide - that it will not be punished for making miles of printed matter instantly searchable in the comfort of one's home, or for rescuing orphan works from obscurity or letting consumers preview a few pages of a book before buying it. Finally, the Supreme Court's recognition of liability for inducement of digital copyright infringement in the Grokster case may have profoundly negative consequences for digital library technology. The article discusses how recent proposals for statutory file-sharing licenses may reduce the bandwidth and storage costs of digital libraries, and thereby make them more comprehensive and accessible.

    Harnessing Simulated Data with Graphs

    Physically accurate simulations allow for unlimited exploration of arbitrarily crafted environments. From a scientific perspective, digital representations of the real world are useful because they make it easy to validate ideas. Virtual sandboxes allow observations to be collected at will, without intricate measurement setups or waiting on the manufacturing, shipping, and assembly of physical resources. Simulation techniques can also be used over and over again to test a problem without expending costly materials or producing any waste. Remarkably, this freedom to both experiment and generate data becomes even more powerful when considering the rising adoption of data-driven techniques across engineering disciplines. These are systems that aggregate over available samples to model behavior, and thus are better informed when exposed to more data. Naturally, the ability to synthesize limitless data promises to make approaches that benefit from datasets all the more robust and desirable. However, the ability to readily and endlessly produce synthetic examples also introduces several new challenges. Data must be collected in an adaptive format that can capture the complete diversity of states achievable in arbitrary simulated configurations while also remaining amenable to downstream applications. The quantity and variety of observations must also straddle a range that prevents overfitting yet is descriptive enough to produce a robust approach. Pipelines that naively measure virtual scenarios can easily be overwhelmed by trying to sample an infinite set of available configurations. Variations observed across multiple dimensions can quickly lead to a daunting expansion of states, all of which must be processed and solved. These and several other concerns must first be addressed in order to safely leverage the potential of boundless simulated data. In response to these challenges, this thesis proposes to wield graphs to instill structure over digitally captured data and curb the growth of variables. The paradigm of pairing data with graphs introduced in this dissertation serves to enforce consistency, localize operators, and, crucially, factor out any combinatorial explosion of states. Results demonstrate the effectiveness of this methodology in three distinct areas, each individually offering unique challenges and practical constraints, and together showcasing the generality of the approach. Namely, studies presenting state-of-the-art contributions in design for additive manufacturing, side-channel security threats, and large-scale physics-based contact simulations are collectively achieved by harnessing simulated datasets with graph algorithms.
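
    As a rough illustration of pairing simulated data with a graph, the sketch below builds edges from spatial proximity between simulated elements and applies one local message-passing step, so the learned operator only ever sees neighborhoods rather than whole configurations. The radius threshold, feature sizes, and module names are illustrative assumptions, not the dissertation's implementation.

```python
# Hypothetical sketch of pairing simulated samples with a graph: nodes hold per-element
# state, edges come from spatial proximity, and one message-passing step aggregates
# only local neighborhoods. Thresholds and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

def build_edges(positions, radius=0.1):
    """Connect simulated elements whose positions fall within `radius` (undirected)."""
    dist = torch.cdist(positions, positions)              # (N, N) pairwise distances
    src, dst = torch.nonzero(
        (dist < radius) & (dist > 0), as_tuple=True)       # drop self-loops
    return src, dst

class MessagePassing(nn.Module):
    """One local aggregation step: each node pools messages from its neighbors."""
    def __init__(self, in_dim=8, hidden=32):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * in_dim, hidden), nn.ReLU())
        self.update = nn.Linear(in_dim + hidden, in_dim)

    def forward(self, x, src, dst):                        # x: (N, in_dim)
        m = self.msg(torch.cat([x[src], x[dst]], dim=-1))  # per-edge messages
        pooled = torch.zeros(x.shape[0], m.shape[-1])
        pooled.index_add_(0, dst, m)                       # sum messages at receivers
        return self.update(torch.cat([x, pooled], dim=-1))

# Illustrative use on one simulated snapshot of 500 elements.
pos = torch.rand(500, 3)                                   # element positions from a simulation
feats = torch.randn(500, 8)                                # per-element state (stress, velocity, ...)
src, dst = build_edges(pos)
updated = MessagePassing()(feats, src, dst)                # (500, 8) locally refined state
```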

    The People Inside

    Our collection begins with an example of computer vision that cuts through time and bureaucratic opacity to help us meet real people from the past. Buried in thousands of files in the National Archives of Australia is evidence of the exclusionary “White Australia” policies of the nineteenth and twentieth centuries, which were intended to limit and discourage immigration by non-Europeans. Tim Sherratt and Kate Bagnall decided to see what would happen if they used a form of face-detection software made ubiquitous by modern surveillance systems and applied it to a security system of a century ago. What we get is a new way to see the government documents, not as a source of statistics but, Sherratt and Bagnall argue, as powerful evidence of the people affected by racism.
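
    For readers curious what such an experiment looks like in practice, here is a brief sketch of running an off-the-shelf face detector (OpenCV's pre-trained Haar cascade) over a folder of scanned records and saving the detected crops for review. The directory names are placeholders, and this is not Sherratt and Bagnall's pipeline.

```python
# Hypothetical sketch of running off-the-shelf face detection over scanned archive
# images, in the spirit of the project described above. Paths are placeholders.
from pathlib import Path
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade alongside the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_path, out_dir):
    """Detect faces in one scanned document and save each crop for review."""
    img = cv2.imread(str(image_path))
    if img is None:                                        # unreadable scan: skip it
        return 0
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    Path(out_dir).mkdir(exist_ok=True)
    for i, (x, y, w, h) in enumerate(faces):
        crop = img[y:y + h, x:x + w]
        cv2.imwrite(str(Path(out_dir) / f"{image_path.stem}_face{i}.jpg"), crop)
    return len(faces)

# Illustrative use over a folder of digitised records.
for scan in Path("archive_scans").glob("*.jpg"):           # placeholder directory
    detect_faces(scan, "face_crops")
```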