    Quantum Tokens for Digital Signatures

    The fisherman caught a quantum fish. "Fisherman, please let me go", begged the fish, "and I will grant you three wishes". The fisherman agreed. The fish gave the fisherman a quantum computer, three quantum signing tokens and his classical public key. The fish explained: "to sign your three wishes, use the tokenized signature scheme on this quantum computer, then show your valid signature to the king, who owes me a favor". The fisherman used one of the signing tokens to sign the document "give me a castle!" and rushed to the palace. The king executed the classical verification algorithm using the fish's public key, and since it was valid, the king complied. The fisherman's wife wanted to sign ten wishes using their two remaining signing tokens. The fisherman did not want to cheat, and secretly sailed to meet the fish. "Fish, my wife wants to sign ten more wishes". But the fish was not worried: "I have learned quantum cryptography following the previous story (The Fisherman and His Wife by the brothers Grimm). The quantum tokens are consumed during the signing. Your polynomial wife cannot even sign four wishes using the three signing tokens I gave you". "How does it work?" wondered the fisherman. "Have you heard of quantum money? These are quantum states which can be easily verified but are hard to copy. This tokenized quantum signature scheme extends Aaronson and Christiano's quantum money scheme, which is why the signing tokens cannot be copied". "Does your scheme have additional fancy properties?" the fisherman asked. "Yes, the scheme has other security guarantees: revocability, testability and everlasting security. Furthermore, if you're at sea and your quantum phone has only classical reception, you can use this scheme to transfer the value of the quantum money to shore", said the fish, and swam away.
    Comment: Added illustration of the abstract to the ancillary file
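    For readers who want the protocol without the fairy tale, here is a purely illustrative, classical interface sketch of the scheme the fish describes: key generation hands out a classical public key plus a batch of one-shot quantum signing tokens, signing consumes a token, and verification is entirely classical. All names are hypothetical, and the unclonable quantum token is reduced to an opaque placeholder, since its quantum properties cannot be represented in classical code.

    # Interface-only sketch (hypothetical names); the unclonability that makes the
    # real scheme work lives in the quantum state, which this stub cannot capture.
    from dataclasses import dataclass
    from typing import Protocol


    @dataclass
    class QuantumToken:
        """Opaque one-shot signing token; measuring (signing) destroys it."""
        consumed: bool = False


    class TokenizedSignatureScheme(Protocol):
        def keygen(self, n_tokens: int) -> tuple[str, list[QuantumToken]]:
            """Return a classical public key and n one-shot quantum signing tokens."""
            ...

        def sign(self, token: QuantumToken, message: bytes) -> bytes:
            """Consume the token to produce a classical signature on one message."""
            ...

        def verify(self, public_key: str, message: bytes, signature: bytes) -> bool:
            """Purely classical verification against the public key."""
            ...

        def revoke(self, token: QuantumToken) -> bool:
            """Return an unused token to the issuer (revocability/testability)."""
            ...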

    Late-Time Photometry of Type Ia Supernova SN 2012cg Reveals the Radioactive Decay of $^{57}$Co

    Seitenzahl et al. (2009) have predicted that roughly three years after its explosion, the light we receive from a Type Ia supernova (SN Ia) will come mostly from reprocessing of electrons and X-rays emitted by the radioactive decay chain $^{57}\mathrm{Co} \to {}^{57}\mathrm{Fe}$, instead of positrons from the decay chain $^{56}\mathrm{Co} \to {}^{56}\mathrm{Fe}$ that dominates the SN light at earlier times. Using the Hubble Space Telescope, we followed the light curve of the SN Ia SN 2012cg out to 1055 days after maximum light. Our measurements are consistent with the light curves predicted by the contribution of energy from the reprocessing of electrons and X-rays emitted by the decay of $^{57}$Co, offering evidence that $^{57}$Co is produced in SN Ia explosions. However, the data are also consistent with a light echo $\sim 14$ mag fainter than SN 2012cg at peak. Assuming no light-echo contamination, the mass ratio of $^{57}$Ni and $^{56}$Ni produced by the explosion, a strong constraint on any SN Ia explosion model, is $0.043^{+0.012}_{-0.011}$, roughly twice Solar. In the context of current explosion models, this value favors a progenitor white dwarf with a mass near the Chandrasekhar limit.
    Comment: Updated to reflect the final version published by ApJ. For a video about the paper, see https://youtu.be/t3pUbZe8wq
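    As a rough back-of-the-envelope illustration (my own sketch, not taken from the paper), the late-time dominance of the $^{57}$Co chain follows from the half-lives, roughly 77.2 d for $^{56}$Co and 271.8 d for $^{57}$Co. Once the short-lived nickel parents are gone, the instantaneous decay rates of the two chains compare as

    % ratio of the decay rates of the two cobalt isotopes at time t after explosion
    \[
      \frac{\dot N_{57}(t)}{\dot N_{56}(t)}
      \;=\;
      \frac{N_{57}(0)\,\lambda_{57}\,e^{-\lambda_{57} t}}
           {N_{56}(0)\,\lambda_{56}\,e^{-\lambda_{56} t}},
      \qquad
      \lambda_i = \frac{\ln 2}{t_{1/2,i}} .
    \]
    % At t \approx 1000 d, e^{(\lambda_{56}-\lambda_{57})t} \approx e^{6.4} \approx 6 \times 10^{2},
    % which more than offsets both the few-percent production ratio N_{57}(0)/N_{56}(0)
    % and the smaller \lambda_{57}, so the ^{57}Co chain dominates the decay rate
    % (differences in energy per decay and in positron/X-ray trapping are ignored here).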

    Language-Grounded Indoor 3D Semantic Segmentation in the Wild

    Recent advances in 3D semantic segmentation with deep neural networks have shown remarkable success, with rapid performance increases on available datasets. However, current 3D semantic segmentation benchmarks contain only a small number of categories -- fewer than 30 for ScanNet and SemanticKITTI, for instance -- which is not enough to reflect the diversity of real environments (e.g., semantic image understanding covers hundreds to thousands of classes). Thus, we propose to study a larger vocabulary for 3D semantic segmentation with a new extended benchmark on ScanNet data with 200 class categories, an order of magnitude more than previously studied. This large number of class categories also induces a large natural class imbalance; both the broader vocabulary and the imbalance are challenging for existing 3D semantic segmentation methods. To learn more robust 3D features in this context, we propose a language-driven pre-training method that encourages learned 3D features, including those of categories with few training examples, to lie close to their pre-trained text embeddings. Extensive experiments show that our approach consistently outperforms state-of-the-art 3D pre-training for 3D semantic segmentation on our proposed benchmark (+9% relative mIoU), including limited-data scenarios with +25% relative mIoU using only 5% of the annotations.
    Comment: 23 pages, 8 figures, project page: https://rozdavid.github.io/scannet20
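    A minimal sketch of the language-driven pre-training idea, assuming a PyTorch-style setup (this is not the authors' exact recipe; the backbone, optimizer, data, and the precomputed text_embeddings of the class names are placeholders): per-point 3D features are pulled toward frozen text embeddings of their class names, so rarely annotated categories inherit structure from language.

    import torch
    import torch.nn.functional as F

    def language_grounding_loss(point_feats, labels, text_embeddings, temperature=0.07):
        """Cross-entropy over cosine similarities between per-point features
        and the frozen text embedding of every class name."""
        point_feats = F.normalize(point_feats, dim=-1)            # (N, D)
        text_embeddings = F.normalize(text_embeddings, dim=-1)    # (C, D)
        logits = point_feats @ text_embeddings.t() / temperature  # (N, C)
        return F.cross_entropy(logits, labels)

    def pretrain_step(backbone, optimizer, points, labels, text_embeddings):
        """One hypothetical pre-training step: the 3D backbone maps a point cloud
        to per-point features in the text-embedding dimension; text stays frozen."""
        optimizer.zero_grad()
        feats = backbone(points)                                  # (N, D)
        loss = language_grounding_loss(feats, labels, text_embeddings)
        loss.backward()
        optimizer.step()
        return loss.item()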