Short Paper: Blockcheck the Typechain
Recent efforts have sought to design new smart contract programming languages that make writing blockchain programs safer. But programs on the blockchain are beholden only to the safety properties enforced by the blockchain itself: even the strictest language-only properties can be rendered moot on a language-oblivious blockchain due to inter-contract interactions. Consequently, while safer languages are a necessity, fully realizing their benefits necessitates a language-aware redesign of the blockchain itself. To this end, we propose that the blockchain be viewed as a typechain: a chain of typed programs, not arbitrary blocks, that are included iff they typecheck against the existing chain. Reaching consensus, or blockchecking, validates typechecking in a byzantine fault-tolerant manner. Safety properties traditionally enforced by a runtime are instead enforced by a type system, with the aim of statically capturing smart contract correctness. To provide a robust level of safety, we contend that a typechain must minimally guarantee (1) asset linearity and liveness, (2) physical resource availability, including CPU and memory, (3) exceptionless execution, i.e., no early termination, (4) protocol conformance, i.e., adherence to some state machine, and (5) inter-contract safety, including reentrancy safety. Despite their exacting nature, typechains are extensible, allowing for rich libraries that extend the set of verified properties. We expand on typechain properties and present examples of real-world bugs they prevent.
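The admission rule described in the abstract, that a block is included iff it typechecks against the existing chain, can be sketched as a toy model. This is purely illustrative, not the paper's formalism: programs, their types, and dependencies are represented as simple tuples, and the chain maintains a typing environment accumulated from every accepted program.

```python
# Hypothetical toy model of a typechain: a new "program" is admitted only
# if it typechecks against the environment built from all prior programs.
class Typechain:
    def __init__(self):
        self.env = {}                       # name -> type, from accepted programs

    def typechecks(self, program):
        _name, _typ, deps = program
        # Every dependency must already be on-chain with the expected type.
        return all(self.env.get(d) == t for d, t in deps.items())

    def append(self, program):
        if not self.typechecks(program):
            return False                    # rejected: would not typecheck
        name, typ, _deps = program
        self.env[name] = typ
        return True

chain = Typechain()
ok_token = chain.append(("token", "Contract", {}))                     # True
ok_dex = chain.append(("dex", "Contract", {"token": "Contract"}))      # True
ok_bad = chain.append(("bad", "Contract", {"missing": "Contract"}))    # False
```

A real typechain would, per the abstract, also verify linearity, resource bounds, and protocol conformance; this sketch shows only the inclusion-by-typechecking discipline.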
BaseSAFE: Baseband SAnitized Fuzzing through Emulation
Rogue base stations are an effective attack vector. Cellular basebands
represent a critical part of the smartphone's security: they parse large
amounts of data even before authentication. They can, therefore, grant an
attacker a very stealthy way to gather information about calls placed and even
to escalate to the main operating system, over-the-air. In this paper, we
discuss a novel cellular fuzzing framework that aims to help security
researchers find critical bugs in cellular basebands and similar embedded
systems. BaseSAFE allows partial rehosting of cellular basebands for fast
instrumented fuzzing off-device, even for closed-source firmware blobs.
BaseSAFE's sanitizing drop-in allocator enables spotting heap-based
buffer overflows quickly. Using our proof-of-concept harness, we fuzzed various
parsers of the Nucleus RTOS-based MediaTek cellular baseband that are
accessible from rogue base stations. The emulator instrumentation is highly
optimized, reaching hundreds of executions per second on each core for our
complex test case, around 15k test-cases per second in total. Furthermore, we
discuss attack vectors for baseband modems. To the best of our knowledge, this
is the first use of emulation-based fuzzing for security testing of commercial
cellular basebands. Most of the tooling and approaches of BaseSAFE are also
applicable for other low-level kernels and firmware. Using BaseSAFE, we were
able to find memory corruptions including heap out-of-bounds writes using our
proof-of-concept fuzzing harness in the MediaTek cellular baseband. BaseSAFE,
the harness, and a large collection of LTE signaling message test cases will be
released open-source upon publication of this paper.
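The sanitizing-allocator idea mentioned in the abstract can be illustrated with a minimal sketch (this is not BaseSAFE's implementation): each allocation is surrounded by guard bytes ("red zones"), so a checker can flag heap out-of-bounds writes by looking for clobbered guards.

```python
# Illustrative red-zone allocator sketch, assuming a flat byte-array heap.
GUARD = 8
CANARY = 0xAA

class SanitizingHeap:
    def __init__(self, size=4096):
        self.mem = bytearray(size)
        self.next = 0
        self.allocs = []                   # (start, length) of live chunks

    def malloc(self, n):
        start = self.next + GUARD
        end = start + n
        self.mem[self.next:start] = bytes([CANARY]) * GUARD   # leading red zone
        self.mem[end:end + GUARD] = bytes([CANARY]) * GUARD   # trailing red zone
        self.next = end + GUARD
        self.allocs.append((start, n))
        return start

    def check(self):
        """Return chunks whose surrounding red zone was overwritten."""
        bad = []
        for start, n in self.allocs:
            before = self.mem[start - GUARD:start]
            after = self.mem[start + n:start + n + GUARD]
            if any(b != CANARY for b in bytes(before) + bytes(after)):
                bad.append((start, n))
        return bad

heap = SanitizingHeap()
p = heap.malloc(16)
heap.mem[p:p + 20] = b"A" * 20             # 4-byte heap overflow past the chunk
overflows = heap.check()                    # the overflowed chunk is flagged
```

In an emulation-based fuzzer, such a drop-in allocator replaces the firmware's own heap routines so overflows fault at the moment of corruption rather than much later.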
Approximating ReLU on a Reduced Ring for Efficient MPC-based Private Inference
Secure multi-party computation (MPC) allows users to offload machine learning
inference on untrusted servers without having to share their privacy-sensitive
data. Despite its strong security properties, MPC-based private inference has
not been widely adopted in the real world due to its high communication
overhead. When evaluating ReLU layers, MPC protocols incur a significant amount
of communication between the parties, making the end-to-end execution time
multiple orders of magnitude slower than that of its non-private counterpart.
This paper presents HummingBird, an MPC framework that reduces the ReLU
communication overhead significantly by using only a subset of the bits to
evaluate ReLU on a smaller ring. Based on theoretical analyses, HummingBird
identifies bits in the secret share that are not crucial for accuracy and
excludes them during ReLU evaluation to reduce communication. With its
efficient search engine, HummingBird discards 87--91% of the bits during ReLU
and still maintains high accuracy. On a real MPC setup involving multiple
servers, HummingBird achieves on average 2.03--2.67x end-to-end speedup without
introducing any errors, and up to 8.64x average speedup when some amount of
accuracy degradation can be tolerated, due to its up to 8.76x communication
reduction.
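The core idea of evaluating ReLU on a smaller ring can be sketched as follows. This is an illustrative two-party toy, not HummingBird's protocol: additive shares over Z_{2^32} are locally truncated (in the style of SecureML's local truncation) so that the sign test behind ReLU only needs the surviving high-order bits, here 8 of 32.

```python
import random

# Illustrative sketch: drop the low 24 bits of each share so ReLU's sign
# test runs on an 8-bit ring instead of the full 32-bit ring.
RING_BITS = 32
DROP_BITS = 24
MOD = 1 << RING_BITS
SMALL_MOD = 1 << (RING_BITS - DROP_BITS)

def share(x):
    """Split x into two additive shares over Z_{2^32}."""
    r = random.randrange(MOD)
    return r, (x - r) % MOD

def truncate_shares(s0, s1):
    """Each party locally keeps only the high-order bits of its share."""
    t0 = s0 >> DROP_BITS
    t1 = (SMALL_MOD - ((MOD - s1) >> DROP_BITS)) % SMALL_MOD
    return t0, t1

def relu_on_reduced_ring(x):
    s0, s1 = share(x % MOD)
    t0, t1 = truncate_shares(s0, s1)
    hi = (t0 + t1) % SMALL_MOD            # reconstruct high bits only
    negative = hi >= SMALL_MOD // 2       # top bit acts as the sign bit
    return 0 if negative else x
```

The trade-off the abstract describes is visible here: values whose magnitude lives entirely in the dropped low bits collapse toward zero, which is why bit selection must target bits "not crucial for accuracy".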
Detecting Exploit Primitives Automatically for Heap Vulnerabilities on Binary Programs
Automated Exploit Generation (AEG) is a well-known difficult task, especially
for heap vulnerabilities. Previous works first detected heap vulnerabilities
and then searched for exploitable states by using symbolic execution and
fuzzing techniques on binary programs. However, fuzzing and symbolic execution
do not always discover such bugs easily, and internal overflows of heap objects
are especially hard to handle. In this paper, we present DEPA, a solution to detect
exploit primitives based on primitive-crucial-behavior model for heap
vulnerabilities. The core of DEPA comprises two novel techniques: 1)
primitive-crucial-behavior identification through pointer dependence analysis,
and 2) an exploit primitive determination method that triggers both
vulnerabilities and exploit primitives. We evaluate DEPA on eleven real-world
CTF (capture-the-flag) programs with heap vulnerabilities; DEPA discovers
arbitrary-write and arbitrary-jump exploit primitives for ten of them, the sole
exception being the program multi-heap. Results show that our
primitive-crucial-behavior identification and exploit primitive determination
are accurate and effective. In addition, DEPA is superior to state-of-the-art
tools in determining exploit primitives for internal overflows of heap objects.
Comment: 11 pages, 9 figures
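The notion of an exploit primitive in the abstract can be made concrete with a hypothetical toy check, not DEPA's actual analysis: a memory write becomes an arbitrary-write ("write-what-where") candidate when both its destination address and the stored value depend on attacker-controlled input, a property a pointer dependence analysis would establish.

```python
from dataclasses import dataclass

# Toy model: each recorded write carries taint facts that a real tool would
# derive from pointer dependence analysis over the binary's execution.
@dataclass
class Write:
    addr_tainted: bool    # destination pointer depends on attacker input?
    value_tainted: bool   # stored value depends on attacker input?

def is_arbitrary_write(w: Write) -> bool:
    """An arbitrary-write primitive controls both the 'where' and the 'what'."""
    return w.addr_tainted and w.value_tainted

trace = [
    Write(addr_tainted=False, value_tainted=True),   # benign: fixed destination
    Write(addr_tainted=True,  value_tainted=True),   # write-what-where candidate
]
primitives = [w for w in trace if is_arbitrary_write(w)]
```

An arbitrary-jump primitive would be modeled analogously, with taint on an indirect branch target instead of a store destination.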
Confidential Machine Learning on Untrusted Platforms: a Survey
With the ever-growing data and the need for developing powerful machine learning models, data owners increasingly depend on various untrusted platforms (e.g., public clouds, edges, and machine learning service providers) for scalable processing or collaborative learning. Thus, sensitive data and models are in danger of unauthorized access, misuse, and privacy compromises. To address these concerns, a relatively new body of research trains machine learning models confidentially on protected data. In this survey, we summarize notable studies in this emerging area of research. With a unified framework, we highlight the critical challenges and innovations in outsourcing machine learning confidentially. We focus on the cryptographic approaches for confidential machine learning (CML), primarily on model training, while also covering other directions such as perturbation-based approaches and CML in hardware-assisted computing environments. The discussion takes a holistic view of the related threat models, security assumptions, design principles, and the associated trade-offs among data utility, cost, and confidentiality.
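One cryptographic building block in this space, pairwise-masked secure aggregation, illustrates how an untrusted platform can compute over data it never sees. This sketch is a generic textbook construction, not tied to any specific system in the survey: each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the sum the server computes.

```python
import random

MOD = 1 << 32

def mask_inputs(values):
    """Apply cancelling pairwise masks; each masked value alone reveals nothing."""
    masked = list(values)
    n = len(masked)
    for i in range(n):
        for j in range(i + 1, n):
            m = random.randrange(MOD)           # pairwise shared mask
            masked[i] = (masked[i] + m) % MOD   # client i adds it
            masked[j] = (masked[j] - m) % MOD   # client j subtracts it
    return masked

values = [5, 17, 42]
masked = mask_inputs(values)
total = sum(masked) % MOD    # server recovers the exact sum: 64
```

Real deployments must additionally handle dropouts and collusion, which is where the threat models and trade-offs surveyed above come in.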