
    AdaFocus: Towards End-to-end Weakly Supervised Learning for Long-Video Action Understanding

    Developing end-to-end models for long-video action understanding tasks presents significant computational and memory challenges. Existing works generally build models on long-video features extracted by off-the-shelf action recognition models, which are trained on short-video datasets from different domains, so the extracted features suffer from a domain discrepancy. To avoid this, action recognition models can be trained end-to-end on clips, which are trimmed from long videos and labeled using action interval annotations. Such fully supervised annotations are expensive to collect, so a weakly supervised method is needed for long-video action understanding at scale. Under the weak-supervision setting, action labels are provided for the whole video without precise start and end times of the action clip. To this end, we propose the AdaFocus framework. AdaFocus estimates the spike-actionness and temporal positions of actions, enabling it to adaptively focus on action clips that facilitate better training without the need for precise annotations. Experiments on three long-video datasets show its effectiveness. Remarkably, on two of the datasets, models trained with AdaFocus under weak supervision outperform those trained under full supervision. Furthermore, we form a weakly supervised feature extraction pipeline with AdaFocus, which enables significant improvements on three long-video action understanding tasks.

    Succinct Partial Garbling from Groups and Applications

    A garbling scheme transforms a program (e.g., a circuit) $C$ into a garbled program $\hat{C}$, along with a pair of short keys $(k_{i,0},k_{i,1})$ for each input bit $x_i$, such that $(C,\hat{C},\{k_{i,x_i}\})$ can be used to recover the output $z = C(x)$ while revealing nothing else about the input $x$. This can be naturally generalized to partial garbling, where part of the input is public, and a computation $z = C(x, y)$ is decomposed into a public part $C_{\text{pub}}(x)$, depending only on the public input $x$, and a private part $z = C_{\text{priv}}(C_{\text{pub}}(x), y)$ that also involves a private input $y$. A key challenge in garbling is to achieve succinctness, where the size of the garbled program may grow only with the security parameter and (possibly) the output length, but not with the size of $C$. Prior work achieved this strong notion of succinctness using heavy tools such as indistinguishability obfuscation (iO) or a combination of fully homomorphic encryption and attribute-based encryption. In this work, we introduce new succinct garbling schemes based on variants of standard group-based assumptions. Our approach, being different from prior methods, offers a promising pathway towards practical succinct garbling. Specifically, we construct:
    - A succinct partial garbling scheme for general circuits, where the garbled circuit size scales linearly with the private computation $|C_{\text{priv}}|$ and is independent of the public computation $|C_{\text{pub}}|$. This implies fully succinct conditional disclosure of secrets (CDS) protocols for circuits.
    - Succinct (fully hiding) garbling schemes for simple types of programs, including truth tables, bounded-length branching programs (capturing decision trees and DFAs as special cases) and degree-2 polynomials, where the garbled program size is independent of the program size. This implies succinct private simultaneous messages (PSM) protocols for the same programs.
    Our succinct partial garbling scheme can be based on a circular-security variant of the power-DDH assumption, which holds in the generic group model, or alternatively on the key-dependent message security of the Damgård-Jurik encryption. For bounded-depth circuits or the aforementioned simple programs, we avoid circular-security assumptions entirely. At the heart of our technical approach is a new computational flavor of algebraic homomorphic MAC (aHMAC), for which we obtain group-based constructions building on techniques from the literature on homomorphic secret sharing. Beyond succinct garbling, we demonstrate the utility of aHMAC by constructing constrained pseudorandom functions (CPRFs) for general constraint circuits from group-based assumptions. Previous CPRF constructions were limited to $\mathsf{NC}^1$ circuits or alternatively relied on lattices or iO.
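The garbling syntax defined above (a key pair per input bit, a garbled program, and an evaluation that reveals only the output) can be made concrete with a toy Yao-style garbling of a single gate. This is a minimal sketch of the general notion only, not the paper's succinct group-based construction; the hash-based encryption and the zero-byte check tag are purely illustrative.

```python
import hashlib
import secrets

def H(ka, kb):
    # Hash of the two input labels, used as a one-time pad (illustrative PRF).
    return hashlib.sha256(ka + kb).digest()  # 32 bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    """Garble a single AND gate: labels k[i][bit] per input wire,
    output labels out[bit], and four shuffled row ciphertexts."""
    k = [[secrets.token_bytes(16) for _ in (0, 1)] for _ in (0, 1)]
    out = [secrets.token_bytes(16) for _ in (0, 1)]
    rows = []
    for a in (0, 1):
        for b in (0, 1):
            plaintext = out[a & b] + b"\x00" * 16  # output label + check tag
            rows.append(xor(H(k[0][a], k[1][b]), plaintext))
    secrets.SystemRandom().shuffle(rows)           # hide the row order
    return k, out, rows

def eval_gate(rows, ka, kb):
    """Given exactly one label per input wire, recover one output label
    and nothing else about the other wire values."""
    for row in rows:
        pt = xor(row, H(ka, kb))
        if pt[16:] == b"\x00" * 16:
            return pt[:16]
    raise ValueError("no row decrypted")
```

Holding only `k[0][a]` and `k[1][b]`, the evaluator learns the output label `out[a & b]` but cannot tell whether it corresponds to 0 or 1, which is exactly the hiding property the abstract generalizes to partial garbling.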

    Aviation subsidy policy and regional wellbeing: Important indicators from relevant stakeholders’ perspectives

    A failure to adequately reconcile stakeholder interests and opinions can increase the probability of a failed aviation subsidy request or a loss of regional opportunity. Instead of arguing the importance of aviation subsidies or critiquing them, this study surveys stakeholders in New Zealand and Taiwan and uses the fuzzy analytic hierarchy process to evaluate and prioritise key air transport activities and regional wellbeing indicators in the design and implementation of aviation subsidy policies in the early stage of the COVID-19 pandemic. The findings show that destinations served, flight frequency, local business activities, medical treatment, and rapid hazard response were considered the key factors of aviation subsidies. Integrating economic and social wellbeing into subsidy policy design and implementation is highly necessary. The results provide useful insights for the development of aviation subsidy policies aimed at improving regional wellbeing in New Zealand and Taiwan during the post-COVID-19 era.
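As a rough illustration of the fuzzy AHP weighting step, the sketch below implements Buckley's geometric-mean method over triangular fuzzy judgments with centroid defuzzification. The two criteria and the judgment values are hypothetical, not taken from the study's survey data.

```python
import math

def fuzzy_ahp_weights(judgments):
    """Buckley's geometric-mean fuzzy AHP: `judgments` is an n x n matrix
    of triangular fuzzy numbers (l, m, u); returns crisp normalized weights."""
    n = len(judgments)
    # Component-wise fuzzy geometric mean of each row.
    geo = []
    for row in judgments:
        l = math.prod(t[0] for t in row) ** (1 / n)
        m = math.prod(t[1] for t in row) ** (1 / n)
        u = math.prod(t[2] for t in row) ** (1 / n)
        geo.append((l, m, u))
    # Fuzzy weight w_i = geo_i / sum(geo); defuzzify by centroid (l + m + u) / 3.
    total_l = sum(g[0] for g in geo)
    total_m = sum(g[1] for g in geo)
    total_u = sum(g[2] for g in geo)
    crisp = []
    for l, m, u in geo:
        wl, wm, wu = l / total_u, m / total_m, u / total_l
        crisp.append((wl + wm + wu) / 3)
    s = sum(crisp)
    return [c / s for c in crisp]

# Hypothetical 2-criterion example: "flight frequency" judged moderately
# more important than "destinations served" (triangular fuzzy scale (2, 3, 4)).
weights = fuzzy_ahp_weights([
    [(1, 1, 1), (2, 3, 4)],
    [(1 / 4, 1 / 3, 1 / 2), (1, 1, 1)],
])
```

The resulting weights sum to one and rank the first criterion above the second, which is the kind of prioritisation the study derives across its full indicator set.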

    How to Compress Garbled Circuit Input Labels, Efficiently

    Garbled circuits are essential building blocks in cryptography, and extensive research has explored their construction from both applied and theoretical perspectives. However, a challenge persists: while theoretically designed garbled circuits offer optimal succinctness--remaining constant in size regardless of the underlying circuit's complexity--and are reusable for multiple evaluations, their concrete computational costs are prohibitively high. On the other hand, practically efficient garbled circuits, inspired by Yao's garbled circuits, encounter limitations due to substantial communication bottlenecks and a lack of reusability. To strike a balance, we propose a novel concept: online-offline garbling. This approach leverages instance-independent and (partially) reusable preprocessing during an offline phase to enable the creation of constant-size garbled circuits in an online phase, while maintaining practical efficiency. Specifically, during the offline stage, the garbler generates and transmits a reference string, independent of the computation to be performed later. Subsequently, in the online stage, the garbler efficiently transforms a circuit into a constant-size garbled circuit. The evaluation process relies on both the reference string and the garbled circuit. We demonstrate that by leveraging existing tools such as those introduced by Applebaum et al. (Crypto’13) and Chongwon et al. (Crypto’17), online-offline garbling can be achieved under a variety of assumptions, including the hardness of Learning With Errors (LWE), Computational Diffie-Hellman (CDH), and factoring. In contrast, without the help of an offline phase, constant-size garbling is only feasible under the LWE and circular-security assumptions, or the existence of indistinguishability obfuscation. However, these schemes are still very inefficient, several orders of magnitude more costly than Yao-style garbled circuits.
    To address this, we propose a new online-offline garbling scheme based on Ring LWE. Our scheme offers both asymptotic and concrete efficiency. It serves as a practical alternative to Yao-style garbled circuits, especially in scenarios where online communication is constrained. Furthermore, we estimate the concrete latency of our approach in realistic settings and demonstrate that it is 2-20x faster than using Yao-style garbled circuits. This improvement is estimated without taking into account parallelization of computation, which can lead to further performance gains using our scheme.

    ABE for Circuits with Constant-Size Secret Keys and Adaptive Security

    An important theme in research on attribute-based encryption (ABE) is minimizing the sizes of the secret keys and ciphertexts. In this work, we present two new ABE schemes with *constant-size* secret keys, that is, the key size is independent of the sizes of policies or attributes, and dependent only on the security parameter lambda.
    * We construct the first key-policy ABE scheme for circuits with constant-size secret keys, |sk_f|=poly(lambda), which concretely consist of only three group elements. The previous state-of-the-art construction by [Boneh et al., Eurocrypt '14] has key size polynomial in the maximum depth d of the policy circuits, |sk_f|=poly(d,lambda). Our new scheme removes this dependency of the key size on d while keeping the ciphertext size the same, which grows linearly in the attribute length and polynomially in the maximal depth, |ct|=|x|poly(d,lambda).
    * We present the first ciphertext-policy ABE scheme for Boolean formulae that simultaneously has constant-size keys and succinct ciphertexts of size independent of the policy formulae, in particular, |sk_f|=poly(lambda) and |ct_x|=poly(|x|,lambda). Concretely, each secret key consists of only two group elements. Previous ciphertext-policy ABE schemes either have succinct ciphertexts but non-constant-size keys [Agrawal--Yamada, Eurocrypt '20; Agrawal--Wichs--Yamada, TCC '20], or constant-size keys but large ciphertexts that grow with the policy size as well as the attribute length.
    Our second construction is the first ABE scheme achieving *double succinctness*, where both keys and ciphertexts are smaller than the corresponding attributes and policies tied to them. Our constructions feature new ways of combining lattices with pairing groups for building ABE and are proven selectively secure based on LWE and in the generic (pairing) group model. We further show that when replacing the LWE assumption with its adaptive variant introduced in [Quach--Wee--Wichs, FOCS '18], the constructions become adaptively secure.

    A Unified Framework for Succinct Garbling from Homomorphic Secret Sharing

    A major challenge in cryptography is the construction of succinct garbling schemes that have asymptotically smaller size than Yao's garbled circuit construction. We present a new framework for succinct garbling that replaces the heavy machinery of most previous constructions with lighter-weight homomorphic secret sharing techniques. Concretely, we achieve 1-bit-per-gate (amortized) garbling size for Boolean circuits under circular variants of standard assumptions in composite-order or prime-order groups, as well as a lattice-based instantiation. We further extend these ideas to layered circuits, improving the per-gate cost below 1 bit, and to arithmetic circuits, eliminating the typical Ω(λ)-factor overhead for garbling mod-p computations. Our constructions also feature “leveled” variants that remove circular-security requirements at the cost of adding a depth-dependent term to the garbling size. Our framework significantly extends a recent technique of Liu, Wang, Yang, and Yu (Eurocrypt 2025) for lattice-based succinct garbling, and opens new avenues toward practical succinct garbling. For moderately large circuits with a few million gates, our garbled circuits can be two orders of magnitude smaller than Yao-style garbling. While our garbling and evaluation algorithms are much slower, they are still practically feasible, unlike previous fully succinct garbling schemes that rely on expensive tools such as iO or a non-black-box combination of FHE and ABE. This trade-off can make our framework appealing when a garbled circuit is used as a functional ciphertext that is broadcast or stored in multiple locations (e.g., on a blockchain), in which case communication and storage may dominate computational cost.
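The claimed size gap can be sanity-checked with back-of-the-envelope arithmetic. The parameters below are illustrative assumptions: a security parameter of 128 and a Yao-style cost of about 2λ bits per gate (as in half-gates style garbling), versus the abstract's amortized 1 bit per gate.

```python
# Back-of-the-envelope garbled-circuit size comparison (assumed parameters).
LAMBDA = 128                      # security parameter (assumption)
gates = 4_000_000                 # "a few million gates"

yao_bits = gates * 2 * LAMBDA     # Yao-style: ~2*lambda bits per gate (assumption)
succinct_bits = gates * 1         # this framework: ~1 bit per gate (amortized)

ratio = yao_bits / succinct_bits  # 256x, i.e. about two orders of magnitude
yao_megabytes = yao_bits / 8 / 1e6
```

Under these assumptions the Yao-style garbling weighs in at 128 MB against roughly 0.5 MB, consistent with the "two orders of magnitude" figure in the abstract.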

    New Ways to Garble Arithmetic Circuits

    The beautiful work of Applebaum, Ishai, and Kushilevitz [FOCS '11] initiated the study of arithmetic variants of Yao's garbled circuits. An arithmetic garbling scheme is an efficient transformation that converts an arithmetic circuit $C: \mathcal{R}^n \rightarrow \mathcal{R}^m$ over a ring $\mathcal{R}$ into a garbled circuit $\widehat{C}$ and $n$ affine functions $L_i$ for $i \in [n]$, such that $\widehat{C}$ and $L_i(x_i)$ reveal only the output $C(x)$ and no other information about $x$. AIK presented the first arithmetic garbling scheme supporting computation over integers from a bounded (possibly exponentially large) range, based on Learning With Errors (LWE). In contrast, converting $C$ into a Boolean circuit and applying Yao's garbled circuit treats the inputs as bit strings instead of ring elements, and hence is not arithmetic. In this work, we present new ways to garble arithmetic circuits, which improve the state of the art on efficiency, modularity, and functionality. To measure efficiency, we define the rate of a garbling scheme as the maximal ratio between the bit-length of the garbled circuit $|\widehat{C}|$ and that of the computation tableau $|C|\ell$ in the clear, where $\ell$ is the bit length of wire values (e.g., Yao's garbled circuit has rate $O(\lambda)$).
    - We present the first constant-rate arithmetic garbled circuit for computation over large integers based on the Decisional Composite Residuosity (DCR) assumption, significantly improving the efficiency of the schemes of Applebaum, Ishai, and Kushilevitz.
    - We construct an arithmetic garbling scheme for modular computation over $\mathcal{R} = \mathbb{Z}_p$ for any integer modulus $p$, based on either DCR or LWE. The DCR-based instantiation achieves rate $O(\lambda)$ for large $p$. Furthermore, our construction is modular and makes black-box use of the underlying ring and a simple key-extension gadget.
    - We describe a variant of the first scheme supporting arithmetic circuits over bounded integers that are augmented with Boolean computation (e.g., truncation of an integer value, and comparison between two values), while keeping the constant rate when garbling the arithmetic part.
    To the best of our knowledge, constant-rate (Boolean or arithmetic) garbling was only achieved before using the powerful primitive of indistinguishability obfuscation, or for restricted circuits with small depth.

    LERNA: Secure Single-Server Aggregation via Key-Homomorphic Masking

    This paper introduces LERNA, a new framework for single-server secure aggregation. Our protocols are tailored to the setting where multiple consecutive aggregation phases are performed with the same set of clients, a fraction of which can drop out in some of the phases. We rely on an initial secret-sharing setup among the clients, which is generated once and for all and reused in all following aggregation phases. Compared to prior works [Bonawitz et al., CCS’17; Bell et al., CCS’20], the reusable setup eliminates one round of communication between the server and clients per aggregation: we need two rounds for semi-honest security (instead of three), and three rounds (instead of four) in the malicious model. Our approach also significantly reduces the server's computational costs by requiring the reconstruction of only a single secret-shared value (per aggregation); prior work required reconstructing a secret-shared value for each client involved in the computation. We provide instantiations of LERNA based on the Decisional Composite Residuosity (DCR) and the (Ring) Learning with Rounding ((R)LWR) assumptions, and evaluate a version based on the latter. In addition to savings in round complexity (which result in reduced latency), our experiments show that the server's computational costs are reduced by two orders of magnitude in comparison to the state of the art. In settings with a large number of clients, we also reduce the computational costs up to twenty-fold for most clients, while a small set of “heavy clients” is subject to a workload that is still smaller than that of prior work.
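The single-reconstruction idea can be illustrated with a toy key-homomorphic masking scheme (this is not LERNA's actual construction and is not secure): a linear "PRF" F(k, t) = k·h(t) mod Q satisfies F(k1, t) + F(k2, t) = F(k1 + k2, t), so the server can strip all client masks at once after reconstructing just the sum of the clients' keys.

```python
import hashlib

Q = 2**61 - 1  # prime modulus for the toy scheme

def F(key, round_tag):
    # Key-homomorphic "PRF" sketch: F(k, t) = k * h(t) mod Q, so that
    # F(k1, t) + F(k2, t) = F(k1 + k2, t). (Illustrative only, not secure.)
    h = int.from_bytes(hashlib.sha256(round_tag).digest(), "big") % Q
    return key * h % Q

# Setup (once): each client holds a key; the *sum* of the keys is
# secret-shared among the clients so the server can reconstruct it later.
keys = [1234567, 7654321, 424242]
key_sum = sum(keys) % Q

def aggregate(inputs, round_tag):
    """One aggregation phase: clients each send a single masked value;
    the server removes the combined mask using only the reconstructed key sum."""
    masked = [(x + F(k, round_tag)) % Q for x, k in zip(inputs, keys)]
    return (sum(masked) - F(key_sum, round_tag)) % Q

total = aggregate([10, 20, 30], b"round-1")
```

Because the same `key_sum` works for every round tag, the setup is reusable across aggregation phases, mirroring the once-and-for-all setup described above.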

    Assessment of Nondestructive Testing Technologies for Quality Control/Quality Assurance of Asphalt Mixtures

    Asphalt pavements suffer various failures due to insufficient quality within their design lives. The American Association of State Highway and Transportation Officials (AASHTO) Mechanistic-Empirical Pavement Design Guide (MEPDG) has been proposed to improve pavement quality through quantitative performance prediction. Evaluation of the actual performance (quality) of pavements requires in situ nondestructive testing (NDT) techniques that can accurately measure the most critical, objective, and sensitive properties of pavement systems. The purpose of this study is to assess existing as well as promising new NDT technologies for quality control/quality assurance (QC/QA) of asphalt mixtures. Specifically, this study examined field measurements of density via the PaveTracker electromagnetic gage, shear-wave velocity via surface-wave testing methods, and dynamic stiffness via the Humboldt GeoGauge for five representative paving projects covering a range of mixes and traffic loads. The in situ tests were compared against laboratory measurements of core density and dynamic modulus. The in situ PaveTracker density had a low correlation with laboratory density and was not sensitive to variations in temperature or asphalt mix type. The in situ shear-wave velocity measured by surface-wave methods was most sensitive to variations in temperature and asphalt mix type. The in situ density and in situ shear-wave velocity were combined to calculate an in situ dynamic modulus, which is a performance-based quality measurement. The in situ GeoGauge stiffness measured on hot asphalt mixtures several hours after paving had a high correlation with the in situ dynamic modulus and the laboratory density, whereas the stiffness measurement of asphalt mixtures cooled with dry ice or at ambient temperature one or more days after paving had a very low correlation with the other measurements. 
    To transform the in situ moduli from surface-wave testing into quantitative quality measurements, a QC/QA procedure was developed to first correct the in situ moduli measured at different field temperatures to moduli at a common reference temperature, based on master curves from laboratory dynamic modulus tests. The corrected in situ moduli can then be compared against the design moduli for an assessment of the actual pavement performance. A preliminary study of microelectromechanical systems (MEMS)-based sensors for QC/QA and health monitoring of asphalt pavements was also performed.
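The step of combining in situ density and shear-wave velocity into a modulus can be sketched with the standard small-strain elasticity relations G = ρVs² and E = 2G(1 + ν). The mixture values and Poisson's ratio below are hypothetical, and the study's actual procedure additionally applies the temperature correction via laboratory master curves.

```python
def dynamic_modulus_from_shear_wave(density_kg_m3, vs_m_s, poisson=0.35):
    """Small-strain moduli from in situ density and shear-wave velocity.
    Standard elasticity relations: G = rho * Vs^2 and E = 2 * G * (1 + nu).
    Returns Young's modulus in GPa. (Illustrative of the combination step
    only; inputs and Poisson's ratio here are hypothetical.)"""
    g_pa = density_kg_m3 * vs_m_s ** 2  # shear modulus, Pa
    e_pa = 2 * g_pa * (1 + poisson)     # Young's modulus, Pa
    return e_pa / 1e9                   # convert Pa -> GPa

# Hypothetical asphalt mixture: density 2400 kg/m^3, Vs = 1500 m/s.
e_gpa = dynamic_modulus_from_shear_wave(2400, 1500)
```

With these example inputs the shear modulus is 5.4 GPa and the derived Young's modulus is about 14.6 GPa, in the range expected for a compacted asphalt mixture at moderate temperature.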