
    Transient Receptor Potential V Channels Are Essential for Glucose Sensing by Aldolase and AMPK

    Fructose-1,6-bisphosphate (FBP) aldolase links the sensing of declining glucose availability to AMPK activation via the lysosomal pathway. However, how aldolase transmits its lack of occupancy by FBP to AMPK activation remains unclear. Here, we show that FBP-unoccupied aldolase interacts with and inhibits endoplasmic reticulum (ER)-localized transient receptor potential channel subfamily V (TRPV), blocking calcium release in low glucose. The decrease of calcium at contact sites between the ER and lysosome renders the inhibited TRPV accessible to bind the lysosomal v-ATPase, which then recruits AXIN:LKB1 to activate AMPK independently of AMP. Genetic depletion of TRPVs blocks glucose starvation-induced AMPK activation in cells, in mouse liver, and in nematodes, indicating a physiological requirement for TRPVs. Pharmacological inhibition of TRPVs activates AMPK and elevates NAD(+) levels in aged muscles, rejuvenating the animals' running capacity. Our study elucidates how TRPVs relay the FBP-free status of aldolase to the reconfiguration of v-ATPase, leading to AMPK activation in low glucose.

    OmniLytics+: A Secure, Efficient, and Affordable Blockchain Data Market for Machine Learning through Off-Chain Processing

    The rapid development of large machine learning (ML) models requires massive amounts of training data, resulting in booming demand for data sharing and trading through data markets. Traditional centralized data markets suffer from a low level of security, and emerging decentralized platforms face efficiency and privacy challenges. In this paper, we propose OmniLytics+, the first decentralized data market, built upon blockchain and smart contract technologies, to simultaneously achieve 1) data (resp., model) privacy for the data (resp., model) owner; 2) robustness against malicious data owners; and 3) efficient data validation and aggregation. Specifically, adopting the zero-knowledge (ZK) rollup paradigm, OmniLytics+ secret-shares encrypted local gradients, computed from the encrypted global model, with a set of untrusted off-chain servers, who collaboratively generate a ZK proof of the validity of the gradients. In this way, the storage and processing overheads are securely offloaded from blockchain verifiers, significantly improving privacy, efficiency, and affordability over existing rollup solutions. We implement the proposed OmniLytics+ data market as an Ethereum smart contract [41]. Extensive experiments demonstrate the effectiveness of OmniLytics+ in training large ML models in the presence of malicious data owners, and its substantial advantages in gas cost and execution time over baselines.
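
    The core primitive in this pipeline is splitting each local gradient into additive shares, so that no single off-chain server learns anything on its own. Below is a minimal sketch of additive secret sharing over a prime field; the field modulus, the toy quantized gradient, and the function names are illustrative assumptions, and the model encryption and ZK proof generation described in the abstract are omitted entirely.

    # Sketch only: additive secret sharing of a quantized gradient vector
    # among n untrusted servers. Parameters and names are assumptions,
    # not the OmniLytics+ implementation.
    import secrets

    PRIME = 2**61 - 1  # assumed field modulus; the paper's parameters may differ

    def share(value: int, n_servers: int) -> list[int]:
        """Split `value` into n additive shares that sum to value mod PRIME."""
        shares = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares: list[int]) -> int:
        """Recover the secret; any subset of n-1 shares reveals nothing."""
        return sum(shares) % PRIME

    # A toy quantized gradient, shared element-wise among 3 servers.
    gradient = [17, 4, 92]
    per_server = list(zip(*(share(g, 3) for g in gradient)))  # one tuple per server
    recovered = [reconstruct(col) for col in zip(*per_server)]
    assert recovered == gradient
    print("servers hold:", per_server)
    print("recovered gradient:", recovered)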

    Incremental Neural Implicit Representation with Uncertainty-Filtered Knowledge Distillation

    Recent neural implicit representations (NIRs) have achieved great success in the tasks of 3D reconstruction and novel view synthesis. However, they suffer from catastrophic forgetting when continuously learning from streaming data without revisiting previously seen data. This limitation prohibits applying existing NIRs to scenarios where images arrive sequentially. In view of this, we explore the task of incremental learning for NIRs in this work. We design a student-teacher framework to mitigate catastrophic forgetting. Specifically, at the end of each time step we use the student as the new teacher, and let the teacher guide the training of the student in the next step. As a result, the student network learns new information from the streaming data while retaining old knowledge from the teacher network. Although intuitive, naively applying the student-teacher pipeline does not work well in our task: not all information from the teacher network is helpful, since the teacher was trained only on the old data. To alleviate this problem, we further introduce a random inquirer and an uncertainty-based filter to select useful information. Our proposed method is general and can thus be adapted to different implicit representations such as neural radiance fields (NeRF) and neural SDFs. Extensive experimental results on both 3D reconstruction and novel view synthesis demonstrate the effectiveness of our approach compared to different baselines.
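
    One way to read the filtering step: the frozen teacher (last step's student) answers randomly sampled queries, and only the answers it is confident about supervise the student. The sketch below, assuming PyTorch, uses a toy implicit field with a variance head and a median-based confidence threshold; the architecture, the uncertainty head, and the threshold are placeholder assumptions, not the paper's design.

    import torch
    import torch.nn as nn

    class TinyNIR(nn.Module):
        """Toy implicit field: maps 3D points to (value, predicted uncertainty)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))
        def forward(self, x):
            out = self.net(x)
            value, log_var = out[:, :1], out[:, 1:]
            return value, log_var.exp()  # uncertainty as a positive variance

    student, teacher = TinyNIR(), TinyNIR()
    teacher.load_state_dict(student.state_dict())  # teacher = frozen last-step student
    for p in teacher.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    # "Random inquirer": sample query points instead of replaying old images.
    queries = torch.rand(1024, 3)
    with torch.no_grad():
        t_val, t_unc = teacher(queries)

    # Keep only queries the teacher is confident about (assumed threshold).
    keep = (t_unc < t_unc.median()).squeeze(1)
    s_val, _ = student(queries[keep])
    distill_loss = nn.functional.mse_loss(s_val, t_val[keep])

    # In training, this term would be added to the loss on the new streaming data.
    opt.zero_grad()
    distill_loss.backward()
    opt.step()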

    GNeSF: Generalizable Neural Semantic Fields

    3D scene segmentation based on neural implicit representation has emerged recently, with the advantage of training on only 2D supervision. However, existing approaches still require expensive per-scene optimization, which prohibits generalization to novel scenes during inference. To circumvent this problem, we introduce a generalizable 3D segmentation framework based on implicit representation. Specifically, our framework takes multi-view image features and semantic maps as inputs, instead of only spatial information, to avoid overfitting to scene-specific geometric and semantic information. We propose a novel soft voting mechanism to aggregate the 2D semantic information from different views for each 3D point. In addition to the image features, view-difference information is also encoded in our framework to predict the voting scores. Intuitively, this allows the semantic information from nearby views to contribute more than that from distant ones. Furthermore, a visibility module is designed to detect and filter out detrimental information from occluded views. Owing to its generalizability, our method can synthesize semantic maps or conduct 3D semantic segmentation for novel scenes with solely 2D semantic supervision. Experimental results show that our approach achieves performance comparable with scene-specific approaches. More importantly, our approach can even outperform existing strong supervision-based approaches with only 2D annotations. Our source code is available at: https://github.com/HLinChen/GNeSF.
    Comment: NeurIPS 202
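
    A minimal sketch of the soft-voting aggregation, assuming PyTorch: per-view 2D semantic logits are weighted by voting scores (here a hypothetical linear scorer on view-difference features, standing in for the learned network), and occluded views are masked out before the softmax. All tensor shapes and the scorer are illustrative assumptions.

    import torch

    n_views, n_points, n_classes = 4, 1024, 21

    view_logits = torch.randn(n_views, n_points, n_classes)  # 2D semantics per view
    view_diff = torch.rand(n_views, n_points, 1)             # e.g. angle to query ray
    visible = torch.rand(n_views, n_points) > 0.2            # visibility module output
    visible[0, :] = True          # toy guard: at least one visible view per point

    score_net = torch.nn.Linear(1, 1)                        # placeholder scorer
    scores = score_net(view_diff).squeeze(-1)                # (n_views, n_points)
    scores = scores.masked_fill(~visible, float("-inf"))     # drop occluded views

    weights = torch.softmax(scores, dim=0)                   # soft votes over views
    fused = (weights.unsqueeze(-1) * view_logits).sum(dim=0) # (n_points, n_classes)
    labels = fused.argmax(dim=-1)                            # 3D semantic prediction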

    TreeSBA: Tree-Transformer for Self-Supervised Sequential Brick Assembly

    Inferring step-wise actions to assemble 3D objects with primitive bricks from images is a challenging task due to complex constraints and the vast number of possible combinations. Recent studies have demonstrated promising results on sequential LEGO brick assembly by using LEGO-Graph modeling to predict sequential actions. However, existing approaches are class-specific and require significant computational and 3D annotation resources. In this work, we first propose a computationally efficient breadth-first search (BFS) LEGO-Tree structure to model the sequential assembly actions by considering connections between consecutive layers. Based on the LEGO-Tree structure, we then design a class-agnostic tree-transformer framework to predict the sequential assembly actions from the input multi-view images. A major challenge of the sequential brick assembly task is that the step-wise action labels are costly and tedious to obtain in practice. We mitigate this problem by leveraging synthetic-to-real transfer learning. Specifically, our model is first pre-trained on synthetic data with full supervision from the available action labels. We then circumvent the requirement for action labels in the real data by proposing an action-to-silhouette projection that replaces action labels with input image silhouettes for self-supervision. Without any annotation on the real data, our model outperforms existing methods with 3D supervision by 7.8% and 11.3% in mIoU on the MNIST and ModelNet Construction datasets, respectively.
    Comment: ECCV 202
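
    To make the layer-wise tree concrete, here is a minimal sketch of building a breadth-first tree over bricks in consecutive layers. The 1D overlap test standing in for stud connectivity and the data layout are illustrative assumptions, not the paper's formulation.

    from collections import deque

    # Each brick: (layer, x_min, x_max); bricks in adjacent layers connect if
    # their x-extents overlap (a 1D stand-in for stud connectivity).
    bricks = [
        (0, 0, 4),             # root brick on the base layer
        (1, 0, 2), (1, 3, 5),
        (2, 1, 4),
    ]

    def connected(a, b):
        return b[0] == a[0] + 1 and a[1] < b[2] and b[1] < a[2]

    def bfs_tree(bricks, root=0):
        """Return a BFS assembly order and parent pointers, layer by layer."""
        parent, order = {root: None}, []
        queue = deque([root])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v, brick in enumerate(bricks):
                if v not in parent and connected(bricks[u], brick):
                    parent[v] = u
                    queue.append(v)
        return order, parent

    order, parent = bfs_tree(bricks)
    print("assembly order:", [bricks[i] for i in order])
    print("parent links:", parent)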