
    Expanding CRISPR/Cas9 Genome Editing Capacity in Zebrafish Using SaCas9.

    The type II CRISPR/Cas9 system has been widely used for genome editing in zebrafish. However, the requirement for the 5'-NGG-3' protospacer-adjacent motif (PAM) of Cas9 from Streptococcus pyogenes (SpCas9) limits its choice of target sequences. Here, we report that a Cas9 ortholog from Staphylococcus aureus (SaCas9), and its KKH variant, successfully induced targeted mutagenesis at high frequency in zebrafish. Confirming previous findings, the SpCas9 variant VQR can also induce targeted mutations in zebrafish. Bioinformatics analysis of these new Cas targets suggests that the number of available target sites in the zebrafish genome can be greatly expanded. Collectively, the expanded target repertoire of Cas9 in zebrafish should further facilitate the utility of this organism for genetic studies of vertebrate biology.
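    The effect of the expanded PAM repertoire can be illustrated with a short scan over a sequence. A minimal sketch, assuming the commonly reported PAM definitions (SpCas9: 5'-NGG-3'; SpCas9-VQR: 5'-NGA-3'; SaCas9: 5'-NNGRRT-3'; SaCas9-KKH: 5'-NNNRRT-3') and a made-up input sequence; this is not the authors' bioinformatics pipeline.

```python
import re

# IUPAC-style PAM patterns as zero-width regex lookaheads
# (N = any base, R = A or G), so overlapping sites are all found.
PAMS = {
    "SpCas9 (NGG)":        r"(?=[ACGT]GG)",
    "SpCas9-VQR (NGA)":    r"(?=[ACGT]GA)",
    "SaCas9 (NNGRRT)":     r"(?=[ACGT]{2}G[AG]{2}T)",
    "SaCas9-KKH (NNNRRT)": r"(?=[ACGT]{3}[AG]{2}T)",
}

def pam_sites(seq: str, pattern: str) -> list[int]:
    """Return 0-based start positions of PAM matches on the forward strand."""
    return [m.start() for m in re.finditer(pattern, seq.upper())]

# Hypothetical sequence, for illustration only.
seq = "ATGGACGGATTCCGGAAGTTGAGGTCCATTAGGTAAGGTC"
for name, pat in PAMS.items():
    print(name, pam_sites(seq, pat))
```

    Counting sites per variant over a genome-scale sequence set is the same loop; the union of all four site lists is what expands the target repertoire.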

    HybridNet: Dual-Branch Fusion of Geometrical and Topological Views for VLSI Congestion Prediction

    Accurate early congestion prediction can prevent unpleasant surprises at the routing stage, playing a crucial role in helping designers iterate faster in VLSI design cycles. In this paper, we introduce a novel strategy to fully incorporate the topological and geometrical features of circuits through several key design choices in our network architecture. Specifically, we construct two individual graphs (a geometry graph and a topology graph) with distinct edge-construction schemes according to their unique properties. We then propose a dual-branch network with different encoder layers in each pathway and aggregate the representations with a sophisticated fusion strategy. Our network, named HybridNet, not only provides a simple yet effective way to capture the geometric interactions of cells, but also preserves the original topological relationships in the netlist. Experimental results on the ISPD2015 benchmarks show an improvement of 10.9% over previous methods.
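    The two edge-construction schemes can be sketched on a toy netlist. A minimal illustration, assuming a distance-threshold rule for the geometry graph and a clique expansion of nets for the topology graph; the function names, the threshold, and the clique-expansion choice are assumptions for illustration, not details from the paper.

```python
from itertools import combinations

def geometry_edges(positions, radius):
    """Geometry graph: connect cells whose Euclidean distance is within `radius`."""
    edges = set()
    for (i, (xi, yi)), (j, (xj, yj)) in combinations(enumerate(positions), 2):
        if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
            edges.add((i, j))
    return edges

def topology_edges(nets):
    """Topology graph: connect cells that share a net (clique expansion of the netlist)."""
    edges = set()
    for net in nets:
        for i, j in combinations(sorted(net), 2):
            edges.add((i, j))
    return edges

positions = [(0, 0), (1, 0), (5, 5)]   # toy cell placements
nets = [[0, 2], [1, 2]]                # toy netlist: each net lists its cells
print(geometry_edges(positions, 2.0))  # spatially close pairs only
print(topology_edges(nets))            # wired pairs, regardless of distance
```

    The point of the dual-branch design is visible even here: cell 2 is far from cells 0 and 1 geometrically but adjacent to both topologically, so the two graphs carry complementary information.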

    Modest BBR: Enabling better fairness for BBR congestion control

    As a vital component of TCP, congestion control defines TCP's performance characteristics. Hence, it is important for congestion control to provide high link utilization and low queuing delay. The recent BBR algorithm tries to estimate the available bottleneck capacity to achieve this goal. However, its aggressive behavior generates a massive amount of packet retransmissions, which harms loss-based congestion control protocols such as Cubic. In this paper, we first dive into this issue and reveal that the aggressiveness of BBR can degrade the performance of Cubic, as well as overall Internet transmission. We then present Modest BBR, a simple yet effective solution based on BBR that responds to retransmissions less aggressively. Through extensive testbed experiments and Mininet simulations, we show that Modest BBR preserves high throughput and short convergence times while improving overall performance when coexisting with Cubic. For example, Modest BBR achieves throughput similar to BBR's, while improving overall throughput by 7.1% and achieving better fairness toward loss-based schemes.
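    The core idea of responding to retransmissions less aggressively can be sketched as a simple rate-damping rule. A toy illustration only: the threshold, backoff factor, and function shape are assumptions for exposition and are not Modest BBR's actual control law.

```python
def adjust_rate(rate_bps, retrans_ratio, threshold=0.02, backoff=0.85):
    """Illustrative damping rule: back off the sending rate multiplicatively
    once the observed retransmission ratio exceeds a threshold, instead of
    ignoring loss signals entirely as capacity-estimation schemes tend to."""
    if retrans_ratio > threshold:
        return rate_bps * backoff
    return rate_bps

rate = 100e6                                   # 100 Mbps pacing rate
rate = adjust_rate(rate, retrans_ratio=0.05)   # heavy retransmissions -> back off
print(rate)
```

    A rule of this shape lets a BBR-like sender keep its bandwidth-probing behavior under clean conditions while yielding capacity to loss-based flows such as Cubic when its own aggressiveness starts inducing retransmissions.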

    Distilling ChatGPT for Explainable Automated Student Answer Assessment

    Providing explainable and faithful feedback is crucial for automated student answer assessment. In this paper, we introduce a novel framework that explores using ChatGPT, a cutting-edge large language model, for the concurrent tasks of student answer scoring and rationale generation. We identify appropriate instructions by prompting ChatGPT with different templates to collect rationales, where inconsistent rationales are refined to align with marking standards. The refined ChatGPT outputs enable us to fine-tune a smaller language model that simultaneously assesses student answers and provides rationales. Extensive experiments on the benchmark dataset show that the proposed method improves the overall QWK score by 11% compared to ChatGPT. Furthermore, our thorough analysis and human evaluation demonstrate that the rationales generated by our proposed method are comparable to those of ChatGPT. Our approach provides a viable solution for explainable automated assessment in education. Code is available at https://github.com/lijiazheng99/aera. Comment: Accepted at EMNLP 202
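    The QWK score reported above is the standard quadratic weighted kappa for ordinal agreement. A self-contained sketch of the metric itself (not the authors' evaluation code), which penalizes scoring disagreements by the squared distance between score classes:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic Weighted Kappa between two ordinal ratings in {0..n_classes-1}.
    1.0 = perfect agreement; 0.0 = chance-level agreement."""
    # Observed confusion matrix.
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights, normalized to [0, 1].
    W = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)]) / (n_classes - 1) ** 2
    # Expected matrix under independence of the two raters' marginals.
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1 - (W * O).sum() / (W * E).sum()

print(quadratic_weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3))  # perfect agreement -> 1.0
```

    Because the weights grow quadratically with distance, an off-by-two scoring error costs four times as much as an off-by-one error, which is why QWK is the conventional metric for essay and answer scoring.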

    Numerical simulation and experimental calibration of additive manufacturing by blown powder technology. Part I: thermal analysis

    Purpose: This paper aims to address the numerical simulation of additive manufacturing (AM) processes. The numerical results are compared with an experimental campaign carried out at the State Key Laboratory of Solidification Processing, where a laser solid forming machine, also referred to as laser engineered net shaping, is used to fabricate metal parts directly from computer-aided design models. Ti-6Al-4V metal powder is injected into the molten pool created by a focused, high-energy laser beam, and a layer of added material is sintered according to the laser scanning pattern specified by the user. Design/methodology/approach: The numerical model adopts an ad hoc finite element (FE) activation technology, which reproduces the same scanning pattern set for the numerical control system of the AM machine. This consists of a complex sequence of polylines, used to define the contour of the component, and hatch patterns to fill the inner section. The full sequence is given through the common layer interface format, a standard format for different manufacturing processes such as rapid prototyping, shape metal deposition or machining, among others. The result is a layer-by-layer metal deposition which can be used to build up complex structures for components such as turbine blades, aircraft stiffeners, cooling systems or medical implants. Findings: An ad hoc FE framework for the numerical simulation of the AM process by metal deposition is introduced, and the calibration procedure adopted is described. Originality/value: The objectives of this paper are twofold: first, to calibrate the software for the numerical simulation of the AM process to achieve high accuracy; second, to analyze the sensitivity of the numerical model to the process parameters and modeling data. Peer reviewed. Postprint (author's final draft).
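    The element-activation idea, switching elements "on" as the laser path passes over them, can be sketched in a few lines. A toy birth-and-death illustration under assumed data structures (centroid dictionary, point-sampled scan path, box tolerance); the real framework works on an FE mesh driven by common layer interface polylines and hatches.

```python
def activate_elements(elements, scan_path, tol=0.5):
    """Mark as active every element whose centroid lies within `tol`
    (per axis) of a point on the laser scanning path."""
    active = set()
    for px, py, pz in scan_path:
        for eid, (cx, cy, cz) in elements.items():
            if abs(cx - px) <= tol and abs(cy - py) <= tol and abs(cz - pz) <= tol:
                active.add(eid)
    return active

elements = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 0, 1)}  # element centroids (x, y, z)
scan_path = [(0, 0, 0), (1, 0, 0)]                     # first-layer hatch points
print(sorted(activate_elements(elements, scan_path)))  # only layer-0 elements activate
```

    Running this once per layer, with the scan path advanced in z, reproduces the layer-by-layer build-up: elements above the current layer (like element 2 here) stay inactive and contribute nothing to the thermal problem until the laser reaches them.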

    Track: Tracerouting in SDN networks with arbitrary network functions

    The centralization of the control plane in software-defined networking (SDN) creates a paramount challenge for troubleshooting the network, as packets are ultimately forwarded by distributed data planes. Existing path-tracing tools largely rely on packet tags to probe network paths among SDN-enabled switches. However, network functions (NFs) or middleboxes, whose presence is ubiquitous in today's networks, can drop packets or alter their tags, an action that can collapse the probing mechanism. In addition, sending probing packets through network functions could corrupt their internal states, risking the correctness of the servicing logic (e.g., incorrect load-balancing decisions). In this paper, we present a novel troubleshooting tool, Track, for SDN-enabled networks with arbitrary NFs. Track can discover the forwarding path, including NFs, taken by any packet, without changing the forwarding rules in switches or the internal states of NFs. We have implemented Track on the Ryu controller. Our extensive experimental results show that Track achieves 95.08% and 100% accuracy in discovering forwarding paths with and without NFs, respectively, and can efficiently generate traces within 3 milliseconds per hop.

    TCon: A transparent congestion control deployment platform for optimizing WAN transfers

    Nowadays, many web services (e.g., cloud storage) are deployed inside datacenters and may trigger transfers to clients over the WAN. TCP congestion control is a vital component for improving the performance (e.g., latency) of these services. Given the complex networking environment, the default congestion control algorithms on servers may not always be the most efficient, and new advanced algorithms will continue to be proposed. However, changing the congestion control algorithm usually requires modifying the TCP stacks of servers, which is difficult if not impossible, especially considering the different operating systems and configurations on servers. In this paper, we propose TCon, a lightweight, flexible and scalable platform that allows administrators (or operators) to deploy any appropriate congestion control algorithm transparently, without making any changes to the TCP stacks of servers. We have implemented TCon in Open vSwitch (OVS) and conducted extensive testbed experiments by transparently deploying the BBR congestion control algorithm over TCon. Testbed results show that BBR over TCon works effectively and its performance stays close to that of a native implementation on servers, reducing latency by 12.76% on average.

    Learning the Relation between Similarity Loss and Clustering Loss in Self-Supervised Learning

    Self-supervised learning enables networks to learn discriminative features from massive amounts of unlabeled data. Most state-of-the-art methods maximize the similarity between two augmentations of one image based on contrastive learning. By exploiting the consistency of two augmentations, the burden of manual annotation is removed. Contrastive learning exploits instance-level information to learn robust features. However, the learned information is probably confined to different views of the same instance. In this paper, we attempt to leverage the similarity between two distinct images to boost representation learning in self-supervised learning. In contrast to instance-level information, the similarity between two distinct images may provide more useful information. In addition, we analyze the relation between similarity loss and feature-level cross-entropy loss. These two losses are essential for most deep learning methods, yet the relation between them is not clear. Similarity loss helps obtain instance-level representations, while feature-level cross-entropy loss helps mine the similarity between two distinct images. We provide theoretical analyses and experiments showing that a suitable combination of these two losses can achieve state-of-the-art results. Code is available at https://github.com/guijiejie/ICCL. Comment: Accepted by IEEE Transactions on Image Processing.
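    The two losses being combined can be sketched numerically. A minimal NumPy illustration, assuming a negative-cosine-similarity form for the similarity loss and a cross-entropy over feature-level assignment distributions; the weighting knob `alpha` and all shapes are hypothetical, and this is not the paper's actual loss code.

```python
import numpy as np

def similarity_loss(z1, z2):
    """Negative mean cosine similarity between two batches of embeddings
    (the common form of an augmentation-consistency loss)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return -np.mean(np.sum(z1 * z2, axis=1))

def feature_cross_entropy(p, q, eps=1e-12):
    """Cross-entropy between feature-level assignment distributions p and q."""
    return -np.mean(np.sum(p * np.log(q + eps), axis=1))

def combined_loss(z1, z2, p, q, alpha=0.5):
    """Weighted combination of the two losses; `alpha` is an assumed knob."""
    return alpha * similarity_loss(z1, z2) + (1 - alpha) * feature_cross_entropy(p, q)

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))  # toy embeddings of two views
p = np.full((4, 3), 1 / 3)                                 # toy assignment distributions
q = np.full((4, 3), 1 / 3)
print(round(combined_loss(z1, z2, p, q, alpha=0.5), 4))
```

    The division of labor matches the abstract's framing: the first term pulls two views of the same instance together, while the second compares distributions across features and can therefore relate distinct images.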