Polymerase-endonuclease amplification reaction for large-scale enzymatic production of antisense oligonucleotide
Synthetic oligonucleotides are contaminated with highly homologous failure sequences. Oligonucleotide synthesis is difficult to scale up because it requires expensive equipment, hazardous chemicals, and tedious purification processes. Here we report a novel thermocyclic reaction, the polymerase-endonuclease amplification reaction (PEAR), for the amplification of oligonucleotides. A target oligonucleotide and a tandemly repeated antisense probe are subjected to repeated cycles of denaturing, annealing, elongation and cleaving, in which thermostable DNA polymerase elongation and strand slipping generate duplex tandem repeats, and thermostable endonuclease (PspGI) cleavage releases monomeric duplex oligonucleotides. Each round of PEAR achieves >100-fold amplification. The product can be used directly in a further round of PEAR, and the process can be repeated as many times as needed. Besides avoiding hazardous materials and improving product purity, the reaction is easy to scale up and amenable to full automation, so it has the potential to become a useful tool for the large-scale production of antisense oligonucleotide drugs.
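As a back-of-the-envelope illustration of the amplification arithmetic above (not the authors' kinetic model), the following Python sketch compounds a flat 100-fold gain per round; the flat gain and the 1 pmol starting amount are assumptions for illustration only:

```python
# Rough yield estimate for iterated PEAR rounds.
# Assumes a flat 100-fold gain per round (the paper reports >100-fold);
# the 1 pmol starting amount is a hypothetical illustration.

def pear_yield(start_pmol: float, rounds: int, fold_per_round: float = 100.0) -> float:
    """Return the product amount (pmol) after the given number of rounds,
    treating each round as an independent fold_per_round amplification."""
    amount = start_pmol
    for _ in range(rounds):
        amount *= fold_per_round
    return amount

if __name__ == "__main__":
    for r in range(1, 4):
        print(f"after round {r}: {pear_yield(1.0, r):,.0f} pmol")
    # after round 1: 100 pmol
    # after round 2: 10,000 pmol
    # after round 3: 1,000,000 pmol (= 1 micromol)
```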
Unsupervised Learning of Visual Representations using Videos
Is strong supervision necessary for learning a good visual representation? Do
we really need millions of semantically-labeled images to train a Convolutional
Neural Network (CNN)? In this paper, we present a simple yet surprisingly
powerful approach for the unsupervised learning of CNNs. Specifically, we use
hundreds of thousands of unlabeled videos from the web to learn visual
representations. Our key idea is that visual tracking provides the supervision.
That is, two patches connected by a track should have similar visual
representations in deep feature space, since they probably belong to the same
object or object part. We design a Siamese-triplet network with a ranking loss
function to train this CNN representation. Without using a single image from
ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train
an ensemble of unsupervised networks that achieves 52% mAP (no bounding box
regression). This performance comes tantalizingly close to its
ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We
also show that our unsupervised network can perform competitively in other
tasks such as surface-normal estimation.
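The ranking loss over the Siamese-triplet outputs can be written as a standard triplet margin objective: the tracked patch should be closer to the anchor patch than a random patch is. A minimal PyTorch sketch follows; the cosine distance, the 0.5 margin, the batch size, and the 128-d embeddings are assumptions of this sketch, not necessarily the paper's exact settings:

```python
import torch
import torch.nn.functional as F

def ranking_loss(anchor, positive, negative, margin=0.5):
    """Triplet ranking loss: the tracked patch (positive) should be closer
    to the anchor (first-frame patch) than a random patch (negative)."""
    d_pos = 1.0 - F.cosine_similarity(anchor, positive)  # distance to tracked patch
    d_neg = 1.0 - F.cosine_similarity(anchor, negative)  # distance to random patch
    return F.relu(d_pos - d_neg + margin).mean()

if __name__ == "__main__":
    # Random features stand in for CNN embeddings of image patches.
    def feat():
        return torch.randn(8, 128)  # batch of 8 hypothetical 128-d embeddings
    print(ranking_loss(feat(), feat(), feat()).item())
```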
DeepRebirth: Accelerating Deep Neural Network Execution on Mobile Devices
Deploying deep neural networks on mobile devices is a challenging task.
Current model compression methods such as matrix decomposition effectively
reduce the deployed model size, but still cannot satisfy real-time processing
requirements. This paper first identifies that the major obstacle is the
excessive execution time of non-tensor layers such as pooling and normalization,
which have no tensor-like trainable parameters. This motivates us to design a novel
acceleration framework: DeepRebirth through "slimming" existing consecutive and
parallel non-tensor and tensor layers. The layer slimming is executed at
different substructures: (a) streamline slimming, which merges consecutive
non-tensor and tensor layers vertically; (b) branch slimming, which merges
non-tensor and tensor branches horizontally. The proposed optimization
operations significantly accelerate the model execution and also greatly reduce
the run-time memory cost, since the slimmed architecture contains fewer
hidden layers. To minimize accuracy loss, the parameters in the newly
generated layers are learned with layer-wise fine-tuning, based on both
theoretical analysis and empirical verification. In our experiments,
DeepRebirth achieves more than 3x speed-up and 2.5x run-time memory saving on
GoogLeNet with only a 0.4% drop in top-5 accuracy on ImageNet. Furthermore, by
combining with other model compression techniques, DeepRebirth offers an
average of 65ms inference time on the CPU of Samsung Galaxy S6 with 86.5% top-5
accuracy, 14% faster than SqueezeNet, which only reaches a top-5 accuracy of 80.5%.
Comment: AAAI 2018
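For intuition on what "streamline slimming" (merging a consecutive non-tensor layer into a tensor layer) can look like, here is a minimal PyTorch sketch of the classic special case: folding an inference-mode BatchNorm into the preceding convolution, so the merged model runs a single layer. This is the standard folding identity, offered only as an analogy; DeepRebirth's own slimming covers more layer types and relearns the merged parameters with layer-wise fine-tuning:

```python
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Merge a BatchNorm2d into the preceding Conv2d (inference only):
    y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
      = conv'(x) with per-channel rescaled weights and a new bias."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = bn.bias.data + scale * (bias - bn.running_mean)
    return fused

if __name__ == "__main__":
    conv, bn = nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16)
    # Randomize BN parameters and running stats so the check is non-trivial.
    bn.weight.data.uniform_(0.5, 1.5); bn.bias.data.uniform_(-0.5, 0.5)
    bn.running_mean.uniform_(-1, 1);   bn.running_var.uniform_(0.5, 1.5)
    bn.eval()
    x = torch.randn(1, 3, 8, 8)
    with torch.no_grad():
        two_layers = bn(conv(x))
        one_layer = fold_bn_into_conv(conv, bn)(x)
    print(torch.allclose(two_layers, one_layer, atol=1e-5))  # True
```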