(A) The prediction accuracy matrix of the trained deep networks, estimated over all images in the dataset. To increase the complexity of the training and testing procedure, we expressed each construct for different time periods, and then trained and tested the deep networks on all of these datasets. Each row corresponds to a separate network trained solely on the given dataset. Columns show the average pixel-wise prediction accuracy, under the assumption that every pixel picked by the network in an image should belong to the protein with which the cells were transfected. The reported accuracy values may therefore also reflect misexpressed proteins, weak fluorescence signals, and imaging noise. (B) From left to right, first column: merged channels (405 nm/CH1, 488 nm/CH2, 561 nm/CH3, 633 nm/CH4), before processing by the network. Second column: images produced by assigning false colors to bright pixels, assuming that all proteins in the image exactly match the given nanobarcode. Third column: output of the deep network, with each pixel given the false color of the protein picked by the network. Colors are scaled based on class probabilities (Fig 2). Fourth column: false-color output of the network overlaid on the gray “cell halos” produced from the brightfield images. Brightfield images were processed to remove noise and background gradients and to enhance contrast. (C, D) As (A) and (B), for additional nanobarcode proteins. The data underlying this Figure are available as file “FigS20_AC.xlsx” from http://dx.doi.org/10.17169/refubium-40101. (TIFF)
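The per-class accuracy described in (A) can be tabulated as follows. This is a minimal sketch, not the authors' code: the function name, the input format (one array of predicted class labels per image, plus the known transfected class per image), and the averaging over images are assumptions consistent with the caption's description, where every picked pixel is scored against the transfected protein.

```python
import numpy as np

def pixelwise_accuracy_matrix(picked_labels, transfected_class, n_classes):
    """Tabulate a class-wise prediction matrix (hypothetical helper).

    picked_labels: list of 1-D int arrays, one per image, giving the class
        the network assigned to each bright pixel it picked.
    transfected_class: list of ints, the known transfected protein per image.

    Returns an (n_classes, n_classes) array: row c is the average, over
    images transfected with protein c, of the fraction of picked pixels
    assigned to each class. The diagonal is then the pixel-wise accuracy,
    since all picked pixels in an image are assumed to belong to the
    transfected protein.
    """
    mat = np.zeros((n_classes, n_classes))
    counts = np.zeros(n_classes)
    for labels, c in zip(picked_labels, transfected_class):
        # Fraction of this image's picked pixels assigned to each class.
        fractions = np.bincount(labels, minlength=n_classes) / len(labels)
        mat[c] += fractions
        counts[c] += 1
    # Average per transfected class (rows with no images would yield NaN).
    return mat / counts[:, None]
```

Off-diagonal entries of such a matrix capture the misassignments that, per the caption, may stem from misexpression, weak signals, or imaging noise rather than from the network alone.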
