122,404 research outputs found

    Strings in Singular Time-Dependent Backgrounds

    We review the construction of time-dependent backgrounds with space-like singularities, mainly exact CFT backgrounds. The algebraic and geometric aspects of these backgrounds are discussed, and the physical issues, results, and difficulties associated with such systems are reviewed. Finally, we present some new results: a two-dimensional cosmology in the presence of an Abelian gauge field, described within a family of (SL(2)xU(1))/(U(1)xZ) quotient CFTs.
    Comment: 22 pages, 4 figures, contribution to the proceedings of Symposium Ahrenshoop, August 200

    Reclaiming Refugee Rights as Human Rights

    On April 5, 2019, PILR held its triennial symposium, Revisiting Human Rights: The Universal Declaration at 70. Following the event, several panelists composed contribution pieces reflecting on the topic.

    Analyzing and clustering neural data

    This thesis aims to analyze neural data as part of an overall effort by the Charles Stark Draper Laboratory to determine an underlying pattern in brain activity in healthy individuals versus patients with a degenerative brain disorder. The neural data comes from ECoG (electrocorticography) applied to either humans or primates. Each ECoG array has electrodes that measure voltage variations, which neuroscientists hold correlate with neurons transmitting signals to one another. ECoG differs from the less invasive technique of EEG (electroencephalography) in that EEG electrodes are placed on a patient's scalp, while ECoG involves drilling small holes in the skull to allow electrodes to sit closer to the brain. Because of this, ECoG boasts an exceptionally high signal-to-noise ratio and less susceptibility to artifacts than EEG [6]. While wearing the ECoG caps, the patients are asked to perform a range of tasks, partitioned into different levels of mental stress, i.e., how much concentration is presumably required. The specific dataset used in this thesis is derived from cognitive behavior experiments performed on primates at MGH (Massachusetts General Hospital).

    The content of this thesis can be thought of as a pipelined process: first the data is collected from the ECoG electrodes, then the data is pre-processed via signal processing techniques, and finally the data is clustered via unsupervised learning techniques. For both the pre-processing and the clustering steps, different techniques are applied and then compared against one another. The focus of this thesis is to evaluate clustering techniques when applied to neural data. For the pre-processing step, two types of bandpass filters, a Butterworth filter and a Chebyshev filter, were applied. For the clustering step, three techniques were applied to the data: K-means Clustering, Spectral Clustering, and Self-Tuning Spectral Clustering. We conclude that for pre-processing the results from both filters are very similar, so either filter is sufficient. For clustering, we conclude that K-means has the least overlap between clusters; it is also the most time-efficient of the three techniques and is thus the ideal choice for this application.
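    As a rough illustration of the kind of pipeline this abstract describes, here is a minimal Python sketch using SciPy and scikit-learn: a Butterworth or Chebyshev bandpass filter followed by K-means and spectral clustering. The sampling rate, passband, filter orders, features, and cluster counts are illustrative assumptions, not values taken from the thesis.

# Hypothetical sketch of a bandpass-filter-then-cluster pipeline.
# All parameter values below are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, cheby1, sosfiltfilt
from sklearn.cluster import KMeans, SpectralClustering

fs = 1000.0                     # assumed sampling rate (Hz)
band = (8.0, 30.0)              # assumed passband (Hz)

# Stand-in for ECoG recordings: 64 channels x 10 s of samples.
rng = np.random.default_rng(0)
signals = rng.standard_normal((64, int(10 * fs)))

# Pre-processing: two alternative bandpass filters, applied zero-phase.
sos_butter = butter(4, band, btype="bandpass", fs=fs, output="sos")
sos_cheby = cheby1(4, rp=1.0, Wn=band, btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos_butter, signals, axis=1)  # swap in sos_cheby to compare

# Simple per-channel feature: log mean band power.
features = np.log(np.mean(filtered ** 2, axis=1, keepdims=True))

# Clustering: compare K-means against spectral clustering on the same features.
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
spectral_labels = SpectralClustering(
    n_clusters=3, affinity="nearest_neighbors", n_neighbors=10, random_state=0
).fit_predict(features)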

    Deep learning with asymmetric connections and Hebbian updates

    We show that deep networks can be trained using Hebbian updates, yielding performance similar to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers that is implicit in back-propagation, the feedback weights are kept separate from the feedforward weights. The feedback weights are also updated with a local rule, the same as for the feedforward weights: a weight is updated solely based on the product of the activities of the units it connects. With fixed feedback weights, as proposed in Lillicrap et al. (2016), performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain the same throughout training, thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, so that the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with weights tied across space, which is not biologically plausible. We show that similar performance is achieved with locally connected layers, which have the connectivity implied by the convolutional layers but with weights that are untied and updated separately. In the linear case, we show theoretically that the update of the feedback weights accelerates the convergence of the error to zero.
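    As a rough illustration of the update rule this abstract describes, here is a minimal NumPy sketch of a two-layer network in which the feedback weights are separate from the feedforward weights and both are updated with the same local product-of-activities rule. The architecture, task, and learning rate are illustrative assumptions; this is not the paper's exact training setup.

# Hypothetical sketch: feedback alignment with Hebbian-updated feedback weights.
# Network size, learning rate, and the toy regression task are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 50, 5
lr = 0.01

W1 = rng.standard_normal((n_hid, n_in)) * 0.1   # feedforward, layer 1
W2 = rng.standard_normal((n_out, n_hid)) * 0.1  # feedforward, layer 2
B2 = rng.standard_normal((n_hid, n_out)) * 0.1  # feedback weights, random init

def relu(x):
    return np.maximum(x, 0.0)

for step in range(1000):
    x = rng.standard_normal(n_in)
    target = rng.standard_normal(n_out)   # toy regression target

    # Forward pass.
    h = relu(W1 @ x)
    y = W2 @ h

    # Output error, propagated through the separate feedback weights B2
    # rather than through W2.T as exact back-propagation would require.
    e_out = target - y
    e_hid = (B2 @ e_out) * (h > 0)

    # Local updates: each weight change is a product of the activities of
    # the two units it connects (a Hebbian-style rule).
    W2 += lr * np.outer(e_out, h)
    W1 += lr * np.outer(e_hid, x)
    # Feedback weights get the same local rule; note this mirrors the W2
    # update, so if B2 is initialized to W2.T they stay equal, recovering
    # exact back-propagation as the abstract notes.
    B2 += lr * np.outer(h, e_out)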