18 research outputs found

    A Simple Method to Simultaneously Detect and Identify Spikes from Raw Extracellular Recordings

    The ability to track, efficiently and reliably, when and which neurons fire in the vicinity of an electrode could revolutionize the neuroscience field. The current bottleneck lies in spike sorting algorithms: existing methods for detecting and discriminating the activity of multiple neurons rely on inefficient, multi-step processing of extracellular recordings. In this work, we show that a single-step processing of raw (unfiltered) extracellular signals is sufficient for both the detection and identification of active neurons, thus greatly simplifying and optimizing the spike sorting approach. The efficiency and reliability of our method are demonstrated on both real and simulated data.

    VERITE: A Robust Benchmark for Multimodal Misinformation Detection Accounting for Unimodal Bias

    Multimedia content has become ubiquitous on social media platforms, leading to the rise of multimodal misinformation (MM) and the urgent need for effective strategies to detect and prevent its spread. In recent years, the challenge of multimodal misinformation detection (MMD) has garnered significant attention from researchers and has mainly involved the creation of annotated, weakly annotated, or synthetically generated training datasets, along with the development of various deep learning MMD models. However, the problem of unimodal bias in MMD benchmarks -- where biased or unimodal methods outperform their multimodal counterparts on an inherently multimodal task -- has been overlooked. In this study, we systematically investigate and identify the presence of unimodal bias in widely-used MMD benchmarks (VMU-Twitter, COSMOS), raising concerns about their suitability for reliable evaluation. To address this issue, we introduce the "VERification of Image-TExt pairs" (VERITE) benchmark for MMD, which incorporates real-world data, excludes "asymmetric multimodal misinformation", and utilizes "modality balancing". We conduct an extensive comparative study with a Transformer-based architecture that shows the ability of VERITE to effectively address unimodal bias, rendering it a robust evaluation framework for MMD. Furthermore, we introduce a new method -- termed Crossmodal HArd Synthetic MisAlignment (CHASMA) -- for generating realistic synthetic training data that preserve crossmodal relations between legitimate images and false human-written captions. By leveraging CHASMA in the training process, we observe consistent and notable improvements in predictive performance on VERITE, with a 9.2% increase in accuracy. We release our code at: https://github.com/stevejpapad/image-text-verificatio

    RED-DOT: Multimodal Fact-checking via Relevant Evidence Detection

    Online misinformation is often multimodal in nature, i.e., it is caused by misleading associations between texts and accompanying images. To support the fact-checking process, researchers have recently been developing automatic multimodal methods that gather and analyze external information (evidence) related to the image-text pairs under examination. However, prior works assumed all external information collected from the web to be relevant. In this study, we introduce a "Relevant Evidence Detection" (RED) module to discern whether each piece of evidence is relevant to supporting or refuting the claim. Specifically, we develop the "Relevant Evidence Detection Directed Transformer" (RED-DOT) and explore multiple architectural variants (e.g., single or dual-stage) and mechanisms (e.g., "guided attention"). Extensive ablation and comparative experiments demonstrate that RED-DOT achieves significant improvements over the state-of-the-art (SotA) on the VERITE benchmark by up to 33.7%. Furthermore, our evidence re-ranking and element-wise modality fusion led to RED-DOT surpassing the SotA on NewsCLIPings+ by up to 3% without the need for numerous evidence items or multiple backbone encoders. We release our code at: https://github.com/stevejpapad/relevant-evidence-detectio

    Intra- and inter-cluster inhibition in the Dentate Gyrus.

    <p>In intra-cluster inhibition (first column of GCs), the most excited GC excites an interneuron, which projects back to inhibit the other GCs within the same cluster. The same mechanism holds for the inter-cluster inhibition mediated by MCs. (MC: Mossy Cells, GC: Granule Cells, INT: Interneurons).</p>

    Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms

    <div><p>Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2–4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG, such as its excitatory and inhibitory connectivity diagram, can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation refers to the algorithmic identification of a few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG.</p></div>
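    The baseline algorithm the paper builds on, Iterative Soft Thresholding, admits a compact implementation. The sketch below is a minimal generic IST (not the authors' DG-IST code; the function names, step-size choice, and iteration count are assumptions), solving min over x of ||y − Ax||² + λ||x||₁:

    ```python
    import numpy as np

    def soft_threshold(v, t):
        # Elementwise soft-thresholding: shrink each entry toward zero by t.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ist(A, y, lam, n_iter=1000):
        # Plain Iterative Soft Thresholding for min ||y - Ax||^2 + lam*||x||_1.
        # Step size 1/alpha with alpha >= largest eigenvalue of A^T A,
        # which guarantees convergence of the iteration.
        alpha = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            # Gradient step on the data term, then shrinkage on the l1 term.
            x = soft_threshold(x + A.T @ (y - A @ x) / alpha, lam / (2 * alpha))
        return x
    ```

    The DG-inspired variants described above modify the quantity inside the threshold with lateral-inhibition terms; the baseline shown here is the common starting point.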

    Evolution of approximation for purple-highlighted elements in Fig. 2B, using DG-IST with d = opt = 171 (left panel) and DG-IST with d = inf (right panel).

    <p>First row of each panel shows the approximation evolution of the two elements; black horizontal lines indicate the original elements to be approximated, and vertical pink lines (left panel only) mark the iterations at which elimination of inhibition takes place for the second-largest element in the corresponding column of <i>x<sup>m</sup></i>. The second row of each panel illustrates the Input to each GC through the iterative process, <i>input</i> = <i>κ</i> ⋅ [(<i>A</i><sup>T</sup>(<i>y</i> − <i>Ax</i><sub>i</sub><sup>m</sup>))<sup>m</sup> − <i>I<sub>s</sub></i> − <i>M<sub>s</sub></i>] = <i>κ</i> ⋅ (Error + INT + MC). The Error, MC, and INT values that add up to form the Input value are shown in the remaining rows of each panel.</p>
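    In code, the plotted decomposition of the GC input is straightforward. A minimal sketch (hypothetical function; plain vector shapes are assumed and the matrix reshaping denoted by the superscript m is omitted, with I_s and M_s standing in for the interneuron- and mossy-cell-mediated inhibition terms):

    ```python
    import numpy as np

    def gc_input(A, y, x, I_s, M_s, kappa=1.0):
        # Input to each granule cell at one iteration:
        #   Error = A^T (y - A x)  -> data-driven excitation
        #   INT   = -I_s           -> interneuron-mediated (intra-cluster) inhibition
        #   MC    = -M_s           -> mossy-cell-mediated (inter-cluster) inhibition
        error = A.T @ (y - A @ x)
        return kappa * (error - I_s - M_s)
    ```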

    DG-IST without INT- or MC- mediated inhibition.

    <p>(A) Mean MSE of 100 instances of <i>x</i> vectors with: <i>N</i> = 1000, <i>a</i> = 2%, <i>M</i> ≥ <i>a</i> ⋅ log(<i>N</i>/<i>a</i>) estimated using IST (blue), DG-IST with <i>d</i> = 96 (red), DG-IST with <i>d</i> = 96 without INT-mediated inhibition (green), and DG-IST with <i>d</i> = 96 without MC-mediated inhibition (black) (Inset: magnification of the last 100 iterations). (B) Boxplots of the MSE of 100 instances of <i>x</i> vectors with: <i>N</i> = 1000, <i>a</i> = 2%, <i>M</i> ≥ <i>a</i> ⋅ log(<i>N</i>/<i>a</i>) estimated using IST, DG-IST with <i>d</i> = 96, DG-IST with <i>d</i> = 96 without INT-mediated inhibition, and DG-IST with <i>d</i> = 96 without MC-mediated inhibition. (C) MSE of a specific instance of vector <i>x</i> with: <i>N</i> = 1000, <i>a</i> = 2%, <i>M</i> ≥ <i>a</i> ⋅ log(<i>N</i>/<i>a</i>) estimated using IST (blue), DG-IST with <i>d</i> = 96 (red), DG-IST with <i>d</i> = 96 without INT-mediated inhibition (green), and DG-IST with <i>d</i> = 96 without MC-mediated inhibition (black).</p>

    DG-IST performance.

    <p>(A) MSE vs. Iterations for IST (blue), DG-IST with d = inf (green), DG-IST with d = opt = 171 (red), and DG-IST with d = 96 (cyan). (B) Sparse approximation of vector <i>x</i> by DG-IST with d = opt = 171 (red) and DG-IST with d = inf (green). Purple and orange highlighted stems correspond to elements within the same column and row of matrix <i>x<sup>m</sup></i>, respectively. The brown highlighted stem corresponds to a non-zero element that belongs to a row and a column with no other non-zero elements. (C) Sparse approximation of vector <i>x</i> by DG-IST with d = opt = 171 (red) and IST (blue). (D) Sparse approximation of vector <i>x</i> by DG-IST with d = opt = 171 (red) and d = 96 (cyan).</p>

    Majorization-Minimization by IST and DG-IST.

    <p>(A) The <i>J</i>(<i>x</i>) function to be minimized. (B) The <i>G</i>(<i>x</i>) function (yellow surface), where <i>G</i>(<i>x</i>) ≥ <i>J</i>(<i>x</i>) ∀ <i>x</i> and <i>G</i>(<i>x<sub>k</sub></i>) = <i>J</i>(<i>x<sub>k</sub></i>), IST algorithm. (C) <i>G</i>(<i>x</i>) ≥ <i>J</i>(<i>x</i>) ∀ <i>x</i><sub>2</sub> with <i>x</i><sub>1</sub> = <i>const</i> and <i>G</i>(<i>x<sub>k</sub></i>) = <i>J</i>(<i>x<sub>k</sub></i>), DG-IST algorithm. (D) <i>x<sub>k</sub></i> is the initialization point of vector <i>x</i> (black arrow) and the blue arrow indicates <i>x</i>′ = (<i>x</i>′<sub>1</sub>, <i>x</i>′<sub>2</sub>) where <i>G</i>(<i>x</i>) is minimized, IST algorithm. (E) <i>x<sub>k</sub></i> is the initialization point of vector <i>x</i> (black arrow, same as in (D)) and the blue arrow indicates <i>x</i>″ = (<i>x</i>″<sub>1</sub>, <i>x</i>″<sub>2</sub>) where <i>G</i>(<i>x</i>) is minimized, DG-IST algorithm. Notice that <i>x</i>″<sub>2</sub> &lt; <i>x</i>′<sub>2</sub>, thus closer to the point where <i>J</i>(<i>x</i>) is minimized (see (A)).</p>
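    For context, the standard surrogate used to derive IST (a textbook majorization-minimization construction, not reproduced from this paper) majorizes J with a quadratic that is tight at the current iterate:

    ```latex
    J(x) = \|y - Ax\|_2^2 + \lambda \|x\|_1, \qquad
    G(x) = J(x) + (x - x_k)^{\top}\bigl(\alpha I - A^{\top}A\bigr)(x - x_k),
    ```

    with \(\alpha \ge \lambda_{\max}(A^{\top}A)\), so that \(G(x) \ge J(x)\) for all \(x\) and \(G(x_k) = J(x_k)\). Minimizing \(G\) decouples coordinate-wise and yields the soft-thresholding update

    ```latex
    x_{k+1} = \operatorname{soft}\!\left(x_k + \tfrac{1}{\alpha} A^{\top}(y - Ax_k),\; \tfrac{\lambda}{2\alpha}\right).
    ```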

    Evolution of approximation for purple-highlighted elements in Fig. 2B, using DG-IST with d = opt = 171 (left panel) and IST (right panel).

    <p>First row of each panel shows the evolution of the two elements; black horizontal lines indicate the original elements to be approximated, and vertical pink lines (left panel only) mark the iterations at which elimination of inhibition takes place for the second-largest element in the corresponding column of matrix <i>x<sup>m</sup></i>. The second row of each panel illustrates the Input to each GC through the iterative process, <i>input</i> = <i>κ</i> ⋅ [(<i>A</i><sup>T</sup>(<i>y</i> − <i>Ax</i><sub>i</sub><sup>m</sup>))<sup>m</sup> − <i>I<sub>s</sub></i> − <i>M<sub>s</sub></i>] = <i>κ</i> ⋅ (Error + INT + MC). The Error, MC, and INT values that add up to form the Input value are shown in the remaining rows of each panel.</p>