
    The hardness of the independence and matching clutter of a graph

    A {\it clutter} (or {\it antichain}, or {\it Sperner family}) $L$ is a pair $(V,E)$, where $V$ is a finite set and $E$ is a family of subsets of $V$ none of which is a subset of another. Usually, the elements of $V$ are called {\it vertices} of $L$, and the elements of $E$ are called {\it edges} of $L$. A subset $s_e$ of an edge $e$ of a clutter is called {\it recognizing} for $e$ if $s_e$ is not a subset of another edge. The {\it hardness} of an edge $e$ of a clutter is the ratio of the size of $e$'s smallest recognizing subset to the size of $e$. The hardness of a clutter is the maximum hardness of its edges. We study the hardness of clutters arising from independent sets and matchings of graphs. Comment: 23 pages, 5 figures
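    The definitions above are directly computable for small clutters. As a minimal sketch (the function names and the brute-force strategy are my own, not from the paper): for each edge $e$, search subsets of $e$ in order of increasing size until one is contained in no other edge; that subset's size divided by $|e|$ is the edge's hardness, and the clutter's hardness is the maximum over edges.

    ```python
    from itertools import combinations

    def edge_hardness(e, edges):
        """Hardness of edge e: |smallest recognizing subset of e| / |e|.

        A subset s of e is recognizing for e if s is not a subset
        of any other edge of the clutter.
        """
        others = [f for f in edges if f != e]
        for k in range(1, len(e) + 1):
            for s in combinations(sorted(e), k):
                if not any(set(s) <= f for f in others):
                    return k / len(e)
        # Unreachable for an antichain: e itself always recognizes e.
        return 1.0

    def clutter_hardness(edges):
        """Maximum edge hardness over all edges of the clutter."""
        edges = [frozenset(e) for e in edges]
        return max(edge_hardness(e, edges) for e in edges)
    ```

    For example, the clutter with edges $\{1,2\}$ and $\{3,4\}$ has hardness $1/2$ (a single vertex already recognizes each edge), while the clutter $\{1,2\}, \{2,3\}, \{1,3\}$ has hardness $1$, since every single vertex of an edge also lies in another edge.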

    Deep Lake: a Lakehouse for Deep Learning

    Traditional data lakes provide critical data infrastructure for analytical workloads by enabling time travel, running SQL queries, ingesting data with ACID transactions, and visualizing petabyte-scale datasets on cloud storage. They allow organizations to break down data silos, unlock data-driven decision-making, improve operational efficiency, and reduce costs. However, as deep learning takes over common analytical workflows, traditional data lakes become less useful for applications such as natural language processing (NLP), audio processing, computer vision, and applications involving non-tabular datasets. This paper presents Deep Lake, an open-source lakehouse for deep learning applications developed at Activeloop. Deep Lake retains the benefits of a vanilla data lake with one key difference: it stores complex data, such as images, videos, and annotations, as well as tabular data, in the form of tensors and rapidly streams the data over the network to (a) the Tensor Query Language, (b) an in-browser visualization engine, or (c) deep learning frameworks without sacrificing GPU utilization. Datasets stored in Deep Lake can be accessed from PyTorch, TensorFlow, and JAX, and integrate with numerous MLOps tools.