    Probing to Minimize

    We develop approximation algorithms for set-selection problems with deterministic constraints but random objective values, i.e., stochastic probing problems. When the goal is to maximize the objective, approximation algorithms for probing problems are well studied. In contrast, few techniques are known for minimizing the objective, especially in the adaptive setting, where information about the random objective is revealed during the set-selection process and allowed to influence it. For minimization problems in particular, incorporating adaptivity can have a considerable effect on performance. In this work, we seek approximation algorithms that compare well to the optimal adaptive policy. We develop new techniques for adaptive minimization and apply them to several problems of interest. The core technique is an approximate reduction from an adaptive expectation-minimization problem to a set of adaptive probability-minimization problems, which we call threshold problems. By providing near-optimal solutions to these threshold problems, we obtain bicriteria adaptive policies. We apply this method to obtain an adaptive approximation algorithm for the Min-Element problem, where the goal is to adaptively pick random variables to minimize the expected minimum value seen among them, subject to a knapsack constraint. This partially resolves an open problem raised in [Goel et al., 2010]. We further consider three extensions of the Min-Element problem, in which the objective is the sum of the smallest k element weights, or the weight of the minimum-weight basis of a given matroid, or the constraint is a matroid rather than a knapsack. For all three variations, we develop adaptive approximation algorithms for the corresponding threshold problems and prove their near-optimality via coupling arguments.
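    As a rough sketch of the kind of threshold reduction described above (the paper's exact formulation may differ), the Min-Element objective can be rewritten with the standard tail-integral identity for nonnegative random variables, so that controlling each tail probability controls the expectation:

```latex
% For a nonnegative random variable Y, E[Y] = \int_0^\infty \Pr[Y > \tau]\, d\tau.
% Applied to the minimum over a selected set S of probed variables X_i:
\[
  \mathbb{E}\Big[\min_{i \in S} X_i\Big]
    \;=\; \int_{0}^{\infty} \Pr\Big[\min_{i \in S} X_i > \tau\Big]\, d\tau
    \;=\; \int_{0}^{\infty} \Pr\big[X_i > \tau \ \text{for all } i \in S\big]\, d\tau .
\]
% Each fixed threshold \tau yields an adaptive probability-minimization
% ("threshold") problem: choose variables so that it is unlikely that every
% observed value exceeds \tau, subject to the knapsack constraint.
```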

    The Mouse Action Recognition System (MARS): a software pipeline for automated analysis of social behaviors in mice

    The study of social behavior requires scoring the animals' interactions. This is generally done by hand: a time-consuming, subjective, and expensive process. Recent advances in computer vision enable tracking the pose (posture) of freely behaving laboratory animals automatically. However, classifying complex social behaviors such as mounting and attack remains technically challenging. Furthermore, the extent to which expert annotators, possibly from different labs, agree on the definitions of these behaviors varies. There is a shortage in the neuroscience community of benchmark datasets that can be used to evaluate the performance and reliability of both pose estimation tools and manual and automated behavior scoring. We introduce the Mouse Action Recognition System (MARS), an automated pipeline for pose estimation and behavior quantification in pairs of freely behaving mice. We compare MARS's annotations to human annotations and find that MARS's pose estimation and behavior classification achieve human-level performance. As a by-product, we characterize the inter-expert variability in behavior scoring. The two novel datasets used to train MARS were collected from ongoing experiments in social behavior and reveal the main sources of disagreement between annotators. They comprise 30,000 frames of manually annotated mouse poses and over 14 hours of manually annotated behavioral recordings in a variety of experimental preparations. We are releasing these datasets alongside MARS to serve as community benchmarks for pose estimation and behavior classification systems. Finally, we introduce the Behavior Ensemble and Neural Trajectory Observatory (Bento), a graphical interface that allows users to quickly browse, annotate, and analyze datasets including behavior videos, pose estimates, behavior annotations, audio, and neural recording data. We demonstrate the utility of MARS and Bento in two use cases: a high-throughput behavioral phenotyping study, and exploration of a novel imaging dataset. Together, MARS and Bento provide an end-to-end pipeline for behavior data extraction and analysis, in a package that is user-friendly and easily modifiable.

    The Mouse Action Recognition System (MARS): multi-worker behavior annotation data

    This dataset was collected to quantify inter-annotator variability in manual labeling of social behaviors in freely interacting mice. We provided a group of graduate students, postdocs, and technicians in the Anderson lab with written descriptions of close investigation, mounting, and attack behaviors (included below), and instructed them to score a set of ten resident-intruder videos, all taken from unoperated mice. Annotators were given front- and top-view video of social interactions, and scored behavior using either Bento or the Caltech Behavior Annotator, both of which support simultaneous display of front- and top-view video and frame-by-frame browsing and scoring. All but one annotator (Human 4) had previous experience scoring mouse behavior videos; Human 4 had previous experience scoring similar social behaviors in flies

    The Mouse Action Recognition System (MARS): behavior annotation data

    The study of naturalistic social behavior requires quantification of animals' interactions. This is generally done through manual annotation, a highly time-consuming and tedious process. Recent advances in computer vision enable tracking the pose (posture) of freely behaving animals. However, automatically and accurately classifying complex social behaviors remains technically challenging. We recently introduced the Mouse Action Recognition System (MARS), an automated pipeline for pose estimation and behavior quantification in pairs of freely interacting mice (Segalin et al., 2020). This dataset includes the training and test sets used to train MARS's supervised behavior classifiers for three social behaviors of interest: close investigation, mounting, and attack. Included in this dataset are pose estimates, extracted pose features, and frame-by-frame manual annotations from video recordings of pairs of mice freely interacting in a standard home cage.

    Data Acquisition. Experimental mice ("residents") were transported in their homecage (with cagemates removed) to a behavioral testing room and acclimatized for 5-15 minutes. Homecages were then inserted into a custom-built hardware setup (Hong et al., 2015) with infrared video captured at 30 fps from top- and front-view cameras (Point Grey Grasshopper3), recorded at 1024×570 (top) and 1280×500 (front) pixel resolution using StreamPix video software (NorPix). Following two further minutes of acclimatization, an unfamiliar group-housed male or female BALB/c mouse ("intruder") was introduced to the cage, and the animals were allowed to interact freely for approximately 10 minutes. BALB/c mice are used as intruders for their white coat color (simplifying identity tracking) and their relatively submissive behavior, which reduces the likelihood of intruder-initiated aggression. In some videos, mice are implanted with a cranial cannula, or with a head-mounted miniaturized microscope (nVista, Inscopix) or optical fiber for optogenetics or fiber photometry, attached to a cable of varying color and thickness. Surgical procedures for these implantations can be found in (Karigo et al., 2020). The raw behavior videos are currently in preparation for upload and will be added to this repository shortly.

    Behavior Annotation. Behaviors were annotated on a frame-by-frame basis by a trained human expert in the Anderson lab. Annotators were provided with simultaneous top- and front-view video of interacting mice, and scored every video frame for close investigation, attack, and mounting (for full criteria see Methods of Segalin et al., 2020). In some videos, additional behaviors were also annotated; when this occurred, these behaviors were assigned to one of close investigation, attack, mounting, or "other" for the purpose of training classifiers. Annotation was performed either in BENTO (Segalin et al., 2020) or using a previously developed custom Matlab interface.

    Pose Estimation. The poses of mice in top-view recordings are estimated using the Mouse Action Recognition System (MARS; Segalin et al., 2020), a computer vision tool that identifies seven anatomically defined keypoints on the body of each mouse: the nose, ears, base of neck, hips, and base of tail. For details on the pose estimation process, please refer to the MARS manuscript. Note that although front-view video was acquired, pose information from the front view was not included in this dataset, as it was not found to improve MARS classifier performance. This is likely due to the poor quality of front-view pose estimates, caused by the high occurrence of occlusion while mice are interacting.

    Pose Feature Extraction. To facilitate behavior classification, a large set of features is computed from the poses of the two interacting mice, capturing the animals' relative positions, velocities, distances to cage walls, and other socially informative features. For each feature, we furthermore compute its mean, standard deviation, maximum, and minimum value within windows of +/-1, 5, and 10 frames of the current frame, to capture changes in feature values over time. A full list of features is included in the MARS manuscript; code to compute features from animals' poses can be found in the MARS GitHub repository.

    Related publication: Segalin C, Williams J, Karigo T, Hui M, Zelikowsky M, Sun JJ, Perona P, Anderson DJ, Kennedy A. The Mouse Action Recognition System (MARS): a software pipeline for automated analysis of social behaviors in mice. bioRxiv (2020). https://doi.org/10.1101/2020.07.26.222299
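    As a rough illustration of the windowed-statistics step in the Pose Feature Extraction paragraph above (not MARS's actual implementation; the function and feature names here are hypothetical), one might compute per-frame summary statistics like this:

```python
import numpy as np

def windowed_stats(feature, half_windows=(1, 5, 10)):
    """Illustrative sketch: for one per-frame pose feature, compute the mean,
    standard deviation, max, and min within +/- w frames of each frame,
    for several window half-widths w.

    feature: 1-D array with one value per video frame.
    Returns an array of shape (n_frames, 4 * len(half_windows)).
    """
    n = len(feature)
    out = []
    for w in half_windows:
        stats = np.empty((n, 4))
        for t in range(n):
            window = feature[max(0, t - w):min(n, t + w + 1)]
            stats[t] = [window.mean(), window.std(), window.max(), window.min()]
        out.append(stats)
    return np.hstack(out)

# Example: a hypothetical nose-to-nose distance trace, one value per frame.
nose_distance = np.random.rand(300)        # placeholder trace, 300 frames
features = windowed_stats(nose_distance)   # shape (300, 12)
```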

    The Mouse Action Recognition System (MARS): pose annotation data

    The study of naturalistic social behavior requires quantification of animals' interactions. This is generally done through manual annotation, a highly time-consuming and tedious process. Recent advances in computer vision enable tracking the pose (posture) of freely behaving animals. However, automatically and accurately classifying complex social behaviors remains technically challenging. We recently introduced the Mouse Action Recognition System (MARS), an automated pipeline for pose estimation and behavior quantification in pairs of freely interacting mice (Segalin et al., 2020). This dataset includes the training, test, and validation sets used to train MARS's pose estimation network, which detects the pose of each of two mice as a set of anatomical keypoints. Included in the dataset are 15,000 pairs of top- and front-view frames from videos of pairs of interacting mice. Each frame has been manually annotated by five individuals for a total of nine (top-view) or thirteen (front-view) keypoints: the nose, ears, base of neck, hips, base of tail, middle of tail, and end of tail, plus the four paws (front view only).

    Data Acquisition. Experimental mice ("residents") were transported in their homecage (with cagemates removed) to a behavioral testing room and acclimatized for 5-15 minutes. Homecages were then inserted into a custom-built hardware setup (Hong et al., 2015) with infrared video captured at 30 fps from top- and front-view cameras (Point Grey Grasshopper3), recorded at 1024×570 (top) and 1280×500 (front) pixel resolution using StreamPix video software (NorPix). Following two further minutes of acclimatization, an unfamiliar group-housed male or female BALB/c mouse ("intruder") was introduced to the cage, and the animals were allowed to interact freely for approximately 10 minutes. BALB/c mice are used as intruders for their white coat color (simplifying identity tracking) and their relatively submissive behavior, which reduces the likelihood of intruder-initiated aggression. In some videos, mice are implanted with a cranial cannula, or with a head-mounted miniaturized microscope (nVista, Inscopix) or optical fiber for optogenetics or fiber photometry, attached to a cable of varying color and thickness. Surgical procedures for these implantations can be found in (Karigo et al., 2020). To create a dataset of video frames for labeling, we sampled 64 videos from several years of experimental projects. We extracted 15,000 individual frames each from the top- and front-view cameras, giving a total of 2,700,000 individual keypoint annotations (15,000 frames × (7 top-view + 11 front-view keypoints per mouse) × 2 mice × 5 annotators). 5,000 of the extracted frames included resident mice with a fiberoptic cable, cannula, or head-mounted microendoscope with cable.

    Pose Annotation. We defined nine anatomical keypoints in the top-view video (the nose, ears, base of neck, hips, and tail base, midpoint, and endpoint), and 13 keypoints in the front-view video (the top-view keypoints plus the four paws). The tail mid- and endpoint annotations were subsequently discarded for training of MARS; however, these annotations are still included in the "raw_keypoints" files in this dataset. We used the crowdsourcing platform Amazon Mechanical Turk (AMT) to obtain manual annotations of pose keypoints on the set of video frames. AMT workers were provided with written instructions and illustrated examples of each keypoint, and instructed to infer the location of occluded keypoints. To compensate for annotation noise, each keypoint was annotated by five AMT workers, and a "ground truth" location for that keypoint was defined as the median across annotators, computed separately in the x and y dimensions. Annotations of individual workers were also post-processed to correct for common mistakes, such as confusing the left and right sides of the animals. Another common worker error was to mistake the top of the head-mounted microendoscope for the resident animal's nose; we visually screened for these errors and corrected them manually.

    This dataset contains images and pose annotations ("keypoints") for top- and front-view cameras on 15,000 pairs of movie frames. Keypoints are provided for each camera in two formats:
    - MARS_raw_keypoints_(top/front).manifest contains annotations from each worker for each keypoint/image, in the raw ".manifest" format expected by MARS_Developer. It still includes the tail mid- and endpoint annotations and has not been corrected for common annotator mistakes such as left/right flipping of body parts. For each frame it contains the corresponding image filename; annotatedResult-metadata, some metadata about the GroundTruth labeling job (may be ignored); and annotatedResult, a dictionary containing annotations from each worker who labeled the image. Each worker's annotations are encoded as a JSON string in ['annotationsFromAllWorkers']['content'].
    - MARS_keypoints_(top/front).json contains the processed annotations used to train MARS: for each body part in each image, we take the median of keypoint coordinates in the x and y dimensions, after correcting for annotator errors. Keypoint locations are provided in pixels, rounded to the nearest tenth of a pixel. The width and height of each image are also provided, so that keypoints can be scaled to fractional values if needed.

    Related publication: Segalin C, Williams J, Karigo T, Hui M, Zelikowsky M, Sun JJ, Perona P, Anderson DJ, Kennedy A. The Mouse Action Recognition System (MARS): a software pipeline for automated analysis of social behaviors in mice. bioRxiv (2020). https://doi.org/10.1101/2020.07.26.222299
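    A minimal sketch of the "median across annotators" step described above (array names and shapes are hypothetical, not the actual MARS_Developer code):

```python
import numpy as np

# raw: keypoint annotations for one mouse in one frame from five AMT workers,
# shape (n_workers=5, n_keypoints, 2), each entry an (x, y) pixel coordinate.
raw = np.array([
    [[101.2,  55.0], [230.4, 180.1]],
    [[ 99.8,  54.2], [228.9, 182.3]],
    [[102.5,  56.1], [231.7, 179.5]],
    [[100.4,  53.8], [229.5, 181.0]],
    [[150.0, 120.0], [230.1, 180.7]],   # an outlier worker on keypoint 0
])

# Ground-truth location: median over workers, taken separately in x and y,
# then rounded to the nearest tenth of a pixel as in the released .json files.
ground_truth = np.round(np.median(raw, axis=0), 1)
print(ground_truth)   # robust to the outlier on keypoint 0
```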

    Mouse Action Recognition System (MARS) trained models, version 1.8

    This dataset contains trained neural network models for mouse detection and pose estimation, and trained classifiers for mouse behavior annotation. These model files accompany the MARS end-user software posted at https://github.com/neuroethology/MARS, for MARS version 1.8. To use these trained models, download the models.zip file and unzip it into MARS/mars_v1_8/models. See https://github.com/neuroethology/MARS for documentation on using these models from within MARS. We also include two short sample videos of interacting mice (in sample_videos.zip) that can be used to test your MARS installation.

    Related publication: Segalin C, Williams J, Karigo T, Hui M, Zelikowsky M, Sun JJ, Perona P, Anderson DJ, Kennedy A. The Mouse Action Recognition System (MARS): a software pipeline for automated analysis of social behaviors in mice. bioRxiv (2020). https://doi.org/10.1101/2020.07.26.222299

    Contact person: Ann Kennedy, [email protected]
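    One way to script the unzip step described above, using only the Python standard library (a sketch; adjust the target path if the archive already contains a top-level models folder):

```python
import zipfile

# Extract the trained-model archive into the location MARS v1.8 expects,
# per the installation note above.
with zipfile.ZipFile("models.zip") as archive:
    archive.extractall("MARS/mars_v1_8/models")
```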

    Reproductive biology and intergeneric breeding compatibility of ornamental Portulaca and Calandrinia (Portulacaceae)

    Portulaca grandiflora Hook. and P. umbraticola Kunth (Portulacaceae) are popular garden annuals and have been bred for improved ornamental value. However, limited research has been published on hybridisation of Portulaca, with no reports on intergeneric hybridisation. Calandrinia balonensis Lindley and Calandrinia sp. nov. (not yet fully classified) are floriferous Australian Portulacaceae species with potential as novel flowering pot plants, and are potential candidates for breeding with ornamental Portulaca. We studied the reproductive biology of these four species and breeding compatibility for reciprocal crosses of P. grandiflora × C. balonensis (2n = 18) and P. umbraticola × C. sp. nov. (2n = 24). All four species produced seeds for intraspecific outcrosses. P. grandiflora and C. sp. nov. are partially self-compatible, whereas P. umbraticola and C. balonensis are highly self-incompatible. Autogamy was detected only for P. grandiflora. Reciprocal crosses of P. grandiflora × C. balonensis and P. umbraticola × C. sp. nov., in which the parents have similar chromosome numbers, did not produce seeds, primarily because of pollen-pistil incompatibility that prevents pollen-tube growth within the stigmata. Methods to overcome hybridisation barriers in these species combinations need to be established to create novel products for ornamental horticulture.