Automatic Alerts Based On Departure From Routine
Research indicates that forgetting a task or item is often presaged by a change in routine. This disclosure describes techniques to automatically determine that a user may have forgotten a task or item. The determination is made by analyzing user-permitted data, such as calendar appointments, location, etc., that indicate a change in routine. Upon determining that the user is likely to forget a task or item, e.g., due to the change in routine, an alert is provided to the user.
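A minimal sketch of one way such a departure could be detected, assuming a simple overlap heuristic over user-permitted calendar/location slots; the RoutineSlot and departs_from_routine names and the notify_user call are illustrative, not part of the disclosure.

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical representation of a typical weekday slot, built (with user
# permission) from calendar and location history.
@dataclass
class RoutineSlot:
    start: time
    end: time
    location: str

def departs_from_routine(baseline: list[RoutineSlot], today: list[RoutineSlot]) -> bool:
    """Flag a departure if some slot planned for today has no overlapping
    baseline slot at the same location (a deliberately simple heuristic)."""
    for slot in today:
        matched = any(
            b.location == slot.location
            and b.start <= slot.end
            and slot.start <= b.end
            for b in baseline
        )
        if not matched:
            return True
    return False

# If a departure is detected, the system could surface a reminder, e.g.:
#   if departs_from_routine(baseline, today):
#       notify_user("Your routine looks different today - remember to ...")
```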
Comparative user feedback rating
This disclosure describes comparative user feedback for products and services. Users are presented with choices of products and/or services and requested to make an ordered selection of their preferred choice(s). A user's spatial ordering of choices on a slider provides information about the degree of user preference. The comparative user feedback data are analyzed and used to train a machine learning (ML) model using ordinal regression. Absolute (non-comparative) feedback data are provided to the ML model as seed training data. Ordinal regression on the comparative data is used to generate a comparison metric, which is utilized to rank user preferences and provide recommendations. The comparative feedback is integrated into the user's profile, which is utilized to generate user feedback candidates and provide recommendations.
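One common way to realize the ordinal-comparison step is to reduce a user's ordered selection to pairwise preferences and fit a linear scoring model. The sketch below does this with scikit-learn's LogisticRegression; the feature vectors and the pairwise reduction are assumptions for illustration, since the disclosure specifies only that an ordinal-regression-style model is trained.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pairwise_dataset(ordered_items):
    """Turn items sorted by the user's slider placement (most- to
    least-preferred) into feature-difference samples for a linear model."""
    X, y = [], []
    for i in range(len(ordered_items)):
        for j in range(i + 1, len(ordered_items)):
            X.append(ordered_items[i] - ordered_items[j]); y.append(1)  # i preferred over j
            X.append(ordered_items[j] - ordered_items[i]); y.append(0)  # j not preferred over i
    return np.array(X), np.array(y)

# Illustrative item feature vectors, already in the user's preference order.
ordered_items = [np.array([1.0, 0.2]), np.array([0.4, 0.9]), np.array([0.1, 0.1])]
X, y = pairwise_dataset(ordered_items)
model = LogisticRegression().fit(X, y)

# The learned weights act as a comparison metric: score and rank items.
scores = [float(model.coef_[0] @ v) for v in ordered_items]
```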
Improving Query Suggestions Based On Search Box Edits
Users often enter terms into a search box and then modify the query. Such modifications may be based on, e.g., the real-time query suggestions offered by the search engine, or the user thinking of a different phrasing for the query. Such changes made to the entered search terms prior to executing the search are not captured in the search history and are not taken into account when tailoring post-search query suggestions or search results. This disclosure describes the use of a trained machine learning model to customize query suggestions and/or search results based on terms previously typed in the search box, obtained with the user's permission.
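As an illustration of the idea (not the trained model described above), the sketch below re-ranks candidate suggestions by their overlap with terms the user typed and then edited away before submitting; the function name and scoring rule are assumptions.

```python
def rerank_suggestions(edit_history, candidates):
    """Boost suggestions that mention terms from abandoned intermediate
    queries, captured (with permission) before the search was executed."""
    abandoned = {t for query in edit_history for t in query.lower().split()}

    def score(candidate):
        return sum(1 for t in candidate.lower().split() if t in abandoned)

    return sorted(candidates, key=score, reverse=True)

# Example: the user typed "cheap flights paris", then edited it before searching.
ranked = rerank_suggestions(
    edit_history=["cheap flights paris"],
    candidates=["paris hotels", "flights to paris deals", "cheap flights to paris"],
)
# -> "cheap flights to paris" and "flights to paris deals" rank above "paris hotels"
```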
Destination Search With User-specified Constraints
When issuing destination-related queries, users sometimes include one or more constraints within the query. Even though a search engine can be used to search for data regarding each constraint individually, users need to manually integrate the results of separate searches for different types of data. This disclosure describes techniques to retrieve and present search results for destination-related queries based on user-specified constraints present within the query. The results are obtained by performing separate searches for the various constraints specified in the user query, then combining and filtering the results to include only those that match all constraints. The results are sorted based on specific criteria prior to presentation to the user.
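The combine-and-filter step could look roughly like the sketch below, where search(constraint) stands in for a hypothetical per-constraint search returning scored destinations, and the sort criterion (average score across constraints) is assumed for illustration.

```python
def destination_results(constraints, search):
    """Run one search per user-specified constraint, keep only destinations
    returned for every constraint, and sort by average score."""
    per_constraint = [search(c) for c in constraints]        # each: {destination: score}
    common = set.intersection(*(set(r) for r in per_constraint))
    combined = {
        d: sum(r[d] for r in per_constraint) / len(per_constraint)
        for d in common
    }
    return sorted(combined, key=combined.get, reverse=True)

# Example with canned per-constraint results.
results = destination_results(
    ["kid-friendly", "under 3 hour flight"],
    {"kid-friendly": {"Lisbon": 0.9, "Rome": 0.8},
     "under 3 hour flight": {"Lisbon": 0.7, "Prague": 0.9}}.get,
)
# -> ["Lisbon"]  (the only destination matching all constraints)
```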
Robust ADHD testing by applying clustering techniques to survey responses or speech data
Existing tests for attention deficit hyperactivity disorder (ADHD) may exhibit some bias. Also, these tests require filling in a survey with subjective responses, which can lead to misdiagnosis. The techniques described herein reduce bias in ADHD tests by seeking clusters in test-parameter space conditioned on certain characteristics of a person. Clustering is performed using machine learning techniques. With user permission, speech data is obtained via one or more devices, such as a phone, smart speaker, etc., and is used to make objective diagnoses of ADHD.
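A rough sketch of the clustering step, assuming numeric feature vectors derived from survey responses or speech (an (n, d) NumPy array) and per-group k-means; the grouping variable, the feature construction, and the choice of k are assumptions, and this is illustrative rather than diagnostic.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_by_group(features, groups, k=3):
    """Cluster test-parameter vectors separately within each characteristic
    group ("conditioned on certain characteristics"), so that structure is
    sought within groups rather than across them. Labels are per-group."""
    labels = np.empty(len(features), dtype=int)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        labels[idx] = KMeans(n_clusters=k, n_init=10).fit_predict(features[idx])
    return labels
```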
Incremental Packing Problems: Algorithms and Polyhedra
In this thesis, we propose and study discrete, multi-period extensions of classical packing problems, a fundamental class of models in combinatorial optimization. Those extensions fall under the general name of incremental packing problems. In such models, we are given an added time component and different capacity constraints for each time. Over time, capacities are weakly increasing as resources increase, allowing more items to be selected. Once an item is selected, it cannot be removed in future times. The goal is to maximize some (possibly also time-dependent) objective function under such packing constraints.
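As a compact illustration, the feasible region shared by these models can be written as follows; the notation (items [n] with weights w_i, horizon T, capacities W_t) is introduced here for exposition, and each chapter instantiates the objective f differently.

```latex
% Generic incremental packing model (illustrative notation): nested
% selections under weakly increasing capacities, with a possibly
% time-dependent objective f.
\begin{align*}
\max \quad & f(S_1, \dots, S_T) \\
\text{s.t.} \quad & \sum_{i \in S_t} w_i \le W_t, && t = 1, \dots, T, \\
& S_1 \subseteq S_2 \subseteq \cdots \subseteq S_T \subseteq [n],
\end{align*}
% where W_1 \le W_2 \le \cdots \le W_T; the nesting S_{t-1} \subseteq S_t
% encodes that an item, once selected, cannot be removed.
```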
In Chapter 2, we study the generalized incremental knapsack problem, a multi-period extension of the classical knapsack problem. We present a policy that reduces the generalized incremental knapsack problem to sequentially solving multiple classical knapsack problems, for which many efficient algorithms are known. We call such an algorithm a single-time algorithm. We prove that this algorithm gives a (0.17 − ε)-approximation for the generalized incremental knapsack problem. Moreover, we show that the algorithm is very efficient in practice. On randomly generated instances of the generalized incremental knapsack problem, it returns near-optimal solutions and runs much faster than Gurobi solving the standard integer programming formulation.
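A minimal sketch of the sequential reduction, assuming integer item weights, time-dependent profits profits[i][t], and weakly increasing capacities; the thesis's single-time algorithm and its (0.17 − ε) analysis involve more than this plain loop, so the code only illustrates the reduction to classical knapsack instances.

```python
def knapsack(items, capacity):
    """Classical 0/1 knapsack by dynamic programming over used (integer)
    weight; items are (id, weight, profit) triples. Returns the chosen ids."""
    best = {0: (0, frozenset())}                    # used weight -> (profit, ids)
    for item_id, w, p in items:
        for used, (prof, chosen) in list(best.items()):
            if used + w <= capacity and prof + p > best.get(used + w, (-1,))[0]:
                best[used + w] = (prof + p, chosen | {item_id})
    return max(best.values())[1]

def single_time_policy(weights, profits, capacities):
    """At each time t, solve a classical knapsack over the not-yet-packed
    items with the residual capacity, and insert the chosen items at t."""
    packed, used, insertion_time = set(), 0, {}
    for t, W in enumerate(capacities):              # capacities weakly increasing
        remaining = [(i, weights[i], profits[i][t])
                     for i in range(len(weights)) if i not in packed]
        for i in knapsack(remaining, W - used):
            packed.add(i)
            insertion_time[i] = t
            used += weights[i]
    return insertion_time                           # item -> time of insertion
```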
In Chapter 3, we present additional approximation algorithms for the generalized incremental knapsack problem. We first give a polynomial-time (½ − ε)-approximation, improving upon the approximation ratio given in Chapter 2. This result is based on a new reformulation of the generalized incremental knapsack problem as a single-machine sequencing problem, which is addressed by blending dynamic programming techniques and the classical Shmoys-Tardos algorithm for the generalized assignment problem. Using the same sequencing reformulation, combined with further enumeration-based self-reinforcing ideas and new structural properties of nearly-optimal solutions, we give a quasi-polynomial time approximation scheme for the problem, thus ruling out the possibility that the generalized incremental knapsack problem is APX-hard under widely-believed complexity assumptions.
In Chapter 4, we first turn our attention to the submodular monotone all-or-nothing incremental knapsack problem (IK-AoN), a special case of maximizing a monotone submodular function subject to a knapsack constraint, extended to a multi-period setting. We show that each instance of IK-AoN can be reduced to a linear version of the problem. In particular, using a known PTAS for the linear version from the literature as a subroutine, this implies that IK-AoN admits a PTAS. Next, we study special cases of the generalized incremental knapsack problem and provide improved approximation schemes for these special cases.
In Chapter 5, we give a polynomial-time (¼ − ε)-approximation in expectation for the incremental generalized assignment problem, a multi-period extension of the generalized assignment problem. To develop this result, similar to the reformulation from Chapter 3, we reformulate the incremental generalized assignment problem as a multi-machine sequencing problem. Following the reformulation, we show that the (½ − ε)-approximation for the generalized incremental knapsack problem, combined with further randomized rounding techniques, can be leveraged to give a constant factor approximation in expectation for the incremental generalized assignment problem.
In Chapter 6, we turn our attention to the incremental knapsack polytope. First, we extend one direction of Balas's characterization of 0/1-facets of the knapsack polytope to the incremental knapsack polytope. Starting from extended cover inequalities valid for the knapsack polytope, we show how to strengthen them to define facets for the incremental knapsack polytope. In particular, we prove that under the same conditions for which these inequalities define facets for the knapsack polytope, following our strengthening procedure, the resulting inequalities define facets for the incremental knapsack polytope. Then, as there are up to exponentially many such inequalities, we give separation algorithms for this class of inequalities.
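For reference, the classical starting point is the cover and extended cover inequalities for the knapsack polytope, stated below in standard notation; the strengthening to the incremental knapsack polytope developed in this chapter is not reproduced here.

```latex
% For the knapsack polytope conv{ x in {0,1}^n : sum_i w_i x_i <= W }:
% a cover C \subseteq [n] satisfies \sum_{i \in C} w_i > W, and its
% extension is E(C) = C \cup \{ i \in [n] : w_i \ge \max_{j \in C} w_j \}.
\begin{align*}
\sum_{i \in C} x_i &\le |C| - 1 && \text{(cover inequality)} \\
\sum_{i \in E(C)} x_i &\le |C| - 1 && \text{(extended cover inequality)}
\end{align*}
```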
Selectable Heaps and Their Application to Lazy Search Trees
We show that the O(log n)-time extract-minimum operation of efficient priority queues can be generalized to the extraction of the k smallest elements in O(k log(n/k)) time. We first show that the heap-ordered tree selection of Kaplan et al. can be applied to the heap-ordered trees of the classic Fibonacci heap to support the extraction in O(k log(n/k)) amortized time. We then show that selection is possible in a priority queue with optimal worst-case guarantees by applying heap-ordered tree selection to Brodal queues, supporting the operation in O(k log(n/k)) worst-case time.
Via a reduction from the multiple selection problem, we show that Ω(k log(n/k)) time is necessary.
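For intuition, the sketch below reports the k smallest elements of a binary min-heap (a Python heapq array) using an auxiliary candidate heap over heap positions. This simpler routine runs in O(k log k) time and is not the selectable-heap construction of the paper, which attains O(k log(n/k)) on the heap-ordered trees of Fibonacci heaps and Brodal queues.

```python
import heapq

def k_smallest(heap, k):
    """Report (without removing) the k smallest elements of a binary
    min-heap stored as a heapq array, by exploring heap-ordered children
    through a candidate heap of (value, index) pairs."""
    if not heap or k <= 0:
        return []
    out, candidates = [], [(heap[0], 0)]
    while candidates and len(out) < k:
        value, i = heapq.heappop(candidates)
        out.append(value)
        for child in (2 * i + 1, 2 * i + 2):        # array positions of the children
            if child < len(heap):
                heapq.heappush(candidates, (heap[child], child))
    return out

data = [9, 4, 7, 1, 3, 8, 2]
heapq.heapify(data)
print(k_smallest(data, 4))                          # -> [1, 2, 3, 4]
```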
We then apply the result to the lazy search trees of Sandlund & Wild, creating a new interval data structure based on selectable heaps. This gives optimal O(B + n) lazy search tree performance, lowering the complexity of insertion into a gap Δi to O(log(n/|Δi|)) time. An O(1)-time merge operation is also made possible under certain conditions. If Brodal queues are used, all runtimes of the lazy search tree can be made worst-case. The presented data structure uses the soft heaps of Chazelle, biased search trees, and efficient priority queues in a non-trivial way, approaching the theoretically best data structure for ordered data.
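A heavily simplified sketch of the gap idea, with illustrative names: elements sit in unsorted buckets delimited by previously queried keys, insertion only locates a gap, and a query pays to split one gap. The actual lazy search tree stores each gap in selectable heaps and biased search trees to obtain the bounds above.

```python
import bisect

class LazyGaps:
    """Toy gap structure: sorting work is deferred until queries demand it."""

    def __init__(self):
        self.boundaries = []     # sorted keys fixed by earlier queries
        self.gaps = [[]]         # gaps[i]: unsorted keys between boundaries[i-1] and boundaries[i]

    def insert(self, key):
        i = bisect.bisect_right(self.boundaries, key)
        self.gaps[i].append(key)                 # locate a gap; no sorting yet

    def query(self, key):
        """Return whether key is present, splitting the gap it falls into so
        that later operations in this region do less work."""
        i = bisect.bisect_right(self.boundaries, key)
        gap = self.gaps[i]
        found = key in gap
        smaller = [x for x in gap if x < key]
        larger = [x for x in gap if x >= key]
        self.boundaries.insert(i, key)           # the queried key becomes a boundary
        self.gaps[i:i + 1] = [smaller, larger]
        return found
```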