    The influence of frontal alignment in the advanced reciprocating gait orthosis on energy cost and crutch force requirements during paraplegic gait

    Reduction of energy cost and upper body load during paraplegic walking is considered to be an important criterion in future developments of walking systems. A high energy cost limits the maximum walking distance in current devices, whereas wrist and shoulder pathology can deteriorate because of the high upper body load. A change in alignment of the mechanical brace in the frontal plane, i.e. abduction, can contribute to a more efficient gait pattern with sufficient foot clearance and less pelvic lateral sway. A decrease in pelvic lateral sway after aligning in abduction results in a shift of the centre of mass to the swing-leg crutch, which may result in a decrease in the crutch force required on the stance side to maintain foot clearance. Five paraplegic subjects were provided with a standard Advanced Reciprocating Gait Orthosis (ARGO) and an ARGO aligned in 4 different degrees of abduction (0°, 3°, 6° and 9°). After determining an optimal abduction angle for each of the subjects, a cross-over design was used to compare the ARGO with the individually optimised abducted orthosis. An abduction angle between 0° and 3° was chosen as the optimal abduction angle; subjects were not able to walk satisfactorily with abduction angles of 6° and 9°. A significant reduction in crutch peak force on the stance side was found (approx. 12%, p < 0.01) in the abducted orthosis. Reductions in the crutch force time integral (15%) and in crutch peak force on the swing side (5%) were not significant. No differences in oxygen uptake or oxygen cost were found. We concluded that an abduction angle between 0° and 3° is beneficial with respect to upper body load, whereas energy requirements did not change.
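    The two load measures reported above, crutch peak force and the crutch force-time integral, can be computed from a sampled force trace as in the minimal sketch below; the force trace, sampling rate and the 12% reduction applied at the end are illustrative placeholders, not data from the study.

```python
# Minimal sketch (hypothetical data): the two crutch-load measures used in the
# study above, peak force and force-time integral, computed from a sampled trace.
import numpy as np

fs = 100.0                                  # assumed sampling rate (Hz)
t = np.linspace(0.0, 1.0, int(fs) + 1)      # one stance phase of ~1 s
force = 200.0 * np.sin(np.pi * t)           # fake crutch force trace (N)

peak_force = force.max()                    # crutch peak force (N)
force_time_integral = np.trapz(force, t)    # crutch force-time integral (N*s)

# Illustrating the reported ~12% reduction in stance-side peak force:
reduced_peak = (1.0 - 0.12) * peak_force
print(peak_force, force_time_integral, reduced_peak)
```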

    Logic Programming Approaches for Representing and Solving Constraint Satisfaction Problems: A Comparison

    Many logic programming based approaches can be used to describe and solve combinatorial search problems. On the one hand, there is constraint logic programming, which computes a solution as an answer substitution to a query containing the variables of the constraint satisfaction problem. On the other hand, there are systems based on stable model semantics, abductive systems, and first-order logic model generators, which compute solutions as models of some theory. This paper compares these different approaches from the point of view of knowledge representation (how declarative are the programs) and from the point of view of performance (how good are they at solving typical problems). Comment: 15 pages, 3 eps-figure
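    As a concrete example of the kind of problem both families of systems target, the sketch below encodes a tiny graph 3-colouring instance in plain Python and solves it by naive backtracking; it stands in for neither CLP nor ASP, and the graph is hypothetical, but the returned variable assignment is what CLP would report as an answer substitution and a model-based system as a model.

```python
# Minimal sketch (plain Python, not CLP or ASP): graph 3-colouring as a
# constraint satisfaction problem, solved by naive backtracking search.
EDGES = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]   # hypothetical instance
COLOURS = ["red", "green", "blue"]

def consistent(assignment):
    # Inequality constraints: adjacent nodes must get different colours.
    return all(assignment.get(u) != assignment.get(v)
               for u, v in EDGES
               if u in assignment and v in assignment)

def solve(variables, assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):
        return dict(assignment)                 # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for colour in COLOURS:
        assignment[var] = colour
        if consistent(assignment):
            result = solve(variables, assignment)
            if result is not None:
                return result
        del assignment[var]                     # backtrack
    return None

print(solve(sorted({v for e in EDGES for v in e})))
```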

    CHR as grammar formalism. A first report

    Grammars written as Constraint Handling Rules (CHR) can be executed as efficient and robust bottom-up parsers that provide a straightforward, non-backtracking treatment of ambiguity. Abduction with integrity constraints as well as other dynamic hypothesis generation techniques fit naturally into such grammars and are exemplified for anaphora resolution, coordination and text interpretation. Comment: 12 pages. Presented at ERCIM Workshop on Constraints, Prague, Czech Republic, June 18-20, 200
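    The bottom-up, non-backtracking flavour of such CHR grammars can be imitated with a small fixed-point loop over a store of items, as in the Python sketch below; the toy grammar, lexicon and sentence are hypothetical, and the sketch is only an analogue of CHR propagation, not CHR itself.

```python
# Minimal sketch (Python analogue of CHR-style bottom-up parsing): grammar rules
# are applied to a store of items (category, start, end) until a fixed point.
RULES = {("det", "noun"): "np", ("verb", "np"): "verb_phrase", ("np", "verb_phrase"): "s"}
LEXICON = {"the": "det", "cat": "noun", "saw": "verb", "a": "det", "dog": "noun"}

def parse(words):
    # Seed the store with lexical items, one per word position.
    store = {(LEXICON[w], i, i + 1) for i, w in enumerate(words)}
    changed = True
    while changed:                              # fixed-point computation
        changed = False
        for (c1, i, j) in list(store):
            for (c2, j2, k) in list(store):
                if j == j2 and (c1, c2) in RULES:
                    item = (RULES[(c1, c2)], i, k)
                    if item not in store:
                        store.add(item)
                        changed = True
    return store

items = parse("the cat saw a dog".split())
print(("s", 0, 5) in items)                     # True: the whole string parses as a sentence
```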

    Towards robust and reliable multimedia analysis through semantic integration of services

    Thanks to ubiquitous Web connectivity and portable multimedia devices, it has never been so easy to produce and distribute new multimedia resources such as videos, photos, and audio. This ever-increasing production leads to an information overload for consumers, which calls for efficient multimedia retrieval techniques. Multimedia resources can be efficiently retrieved using their metadata, but the multimedia analysis methods that can automatically generate this metadata are currently not reliable enough for highly diverse multimedia content. A reliable and automatic method for analyzing general multimedia content is needed. We introduce a domain-agnostic framework that annotates multimedia resources using currently available multimedia analysis methods. By using a three-step reasoning cycle, this framework can assess and improve the quality of multimedia analysis results, by consecutively (1) combining analysis results effectively, (2) predicting which results might need improvement, and (3) invoking compatible analysis methods to retrieve new results. By using semantic descriptions for the Web services that wrap the multimedia analysis methods, compatible services can be automatically selected. By using additional semantic reasoning on these semantic descriptions, the different services can be repurposed across different use cases. We evaluated this domain-agnostic framework in the context of video face detection, and showed that it is capable of providing the best analysis results regardless of the input video. The proposed methodology can serve as a basis to build a generic multimedia annotation platform, which returns reliable results for diverse multimedia analysis problems. This allows for better metadata generation and more efficient retrieval of multimedia resources.
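    The three-step reasoning cycle can be sketched as a simple loop; the data types, the confidence-threshold heuristic and the service interface below are hypothetical placeholders rather than the framework's actual API.

```python
# Minimal sketch of the three-step reasoning cycle described above (hypothetical types).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AnalysisResult:
    label: str
    confidence: float
    source: str

def combine(results: List[AnalysisResult]) -> List[AnalysisResult]:
    # Step 1: merge results from different services, keeping the most confident per label.
    best = {}
    for r in results:
        if r.label not in best or r.confidence > best[r.label].confidence:
            best[r.label] = r
    return list(best.values())

def needs_improvement(results: List[AnalysisResult], threshold: float = 0.7) -> List[AnalysisResult]:
    # Step 2: predict which results are likely unreliable (here: a simple confidence threshold).
    return [r for r in results if r.confidence < threshold]

def improve(weak: List[AnalysisResult],
            services: List[Callable[[str], AnalysisResult]]) -> List[AnalysisResult]:
    # Step 3: invoke compatible services (selected via semantic descriptions in the
    # actual framework) to obtain new results for the weak annotations.
    return [service(r.label) for r in weak for service in services]

def reasoning_cycle(initial, services, rounds=3):
    results = initial
    for _ in range(rounds):
        results = combine(results)
        weak = needs_improvement(results)
        if not weak:
            break
        results += improve(weak, services)
    return results

# Hypothetical usage with two face-detection results for the same video frame.
detections = [AnalysisResult("face", 0.55, "service_a"), AnalysisResult("face", 0.92, "service_b")]
print(reasoning_cycle(detections, services=[]))
```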

    Abduction-Based Explanations for Machine Learning Models

    The growing range of applications of Machine Learning (ML) in a multitude of settings motivates the need to compute small explanations for the predictions made. Small explanations are generally accepted as easier for human decision makers to understand. Most earlier work on computing explanations is based on heuristic approaches, providing no guarantees of quality in terms of how close such solutions are to cardinality- or subset-minimal explanations. This paper develops a constraint-agnostic solution for computing explanations for any ML model. The proposed solution exploits abductive reasoning, and imposes the requirement that the ML model can be represented as sets of constraints using some target constraint reasoning system for which the decision problem can be answered with some oracle. The experimental results, obtained on well-known datasets, validate the scalability of the proposed approach as well as the quality of the computed solutions.
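    The core idea can be illustrated with a deletion-based search for a subset-minimal explanation, where an oracle decides whether a partial assignment still forces the prediction; the `entails` interface and the toy model below are hypothetical placeholders, not the paper's actual encoding.

```python
# Minimal sketch: subset-minimal (abductive) explanation via deletion-based search.
# `entails` is a placeholder oracle; in the approach above it would be a
# constraint/SMT query checking that, with only these features fixed, the model
# cannot produce a different prediction.
from typing import Callable, Dict, Set

def minimal_explanation(instance: Dict[str, float],
                        entails: Callable[[Dict[str, float]], bool]) -> Set[str]:
    explanation = set(instance)                 # start from all feature literals
    for feature in list(instance):
        candidate = explanation - {feature}
        fixed = {f: instance[f] for f in candidate}
        if entails(fixed):                      # prediction still forced without this feature?
            explanation = candidate             # then the feature is not needed
    return explanation                          # subset-minimal by construction

# Hypothetical usage with a toy "model": prediction is positive iff x1 + x2 > 1,
# so fixing {"x1": 0.9, "x2": 0.9} suffices and "x3" is dropped from the explanation.
def toy_entails(fixed):
    return fixed.get("x1", 0.0) + fixed.get("x2", 0.0) > 1.0

print(minimal_explanation({"x1": 0.9, "x2": 0.9, "x3": 0.5}, toy_entails))
```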