Simplified methods for the minimization of open stacks problem.
This paper presents two methods for solving the minimization of open stacks problem (MOSP), a pattern sequencing problem arising in parts production with direct industrial application. The first is a heuristic based on graph theory and greedy criteria, while the second is a dynamic programming method. Experimental results confirm the effectiveness of the proposed simplifications compared with methods from the literature.
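The abstract above describes a greedy heuristic for MOSP. As a minimal illustrative sketch (not the authors' graph-based method), the problem and one simple greedy rule can be expressed as follows: a stack for a piece type stays open from the first to the last pattern containing that type, and the greedy rule appends the pattern that opens the fewest new stacks.

```python
def open_stacks(sequence, patterns):
    """Maximum number of simultaneously open stacks when patterns are
    processed in the given order. A stack for a piece type is open from
    the first to the last pattern that contains that type."""
    last = {}
    for pos, p in enumerate(sequence):
        for t in patterns[p]:
            last[t] = pos
    open_now, max_open = set(), 0
    for pos, p in enumerate(sequence):
        open_now |= patterns[p]
        max_open = max(max_open, len(open_now))
        # close stacks whose last pattern has just been processed
        open_now = {t for t in open_now if last[t] > pos}
    return max_open

def greedy_mosp(patterns):
    """Greedy sequencing: repeatedly append the pattern that opens the
    fewest stacks not already open (ties broken by pattern index)."""
    remaining = set(range(len(patterns)))
    opened, seq = set(), []
    while remaining:
        best = min(remaining, key=lambda p: (len(patterns[p] - opened), p))
        seq.append(best)
        opened |= patterns[best]
        remaining.remove(best)
    return seq
```

For instance, for patterns `[{1, 2}, {2, 3}, {3, 4}]` the greedy order `[0, 1, 2]` keeps at most two stacks open at once.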
Towards Domain Generalization for ECG and EEG Classification: Algorithms and Benchmarks
Despite their immense success in numerous fields, machine and deep learning
systems have not yet been able to firmly establish themselves in
mission-critical applications in healthcare. One of the main reasons lies in
the fact that when models are presented with previously unseen,
Out-of-Distribution samples, their performance deteriorates significantly. This
is known as the Domain Generalization (DG) problem. Our objective in this work
is to propose a benchmark for evaluating DG algorithms, in addition to
introducing a novel architecture for tackling DG in biosignal classification.
In this paper, we describe the Domain Generalization problem for biosignals,
focusing on electrocardiograms (ECG) and electroencephalograms (EEG) and
propose and implement an open-source biosignal DG evaluation benchmark.
Furthermore, we adapt state-of-the-art DG algorithms from computer vision to
the problem of 1D biosignal classification and evaluate their effectiveness.
Finally, we also introduce a novel neural network architecture that leverages
multi-layer representations for improved model generalizability. By
implementing the above DG setup we are able to experimentally demonstrate the
presence of the DG problem in ECG and EEG datasets. In addition, our proposed
model demonstrates improved effectiveness compared to the baseline algorithms,
exceeding the state-of-the-art in both datasets. Recognizing the significance
of the distribution shift present in biosignal datasets, the presented
benchmark aims at urging further research into the field of biomedical DG by
simplifying the evaluation process of proposed algorithms. To our knowledge,
this is the first attempt at developing an open-source framework for evaluating
ECG and EEG DG algorithms. Comment: Accepted in IEEE Transactions on Emerging Topics in Computational
Intelligence.
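The standard protocol such a benchmark builds on is leave-one-domain-out evaluation: train on all domains (e.g. recording sites or datasets) except one, then test on the held-out domain. A self-contained toy sketch with synthetic "biosignal features" and a nearest-centroid classifier (all names and the data here are hypothetical, not the paper's benchmark):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(offset, n=100):
    # two Gaussian classes with a domain-specific shift -- a toy stand-in
    # for the distribution shift between biosignal datasets
    X0 = rng.normal(0.0, 1.0, (n, 4)) + offset
    X1 = rng.normal(2.0, 1.0, (n, 4)) + offset
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

domains = {d: make_domain(off) for d, off in {"A": 0.0, "B": 0.5, "C": 1.0}.items()}

def nearest_centroid_fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Leave-one-domain-out: train on all domains but one, test on the held-out one.
for held_out in domains:
    Xtr = np.vstack([domains[d][0] for d in domains if d != held_out])
    ytr = np.concatenate([domains[d][1] for d in domains if d != held_out])
    Xte, yte = domains[held_out]
    model = nearest_centroid_fit(Xtr, ytr)
    acc = (nearest_centroid_predict(model, Xte) == yte).mean()
    print(f"held-out domain {held_out}: accuracy {acc:.2f}")
```

A DG algorithm is then judged by its worst-case or average accuracy across the held-out domains, rather than by in-distribution accuracy.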
The Hanabi Challenge: A New Frontier for AI Research
From the early days of computing, games have been important testbeds for
studying how well machines can do sophisticated decision making. In recent
years, machine learning has made dramatic advances with artificial agents
reaching superhuman performance in challenge domains like Go, Atari, and some
variants of poker. As with their predecessors of chess, checkers, and
backgammon, these game domains have driven research by providing sophisticated
yet well-defined challenges for artificial intelligence practitioners. We
continue this tradition by proposing the game of Hanabi as a new challenge
domain with novel problems that arise from its combination of purely
cooperative gameplay with two to five players and imperfect information. In
particular, we argue that Hanabi elevates reasoning about the beliefs and
intentions of other agents to the foreground. We believe developing novel
techniques for such theory of mind reasoning will not only be crucial for
success in Hanabi, but also in broader collaborative efforts, especially those
with human partners. To facilitate future research, we introduce the
open-source Hanabi Learning Environment, propose an experimental framework for
the research community to evaluate algorithmic advances, and assess the
performance of current state-of-the-art techniques. Comment: 32 pages, 5 figures, In Press (Artificial Intelligence).
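The experimental framework the abstract proposes can be sketched with a stub environment (all names here are hypothetical, not the Hanabi Learning Environment API): players act in turn on partial observations, each seeing information hidden from its own view, and all share one team score.

```python
import random

class StubCoopEnv:
    """Hypothetical stand-in for a cooperative, imperfect-information
    environment in the spirit of Hanabi: each player observes the other
    players' hidden tokens but not its own, and the team shares a score."""
    def __init__(self, num_players=2, horizon=10, seed=0):
        self.num_players = num_players
        self.horizon = horizon
        self.rng = random.Random(seed)
    def reset(self):
        self.t, self.score = 0, 0
        self.hidden = [self.rng.randint(0, 1) for _ in range(self.num_players)]
        return self._observations()
    def _observations(self):
        # player i sees everyone's hidden token except its own
        return [[h for j, h in enumerate(self.hidden) if j != i]
                for i in range(self.num_players)]
    def step(self, player, action):
        # the team scores when the acting player matches its own hidden
        # token -- which only the *other* players can see
        if action == self.hidden[player]:
            self.score += 1
        self.t += 1
        return self._observations(), self.score, self.t >= self.horizon

def evaluate(policy, episodes=20):
    """Average team score of a policy over several episodes."""
    total = 0
    for ep in range(episodes):
        env = StubCoopEnv(seed=ep)
        obs = env.reset()
        done = False
        while not done:
            player = env.t % env.num_players
            obs, score, done = env.step(player, policy(player, obs[player]))
        total += score
    return total / episodes

def random_policy(player, obs):
    # baseline that ignores the partial observation entirely
    return random.randint(0, 1)
```

Without communication, a player cannot do better than chance on its own token; exploiting what teammates can see is exactly the theory-of-mind aspect the abstract highlights.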
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications including segmentation, regression, image generation and
representation learning applications. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications. Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge
Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6
figures; update includes additional applications, updated author list and
formatting for journal submission.
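The modular-pipeline idea the abstract describes (swappable data loading, augmentation, network, and loss components) can be sketched generically; the functions below are illustrative placeholders, not the NiftyNet API.

```python
import numpy as np

def load_volume(shape=(8, 8, 8), seed=0):
    # placeholder loader producing a synthetic 3D "volume"
    return np.random.default_rng(seed).normal(size=shape)

def flip_augment(vol, rng):
    # random flip along one spatial axis, a common medical-imaging augmentation
    axis = int(rng.integers(0, vol.ndim))
    return np.flip(vol, axis=axis) if rng.random() < 0.5 else vol

def identity_network(vol):
    # placeholder for a segmentation/regression/generation network
    return vol

def l2_loss(pred, target):
    return float(np.mean((pred - target) ** 2))

def run_pipeline(loader, augment, network, loss, target, seed=0):
    """Each stage is a swappable callable, so a new application only
    replaces the components that differ."""
    rng = np.random.default_rng(seed)
    vol = augment(loader(), rng)
    return loss(network(vol), target)
```

Swapping `identity_network` for a real model, or `l2_loss` for a segmentation loss, changes the application without touching the rest of the pipeline, which is the design principle the platform is built around.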
SHREC'16: partial matching of deformable shapes
Matching deformable 3D shapes under partiality transformations is a challenging problem that has received limited focus in the computer vision and graphics communities. With this benchmark, we explore and thoroughly investigate the robustness of existing matching methods in this challenging task. Participants are asked to provide a point-to-point correspondence (either sparse or dense) between deformable shapes undergoing different kinds of partiality transformations, resulting in a total of 400 matching problems to be solved for each method - making this benchmark the biggest and most challenging of its kind. Five matching algorithms were evaluated in the contest; this paper presents the details of the dataset, the adopted evaluation measures, and shows thorough comparisons among all competing methods.
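Point-to-point correspondences in such benchmarks are typically scored by the distance, on the target shape, between each predicted match and the ground-truth match. A minimal sketch of such measures, using Euclidean distance as a simple stand-in for the geodesic distance usually used:

```python
import numpy as np

def correspondence_error(pred_match, gt_match, target_points):
    """Mean distance on the target shape between predicted and
    ground-truth matches (Euclidean stand-in for geodesic distance)."""
    d = np.linalg.norm(target_points[pred_match] - target_points[gt_match], axis=1)
    return float(d.mean())

def accuracy_at_threshold(pred_match, gt_match, target_points, r):
    """Fraction of points matched within distance r of the ground truth --
    the quantity plotted in cumulative error curves."""
    d = np.linalg.norm(target_points[pred_match] - target_points[gt_match], axis=1)
    return float((d <= r).mean())
```

Sweeping `r` and plotting `accuracy_at_threshold` yields the cumulative curves commonly used to compare matching methods.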
Separable Convex Optimization with Nested Lower and Upper Constraints
We study a convex resource allocation problem in which lower and upper bounds
are imposed on partial sums of allocations. This model is linked to a large
range of applications, including production planning, speed optimization,
stratified sampling, support vector machines, portfolio management, and
telecommunications. We propose an efficient gradient-free divide-and-conquer
algorithm, which uses monotonicity arguments to generate valid bounds from the
recursive calls, and eliminate linking constraints based on the information
from sub-problems. This algorithm does not need strict convexity or
differentiability. It produces an ε-approximate solution for the
continuous problem in O(n log m log(B/ε)) time
and an integer solution in O(n log m log B) time, where n is
the number of decision variables, m is the number of constraints, and B is
the resource bound. A complexity of O(n log m) is also achieved
for the linear and quadratic cases. These are the best complexities known to
date for this important problem class. Our experimental analyses confirm the
good performance of the method, which produces optimal solutions for problems
with up to 1,000,000 variables in a few seconds. Promising applications to the
support vector ordinal regression problem are also investigated.
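The abstract's nested-constraint algorithm is beyond a short sketch, but the KKT structure it exploits already appears in the single-constraint special case: for a separable quadratic objective, each variable is a clipped function of one dual multiplier, which is monotone, so the resource constraint can be met by bisection. A hedged sketch of that simpler case (not the paper's divide-and-conquer method), assuming nonnegative lower bounds:

```python
def allocate(weights, lo, hi, R, tol=1e-9):
    """Dual bisection for
        min sum_i w_i * x_i**2   s.t.  sum_i x_i = R,  lo_i <= x_i <= hi_i.
    From the KKT conditions, x_i(lam) = clip(lam / (2 w_i), lo_i, hi_i),
    which is nondecreasing in lam, so bisection on lam meets the budget.
    Assumes lo_i >= 0 so the search can start at lam = 0."""
    def x_of(lam):
        return [min(max(lam / (2 * w), l), h) for w, l, h in zip(weights, lo, hi)]
    assert sum(lo) <= R <= sum(hi), "infeasible"
    a, b = 0.0, 1.0
    while sum(x_of(b)) < R:   # grow the bracket until it covers the budget
        b *= 2.0
    while b - a > tol:
        m = (a + b) / 2
        if sum(x_of(m)) < R:
            a = m
        else:
            b = m
    return x_of((a + b) / 2)
```

For weights (1, 2) and budget 3, the optimum puts twice as much on the cheaper variable: x = (2, 1). The nested lower/upper bounds of the paper are what the divide-and-conquer recursion handles on top of this basic mechanism.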