Combining Contrast Invariant L1 Data Fidelities with Nonlinear Spectral Image Decomposition
This paper focuses on multi-scale approaches for variational methods and
corresponding gradient flows. Recently, for convex regularization functionals
such as total variation, new theory and algorithms for nonlinear eigenvalue
problems via nonlinear spectral decompositions have been developed. Those
methods open new directions for advanced image filtering. However, for an
effective use in image segmentation and shape decomposition, a clear
interpretation of the spectral response regarding size and intensity scales is
needed but lacking in current approaches. In this context, L1 data
fidelities are particularly helpful due to their interesting multi-scale
properties such as contrast invariance. Hence, the novelty of this work is the
combination of L1-based multi-scale methods with nonlinear spectral
decompositions. We compare L1 with L2 scale-space methods in view of
spectral image representation and decomposition. We show that the contrast
invariant multi-scale behavior of L1 promotes sparsity in the spectral
response, providing more informative decompositions. We provide a numerical
method and analyze synthetic and biomedical images for which the decomposition
leads to improved segmentation.
Comment: 13 pages, 7 figures, conference SSVM 2017
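For readers unfamiliar with the framework this abstract builds on, the nonlinear spectral decomposition of Gilboa and co-workers can be sketched as follows. This is assumed background, not quoted from the abstract, and the symbols f, u, J, and phi are illustrative:

```latex
% Sketch of the one-homogeneous spectral framework (assumed background):
% gradient flow of a convex regularizer J, e.g. total variation,
\partial_t u(t) \in -\partial J(u(t)), \qquad u(0) = f,
% spectral response at scale t, and reconstruction of the input f
\phi(t) = t\, \partial_{tt} u(t), \qquad
f = \int_0^\infty \phi(t)\, \mathrm{d}t + \bar{f}.
```

Filtering then amounts to reweighting phi(t) before reintegrating, which is why a clear interpretation of the spectral response in terms of size and intensity scales matters for segmentation and shape decomposition.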
Improving results of pediatric renal transplantation
BACKGROUND: Outcome after renal transplantation in children has been variable. We undertook a retrospective study of our experience over the past five years. STUDY DESIGN: From January 1, 1988, to October 15, 1992, 60 renal transplantations were performed upon 59 children at the Children's Hospital of Pittsburgh. Twenty-eight (47 percent) of the kidneys were from cadaveric donors, and 32 (53 percent) were from living donors. The recipients ranged in age from 0.8 to 17.4 years, with a mean of 9.8 ± 4.8 years. Forty-six (77 percent) recipients were undergoing a first transplant, while 14 (23 percent) received a second or third transplant. Eight (13 percent) of the patients were sensitized, with a panel reactive antibody of more than 40 percent. Eleven of the 14 patients undergoing retransplantation and seven of the eight patients who were sensitized received kidneys from cadaveric donors. Thirty-three (55 percent) patients received cyclosporine-based immunosuppression, and 27 (45 percent) received FK506 as the primary immunosuppressive agent. RESULTS: The median follow-up period was 36 months, with a range of six to 63 months. The one- and four-year actuarial patient survival rates were 100 and 98 percent. The one- and four-year actuarial graft survival rates were 98 and 83 percent. For living donor recipients, the one- and four-year actuarial patient survival rates were 100 and 100 percent; for cadaveric recipients, they were 100 and 96 percent. Corresponding one- and four-year actuarial graft survival rates were 100 and 95 percent for the living donor recipients and 96 and 69 percent for the cadaveric recipients. Patients on cyclosporine had one- and four-year patient survival rates of 100 and 97 percent, and patients on FK506 had one- and three-year patient survival rates of 100 and 100 percent.
Corresponding one- and four-year actuarial graft survival rates were 100 and 85 percent in the cyclosporine group, while one- and three-year actuarial graft survival rates were 96 and 84 percent in the FK506 group. The mean serum creatinine level was 1.24 ± 0.64 mg per dL; the blood urea nitrogen level was 26 ± 13 mg per dL. The incidence of rejection was 47 percent; 75 percent of the rejections were steroid-responsive. The incidence of cytomegalovirus infection was 10 percent. The incidence of post-transplant lymphoproliferative disorder was 8 percent. None of the patients on cyclosporine could be taken off prednisone; 56 percent of the patients receiving FK506 were successfully taken off prednisone. Early growth and development data suggest that the patients receiving FK506 who were off prednisone had significant gains in growth. CONCLUSIONS: These results support the idea that renal transplantation is a successful therapy for end-stage renal disease in children. They also illustrate the potential benefits of a new immunosuppressive agent, FK506.
FK506 in Pediatric Kidney Transplantation - Primary and Rescue Experience
Between December 14, 1989, and December 17, 1993, 43 patients undergoing kidney transplantation alone at the Children's Hospital of Pittsburgh received FK506 as the primary immunosuppressive agent. The mean recipient age was 10.2 ± 4.8 years (range 0.7–17.4), with 7 (16%) children under 5 years of age and 2 (5%) under 2 years of age. Fifteen (35%) children underwent retransplantation, and 5 (12%) had a panel reactive antibody level greater than 40%. Twenty-two (51%) cases were with cadaveric donors, and 21 (49%) were with living donors. The mean follow-up was 25 ± 14 months. There were no deaths. One- and three-year actuarial graft survival rates were 98% and 85%. The mean serum creatinine and BUN were 1.2 ± 0.6 mg/dl and 26 ± 11 mg/dl; the calculated creatinine clearance was 75 ± 23 ml/min/1.73 m². Twenty-four (62%) patients have been successfully withdrawn from steroids, and 24 (62%) require no anti-hypertensive medication. Improved growth was seen, particularly in pre-adolescent children off steroids. Between July 28, 1990, and December 2, 1993, 24 children were referred for rescue therapy with FK506, 14.6 ± 16.4 months (range 1.1–53.2) after transplantation. Nineteen (79%) were referred because of resistant rejection; 4 (17%) were referred because of proteinuria; 1 (4%) was switched because of steroid-related obesity. There were no deaths. One- and two-year graft survival rates were 75% and 68%. Seventeen (71%) patients were successfully rescued, including 1 of 2 patients who arrived on dialysis. Four (24%) of the successfully rescued patients were weaned off steroids. While not without side effects, which include nephrotoxicity, neurotoxicity, diabetogenicity, and viral complications, FK506 appears to be an effective immunosuppressive agent for both primary and rescue therapy after kidney transplantation. Its steroid-sparing qualities may be of particular importance in the pediatric population.
Utilitarian Collective Choice and Voting
In his seminal Social Choice and Individual Values, Kenneth Arrow stated that his theory applies to voting. Many voting theorists have been convinced that, on account of Arrow’s theorem, all voting methods must be seriously flawed. Arrow’s theory is strictly ordinal, the cardinal aggregation of preferences being explicitly rejected. In this paper I point out that all voting methods are cardinal and therefore outside the reach of Arrow’s result.
Parallel to Arrow’s ordinal approach, there evolved a consistent cardinal theory of collective choice. This theory, most prominently associated with the work of Harsanyi, continued the older utilitarian tradition in a more formal style. The purpose of this paper is to show that various derivations of utilitarian SWFs can also be used to derive utilitarian voting (UV). By this I mean a voting rule that allows the voter to score each alternative in accordance with a given scale. UV-k indicates a scale with k distinct values. The general theory leaves k to be determined on pragmatic grounds. A (1,0) scale gives approval voting. I prefer the scale (1,0,-1) and refer to the resulting voting rule as evaluative voting.
A conclusion of the paper is that the defects of conventional voting methods result not from Arrow’s theorem, but rather from restrictions imposed on voters’ expression of their preferences.
The analysis is extended to strategic voting, utilizing a novel set of assumptions regarding voter behavior.
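The scoring rules discussed above are simple to state computationally. Below is a minimal sketch, not taken from the paper, of a UV-k tally in which each voter assigns every alternative a score from a fixed scale; the scale (1, 0) recovers approval voting and (1, 0, -1) the evaluative voting the author prefers. The function and variable names are illustrative:

```python
# Minimal sketch of utilitarian voting (UV-k): each voter scores every
# alternative on a fixed k-valued scale; the highest total score wins.
# Names (uv_winner, ballots) are illustrative, not from the paper.

def uv_winner(ballots, scale):
    """ballots: list of dicts alternative -> score; scores must lie in `scale`."""
    totals = {}
    for ballot in ballots:
        for alt, score in ballot.items():
            if score not in scale:
                raise ValueError(f"score {score} not in scale {scale}")
            totals[alt] = totals.get(alt, 0) + score
    return max(totals, key=totals.get)

# Evaluative voting uses the scale (1, 0, -1).
ballots = [
    {"A": 1, "B": 0, "C": -1},
    {"A": 1, "B": -1, "C": 0},
    {"A": -1, "B": 1, "C": 1},
]
print(uv_winner(ballots, scale=(1, 0, -1)))  # A totals 1, B and C total 0
```

With the (1, 0) scale the same tally implements approval voting, which illustrates the paper's point that these rules differ only in the scale offered to voters.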
Multiclass Semi-Supervised Learning on Graphs using Ginzburg-Landau Functional Minimization
We present a graph-based variational algorithm for classification of
high-dimensional data, generalizing the binary diffuse interface model to the
case of multiple classes. Motivated by total variation techniques, the method
involves minimizing an energy functional made up of three terms. The first two
terms promote a stepwise continuous classification function with sharp
transitions between classes, while preserving symmetry among the class labels.
The third term is a data fidelity term, allowing us to incorporate prior
information into the model in a semi-supervised framework. The performance of
the algorithm on synthetic data, as well as on the COIL and MNIST benchmark
datasets, is competitive with state-of-the-art graph-based multiclass
segmentation methods.
Comment: 16 pages, to appear in Springer's Lecture Notes in Computer Science
volume "Pattern Recognition Applications and Methods 2013", part of the series
Advances in Intelligent and Soft Computing
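A common form of the three-term energy the abstract describes, in the graph Ginzburg-Landau literature, is sketched below. This is assumed background rather than a quotation from the paper, and the symbols u, L_s, W, mu, and u-hat are illustrative:

```latex
% Sketch of a graph Ginzburg-Landau energy with fidelity (assumed background):
E(u) \;=\; \frac{\epsilon}{2}\,\langle u, L_s u\rangle
  \;+\; \frac{1}{\epsilon}\sum_{i} W(u_i)
  \;+\; \sum_{i} \frac{\mu_i}{2}\,\lVert u_i - \hat{u}_i\rVert^2 .
```

The first term, built on a (symmetrized) graph Laplacian L_s, penalizes disagreement between connected nodes; the multi-well potential W drives each node's value toward one of the class labels, which together yield the stepwise continuous classification with sharp transitions; and the last term is the data fidelity that pins labeled nodes (mu_i > 0 only on the supervised set) in the semi-supervised setting.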
Learn your opponent's strategy (in polynomial time)!
Agents that interact in a distributed environment might increase their utility by behaving optimally given the strategies of the other agents. To do so, agents need to learn about those with whom they share the same world. This paper examines interactions among agents from a game-theoretic perspective. In this context, learning has been assumed to be a means of reaching equilibrium. We analyze the complexity of this learning process. We start with a restricted two-agent model, in which agents are represented by finite automata and one of the agents plays a fixed strategy. We show that even with these restrictions, the learning process may take exponential time. We then suggest a criterion of simplicity that induces a class of automata learnable in polynomial time.
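The restricted setting can be made concrete with a small sketch (illustrative, not the paper's algorithm or notation): the opponent's fixed strategy is a deterministic finite automaton over the learner's actions, and the learner gathers input/output traces from repeated play from which a consistent automaton model can be fitted:

```python
# Illustrative sketch (not the paper's algorithm): the opponent's fixed
# strategy is a deterministic finite automaton (Moore machine) whose
# transitions read the learner's actions; here, tit-for-tat in a
# repeated cooperate/defect (C/D) game.

class OpponentAutomaton:
    def __init__(self, transitions, outputs, start):
        self.transitions = transitions  # (state, learner action) -> next state
        self.outputs = outputs          # state -> action the opponent plays
        self.state = start

    def step(self, learner_action):
        action = self.outputs[self.state]
        self.state = self.transitions[(self.state, learner_action)]
        return action

# Tit-for-tat: start cooperating, then echo the learner's last action.
tit_for_tat = OpponentAutomaton(
    transitions={("C", "C"): "C", ("C", "D"): "D",
                 ("D", "C"): "C", ("D", "D"): "D"},
    outputs={"C": "C", "D": "D"},
    start="C",
)

# The learner probes with a chosen action sequence and records responses;
# learning means fitting an automaton consistent with such traces.
probe = ["C", "D", "D", "C"]
trace = [(a, tit_for_tat.step(a)) for a in probe]
print(trace)  # [('C', 'C'), ('D', 'C'), ('D', 'D'), ('C', 'D')]
```

The paper's complexity question is how many such interactions are needed before the learner's model is correct; the simplicity criterion it proposes carves out a class of opponent automata for which this takes only polynomially many steps.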
Strong laws of large numbers for sub-linear expectations
We investigate three kinds of strong laws of large numbers for capacities
with a new notion of independently and identically distributed (IID) random
variables for sub-linear expectations initiated by Peng. It turns out that
these theorems are natural and fairly neat extensions of the classical
Kolmogorov's strong law of large numbers to the case where probability measures
are no longer additive. An important feature of these strong laws of large
numbers is to provide a frequentist perspective on capacities.Comment: 10 page
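For orientation, the prototypical result in this setting, due to Peng, replaces convergence to a single mean by convergence of the sample averages into an interval. The statement below is assumed background with illustrative notation, not quoted from the abstract:

```latex
% Prototypical SLLN under a sublinear expectation \hat{\mathbb{E}}
% (assumed background): for X_i IID in Peng's sense, with upper mean
% \overline{\mu} = \hat{\mathbb{E}}[X_1], lower mean
% \underline{\mu} = -\hat{\mathbb{E}}[-X_1], and S_n = \sum_{i=1}^n X_i,
% the averages cluster in [\underline{\mu}, \overline{\mu}] quasi-surely:
\underline{\mu} \;\le\; \liminf_{n\to\infty} \frac{S_n}{n}
  \;\le\; \limsup_{n\to\infty} \frac{S_n}{n} \;\le\; \overline{\mu}.
```

When the underlying capacity is additive, the upper and lower means coincide and the statement collapses to Kolmogorov's classical strong law, which is the sense in which the paper's theorems extend it.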
Resolving the Ellsberg Paradox by Assuming that People Evaluate Repetitive Sampling
Ellsberg (1961) designed a decision experiment in which most people violated the axioms of rational choice. He asked people to bet on the outcome of certain random events with known and with unknown probabilities. They usually preferred to bet on events with known probabilities. It is shown that this behavior is reasonable and in accordance with the axioms of rational decision making if it is assumed that people consider bets on events that are repeatedly sampled rather than sampled just once.
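The intuition behind repeated sampling can be illustrated with a small calculation (my own illustration under stated assumptions, not the paper's model): take a "known" urn with win probability 1/2 and an "ambiguous" urn whose composition is 1/4 or 3/4 with equal probability. A single draw is equivalent in both cases, but over n repeated draws the ambiguous urn's win count is far more dispersed, so a risk-averse repeated bettor reasonably prefers the known urn:

```python
# Illustrative sketch: mean and variance of the win count over n draws
# when the urn's win probability p is drawn ONCE from `compositions`
# (a dict p -> weight) and then held fixed for all n draws.

def win_count_moments(n, compositions):
    """Return (mean, variance) of the number of wins in n draws."""
    mean_p = sum(w * p for p, w in compositions.items())
    mean = n * mean_p
    # Law of total variance: E[Var(wins | p)] + Var(E[wins | p])
    within = n * sum(w * p * (1 - p) for p, w in compositions.items())
    between = n**2 * sum(w * (p - mean_p)**2 for p, w in compositions.items())
    return mean, within + between

n = 20
print(win_count_moments(n, {0.5: 1.0}))            # known urn:     (10.0, 5.0)
print(win_count_moments(n, {0.25: 0.5, 0.75: 0.5}))  # ambiguous urn: (10.0, 28.75)
```

Both urns give the same expected number of wins, yet the ambiguous urn's variance is several times larger once n > 1, which is the sense in which the Ellsberg-type preference becomes consistent with rational choice under repeated sampling.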