
    Innovation dialogue - Being strategic in the face of complexity - Conference report

    The Innovation Dialogue on Being Strategic in the Face of Complexity was held in Wageningen on 30 November and 1 December 2009. The event is part of a growing dialogue in the international development sector about the complexities of social, economic and political change. It builds on two previous events: the Innovation Dialogue on Navigating Complexity (May 2009) and the Seminar on Institutions, Theories of Change and Capacity Development (December 2008). Over 120 people from a range of Dutch and international development organizations attended. The event was aimed at bridging practitioner, policy and academic interests, and brought together people working on sustainable business strategies, social entrepreneurship and international development. Leading thinkers and practitioners offered their insights on what it means to "be strategic in complex times". The Dialogue was organized and hosted by the Wageningen UR Centre for Development Innovation, working with the Chair Groups of Communication & Innovation Studies, Disaster Studies, Education & Competence Studies and Public Administration & Policy as co-organisers. The theme of the Dialogue aligns closely with Wageningen UR's interest in linking technological and institutional innovation in ways that enable 'science for impact'.

    Deformable Shape Completion with Graph Convolutional Autoencoders

    The availability of affordable and portable depth sensors has made scanning objects and people simpler than ever. However, dealing with occlusions and missing parts is still a significant challenge. The problem of reconstructing a (possibly non-rigidly moving) 3D object from a single or multiple partial scans has received increasing attention in recent years. In this work, we propose a novel learning-based method for the completion of partial shapes. Unlike the majority of existing approaches, our method focuses on objects that can undergo non-rigid deformations. The core of our method is a variational autoencoder with graph convolutional operations that learns a latent space for complete realistic shapes. At inference, we optimize to find the representation in this latent space that best fits the generated shape to the known partial input. The completed shape exhibits a realistic appearance on the unknown part. We show promising results towards the completion of synthetic and real scans of human body and face meshes exhibiting different styles of articulation and partiality. (CVPR 2018)
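    The inference procedure described above, optimizing over the latent space so the decoded shape fits the known part of the input, can be sketched briefly. The following PyTorch snippet is a minimal illustration and not the paper's code: the trained `decoder`, the function name, and the assumption of a known vertex-level correspondence between scan and template are all simplifications of this sketch (the paper also handles partial inputs without such a correspondence).

```python
# Minimal sketch: latent-space optimization for shape completion.
# Assumes a trained graph-convolutional decoder z -> (N, 3) vertex positions
# and a boolean mask marking which vertices of the template were observed.
import torch

def complete_shape(decoder, partial_verts, known_mask,
                   latent_dim=128, steps=500, lr=1e-2):
    """Fit a latent code z so that decoder(z) agrees with the observed vertices."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = decoder(z).squeeze(0)  # (N, 3) full decoded shape
        # Penalize disagreement only on the known (scanned) vertices;
        # the decoder's learned prior fills in the missing part realistically.
        loss = ((pred[known_mask] - partial_verts[known_mask]) ** 2).mean()
        loss.backward()
        opt.step()
    return decoder(z).detach()  # completed shape
```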

    An update on statistical boosting in biomedicine

    Get PDF
    Statistical boosting algorithms have triggered a lot of research during the last decade. They combine a powerful machine-learning approach with classical statistical modelling, offering various practical advantages like automated variable selection and implicit regularization of effect estimates. They are extremely flexible, as the underlying base-learners (regression functions defining the type of effect for the explanatory variables) can be combined with any kind of loss function (target function to be optimized, defining the type of regression setting). In this review article, we highlight the most recent methodological developments on statistical boosting regarding variable selection, functional regression and advanced time-to-event modelling. Additionally, we provide a short overview of relevant applications of statistical boosting in biomedicine.
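    As an illustration of how base-learners and a loss function combine, here is a minimal sketch of componentwise L2 boosting in Python. It is an assumption of this sketch rather than code from the article (whose scope covers far more general base-learners and losses): each iteration fits every candidate base-learner, here a simple least-squares fit per covariate, to the current negative gradient (the residuals under squared-error loss) and updates only the best-fitting one, so stopping early yields implicit regularization and automatic variable selection.

```python
# Minimal sketch of componentwise L2 boosting with linear base-learners.
import numpy as np

def componentwise_l2_boost(X, y, n_iter=100, nu=0.1):
    """X: (n, p) standardized design; y: (n,) centered response."""
    n, p = X.shape
    coef = np.zeros(p)
    resid = y.copy()  # negative gradient of the L2 loss
    for _ in range(n_iter):
        # Least-squares coefficient of each covariate against the residuals.
        beta_j = X.T @ resid / (X ** 2).sum(axis=0)
        # Residual sum of squares achieved by each candidate base-learner.
        rss = ((resid[:, None] - X * beta_j) ** 2).sum(axis=0)
        j = np.argmin(rss)                 # select the best-fitting base-learner
        coef[j] += nu * beta_j[j]          # small step (learning rate nu)
        resid -= nu * beta_j[j] * X[:, j]  # update the negative gradient
    return coef  # covariates never selected keep a coefficient of exactly zero
```

    Early stopping (choosing `n_iter`, e.g. by cross-validation) acts as the regularizer: coefficients of uninformative covariates are never updated and remain zero.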

    Design of innovation policy in the context of climate change: Moroccan innovation policy

    Get PDF
    Context and background: Internationally, innovation policy is recognized as one of the main policies used to tackle sustainability challenges such as climate change, especially in the mission-oriented innovation policy era. In the Moroccan context, the current national strategy for innovation is "Morocco Innovation", adopted in 2009. Beyond this policy, the treatment of problems such as climate change has evolved through sectoral policies.
    Goal and objectives: Our analysis aims to evaluate the current design of innovation policy in Morocco in terms of its ability to address sustainability issues such as climate change. From this, we identify the lines policymakers should follow to design innovation policies capable of facing major societal challenges.
    Methodology: To assess policy design, we propose a framework combining the main contributions currently existing in the literature with an earlier modified version of a framework of failures for the design of innovation policies for transformative change. It accounts for 12 adaptation lines that a policy should follow to achieve transformative change and overcome grand challenges. Using this framework, we analyzed the current situation of innovation policy through the case of climate change policy in Morocco. Conformities and deviations of innovation policies in Morocco are then identified, along with proposals for paths of reform.
    Results: With the third generation, the main adaptation lines added to the proposed framework are developmental evaluation and misconduct management. The current national innovation policy in Morocco suffers from major shortcomings: it remains in the era of national systems of innovation and has not yet entered the third generation. In the context of climate change, the country has seen a succession of plans, from the National Plan to tackle Global Warming (PGW) adopted in 2009 to the National Climate Plan (NCP) in 2020. In this sequence of plans, innovation was included in some pillars; nevertheless, it remained of the same nature over time. Hence, an explicit national strategy for innovation is needed, designed in a transformative-change approach oriented toward missions that shape major challenges and following the 12 proposed adaptation lines.

    Robustness - a challenge also for the 21st century: A review of robustness phenomena in technical, biological and social systems as well as robust approaches in engineering, computer science, operations research and decision aiding

    Get PDF
    Notions of robustness exist in many facets. They come from different disciplines and reflect different worldviews; consequently, they often contradict each other, which makes the term less applicable in a general context. Robustness approaches are often limited to the specific problems for which they were developed. This means that notions and definitions may turn out to be wrong if put into another domain of validity, i.e. context: a definition might be correct in a specific context but need not hold in another. Therefore, in order to speak of robustness we need to specify the domain of validity, i.e. the system, property and uncertainty of interest. As proved by Ho et al. in an optimization context with finite and discrete domains, without prior knowledge about the problem there exists no solution whatsoever which is more robust than any other. As with the No Free Lunch Theorems of Optimization (NFLTs), we have to exploit the problem structure in order to make a solution more robust. This optimization problem is directly linked to a robustness/fragility tradeoff which has been observed in many contexts, e.g. the 'robust, yet fragile' property of HOT (Highly Optimized Tolerance) systems. Another issue is that robustness is tightly bound to other phenomena, such as complexity, for which no clear definition or theoretical framework exists either. Consequently, this review tries to find common aspects across many different approaches and phenomena rather than to build a general theorem of robustness, which might not exist anyway, because complex phenomena often need to be described from a pluralistic view to address as many aspects of a phenomenon as possible. First, many different robustness problems from many different disciplines are reviewed. Second, common aspects are discussed, in particular the relationship of functional and structural properties. This paper argues that robustness phenomena remain a challenge for the 21st century. Robustness is a useful quality of a model or system in terms of the 'maintenance of some desired system characteristics despite fluctuations in the behaviour of its component parts or its environment' (see [Carlson and Doyle, 2002], p. 2). We define robustness phenomena as solutions with balanced tradeoffs, and robust design principles and robustness measures as means to balance tradeoffs.

    SLOPE - Adaptive variable selection via convex optimization

    Get PDF
    We introduce a new estimator for the vector of coefficients $\beta$ in the linear model $y = X\beta + z$, where $X$ has dimensions $n \times p$ with $p$ possibly larger than $n$. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to
    $$\min_{b \in \mathbb{R}^p} \frac{1}{2}\Vert y - Xb \Vert_{\ell_2}^2 + \lambda_1 |b|_{(1)} + \lambda_2 |b|_{(2)} + \cdots + \lambda_p |b|_{(p)},$$
    where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$ and $|b|_{(1)} \ge |b|_{(2)} \ge \cdots \ge |b|_{(p)}$ are the decreasing absolute values of the entries of $b$. This is a convex program, and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical $\ell_1$ procedures such as the Lasso. Here, the regularizer is a sorted $\ell_1$ norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant $p$-values with more stringent thresholds. One notable choice of the sequence $\{\lambda_i\}$ is given by the BH critical values $\lambda_{\mathrm{BH}}(i) = z(1 - i \cdot q/2p)$, where $q \in (0,1)$ and $z(\alpha)$ is the quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with $\lambda_{\mathrm{BH}}$ provably controls FDR at level $q$. Moreover, it also appears to have appreciable inferential properties under more general designs $X$ while having substantial power, as demonstrated in a series of experiments on both simulated and real data. Published at http://dx.doi.org/10.1214/15-AOAS842 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
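    The BH-style $\lambda$ sequence and the prox-based solver admit a short sketch. The following NumPy/SciPy code is a minimal illustration, not the authors' implementation: it computes $\lambda_{\mathrm{BH}}$ and solves the SLOPE program by proximal gradient descent (ISTA), using the fact that the prox of the sorted $\ell_1$ norm reduces to an isotonic regression of the shifted, sorted absolute values. Function names and the fixed step size are choices of this sketch.

```python
# Minimal sketch: SLOPE via proximal gradient descent with the sorted-l1 prox.
import numpy as np
from scipy.stats import norm

def bh_lambdas(p, q=0.1):
    """BH critical values lambda_BH(i) = z(1 - i*q/(2p)), i = 1..p."""
    i = np.arange(1, p + 1)
    return norm.ppf(1 - i * q / (2 * p))

def prox_sorted_l1(v, lam):
    """Prox of b -> sum_i lam_i * |b|_(i); lam must be nonincreasing, nonnegative."""
    sign, av = np.sign(v), np.abs(v)
    order = np.argsort(av)[::-1]   # indices sorting |v| in decreasing order
    z = av[order] - lam            # rank-dependent soft shift; may break monotonicity
    # PAVA: project z onto the nonincreasing cone (pool adjacent violators).
    sums, cnts = [], []
    for zi in z:
        sums.append(zi); cnts.append(1)
        while len(sums) > 1 and sums[-2] / cnts[-2] < sums[-1] / cnts[-1]:
            s, c = sums.pop(), cnts.pop()
            sums[-1] += s; cnts[-1] += c
    x = np.concatenate([np.full(c, s / c) for s, c in zip(sums, cnts)])
    x = np.maximum(x, 0.0)         # clip at zero (nonnegativity)
    out = np.empty_like(v)
    out[order] = x                 # undo the sort and restore signs
    return sign * out

def slope(X, y, lam, n_iter=500):
    """SLOPE estimate via ISTA with step size 1 / ||X||_2^2."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)                       # gradient of the smooth part
        b = prox_sorted_l1(b - step * grad, step * lam)
    return b
```

    For example, `slope(X, y, bh_lambdas(X.shape[1], q=0.1))` would target FDR control at level 0.1 under an orthogonal design; the per-iteration cost is dominated by the matrix products plus an $O(p \log p)$ sort, which is what makes the complexity comparable to Lasso solvers.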