
    From Social Data Mining to Forecasting Socio-Economic Crisis

    Socio-economic data mining has great potential for gaining a better understanding of problems that our economy and society are facing, such as financial instability, shortages of resources, or conflicts. Without large-scale data mining, progress in these areas seems hard or impossible. Therefore, a suitable, distributed data-mining infrastructure and research centers should be built in Europe. It also appears appropriate to build a network of Crisis Observatories. These can be imagined as laboratories devoted to gathering and processing enormous volumes of data on natural systems such as the Earth and its ecosystem, as well as on human techno-socio-economic systems, so as to gain early warnings of impending events. Reality mining provides the chance to adapt more quickly and more accurately to changing situations. Further opportunities arise from individually customized services, which, however, should be provided in a privacy-respecting way. This requires the development of novel ICT (such as a self-organizing Web), but most likely new legal regulations and suitable institutions as well. As long as such regulations are lacking on a worldwide scale, it is in the public interest that scientists explore what can be done with the huge amounts of data available. Big data do have the potential to change or even threaten democratic societies. The same applies to sudden and large-scale failures of ICT systems. Therefore, dealing with data must be done with a large degree of responsibility and care. The self-interests of individuals, companies, or institutions reach their limits where the public interest is affected, and the public interest is not a sufficient justification to violate the human rights of individuals. Privacy, like confidentiality, is a valuable good, and damaging it would have serious side effects for society.
    Comment: 65 pages, 1 figure, Visioneer White Paper, see http://www.visioneer.ethz.c

    Easy-to-implement hp-adaptivity for non-elliptic goal-oriented problems

    The finite element method (FEM) has been a foundational numerical technique in computational mechanics and civil engineering since its inception by Courant in 1943. Originating from the Ritz method and variational calculus, the FEM was primarily employed to derive solutions for vibrational systems. A distinctive strength of the FEM is its capability to represent mathematical models through the weak variational formulation of partial differential equations (PDEs), making computations feasible even in intricate geometries. However, the pursuit of accuracy often imposes a significant computational cost. In the FEM, adaptive methods have emerged to balance the accuracy of solutions with computational costs. The h-adaptive FEM designs more efficient meshes by reducing the mesh size h locally while keeping the polynomial order of approximation p fixed (usually p = 1, 2). An alternative is the p-adaptive FEM, which locally enriches the polynomial order p while keeping the mesh size h constant. By dynamically adapting both h and p, the hp-adaptive FEM achieves exponential convergence rates. Adaptivity is crucial for obtaining accurate solutions. However, the traditional focus on global norms, such as L^2 or H^1, does not always serve the requirements of specific applications. In engineering, controlling the error in a particular quantity of interest (QoI) is often more critical than controlling the error of the overall solution, which motivated the development of goal-oriented adaptive (GOA) strategies. In this dissertation, we develop automatic goal-oriented (GO) hp-adaptive algorithms tailored to non-elliptic problems. These algorithms stand out for their robustness and ease of implementation, attributes that make them especially suitable for industrial applications. A key advantage of our methodologies is that they do not require computing reference solutions on globally refined grids. Nevertheless, our approach is limited to anisotropic p and isotropic h refinements. We conduct multiple tests to validate our algorithms. We probe the convergence behavior of our GO h- and p-adaptive algorithms using Helmholtz and convection-diffusion equations in one-dimensional scenarios. We test our GO hp-adaptive algorithms on Poisson, Helmholtz, and convection-diffusion equations in two dimensions. We use a Helmholtz-like scenario for three-dimensional cases to highlight the adaptability of our GO algorithms. We also develop efficient ways to build large databases suitable for training deep neural networks (DNNs) using the hp MAGO FEM. As a result, we efficiently generate large databases, possibly containing hundreds of thousands of synthetic datasets or measurements.
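
    The abstract refers to the generic adaptive loop (solve, estimate, mark, refine) that underlies h-adaptivity. The sketch below is a minimal Python illustration of that loop for a 1D Poisson model problem with piecewise-linear elements, a residual-type error indicator, and Doerfler marking; the model problem, the indicator, and all function names are illustrative assumptions and not the goal-oriented hp-algorithms developed in the dissertation.

    import numpy as np

    def solve_poisson_p1(nodes, f):
        # Assemble and solve -u'' = f with u(0) = u(1) = 0 on the given 1D mesh.
        n = len(nodes)
        A = np.zeros((n, n))
        b = np.zeros(n)
        for e in range(n - 1):
            h = nodes[e + 1] - nodes[e]
            A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h  # local stiffness
            b[e:e + 2] += 0.5 * h * f(0.5 * (nodes[e] + nodes[e + 1]))       # midpoint-rule load
        free = np.arange(1, n - 1)                                           # interior (free) nodes
        u = np.zeros(n)
        u[free] = np.linalg.solve(A[np.ix_(free, free)], b[free])
        return u

    def estimate(nodes, f):
        # Residual-type element indicators eta_K ~ h_K ||f||_K (u'' vanishes on linear elements).
        h = np.diff(nodes)
        mid = 0.5 * (nodes[:-1] + nodes[1:])
        return h * np.sqrt(h) * np.abs(f(mid))

    def refine(nodes, marked):
        # Bisect every marked element by inserting its midpoint.
        midpoints = 0.5 * (nodes[marked] + nodes[marked + 1])
        return np.sort(np.concatenate([nodes, midpoints]))

    f = lambda x: np.where(np.abs(x - 0.5) < 0.1, 100.0, 1.0)  # illustrative right-hand side
    nodes = np.linspace(0.0, 1.0, 5)
    for it in range(8):
        u = solve_poisson_p1(nodes, f)
        eta = estimate(nodes, f)
        order = np.argsort(eta)[::-1]                      # sort indicators, largest first
        cumulative = np.cumsum(eta[order] ** 2)
        n_marked = np.searchsorted(cumulative, 0.5 * cumulative[-1]) + 1  # Doerfler marking, theta = 0.5
        print(f"iter {it}: {len(nodes) - 1} elements, estimated error {np.sqrt(cumulative[-1]):.3e}")
        nodes = refine(nodes, order[:n_marked])

    Goal-oriented variants weight such indicators by a dual (adjoint) solution so that refinement targets the error in a specific quantity of interest rather than a global norm.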

    Proceedings of SAT Competition 2021 : Solver and Benchmark Descriptions

    Non peer reviewed

    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained such that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.
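
    The abstract emphasizes ImageJ2's decoupled, N-dimensional data model and its focus on interoperability with external applications. As a hypothetical illustration of driving that architecture from another environment, the short sketch below uses the pyimagej wrapper; the file name is a placeholder and the exact service calls are assumptions based on pyimagej's public documentation, not on anything stated in the abstract.

    import imagej

    ij = imagej.init()                     # start an ImageJ2 gateway (fetches ImageJ2 on first use)
    dataset = ij.io().open('sample.tif')   # IOService: format-agnostic loading into the N-dimensional data model
    array = ij.py.from_java(dataset)       # convert the Dataset for use with external NumPy-based tools
    print(array.shape)                     # dimensions of the (possibly >3-D) dataset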