115 research outputs found
LSST Science Book, Version 2.0
A survey that can cover the sky in optical bands over wide fields to faint
magnitudes with a fast cadence will enable many of the exciting science
opportunities of the next decade. The Large Synoptic Survey Telescope (LSST)
will have an effective aperture of 6.7 meters and an imaging camera with field
of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over
20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with
fifteen-second exposures in six broad bands from 0.35 to 1.1 microns, to a
total point-source depth of r~27.5. The LSST Science Book describes the basic
parameters of the LSST hardware, software, and observing plans. The book
discusses educational and outreach opportunities, then goes on to describe a
broad range of science that LSST will revolutionize: mapping the inner and
outer Solar System, stellar populations in the Milky Way and nearby galaxies,
the structure of the Milky Way disk and halo and other objects in the Local
Volume, transient and variable objects both at low and high redshift, and the
properties of normal and active galaxies at low and high redshift. It then
turns to far-field cosmological topics, exploring properties of supernovae to
z~1, strong and weak lensing, the large-scale distribution of galaxies and
baryon oscillations, and how these different probes may be combined to
constrain cosmological models and the physics of dark energy.
Comment: 596 pages. Also available at full resolution at
http://www.lsst.org/lsst/sciboo
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application, and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Advanced techniques and algorithms to collect, analyze and visualize spatiotemporal data from social media feeds
Extracting Computational Representations of Place with Social Sensing
Place-based GIS is at the forefront of GIScience research and is characterized by textual descriptions, human conceptualizations, and the spatial-semantic relationships among places. The concept of place is difficult to handle in geographic information science and systems because of its intrinsic vagueness. Places arise from the complex interaction of individuals, society, and the environment. The exact delineation of vague regions is challenging because their borders are vague and membership within a region varies non-monotonically and as a function of context. Consequently, vague regions are difficult to handle computationally, e.g., in spatial analysis, cartography, geographic information retrieval, and GIS workflows in general. The emergence of big data brings new opportunities to understand place semantics from large-scale volunteered geographic information and data streams, such as geotags, texts, activity streams, and GPS trajectories. The term "social sensing" describes such individual-level big geospatial data and the associated analysis methods. In this dissertation, I present a generalizable, data-driven framework that complements classical top-down approaches by extracting representations of vague cognitive regions and functional regions bottom-up, using spatial statistics and machine learning techniques with various social sensing sources. I demonstrate how to derive crisp boundaries for cognitive and functional regions from points-of-interest data, and show how natural language processing techniques can enrich our understanding of places and form a foundation for the semantic characterization of place types and the generalization of regions. This work contributes computational methodologies for extracting vague cognitive regions and functional regions using data-driven approaches, as well as a novel semantic generalization processing technique.
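The bottom-up derivation of crisp region boundaries from points-of-interest data described above can be illustrated with a minimal sketch. The point cloud, the grid resolution, and the 50%-of-peak density threshold are all illustrative assumptions, not the dissertation's actual pipeline:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical geotagged points of interest (lon, lat) associated with a
# vague region, standing in for check-ins harvested from social media.
rng = np.random.default_rng(0)
poi = rng.normal(loc=[-122.41, 37.77], scale=[0.01, 0.01], size=(500, 2))

# Kernel density estimate over the point cloud: the density surface acts
# as a graded membership function for the vague region.
kde = gaussian_kde(poi.T)

# Evaluate on a grid and derive a crisp boundary by thresholding the
# density at a fraction of its peak (the 50% level is an assumption).
lon = np.linspace(poi[:, 0].min(), poi[:, 0].max(), 100)
lat = np.linspace(poi[:, 1].min(), poi[:, 1].max(), 100)
xx, yy = np.meshgrid(lon, lat)
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
crisp_mask = density >= 0.5 * density.max()

print(f"grid cells inside the crisp region: {crisp_mask.sum()} / {crisp_mask.size}")
```

Thresholding a density surface is only one of several ways to crisp a graded membership function; contour-based or percentile-based cutoffs work the same way on the `density` grid.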
Volunteered Geographic Information
This open access book covers methods for retrieval, semantic representation, and analysis of Volunteered Geographic Information (VGI), as well as geovisualization and user interactions related to VGI, and discusses selected topics in active participation, social context, and privacy awareness. It presents the results of the DFG-funded priority program "VGI: Interpretation, Visualization, and Social Computing" (2016-2023). The book includes three parts representing the principal research pillars within the program. Part I "Representation and Analysis of VGI" discusses recent approaches to enhance the representation and analysis of VGI. It includes semantic representation of VGI data in knowledge graphs; machine-learning approaches to VGI mining, completion, and enrichment as well as to the improvement of data quality and fitness for purpose. Part II "Geovisualization and User Interactions related to VGI" explores geovisualizations and user interactions supporting the analysis and presentation of VGI data. When designing these visualizations and user interactions, the specific properties of VGI data, the knowledge and abilities of different target users, and the technical viability of solutions need to be considered. Part III "Active Participation, Social Context and Privacy Awareness" addresses the human impact associated with VGI. It includes chapters on the use of wearable sensors worn by volunteers to record their exposure to environmental stressors on their daily journeys, on the collective behavior of people using location-based social media and movement data from football matches, and on the motivation of volunteers who provide important support in information gathering, filtering, and analysis of social media in disaster situations. The book is of interest to researchers and advanced professionals in geoinformation, cartography, visual analytics, data science, and machine learning.
Mapping and monitoring forest remnants : a multiscale analysis of spatio-temporal data
KEYWORDS: Landsat, time series, machine learning, semideciduous Atlantic forest, Brazil, wavelet transforms, classification, change detection
Forests play a major role in important global matters such as the carbon cycle, climate change, and biodiversity. Forests also influence soil and water dynamics, with major consequences for ecological relations and decision-making. One basic requirement to quantify and model these processes is the availability of accurate maps of forest cover. Data acquisition and analysis at appropriate scales is the keystone to achieving the mapping accuracy needed for the development and reliable use of ecological models. The current and upcoming production of high-resolution data sets, plus the ever-increasing time series collected since the 1970s, must be effectively explored. Missing values and distortions further complicate the analysis of these data. Thus, integration and proper analysis are of utmost importance for environmental research. New conceptual models in environmental sciences, like the perception of multiple scales, require the development of effective implementation techniques. This thesis presents new methodologies to map and monitor forests over large, highly fragmented areas with complex land-use patterns. The use of temporal information is extensively explored to distinguish natural forests from other land-cover types that are spectrally similar. In chapter 4, novel schemes based on multiscale wavelet analysis are introduced, which enabled effective preprocessing of long time series of Landsat data and improved their applicability to environmental assessment. In chapter 5, the produced time series, as well as other information on spectral and spatial characteristics, were used to classify forested areas in an experiment relating a number of combinations of attribute features. Feature sets were defined based on expert knowledge and on data mining techniques, and used as input to traditional and machine learning algorithms for pattern recognition, viz. maximum likelihood, univariate and multivariate decision trees, and neural networks. The results showed that maximum likelihood classification using temporal texture descriptors extracted with wavelet transforms was the most accurate for classifying the semideciduous Atlantic forest in the study area. In chapter 6, a multiscale approach to digital change detection was developed to deal with multisensor and noisy remotely sensed images. Changes were extracted according to size classes, minimising the effects of geometric and radiometric misregistration. Finally, in chapter 7, an automated procedure for GIS updating based on feature extraction, segmentation, and classification was developed to monitor the remnants of semideciduous Atlantic forest. The procedure showed significant improvements over post-classification comparison and direct multidate classification based on artificial neural networks.
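The wavelet-based preprocessing of Landsat time series described in chapter 4 can be sketched in miniature. The synthetic NDVI series, the hand-rolled Haar transform, and the universal threshold are simplifying assumptions; the thesis's actual multiscale schemes are richer:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: (approximation, detail)."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def inverse_haar_step(approx, detail):
    """Invert one Haar level, interleaving the reconstructed samples."""
    out = np.empty(approx.size * 2)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

# Synthetic NDVI time series for one pixel: a seasonal cycle plus noise,
# standing in for a real multi-year Landsat stack.
rng = np.random.default_rng(1)
t = np.arange(128)
ndvi = 0.5 + 0.2 * np.sin(2 * np.pi * t / 23) + rng.normal(0, 0.05, t.size)

# Two-level decomposition; soft-threshold the detail coefficients with a
# universal threshold (the noise level 0.05 is assumed known here).
a1, d1 = haar_step(ndvi)
a2, d2 = haar_step(a1)
thresh = 0.05 * np.sqrt(2 * np.log(ndvi.size))
soft = lambda d: np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
smoothed = inverse_haar_step(inverse_haar_step(a2, soft(d2)), soft(d1))

print(f"series std: {np.std(ndvi):.3f}, residual std: {np.std(ndvi - smoothed):.3f}")
```

The coarse seasonal structure survives in the approximation coefficients while most of the noise is removed from the details, which is the property that makes such series usable as temporal texture descriptors for classification.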
Essays on Machine Learning in Risk Management, Option Pricing, and Insurance Economics
Dealing with uncertainty is at the heart of financial risk management and asset pricing. This cumulative dissertation consists of four independent research papers that study various aspects of uncertainty, ranging from estimation and model risk, through the volatility risk premium, to the measurement of unobservable variables.
In the first paper, a non-parametric estimator of conditional quantiles is proposed that builds on methods from the machine learning literature. The so-called leveraging estimator is discussed in detail and analyzed in an extensive simulation study. Subsequently, the estimator is used to quantify the estimation risk of Value-at-Risk and Expected Shortfall models. The results suggest that there are significant differences in the estimation risk of various GARCH-type models, and that estimation risk is generally higher for the Expected Shortfall than for the Value-at-Risk.
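The idea of quantifying estimation risk for Value-at-Risk and Expected Shortfall can be sketched with plain historical simulation and a bootstrap. This stands in for, and is much simpler than, the paper's leveraging estimator, and the simulated returns are purely illustrative:

```python
import numpy as np

# Simulated heavy-tailed daily returns stand in for real data; plain
# historical simulation replaces the paper's leveraging estimator here.
rng = np.random.default_rng(2)
returns = 0.01 * rng.standard_t(df=5, size=1000)

def var_es(x, alpha=0.975):
    """Historical-simulation Value-at-Risk and Expected Shortfall."""
    var = -np.quantile(x, 1 - alpha)
    es = -x[x <= -var].mean()
    return var, es

# Bootstrap the sampling distribution of both estimators: the spread of
# the replicates is one simple proxy for estimation risk.
boot = np.array([var_es(rng.choice(returns, returns.size)) for _ in range(500)])
var_se, es_se = boot.std(axis=0)
print(f"VaR s.e.: {var_se:.4f}, ES s.e.: {es_se:.4f}")
```

Because the Expected Shortfall averages over the tail beyond the quantile, its bootstrap spread is typically wider than the Value-at-Risk's, consistent with the paper's finding that ES estimation risk tends to be higher.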
In the second paper, the leveraging estimator is applied to realized and implied volatility estimates of US stock options to empirically test if the volatility risk premium is priced in the cross-section of option returns. A trading strategy that is long (short) in a portfolio with low (high) implied volatility conditional on the realized volatility yields average monthly returns that are economically and statistically significant.
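The conditional long-short sort described above can be sketched as a double sort on simulated data; the return-generating assumptions here are illustrative and are not the paper's empirical setup:

```python
import numpy as np
import pandas as pd

# Hypothetical option cross-section: realized volatility (rv), implied
# volatility (iv), and a next-period return that decreases in the
# volatility risk premium (iv - rv). All numbers are illustrative.
rng = np.random.default_rng(3)
n = 1000
rv = rng.uniform(0.1, 0.6, n)
iv = rv + rng.normal(0.0, 0.05, n)
ret = -0.5 * (iv - rv) + rng.normal(0.0, 0.02, n)
df = pd.DataFrame({"rv": rv, "iv": iv, "ret": ret})

# Conditional sort: within each realized-volatility quintile, rank
# options by implied volatility, then go long the low-IV portfolio and
# short the high-IV portfolio.
df["rv_bin"] = pd.qcut(df["rv"], 5, labels=False)
df["iv_rank"] = df.groupby("rv_bin")["iv"].transform(
    lambda s: pd.qcut(s, 5, labels=False)
)
long_short = (
    df.loc[df["iv_rank"] == 0, "ret"].mean()
    - df.loc[df["iv_rank"] == 4, "ret"].mean()
)
print(f"long-short portfolio return: {long_short:.4f}")
```

Sorting on implied volatility *conditional* on realized volatility, rather than on implied volatility alone, is what isolates the premium component rather than the volatility level.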
The third paper investigates the model risk of multivariate Value-at-Risk and Expected Shortfall models in a comprehensive empirical study on copula GARCH models. The paper finds that model risk is economically significant, especially high during periods of financial turmoil, and mainly due to the choice of the copula.
In the fourth paper, the relation between digitalization and the market value of US insurers is analyzed. To this end, a text-based measure of digitalization building on Latent Dirichlet Allocation is proposed. It is shown that a rise in digitalization efforts is associated with an increase in market valuations.

1 Introduction
1.1 Motivation
1.2 Conditional quantile estimation via leveraging optimal quantization
1.3 Cross-section of option returns and the volatility risk premium
1.4 Marginals versus copulas: Which account for more model risk in multivariate risk forecasting?
1.5 Estimating the relation between digitalization and the market value of
insurers
2 Conditional Quantile Estimation via Leveraging Optimal Quantization
2.1 Introduction
2.2 Optimal quantization
2.3 Conditional quantiles through leveraging optimal quantization
2.4 The hyperparameters N, λ, and γ
2.5 Simulation study
2.6 Empirical application
2.7 Conclusion
3 Cross-Section of Option Returns and the Volatility Risk Premium
3.1 Introduction
3.2 Capturing the volatility risk premium
3.3 Empirical study
3.4 Robustness checks
3.5 Conclusion
4 Marginals Versus Copulas: Which Account for More Model Risk in Multivariate Risk Forecasting?
4.1 Introduction
4.2 Market risk models and model risk
4.3 Data
4.4 Analysis of model risk
4.5 Model risk for models in the model confidence set
4.6 Model risk and backtesting
4.7 Conclusion
5 Estimating the Relation Between Digitalization and the Market Value of
Insurers
5.1 Introduction
5.2 Measuring digitalization using LDA
5.3 Financial data & empirical strategy
5.4 Estimation results
5.5 Conclusion
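The text-based digitalization measure of the fourth paper (chapter 5) can be sketched with a topic model on toy documents; the corpus, the two-topic setting, and the seed-word rule for picking the "digitalization" topic are all illustrative assumptions:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for insurers' report text; real filings are far longer
# and the two-topic setting is an assumption for illustration.
docs = [
    "digital platform data analytics cloud automation app",
    "digital insurance app online customer data technology",
    "claims underwriting premium reserves actuarial risk",
    "premium policy claims agents branch underwriting",
]

vec = CountVectorizer()
dtm = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)  # rows: topic shares per document

# Pick the 'digitalization' topic as the one loading most heavily on the
# seed word "digital", and use its per-document share as the measure.
digital_idx = lda.components_[:, vec.vocabulary_["digital"]].argmax()
digitalization = doc_topics[:, digital_idx]
print([round(float(x), 2) for x in digitalization])
```

The per-document topic share then serves as a regressor against market valuations, which is the design the abstract describes.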
