Capturing the Laws of (Data) Nature
Model fitting is at the core of many scientific and industrial
applications. These models encode a wealth of domain
knowledge, something a database decidedly lacks. Except in
simple cases, a database cannot yet hope to achieve a deeper
understanding of the hidden relationships in the data.
We propose to harvest the statistical models that users fit
to the stored data as part of their analysis and use them to
advance physical data storage and approximate query answering
to unprecedented levels of performance. We motivate
our approach with an astronomical use case and discuss its
potential.
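The idea of harvesting user-fitted models for approximate query answering can be sketched as follows. This is a minimal illustration with invented data and names, not the paper's implementation: a column obeying a near-linear "law" is summarised by a fitted model, which then answers a range aggregate without scanning the stored values.

```python
import numpy as np

# Synthetic column with a hidden linear law plus noise (illustrative only).
rng = np.random.default_rng(0)
x = np.arange(10_000, dtype=float)
y = 2.5 * x + 10.0 + rng.normal(0.0, 1.0, x.size)

# "Harvest" the model a user would fit during their analysis.
slope, intercept = np.polyfit(x, y, deg=1)

def approx_avg(lo: float, hi: float) -> float:
    """Answer SELECT AVG(y) WHERE x BETWEEN lo AND hi from the model alone."""
    return slope * (lo + hi) / 2.0 + intercept

exact = y[(x >= 1000) & (x <= 2000)].mean()
approx = approx_avg(1000, 2000)
```

The model-based answer costs O(1) regardless of the range size, at the price of an error bounded by how well the fitted law describes the data.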
BATSE observations of BL Lac Objects
The Burst and Transient Source Experiment (BATSE) on the Compton Gamma-Ray Observatory has been shown to be sensitive to non-transient hard X-ray sources in our galaxy, down to flux levels of 100 mCrab for daily measurements and 3 mCrab for integrations over several years. We use the continuous BATSE database and the Earth Occultation technique to extract average flux values between 20 and 200 keV from complete radio- and X-ray-selected BL Lac samples over a 2 year period.
Deep integration of machine learning into column stores
We leverage vectorized User-Defined Functions (UDFs) to efficiently integrate unchanged machine learning pipelines into an analytical data management system. Entire pipelines, including data, models, parameters and evaluation outcomes, are stored and executed inside the database system. Experiments using our MonetDB/Python UDFs show greatly improved performance due to reduced data movement and parallel processing opportunities. In addition, this integration enables meta-analysis of models using relational queries.
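The performance benefit of vectorized UDFs comes from the engine handing the function whole columns at once rather than invoking it per row. The sketch below shows only that calling shape; the function name and the stand-in "model" are our own, not MonetDB's actual registration API.

```python
import numpy as np

def predict_udf(feature_a: np.ndarray, feature_b: np.ndarray) -> np.ndarray:
    """Vectorized UDF: receives full columns as NumPy arrays and returns a
    column, so the model scores the whole batch in one call."""
    # Stand-in model: any vectorized computation over the input columns.
    return 0.7 * feature_a + 0.3 * feature_b

# One call scores every row in the batch; no per-row Python overhead.
scores = predict_udf(np.array([1.0, 2.0]), np.array([10.0, 20.0]))
```

Because the UDF operates on arrays, NumPy (or any ML library accepting arrays) runs at native speed, and the engine can parallelise across column partitions.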
Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model
Upcoming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a rich laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys, one that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data with image source density peaks of over 170,000 per field of view (500,000 deg⁻²). Our analysis demonstrates that horizontal table partitions of one-degree declination widths control the query run times. Usage of an index strategy where the partitions are densely sorted according to source declination yields ano
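The partition-then-match strategy can be illustrated with a deliberately simplified sketch (our own, not the pipeline's code): catalogue sources are bucketed into one-degree declination strips, and a new detection is compared only against the strips its search radius can touch, rather than the whole catalogue. A flat small-angle approximation stands in for the proper spherical separation.

```python
import numpy as np

def build_partitions(ra, dec):
    """Bucket catalogue source indices into one-degree declination strips."""
    parts = {}
    for i, d in enumerate(dec):
        parts.setdefault(int(np.floor(d)), []).append(i)
    return parts

def crossmatch(ra, dec, cat_ra, cat_dec, parts, radius_deg=1.0 / 3600):
    """Return, per detection, the index of the nearest catalogue source
    within radius_deg, or -1 if none; only nearby strips are scanned."""
    matches = []
    for j in range(len(ra)):
        best, best_d = -1, radius_deg
        strips = {int(np.floor(dec[j] - radius_deg)),
                  int(np.floor(dec[j] + radius_deg))}
        for strip in strips:
            for i in parts.get(strip, []):
                # Small-angle flat approximation of the angular separation.
                dd = np.hypot((ra[j] - cat_ra[i]) * np.cos(np.radians(dec[j])),
                              dec[j] - cat_dec[i])
                if dd < best_d:
                    best, best_d = i, dd
        matches.append(best)
    return matches
```

Declination strips work well as a partition key because a fixed angular search radius maps to a fixed declination range, so each look-up touches at most two strips regardless of catalogue size.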
- …