Modularis: Modular Relational Analytics over Heterogeneous Distributed Platforms
The enormous quantity of data produced every day together with advances in
data analytics has led to a proliferation of data management and analysis
systems. Typically, these systems are built around highly specialized
monolithic operators optimized for the underlying hardware. While effective in
the short term, such an approach makes the operators cumbersome to port and
adapt, which is increasingly required due to the speed at which algorithms and
hardware evolve. To address this limitation, we present Modularis, an execution
layer for data analytics based on sub-operators, i.e., composable building
blocks resembling traditional database operators but at a finer granularity. To
demonstrate the advantages of our approach, we use Modularis to build a
distributed query processing system supporting relational queries running on an
RDMA cluster, a serverless cloud platform, and a smart storage engine.
Modularis requires minimal code changes to execute queries across these three
diverse hardware platforms, showing that the sub-operator approach reduces the
amount and complexity of the code. In fact, changes in the platform affect only
sub-operators that depend on the underlying hardware. We show the end-to-end
performance of Modularis by comparing it with a framework for SQL processing
(Presto), a commercial cluster database (SingleStore), as well as
Query-as-a-Service systems (Athena, BigQuery). Modularis outperforms all these
systems, proving that the design and architectural advantages of a modular
design can be achieved without degrading performance. We also compare Modularis
with a hand-optimized implementation of a join for RDMA clusters. We show that
Modularis has the advantage of being easily extensible to a wider range of join
variants and group-by queries, none of which are supported by the hand-tuned
join.
Comment: Accepted at PVLDB vol. 1
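To make the sub-operator idea concrete, here is a minimal illustrative sketch, not Modularis's actual API: each sub-operator is a small composable stage, and a full relational operator (here, a hash equi-join) is assembled from them. The function names and data are hypothetical.

```python
# Illustrative sketch of the sub-operator idea from the abstract,
# NOT Modularis's real interface. A hash join is composed from three
# finer-grained building blocks: scan, build_hash, and probe.

def scan(rows):
    # Sub-operator: produce tuples from a base table.
    yield from rows

def build_hash(tuples, key):
    # Sub-operator: materialize a hash table on the join key.
    table = {}
    for t in tuples:
        table.setdefault(t[key], []).append(t)
    return table

def probe(tuples, table, key):
    # Sub-operator: probe the hash table, emit joined tuples.
    for t in tuples:
        for match in table.get(t[key], []):
            yield {**match, **t}

# Compose the sub-operators into a complete equi-join. In the paper's
# design, only sub-operators that touch the hardware (e.g., a network
# exchange over RDMA) would differ across platforms.
left = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
right = [{"id": 2, "city": "x"}]
joined = list(probe(scan(right), build_hash(scan(left), "id"), "id"))
```

The point of the decomposition is that porting the system means swapping only the hardware-facing stages, while the join logic itself stays untouched.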
Adaptive Execution of Compiled Queries
Compiling queries to machine code is arguably the most efficient way of executing queries. One often overlooked problem with compilation, however, is the time it takes to generate machine code. Even with fast compilation frameworks like LLVM, generating machine code for complex queries routinely takes hundreds of milliseconds. Such compilation times can be a major disadvantage for workloads that execute many complex but quick queries. To solve this problem, we propose an adaptive execution framework, which dynamically and transparently switches from interpretation to compilation. We also propose a fast bytecode interpreter for LLVM, which can execute queries without costly translation to machine code and thereby dramatically reduces query latency. Adaptive execution is dynamic, fine-grained, and can execute different code paths of the same query using different execution modes. Our extensive evaluation shows that this approach achieves optimal performance in a wide variety of settings---low latency for small data sets and maximum throughput for large data sets.
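The mode-switching idea can be sketched as follows. This is a toy illustration, not the paper's system: it starts on a cheap interpreted path and transparently switches to a "compiled" path once enough tuples have been processed to amortize compilation cost. Compilation is simulated with Python's built-in compile(); the paper targets LLVM machine code. The class name and threshold are hypothetical.

```python
# Toy sketch of adaptive execution: interpret first, then switch to a
# pre-compiled code path once the work is large enough to pay for
# compilation. NOT the paper's implementation.

class AdaptiveFilter:
    COMPILE_AFTER = 1000  # tuples processed before switching modes

    def __init__(self, predicate_src):
        self.src = predicate_src   # e.g. "x > 10"
        self.processed = 0
        self.compiled = None       # filled in lazily on the fast path

    def _interpret(self, x):
        # Slow path: re-parse and evaluate the expression per tuple.
        return eval(self.src, {"x": x})

    def _maybe_compile(self):
        # Build the fast path once, after the threshold is crossed.
        if self.compiled is None and self.processed >= self.COMPILE_AFTER:
            self.compiled = compile(self.src, "<pred>", "eval")

    def matches(self, x):
        self.processed += 1
        self._maybe_compile()
        if self.compiled is not None:
            return eval(self.compiled, {"x": x})  # compiled path
        return self._interpret(x)                 # interpreted path

f = AdaptiveFilter("x > 10")
out = [x for x in range(2000) if f.matches(x)]
```

The switch is transparent to the caller: matches() keeps the same signature in both modes, mirroring how the framework can run different code paths of one query under different execution modes.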
Incorporating Census Data into a Geospatial Student Database
The University of New Mexico (UNM) stores data on students, faculty, and staff at the University. The data is used to generate reports and complete surveys for several local, statewide, and nationwide reporting entities. The reports convey statistical and analytical information such as the graduation rates, retention, performance, ethnicity, age, and gender of students. Furthermore, the Institute of Design and Innovation (IDI) and the Office of Institutional Analytics (OIA) at UNM use the data for various predictive studies aimed at improving student outcomes.
This thesis proposes geospatial data as an additional layer of information for the data repository. The paper runs through the general steps involved in setting up a geospatial database using PostgreSQL and geospatial extensions including PostGIS, Tiger Geocoder, and Address Standardizer. With geospatial functionality incorporated into the data repository, the university can determine how far away students live, which amenities are in proximity to students, and other geospatial features that describe students’ journeys through college.
To demonstrate how the university could exploit geospatial functionality, a dataset of UNM students is spatially joined to socioeconomic data from the United States Census Bureau. Various student-related geospatial queries are shown, as well as how to set up a geospatial database.
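The "how far away do students live?" question above would be answered in PostGIS over geocoded addresses; as a minimal self-contained illustration of the underlying computation, here is a plain-Python haversine (great-circle) distance between two latitude/longitude points. The coordinates are hypothetical, not UNM data.

```python
# Illustrative sketch of the distance computation behind a geospatial
# "how far do students live?" query. A real deployment would use
# PostGIS over geocoded addresses; this shows only the math.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on the Earth's surface.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Hypothetical student address vs. a hypothetical campus point:
dist = haversine_km(35.08, -106.62, 35.09, -106.61)
```

In the database itself, the same question reduces to a distance function applied during a spatial join between the student table and geocoded reference points.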
Analyzing data in the Internet of Things
The Internet of Things (IoT) is growing fast. According to the International Data Corporation (IDC), more than 28 billion things will be connected to the Internet by 2020—from smartwatches and other wearables to smart cities, smart homes, and smart cars. This O’Reilly report dives into the IoT industry through a series of illuminating talks and case studies presented at 2015 Strata + Hadoop World Conferences in San Jose, New York, and Singapore.
Among the topics in this report, you’ll explore the use of sensors to generate predictions, using data to create predictive maintenance applications, and modeling the smart and connected city of the future with Kafka and Spark.
Case studies include:
Using Spark Streaming for proactive maintenance and accident prevention in railway equipment
Monitoring subway and expressway traffic in Singapore using telco data
Managing emergency vehicles through situation awareness of traffic and weather in the smart city pilot in Oulu, Finland
Capturing and routing device-based health data to reduce cardiovascular disease
Using data analytics to reduce human space flight risk in NASA’s Orion program
This report concludes with a discussion of ethics related to algorithms that control things in the IoT. You’ll explore decisions related to IoT data, as well as opportunities to influence the moral implications involved in using the IoT.