URL of JLESC: https://jlesc.github.io

URL of the 5th JLESC workshop: https://jlesc.github.io/events/5th-jlesc-workshop/

Talk by Pierre Matri: Týr: Blob Storage Systems Meet Built-In Transactions

Abstract: Concurrent Big Data applications often require high-performance storage, as well as ACID (Atomicity, Consistency, Isolation, Durability) transaction support. Blobs (binary large objects) are an increasingly popular low-level model for addressing the storage needs of such applications, providing a solid base for developing higher-level storage solutions, such as object stores or distributed file systems. However, today's blob storage systems typically offer no transaction semantics. This forces users to carefully coordinate access to data in order to avoid race conditions, inconsistent writes, overwrites and other problems that cause erratic behavior. We argue there is a gap between existing storage solutions and application requirements, which limits the design of transaction-oriented applications. In this talk, we briefly introduce Týr, the first blob storage system to provide built-in, multiblob transactions, while retaining sequential consistency and high throughput under heavy access concurrency.
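To make the idea of multi-blob transactions concrete, here is a minimal sketch of optimistic, version-validated transactions over a toy in-memory blob store. The names (BlobStore, Transaction) and the validation scheme are hypothetical illustrations of the general concept, not Týr's actual API or protocol.

```python
# Toy in-memory blob store with optimistic multi-blob transactions.
# Hypothetical sketch; Tyr's real design differs.

class BlobStore:
    def __init__(self):
        self.blobs = {}     # name -> bytes
        self.versions = {}  # name -> int, bumped on every committed write

    def read(self, name):
        return self.blobs.get(name, b""), self.versions.get(name, 0)

class Transaction:
    def __init__(self, store):
        self.store = store
        self.read_set = {}   # name -> version observed at read time
        self.write_set = {}  # name -> new value, applied on commit

    def read(self, name):
        value, version = self.store.read(name)
        self.read_set[name] = version
        return value

    def write(self, name, value):
        self.write_set[name] = value

    def commit(self):
        # Validate: abort if any blob we read was modified concurrently.
        for name, seen in self.read_set.items():
            if self.store.versions.get(name, 0) != seen:
                return False
        # Apply all writes together (single-threaded toy; a real system
        # would use locking or consensus to make this step atomic).
        for name, value in self.write_set.items():
            self.store.blobs[name] = value
            self.store.versions[name] = self.store.versions.get(name, 0) + 1
        return True

store = BlobStore()
t = Transaction(store)
t.write("a", b"hello")
t.write("b", b"world")
print(t.commit())  # both blobs are updated together, or neither is
```

Without such a mechanism, an application updating blobs "a" and "b" could be interrupted between the two writes, leaving other readers with an inconsistent view, which is precisely the class of problems the abstract describes.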

Talk by Gabriel Antoniu: Spark versus Flink: Understanding Performance in Big Data Analytics Frameworks

Abstract: Big Data analytics has recently gained increasing popularity as a tool to process large amounts of data on-demand. Spark and Flink are two Apache-based data analytics frameworks that facilitate the development of multi-step data pipelines using directed acyclic graph patterns. Making the most out of these frameworks is challenging because efficient executions strongly rely on complex parameter configurations and on an in-depth understanding of the underlying architectural choices. Although extensive research has been devoted to improving and evaluating the performance of such analytics frameworks, most studies benchmark them against Hadoop as a baseline, a rather unfair comparison considering the fundamentally different design principles. This work aims to bring some justice in this respect, by directly comparing the performance of Spark and Flink. Our goal is to identify and explain the impact of the different architectural choices and the parameter configurations on the perceived end-to-end performance. To this end, we develop a methodology for correlating the parameter settings and the operators' execution plan with the resource usage. We use this methodology to dissect the performance of Spark and Flink with several representative batch and iterative workloads on up to 100 nodes. We highlight how performance correlates to operators, to resource usage and to the specifics of the internal framework design.
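The "multi-step pipelines using directed acyclic graph patterns" mentioned above can be sketched with a tiny lazily-evaluated dataset class in plain Python. This is an illustrative toy, not Spark's or Flink's API: the class and method names are hypothetical, but the key idea is the same, transformations only build up a plan, and nothing executes until an action (here, collect) is invoked.

```python
# Toy lazily-evaluated dataset mimicking the operator-chaining style of
# Spark/Flink pipelines. Hypothetical names; real frameworks build and
# optimize a DAG of operators before running it on a cluster.

class Dataset:
    def __init__(self, compute):
        self._compute = compute  # thunk producing the data; nothing runs yet

    @staticmethod
    def from_list(items):
        return Dataset(lambda: list(items))

    def flat_map(self, fn):
        return Dataset(lambda: [y for x in self._compute() for y in fn(x)])

    def map(self, fn):
        return Dataset(lambda: [fn(x) for x in self._compute()])

    def reduce_by_key(self, fn):
        def run():
            acc = {}
            for k, v in self._compute():
                acc[k] = fn(acc[k], v) if k in acc else v
            return list(acc.items())
        return Dataset(run)

    def collect(self):  # the "action" that triggers execution of the plan
        return self._compute()

# A classic multi-step batch pipeline: word count.
lines = Dataset.from_list(["big data", "big analytics"])
counts = (lines.flat_map(str.split)
               .map(lambda w: (w, 1))
               .reduce_by_key(lambda a, b: a + b))
print(sorted(counts.collect()))  # [('analytics', 1), ('big', 2), ('data', 1)]
```

How such chains are translated into physical operators, and how parameters (memory fractions, parallelism, network buffers) affect their execution, is exactly what the methodology in this talk correlates with resource usage.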