PLDS'20 Workshop

Programming Languages and Distributed Systems

March 5th & 6th 2020 at RISE Computer Science, Electrum Kista, Stockholm, Sweden

Programme

Day 1 | Thursday 5 March
12:30 - 13:00   Coffee Break
Coffee, tea, and cookies will be served in the workshop room.
13:00 - 13:30   Selected challenges in concurrent and distributed programming   
Philipp Haller (KTH Royal Institute of Technology)
We present three challenges in concurrent and distributed programming, as well as recent results addressing them. The first challenge consists of ensuring fault-tolerance properties in typed programming languages. The main question is how to enforce fault-tolerance properties for well-typed programs, as opposed to specific algorithms or systems. Towards addressing this question, we present the first correctness results for a typed calculus with first-class lineages. The second challenge consists of using data with different consistency properties safely within the same distributed application. To address this challenge, we propose a novel type system which provides a noninterference guarantee: mutations of potentially-inconsistent data cannot be observed via access to consistent data types. As a third challenge we propose the design of a concurrent domain-specific language for parallelizing static analysis problems.
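The noninterference guarantee described above is enforced statically by the proposed type system; as a rough illustration only, the following Python sketch (all names hypothetical) mimics the property at run time by rejecting flows from potentially-inconsistent data into consistent data:

```python
# Illustrative sketch (hypothetical API): a runtime analogue of the
# noninterference guarantee. The actual proposal rejects the bad flow
# statically, at type-checking time; here we simply raise at run time.

class Eventual:
    """Eventually consistent replicated value (illustrative)."""
    def __init__(self, value):
        self.value = value

class Consistent:
    """Strongly consistent replicated value (illustrative)."""
    def __init__(self, value):
        self._value = value
    def read(self):
        return self._value
    def write(self, value):
        # Only consistent data may flow into a consistent cell.
        if isinstance(value, Eventual):
            raise TypeError("inconsistent data cannot flow into consistent data")
        self._value = value

counter = Consistent(0)
cache = Eventual(41)

counter.write(counter.read() + 1)   # fine: consistent -> consistent
try:
    counter.write(cache)            # bad flow: inconsistent -> consistent
except TypeError as e:
    print("rejected:", e)
```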
13:30 - 14:00   Why time is evil and what to do about it: designing distributed systems as pure functional programs plus interaction points   
Peter Van Roy (Université catholique de Louvain)
There exists a useful purely functional subset of distributed programming. Purely functional distributed computations do not interact with the real world (because all inputs must be known in advance), but they support message asynchrony and reordering, and they can be used to build networks of communicating agents. General distributed programming consists of purely functional distributed programming plus interaction points for real-world interactions. Experience shows that realistic distributed systems are mostly functional, i.e., they have very few interaction points. We give a precise formal definition of interaction points and we present a design language, called PROP (Piecewise Relative Observable Purity) to specify distributed systems explicitly as a purely functional core plus interaction points. We aim to turn this into a practical tool that can leverage the powerful techniques available to functional programming for distributed systems design.
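The separation sketched in the abstract can be made concrete in a few lines. The following Python sketch (names hypothetical; this is not the PROP language itself) models an agent as a pure transition function, with all real-world effects confined to an explicitly marked interaction point:

```python
# Illustrative sketch: pure functional core plus an explicit interaction
# point. The agent function is deterministic and does no I/O, so it is
# insensitive to message asynchrony; deposits even commute under reordering.

def agent(state, message):
    """Pure core: (state, message) -> (new state, outputs)."""
    if message[0] == "deposit":
        new_state = state + message[1]
        return new_state, [("ack", new_state)]
    return state, []

def interaction_point(outputs):
    """The only place the computation touches the real world."""
    for tag, value in outputs:
        print(tag, value)  # side effect, kept at the boundary

state = 0
for msg in [("deposit", 5), ("deposit", 7)]:  # asynchronously arriving messages
    state, outs = agent(state, msg)
    interaction_point(outs)
```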
14:00 - 14:30   Seamless Batch and Stream Computation on Heterogeneous Hardware with Arcon   
Paris Carbone (RISE Research Institutes of Sweden)
Contemporary end-to-end data pipelines need to combine many diverse workloads (data stream analytics, ML training, graph algorithms, etc.). For each of these workload types there exist several frontends today, exposed in different programming languages, as well as runtimes tailored to support a particular frontend and possibly a hardware architecture (e.g., GPUs). The resulting pipelines suffer in terms of complexity and performance due to excessive type conversions, full materialization of intermediate results, and a lack of cross-frontend computation-sharing capabilities.
In this talk we introduce the Arcon system, the core principles behind it and our past work that influenced its conception. Arcon aims to provide a unified approach to declare and execute analytical tasks across data type-boundaries. The system achieves that through Arc, an intermediate language that captures batch and stream transformations as well as a task-driven distributed runtime facilitated by Kompact, a Rust actor framework.
14:30 - 15:00   Arc: An MLIR dialect for Data Analytics   
Klas Segeljakt (KTH Royal Institute of Technology)

Modern-day end-to-end data analytics pipelines are centered around transforming various kinds of collection-based data types, including tensors, tables, graphs, and streams. Current data analytics frameworks are typically designed around one or two of these data types. Examples of such frameworks include Apache Calcite, which leverages relational streams; Halide, which supports streaming stencil computations; and Lara, which offers unified support for operations on bags with matrices.

In the CDA project, the goal is to leverage the ability to express and optimise operations over more than two types of collections. To this end, we propose Arc, an intermediate representation for data analytics. Arc is implemented as a dialect in MLIR (Multi-Level IR) which is a highly modular and hackable compiler framework built on top of LLVM. This talk will introduce the design ideas behind the latest installment of Arc, and will give an overview of MLIR and its role in achieving our goal.

15:00 - 15:30   Coffee Break
Coffee, tea, and cookies will be served in the workshop room.
15:30 - 16:00   Dark silicon — a currency we do not control   
Holger Pirk (Imperial College London)
The breakdown of Dennard scaling changed the game of processor design: no longer can the entire die be filled with “always-on” components — some regions must be powered up and down at runtime to prevent the chip from overheating. Such “dim” or “dark” silicon is the new currency of chip design, raising the question: what functionality should be implemented in dark silicon? Viable candidates are any non-essential units that support important applications. Naturally, database researchers were quick to claim this resource, arguing that it should be used to implement instructions and primitives supporting database workloads.
In this talk, we argue that, due to economic constraints, such a design is unlikely to be implemented in mainstream server chips. Instead, chip designers will spend silicon on high-volume market segments such as AI, security, or graphics/AR, which require a different set of primitives. Consequently, database researchers need to find uses for the actual functionality of chips rather than wishing for features that are economically infeasible. Let us develop innovative ways to exploit the “hardware we have, not the hardware we wish to have at a later time”. In the talk, we discuss examples of creative use of hardware for data management purposes, such as TLBs for MVCC, transactional memory for statistics collection, and hardware graphics shaders for data analytics. We also highlight some processor functionality that still calls for creative use, such as many floating-point instructions, integrated sound processors, and some of the model-specific registers.
16:00 - 18:00   Fundamentals of Stream Data Processing with Apache Beam and Google Cloud Dataflow   
Cosmin Arad (Google Cloud, Big Data Analytics)

This talk presents a unified model for streaming and batch data processing. We start from the history of large-scale data processing systems at Google and their evolution towards higher levels of abstraction culminating in the Beam programming model. We delve into the fundamentals of out-of-order unbounded data stream processing and we show how Beam's powerful abstractions for reasoning about time greatly simplify this complex task. Beam provides a model that allows developers to focus on the four important questions that must be answered by any stream processing pipeline:

  • What results are being calculated,
  • Where in event time they are calculated,
  • When in processing time they are materialized, and
  • How refinements of results relate.

We discuss how these key questions and abstractions enable important properties of correctness, expressive power, composability, flexibility, and modularity. Furthermore, by cleanly separating these questions from runtime characteristics, Beam programs become portable across multiple runtime environments, both proprietary, such as Google Cloud Dataflow, and open-source, such as Flink, Spark, Samza, etc. We close with a presentation of the execution model and show how the Google Cloud Dataflow service optimizes and orchestrates the distributed execution and autoscaling of a Beam pipeline.
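The four questions above can be illustrated without any Beam dependency. The following plain-Python sketch (the real model expresses these as windows, triggers, and accumulation modes) computes per-window sums over timestamped events:

```python
# Plain-Python sketch of the what/where/when/how questions; illustrative
# only, not the Beam API.
from collections import defaultdict

events = [(1, 3), (4, 7), (12, 2), (13, 8)]  # (event_time, value)

WINDOW = 10  # "Where in event time": fixed 10-unit windows

sums = defaultdict(int)  # "What results": per-window sums
for event_time, value in events:
    window = event_time // WINDOW
    sums[window] += value  # "How refinements relate": accumulate

# "When in processing time": here we materialize once, after all input is
# seen; a trigger would instead emit (possibly early or late) panes.
result = dict(sums)
print(result)  # {0: 10, 1: 10}
```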


Day 2 | Friday 6 March
08:30 - 09:00   Breakfast
Coffee, tea, and sandwiches will be served in the workshop room.
09:00 - 09:30   Towards Distribution Transparency for Supervised ML With Oblivious Training Functions   
Jim Dowling (Logical Clocks/KTH Royal Institute of Technology)
Building and productionizing Machine Learning (ML) models is a process of interdependent steps of iterative code updates, including exploratory model design, hyperparameter tuning, ablation experiments, and model training. Industrial-strength ML involves doing this at scale, using many compute resources, and this requires rewriting the training code to account for distribution. The result is that moving from a single-host program to a cluster hinders iterative development, as multiple versions of the software would have to be maintained and kept consistent. In this talk, we introduce the distribution-oblivious training function as an abstraction for ML development in Python, whereby developers can reuse the same training function when running a notebook on a laptop or performing scale-out hyperparameter search and distributed training on clusters. Programs written in our framework look like industry-standard ML programs, as we factor out dependencies using best-practice programming idioms (such as functions to generate models and data batches). We believe that our approach takes a step towards unifying single-host and distributed ML development.
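The pattern can be sketched as follows; the launcher API shown in the comment is hypothetical, not the actual framework, but the key point survives: the training function itself contains no distribution code.

```python
# Illustrative sketch of a distribution-oblivious training function.

def make_data():
    """Factored-out data generation, per the best-practice idiom above."""
    return [(x, 2 * x) for x in range(10)]

def train(lr=0.1, epochs=50):
    """Training function: no cluster or distribution code inside.
    Fits a slope w to y = 2x with per-sample gradient steps."""
    data = make_data()
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x / len(data)
    return w  # learned slope, approaches 2.0

# Laptop: just call it.
w = train()

# Cluster (hypothetical launcher API): the same function could be fanned
# out unchanged, e.g. experiment.grid_search(train, {"lr": [0.01, 0.1]})
print(round(w, 2))
```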
09:30 - 10:00   Why languages for distributed systems are inevitable   
Guido Salvaneschi (Technical University of Darmstadt)
Over the last few years, ubiquitous connectivity has led to data being constantly generated at an unprecedented rate. As a result, large amounts of data are constantly being processed in a heterogeneous infrastructure which stems from the convergence of edge (IoT, mobile) and cloud computing. This poses fundamental challenges in software design, especially with respect to fault tolerance, data consistency, and privacy.
In this presentation, we discuss recent research results we achieved in this context at various levels. We describe an innovative programming framework that improves and simplifies the design of data-intensive applications. We also present the use of our programming framework on real-world case studies, emphasizing how to achieve fault tolerance and data consistency. Finally, we propose how to account for privacy in the software engineering process for data-intensive distributed applications.
10:00 - 10:30   A Programming Language and End-To-End Toolchain for Real-Time Systems
Saranya Natarajan (KTH Royal Institute of Technology)
Complex real-time systems are traditionally developed in several disjoint steps: (i) decomposition of applications into sets of recurrent tasks, (ii) worst-case execution time estimation, and (iii) schedulability analysis. Each step is already in itself complex and error-prone, and the composition of all three poses a nontrivial integration problem. In particular, it is challenging to obtain an end-to-end analysis of timing properties of the whole system due to practical differences between the interfaces of tools for extracting task models, execution time analysis, and schedulability tests. In this talk, we first introduce a programming language for real-time systems called Timed C. We then introduce a seamless and pragmatic end-to-end compilation and timing analysis toolchain. The toolchain takes a Timed C program and automatically translates the timing primitives into executable code, measures execution times, verifies temporal correctness using an extended schedulability test, and performs sensitivity analysis.
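To give a flavour of the recurrent-task pattern such timing primitives express, here is a Python analogue (Timed C itself is C-based; this sketch only imitates the behaviour of a soft timing point, in the spirit of its sdelay primitive):

```python
# Illustrative analogue of a periodic task with a soft timing point.
# Not Timed C code: just a Python imitation of the release pattern.
import time

PERIOD = 0.05  # seconds; the task's period

def job(i):
    return i * i  # stand-in for the task body

deadline_misses = 0
next_release = time.monotonic()
for i in range(5):
    job(i)
    next_release += PERIOD
    slack = next_release - time.monotonic()
    if slack > 0:
        time.sleep(slack)       # wait out the remainder of the period
    else:
        deadline_misses += 1    # overrun: the job exceeded its period

print("misses:", deadline_misses)
```

Absolute release times (rather than sleeping a fixed amount after each job) prevent drift from accumulating across periods.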
10:30 - 11:00   Coffee Break
Coffee, tea, and cookies will be served in the workshop room.
11:00 - 12:00   Experiences from Testing and Verifying “Real-World” Concurrent and Distributed Systems   
Kostis Sagonas (Department of Information Technology, Uppsala University)

This talk will report on some experiences in applying two different stateless model checking (SMC) tools in two different case studies. The first of them applied Nidhugg, an SMC tool for C/Pthread programs, to the code of Tree RCU, the Hierarchical Read-Copy-Update synchronization mechanism for mutual exclusion used in the Linux kernel, a low-level and quite complex concurrent program. The second case study applied Concuerror, an SMC tool for Erlang programs, to test and verify, during their design phase by engineers at VMware, chain repair methods for CORFU, a distributed shared log which aims to be scalable and reliable in the presence of failures and asynchrony.

Besides the results from these two case studies, we will try to present some experiences and lessons learned about engaging in testing projects of that size and complexity.
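The core idea behind tools like Nidhugg and Concuerror, systematic exploration of thread interleavings, can be shown in miniature. This toy Python sketch enumerates every interleaving of two non-atomic increments and finds both the correct outcome and the classic lost update:

```python
# Toy model of stateless model checking: explore all interleavings of two
# threads, each doing a non-atomic increment (load then store) on a
# shared counter, and collect the reachable final values.
from itertools import permutations

def run(schedule):
    """Execute one interleaving; schedule is a sequence of thread ids."""
    shared = 0
    local = {0: 0, 1: 0}
    step = {0: 0, 1: 0}
    for tid in schedule:
        if step[tid] == 0:
            local[tid] = shared      # load
        else:
            shared = local[tid] + 1  # store
        step[tid] += 1
    return shared

# All 6 interleavings of [load0, store0] with [load1, store1],
# preserving each thread's program order.
schedules = set(permutations([0, 0, 1, 1]))
results = {run(s) for s in schedules}
print(sorted(results))  # [1, 2]: 1 is the lost update, 2 is correct
```

Real SMC tools do this exhaustively but cleverly, using reduction techniques to avoid exploring equivalent interleavings twice.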