
Commit e663af3 (1 parent: e7425a1)

alexanderb14 authored and mallamanis committed

Add some compiler papers

2 files changed: 27 additions & 0 deletions

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
---
layout: publication
title: "Compiler-based graph representations for deep learning models of code"
authors: A. Brauckmann, A. Goens, S. Ertel, J. Castrillon
conference: CC
year: 2020
bibkey: brauckmann2020compiler
additional_links:
- {name: "ACM", url: "https://dl.acm.org/doi/abs/10.1145/3377555.3377894"}
tags: ["representation", "compilation", "optimization", "GNN"]
---
In natural language processing, novel deep learning methods, such as recurrent neural networks (RNNs) over sequences of words, have been very successful. These methods have recently also been applied to tasks in compiler optimization, such as heterogeneous mapping of OpenCL kernels or predicting thread coarsening factors for optimal execution times. In contrast to natural languages, programming languages usually have a well-defined structure. This structure is what enables compilers to reason about programs on the foundations of graphs, such as abstract syntax trees (ASTs) or control-data flow graphs (CDFGs).

In this paper, we argue that we should use these graph structures instead of word sequences for learning compiler optimization tasks. To this end, we apply recently proposed graph neural networks (GNNs) to learn predictive compiler tasks on two representations based on ASTs and CDFGs. Experimental results show how these representations improve upon the accuracy of the state of the art in the task of heterogeneous OpenCL mapping, while providing inference times that are orders of magnitude faster, which is crucial for compiler optimizations. When testing on benchmark suites not included in training, our graph-based methods significantly outperform the state of the art by 12 percentage points in accuracy, and are the only ones to perform better than a random mapping. When testing on the task of predicting thread coarsening factors, we expose current limitations of deep learning in compilers: all of the deep learning approaches proposed so far, including our graph-based models, fail to produce an overall speedup with their predictions.
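The idea of treating a program's AST as a graph of labeled nodes and parent-child edges can be illustrated with a minimal, stdlib-only Python sketch. This is only a toy example of the representation, not the paper's toolchain, which derives ASTs and CDFGs for OpenCL/C code via a compiler frontend:

```python
import ast

def ast_to_graph(source):
    """Parse Python source into (node labels, parent->child edge list)."""
    tree = ast.parse(source)
    labels, edges, index = [], [], {}
    # First pass: assign an integer id and a type label to every AST node.
    for node in ast.walk(tree):
        index[id(node)] = len(labels)
        labels.append(type(node).__name__)
    # Second pass: record one directed edge per parent-child relation.
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            edges.append((index[id(node)], index[id(child)]))
    return labels, edges

labels, edges = ast_to_graph("x = a + b")
print(labels)      # node type labels, starting with 'Module'
print(edges)       # a tree: exactly one parent edge per non-root node
```

A GNN would then consume such a graph by embedding the node labels and propagating messages along the edges; a CDFG-style representation would add control- and data-flow edges on top of (or instead of) the syntactic ones.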
Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
---
layout: publication
title: "ComPy-Learn: A toolbox for exploring machine learning representations for compilers"
authors: A. Brauckmann, A. Goens, J. Castrillon
conference: FDL
year: 2020
bibkey: brauckmann2020compy
additional_links:
- {name: "IEEE", url: "https://ieeexplore.ieee.org/abstract/document/9232946"}
- {name: "Code", url: "https://github.com/tud-ccc/compy-learn"}
tags: ["representation", "compilation", "optimization", "GNN"]
---
Deep learning methods have been shown not only to improve software performance in compiler heuristics, but also, for example, to improve security in vulnerability prediction and to boost developer productivity in software engineering tools. A key to the success of such methods across these use cases is the expressiveness of the representation used to abstract from the program code. Recent work has shown that different such representations have unique advantages in terms of performance. However, determining the best-performing one for a given task is often not obvious and requires empirical evaluation.

Therefore, we present ComPy-Learn, a toolbox for conveniently defining, extracting, and exploring representations of program code. With syntax-level language information from the Clang compiler frontend and low-level information from the LLVM compiler backend, the tool supports the construction of linear and graph representations and enables an efficient search for the best-performing representation and model for tasks on program code.
