
Commit 5d1e630 (parent 5093558)
Author: Miltos Allamanis

Add two recently read papers.

2 files changed: 28 additions & 0 deletions

_publications/huang2021cosqa.markdown (16 additions & 0 deletions)
@@ -0,0 +1,16 @@
---
layout: publication
title: "CoSQA: 20,000+ Web Queries for Code Search and Question Answering"
authors: Junjie Huang, Duyu Tang, Linjun Shou, Ming Gong, Ke Xu, Daxin Jiang, Ming Zhou, Nan Duan
conference: ACL
year: 2021
bibkey: huang2021cosqa
additional_links:
   - {name: "ArXiV", url: "https://arxiv.org/abs/2105.13239"}
   - {name: "Code", url: "https://github.com/Jun-jie-Huang/CoCLR"}
tags: ["dataset", "search"]
---
Finding code given a natural language query is beneficial to the productivity of software developers. Future progress towards better semantic matching between queries and code requires richer supervised training resources. To remedy this, we introduce the CoSQA dataset. It includes 20,604 labels for pairs of natural language queries and code, each annotated by at least 3 human annotators. We further introduce a contrastive learning method dubbed CoCLR to enhance query-code matching, which serves as a data augmenter, producing additional artificially generated training instances. We show that, evaluated on CodeXGLUE with the same CodeBERT model, training on CoSQA improves the accuracy of code question answering by 5.1%, and incorporating CoCLR brings a further improvement of 10.5%.
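
For readers who want a concrete picture of what contrastive query-code training looks like, here is a minimal sketch of an in-batch contrastive (InfoNCE-style) loss over query and code embeddings. This is only an illustration under assumptions, not the paper's CoCLR method (which additionally augments the training data, e.g., by rewriting queries); the CodeBERT-style encoder is stood in for by random tensors, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def contrastive_query_code_loss(query_emb, code_emb, temperature=0.05):
    """In-batch contrastive loss: each query's positive is its paired code
    snippet; every other code snippet in the batch acts as a negative."""
    query_emb = F.normalize(query_emb, dim=-1)          # (B, D)
    code_emb = F.normalize(code_emb, dim=-1)            # (B, D)
    logits = query_emb @ code_emb.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings standing in for an encoder's outputs.
if __name__ == "__main__":
    batch, dim = 8, 768
    q = torch.randn(batch, dim, requires_grad=True)
    c = torch.randn(batch, dim, requires_grad=True)
    loss = contrastive_query_code_loss(q, c)
    loss.backward()
    print(float(loss))
```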

_publications/peng2021how.markdown (12 additions & 0 deletions)
@@ -0,0 +1,12 @@
---
layout: publication
title: "How could Neural Networks understand Programs?"
authors: Dinglan Peng, Shuxin Zheng, Yatao Li, Guolin Ke, Di He, Tie-Yan Liu
conference: ICML
year: 2021
bibkey: peng2021how
additional_links:
   - {name: "ArXiV", url: "https://arxiv.org/abs/2105.04297"}
tags: ["transformers"]
---
Semantic understanding of programs is a fundamental problem for programming language processing (PLP). Recent works that learn representations of code based on pre-training techniques in NLP have pushed the frontiers in this direction. However, the semantics of PL and NL have essential differences. If these are ignored, we believe it is difficult to build a model that understands programs well, whether by directly applying off-the-shelf NLP pre-training techniques to source code or by heuristically adding features to the model. In fact, the semantics of a program can be rigorously defined by formal semantics in PL theory. For example, operational semantics describes the meaning of a valid program as updating the environment (i.e., the memory address-value function) through fundamental operations such as memory I/O and conditional branching. Inspired by this, we propose a novel program semantics learning paradigm in which the model learns from information composed of (1) representations that align well with the fundamental operations in operational semantics, and (2) information about environment transitions, which is indispensable for program understanding. To validate our proposal, we present a hierarchical Transformer-based pre-training model called OSCAR to better facilitate the understanding of programs. OSCAR learns from intermediate representation (IR) and an encoded representation derived from static analysis, which are used to represent the fundamental operations and to approximate environment transitions, respectively. Empirically, OSCAR shows a strong capability for program semantics understanding on many practical software engineering tasks.
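
To make the hierarchical-encoder idea more concrete, the sketch below encodes each IR instruction with a token-level Transformer, pools it into a single vector, and then runs a second Transformer over the resulting instruction sequence. It is an assumption-laden illustration, not the authors' OSCAR architecture or its actual IR tokenization; all dimensions and names are hypothetical, and positional encodings plus the static-analysis features the abstract mentions are omitted for brevity.

```python
import torch
import torch.nn as nn

class HierarchicalIREncoder(nn.Module):
    """Two-level Transformer: a token-level encoder pools each IR instruction
    into one vector; an instruction-level encoder then contextualizes the
    sequence of instruction vectors."""
    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=2):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        tok_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.token_encoder = nn.TransformerEncoder(tok_layer, num_layers)
        ins_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.instr_encoder = nn.TransformerEncoder(ins_layer, num_layers)

    def forward(self, ir_tokens):
        # ir_tokens: (num_instructions, tokens_per_instruction) token ids
        tok = self.tok_embed(ir_tokens)                # (I, T, D)
        tok = self.token_encoder(tok)                  # token-level context within each instruction
        instr = tok.mean(dim=1)                        # pool each instruction -> (I, D)
        instr = self.instr_encoder(instr.unsqueeze(0)) # instruction-level context -> (1, I, D)
        return instr.squeeze(0)                        # (I, D) program representation

# Toy usage: 5 IR instructions, 6 tokens each, over a vocabulary of 1000 symbols.
enc = HierarchicalIREncoder(vocab_size=1000)
ids = torch.randint(0, 1000, (5, 6))
print(enc(ids).shape)  # torch.Size([5, 256])
```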
