This is PeerSum, a multi-document summarization dataset in the peer-review domain. More details can be found in the paper Summarizing Multiple Documents with Conversational Structure for Meta-review Generation, accepted at EMNLP 2023. The original code and datasets are publicly available on GitHub.

Please use the following code to download the dataset with the `datasets` library from Hugging Face.

```python
from datasets import load_dataset

# Load the full dataset, then materialize the three splits via the label field.
peersum_all = load_dataset('oaimli/PeerSum', split='all')
peersum_train = peersum_all.filter(lambda s: s['label'] == 'train')
peersum_val = peersum_all.filter(lambda s: s['label'] == 'val')
peersum_test = peersum_all.filter(lambda s: s['label'] == 'test')
```
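Once loaded, each sample stores its reviews as parallel lists (see the key list below), so per-paper statistics can be computed by zipping those lists. A minimal sketch, using a hypothetical mock sample rather than real data, and assuming that a rating of -1 marks comments (e.g. author responses) that carry no numeric rating:

```python
def official_mean_rating(sample):
    """Average the ratings of official reviewer comments.

    Assumption: a rating of -1 marks comments without a numeric
    rating (author/public replies), so they are excluded.
    """
    ratings = [
        r for r, w in zip(sample["review_ratings"], sample["review_writers"])
        if w == "official_reviewer" and r >= 0
    ]
    return sum(ratings) / len(ratings) if ratings else None

# Illustrative mock sample, not taken from the dataset.
sample = {
    "review_writers": ["official_reviewer", "official_reviewer", "author"],
    "review_ratings": [8, 6, -1],
}
print(official_mean_rating(sample))  # 7.0
```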

The Hugging Face dataset is intended mainly for multi-document summarization. Each sample contains the following keys:

* paper_id: str (a link to the raw data)
* paper_title: str
* paper_abstract: str
* paper_acceptance: str
* meta_review: str
* review_ids: list(str)
* review_writers: list(str)
* review_contents: list(str)
* review_ratings: list(int)
* review_confidences: list(int)
* review_reply_tos: list(str)
* label: str (one of train, val, test)
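The review_reply_tos field appears to record, for each comment, the id of the item it replies to, with top-level reviews pointing at the paper id itself. Under that assumption, the conversational structure of a sample can be reconstructed as a tree; a minimal sketch with a hypothetical mock sample:

```python
from collections import defaultdict

def build_thread_tree(sample):
    """Group each comment id under the id it replies to.

    Assumption: comments whose reply-to target is the paper id are
    top-level reviews; all others are nested replies.
    """
    children = defaultdict(list)
    for rid, parent in zip(sample["review_ids"], sample["review_reply_tos"]):
        children[parent].append(rid)
    return dict(children)

# Illustrative mock sample, not taken from the dataset.
sample = {
    "paper_id": "iclr_2018_example",
    "review_ids": ["r1", "r2", "r3"],
    "review_reply_tos": ["iclr_2018_example", "iclr_2018_example", "r2"],
}
tree = build_thread_tree(sample)
print(tree["iclr_2018_example"])  # top-level reviews: ['r1', 'r2']
print(tree["r2"])                 # replies to r2: ['r3']
```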

You can also download the raw data from Google Drive. The raw data contains more information and can be used for other analyses of peer reviews.
