AttSum: Joint Learning of Focusing and Summarization with Neural Attention
Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei and Yanran Li
COLING 2016
Presenter: Tomonori Kodaira
Abstract

• Task: extractive query-focused summarization
  - query relevance ranking and sentence saliency ranking
• Main contributions:
  - They apply an attention mechanism that tries to simulate human attentive reading behavior to query-focused summarization.
  - They propose a joint neural network model that learns query relevance ranking and sentence saliency ranking simultaneously.
Query-Focused Sentence Ranking

• CNN Layer
• Pooling Layer
• Ranking Layer
1. CNN Layer

• Convolution: c_i^h = f(W^h × v(w_i : w_{i+h-1})), followed by max-over-time pooling
• v(w_i : w_{i+j}) = concatenation of the embeddings of the words w_i, …, w_{i+j}
• f(·) = non-linear function
• W^h ∈ R^{l×hk} (h = window size, k = embedding size), c_i^h ∈ R^l
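The convolution and max-over-time pooling steps above can be sketched in a few lines of NumPy; the weight matrix, non-linearity (tanh), and dimensions here are illustrative toy values, not the paper's trained parameters:

```python
import numpy as np

def cnn_sentence_embedding(word_vecs, W, h=2, f=np.tanh):
    """Encode a sentence: convolution over word windows + max-over-time pooling.

    word_vecs: (n, k) matrix of word embeddings.
    W: (l, h*k) convolution weights (W^h in the slide).
    Returns an l-dimensional sentence embedding v(s).
    """
    n, k = word_vecs.shape
    # Convolution: slide a window of h words, concatenate their embeddings.
    cs = [f(W @ word_vecs[i:i + h].reshape(-1)) for i in range(n - h + 1)]
    # Max-over-time pooling: element-wise max over all window outputs.
    return np.max(np.stack(cs), axis=0)

rng = np.random.default_rng(0)
sent = rng.normal(size=(5, 4))    # 5 words, embedding size k = 4
W = rng.normal(size=(6, 2 * 4))   # l = 6, window size h = 2
v_s = cnn_sentence_embedding(sent, W)
print(v_s.shape)  # (6,)
```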
2. Pooling Layer

• Query relevance: learned with an attention matrix M ∈ R^{l×l}
• Document embedding: relevance-weighted sum of the sentence embeddings
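The equations on this slide were images that did not survive the transcript; a reconstruction of the pooling-layer formulas, following the AttSum paper (σ denotes the sigmoid function), would be:

```latex
% Query relevance of sentence s with respect to query q:
r(s, q) = \sigma\!\left( v(s) \, M \, v(q)^{\top} \right), \qquad M \in \mathbb{R}^{l \times l}

% Document embedding: relevance-weighted sum of sentence embeddings:
v(d) = \frac{\sum_{s \in d} r(s, q) \, v(s)}{\sum_{s \in d} r(s, q)}
```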
3. Ranking Layer

• A sentence is ranked according to the cosine similarity between its embedding and the document embedding.
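A minimal sketch of the cosine-similarity ranking described above; the embeddings here are toy vectors, not CNN outputs:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_sentences(sentence_embs, doc_emb):
    """Return sentence indices sorted by cosine similarity to the document embedding."""
    scores = [cosine(v, doc_emb) for v in sentence_embs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

doc = np.array([1.0, 0.0])
sents = [np.array([0.0, 1.0]),   # orthogonal to the document embedding
         np.array([1.0, 0.1]),   # nearly parallel -> ranked first
         np.array([-1.0, 0.0])]  # opposite direction -> ranked last
print(rank_sentences(sents, doc))  # [1, 0, 2]
```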
Training Process

• Cost function: pairwise ranking criterion over sentence pairs
• s+: a sentence with a high ROUGE score; s−: the rest
• Ω is the margin threshold
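Assuming the hinge form max(0, Ω − cos(v(d), v(s+)) + cos(v(d), v(s−))) used in the AttSum paper, the cost function can be sketched as:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pairwise_hinge_loss(doc_emb, pos_emb, neg_emb, omega=0.5):
    """max(0, Ω - cos(d, s+) + cos(d, s-)): zero once the positive
    sentence outscores the negative one by at least the margin Ω."""
    return max(0.0, omega - cosine(doc_emb, pos_emb) + cosine(doc_emb, neg_emb))

d = np.array([1.0, 0.0])
s_pos = np.array([1.0, 0.0])  # cos(d, s+) = 1.0
s_neg = np.array([0.0, 1.0])  # cos(d, s-) = 0.0
print(pairwise_hinge_loss(d, s_pos, s_neg))  # 0.0 (margin of 0.5 is satisfied)
```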
Sentence Selection

1. Discard sentences shorter than 8 words.
2. Sort the remaining sentences in descending order of ranking score.
3. Iteratively dequeue the top-ranked sentence and append it to the current summary if it is non-redundant.

• Non-redundant: the ratio of new bi-grams in the sentence is > 0.5.
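The selection loop can be sketched as follows; the helper names and toy sentences are illustrative, and step 1 (the 8-word filter) is assumed to have been applied beforehand:

```python
def bigrams(words):
    """Set of adjacent word pairs in a sentence."""
    return set(zip(words, words[1:]))

def select_sentences(ranked_sentences, length_limit=250):
    """Greedy selection: take sentences in score order, keep one only if
    more than half of its bi-grams are new, stop at the length limit."""
    summary, seen, total = [], set(), 0
    for sent in ranked_sentences:
        words = sent.split()
        bgs = bigrams(words)
        if not bgs:
            continue
        new_ratio = len(bgs - seen) / len(bgs)
        if new_ratio > 0.5:          # non-redundant
            summary.append(sent)
            seen |= bgs
            total += len(words)
            if total >= length_limit:
                break
    return summary

ranked = [
    "the cat sat on the mat today",
    "the cat sat on the mat today again",  # mostly repeated bi-grams -> skipped
    "dogs chase cats in the park",
]
print(select_sentences(ranked))
```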
4. Experiments

Dataset

• DUC 2005, 2006, and 2007 query-focused multi-document summarization tasks
• Preprocessing: Stanford CoreNLP (ssplit, tokenize)
• Summary: length limit of 250 words
• Validation: 3-fold cross-validation
Model Setting

• Word embeddings: 50 dimensions, trained on a news corpus
  - Word embeddings are not updated during training
• Word window size h = 2
• Convolution output size l = 50
• Margin Ω = 0.5
Evaluation Metrics
• ROUGE-2
Baselines

• LEAD
• QUERY_SIM
• MultiMR (Wan and Xiao, 2009)
• SVR (Ouyang et al., 2011)
• DocEmb (Kobayashi et al., 2015)
• ISOLATION: AttSum without the attention mechanism
Summarization Performance
Conclusion

• They propose a novel query-focused summarization system called AttSum, which jointly handles saliency ranking and relevance ranking.