Description
Text summarization is one of the harder tasks in natural language processing. Machines aside, even humans need strong reading comprehension and strong skills in abstraction and synthesis to write a summary. A news summary requires an editor to pull the key facts out of an event and rewrite them in fresh language; a paper abstract requires the authors to distill the core contribution of the full text into more concise prose; a survey paper requires its authors to read through N papers on a topic, state each paper's contributions and novelties as concisely as possible, and compare the strengths and weaknesses of their methods. At its core, automatic summarization performs information filtering: in that sense it resembles a recommender system, since both aim to help people find what they care about faster, just by different means.
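To make the "information filtering" framing concrete, here is a minimal sketch of frequency-based extractive summarization in Python: it scores each sentence by how dense it is in the document's dominant terms and keeps the top-ranked ones. The helper names (`split_sentences`, `summarize`) and the `top_k` parameter are illustrative assumptions, not the method of any paper listed below.

```python
# Minimal sketch: extractive summarization as information filtering.
# Score each sentence by the document-level frequency of its words,
# then keep the top-k sentences. Purely illustrative; helper names and
# parameters are assumptions, not drawn from the papers listed below.
import re
from collections import Counter

def split_sentences(text):
    # Naive splitter: break on ., !, ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def summarize(text, top_k=2):
    sentences = split_sentences(text)
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sent):
        # Mean document frequency of a sentence's words: sentences dense
        # in the document's dominant terms rank highest.
        tokens = re.findall(r"\w+", sent.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:top_k]
    # Emit the selected sentences in their original document order.
    return " ".join(s for s in sentences if s in ranked)

if __name__ == "__main__":
    doc = ("Neural summarization models compress a document into a short text. "
           "Extractive models select salient sentences from the document. "
           "Abstractive models generate new sentences instead. "
           "The weather was pleasant that day.")
    print(summarize(doc, top_k=2))
```

This word-statistics baseline is roughly where the field stood before the neural, attention-based models in the list below; the papers from 2015 onward replace the hand-crafted scoring function with learned encoders and decoders.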
Top Conference Papers (2013)
Mikolov, T., et al. (2013). Efficient Estimation of Word Representations in Vector Space. ArXiv e-prints.
Top Conference Papers (2014)
Bahdanau, D., et al. (2014). Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473.
Top Conference Papers (2015)
Rush, A. M., et al. (2015). A Neural Attention Model for Abstractive Sentence Summarization, Association for Computational Linguistics.
Top Conference Papers (2016)
Cao, Z., et al. (2016). AttSum: Joint Learning of Focusing and Summarization with Neural Attention, The COLING 2016 Organizing Committee.
Cheng, J. and M. Lapata (2016). Neural Summarization by Extracting Sentences and Words, Association for Computational Linguistics.
Chopra, S., et al. (2016). Abstractive Sentence Summarization with Attentive Recurrent Neural Networks, Association for Computational Linguistics.
Hsieh, Y.-L., et al. (2016). Exploiting Sequence-to-Sequence Generation Framework for Automatic Abstractive Summarization [in Chinese], The Association for Computational Linguistics and Chinese Language Processing (ACLCLP).
Iyer, S., et al. (2016). Summarizing Source Code using a Neural Attention Model, Association for Computational Linguistics.
Jadon, M. K. and A. Pareek (2016). A method for Automatic Text Summarization using Consensus of Multiple Similarity Measures and Ranking Techniques, NLP Association of India.
Kim, M., et al. (2016). Towards Abstraction from Extraction: Multiple Timescale Gated Recurrent Unit for Summarization, Association for Computational Linguistics.
Kim, Y., et al. (2016). Character-Aware Neural Language Models, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, Arizona, AAAI Press: 2741-2749.
Li, C., et al. (2016). Using Relevant Public Posts to Enhance News Article Summarization, The COLING 2016 Organizing Committee.
Li, J. J., et al. (2016). The Role of Discourse Units in Near-Extractive Summarization, Association for Computational Linguistics.
Li, W., et al. (2016). Abstractive News Summarization based on Event Semantic Link Network, The COLING 2016 Organizing Committee.
Luo, W., et al. (2016). An Improved Phrase-based Approach to Annotating and Summarizing Student Course Responses, The COLING 2016 Organizing Committee.
Mehta, P. (2016). From Extractive to Abstractive Summarization: A Journey, Association for Computational Linguistics.
Nallapati, R., et al. (2016). Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond, Association for Computational Linguistics.
Polsley, S., et al. (2016). CaseSummarizer: A System for Automated Summarization of Legal Texts, The COLING 2016 Organizing Committee.
Zopf, M., et al. (2016). Beyond Centrality and Structural Features: Learning Information Importance for Text Summarization, Association for Computational Linguistics.
Top Conference Papers (2017)
Bossard, A. and C. Rodrigues (2017). An Evolutionary Algorithm for Automatic Summarization, INCOMA Ltd.
Chali, Y., et al. (2017). Towards Abstractive Multi-Document Summarization Using Submodular Function-Based Framework, Sentence Compression and Merging, Asian Federation of Natural Language Processing.
Hua, X. and L. Wang (2017). A Pilot Study of Domain Adaptation Effect for Neural Abstractive Summarization, Association for Computational Linguistics.
Isonuma, M., et al. (2017). Extractive Summarization Using Multi-Task Learning with Document Classification, Association for Computational Linguistics.
J Kurisinkel, L., et al. (2017). Abstractive Multi-document Summarization by Partial Tree Extraction, Recombination and Linearization, Asian Federation of Natural Language Processing.
Lee, G. H. and K. J. Lee (2017). Automatic Text Summarization Using Reinforcement Learning with Embedding Features, Asian Federation of Natural Language Processing.
Li, P., et al. (2017). Deep Recurrent Generative Decoder for Abstractive Text Summarization. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics.
Miller, J. and K. McCoy (2017). Topic Model Stability for Hierarchical Summarization, Association for Computational Linguistics.
Nema, P., et al. (2017). Diversity driven attention model for query-based abstractive summarization, Association for Computational Linguistics.
Neubig, G. (2017). Neural Machine Translation and Sequence-to-sequence Models: A Tutorial. ArXiv e-prints.
Ouyang, J., et al. (2017). Crowd-Sourced Iterative Annotation for Narrative Summarization Corpora, Association for Computational Linguistics.
Rücklé, A. and I. Gurevych (2017). Real-Time News Summarization with Adaptation to Media Attention, INCOMA Ltd.
Schluter, N. (2017). The limits of automatic summarisation according to ROUGE, Association for Computational Linguistics.
See, A., et al. (2017). Get To The Point: Summarization with Pointer-Generator Networks, Association for Computational Linguistics.
Suzuki, J. and M. Nagata (2017). Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization, Association for Computational Linguistics.
Tan, J., et al. (2017). Abstractive Document Summarization with a Graph-Based Attentional Neural Model, Association for Computational Linguistics.
Xu, Y., et al. (2017). Decoupling Encoder and Decoder Networks for Abstractive Document Summarization, Association for Computational Linguistics.
Yang, Y., et al. (2017). Detecting (Un)Important Content for Single-Document News Summarization, Association for Computational Linguistics.
Yasunaga, M., et al. (2017). Graph-based Neural Multi-Document Summarization, Association for Computational Linguistics.
Zhou, Q., et al. (2017). Selective Encoding for Abstractive Sentence Summarization, Association for Computational Linguistics.
Top Conference Papers (2018)
Amplayo, R. K., et al. (2018). Entity Commonsense Representation for Neural Abstractive Summarization, Association for Computational Linguistics.
Cao, Z., et al. (2018). Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization, Association for Computational Linguistics.
Chen, J. and H. Zhuge (2018). Abstractive Text-Image Summarization Using Multi-Modal Attentional Hierarchical RNN, Association for Computational Linguistics.
Chen, Y.-C. and M. Bansal (2018). Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting, Association for Computational Linguistics.
Cohan, A., et al. (2018). A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents, Association for Computational Linguistics.
Devlin, J., et al. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv e-prints.
Fan, A., et al. (2018). Controllable Abstractive Summarization, Association for Computational Linguistics.
Gehrmann, S., et al. (2018). Bottom-Up Abstractive Summarization, Association for Computational Linguistics.
Grusky, M., et al. (2018). Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies, Association for Computational Linguistics.
Hardy, H. and A. Vlachos (2018). Guided Neural Language Generation for Abstractive Summarization using Abstract Meaning Representation, Association for Computational Linguistics.
Hsu, W.-T., et al. (2018). A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss, Association for Computational Linguistics.
Jadhav, A. and V. Rajan (2018). Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks, Association for Computational Linguistics.
Jain, P., et al. (2018). A Mixed Hierarchical Attention Based Encoder-Decoder Approach for Standard Table Summarization, Association for Computational Linguistics.
Jiang, Y. and M. Bansal (2018). Closed-Book Training to Improve Summarization Encoder Memory, Association for Computational Linguistics.
Kedzie, C., et al. (2018). Content Selection in Deep Learning Models of Summarization, Association for Computational Linguistics.
Krishna, K. and B. V. Srinivasan (2018). Generating Topic-Oriented Summaries Using Neural Attention, Association for Computational Linguistics.
Kryściński, W., et al. (2018). Improving Abstraction in Text Summarization, Association for Computational Linguistics.
Lebanoff, L., et al. (2018). Adapting the Neural Encoder-Decoder Framework from Single to Multi-Document Summarization, Association for Computational Linguistics.
Li, W., et al. (2018). Improving Neural Abstractive Document Summarization with Structural Regularization, Association for Computational Linguistics.
Lin, J., et al. (2018). Global Encoding for Abstractive Summarization, Association for Computational Linguistics.
Liu, Y., et al. (2018). Controlling Length in Abstractive Summarization Using a Convolutional Neural Network, Association for Computational Linguistics.
Mathur, P., et al. (2018). Multi-lingual Neural Title Generation for e-Commerce Browse Pages. arXiv preprint arXiv:1804.01041.
Mitcheltree, C., et al. (2018). Using Aspect Extraction Approaches to Generate Review Summaries and User Profiles, Association for Computational Linguistics.
Narayan, S., et al. (2018). Ranking Sentences for Extractive Summarization with Reinforcement Learning, Association for Computational Linguistics.
ShafieiBavani, E., et al. (2018). Summarization Evaluation in the Absence of Human Model Summaries Using the Compositionality of Word Embeddings, Association for Computational Linguistics.
Shang, G., et al. (2018). Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization, Association for Computational Linguistics.
Song, K., et al. (2018). Structure-Infused Copy Mechanisms for Abstractive Summarization, Association for Computational Linguistics.
Sulem, E., et al. (2018). Semantic Structural Evaluation for Text Simplification, Association for Computational Linguistics.
Sulem, E., et al. (2018). Simple and Effective Text Simplification Using Semantic and Neural Methods, Association for Computational Linguistics.
Wang, Y. and H.-y. Lee (2018). Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks, Association for Computational Linguistics.
Wang, Y., et al. (2018). Neural Related Work Summarization with a Joint Context-driven Attention Mechanism, Association for Computational Linguistics.
Zhou, Q., et al. (2018). Neural Document Summarization by Jointly Learning to Score and Select Sentences, Association for Computational Linguistics.