TLDRs (Too Long; Didn't Read) are super-short summaries of the main objective and results of a scientific paper, generated using expert background knowledge and the latest GPT-3-style NLP techniques. This new feature is available in beta for nearly 10 million papers and counting in the computer science domain on Semantic Scholar.
Staying up to date with scientific literature is an important part of any researcher's workflow, and parsing a long list of papers from various sources by reading paper abstracts is time-consuming.
TLDRs help users make quick informed decisions about which papers are relevant, and where to invest the time in further reading. TLDRs also provide ready-made paper summaries for explaining the work in various contexts, such as sharing a paper on social media.
"Information overload is a top problem facing scientists. Semantic Scholar's automatically generated TLDRs help researchers quickly decide which papers to add to their reading list." Isabel Cachola, Johns Hopkins University PhD Student, Former Pre-Doctoral Young Investigator at AI2, and Author of TLDR: Extreme Summarization of Scientific Documents
"People often ask why are TLDRs better than abstracts, but the two serve completely different purposes. Since TLDRs are 20 words instead of 200, they are much faster to skim." Daniel S. Weld, Head of the Semantic Scholar research group at the Allen Institute for AI, Professor of Computer Science at the University of Washington, Author of TLDR: Extreme Summarization of Scientific Documents
"This is one of the most exciting applications I have seen in recent years! Not only are TLDRs useful for navigating through papers quickly, they also hold great potential for human-centered AI. Semantic Scholar has millions of users who can provide feedback, and help continually improve the technology underlying TLDRs." Mirella Lapata, AI2 Scientific Advisory Board Member, Professor in the School of Informatics at the University of Edinburgh
TLDR: Dropping half of the feature detectors from a feedforward neural network reduces overfitting and improves performance on held-out test data.
TLDR: We propose a state-of-the-art pipelined method for training neural paragraph-level question answering models on document QA data.
TLDR: We introduce TLDR generation for scientific papers, a new automatic summarization task with high source compression, requiring expert background knowledge and complex language understanding.
TLDR: In this paper we address the issue of unbalanced morphological ambiguities in Hebrew, a highly ambiguous MRL in which vowels are generally omitted.
To learn more about the research powering TLDRs, read the paper TLDR: Extreme Summarization of Scientific Documents by Isabel Cachola, Kyle Lo, Arman Cohan, and Daniel S. Weld of the Semantic Scholar team at AI2.
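For readers who want to retrieve TLDRs programmatically, here is a minimal sketch using Python's standard library. It assumes the Semantic Scholar Graph API's `paper` endpoint and its `tldr` field, which are not described in this post; check the current API documentation before relying on them.

```python
# Sketch: fetching a paper's TLDR via the Semantic Scholar Graph API.
# The endpoint URL and the "tldr" field are assumptions based on the
# public API at the time of writing, not part of this announcement.
import json
import urllib.request
from typing import Optional

API_BASE = "https://api.semanticscholar.org/graph/v1/paper"


def tldr_url(paper_id: str) -> str:
    """Build the Graph API URL requesting only the tldr field."""
    return f"{API_BASE}/{paper_id}?fields=tldr"


def fetch_tldr(paper_id: str) -> Optional[str]:
    """Return the TLDR text for a paper, or None if none is available."""
    with urllib.request.urlopen(tldr_url(paper_id)) as resp:
        data = json.load(resp)
    tldr = data.get("tldr")
    return tldr["text"] if tldr else None


if __name__ == "__main__":
    # arXiv ID of "TLDR: Extreme Summarization of Scientific Documents"
    print(fetch_tldr("arXiv:2004.15011"))
```

The network call is kept out of the helper that builds the URL, so the request can be inspected or mocked before hitting the API.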
Send your feedback about TLDRs on Semantic Scholar to: email@example.com.