Generating Philosophical Statements using Interpolated Markov Models and Dynamic Templates
- Speaker: Thomas Winters
- Type: Conference talk
- Date: 2019-08-07
- Location: ESSLLI 2019: 31st European Summer School in Logic, Language and Information
Automatically imitating input text is a common task in natural language generation, often used to create humorous results. Classic algorithms for learning to imitate text, such as simple Markov chains, usually face a trade-off between originality and syntactic correctness. We present two ways of automatically parodying philosophical statements from examples that overcome this trade-off, and show how they can also be used in interactive systems. The first algorithm uses interpolated Markov models with extensions that improve the quality of the generated texts. For the second algorithm, we propose dynamically extracting templates and filling them with new content. To illustrate these algorithms, we implemented TorfsBot, a Twitterbot imitating the witty, semi-philosophical tweets of Professor Rik Torfs, the former rector of KU Leuven. We found that users preferred generative models that focus on locally coherent sentences over those that mimic the global structure of a philosophical statement. The proposed algorithms are thus valuable new tools for automatic parody as well as for template-learning systems.
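To give an intuition for the first approach, the sketch below shows a minimal interpolated Markov text generator: it trains next-word counts for contexts of several orders and mixes their distributions with fixed weights when sampling. This is only an illustrative approximation, not the TorfsBot implementation; the toy corpus, the interpolation weights, the sentence markers and all function names are hypothetical choices, and the extensions mentioned in the abstract are not shown.

```python
"""Minimal sketch of an interpolated Markov text generator (illustrative only)."""
import random
from collections import defaultdict, Counter

START, END = "<s>", "</s>"  # hypothetical sentence boundary markers

def train(sentences, max_order=3):
    """Count next-word frequencies for every context of length 1..max_order."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        tokens = [START] * max_order + sentence.split() + [END]
        for i in range(max_order, len(tokens)):
            for order in range(1, max_order + 1):
                context = tuple(tokens[i - order:i])
                counts[context][tokens[i]] += 1
    return counts

def next_word(counts, history, max_order=3, lambdas=(0.2, 0.3, 0.5)):
    """Interpolate the distributions of all orders whose context was observed."""
    mixture = Counter()
    for order, weight in zip(range(1, max_order + 1), lambdas):
        context = tuple(history[-order:])
        if context in counts:
            total = sum(counts[context].values())
            for word, count in counts[context].items():
                mixture[word] += weight * count / total
    words, probs = zip(*mixture.items())
    return random.choices(words, weights=probs)[0]

def generate(counts, max_order=3, max_len=40):
    """Sample words until the end marker (or a length cap) is reached."""
    history = [START] * max_order
    out = []
    while True:
        word = next_word(counts, history, max_order)
        if word == END or len(out) >= max_len:
            break
        out.append(word)
        history.append(word)
    return " ".join(out)

# Hypothetical toy corpus standing in for the collected tweets.
corpus = [
    "wisdom begins in wonder",
    "wonder is the beginning of wisdom",
    "the unexamined life is not worth living",
]
print(generate(train(corpus)))
```

The interpolation lets higher-order contexts dominate when they have been observed (favouring syntactic correctness) while lower-order contexts keep contributing novel continuations (favouring originality), which is the trade-off the abstract refers to.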
Slides
Related paper
Related projects
TorfsBot
Twitterbot automatically imitating Rik Torfs, the former rector of KU Leuven