UPDATE: both the Twitter and Mastodon versions now take
advantage of the higher character limits to post longer versions of the
text with line breaks, much closer to the original NaNoGenMo version.
@neuralgae uses a
neural net to generate an image from a set of random categories,
classifies the result with a neural net (not necessarily the same
one), and repeats, producing sequences of surreal objects which are
loosely related to one another. The categories are also used to
generate the accompanying text with a basic Markov generator.
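A basic Markov text generator of the kind described above can be sketched in a few lines of Python. This is a minimal illustration, not the bot's actual code: it builds a first-order chain mapping each word to the words that follow it, then walks the chain from a seed word.

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(successors))
    return " ".join(out)
```

Feeding it the category labels (or text derived from them) and sampling a walk yields the kind of semi-coherent caption the bot posts; a higher-order chain (keys of two or more words) gives more grammatical but less surprising output.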
It's based on my entry
for National Novel Generation Month (NaNoGenMo) in
2015. Source code and heaps of technical notes are available in
the GitHub repository.
- Bird, Steven, Loper, Edward and Klein, Ewan (2009), Natural Language Processing with Python, O'Reilly Media Inc.
- Mordvintsev, Alexander, Tyka, Michael and Olah, Christopher (2015), Deepdream GitHub repository
- Øygard, Audun (2015), Visualising GoogLeNet classes
- Princeton University (2010), WordNet, Princeton University
- Thyssen, Anthony (2011), ImageMagick Canvas Creation
- Weinhaus, Fred (2015), Perlin
- Zhou, B., Lapedriza, A., Xiao, J., Torralba, A. and Oliva, A. (2014), Places205-GoogLeNet
- Zhou, B., Lapedriza, A., Xiao, J., Torralba, A. and Oliva, A. (2014), "Learning Deep Features for Scene Recognition using Places Database", Advances in Neural Information Processing Systems 27 (NIPS)