fast-neural-style: Real-Time Style Transfer

I followed up a reference to fast-neural-style from Twitter and spent a glorious hour experimenting with this code. Very cool stuff indeed. It’s documented in Perceptual Losses for Real-Time Style Transfer and Super-Resolution by Justin Johnson, Alexandre Alahi and Fei-Fei Li.

The basic idea is to use a feed-forward convolutional neural network to generate the image transformation. The network is trained using perceptual loss functions, which compare high-level feature representations rather than raw pixels, and once trained it can apply a learned style to any image in a single forward pass.
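In the paper the perceptual losses compare activations from a fixed, pretrained VGG network: a feature reconstruction loss preserves content, while a style reconstruction loss built on Gram matrices captures texture. Here's a minimal NumPy sketch of those two losses (the VGG feature extraction itself is omitted; the `feat_*` arguments stand in for feature maps you'd pull from such a network):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a C x H x W feature map: channel-by-channel
    co-activation statistics, normalised by the number of elements."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def feature_loss(feat_generated, feat_content):
    """Feature (content) reconstruction loss: squared distance
    between two feature maps, averaged over all elements."""
    return float(np.mean((feat_generated - feat_content) ** 2))

def style_loss(feat_generated, feat_style):
    """Style reconstruction loss: squared Frobenius distance between
    the Gram matrices of the two feature maps."""
    diff = gram_matrix(feat_generated) - gram_matrix(feat_style)
    return float(np.sum(diff ** 2))
```

The Gram matrix throws away spatial layout and keeps only which channels fire together, which is why it captures "style" (texture, colour, brush strokes) independently of "content".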

What is “style transfer”? You’ll see in a moment.

As a test image I’ve used my Twitter banner, which I’ve felt for a while was a little bland. It could definitely benefit from additional style.


What about applying the style of van Gogh’s The Starry Night?


That’s pretty cool. A little repetitive, perhaps, but that’s probably due to the lack of structure in some areas of the input image.

How about the style of Picasso’s La Muse?


Again, rather nice, but a little too repetitive for my liking. I can certainly imagine some input images on which this would work well.

Here’s another take on La Muse, this time using instance normalisation.


Repetition vanished.
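Instance normalisation comes from Ulyanov et al.’s “Instance Normalization: The Missing Ingredient for Fast Stylization”: instead of normalising feature statistics across the whole batch, each image’s feature maps are normalised independently over their spatial dimensions, which discards per-image contrast and tends to reduce artefacts like the repetition above. A rough NumPy sketch of the operation (without the learned scale and shift parameters):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalisation for a batch of shape (N, C, H, W):
    each feature map of each image is normalised to zero mean and
    unit variance over its own spatial dimensions only."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

Compare batch normalisation, which would compute the statistics over axes (0, 2, 3) and so tie every image in the batch together.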

What about using some abstract contemporary art for styling?


That’s rather trippy, but I like it.

Using a mosaic for style creates an interesting effect. You can see how the segments of the mosaic are echoed in the sky.


Finally, using Munch’s The Scream. The result is dark and foreboding and I just love it.


Maybe it’s just my hardware, but these transformations were not quite a “real-time” process. Nevertheless, the results were worth the wait. I certainly now have multiple viable options for an updated Twitter header image.

Related Projects

If you’re interested in these sorts of projects (and, hey, honestly who wouldn’t be?) then you might also like these:

Talks about Bots

Seth Juarez and Matt Winkler having an informal chat about bots.

Matt Winkler talking about Bots as the Next UX: Expanding Your Apps with Conversation at the Microsoft Machine Learning & Data Science Summit (2016).

At the confluence of the rise in messaging applications, advances in text and language processing, and mobile form factors, bots are emerging as a key area of innovation and excitement. Bots (or conversation agents) are rapidly becoming an integral part of your digital experience: they are as vital a way for people to interact with a service or application as is a web site or a mobile experience.

Developers writing bots all face the same problems: bots require basic I/O, they must have language and dialog skills, and they must connect to people, preferably in any conversation experience and language a person chooses. This code-heavy talk focuses on how to solve these problems using the Microsoft Bot Framework, a set of tools and services to easily build bots and add them to any application.

We’ll cover use cases and customer case studies for enhancing an application with a bot, and how to build a bot, focusing on each of the key problems: how to integrate with various messaging services, how to connect to users, and how to process language to understand the user’s intent. At the end of this talk, developers will be equipped to get started adding bots to their applications, understanding both the fundamental concepts as well as the details of getting started using the Bot Framework.