Creating Masterpieces: Toward Content-Aware Style Transfer
Dr. Greg Wolffe, email@example.com
Among the methods available for machine learning and artificial intelligence, neural networks are well known for their flexibility and robustness in problem solving. In particular, convolutional neural networks (CNNs) are the method of choice for computer vision problems because of their effectiveness at object recognition and classification. Somewhat surprisingly, recent research has employed CNNs for creative purposes; in A Neural Algorithm of Artistic Style, Gatys et al. demonstrated the use of deep convolutional networks to extract the style representation of famous pieces of art and apply it to photographs.
The goal of this project was to enhance that basic approach by introducing style masks based on a segmentation of the content image. The project uses the Torch framework for deep machine learning, a modified VGG-19 CNN for object recognition, and the Lua scripting language to develop a new algorithm for transferring artistic style. The improved algorithm uses image segmentation to generate a weight mask specific to each individual style layer. Applying these masks to the computation of gradients produces higher-fidelity images that are more faithful to the content image's features and color while still incorporating the target style. Although the results are, by definition, subjective, the project succeeded in developing a new artistic style transfer algorithm.
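One plausible reading of the masked approach can be sketched as follows. This is not the author's implementation (which used Torch/Lua); it is a minimal NumPy illustration of a Gram-matrix style loss in which a hypothetical per-layer weight mask, derived from a segmentation of the content image, restricts which spatial positions contribute to the loss (and hence to the gradients). The function names, mask scheme, and shapes are assumptions for illustration only.

```python
import numpy as np

def gram(features):
    # features: (C, H*W) array of flattened per-channel activations;
    # the Gram matrix captures channel co-activations, i.e. "style"
    return features @ features.T

def masked_style_loss(gen_feats, style_feats, mask):
    # gen_feats, style_feats: (C, H, W) activations at one style layer
    # mask: (H, W) weight mask from segmenting the content image,
    # resized to this layer's spatial resolution (hypothetical scheme)
    C, H, W = gen_feats.shape
    g = (gen_feats * mask).reshape(C, H * W)    # mask broadcast over channels
    s = (style_feats * mask).reshape(C, H * W)
    G, A = gram(g), gram(s)
    # normalized squared Frobenius distance between Gram matrices,
    # following the style-loss form in Gatys et al.
    return np.sum((G - A) ** 2) / (4 * C**2 * (H * W) ** 2)
```

Because the mask multiplies the activations before the Gram matrices are formed, zeroed regions contribute nothing to the loss, so the gradients flowing back to the generated image leave those regions governed by the content term alone.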
Taylor, Christopher, "Creating Masterpieces: Toward Content-Aware Style Transfer" (2016). Technical Library. 257.