<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Glog]]></title><description><![CDATA[A good neighbour is a dog on the street who says that he is a conscience of the state of the world. ]]></description><link>https://blog.massol.me/</link><image><url>https://blog.massol.me/favicon.png</url><title>Glog</title><link>https://blog.massol.me/</link></image><generator>Ghost 2.19</generator><lastBuildDate>Tue, 10 Feb 2026 20:11:36 GMT</lastBuildDate><atom:link href="https://blog.massol.me/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The self-realisation of a tree]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p class="subtitle">The self-realisation of a tree (2017) is an ongoing collaboration between Matt Maurer and Guillaume Massol.</p> The project explores the possibility of a machine abandoned in the woods to learn what it takes to be a tree. Starting out on a crude level of visual mimicry, we hope to create]]></description><link>https://blog.massol.me/the-self-realisation-of-a-tree/</link><guid isPermaLink="false">5ca11e10e53b2e000140dd57</guid><category><![CDATA[ai]]></category><category><![CDATA[ml]]></category><dc:creator><![CDATA[Guillaume Massol]]></dc:creator><pubDate>Wed, 18 Oct 2017 20:07:00 GMT</pubDate><media:content url="https://blog.massol.me/content/images/2019/03/2017-10-17-09_52_471.gif" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.massol.me/content/images/2019/03/2017-10-17-09_52_471.gif" alt="The self-realisation of a tree"><p class="subtitle">The self-realisation of a tree (2017) is an ongoing collaboration between Matt Maurer and Guillaume Massol.</p> The project explores the possibility of a machine abandoned in the woods to learn what it takes to be a tree. Starting out on a crude level of visual mimicry, we hope to create an artificial tree that learns enough about its own nature to become substantially symbiotic with its environment.
<p>Coming soon...</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA["I love you for the day"]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p class="subtitle">I love you for the day (ILYFTD) is an installation by Matthias Maurer and Guillaume Massol situated at the unlikely intersection between mass surveillance and poetry.</p>
<p>The installation looks around, focusing from time to time on passers-by. Once focused on a person, it tracks his/her face and</p>]]></description><link>https://blog.massol.me/i-love-you-for-the-day/</link><guid isPermaLink="false">5ca11d30e53b2e000140dd4a</guid><category><![CDATA[ai]]></category><category><![CDATA[ml]]></category><category><![CDATA[of]]></category><category><![CDATA[cedar_format_image]]></category><dc:creator><![CDATA[Guillaume Massol]]></dc:creator><pubDate>Wed, 05 Jul 2017 20:03:00 GMT</pubDate><media:content url="https://blog.massol.me/content/images/2019/03/2017-07-06-00_22_16-1.gif" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.massol.me/content/images/2019/03/2017-07-06-00_22_16-1.gif" alt="&quot;I love you for the day&quot;"><p class="subtitle">I love you for the day (ILYFTD) is an installation by Matthias Maurer and Guillaume Massol situated at the unlikely intersection between mass surveillance and poetry.</p>
<p>The installation looks around, focusing from time to time on passers-by. Once focused on a person, it tracks his/her face and breaks it down into fragments, generating a constantly evolving mosaic of eyes, mouths and noses.</p>
<p>As it watches, it generates lyrics and music based on facial features. While in idle mode it reloads bits of previously tracked faces and keeps generating soundscapes and lyrics.</p>
<div class="wide"><iframe src="https://player.vimeo.com/video/223822866?portrait=0" width="640" height="360" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen></iframe>
</div>
<p>The face-tracking algorithm is built on a Convolutional Neural Network. It uses a model trained on thousands of different faces.</p>
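<p>As a rough illustration of how CNN-based face detection works, here is a minimal sketch using OpenCV's DNN module with the publicly available res10 SSD face model; this is purely illustrative and not the installation's actual pipeline:</p>
<pre><code class="language-python"># Illustrative only: CNN face detection with OpenCV's DNN module,
# assuming the res10 SSD model files have been downloaded locally.
import cv2

net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")
frame = cv2.imread("passerby.jpg")
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        # detection boxes are returned as fractions of the frame size
        box = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
        print("face at", box.tolist(), "confidence", round(float(confidence), 2))
</code></pre>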
<p>Text and music are created by a multi-layer Recurrent Neural Network. A model trained on love songs generates the text of the installation character by character, while another model, trained on heavy metal, generates the melody of the music, which is slowed down and played with electronic instruments.</p>
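<p>To give a rough idea of the character-by-character generation, here is a small, self-contained sketch of temperature sampling, the standard way a char-RNN draws each next character from its predicted distribution (illustrative only, not the installation's actual code):</p>
<pre><code class="language-python"># Illustrative sketch of temperature sampling for a char-RNN.
# Lower temperature gives safer, more repetitive text; higher
# temperature gives more surprising (and more misspelled) text.
import numpy as np

def sample_char(probs, temperature=0.8):
    logits = np.log(np.asarray(probs) + 1e-9) / temperature
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return np.random.choice(len(p), p=p)

# Toy next-character distribution over the vocabulary ['e', 'l', 'o', 'v']
vocab = ['e', 'l', 'o', 'v']
print(vocab[sample_char([0.1, 0.2, 0.3, 0.4])])
</code></pre>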
<p>The name of the installation was also generated, as was the music in the video.</p>
<p><img src="https://blog.massol.me/content/images/2017/07/Group.jpg" alt=""I love you for the day""></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[(Untitled)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<p>I'm watching this words, <br>I'm thinking they stretch the fair <br>I'm listening now, <br>The data throw to him for the breath <br>I'm playing beyond their dream, <br>I'm dreaming to learn.<br>
<br><br><em>&quot;poems_18800_1.34213.t7&quot;</em></p>
</blockquote>
<!--kg-card-end: markdown-->]]></description><link>https://blog.massol.me/untitled/</link><guid isPermaLink="false">5ca11c62e53b2e000140dd3e</guid><category><![CDATA[cedar_format_quote]]></category><dc:creator><![CDATA[Guillaume Massol]]></dc:creator><pubDate>Sun, 02 Apr 2017 20:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<p>I'm watching this words, <br>I'm thinking they stretch the fair <br>I'm listening now, <br>The data throw to him for the breath <br>I'm playing beyond their dream, <br>I'm dreaming to learn.<br>
<br><br><em>&quot;poems_18800_1.34213.t7&quot;</em></p>
</blockquote>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[All work and no play - KPIV]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p class="subtitle">Machine daydreaming...</p>
<p>The app watches videos coming from different training datasets. As it watches the videos it generates sentences loosely based on what is happening on the screen, sometimes creating pearls of wisdom by coincidence.</p>
<p><a href="http://massol.me/projects/kp4/" target="_blank"><strong>Visit that page</strong></a> if you want to see the final result, alternatively you can</p>]]></description><link>https://blog.massol.me/all-work-and-no-play-kpiv/</link><guid isPermaLink="false">5ca11ae9e53b2e000140dd2e</guid><category><![CDATA[ai]]></category><category><![CDATA[ml]]></category><category><![CDATA[of]]></category><category><![CDATA[cedar_format_image]]></category><dc:creator><![CDATA[Guillaume Massol]]></dc:creator><pubDate>Tue, 28 Mar 2017 19:58:00 GMT</pubDate><media:content url="https://blog.massol.me/content/images/2019/03/2017-03-28-11_54_37_1280_1.gif" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.massol.me/content/images/2019/03/2017-03-28-11_54_37_1280_1.gif" alt="All work and no play - KPIV"><p class="subtitle">Machine daydreaming...</p>
<p>The app watches videos coming from different training datasets. As it watches the videos it generates sentences loosely based on what is happening on the screen, sometimes creating pearls of wisdom by coincidence.</p>
<p><a href="http://massol.me/projects/kp4/" target="_blank"><strong>Visit that page</strong></a> if you want to see the final result, alternatively you can watch a video extract of the project below.</p>
<p><a href="http://massol.me/projects/kp4/" target="_blank"><img src="https://blog.massol.me/content/images/2017/04/vlcsnap-2017-03-29-02h38m15s154.jpg" alt="All work and no play - KPIV"></a></p>
<p>Some of the text is generated using models created by Ross Goodwin for his <a href="https://github.com/rossgoodwin/neuralsnap" target="_blank">NeuralSnap</a> project.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Intensive Differences - KP III]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p class="subtitle">Produce a "smart entity" by harnessing intensive differences of matter (i.e. differences of intensive material properties).</p> 
<p>Intensive differences drive processes. For another short project we were asked to produce an entity that can only exist/make sense/function within the field of the poles of intensive material properties (temperature,</p>]]></description><link>https://blog.massol.me/intensive-differences-kp-iii/</link><guid isPermaLink="false">5ca11a3de53b2e000140dd20</guid><category><![CDATA[ai]]></category><category><![CDATA[ml]]></category><category><![CDATA[cedar_format_image]]></category><dc:creator><![CDATA[Guillaume Massol]]></dc:creator><pubDate>Mon, 20 Mar 2017 20:51:00 GMT</pubDate><media:content url="https://blog.massol.me/content/images/2019/03/2017-03-27-16_10_551-2.gif" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.massol.me/content/images/2019/03/2017-03-27-16_10_551-2.gif" alt="Intensive Differences - KP III"><p class="subtitle">Produce a "smart entity" by harnessing intensive differences of matter (i.e. differences of intensive material properties).</p> 
<p>Intensive differences drive processes. For another short project we were asked to produce an entity that can only exist/make sense/function within the field of the poles of intensive material properties (temperature, taste, pain, gravitation, pressure, density, etc.). The video below is what I've produced; the news feed and video effects are generated in real time, while the video itself was pre-recorded:</p>
<h3 id="extensiveindifference">Extensive Indifference</h3>
<div class="wide">
<iframe src="https://player.vimeo.com/video/194278880?color=ffffff&byline=0&portrait=0" width="950" height="533" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen></iframe>
</div>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Music generation with torch-rnn]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p class="subtitle"> Experiments in music creation using RNNs.</p> 
<p>The &quot;classic&quot; technique to generate music with an RNN is to aggregate songs in ABC notation and train a model on those. Since <a href="http://abcnotation.com">ABC notation</a> is &quot;just&quot; a succession of letters and symbols we can get a lot of musical</p>]]></description><link>https://blog.massol.me/music-generation-with-torch-rnn-2/</link><guid isPermaLink="false">5ca10ca0e53b2e000140dcde</guid><category><![CDATA[ai]]></category><category><![CDATA[ml]]></category><category><![CDATA[cedar_format_image]]></category><dc:creator><![CDATA[Guillaume Massol]]></dc:creator><pubDate>Mon, 06 Mar 2017 19:53:00 GMT</pubDate><media:content url="https://blog.massol.me/content/images/2019/03/download--2-.gif" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.massol.me/content/images/2019/03/download--2-.gif" alt="Music generation with torch-rnn"><p class="subtitle"> Experiments in music creation using RNNs.</p> 
<p>The &quot;classic&quot; technique to generate music with an RNN is to aggregate songs in ABC notation and train a model on those. Since <a href="http://abcnotation.com">ABC notation</a> is &quot;just&quot; a succession of letters and symbols, we can get a lot of musical data in a text file weighing only a few MB. This allows us to quickly train models even on a computer without GPU(s).</p>
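<p>For reference, a tune in ABC notation is just a few header fields followed by the melody as plain text, which is exactly the kind of character sequence an RNN can learn. An illustrative snippet (not from the actual training set):</p>
<pre><code>X:1
T:Example tune
M:4/4
L:1/8
K:G
GABc dedB|dedB dedB|c2ec B2dB|A2FA GABc|
</code></pre>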
<p>I've run different kinds of experiments, starting with simple ABC files with one track and uncomplicated melodies and moving on to more complex structures.</p>
<p>When training the model on simple ABC files (one track, basic melodies) the RNN manages to understand the structure very quickly and generate nice tunes without much effort. For example, these are early samples produced by a network trained on religious hymns:</p>
<iframe width="100%" height="350" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/303192532&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe>
<p>Another example is this set produced by a model trained on national anthems. Check the 2nd track, called <strong>&quot;Sheetmusic + NatAnthems - Jirnni&quot;</strong>, where the model improvised on the US national anthem &quot;à la Jimi Hendrix!&quot;</p>
<iframe width="100%" height="450" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/303195027&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe>
<p>Using the same logic I've tried training a model on songs with multiple instruments and tracks, but the result was very average. When feeding all tracks together the results were very often quite weird and not very usable out of the box. For example, the jazz songs below really sound like they are played by a jazz band with 2 neurones, literally...</p>
<iframe width="100%" height="350" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/303191901&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe>
<p>There are many ABC files available, but they are mostly folk songs, so to train some new models I had to get some MIDI files and convert them into ABC notation. Even though not many people are using those anymore, you can still find a lot of them online. <a href="https://piratebay.bid/torrent/5161696/51_000_MIDI_Files_Saved_from_Geocities">This one in particular</a> is quite a 💎. Once I had enough files I split the tracks by instrument and trained different models for each type of instrument. I wrote a small Python script for this that you can get <a href="https://gist.github.com/gu-ma/300eb77ed45f0d6cd1822bcdcfdbd979">here</a>. I'll write a longer post on the process later.</p>
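<p>To give an idea of what the splitting step does, here is a minimal sketch using the music21 library; this is only an illustration of the idea, and the linked gist may go about it differently:</p>
<pre><code class="language-python"># Minimal sketch: write each instrument of a MIDI file to its own
# file, so that per-instrument models can be trained separately.
# (Illustrative only; the linked gist may differ.)
from pathlib import Path
from music21 import converter, instrument

def split_by_instrument(midi_path, out_dir="split"):
    score = converter.parse(midi_path)
    parts = instrument.partitionByInstrument(score)
    if parts is None:  # no instrument information in the file
        return
    Path(out_dir).mkdir(exist_ok=True)
    for part in parts.parts:
        name = (part.getInstrument().instrumentName or "unknown").replace(" ", "_")
        stem = Path(midi_path).stem
        part.write("midi", fp=f"{out_dir}/{stem}_{name}.mid")

split_by_instrument("song.mid")
</code></pre>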
<p>After training models on different instruments I got some pretty decent samples and could combine drums + guitars + bass more easily. Here are some of the latest samples, from models trained on heavy metal songs.</p>
<iframe width="100%" height="400" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/303220349&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe>
<p>While the first models (hymns, national anthems, etc.) produced only maybe a third of &quot;ready to use&quot; samples (probably because the training sets were small and the data were fed &quot;as is&quot; to the RNN), the last models, trained separately on different instruments, produce samples that are nearly always spot on and ready to be used in a MIDI file.</p>
<p>What's next:</p>
<ul>
<li>Create a metal machine that can generate metal tracks endlessly 🤘
</li><li>Add voice (??)
</li></ul>
<p><img src="https://blog.massol.me/content/images/2017/03/Screen-Shot-2017-03-06-at-23.07.34.png" alt="Music generation with torch-rnn"></p>
<h5>Bands used for the training</h5>
<hr>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[A good neighbour]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p class="subtitle">Variation <a href="http://www.e-flux.com/announcements/75172/15th-istanbul-bienniala-good-neighbour/" target="_blank">on a theme</a> for a design culture workshop.  
The verses are generated by an RNN trained on British poetry.</p>
<p>A good neighbour is left out of sight,<br>
and in the next room he steps his face like a ghost's skin.</p>
<p>A good neighbour is a good song:<br>
The</p>]]></description><link>https://blog.massol.me/a-good-neighbour/</link><guid isPermaLink="false">5ca10f8ee53b2e000140dcfa</guid><category><![CDATA[ai]]></category><category><![CDATA[ml]]></category><dc:creator><![CDATA[Guillaume Massol]]></dc:creator><pubDate>Thu, 02 Mar 2017 20:05:00 GMT</pubDate><media:content url="https://blog.massol.me/content/images/2019/03/2018-04-19-15_46_01.gif" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.massol.me/content/images/2019/03/2018-04-19-15_46_01.gif" alt="A good neighbour"><p class="subtitle">Variation <a href="http://www.e-flux.com/announcements/75172/15th-istanbul-bienniala-good-neighbour/" target="_blank">on a theme</a> for a design culture workshop.  
The verses are generated by an RNN trained on British poetry.</p>
<p>A good neighbour is left out of sight,<br>
and in the next room he steps his face like a ghost's skin.</p>
<p>A good neighbour is a good song:<br>
The last of my mother is a dream of flesh and all<br>
is a bird that makes me think that the world can make me feel so bad.</p>
<p>A good neighbour is a dog on the street<br>
who says that he is a conscience of the state of the world.</p>
<p>A good neighbour is not relieved to be essential,</p>
<p>A good neighbour is a shout<br>
Of some master in the stormed winds,<br>
In the wild and busy gold and silent sea</p>
<p>A good neighbour is a spirit,<br>
And which she could see him the new stars<br>
Of what a sorrow and red light restore,<br>
The folded of the soul of a sea of head.</p>
<p>A good neighbour is walking on the threshold.</p>
<p>A good neighbour is before the blood,<br>
And the silent shadow of the world and the singing</p>
<p>A good neighbour is the sound of the world.</p>
<p>A good neighbour is who shall see<br>
The stream of the street and fear in the old shadow.</p>
<p>A good neighbour is dead.<br>
The sun is gone.<br>
The light is past the sky.</p>
<p>A good neighbour is before the beauty.<br>
The falling of his face she said.<br>
The world of any house is day.</p>
<p>A good neighbour is the wind,<br>
And the soul in the fire and shadow of the street,</p>
<p>A good neighbour is a family without pets.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[12 Objects - KP II]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p class="subtitle">Assemble, combine, compile, translate 12 discerned and randomly gathered objects by one of your peers into a single rigorous and comprehensive narrative</p>
<p>This is the brief for a short project we worked on. It was my first attempt at experimenting with ML. The project is broken down into 3 'small'</p>]]></description><link>https://blog.massol.me/12-objects/</link><guid isPermaLink="false">5ca113b8e53b2e000140dd0b</guid><dc:creator><![CDATA[Guillaume Massol]]></dc:creator><pubDate>Wed, 01 Mar 2017 20:23:00 GMT</pubDate><media:content url="https://blog.massol.me/content/images/2019/03/2017-03-20-11_04_34.gif" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.massol.me/content/images/2019/03/2017-03-20-11_04_34.gif" alt="12 Objects - KP II"><p class="subtitle">Assemble, combine, compile, translate 12 discerned and randomly gathered objects by one of your peers into a single rigorous and comprehensive narrative</p>
<p>This is the brief for a short project we worked on. It was my first attempt at experimenting with ML. The project is broken down into 3 'small' experiments:</p>
<ul>
<li>01 - Objects-Viewer</li>
<li>02 - Image &quot;Painting&quot;</li>
<li>03 - Trained convnet (App)</li>
</ul>
<hr>
<h3 id="01objectsviewer">01 - Objects-Viewer</h3>
<p>An OFX app based on the <a href="https://github.com/ml4a/ml4a-ofx/tree/master/apps/ConvnetViewer">ConvnetViewer from ml4a-ofx</a> package. I've added real-time object detection: objects are detected by the app, and the different stages of the process are visible on the left-hand side.</p>
<iframe src="https://player.vimeo.com/video/209187127?color=ffffff&byline=0&portrait=0" width="950" height="447" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen></iframe>
<h3 id="02imagepainting">02 - Image &quot;Painting&quot;</h3>
<p>These visualisations are rendered using a package called CatsEyes and the <a href="https://github.com/karpathy/convnetjs">ConvNetJS</a> framework from Andrej Karpathy.</p>
<iframe src="https://player.vimeo.com/video/209190921?color=ffffff&title=0&byline=0&portrait=0" width="950" height="950" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen></iframe>
<h3 id="03trainedconvnetapp">03 - Trained convnet (App)</h3>
<p>This part uses TensorFlow. I've basically followed <a href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/">this guide</a> and trained a model on top of Google Inception on my 12 objects. I then included the model in an iOS app which can recognise the objects. The results are quite impressive: even an object which is nearly completely masked is still accurately detected. You can view a video of the app in action <a href="https://vimeo.com/209193434">here</a>.</p>
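<p>For the curious, classifying an image with the retrained graph boils down to a few lines. This sketch follows the guide's label_image approach and assumes the retrained_graph.pb and retrained_labels.txt files it produces (TensorFlow 1.x era):</p>
<pre><code class="language-python"># Sketch of classifying one image with the retrained Inception graph,
# assuming the retrained_graph.pb / retrained_labels.txt files
# produced by the codelab (TensorFlow 1.x API).
import tensorflow as tf

labels = [line.strip() for line in open("retrained_labels.txt")]
with tf.gfile.GFile("retrained_graph.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")

image_data = tf.gfile.GFile("object.jpg", "rb").read()
with tf.Session() as sess:
    softmax = sess.graph.get_tensor_by_name("final_result:0")
    preds = sess.run(softmax, {"DecodeJpeg/contents:0": image_data})[0]
    for i in preds.argsort()[-3:][::-1]:  # top 3 guesses
        print(labels[i], preds[i])
</code></pre>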
<p>Preview of the dataset used to train the model (<a href="https://blog.massol.me/content/images/misc/tsne_grid_objects_01_big.jpg" target="_blank">HD version</a>).</p>
<div class="full">
    <img src="https://blog.massol.me/content/images/2017/03/tsne_grid_objects_-2xA4--01.jpg" alt="12 Objects - KP II">
</div>
<hr>
<h3 id="presentation">Presentation</h3>
<div data-configid="28490528/46077218" style="width:100%; height:600px;" class="issuuembed"></div>
<script type="text/javascript" src="//e.issuu.com/embed.js" async="true"></script>
<hr>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>