Thursday, October 02, 2014

Boo Linode! You can do better!

I usually don't complain publicly about things that bother me. I also generally try not to post content that I don't find technically interesting. But Linode really got to me yesterday.

Back in January 2013, I signed up for Linode's $20/mo plan to host one of the My VexFlow backup replicas. The master replica is a dedicated machine in a colo facility in Toronto, and Linode was where I would divert traffic if the master failed.

The plans available at the time were (courtesy Internet Archive):

I picked the Linode 512 plan, which at the time seemed appropriate.

About a month ago, I decided to turn down my dedicated machine and move the master replica somewhere else. I spent a few weeks dockerizing the application and building automation to make it easy to move replicas around as needed.

After some digging, I decided to move the master replica to Linode, and the two slaves to GCE and AWS respectively. (Nearly a decade working as an SRE at Google has made me paranoid about single points of failure, both technical and organizational.)

I logged on to Linode to set up a new instance, and discovered that for $10/mo, I could get a 1GB / 1 CPU / 24GB-SSD instance up and running right away. And for an additional $2/mo, I'd get daily backups.

Great! They slashed their prices! I ran some load tests on the 1GB plan, after which I decided to switch to the 4GB plan.

Before switching, I checked my billing history to see where the funds were coming from, only to discover that I was paying a whopping $25/mo for half the advertised capacity of the 1GB plan: $20/mo for the plan plus $5/mo for backups.

Surely this is a mistake, I thought.

I contacted their billing support and asked them about it. We had a lot of back and forth, and the more responses I got, the more frustrated I got.

(The link to our whole conversation is below.)

I was sad to learn that:

1) They're willingly engaging in dishonest billing. The choice not to cut prices proportionally for existing plans, nor to notify customers of the new ones, is by design. It's the type of choice telcos and cable companies regularly make. From a business perspective, it's short-sighted. From a customer's perspective, it's despicable.

Many of the major cloud providers (AWS, GCE, Azure) also recently announced large price cuts -- which were immediately applied to all customers. That just sounds like the right thing to do.

2) They don't know how to value their customers. The fact that they didn't reimburse me doesn't bother me nearly as much as how they made me feel. The amount is trivial to me, and should be even more trivial to them, especially in light of the fact that I've been with them for nearly two years and have referred many people to them. Sure, they were very apologetic -- in fact, they apologized in almost every message -- but it's the actions that matter, not the words.

3) They defend their actions with intentionally misleading statements. They had the audacity to tell me that the old 512 plan was somehow better than the new 1GB plan because it had "access to more CPU cores." Any engineer worth his salt knows that 1 dedicated CPU is better than 4 heavily oversubscribed CPUs.

Sure, you might be thinking: why is this dude making a big fuss over a few bucks?

Let me be clear: it's not about the money, it's about the principle. It's about trust. It's about valuing your customers. As cynical as one might be about the world, it is actually possible to be both ethical and profitable.

So, Goodbye Linode! Hello Digital Ocean!

You can read the whole conversation here (with names blurred out). Interesting factoid: except for the very last one, every single response was from a different rep.

Friday, May 02, 2014

New in VexFlow (May 2014)

Lots of commits into the repository lately. Thanks to Cyril Silverman for many of these. Here are some of the highlights:

Chord Symbols

This includes subscript/superscript support in TextNote and support for common symbols (dim, half-dim, maj7, etc.).
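
A rough sketch of what this enables (the chord text, options, and the stave variable are illustrative; assumes the usual voice/formatter setup):

// Hypothetical sketch: a chord symbol with a superscripted quality,
// rendered as a TextNote above the stave.
var chord = new Vex.Flow.TextNote({
  text: "Cm", superscript: "7b5", duration: "q"
}).setLine(1).setStave(stave);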

Stave Line Arrows

This is typically used in instructional material.

Slurs

Finally, we have slurs. This uses a new VexFlow class called Curve. Slurs are highly configurable.
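
A minimal sketch of drawing a slur between two formatted notes (the variable names and control points are illustrative):

// Hypothetical sketch: slur from first_note to last_note on context ctx.
var slur = new Vex.Flow.Curve(first_note, last_note, {
  cps: [{x: 0, y: 20}, {x: 0, y: 20}]  // Bezier control points
});
slur.setContext(ctx).draw();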

Improved auto-positioning of Annotations and Articulations

Annotations and Articulations now self-position based on note, stem, and beam configuration.
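
For example, attaching a fermata no longer requires manual tweaking (a sketch; "a@a" is the fermata code in VexFlow's articulation tables, and note is an existing StaveNote):

// The fermata's exact placement relative to the note, stem, and beam
// is now computed automatically.
note.addArticulation(
  0, new Vex.Flow.Articulation("a@a").setPosition(Vex.Flow.Modifier.Position.ABOVE));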

Grace Notes

VexFlow now has full support for Grace Notes. Grace Note groups can contain complex rhythmic elements, and are formatted using the same code as regular notes.
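
A sketch of attaching a beamed grace-note group to a note (keys, durations, and the note variable are illustrative):

// Hypothetical sketch: two beamed grace notes, the first slashed.
var grace_notes = [
  new Vex.Flow.GraceNote({keys: ["d/4"], duration: "8", slash: true}),
  new Vex.Flow.GraceNote({keys: ["e/4"], duration: "8"})
];
note.addModifier(0, new Vex.Flow.GraceNoteGroup(grace_notes, true).beamNotes());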

Auto-Beam Improvements

Lots more beaming options, including beaming over rests, stemlet rendering, and time-signature aware beaming.
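
For instance, a sketch of generating beams that span rests and draw stemlets (the notes and ctx variables and the option names are illustrative):

// Hypothetical sketch: auto-beam a list of notes, beaming across
// rests and rendering stemlets over the beamed rests.
var beams = Vex.Flow.Beam.generateBeams(notes, {
  beam_rests: true,
  show_stemlets: true
});
beams.forEach(function(beam) { beam.setContext(ctx).draw(); });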

Tab-Stem Features

You can (optionally) render Tab Stems through stave lines.

That's all, Folks!

Thursday, February 20, 2014

VexFlow Goodies

As announced on the VexFlow mailing list, here are some fun things you might enjoy.

VexTab Audio Player

The VexTab audio player, which My VexFlow uses to play notated music, is now available in the VexTab repository (player.coffee). The player is based on MIDI.js and requires a set of SoundFont instruments.

Tablature with Rhythm Stems

VexFlow now supports tablature with rhythm information thanks to Cyril Silverman.

The stems work well with modifiers such as dots and articulations (fermatas, strokes, etc.)

Vex Fretboard

The code to render fretboard diagrams is now available on Github. It lets you render guitar and bass fretboards like these:

VexWarp

VexWarp is a web audio experiment I've been working on that lets you slow down, pitch shift, and loop your audio and video files, all in the browser. There's also a Chrome plugin for offline access. At its core is a JavaScript implementation of a phase vocoder variant.

That's all, folks!

Tuesday, January 15, 2013

Hear your Music!

My VexFlow now lets you hear what you write. Click the play button on the demo below to hear for yourself.

To enable playback, all you need to do is add the line options player=true to the beginning of your VexTab notation. Here's a bebop example:
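
A minimal sketch of notation with playback enabled (the notes below are illustrative, not the original bebop line):

options player=true
tabstave notation=true key=C time=4/4
notes :8 7/4 5/4 7/3 5/3 :q 7v/2 5/2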

See A Jazz Blues Progression for more musical examples.

To see the rest of the new features for this release, go to New in VexFlow.

Wednesday, January 09, 2013

New in VexFlow!

It's been a year since I've posted anything, so how about some news?

Lots of new features in VexFlow and VexTab.

My VexFlow

We launched My VexFlow, a content publishing platform for musicians based on VexFlow. The platform allows musicians, music teachers, and students to create, publish, and share content with beautiful music notation, tablature, chord charts, diagrams, etc.

Here are some strawmen that I created to show what My VexFlow can do:

You can also embed content from My VexFlow into your own sites and blogs, just like you would a YouTube video.

Here's an example.

The above notation was created on My VexFlow, and embedded into this blog post.

A New VexTab

VexTab was overhauled once again, this time rewritten in CoffeeScript and Jison. The code is much cleaner, more flexible, and more manageable.

The new VexTab is no longer just for guitarists -- it's got note/octave syntax in addition to the fret/string syntax. It also supports almost all VexFlow features, such as tuplets, rests, articulations, score symbols and modifiers, etc.

It's also moved to its own repository: github.com/0xfe/vextab

Guitar and Bass Fretboards

My VexFlow has support for guitar fretboards, which you can customize and use for descriptive diagrams.

Slash Notation

Render chord charts with rhythmic hints using slash notation. This is also handy for some drum notation.

Lots More!

This isn't all. There are score annotations, tuplets, stave annotations, etc. etc. See the complete list at: New in My VexFlow.

Experiment and enjoy!

Monday, January 02, 2012

More K-Means Clustering Experiments on Images

I spent a little more time experimenting with k-means clustering on images and realized that I could use these clusters to recolor the image in interesting ways.

I wrote the function save_recolor to replace pixels from the given clusters (replacements) with new ones of equal intensity, as specified by the rgb_factors vector. For example, the following code will convert pixels of the first two clusters to greyscale.

> save_recolor("baby.jpeg", "baby_new.jpg", replacements=c(1,2),
               rgb_factors=c(1/3, 1/3, 1/3))

It's greyscale because the rgb_factors vector distributes each pixel's intensity evenly among the three channels. A factor of c(20/100, 60/100, 20/100) would shift pixels from the cluster toward green, allocating 60% of each pixel's intensity to the green channel.

Let's get to some examples. Here's an unprocessed image, alongside its color clusters. I picked k=10. You can set k by specifying the palette_size parameter to save_recolor.

Here's what happens when I remove the red (the first cluster).

> save_recolor("baby.jpeg", "baby_new.jpg", replacements=1)

In the next image, I keep the red, and remove everything else.

> save_recolor("baby.jpeg", "baby_new.jpg", replacements=2:10)

Below, I replace the red cluster pixels, with green ones of corresponding intensity.

> save_recolor("baby.jpeg", "baby_new.jpg", replacements=1,
               rgb_factors=c(10/100, 80/100, 10/100))

And this is a fun one: Get rid of everything, keep just the grass.

> save_recolor("baby.jpeg", "baby_new.jpg", replacements=c(1,3:10))

I tried this on various images, using different cluster sizes, replacements, and RGB factors, with lots of interesting results. Anyhow, you should experiment with this yourselves and let me know what you find.

I should point out that nothing here is novel -- it's all well known in image processing circles. Still, it's pretty impressive what you can do when you apply simple machine learning algorithms to other areas.

Okay, as in all my posts, the code is available in my GitHub repository:

https://github.com/0xfe/experiments/blob/master/r/recolor.rscript

Happy new year!

Saturday, December 31, 2011

K-Means Clustering and Art

Cross posted from Google+.

My coworker at Google, Tony Rippy, has for a while been working on a fascinating problem. Take all the pixels of a photograph, and rearrange them so that the final image looks like an artist's palette -- something to which you can take a paintbrush and recreate the original image.

He's got some really good looking solutions which he might post if you ask him nicely. :-)

This turns out to be a tricky problem, and it's hard to come up with an objective measure of the quality of any given solution. In fact, the quality is very subjective.

Anyhow, while studying the k-means clustering algorithm from ml-class, it struck me that k-means could help with extracting a small palette of colors from an image. For example, by using each of the RGB channels as features and Euclidean distance as the similarity metric, one can run stock k-means to generate clusters of similar colors.

I coded up a quick R script to test this and got some interesting results. Here is an example of an image with its potential palette. Recall that the second image is simply the first image with the pixels rearranged.

I experimented with various values of k (the number of clusters) for the different images. It turns out that it's pretty hard to algorithmically pre-determine this number (although various techniques do exist). The water villa pic above has 15 clusters, the nursery pic below has 20, and the cartoon has 6.

Note that this is only one subproblem of the original one; there is also the subproblem of placement, which I skirted around by simply arranging the colors in vertical bands across the final image. I'm pretty sure no artist's palette looks like this.

Also, these palettes aren't very "clean". Since the original pictures themselves are noisy, some of this noise arbitrarily creeps into the various clusters. Working with a filtered version of the picture would be cheating, so we won't do that. But we might be able to extract the noisy pixels, put them in a special cluster, and run k-means on the remaining pixels.

Okay, enough talk. Here's the code: https://github.com/0xfe/experiments/blob/master/r/palette.rscript

First install the cclust and ReadImages packages from CRAN, then try out the algorithm in an R console:

> source('/path/to/palette.rscript')
> plot_palette('/path/to/some/image.jpg')

This will produce a plot with the original image and the transformed one next to each other, like the attached pics below. It uses 10 clusters by default, for a palette of 10 colors. You can change this by passing the cluster count as the second parameter to plot_palette.

> plot_palette('/path/to/some/image.jpg', 20)

That's all folks!

Sunday, August 21, 2011

A Web Audio Spectrum Analyzer

In my last post, I went over some of the basics of the Web Audio API and showed you how to generate sine waves of various frequencies and amplitudes. We were introduced to some key Web Audio classes, such as AudioContext, AudioNode, and JavaScriptAudioNode.

This time, I'm going to take things a little further and build a realtime spectrum analyzer with Web Audio and HTML5 Canvas. The final product plays a remote music file and displays the frequency spectrum overlaid with a time-domain graph.

The demo is here: JavaScript Spectrum Analyzer. The code for the demo is in my GitHub repository.

The New Classes

In this post we introduce three new Web Audio classes: AudioBuffer, AudioBufferSourceNode, and RealtimeAnalyserNode.

An AudioBuffer represents an in-memory audio asset. It is usually used to store short audio clips and can contain multiple channels.

An AudioBufferSourceNode is a specialization of AudioNode that serves audio from AudioBuffers.

A RealtimeAnalyserNode is an AudioNode that returns time- and frequency-domain analysis information in real time.

The Plumbing

To begin, we need to acquire some audio. The API supports a number of different formats, including MP3 and raw PCM-encoded audio. In our demo, we retrieve a remote audio asset (an MP3 file) using AJAX, and use it to populate a new AudioBuffer. This is implemented in the RemoteAudioPlayer class (js/remoteaudioplayer.js) like so:

RemoteAudioPlayer.prototype.load = function(callback) {
  var request = new XMLHttpRequest();
  var that = this;
  request.open("GET", this.url, true);
  request.responseType = "arraybuffer"; // binary response; not supported by jQuery
  request.onload = function() {
    // Decode the MP3 into an AudioBuffer, mixing down to a single channel.
    that.buffer = that.context.createBuffer(request.response, true);
    that.reload();
    callback(request.response);
  };

  request.send();
};
Notice that jQuery's AJAX calls aren't used here. This is because jQuery does not support the arraybuffer response type, which is required for loading binary data from the server. The AudioBuffer is created with the AudioContext's createBuffer function. The second parameter, true, tells it to mix down all the channels to a single mono channel.

The AudioBuffer is then provided to an AudioBufferSourceNode, which will be the context's audio source. This source node is then connected to a RealtimeAnalyserNode, which in turn is connected to the context's destination, i.e., the computer's output device.

var source_node = context.createBufferSource();
source_node.buffer = audio_buffer;

var analyzer = context.createAnalyser();
analyzer.fftSize = 2048; // 2048-point FFT
source_node.connect(analyzer);
analyzer.connect(context.destination);
To start playing the music, call the noteOn method of the source node. noteOn takes one parameter: a timestamp indicating when to start playing. If set to 0, it plays immediately. To start playing the music 0.5 seconds from now, you can use context.currentTime to get the reference point.
// Play music 0.5 seconds from now
source_node.noteOn(context.currentTime + 0.5);
It's also worth noting that we set the granularity of the FFT to 2048 by setting the analyzer.fftSize property. For those unfamiliar with DSP theory: the analyzer transforms each 2048-sample frame of audio into fftSize / 2 = 1024 frequency bins, where the nth bin represents the magnitude of frequencies around n * sampleRate / fftSize.
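
Since each bin spans sampleRate / fftSize Hz, you can compute the bin width directly (a small sketch using the context and analyzer from above):

// Each bin spans sampleRate / fftSize Hz. At a 44.1kHz sample rate
// with a 2048-point FFT, that's about 21.5 Hz per bin.
var bin_width_hz = context.sampleRate / analyzer.fftSize;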

The Pretty Graphs

Okay, it's now all wired up -- how do I get the pretty graphs? The general strategy is to poll the analyzer every few milliseconds (e.g., with window.setInterval), request the time- or frequency-domain data, and then render it onto an HTML5 Canvas element. The analyzer exports a few different methods to access the analysis data: getFloatFrequencyData, getByteFrequencyData, and getByteTimeDomainData. Each of these methods populates a given ArrayBuffer with the appropriate analysis data.

In the snippet below, we schedule an update() function every 50ms that breaks the frequency-domain data into 30 bins and renders a bar representing the average magnitude of the points in each bin.

canvas = document.getElementById(canvas_id);
canvas_context = canvas.getContext("2d");

function update() {
  // This graph has 30 bars.
  var num_bars = 30;

  // Get the frequency-domain data (one byte per bin)
  var data = new Uint8Array(analyzer.frequencyBinCount);
  analyzer.getByteFrequencyData(data);

  // Clear the canvas
  canvas_context.clearRect(0, 0, canvas.width, canvas.height);

  // Break the samples up into bins
  var bin_size = Math.floor(data.length / num_bars);
  for (var i=0; i < num_bars; ++i) {
    var sum = 0;
    for (var j=0; j < bin_size; ++j) {
      sum += data[(i * bin_size) + j];
    }

    // Calculate the average magnitude of the samples in the bin
    var average = sum / bin_size;

    // Draw the bars on the canvas
    var bar_width = canvas.width / num_bars;
    var scaled_average = (average / 256) * canvas.height;

    canvas_context.fillRect(i * bar_width, canvas.height, bar_width - 2,
                            -scaled_average);
  }
}

// Render every 50ms
window.setInterval(update, 50);

// Start the music
source_node.noteOn(0);
A similar strategy can be employed for time-domain data, except for a few minor differences: time-domain data is usually rendered as waves, so you might want to use a lot more bins and plot pixels instead of drawing bars. The code that renders the time- and frequency-domain graphs in the demo is encapsulated in the SpectrumBox class in js/spectrum.js.
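
Here's a minimal sketch of such a time-domain renderer (the wave_canvas element, variable names, and 50ms interval are illustrative assumptions, not from the demo):

// Hypothetical sketch: plot the current audio frame as a wave,
// one pixel per sample, using the same analyzer as above.
var wave_canvas = document.getElementById("wave");
var wave_context = wave_canvas.getContext("2d");

function update_wave() {
  var data = new Uint8Array(analyzer.fftSize);
  analyzer.getByteTimeDomainData(data);

  wave_context.clearRect(0, 0, wave_canvas.width, wave_canvas.height);

  for (var i = 0; i < data.length; ++i) {
    var x = (i / data.length) * wave_canvas.width;
    var y = (data[i] / 256) * wave_canvas.height;
    wave_context.fillRect(x, y, 1, 1);
  }
}

window.setInterval(update_wave, 50);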

The Minutiae

I glossed over a number of things in this post, mostly with respect to the details of the demo. You can learn it all from the source code, but here's a summary for the impatient:

The graphs are actually two HTML5 Canvas elements overlaid using CSS absolute positioning. Each element is driven by its own SpectrumBox instance: one displays the frequency spectrum, the other the time-domain wave.

The routing of the nodes is done in the onclick handler of the #play button -- it takes the AudioSourceNode from the RemoteAudioPlayer, routes it to the frequency analyzer's node, routes that to the time-domain analyzer's node, and finally to the destination.

Bonus: Another Demo

That's all folks! You now have the know-how to build yourself a fancy new graphical spectrum analyzer. If all you want to do is play with the waves and stare at the graphs, check out my other demo: the Web Audio Tone Analyzer (source). It's really just the same spectrum analyzer from the first demo, connected to the tone generator from the last post.

References

As a reminder, all the code for my posts is available at my GitHub repository: github.com/0xfe.

The audio track used in the demo is a discarded take of Who-Da-Man, which I recorded with my previous band Captain Starr many many years ago.

Finally, don't forget to read the Web Audio API draft specification for more information.

Enjoy!