Posts filed under: Coding

Looking at song popularity

While digging around the CBC R3Labs data, a question came up: what does it mean for a track to be “popular” on R3?

Fortunately, it’s pretty easy to find the number of times the top 10% or 20% of tracks are played, but we also thought it would be interesting to compare some of this “popularity” data from the R3 website with that of another music site. We found the comparison to be quite interesting, in a geeky, push-the-glasses-back-up-on-nose kinda way.

We looked at plays of the top 100 tracks on both services for a given week, and found that “popularity” is noticeably skewed towards the mega hits on the other service, in comparison to R3. For example, the most popular track there accounts for well over twice as many of the top 100 plays as its R3 counterpart. Also, the top 20 tracks there account for almost 40% of the plays of the top 100 songs, in contrast to less than 30% for the top 20 on R3. Check out the chart below to see the differences.
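
To make the comparison concrete, here’s a small sketch of the concentration measure we’re describing, in Ruby. The play counts are invented for illustration, not real R3 or chart data:

```ruby
# What share of the top-100 plays do the top N tracks account for?
# The play counts passed in are hypothetical, for illustration only.
def top_n_share(play_counts, n)
  top100 = play_counts.sort.reverse.first(100)
  top100.first(n).sum.to_f / top100.sum
end

plays = [500, 300, 100, 50, 50]  # made-up weekly play counts
top_n_share(plays, 2)            # => 0.8 (the top 2 tracks get 80% of plays)
```

A flatter curve (a lower share for the same N) is what we mean by listeners “venturing out past the obvious tracks.”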

While we don’t pretend to know all the reasons for the difference in the popularity curves between these two services, it’s certainly fun to speculate! Perhaps CBC R3 visitors are more exploratory than users of the other service, often venturing out past the obvious tracks on the website. Or maybe Canadian audiences are not as influenced by the massive music marketing machine as the predominantly US-based audience. In a perfect world, I would like to imagine that Canada’s history of providing recording and tour grants for artists has helped fuel the creation of this large back catalog of interesting music, while at the same time helping build demand for it.

What do you think is behind the difference in “popularity” between R3 and the other service?

(This is a repost from the R3Labs blog over at CBC R3)

Posted on: 10.03.25 | no comments

Latest Project: BirdHerd

This is mostly a repost, but I wanted to let everyone know about my latest project: BirdHerd.

I built BirdHerd because I needed a way for the many startups I work with to use Twitter effectively.

In the past, we always just shared a common password amongst team members, but this became troublesome: when a team member left, we’d have to change the password and then tell everyone what the new password was. Or someone would forget the password, reset it to something new, and effectively lock everyone out. Needless to say, this was not ideal, and it had the undesired effect of people not updating the group Twitter account as often as they could.

Next, I started looking at a few existing Twitter tools, but I definitely wanted something that would work from any client or device. Our team members use TweetDeck, HootSuite, SMS and the Twitter web site, and I knew that if they had to log into a different web app to update the group account (like the group Twitter tool CoTweet forces you to do), the updates wouldn’t flow as fast as they could/should. I also wanted to use OAuth to manage passwords safely. Most importantly, I wanted to be able both to communicate publicly by posting messages from the Twitter account, and to send a private group message to all my team members (through a DM).

I couldn’t find such a tool, and so, with a firm belief that there had to be a better way, BirdHerd was born – and you can try it right now. It’s even getting some good initial press! I hope it makes using Twitter with your group or team easy.

Posted on: 10.02.11 | no comments

Small pieces loosely joined - to a ski hill

I’ve always been a fan of the “small pieces loosely joined” approach to building simple web apps. A little bit of something from here, hook it up to there, and voilà! Sometimes something useful can spring into existence.

With the coming ski season (which I’m really excited about), I was spending some time on the Whistler Blackcomb website checking out the conditions, and I happened to notice a few public data feeds. One of these feeds contains the status of all the lifts on Whistler and Blackcomb, so of course, I had to hook it up to Twitter. The WhistlerBot updates its Twitter stream every time a lift’s status changes, and judging from my visit to the ski hill last weekend, it seems to do so in pretty close to real time.
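
The bot’s core logic is just a diff between polls of the feed. Here’s a sketch in Ruby; the feed’s actual schema isn’t shown here, so the lift-name-to-status shape is an assumption:

```ruby
# Compare the latest lift statuses against the previously seen ones and
# return only the lifts whose status changed (the ones worth tweeting).
# The data shape (lift name => status string) is assumed, not the real feed.
def status_changes(previous, current)
  current.select { |lift, status| previous[lift] != status }
end

was = { 'Peak' => 'standby', 'Harmony' => 'open' }
now = { 'Peak' => 'open',    'Harmony' => 'open' }
status_changes(was, now)   # => { 'Peak' => 'open' }
```

Each entry in the result becomes one tweet, which is why the stream stays quiet unless something actually changes.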

When I was on the hill, I turned on mobile notifications for just this account (so I don’t get distracted by other Twitter noise), and now my pocket will vibrate whenever the Peak chair changes from standby to open, for example. No one likes to ski a tracked-out bowl, right?

While I was at it, I also threw up a slightly hilarious (but still useful) companion site, and gave the WhistlerBot its own home.

Hope other folks find these useful too.

Posted on: 09.11.17 | no comments

Is Magento right for your next ecommerce project?

I recently launched a new ecommerce site using the open source package Magento. This is the first site I’ve worked on that uses Magento, so I thought I would jot down some of my early impressions.

- Tried the online demo, and loved the user experience and admin dashboard. Very polished UI for an Open Source project. (+1 for Magento)
- Huge download (-1 for Magento)
- Convoluted download process and scant details for the SVN checkout (-1 for Magento)
- The Zend Framework. Magento is built on the Zend Framework, and in my opinion, I need the Zend Framework like a hole in the head. Everyone has their favorite flavour of ice cream, so this is really just my personal preference, but coming from something like Rails to the Zend Framework is like running full speed into a brick wall. Does one really need an XML configuration file to point to the location of other XML configuration files? Yeeessh (-2 for Magento)
- Decent web admin tools to add products and create a catalog. Clients could (mostly) grok it. (+1 for Magento)
- Pretty straightforward to theme, although it could be easier. (+1 for Magento)

Totally subjective and largely meaningless score: -1 (Magento’s weaknesses slightly outweigh positives, in my opinion)

So in summary, Magento is a large, (overly?) complex code base with a great admin and frontend user interface. It’s fairly easy to skin, but I found developing custom functionality a drag. Keep in mind, this was the first Magento site I’ve built, and there’s always a learning curve for every project.

My biggest complaint, though, is something that I have trouble even articulating. It’s a “vibe”, if you will. Having worked on the Drupal project for a number of years, I have seen firsthand how an open source project can (and should) be run. The Magento team could learn a lot from looking at the development processes that projects like Drupal and WordPress employ. For example, with Magento extensions, in most cases it seemed impossible to download the actual source code; instead, you had to enter a key and then Magento would download the package for you. In almost all cases, it feels like the actual source code is kept away from end users. Where is the place to file and track bugs against various extensions? Where can I browse the source code online? How can I contribute bug fixes? To me, an open source project should be more than trialware with an option to upgrade to a premium edition. Open source software is largely about community development, and I don’t see an active developer community hacking away on Magento.

Other alternatives: Ubercart and Spree (open source); Shopify and FoxyCart (monthly fee).

Posted on: | 2 comments

Introducing: SXSW Gig Guide

With around 1600 bands playing at SXSW, planning your time is critical. With the inevitable slipping of gig schedules, knowing who else is playing at a given venue is really important, and so I threw together this tool to help plan my evening’s entertainment. You might also find it helpful.

I screen scraped the SXSW site, plotted the venue data on a map, and then scraped artist MP3s from the SXSW site and hooked those into the new Yahoo Media Player. The end result is that you can click on a venue marker and then listen to the bands that are playing there that evening.

It was also interesting to see how long each part of this app took to build:

  • Using Ruby and the totally awesome hpricot HTML parser to write the screen scraper to grab HTML from the SXSW site and convert the data to JSON - 20 minutes
  • Writing the JS to show the JSON data on the map - 1 hour
  • Fussing with CSS and HTML layout (which is still broken!) - 3+ hours!
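
The scrape-to-JSON step was the quick part. Here’s a sketch of its shape in Ruby (using the stdlib json library; the row structure and key names are assumptions, not the real scraper’s):

```ruby
require 'json'

# Once venue and band names have been pulled out of the HTML, group the
# rows by venue and emit the JSON blob the map page consumes.
# The row shape ({ venue:, band: }) is an assumption for illustration.
def venues_to_json(rows)
  rows.group_by { |row| row[:venue] }
      .map { |venue, acts| { venue: venue, bands: acts.map { |a| a[:band] } } }
      .to_json
end

rows = [
  { venue: 'Venue A', band: 'Band One' },
  { venue: 'Venue A', band: 'Band Two' },
]
venues_to_json(rows)
```

The map JS then just iterates over this array, dropping one marker per venue with its band list attached.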

Moral of the story - Either learn more CSS, or surround myself with talented developers and designers so I don’t have to. I’m inclined to do the latter…

Next up is to incorporate this data into Rilli so it’s possible to build your own calendar and see where your friends are going.

Thanks to Jeremy Keith for inspiration, and Mike Purvis for some Map code.

UPDATE: Now working in IE7. Thanks to Lucas for the bug report.

UPDATE 2: A few folks have asked for the data I’ve scraped from the SXSW site. The (huge) JSON file can be found here. Go nuts…

Posted on: 08.03.06 | one comment

Introducing Prankmail (and the Drupal way VS roll-yer-own)

A few rainy weekends ago, I had the urge to roll up my sleeves and build something. Every web application I’ve built in the last few years has been built on Drupal, and I wanted to see (remind myself) what it would be like to develop an application from scratch (where “scratch” = a random collection of open source components combined with bits from Drupal). I had also given myself the timeframe of a weekend, so building “Basecamp right” was out. I wanted a simple coding project.

Since I’m extremely childish, immature, and a fan of the practical joke, I decided to build a web app that let you send an email to someone, but make it appear as if it comes from someone else. I decided to call it Prankmail. Frivolous, slightly dangerous, and perfect for a rainy weekend indoors.

Skipping to the good part, you can check it out right now.

And if you would like to read about how I built it, and how the “build from scratch” process compares to building a Drupal site, continue on:

Before I really started with application specific code, I wanted to grab a few Open Source pieces and get a simple and efficient framework to build on (We’re not using Drupal, after all).

I wanted to have clean URLs (none of that goofy query string nonsense), so I took a look at how Drupal uses mod_rewrite and .htaccess files and cobbled together a simple solution. I’ve also been looking at the new Zend Framework, and I like the way it maps URLs to controllers. I am also a fan of Drupal’s menu system, so I hacked up the arg() function from Drupal and made it route URLs to controllers. I like Ruby’s “convention over configuration”, and so by default I set it up so a URL of ‘/message/123’ gets passed to a function ‘controller_message’ with an arg of ‘123’, if that function exists.
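
That dispatch convention is small enough to sketch. Here it is in Ruby for brevity (Prankmail itself is PHP, and these names are illustrative, not the actual code):

```ruby
# Convention-over-configuration routing: the first path segment names
# the controller function, and the remaining segments become its args.
# e.g. '/message/123' => call controller_message('123'), if it exists.
def route(path)
  segments = path.sub(%r{^/}, '').split('/')
  handler = "controller_#{segments.first}"
  [handler, segments.drop(1)]
end

route('/message/123')   # => ["controller_message", ["123"]]
```

In the PHP version, the equivalent of a `respond_to?` check guards the call, so unknown paths fall through to a 404 instead of a fatal error.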

I really like Drupal’s very lightweight DB abstraction layer, so I grabbed that as well (mostly because the syntax is so natural to me at this point) and stuck it in my web app.

For templating, I decided to give Smarty one last try. Every time I struggled with the syntax, or found myself fighting with it, I could hear Rasmus saying “PHP *is* a templating language, stupid!” Next time, I think I will just stick with a ‘pure’ PHP templating solution.

Setting up this simple framework let me write the rest of the application with speed and ease. Included libraries aside, the actual application itself is quite small.

For help on the UI, I used jQuery and specifically Thickbox. I also found the Visual jQuery site to be a great resource when I got tripped up on syntax. jQuery is truly fun to code with, and it’s a great feeling to have so much control over the DOM with such simple and beautiful syntax.

I mean, how is the following snippet not poetry in its simplicity:
//add a token to the form that we will check to
//make sure the form processor only looks at forms
//submitted from this page...

The above snippet leads into how I was concerned with spammers and bots submitting mail, so I borrowed a technique from Jack Born and used jQuery to append a token (MD5’d salt + timestamp) to the submission form’s DOM, while at the same time inserting the same token into the session. When the form is submitted, I check that the timestamp is “fresh” (within 20 minutes) and that the two tokens match. This should prevent, or at least make it more difficult for, a bot to post directly to my form processor (and I don’t have to use a silly CAPTCHA).
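
The server-side half of that check is simple. Here’s a sketch in Ruby (the salt value and function names are placeholders, and the session plumbing is omitted):

```ruby
require 'digest/md5'

# Form token scheme: token = MD5(salt + timestamp), stashed in the
# session when the form is rendered and compared on submit.
SALT = 'not-the-real-salt'   # placeholder; the real salt stays server-side
MAX_AGE = 20 * 60            # tokens are considered "fresh" for 20 minutes

def make_token(timestamp)
  Digest::MD5.hexdigest(SALT + timestamp.to_s)
end

def token_valid?(token, timestamp, now)
  (now - timestamp) <= MAX_AGE && token == make_token(timestamp)
end
```

On render, the token for the current time goes into both the session and (via jQuery) the form’s DOM; on submit, `token_valid?` rejects stale or forged posts.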

So how did the whole process work, compared to using my usual framework of choice, Drupal?

Setting up the whole framework from scratch was definitely the fun part. I really enjoyed having a chance to “do things my way”, rather than hunt through documentation or lines of code in search of an answer. I think many people code because it feels good to build something, and while Drupal gives you many things “for free”, it was very enjoyable and satisfying to dive in and do it myself.

When it got to the point of adding comments and RSS feeds to the site, I have to admit that it was proving to get a bit tedious, and it would have been nice just to “turn comments on”, in Drupal. But all that being said, the whole exercise reminds me that there are many ways to get a job done. Drupal is a fantastic tool for a specific set of problems, but it’s just that.

Moral of the story is?

There are no silver bullets, and building silly websites is easy and fun.

Now go send some mail!

Posted on: 06.11.28 | 2 comments

Back from Brussels

So I’m back in San Francisco after returning from Brussels and one insane week packed with conferences. (EuroOSCON, GovCamp, DrupalCon, and BarCampBrussels). I’m exhausted, but buzzing with excitement. There are just too many people doing interesting things in this world.

Next up: San Fran Drupalers should unite and throw something serious down at the upcoming Yahoo! Hack Day. I’d like to go, so if there are other Drupal heads in attendance, I’d love to work on any kind of hack involving Drupal, Yahoo! maps, and concert data. Send me a shout if you’re interested!

Posted on: 06.09.27 | one comment

Speeding up XML-RPC calls in Drupal

Through Bryght, I’ve been working on an interesting project that really exercises the “toolkit” and “web services platform” nature of Drupal. The final architecture of our project ended up having many distributed processes, all sending data to Drupal through XML-RPC, and we were using Drupal mainly to aggregate and display the results.

Performance was very important, and we had a huge bottleneck with Drupal’s XML-RPC library. XML-RPC calls were taking a long fricken time, and we were making tons of them.

Fortunately I had help from Walkah and Moshe, and I thought I would pass on what we did to drastically increase XML-RPC performance. This worked for us, so maybe it might help someone else out.

The problem is of course that Drupal does a full bootstrap for every XML-RPC call.

Our solution:
The first, simple (and huge) win was to use a PHP opcode cache - this drastically reduces the bootstrapping time. We used eAccelerator, but there are many others.

Secondly, since we were working in a controlled environment (we knew what modules were going to be called via XML-RPC) we were able to hack the xmlrpc.php file to avoid a full bootstrap. The downside is we had to hardcode a few path things into the file, but again, this is for a particular app/site, and the performance gains were worth it.

Courtesy of Moshe, here is our revised xmlrpc.php file (you might want to rename it to xmlrpcs.php or something):

<?php
// $Id$

/**
 * @file
 * Optimized page for handling XML-RPC requests. Your module must implement hook_init() or hook_exit()
 * so that it is loaded during the bootstrap. Otherwise, use hook_xmlrpc as usual. Note that only required modules are loaded at this point.
 * @author
 * Moshe Weitzman
 */

$path = './sites/mwpb-9.local/modules/';
$module = 'my-xmlrpc-module'; // the name of the module where your XML-RPC calls are

include_once './includes/';
include_once './includes/';
include_once './includes/';

require_once $path . $module . '.module';
$function = $module . '_xmlrpc';
$callbacks = $function();
// print_r($callbacks);

// Put any t() or other core functions here that your module relies upon.
function t($string) {
  return $string;
}

This was enough to get a huge increase in performance. (Yeah!)

Further thoughts:
One could use “mysql_query()” and avoid the bootstrap altogether, although I don’t think the bootstrap adds very much overhead at this point, and the benefits of using Drupal’s database abstraction layer probably outweigh any performance overhead here.

I wonder if there is any way to get Drupal to intelligently load necessary code instead of doing a full bootstrap. The “hardcoding” method above certainly works, but I am craving a more elegant and universal solution. Any thoughts?

Posted on: 06.04.19 | one comment

Playlist module released

Over the last little while I’ve been working on a playlist module for Drupal. Farsheed was also working on a playlist module, so we have combined forces and now present to you the fruit of our labours: the new, improved, ass-kicking playlist module.

Features include:

  • support for multiple playlist formats, including XSPF, M3U, and PLS
  • podcastable playlists
  • AJAX ordering of audio tracks
  • multiple ways to add an audio track to a playlist
  • For the geeks in the crowd, playlists are implemented as a new node type.
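
For those who haven’t run into it, XSPF (the first format in the list above) is just a small XML document. A minimal playlist looks something like this (the track title and URL are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<playlist version="1" xmlns="http://xspf.org/ns/0/">
  <trackList>
    <track>
      <title>Example Track</title>
      <location>http://example.com/track.mp3</location>
    </track>
  </trackList>
</playlist>
```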

Farsheed has set up a nice demo site where you can go and kick the tires a bit, though you have to sign up first to create your own playlists. Enjoy!

Posted on: 05.09.27 | 3 comments