How to get a Github post-receive URL hook working on a non-public site
Written by Brett Brewer   
Tuesday, 24 May 2011

I've been getting cozy with Github lately and I thought I should share my experiences of getting a post-receive URL hook working, so that when someone does a git push to my main development repository it forces my dev site to do a git pull, deploying all the latest changes from the repository to the staging server. In my case, I have a fairly secure development server running on a domain that has no public DNS records. This means that to view the site, someone must have the IP-to-hostname mapping in their computer's hosts file. This also means that Github has no way of finding the site to call any post-receive URL scripts to trigger the git pull on the remote site. What follows is a list of everything that needed to be done to make this work.

First off, if you're using PHP like me, and your server has the slightest modicum of security, there's no way you're going to be able to successfully issue a shell_exec('git pull') command via PHP and have it work, because PHP will only have the privileges of your web server user, and chances are your repos and site files are owned by a different user. Enter a nice little Apache module called "suPHP". I can't get into the details of how to properly set up suPHP because I had my dedicated server experts do that for me, but assuming you can get it set up so that your staging server serves your development site under the same user that owns the site files and git repos, you can then set up a PHP script that can successfully do a shell_exec('git pull') and have it work. So, assuming you got suPHP set up, or you're running a really insecure setup that allows your webserver user to run shell commands, you can create a PHP script on your server such as this:

<?php
//decode the JSON payload that Github POSTs to the hook URL
$payload = json_decode(stripslashes(@$_POST['payload']));
$message = print_r(@$payload, true);
//do the pull and capture its output
$message .= shell_exec('/usr/bin/git pull');
//put your own address in the first argument to get the results emailed
mail('', 'github post receive hook fired', $message);
echo $message;

That was my first post-receive URL hook script. I saved it in my staging domain's webroot folder, and then in the Github admin for my repos I just entered the URL of that script. This worked great so long as my staging domain had a public DNS record that allowed Github to find it, but for security reasons I didn't really want most people having access to the staging server, so I removed the public DNS records and just added a hosts-file mapping for my domain to my computers so I could browse it like it had a normal DNS record. Of course, this broke the Github post-receive URL hook. The first thing I thought to do was to ask Github support for a way to map my IP to my staging server in their system, so I fired off an email to them to see if they would comply, and then immediately thought of a rather simple solution. Github can't find a site with no DNS records, but it could easily find my server via its IP address. Since my staging server serves some other domains, I couldn't just give Github the server IP and expect it to find the right domain to call the script on, so I created a new Apache config file to handle all requests that don't map to a specific site. For example, if your staging server is at a given IP address, you could create an Apache VirtualHost container such as this:

<VirtualHost _default_:80>
        ErrorLog /var/log/default.error
        CustomLog /var/log/default.access combined
        DocumentRoot /var/www/html
        #turn on suPHP for this site
        suPHP_Engine on
        AddHandler x-httpd-php .php .php3 .php4 .php5
        suPHP_AddHandler x-httpd-php
</VirtualHost>

This makes any requests to your main IP address look for files in the default Apache docroot at /var/www/html. The suPHP_Engine directives force the requests for PHP files to be handled by suPHP, so it uses whatever user you've got configured for suPHP. There are ways to get suPHP to use different users based on directives you put in the VirtualHost containers, but that's another story. We only needed one user, so we've got suPHP configured to always use that user and restricted to only work in certain directories. Securing your system is up to you. So anyway, I threw my post-receive URL PHP script in /var/www/html and added some things to allow it to update the proper dev site....


<?php
//put your own email address here if you want to receive email updates
//when commits happen, otherwise leave it blank or set it to false
$email = "";
//set $output_message to true if you want to be able to
//view the output of the script in a browser
$output_message = true;
$message = "Github script called on staging server<br/>";
//look for a request var named 'site'. This way you could theoretically
//have the same script update different sites depending on the site
//you pass in when you call the script.
$site = isset($_GET['site']) ? $_GET['site'] : (isset($_POST['site']) ? $_POST['site'] : "");
//'mystagingsite' stands in for whatever site key you choose
if ($site == 'mystagingsite') {
    $payload = json_decode(stripslashes(@$_POST['payload']));
    $message .= print_r(@$payload, true);
    $message .= shell_exec('cd /path/to/staging/site/dir;/usr/bin/git pull');
} else {
    $message .= "No valid site specified";
}
if ($email) {
    mail($email, 'github post receive hook fired', $message);
}
if ($output_message) {
    echo $message;
}

Then for the post-receive URL hook in Github, I entered the script's URL using the server's IP address instead of a hostname, with the site var appended to the query string.

There is an apparently undocumented feature in the Github post-receive URL hook that converts any $_GET vars passed in the URL into $_POST vars, so in addition to the "payload" post var, you will get any other vars you passed as $_GET vars in your $_POST array. I just included the check for a $_GET['site'] var in my script so that I can also call the script from a normal request in my browser. The above script obviously provides very little security, but since the domain I'm using it for has no public DNS records, there's not much chance of someone stumbling across it without knowing the site's IP. Since the script restricts the "git pull" command to running only on the sites specified in the $_GET['site'] request var, the worst someone could do is update my staging server with the latest code in my repos if they called it with the right site in the request, which is sort of the whole point of the script anyway, so that's not much of a worry. Having the script email me the results every time it runs keeps me informed of what is going on, so if anyone starts messing around with the script, it will be hard to miss in my inbox.
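If you want a little more safety than a single string comparison, one option is a whitelist that maps each allowed site key to a fixed directory, so a request can never point the git pull at an arbitrary path. This is just a sketch of that idea; the key name, the path, and the resolveSiteDir helper are my own placeholders, not part of the original script:

```php
<?php
//Hypothetical hardening of the hook script: only whitelisted site keys
//are accepted, and each key maps to a fixed directory.
function resolveSiteDir($site, array $siteDirs)
{
    return isset($siteDirs[$site]) ? $siteDirs[$site] : null;
}

$siteDirs = array(
    'staging' => '/path/to/staging/site/dir', //placeholder path
);

$site = isset($_REQUEST['site']) ? $_REQUEST['site'] : '';
$dir  = resolveSiteDir($site, $siteDirs);
if ($dir !== null) {
    //escapeshellarg() keeps a hostile 'site' value out of the shell
    echo shell_exec('cd ' . escapeshellarg($dir) . ' && /usr/bin/git pull 2>&1');
} else {
    echo "No valid site specified\n";
}
```

Because only the whitelisted keys resolve to a directory, a request like ?site=../../etc simply falls through to the "No valid site" branch.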

Now all is well in my Github world. 

Last Updated ( Tuesday, 24 May 2011 )
Watson puts humans on notice in Man vs. Machine Jeopardy Tournament
Written by Brett Brewer   
Monday, 14 February 2011

IBM's Jeopardy-playing supercomputer named "Watson" just played and nearly beat both of the top-winning Jeopardy champs of all time. I missed the show, but then found this video of the first half of it on YouTube. I had hoped to find the second half posted by the time I got done watching it, but no such luck. I did a little searching, and apparently Watson played one competitor to a tie while handily trouncing Ken Jennings, the winningest player in Jeopardy's history. So marks a milestone in computing history. The technology behind Watson can be used to answer questions of all kinds in a variety of industries, perhaps even solving problems that have heretofore been too complex for humans to reliably answer themselves. Could we be on the verge of a sci-fi-like future where some giant supercomputer finds the solutions to all our problems and we simply carry out its instructions? Probably not, but I can dream.

UPDATE: Both parts are now on YouTube:


Last Updated ( Monday, 14 February 2011 )
Distributing requests across multiple hostnames using consistent hashing
Written by Brett Brewer   
Sunday, 06 February 2011

If you've ever delved much into web site performance optimization, you're probably familiar with Google's PageSpeed and Yahoo's YSlow Firefox plugins and the typical list of suggestions they provide for improving page load times. One of the more common and least implemented suggestions (or most often wrongly implemented) is the idea of splitting requests for resources across multiple hostnames. The idea is that since the early days of browsers, there has been a limit on the number of resources that can be downloaded in parallel per hostname. The early browsers had a limit of 2 parallel downloads per hostname, and I'm really not sure if that limit has increased or by how much, but the point is, if you split your requests across a few different hostnames, your browser can download more resources in parallel without blocking the loading of other elements.

Unfortunately, there are very few good explanations of how to actually achieve this without destroying most of the inherent benefits of browser caching. You might be tempted just to set up some new CNAMEs for the new hostnames in your DNS and then write a simple function that will randomly choose a hostname to serve each of your static resources from, but this is actually a bad idea, because you have no guarantee that the resources will be served from the same URL on subsequent requests, so you will lose the benefits of browser caching. So then you might think, okay, I'll just use a static variable in a function to serve content sequentially in a round-robin fashion, so each resource would be served from the next host in the list and you just cycle through them repeatedly. As long as your pages never change and your resources are always in the exact same sequence on your page, this would work fine, but if you add an element somewhere, you'll throw off the sequence for the other page elements and you'll end up busting your cache again. So what is an aspiring site optimization wizard to do?
The answer turns out to be quite simple - use something called "consistent hashing".  Consistent hashing is the same thing used by popular backend technologies such as Memcached to determine which server in a pool of multiple Memcached servers to pull a particular resource from. Basically you create an algorithm that allows you to hash your filenames in such a way that they will always map to a particular server. This can get a little complicated when used for actual caching, where you may want to have a file mapped to multiple servers for failover purposes, but for something as simple as spreading requests across multiple hostnames, all you really need is an algorithm that you can use to consistently map a particular filename to a single hostname. Fortunately for all of us PHP developers, there's a neat little class called Flexihash that is suited for both simple and more complex uses of consistent hashing.
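To make the "always maps to the same server" idea concrete, here's a minimal sketch of a deterministic filename-to-hostname mapping using nothing but CRC32 and a modulo (the hostnames are placeholders). Note that a plain modulo remaps most filenames whenever the host count changes, which is exactly the problem a true consistent-hashing ring like Flexihash avoids:

```php
<?php
//Minimal sketch: deterministically map a filename to one of a fixed
//list of image hostnames. Same filename in => same hostname out, so
//the browser cache stays warm across page views.
function hostForFile($filename, array $hosts)
{
    //mask the sign bit so the modulo is never negative,
    //regardless of 32-bit vs 64-bit PHP builds
    $hash = crc32($filename) & 0x7fffffff;
    return $hosts[$hash % count($hosts)];
}

$hosts = array('images1.example.com', 'images2.example.com',
               'images3.example.com', 'images4.example.com');

//the same filename always maps to the same hostname
echo hostForFile('somefilename.jpg', $hosts), "\n";
```

This is fine when your host list never changes; Flexihash's ring approach is what you want if hosts might be added or removed, since it only remaps a fraction of the filenames.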

But enough of the useless background info, let's see how this would work in real life. For the sake of argument, let's say you're running a fairly big ecommerce site and you're already serving your static content from a Content Delivery Network (CDN) such as Akamai. You serve your web site requests from your main domain, and your images are all served from a separate images hostname, which means you most likely have a CNAME record set up in your DNS for that hostname. You now decide you want to add 3 more hostnames that all point to the same static content server, so you add CNAME records for them to your DNS zone, wherever your DNS is hosted. So now you can serve the same image files from any of the 4 hostnames you set up as CNAMEs. So, how do we set up our image file requests so that the requests are somewhat randomly spread across these different hostnames, but ensure that every image is requested using the same hostname every time? We use a little library called Flexihash from a nice coder named Paul Annesley. So without further ado, here is a very simple example of how you'd use Flexihash to generate your image urls.

<?php
//There are a couple of ways to include the required Flexihash
//library files and we'll just assume you figured that part out
//and included them already.
//So, assuming your Flexihash lib is already included....
//Instantiate our Flexihash object, to use the default hashing
//algorithm (CRC32) and to hash each filename to just 1
//target in our list of servers
$flexiHash = new Flexihash(null, 1);
//Now set up our list of servers, each with a weight of 1,
//so that Flexihash knows what to map the input filenames to.
//(these hostnames are just examples -- use your own CNAMEs)
$flexiHash->addTargets(array(
    'images1.example.com',
    'images2.example.com',
    'images3.example.com',
    'images4.example.com',
));
//set up some test filenames...
$filename1 = "somefilename.jpg";
$filename2 = "someotherfilename.jpg";
$filename3 = "yetanotherfilename.jpg";
//spit out a message showing how some test filenames
//will map to specific servers...
echo "<br/>$filename1 maps to " . $flexiHash->lookup($filename1);
echo "<br/>$filename2 maps to " . $flexiHash->lookup($filename2);
echo "<br/>$filename3 maps to " . $flexiHash->lookup($filename3);

So obviously, you'd do this a bit differently in your actual usage scenario. I'm getting ready to roll this out on a site, and in my case I converted the Flexihash library to a native Kohana library and then wrote a helper function which uses Flexihash to allow me to get the image url for each of my images. If you're already knowledgeable enough to know what consistent hashing is and that you need to use it, then you will hopefully have no trouble using the above example for your own implementation.
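For illustration, a helper along those lines might look like the sketch below. The function name image_url and the host list are my own invention, not part of Kohana or Flexihash; it assumes the Flexihash class is already loaded, as in the example above, and accepts an injectable hasher so it's easy to test in isolation:

```php
<?php
//Hypothetical helper: return a full image URL, with the hostname
//chosen consistently per filename via Flexihash. Any object with a
//lookup($filename) method can be injected in place of the default.
function image_url($filename, $hasher = null)
{
    static $default = null;
    if ($hasher === null) {
        if ($default === null) {
            //assumes the Flexihash class is already included
            $default = new Flexihash(null, 1);
            $default->addTargets(array('images1.example.com',
                                       'images2.example.com',
                                       'images3.example.com',
                                       'images4.example.com'));
        }
        $hasher = $default;
    }
    return 'http://' . $hasher->lookup($filename) . '/' . $filename;
}
```

In a template you'd then just write something like `<img src="<?php echo image_url('products/shoe.jpg'); ?>"/>` and let the hashing pick the hostname.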

So now you have no excuse not to finally go ahead and implement this optimization technique in your next attempt at ecommerce world domination. 
Last Updated ( Tuesday, 24 May 2011 )
Access Denied error in IE with Facebook Javascript SDK
Written by Brett Brewer   
Wednesday, 12 January 2011

If you've spent much time trying to implement the Facebook Javascript API on your web site, you may have come across some odd behavior or unexplained javascript errors in Microsoft Internet Explorer. One of the main causes of problems with the FB Javascript API in IE is a missing 'channelUrl' parameter in your FB.init() function call. To remedy such problems, you create a file such as "fbchannel.html" and in it you place the following contents.

<script src="//connect.facebook.net/en_US/all.js"></script>
Then in your javascript code, where you call the FB.init() function, set up the parameters to include a channelUrl.
window.fbAsyncInit = function() {
    FB.init({
        appId: '1234567890',
        status: true, cookie: true,
        xfbml: true,
        //use your own domain here
        channelUrl: document.location.protocol + '//www.yourdomain.com/fbchannel.html'
    });
};
(function() {
    var e = document.createElement('script'); e.async = true;
    e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
    document.getElementById('fb-root').appendChild(e);
}());

You might also want to make the script src in the fbchannel.html file dynamic so that it loads over https for any secure pages, but if you don't need to load the Facebook API on any secure pages, then don't worry about it. Also, if you have a separate test domain for your test site, be sure to switch the domain for the channelUrl on your test site, or you'll get the js error there too. 
Last Updated ( Wednesday, 12 January 2011 )
PHPList unveils new hosted service
Written by Brett Brewer   
Friday, 17 December 2010

The email marketing world is about to be shaken up a bit by a new player with some disruptive pricing plans that may finally force the big players to start lowering their prices. PHPList has been the de facto open source standard for managing mailing lists for as long as I can remember, but as a self-hosted solution, it is often difficult to maintain deliverability. Commercial hosted solutions tend to have much better deliverability than self-hosted solutions because commercial providers maintain strict anti-spam compliance and have relationships with the major email providers and ISPs to ensure that their messages get through spam filters, so long as senders comply with anti-spam regulations. Email marketers almost ALWAYS want to break the rules, so if you run a self-hosted solution, it's often impossible to comply with anti-spam regulations because the people calling the shots tell the people controlling the software to break the rules. Virtually every person I've ever worked for has wanted to break the email marketing rules/laws at some point. So going with a commercially hosted solution has the secondary benefit of taking the decision of anti-spam compliance out of the hands of your potentially irresponsible clients. Unfortunately, most of the major players in the commercial email marketing space have priced themselves right out of many people's range. They charge ridiculous rates for anything over a few thousand users. Enter PHPList's new hosted solution.

The pricing is less than half of what the nearest commercial players are charging. They claim to have relationships with the major email providers and ISPs and maintain very strict anti-spam compliance; in fact, you have to go through a trial period where they monitor your sending behavior and then slowly grant you increased sending privileges. Their most expensive pricing tier is just $180 for 100,000 messages per month. Still not cheap, but it's way cheaper than the alternatives. I have yet to find any commercial service that even advertises a pricing tier with a monthly sending limit over 100,000, which seems odd because it's easy to build a list with over 100,000 users these days. One site I work on just got 40,000 new signups from a 1-week promotional campaign. If we send out two messages per month, we'll go over the limit for most commercially hosted programs. So I'm looking forward to seeing how well the hosted PHPList service works and whether the prices stay at their current levels over time.

If you're looking for some alternatives to PHPList, you might want to check out this article, which lists "The Top 8 PHPList Alternatives". I'm not sure if they are really THE top 8 alternatives, but it might give you some good ideas.  

Last Updated ( Friday, 17 December 2010 )
Now I Really AM a Mac User Again!
Written by Brett Brewer   
Tuesday, 23 November 2010

I had resigned myself to buying a Dell or Lenovo or Asus laptop once the next generation started coming out for 2011, but thanks to some awesome folks, I'm typing this on my very own 17" MacBook Pro. Of course, the first thing I did was install Windows 7 on it. I'm actually amazed at how easy Apple made it to install Windows 7 on their hardware. Using the "Bootcamp Assistant" made the process incredibly painless. And once the OS was installed, I inserted the OSX install disc that came with the laptop and used the Windows Bootcamp program provided by Apple to install all the Windows drivers. Everything works perfectly. In fact, I think Windows 7 actually runs better on this machine than OSX. Even the battery life under Win7 is comparable to OSX. I left OSX on the machine so I can still play with it and run the odd Mac program, but after using both OSes side by side, I really do think Windows 7 runs better on this hardware than Snow Leopard.

In fact, at this point, my only real gripe is that Apple STILL hasn't seen fit to put a real "delete" key on their keyboard. They have a key marked "delete", but it's actually just the normal "backspace" key. As most Windows users know, the delete key deletes the character directly in FRONT of the cursor and the backspace key deletes the character directly BEHIND the cursor. Not having a real delete key just adds more keystrokes to my life and irritates me. I'll never understand why Apple insists on maintaining these quirks year after year (did they ever even add a 2nd mouse button to their mice?). These are precisely the kinds of things that keep many business users away from the Mac platform entirely. There's no reason whatsoever to have an "eject" button for the optical drive on the keyboard but not a real delete key. They could have just as easily put the optical drive eject button on the side of the computer next to the optical drive slot, the exact same way they put the little battery charge indicator button on the other side. 
Anyway, overall I'm still mostly thrilled with the laptop. The battery life isn't quite as good as I'd hoped, but it still beats the pants off any other 17" laptop with this kind of processing power. I'm very interested to see how quickly the battery life starts to decrease as it ages. 

Now time to put this baby to good use!

Last Updated ( Tuesday, 23 November 2010 )
POST data disappearing in Kohana?
Written by Brett Brewer   
Monday, 01 November 2010
Ever been coding a PHP script in Kohana and had all your POST data mysteriously disappear no matter what you do? Do you have any URL rewrite rules that strip the trailing slash off your URLs, or vice versa? If so, be sure that you submit your forms to the proper URL with the slash stripped off (or added, depending on your config); otherwise your scripts will silently redirect to the proper URL and you will lose your POST data in the process.
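One way to stay out of trouble is to build every form action through a tiny helper that always emits the canonical form. This is just a sketch of the idea, assuming your config treats no-trailing-slash as canonical (the helper name form_action is my own, not a Kohana function); the redirect that bites you is issued as a GET, which is why the POST body vanishes:

```php
<?php
//Hypothetical helper: normalize a form action so the form posts
//straight to the canonical no-trailing-slash URL instead of
//triggering the silent redirect that drops the POST data.
function form_action($url)
{
    $trimmed = rtrim($url, '/');
    //keep a bare domain root ("http://example.com/") intact,
    //otherwise drop the trailing slash
    return (substr_count($trimmed, '/') < 3) ? $trimmed . '/' : $trimmed;
}

echo form_action('http://example.com/contact/'), "\n"; //http://example.com/contact
```

If your rewrite rules instead add trailing slashes, flip the helper to append one; the point is simply that the form's action string must already match whatever the rewrite rules consider canonical.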
I'm a Mac User Again!
Written by Brett Brewer   
Wednesday, 01 September 2010
NOTE: After writing this article, a family emergency forced me to leave town on short notice and I had to cancel the order as I couldn't receive the laptop. I then tried to get the laptop at the Tucson Apple store before leaving town and suffice it to say, I'm not a Mac user again after all. I'll be writing a followup article on the laptop saga and why I may end up with a Dell XPS Studio laptop or another custom Clevo after all.
I waited and waited for a company other than Apple to come out with a powerful laptop with a 17" 1920x1200 screen that could get 8-10hrs of run time on a battery. After over a year of waiting, I'm still relatively certain that within a few months we will see such a computer from Asus or one of the other high-end manufacturers, but today I could officially wait no longer. As of 2:30pm MST, I am once again the owner of another overpriced Apple computer - a shiny new 17" MacBook Pro.
Everyone knows Macs are overpriced. You pay literally twice as much as you would for similar specs from competitors -- that is, until you factor in screen resolution and battery life. The only reason I paid $1500 more for a Mac than for similar hardware from another company was for the 8hr battery life and the 1920x1200 screen resolution. If it weren't for that, this computer wouldn't even be worth half of what I just had to pay. I really wanted a quad core, which is available from most of the competition, but Apple seems to think that a dual core i7 processor is "state of the art", so that's what I'm stuck with. I opted to upgrade the puny and slow 500GB 5400rpm hard drive to a 7200rpm version for $50, and maxed out the RAM at 8GB, but for $3000, it's still a somewhat disappointing configuration. The specs available from other manufacturers at this price point are way better. Still, if you don't want to have to find an electrical outlet every 2hrs and you need a REAL high resolution screen and a powerful processor, it's the best laptop on the market. If you're just a gamer that can stay near an electrical outlet most of the time, you're better off with any number of other cheaper, more powerful laptops. In fact, I had a really hard time not buying one of those Asus gaming laptops with the 12hr battery life, but I can't work on a screen that small. 
For some reason, none of the other manufacturers seem to understand that some people still need to do more on their laptop than watch a DVD or play games. Asus offers a powerful gaming laptop that can get 10hrs of battery life and theoretically has the horsepower to do all my work, but the screen resolutions top out at between 1366x768 and 1920x1080 on 15" widescreens. Sorry, but I'm already sacrificing a 2nd monitor to work on a laptop, so I need a REAL high resolution screen, not what passes for high resolution in the 1080p HDTV world. I'm looking at you, Acer, Asus, HP, Compaq, Lenovo and Dell! Lenovo offers an interesting option of a 2nd smaller monitor that slides out from the main 17" screen on one of their high-end models, but it suffers from the 2hr battery life that plagues the entire high-end laptop market -- except for Apple's 17" MacBook Pro. So, despite the somewhat dated specs on the Apple machine as a whole, the battery life and screen resolution still made it the clear winner for my needs. 
Of course, I couldn't order my new laptop off the Apple web site today because their store was broken in all the major browsers -- which is just one of many reasons I stopped using the MacOS back in 2000 -- I was sick of things from Apple not working right. I'm sure they are running their dysfunctional web site on OSX servers, and they probably test their pages on Safari, so it was no surprise to me that I had to call them to order over the phone today when their web site refused to work in Chrome, Firefox 3 and IE8. Even after haggling with the salesman to see if I could get a "returning mac user" discount, I could only get $100 off the retail price. Pretty lame for a company that's supposedly trying to get people to switch platforms. But as long as the other PC manufacturers don't step up and give power users a real alternative, I guess Apple can charge whatever it wants for its hardware. Anyway, I can't wait for it to arrive so I can install Windows XP on it!
Last Updated ( Monday, 13 September 2010 )