Live from NYC!

It was a wonderful day at the Joomla! Developer Conference. Keeping to my usual style, I took notes in MindMap. I’m sharing them as a PDF in case you would like to click some of the links or copy some of the text. Feel free to share any comments.

Joomla Dev Conference NYC

A big thanks to all who organized and presented. We at LCS really enjoyed it and look forward to the next few months as Joomla! 1.6 becomes solidified.

People use Eclipse. People use Windows. People use these tools together to develop code. I am not (typically) one of these people. My development environment is usually either Coda on the Mac or vim on the command line. But I do work with people using Eclipse on Windows, and while the code we’re building together is not platform specific, it does help if we all have the same capabilities. So when I added some functionality to our Apache ant deploy script to synchronize files on a remote server using rsync, the next step was, of course, getting it to work on Windows.

Allow me to walk through the configuration, step-by-step. The first thing was to create a target in my build.xml file which ant can use to synchronize files to the remote machine. Something like this:

<target name="rsync_remotehost">
    <exec executable="rsync" dir="${cfg.someDir}">
        <arg line="-aOvz --chmod=g+w,Da+rX,Fa+r,F-X --exclude .svn . ${rsync.user}@${rsync.server}:${rsync.dir}" />
    </exec>
</target>

So really, all that does is define a target which runs rsync inside a specified directory. The important aspects to note are the rsync parameters, for example: -aOvz. These are pretty standard options (a for archive, v for verbose, z for compression), except for the O. I wasn’t used to using this option, but it’s important for a very subtle reason. To explain, first a little background. Typically, when I want to synchronize the entire contents of a directory with rsync, I would do this:

rsync -avz path/to/somedir user@server:/path/to/dest

But this doesn’t quite work in the Eclipse+Ant environment on Windows. The reason is that the variables holding the directory locations get represented in Windows path notation. (Imagine a C: at the beginning and all the directory slashes backwards.) Rsync can’t handle this. It’s a Unix-based tool. It’s expecting a Unix-style path. So to get around this we set a dir attribute in the exec element, which causes ant to change to that directory before execution. We also use a ‘.’ in the rsync argument line to specify that the source contents are this directory. This has an odd side effect: rsync attempts to set times on the corresponding directory at the remote location, which tends to fail. So we use the -O option to tell rsync not to set times on directories.

The other arguments given to rsync are for specifying more sane permissions on the remote files since Windows wanted to kill all permissions for group and world by default (--chmod=g+w,Da+rX,Fa+r,F-X) and for excluding our local Subversion files (--exclude .svn).

So that’s the ant target. Now to the interesting parts. ;-) First we have to get rsync installed on Windows. Cygwin makes this relatively easy. Download and install. Most options are pretty much good at the defaults, except watch out for this screen:

Selecting packages for cygwin

On this screen you need to add a couple of packages, both in the Net section: Net -> rsync and Net -> openssh. Just click once on each of those packages and it will add what you need. Finish up the installation by clicking on ‘Next’ and ‘Finish’.

Once Cygwin and rsync are installed, there are still a few more things we need to do in order to get it all to work together correctly. First, you need to add the path to Cygwin’s binaries to the Windows path so that system calls in Eclipse will find the Cygwin binaries. To do that, right-click on ‘My Computer’ and click ‘Properties’. Then click on the ‘Advanced’ tab and then the ‘Environment Variables’ button:

Environment Variables

Next find the section on the bottom called ‘System variables’ and scroll down and double-click on the ‘Path’ line:

Path variable

Insert the following at the end of the ‘Variable value’ line (this assumes Cygwin’s default install location):

;C:\cygwin\bin
Lastly, you need to set up a public key so that when ant runs, it won’t hang waiting for you to input a password (which you can’t do inside Eclipse anyway). To do that, open up Cygwin – you should have a shortcut on your desktop. When it opens, run:

ssh-keygen -t rsa
Accept the defaults, don’t add a passphrase, just hit enter instead. When the command is done, it will have saved a public and private key for you. We want to upload the public key to our remote server, like so (you’ll be asked for your password, and possibly to accept the key for the remote host – type ‘yes’ to do so):

scp ~/.ssh/id_rsa.pub user@server:

Next we need to place it somewhere special on the remote host. Log in (you’ll be asked for your remote password again) and run:

ssh user@server
install -dv ~/.ssh
chmod 0700 ~/.ssh
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys

Now, test the connection to make sure that you can connect without giving a password:

ssh user@server
If all is good just exit and that should be it! The next time you open up Eclipse, it should be able to call rsync and run successfully through your Ant target!

I was looking around for concepts in building a reasonably secure HTML login form without using SSL, and I came across an interesting article (link at end of post). The concept it outlines is fairly simple, and I’m a little annoyed that I didn’t think of this myself earlier.

Essentially, the idea is that the password never actually leaves the client machine. Instead, the client sends a cryptographic hash of the password. For other security reasons, we also don’t want the server to store the password in plain text, so it should only store the hash value of the password.

Of course, this alone isn’t enough, because anyone scanning the wire could simply capture the hash and send that along to the server and authenticate. What we need is a way for the server and the client to agree that they have the same hash value for the password, without actually sending it. To accomplish this, we can set up the server to generate a random string and send that to the client. Then, both server and client append the password’s hash to the random string and perform a hash sum on the combined string. The client then sends that string to the server and if it agrees with the result the server got, we have a valid authentication.

The article referenced above included some sample code to illustrate this functionality, but I believe I can simplify it even further. It’s not a practical, real world example, because we’re not sending a user name or retrieving a password from a stored location on the server. But it should be enough to illustrate the concept and give a developer a head start in however they wish to implement. Personally, I plan to instantiate the code in a class and use XMLHttpRequest instead of traditional POST methods.

Anyway, on to the example code. Note: This example doesn’t actually look up any stored user login information. Instead it simply uses a pre-defined password: ‘password’.

We’ll need two files. The first file generates the server’s shared key and passes along the value to the client as well as the HTML and JavaScript needed to input a password, generate hash values and submit the form to the server.

Create main.php with the content:

<?php
// We'll use PHP's session handling to keep track of the server-generated key
session_start();

// Function to generate a random key.
// Modified from code found at:
function randomString($length) {
    $chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    $str = NULL;
    $i = 0;
    while ($i < $length) {
        $num = rand(0, 61);
        $tmp = substr($chars, $num, 1);
        $str .= $tmp;
        $i++;
    }
    return $str;
}

// Call the function and set the shared key
$key = $_SESSION['key'] = randomString(20);
?>

    <!-- JavaScript that contains the functions which perform the actual hashing -->
    <script type="text/javascript" src=""></script>

    <!-- The following function creates the hash of the concatenated key and password hash
and submits the content to the server via a form -->
    <script type="text/javascript">
	function login() {
		var p = hex_sha1(document.getElementById('pass').value);
		var k = document.getElementById('key').value;
		var h = hex_sha1(k+p);
		var hash = document.getElementById('hash');
		hash.value = h;
		var f = document.getElementById('finalform');
		f.submit();
	}
    </script>
    <form action="javascript:login()" method="post" >
	<input type="hidden" id="key" value="<?php echo $key; ?>" />
	<input type="password" id="pass" />
	<input type="submit" value="Submit" />
    </form>
    <form action="login.php" method="post" id="finalform">
	<input type="hidden" name="hash" id="hash" />
    </form>

Next we need the file to handle the submitted values and compare the results. Create login.php with the following contents:

<?php
session_start();

$hash = $_POST['hash'];

// In a real application, this hash would be retrieved from stored user data
$pass = sha1('password');
$key = $_SESSION['key'];

$server_hash = sha1($key.$pass);

if ($server_hash === $hash) {
	echo "MATCH!";
} else {
	echo "NO MATCH!";
}
?>

That’s pretty much it. If you want to see a little more fluid example in action, see:

Referenced article: PHP – Implementing Secure Login with PHP, JavaScript, and Sessions (without SSL)

Before starting up new projects we make it a custom to take our lessons learned and improve. We are at that point right now and I find myself wanting something to streamline our database architecting and development process. Most of the LAMP design we’ve done in the past has only utilized a handful of tables with minimal relationships. But I’m projecting that our future work will require databases that have more normalized tables with more relationships.

I just realized that through MySQL Workbench we can streamline our process. We typically go through Entity Relationship/Data Design modeling with our clients and this tool can save a few steps. We can go from those diagrams right to the database (live database forward engineering only available in the Enterprise Edition of Workbench). The folks at MySQL have blogged about the screen designs for Workbench that they are developing for Mac OS X. I’m really looking forward to it since I don’t like having to open up Parallels and use Windows for just one tool. It’s certainly worth using the Windows build in the meantime. I’m looking forward to the Alpha release coming in September.

It’s been around for a long time, and it’s had its fair share of abuse. If you’re like me, perhaps you can recall when one of the most popular uses of JavaScript was for dynamic looking buttons. Do a little mouse over on the button and the button glows, or changes shape, or some other little effect which really amounted to swapping out an image. It was often being used more obnoxiously than elegantly.

Then came Flash. Everyone loved it. And again, everyone over-abused it. Finally, it became obvious (at least to me…) that people tend to prefer simpler design with occasional purposeful animation. In walks JavaScript (again).

Developers began using JavaScript in much more powerful, interesting, and ultimately elegant ways. One of the biggest ways being accessing and modifying the DOM. By listening to user initiated events (mouse clicks, keyboard entries), a developer can dynamically alter, rearrange, delete or create new document objects, all on the client side. A user can even initiate a server request (via the XMLHttpRequest object) and receive its reply without reloading the entire page.

The power, flexibility and standard implementation of JavaScript make it a powerful tool in building web-based applications. It would be a mistake to ignore it. I’m certainly getting my hands dirty with it (honestly, more by chance than anything else) and I’ve been loving the experience. A book I’ve found to be a great help in getting the most out of the experience is The Art & Science of JavaScript. I’d recommend it to anyone in the business or habit of building web-based applications.

(Reader Beware – Oncoming Rant)

With a snowy afternoon and a hot cup of tea, I decided to make good use of my time and start a document I’ve put on hold for long enough. In an attempt to open my eyes to more than the Microsoft Office Suite, I started learning/using Pages (part of iWork ’08).

The initial keystrokes were hard enough just to get the thoughts flowing. I was able to get out of the mental rut and put down a few good paragraphs. Unexpectedly Pages crashed. No issue there, it should just restore my document…Right? 

Let’s pause there for a moment to note a few things:

1. In the time I’ve spent writing this blog post, the autosave in my blogging software has protected my work every minute, autosaving some 15 times.

2. TextEdit, a very basic word processing program, has an autosave feature backing up SQL code I was messing with.

3. Time Machine on my MacBook Pro has backed up my system 12 times since the start of today.

In our world of computing backups, redundancy and autosaving, being able to recover has become common law! So it was in disbelief that I re-opened my Pages file to find that NOTHING was recovered. There wasn’t even an indication that it tried! That’s right… no autosave in Pages!

I won’t drag on with any more rhetoric on the subject. This isn’t a bash on Pages. Just a rant that the simple programmable things in life should never be forgotten. Autosave is one of them!

Command – S

All you want to do is write some code, and somehow remain profitable. Well, maybe that’s not all you want to do. If you have a client, you might be interested in fulfilling their needs, while not going on any “death marches” in the process. You probably want to work on features that excite you, with technology you want to use. You might want some flexibility on deadlines in response to unexpected problems. Ok. Great! Now, choose your flavor of methodology: Agile, XP, RAD, TDD, Waterfall, RUP, SCRUM… should I go on? Which do you choose? How do you know what will work best for you and your needs? When in doubt, do what everyone else is doing!

Agile with SCRUM is certainly en vogue. The values of Agile development are captured by the Agile Manifesto, and SCRUM is a way of managing the work and communicating status. The two combined certainly make for an effective set of tools and guidelines to make everyone happy. But the first thing you learn about SCRUM (or the first thing I learned, anyway) is: SCRUM is common sense! Meaning, if you have bad developers, or good developers with bad practices, SCRUM/Agile won’t help you; firing the developers will! See, common sense at its finest.

But seriously. One of the advantages of Agile with SCRUM is that work is clearly exposed as knowledge changes. For example, say you agree to deliver the user administration widget in 4 weeks. As the days pass, you complete work, but new features are discussed. You can choose with the stakeholders what to do with those features: do we try to squeeze all of them in, or can we release the most essential ones first and then the others as enhancements later? Developers get to talk to stakeholders without all that messy project management stuff getting in the way. Of course, there’s a Product Owner (Team) and a ScrumMaster. But their roles are mainly about helping the developers work on the most important features without any impediments.

I’ve worked with Waterfall, Spiral, a little RUP, and now Agile with SCRUM. I love it! It’s lightweight, it’s simple, it allows for quick response to unexpected issues. I’ll be posting more on this topic as my experience as a ScrumMaster increases.