Wednesday, April 29, 2009

Don't forget your Bitwise Operators

Edit: After getting some comments about this post, I realised some people might want a little intro to what bitwise operators are. A great tutorial on them for PHP can be found here

I have had discussions with other PHP developers, and in fact with developers in general, geeking out about ways to get things done in our respective languages. One thing I noted from these chats is that knowledge of bitwise operations, and how they can be used to create cleaner, more efficient applications, seems to be lacking. So I thought I would take the opportunity to point out one way that we are using bitwise operators to make our jobs a little easier here at Synaq in developing Pinpoint 2.

A little bit of history. Pinpoint 2 is our own development to replace the aging Pinpoint 1 interface, which is based on the widely used, open source Mailwatch PHP application. Essentially it is a front-end interface for the mail security service we provide: scanning companies' mail on our servers for viruses, spam, etc., before forwarding the clean mail on to the client's own network. One thing the old system (and of course the new one) needs to do is store classifications of mail. Some of the types mail gets classified as are Low Scoring Spam (i.e. probably spam, but with a chance that it could be clean), High Scoring Spam (i.e. definitely spam, with a very slim chance that it is clean), Virus, Bad Content (e.g. the client blocks all mail with movie attachments), and so on. The old Pinpoint 1, based on Mailwatch, uses a database schema that stores a 1 or 0 flag for each specific type. As a simplified example:
  • is_high_scoring: 0 or 1
  • is_low_scoring: 0 or 1
  • is_virus: 0 or 1
  • is_bad_content: 0 or 1
As you can see this gets rather limiting, because what if, for example, you wanted to add another classification type? You then need to go ahead and alter the table schema to accommodate another is_* column, which is really kludgy and not that easy to implement.

So for Pinpoint 2 we decided to reduce all those classification columns into one and assign each classification a bit value. For example:
  • if clean: classification = 0
  • if low scoring: classification = classification + 1
  • if high scoring: classification = classification + 2
  • if virus: classification = classification + 4
  • if bad content: classification = classification + 8
  • if something else: classification = classification + 16
  • if another something else: classification = classification + 32
So if we had a mail that was classified as high scoring spam with a virus attached and, would you know it, the content is also bad, its classification value would be:
2 + 4 + 8 = 14
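In plain PHP the same scheme can be sketched with named flags and the bitwise OR and AND operators (the constant and function names here are made up for illustration; they are not Pinpoint's actual code):

```php
// One bit per classification; each flag is a power of two
define('MAIL_LOW_SCORING', 1);   // binary 0001
define('MAIL_HIGH_SCORING', 2);  // binary 0010
define('MAIL_VIRUS', 4);         // binary 0100
define('MAIL_BAD_CONTENT', 8);   // binary 1000

// Combine flags with bitwise OR
$classification = MAIL_HIGH_SCORING | MAIL_VIRUS | MAIL_BAD_CONTENT;
echo $classification; // 14

// Test for a flag with bitwise AND
function hasFlag($classification, $flag)
{
    return ($classification & $flag) === $flag;
}

var_dump(hasFlag($classification, MAIL_VIRUS));       // bool(true)
var_dump(hasFlag($classification, MAIL_LOW_SCORING)); // bool(false)
```

Adding a new classification type then just means defining a new constant with the next free bit (16, 32, ...), with no schema change at all.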
So in our classification column a value of 14 is stored. If we now want to check the type in our interface, we do not have to access multiple columns and determine whether each contains a 1 or 0; instead we retrieve one value and use our bitwise operators on it. For example with Propel in symfony, if we wanted all messages that were viruses:

$mail_detail_c = new Criteria();
$mail_detail_c->add(MailDetailsPeer::CLASSIFICATION, 4, Criteria::BINARY_AND);
$virus_mail_obj_array = MailDetailsPeer::doSelect($mail_detail_c);

We now have an array of results with all messages that are viruses. If we wanted all messages that were viruses AND high scoring spam:

$mail_detail_c = new Criteria();
$classification_criterion = $mail_detail_c->getNewCriterion(MailDetailsPeer::CLASSIFICATION, 4, Criteria::BINARY_AND);
$classification_criterion->addAnd($mail_detail_c->getNewCriterion(MailDetailsPeer::CLASSIFICATION, 2, Criteria::BINARY_AND));
$mail_detail_c->add($classification_criterion);
$virus_and_spam_mail_obj_array = MailDetailsPeer::doSelect($mail_detail_c);

You can see from all this that it is a lot easier to write dynamic queries using bitwise operators than it is to try to add new columns to a schema every time you add a new classification type.

Wednesday, April 22, 2009

Matt Kohut. A man in need of an education

A scathing title? I know. And to be honest I don't really care. Matt Kohut is Lenovo's Worldwide Competitive Analyst, and he is sorely in need of an education in operating systems, and specifically Linux. In a recent piece on tech.blorge, Mr Kohut is quoted as saying a few hilarious things. Blatantly incorrect statements, and remarks about the state of Linux development that are about five years behind the times, make for some entertaining reading. The reason I am upset by this, and not just amused, is that it was said by a high-level representative of a major player in the hardware industry, someone the average user looks up to and listens to. Someone in that position shouldn't be allowed to spout off without at least verifying the most basic of facts. Let's go through a few.
“There were a lot of netbooks loaded with Linux, which saves $50 or $100 or whatever, but from an industry standpoint, there were a lot of returns because people didn’t know what to do with it,” he said.
There is no way to verify whether this is true or not, but let's assume it is. The simple reason no one knows what to do with it is that the world is so ingrained in using Windows that they have no idea there is something else. They see something different and they immediately think it is inferior, just because it is not familiar. This is, of course, speculation on my part, so let's move on for now.

“Linux, even if you’ve got a great distribution and you can argue which one is better or not, still requires a lot more hands-on than somebody who is using Windows.

“You have to know how to decompile codes and upload data, stuff that the average person, well, they just want a computer.

“So, we’ve seen overwhelmingly people wanting to stay with Windows because it just makes more sense: you just take it out of the box and it’s ready to go.”

Four months ago my fiancée moved in with me. Her computer was flaky as hell because Windows XP did not like her hardware, for whatever reason. The problem is that because no one except Microsoft can see the source code, there were no guides on the web to help fix her problem, so it was either spend a ton of cash or try another operating system.

She did still want Windows for the familiarity and a "safety net", so we started off by reformatting her drive into two chunks for a dual-boot configuration. We installed Windows. Four hours and three restarts later, Windows was up. But this was the pre-SP1 disk that came with her machine, so we had to install our own firewall, antivirus and a trove of other "security software" before we went online to install Windows Updates.

Whew! That done, we decided to install Ubuntu on the other partition. Forty-five minutes later she was looking at her Gnome desktop. Her four-year-old printer worked out of the box; scanner, the lot.

Then the other day she wanted her favourite old game, Dungeon Keeper 2, installed so that she could play for a bit. I suggested she just boot into Windows XP because it was more likely to run there. I suggested this simply because I was busy at the time and didn't want the hassle of trying to make a game designed ONLY for Windows run on Linux.

She pouted at me. She actually dropped her lip in a sullen pout and then, and I will never forget, uttered the words "I don't like Windows". I felt so elatedly happy that I got up and got Dungeon Keeper 2 working. And as a side note, there was no problem getting it working. It installed and ran with no fuss whatsoever.

My point? My fiancée uses a PC in her job. She is by no means a computer geek or ultra-savvy. She had to ask how to watch a DVD in Ubuntu. I told her to put the disk in. It loaded and she watched with no problem. Decompile codes (whatever that means anyway) and upload data? I beg your pardon?
Kohut argues that for Linux to be successful on netbooks (or notebooks or desktops for that matter), the open source operating system needs to catch up with where Windows is now.

“Linus needs to get to the point where if you want to plug something in, Linux loads the driver and it just works.

“If I need to go to a website and download another piece of code or if I need to reconfigure it for internet, it’s just too hard.

“I’ve played around with Linux enough to know that there are some that are better at this than others. But, there are some that are just plain difficult.”

Ubuntu, as far as the interface goes, exceeded Windows XP and Vista even a year before Vista was released. The combination of Gnome and Compiz, or KDE 4.2, blows away anything Microsoft has been able to get Windows to do visually. Stability? Linux has been a predominant server technology, keeping hugely complicated web presences and sites running for well over a decade now, so stability is not something to worry about.

A few years ago I tried running a Fedora desktop, but I struggled to get my USB DSL modem working to get online. These days? I plug it in, it asks me for a username and password, and I am online. That's it. Could it get any easier, Mr Kohut?

“From a vendor perspective, Linux is very hard to support because there are so many different versions out there: do we have Eudora, do we have SUSE, do we have Turbo Max?”
This is just evidence of Mr Kohut's lack of expertise in the field. Eudora is a mail client (you know, like Outlook?), and Turbo Max has nothing to do with software as far as Google tells me. And no ... you don't have to support every distribution of Linux. Pick one or two (Ubuntu and Fedora are two good ones) and support only them. In fact, charge for Ubuntu and Fedora like you do for Windows installations, but instead of it being payment for the software, make it payment for support that people can actually get. They get the OS for free, you charge for support, and customers actually do get support for their OS.

I am shocked, angry, and a little sad too, that someone in that position of influence and power can be so dense, clueless and downright imbecilic. How can you make remarks on a topic that you obviously know nothing about? What also saddens me is that these comments, my own and those of all the other outraged Linux users, will not be read by the majority of users.

Monday, April 20, 2009

Mac Ads seem to be a little presumptuous

I was checking out the Mac adverts that Apple posts on their web site and had to shake my head in amazement once or twice. Admittedly they are funny, but for some reason it seems that Apple thinks all PCs run Windows. I know that the vast majority do run Windows, but not all. And the fact that on a PC I have a choice of operating system seems lost on the marketing guys at Apple. Perhaps the IT guys briefing the marketing firm didn't fill them in on the fact that there is another player in the desktop operating system arena, namely Ubuntu. The new Ubuntu Jaunty Jackalope (9.04) is due for release a couple of days after this post, and Ubuntu (and its derivatives such as Kubuntu) is a massive improvement over Windows.

And it got me thinking: what reason do people have to switch to Mac? The stability of the OS? The built-in nature of all the applications? The one downside (and the ONLY reason that I keep a Windows XP installation on my home PC) is that gaming, with some exceptions, does not work on Mac or Ubuntu. But when you can buy two PCs with the same hardware specs as one Mac, and then install Ubuntu on both for nothing and get all the same benefits as a Mac, I still don't understand why people are so drawn to it. I understand that the Mac is prettier to look at, but not everyone has R12 000 for the bottom-of-the-range Macintosh just for eye candy.

The Mac ads seem to point out that PCs suffer viruses. That they have no applications for producing cool movies, pictures, etc. That they crash constantly. And now, that they need massive hardware upgrades because of the operating system they use. Erm. None of that is true if Ubuntu is used. Ubuntu has access to safe, curated collections of applications (its software repositories). In fact, since my switch to Ubuntu as my primary OS, I have never had to worry about finding an application to do what I want. For example, I needed a book cataloguing system, because I have a rather large collection and keeping track of the books I still want can be a little hard. Alexandria is a freely available application that took me five minutes to locate in one of those collections. For those already using Ubuntu, just do a search in your package manager for Alexandria, or in a terminal:

sudo apt-get install alexandria

I load up my Windows machine and want to find an application to use, and it's a few Google searches to find an application written by somebody. I then have to hope that this person is on the level, that the application is safe to use (i.e. contains no viruses), and that it will actually work properly on my PC and not slow things down too much.

The funniest thing? I play World of Warcraft (I know, seriously geeky, but that's another discussion). WoW actually plays faster in Ubuntu using Wine (an application that makes Windows programs work on Linux) than in Windows XP! I couldn't believe it myself, but it's true. A game made for Windows runs faster on a competing operating system that has to use a translation layer like Wine.

So no .. I won't be buying a Mac because I am not afraid of malware or viruses, I do not have problems finding applications I need and my operating system is not unstable. In fact... I think I can go get another PC. You stick to your one Apple Mac then.

Wednesday, April 8, 2009

What Trac really needs

Anyone who has read this blog before might have found my previous post, where I mentioned how I started setting up my own web server and that I was also installing Trac, which, according to the Trac documentation:
"Trac is a minimalistic approach to web-based management of software projects. Its goal is to simplify effective tracking and handling of software issues, enhancements and overall progress."
Well, I have got Trac up and running. Trac actually relies quite heavily on its command-line client which, to be honest, I have no problem using. Anyone who develops on and for a *nix environment is probably more than comfortable using the command line, and probably, like me, finds it far more useful and efficient than any GUI could be. There were a few issues, however, in setting up Trac that I thought I would share here for anyone interested in setting up their own installation.

1. Command Line requires a learning curve

This may seem counter to what I said above, but the one advantage a GUI has over a command line is that it is intuitive. With a GUI you can see buttons and prompts beckoning you to use them. With a command line you need to know the commands, or ... well, you can do nothing. This means that anyone looking to install and run Trac as of now will have to spend extra time learning the, albeit rather basic, commands.

This is alleviated somewhat with a very useful help system as well as fantastic online documentation for Trac, but the fact still remains that that learning curve might put people off.
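To give a flavour of those commands, creating a Trac environment and granting a user admin rights looks something like this (the paths and username here are made up for illustration):

```shell
# Create a new Trac environment (interactive prompts follow)
trac-admin /var/trac/myproject initenv

# Grant a user full admin permissions in that environment
trac-admin /var/trac/myproject permission add gareth TRAC_ADMIN
```

The built-in help mentioned above is reached the same way, with trac-admin /var/trac/myproject help.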

2. Root access needed

Trac is not a simplistic web application, even though the documentation calls it minimalistic. It requires the person installing it to have root access to the machine, and this is one of the reasons why I am moving to running my own server as opposed to continuing with the shared, managed service I have used in the past. While it is understandable to some degree because of the SVN integration, again, this requirement limits the available user base to those who know how to set up and maintain web servers, or who have enough dosh to fling around to get their server management company to install it for them.

3. No built-in authentication

That's right. Unfortunately Trac does not include its own authentication system, so managing multiple projects for different clients who should not have access to one another's projects can be a little nightmarish. If you want authentication, Trac expects you to use Apache's own built-in authentication systems (or those of whichever web server you happen to have installed). This means that anyone installing it also needs to know how to set up Apache to authenticate users against encrypted password files stored on the server itself and referenced in virtual host settings.

Again, this limits the potential users of Trac to those that are sys-admins or have the money lying around to get someone to do it for them.
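As a rough sketch, protecting a Trac location with Apache basic authentication looks something like this (the paths, realm and mount point are made up; yours will differ):

```apache
# Create the password file first (run once per user):
#   htpasswd -c /etc/apache2/trac.htpasswd someuser

<Location /trac>
    AuthType Basic
    AuthName "Trac"
    AuthUserFile /etc/apache2/trac.htpasswd
    Require valid-user
</Location>
```

This goes inside the relevant virtual host, which is exactly the kind of sysadmin knowledge the average Trac user may not have.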

4. Lets give Trac a break

I mentioned a few issues I had, but let's cut the developers some slack. Why? Well, Trac is only at release 0.11. Yup! They haven't even reached a 1.0 release yet, and pretty much what is available is beta-ish. Once you know that Trac development is still steaming ahead, and that the "issues" I described above will probably have solutions by the time the development team increments that version counter to 1.0, it doesn't seem such a bad deal for a free development management and bug-tracking application. I am pretty sure that in the coming months we will see Trac become a feature-complete system, and I cannot wait for that day. So far I am very impressed with what I can do with it, and am so glad I stumbled across it a few months ago.

Monday, April 6, 2009

National Skirt Extension Project is a WHAT?!

Yes, that's right. The National Skirt Extension Program. This morning I was sitting in my car listening to a popular local radio station when I heard an advert for the National Skirt Extension Program, which aims, get this, "to increase the displayed length of a lady's skirt on restroom signs". Feel free to go check it out yourself here. There is also a telephone number to call for more information. There were some fears that calling the number would result in large call fees, as if you had called one of those premium-number sex lines, but the 0860 number is NOT a premium number, so out of curiosity I called it. After listening to a recording about the NSEP I hung up.

I honestly cannot believe there is anything real about this. If our government really has initiated a project as frivolous as this, then we really do need to be careful who we vote into power in the next election. I can only see this as some form of prank. One person commented on another blog that he actually called the agency responsible, who apparently said it's for real and their client is the NSEP, but that they couldn't go into more detail.

I hope this is only some delayed April Fool's joke, or an attempt by someone to garner attention and then use that attention to launch some other project or initiative. If anyone has any more information please feel free to share, because this is a little scary if it's real....

EDIT: Apparently there is speculation that this advertising is being arranged by Unilever, you know, the household products company. If this really is a viral marketing strategy by a company then I must say they have done good work here. I am actually surprised there is nothing from the government yet to denounce the adverts as having nothing to do with them.

Memory caching can be a saviour

At Synaq we are busy working on a pretty complex application. Essentially it's a front-end interface to a system that scans and processes customers' emails for spam, then records the results of the scans in a MySQL database. Without going into too much senseless detail, the backend processes a few million items per day, and suffice it to say that is one helluva database to search through when you need to extract useful data.

Because of the sheer quantity of data, we have had to use numerous techniques to keep the frontend at least reasonably responsive when it needs to query the database. Then one day I asked myself: "Does the interface really need to query that database so often for data that, in essence, hardly ever changes?" The interface does not make many alterations to the data it extracts, and a lot of the data is repeated on every page of a specific user's session. As one example, a security feature we have is that every user is defined as belonging to a specific Organisation (or Organisational Unit, to be technically correct), and every page load requires retrieving the list of Organisations the current user is allowed to see. This list is not likely to change often, and so we came up with an idea.

We use APC, a memory caching facility for PHP scripts, which also allows you to explicitly store your own values in memory from your code. Thankfully, symfony provides a class that can manage that for us as well, the sfAPCCache class, which makes using the cache a doddle. Our problem? We need to ensure that the name under which we store each piece of data is totally unique.

The solution was to store the results of a database query for our OrganisationalUnits model class in the APC cache. The way we did this was to use the Criteria object for the Propel query as the name of the item to be stored. It stands to reason that if the Criteria object for a specific query is unique, then the result will be unique. And if the same Criteria object is passed again, the results from the database will be the same as before. Why query the database a second time?

The APC cache, though, cannot take an object as a name, only a string. Easily enough done with PHP's serialize() function. But that string is excessively long (a few thousand characters sometimes), so we needed a way to shorten it while keeping the uniqueness: take the MD5 hash of the serialized Criteria object. There we go. But due to our own paranoia, and the need to be 110% sure that we won't, by some ridiculous stroke of bad luck, create another Criteria object later that against all the statistics of MD5 produces the same hash, we also take an SHA1 hash and concatenate the two. There! Now the chances of any two Criteria objects ending up with the same name are so remote as to be nigh-on impossible.
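In isolation, that key-generation step looks like this (a plain array stands in for the Criteria object here, and the function name and prefix are made up for illustration):

```php
// Build a cache key from any serializable value: serialize it, then
// concatenate the MD5 (32 chars) and SHA1 (40 chars) hashes of that string
function buildCacheKey($value, $prefix = 'organisational_units_doSelect_')
{
    $serialised = serialize($value);

    return $prefix.md5($serialised).sha1($serialised);
}

$key = buildCacheKey(array('classification' => 4));
echo strlen($key); // prefix length + 32 + 40, regardless of input size
```

The same input always produces the same key, and any difference in the serialized form changes both hashes.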

But it doesn't end there. This doesn't help us if we don't know a way to actually add results to the cache, fetch them back, and so on. For this we go to our OrganisationalUnitsPeer class and override the doSelect() method, which receives all calls to run a query on the database, as such:

public static function doSelect(Criteria $criteria, $con = null)
{
    $data_cache = new sfAPCCache();

    $serialised = serialize($criteria);
    $md5_hash = md5($serialised);
    $sha1_hash = sha1($serialised);

    $complete_name = "organisational_units_doSelect_".$md5_hash.$sha1_hash;

    // Cache hit: return the stored result without touching the database
    if ($data_cache->has($complete_name))
    {
        return unserialize($data_cache->get($complete_name));
    }

    // Cache miss: run the real query and store the result for an hour
    $query_result = parent::doSelect($criteria, $con);
    $data_cache->set($complete_name, serialize($query_result), 3600);

    return $query_result;
}
Rather simple, I thought. We also wanted to be sure that if a user added, updated or removed an Organisation the cache would not serve an incorrect listing, so we added to the OrganisationalUnits class (not the Peer):

public function save($con = null)
{
    $data_cache = new sfAPCCache();
    // Invalidate cached query results so a stale listing is never served
    // (clean() is a blunt instrument: it clears the whole user cache)
    $data_cache->clean();

    $return = parent::save($con);

    return $return;
}

public function delete($con = null)
{
    $data_cache = new sfAPCCache();
    // Same invalidation before a delete
    $data_cache->clean();

    $return = parent::delete($con);

    return $return;
}

Just doing this for the one set of data has increased our page load speeds dramatically, as well as reducing the load on the server itself when we do intense performance testing. We hope to employ this further with other items that similarly load on each page and will never change.

Friday, April 3, 2009

Our background in symfony

PHP is a great language to program in, in my humble opinion, because of its flexibility and pervasiveness. It has its odd quirks, which you get used to, but generally speaking coding in PHP has always been fun for me. One problem the development world has always had is building large and complex applications. While Object Oriented Design has done a great deal to push the mantra of making code re-usable, extensible and maintainable, it can still be a daunting task to build some of the projects out there.

One very important design principle I came across years ago was something called MVC: Model View Controller. (MVC does not only refer to PHP; it's a design concept used in lots of other languages.) Instead of bunging all the PHP code for a page into one file, the database connection, running a query on the database, formatting and manipulating that data, followed by echoed HTML to display it in tables or whatever format is desired, MVC seeks to separate the different parts of a web application to make managing them easier.

Model refers to the actual object classes that describe the database schema your data is stored in. Instead of writing your own SQL queries by hand and hard-coding things like database, table and column names, the model is the intermediary. The model is responsible for connecting to the database, generating a query based on parameters you have passed to it, manipulating that returned data and then sending the end result back to whatever called it ready for use.

View refers to the actual presentation on screen that an end user sees. The view doesn't care what the database looks like, or even whether one exists at all, as long as it has the data it needs to create the presentation it is supposed to. It generates the HTML needed for the data the model extracted to be displayed in a way that makes sense to the user.

Controller is the intermediary. It will take the events generated from the View, such as mouse clicks, page loads, etc, analyse what the view has done, decide what the next step will be, such as load another view or ask the model to return more data and then send that data to another view, etc. The controller can be thought of as the glue that binds the model and views together.
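The three roles above can be sketched in a toy, framework-free example (all class and method names are invented for illustration, and a hard-coded array stands in for a real database):

```php
// Model: knows how to fetch data
class BookModel
{
    public function findAll()
    {
        return array('The Hobbit', 'Dune');
    }
}

// View: turns data into HTML, with no idea where the data came from
class BookListView
{
    public function render(array $books)
    {
        $items = '';
        foreach ($books as $book) {
            $items .= '<li>'.htmlspecialchars($book).'</li>';
        }

        return '<ul>'.$items.'</ul>';
    }
}

// Controller: the glue; asks the model for data and hands it to the view
class BookController
{
    public function listAction()
    {
        $model = new BookModel();
        $view = new BookListView();

        return $view->render($model->findAll());
    }
}

$controller = new BookController();
echo $controller->listAction(); // <ul><li>The Hobbit</li><li>Dune</li></ul>
```

Swapping the database for another, or the HTML list for a table, each touches only one of the three classes; that isolation is the whole point.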

Whew! OK, enough of that lecture. There is one problem with this seemingly clever separation of tasks: coding an MVC framework can be a nightmarish task, and the complexity of making MVC work at all can be more effort than it's worth. This is where symfony comes in. Symfony is a pre-built MVC framework for PHP, and while setting up your own MVC structure would be laborious, symfony's is ready to go; using the framework, as opposed to writing your own PHP code from scratch, actually makes the job faster than using no MVC at all.

So why is symfony so great? Well, feel free to try it yourself. Symfony's philosophy is convention over configuration, which means that instead of explicitly defining the relationship between, say, a class and a database table, there is an implied relationship. For example, if you have a table called "sales_history", the model class that deals with interacting with that table is called "SalesHistory". It's a convention; we agree to use it this way. Only if you decide not to use the convention, and name your class "SalesMade" instead, do you need to worry about configuring that relationship yourself.
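The naming convention itself is simple enough to sketch in a couple of lines (this shows the idea only; it is not symfony's actual implementation, and the function name is made up):

```php
// Turn an underscored table name into a CamelCase class name
function tableToClassName($table)
{
    // 'sales_history' -> 'sales history' -> 'Sales History' -> 'SalesHistory'
    return str_replace(' ', '', ucwords(str_replace('_', ' ', $table)));
}

echo tableToClassName('sales_history'); // SalesHistory
```

Because the mapping is mechanical, the framework can generate and find classes for every table without a line of configuration.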

Because of this convention scenario you can do the following steps, after having installed symfony, to have a fully working set of database-agnostic model classes ready to use in your application:
  • Go to a terminal and enter:
mkdir project_name
cd project_name
/path/to/symfony generate:project project_name
  • Then go to "/path/to/project_name/config/schema.yml" and define your database structure in the easy to use YAML syntax
  • Go to a terminal and enter "symfony propel:build-model"
And voila! There you have it. You now have classes that match your schema that you can use anywhere in your symfony application.
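For reference, a minimal schema.yml for the sales_history example might look like this (the column names are invented; this assumes the symfony 1.x Propel YAML layout):

```yaml
propel:
  sales_history:
    id: ~                      # auto-generated primary key
    product_name: varchar(255)
    sold_at: timestamp
```

Running the build-model task against this would generate a SalesHistory class, by convention, with accessors for each column.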

At Synaq, we have been using symfony for over a year now on a specific, large, and complex project and it has proved invaluable. There has been a lot of learning and experimentation involved in getting to know and use the framework to its best, but the experience has been well worth it seeing how quickly, even with the learning curve, we have been able to produce results.

There is far too much involved with symfony for me to be able to go into great detail here, and I will be giving more information in future on tricks and tips we have learnt while using it. Suffice it to say, if you want to simplify the way you develop large projects, feel free to go give symfony a look.

Thursday, April 2, 2009

A non sysadmin trying sysadmin

Well, I couldn't resist waiting till tomorrow: over the last few days I have been attempting something I have never done before. But a wee little bit of background first.

In the past, as a PHP developer, I have usually hosted my work on servers that were already set up for me, using a shared/managed hosting service where all the necessary server software (Apache, PHP, MySQL, etc.) is already up and running. This served me well because I could focus on making the applications and leave the server administration to those more qualified.

However, over the last few days, I purchased myself a Linode, a VPS, which allows me to install a Linux distro of my choice and then go ahead and install my own applications onto it, with the eventual aim of it being the web and mail server for my domain (which is currently hosted at the aforementioned shared hosting company). Doing this has helped me learn an absolute ton about what goes into setting up a production Linux web server, as well as honing my Google search skills even further.

The reason I decided to do this was simple. I needed a server that would allow me root access to install applications such as Trac, a web-based SVN repository and issue-tracking application. I also wanted more control over things such as which PHP modules I install, and even which version of PHP.

For this little project I chose the Ubuntu 8.10 (Intrepid Ibex) server distribution. One reason was simply that I am used to Ubuntu, as it is my desktop environment as well; another is that one of my colleagues at Synaq (a systems administrator by profession) suggested it, citing that for my limited needs I needn't worry about LTS versions and so on. Included on my little server are the usual applications:
  • Apache 2 for webserver
  • PHP 5.2. I needed 5.2 because I want to use the new Propel 1.3 ORM in my PHP applications and 5.2 is a minimum requirement.
  • MySQL 5.0 as my primary database application predominantly because I am used to it. I plan to also install and experiment with PostgreSQL, amongst others.
  • Python, Perl and Ruby. All because I plan to play with them too :)
  • SVN as my code store and code backup
  • Trac (which runs on Python) to use as my project management and SVN interface tool for a few projects I'd love to make.
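For anyone wanting roughly the same stack, the bulk of it can be pulled in with apt on Ubuntu 8.10 (package names as I recall them; double-check with apt-cache search before running this):

```shell
sudo apt-get install apache2 php5 libapache2-mod-php5 mysql-server subversion trac
```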
A few of the sites I visited helped teach me how to set all this up, seeing as I had never really done this before.
What I loved about this experience was that I was able to better appreciate the work the sysadmins here do, as well as being amazed once again at how little time I spent picking their brains, thanks to the wealth of knowledge the Internet provides. I love the WWW :D

The best part of this whole experience is going to my new server's IP address (I haven't transferred the domain yet; DNS is the next issue I need to tackle) and seeing it all happily load in my browser....

Wednesday, April 1, 2009

It's all about me.

Well, no, I am not that pretentious. This blog is all about me and my day-to-day activities related to web development, primarily with that revolutionary server-side scripting language known as PHP, which started out standing for Personal Home Page and now stands for PHP: Hypertext Preprocessor ... pretty much a "hacked" (recursive) acronym.

But enough about the boring history of PHP (I assume boring, as when I discuss it with most people the glazed look is a dead give-away), and more about me. I am Gareth ... Oh, you want a bit more? Alrighty then. I am, as of the date of this post, a 28-year-old, engaged (should earn me some kudos with the little lady) South African guy, currently employed by Synaq, a company that specialises in providing managed Linux services to corporations using open source technologies. Pretty much we're a bunch of open source geeks having a blast playing with really powerful machines that handle millions of processes per day. Well, that's what the system admins do at least. I am the geek that makes some of the software my colleagues, and even some of our clients, use. I am the Web Developer. Or rather a web developer, because with my compatriot Scott we are the team of two that try our best to write, hack, squeeze, prod, improve, maintain and otherwise maul code into some semblance of what the company wants. It's a nice job ... keeps me kinda busy ... and we get free coffee ... which is a nice perk.

And I am waffling, so ONWARD! The reason I created this blog is simply that in my day-to-day work I often find myself sitting with a Great Discovery in my hands, either conjured alone or with Scott, and no one to share it with. While there are sites I could post this stuff on, it somehow feels more like chucking stuff at the world in general than making a contribution. So I hope to use this blog to share my web development woes, and I have a few ideas I hope to get up and running on here as well. One of these is a complete A-Z tutorial on becoming a PHP web developer: all the way from understanding the client-server chain, to installing a testing server on your machine, choosing an IDE and getting stuck into the coding.

Well, enough waffle for today. Tomorrow I shall make my first real post (or sooner if I can't contain myself). Thanks for taking the time to read my first piece of drivel, and I hope you come back.