Friday, March 31, 2006

Can or Should you Measure Software Development Productivity?

Lots of businesses are trying to measure their productivity these days. In the software development space, I can tell you it is being attempted in many ways. I would have to say at this point that all of them are flawed.

IT leaders, in many cases, are being pressured to measure their development productivity and to show improvement over time, just like other measurements in business. Business leaders measure financials and processes that are repetitive in nature, and they think IT should be able to do the same thing. This is where the rub lies, and where the complete misunderstanding of what software development is begins. It is even where things like CMM go wrong.

If you have a manufacturing process, that process is repeated, with the exact same steps, over and over again. These processes can easily be measured with throughput metrics and with quality metrics based on component, assembly, and final product testing. They produce the exact same thing every time. Other business processes, such as picking product in a warehouse, are similar in nature. A person is instructed, usually through some software-based system, where to go in the warehouse, what to get, and where to put it. Once again, it is a highly repeatable process that produces the same outcome (at least when done correctly) every time. You can easily measure it and not affect the outcome (or at least not affect it in a negative fashion).

That final phrase, "and not affect the outcome," is a very important phrase where software development is concerned. "Repeatable" is also very important to understand. Is software development repeatable? Can you measure it without affecting the outcome?

The answers to those two questions are the key to whether you can even try to measure software development productivity. Let's take the first question and see where it leads us.

Whenever you embark on software development, you always have new requirements. Based on those requirements, the logic has to be different than what has been done before. The people working on the project, with their personal experiences and knowledge, dictate the implementation choices that are made, even if the requirements have been implemented by someone else in another project. External forces, like technology changes in surrounding hardware and software, as well as things like corporate standards and direction changes, all influence how the software solution will be implemented. I am just scratching the surface here on the myriad forces at work in a software development project. When you take these things into account, along with many other things within the typical software development project, how can anyone expect that this is a repeatable process? I don't believe that you can! By definition, software development is a creative act by human beings, hence the outcome will be different each and every time it is done.

To draw an analogy, suppose you took the same person, sat them down in front of the same scene, and asked them to draw or paint it. Then have them do it again. Would it be the same the next time? The answer is obvious: it wouldn't. Now expand this analogy to include multiple artists working on the same work of art, each dividing the work into some manageable piece. Now what would you expect? Expand it again, to periodically swap out some of the artists for different artists (a common occurrence on software development projects), and what would you expect? I think the answer is clear. At no point would you end up with the exact same drawing or painting. Continue to extend this analogy to include new scene elements every time (like new system requirements) that have to be incorporated into the same drawing or painting, and you start to get a good picture of what ongoing development on the same code base involves. Go even further, and have some of the scene elements be in direct conflict with others that used to be in the scene. I think you are probably getting the picture (pun intended)!

That leads us to the second question. Can you measure it without affecting the outcome? The most prominent measure for software development productivity is function points. Considering that software development is a human creative act, all the humans being measured will want to understand how the measure is calculated and what is expected of them. Function point counting counts things (e.g., number of unique interfaces, number of database tables, etc.): the more of those things you produce, the higher the value, and supposedly the more functionality you have produced for your business. Do you see the inherent conflict?

To truly drive productivity in a process where human beings are engaged in a creative act, you should be striving to do less, not more! The least amount of work to meet the requirements of the system should be the goal. As soon as you put a system in place that incents people to do more, you end up with a much more complicated implementation. Suppose, knowing that you are being measured on the number of things you produce, you are confronted with a design decision. One option has fewer of what is being counted, and one has more. Which one do you think will be chosen? So the answer is clear: these types of measurements simply incent the inverse of the behavior you are looking for, and they certainly affect the outcome. Not only do they affect the outcome, but they affect it in a negative way, especially where quality is concerned. More software in a system creates more opportunities for errors in the implementation. It is a given that quality will suffer, and probably suffer dramatically. Of course, all the project manager types out there are now thinking, we will just do more testing, or better testing. Now you have just elongated your process, and once again you are going in the opposite direction you intended.
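To make the perverse incentive concrete, here is a deliberately toy sketch (this is not real function point analysis; the designs, names, and weights are all hypothetical) of how a metric that counts artifacts scores a lean design below an inflated one that delivers the same business functionality:

```python
# Toy illustration of an artifact-counting "productivity" metric.
# Not real function point analysis -- the designs and names are invented.

def artifact_score(design):
    """Score a design the way a naive counting metric would:
    more interfaces and more tables mean a higher score."""
    return len(design["interfaces"]) + len(design["tables"])

# A lean design: one interface and one table meet the requirement.
lean = {"interfaces": ["submit_order"], "tables": ["orders"]}

# An inflated design for the same hypothetical requirement: extra
# interfaces and tables add countable things, not business value.
inflated = {
    "interfaces": ["validate_order", "submit_order", "order_status"],
    "tables": ["orders", "order_lines", "order_audit"],
}

print(artifact_score(lean))      # the simpler design scores lower
print(artifact_score(inflated))  # the complex design looks "more productive"
```

Anyone measured this way can see that the inflated design "wins," even though it carries more code to maintain and more opportunities for defects.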

One final question. How does counting things like interfaces, tables, etc., equate to the value that a software system has in the first place? It doesn't have anything to do with it at all! You could create a huge software system with lots of function points, but if your business doesn't find any value in it, then it is not worth anything! It is what the software enables for your business that makes it valuable or not. The center of what we measure should be value to our businesses, nothing more and nothing less!

Tuesday, March 28, 2006

Using Linux on a Day to Day basis

I was sitting here working away, and I started to think about what it is like using Linux on a day to day basis. Is it really that different from using Windows or Mac OS X? Are there really big differences between these platforms for me?

I have several Linux machines in my household, and I have several Macs. I used to use Windows in my day to day work environment, but I have been free of that for quite some time. I use Linux every day, and there really isn't anything I miss or need that the other platforms offer. Yes, there might be a feature here or there on one platform vs. another, but nothing that I just have to have. In fact, when I look at what I need to do my daily job, it boils down to these things.

My day almost always starts with reading e-mail. Well, there are certainly no issues there. Linux has quite a few decent mail clients to choose from. I have been using Evolution, and it has served my purposes quite well. With the e-mail, calendar, and task features, along with filters for sorting through e-mail and filing it in the appropriate folders, I have a very productive environment. After going through e-mail, and working through whatever that brings, I usually transition to doing some technology reading.

In this regard, the trusty Firefox is my primary tool. Along with Google Reader (their RSS/Atom feed AJAX client), I can read through all the latest technology news and the technology articles that are relevant to me. After that, I usually do work around process-related items.

In this regard, OpenOffice.org 2.0.x has been the tool of choice. I have to deal with budgets, products, development processes, etc., and they invariably are encompassed in some form of business document. My co-workers almost all use Microsoft Office, so I have to use the Office formats often. What has been impressive is that I have been working with Excel spreadsheets, PowerPoint presentations and Word documents of almost every kind, including budget and planning spreadsheets with macros. So far, OpenOffice.org has been able to read and write each and every one, even the ones with macros, with no apparent issues at all. I have even used OpenOffice to publish documentation in DocBook format using the XML transforms for DocBook. The only thing it didn't do correctly was include my embedded images. A quick edit of the XML using Vim, and I had my embedded images. It even intelligently kept my footnotes and appended them to the end of the document. A very clever way of dealing with footnotes. Finally, I usually turn my attention to development type tasks.
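For the curious, the kind of hand edit this involves is just adding standard DocBook image references; a typical one looks roughly like this (the file name here is hypothetical):

```xml
<!-- A standard DocBook media object referencing an external image file. -->
<mediaobject>
  <imageobject>
    <imagedata fileref="figures/budget-chart.png" format="PNG"/>
  </imageobject>
</mediaobject>
```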

In this case, I use Eclipse, with JBoss IDE and other Eclipse plugins. Eclipse works beautifully on Linux, and I have no problems working with our CVS repositories through it. Also, I use MySQL as my database, and have been using the GUI administration and query browser tools as well. The database is rock solid and runs beautifully on my laptop. The GUI tools have progressed since they first became available, and I have used the query browser to do data analysis on a corporate database. Once I figured out how the bookmarking features work for queries, I was able to save all of my analysis queries, with descriptive names, for later use. It was very nice indeed. One thing that I have yet to put into practice, but will soon, is an application called gvidcap. This application will record what you are doing on your computer, complete with your voice (as long as you have a microphone). Last but not least has been Skype. I have used Skype for work, and I must say that I have been impressed. Conference calls and individual calls work very well. On conference calls, network latency issues and/or CPU issues on the peers involved may sometimes degrade the experience, but overall I have to give it high marks. Besides work, I use Linux for my personal business as well.

Where personal use of Linux is concerned, I certainly broaden the things I do. For instance, ripping some of my personal CDs and putting together a music library for my own enjoyment. In Fedora Core 5, this is as easy as pie. Other things are burning CDs and DVDs. I have come to like the simplicity of the CD/DVD creator in Fedora Core. In fact, with Fedora Core 5 you can now duplicate CDs and DVDs through it quite easily. Just put a CD in, right-click on the desktop icon, and select "Copy Disc...". It doesn't get much simpler than that. Some of the other day to day activities where Linux really helps me out are with the new Tomboy and Beagle applications. Tomboy is a very nice and simple note taking application. I have started to use it, and it has filled a real need. Instead of typing a document and saving it away, and then not having it very accessible, I can just type a quick note, and it is right there within the panel applet, right where I can get at it, so I don't have to remember what I called it and where I saved it. It is cutting down on the document clutter that everyone experiences. And finally, Beagle has been very impressive. When I want to find something, searching is now extremely fast, and complete, since the search technology doesn't just work on file names, but on the content. It has been wonderful, and it finds things in places that sometimes I wouldn't think to look. Very useful!

One last thing about personal use of Linux is playing games. I have become addicted to Chromium. If you are old enough to remember Galaga, then you will like Chromium. It is different in its approach, with much better graphics and sound than Galaga, but similar. Try it out! As far as commercial games go, I actually have quite a few. Unreal Tournament 2003 and 2004, Quake 3 Arena, and Return to Castle Wolfenstein are just some of the titles I have that are native Linux ports. They all work great and have been real fun. I hope the trend of offering native Linux ports for commercial games continues.

In conclusion, I would have to say that using Linux on a day to day basis is easy and productive, and it fits my needs very well. I would bet that if you spent some time with Linux, you would probably find the same thing.

Wednesday, March 22, 2006

Fedora Core 5: Fits and Starts

I was planning to write this wonderful review of Fedora Core 5. Well, I downloaded the ISO images, and burned them to CD using Fedora Core 4 with no problem. I booted up from the first CD, and started the installation process.

The installation process took about an hour and a half on my HP Pavilion zv5000 laptop. It is an Athlon 64 laptop with 802.11g wireless, 1.2GB of memory, and all the typical things like USB 1.1 and 2.0 ports, microphone and headphone jacks, Nvidia graphics (GeForce 420 Go with 32MB), and an SD/PCMCIA slot. I have really grown to like this laptop, and I had everything working beautifully with Fedora Core 4. In fact, I was a little hesitant to jump on the Fedora Core 5 bandwagon so quickly. I usually like to wait until I see the first kernel update for Fedora before upgrading. Usually by then, all the major problems have been worked out. Well, my hesitancy was justified.

The installation went without a hitch, installing packages from all five CDs. In fact, I noticed that it upgraded both the i386 architecture and x86_64 architecture packages I had installed. A very nice touch indeed. After rebooting to my shiny new GNOME 2.14 desktop, that is when the trouble began.

The first thing I always do after installing a new kernel is reinstall the Nvidia drivers so I get fully accelerated 3D. This was especially crucial, considering I was really eager to try out the new AIGLX support with all the wonderful 3D stuff. Much to my surprise, the Nvidia kernel module would not build, even though the kernel that comes with Fedora Core 5 is 2.6.15, just like the latest kernel on Fedora Core 4. It turns out that they made a last minute change to the kernel that broke the Nvidia kernel module. There is a bug report for it, and they plan on fixing the problem when they release an updated kernel, which according to the note I saw should be within a few days (https://www.redhat.com/archives/fedora-test-list/2006-March/msg00999.html).

So this shot down my ability to use the most anticipated feature of Fedora Core 5. Then the kernel problem reared its ugly head again. My wireless card, which is from Broadcom, does not have native Linux device drivers. Seeing that my machine is a 64-bit machine and I am running the 64-bit OS, I need to use DriverLoader from Linuxant. I installed the latest version of DriverLoader, and its kernel module would not load either. It turns out to be the same problem that broke the Nvidia kernel module. So now I have reverted to using a wire. At least I still have network connectivity.

Having endured these problems and not really getting to try out the new 3D stuff was disappointing, but I did have a functioning system. At this point, I decided to move on to making sure all my upgraded applications worked. First, I fired up the new version of Firefox. It detected my old extensions and asked if I wanted to find new compatible ones. I said yes, it found updates for all of my extensions, I installed them, and everything worked great. This was one of the smoothest parts of the whole day, and I was very pleased. Then I moved on to Evolution.

Once again, upgrade problems! I use Evolution as a client to an Exchange 2003 hosted environment for work, and it is very important for my everyday productivity. When I fired up Evolution, I could no longer authenticate. Ouch! I deleted the account and exited Evolution. I set up the account again, and seemingly I could authenticate, but I couldn't get into my folders. After several hours of playing around with things, I just gave up and entered a bug report into Bugzilla. At this point, I have had to fall back to using Outlook Web Access, which is not very good. Anyway, at least I can get to my work e-mail, calendar and tasks, even if the interface is crude.

On the upside, my Skype client still works great, and my Eclipse environment and MySQL (even with the upgrade to 5.0) work great as well. The only problem I have on the MySQL front is that the MySQL Query Browser just segfaults now. I tried upgrading it, building it from the source RPM, etc., but to no avail. Once again, I was left with no option but to report a bug to MySQL.

Hopefully, the kernel fix will come soon, and I will have my wireless and Nvidia drivers working again, and that is when we can start having some real fun testing out FC5. I guess the lesson is to not upgrade on the very first day of the release.

Wednesday, March 08, 2006

Are Software Patents True Inventions?

Over the last several years, I have given a lot of thought to software patents. Being involved in open source software for the last six years or so has triggered some of those thoughts. Being involved in a patent infringement case also made me think long and hard about how I felt about software patents.

Several years ago, in my previous job, the company I worked for was sued for patent infringement. Now, this was the last thing I ever expected, because the company had no technology of its own for sale or use outside of the company! Why would we be sued for patent infringement? It didn't make sense to me at all. In the early days of this lawsuit, I had to meet with an outside patent attorney who was going to act as an advisor on the case for us. He explained to me a lot about patents and how the system worked. He explained that the mere use of the technology by someone made them liable for infringement. That means users of technology are just as much at risk as technology providers.

In this particular case, the technology in question was provided to us through a vendor, and we used it extensively in our enterprise. That made it all the more scary, because if we were forced, in some way, to stop using the technology, we would essentially cease to be able to operate our business. Fortunately, we were in compliance with the indemnity clause in our contract, so at least the vendor had to take over and defend us. Even so, it was still me who had to go through the process. In that process, I was deposed by the legal counsel of the patent holder.

Eventually, the case was settled without going to trial, and the company I worked for did not have to pay a dime. What I learned from that experience was threefold.

First, the patent did not have to have a working implementation! When software patents were first issued, you had to submit the source code of a working implementation with the patent application. This is no longer true, so you can essentially patent an idea without a working implementation.

Second, the patent office does not have the skill to determine if the idea is something that truly meets the bar for a patent. One of the keys to whether something is considered an invention is that it cannot be the logical next step for an engineer competent in the field. In reading the patent involved in the case I talked about, it clearly did not meet that requirement. It is my belief that 99% of software patents do not meet this criterion.

Third, the discovery process for what is called "prior art" is awful. I believe 99% of software patents have relevant prior art as well, but you would never know it by looking at them. Of course, one of the issues with prior art is that 95% of all software written is written by organizations that have no intention of ever selling it. IT/IS departments write 95% of the world's software, which means that the search for prior art is only covering 5% of the software spectrum. No wonder this process is so bad.

Is this to say that no software can meet the requirements for patentability? I think there is a narrow band of software that can be considered a true invention. For example, cryptographic algorithms. The mathematical element is such that you are essentially discovering something. The peer review that these algorithms have to go through in order to be proven secure also raises the bar, in my opinion.

Because of these issues, and others that I haven't delved into, I believe that we would all be better off just eliminating the patentability of software. I believe that the notion that software research and development would stop is ridiculous. Just because you can't patent something doesn't mean there isn't money to be made in the market. It is the potential money to be made, and the size of the potential market that drives software research and development, not the ability to protect the work through a patent. In almost all cases, software functionality can be duplicated with an alternative implementation approach anyway! What is really going on with patents, is that companies want the ability to control a market. That is not the free market economy at its best. Healthy competition on implementation in software is what is best for the economy.

Let's just do away with software patents.

What do you think?

Tuesday, March 07, 2006

Predictions for the Future of Middleware

What is the future for middleware, especially in the enterprise? I would say there are two major trends in the industry.

One trend is the consolidation of middleware. The consolidation is reflected in the ever increasing number of pieces in the portfolios of companies like IBM. Now, IBM's portfolio is a hodgepodge of internally developed and acquired technologies that don't always play well together (or even work, for that matter). Nonetheless, it is still a great example of the fact that a lot of enterprises want fewer vendors to work with and will buy more from a single source. Of course, this is driven by a desire to have "one throat to choke". As I have said before, this is simply a myth. If you check out one of my previous posts, you can get my full analysis of why this is a myth ("The Myth of One Throat to Choke").

This consolidation in the closed source world has the biggest effect on the smaller vendors who have specialized middleware. For example, companies like webMethods, who have been in the EAI space, or Sonic with their ESB product. They simply offer one small piece of what enterprise customers need, so they will either be purchased, or wither and die a slow death as their once standalone market is subsumed by the middleware suite vendors.

The other trend is the proliferation of open source middleware. Whether it be a simple solution like LAMP or a more comprehensive solution like the JBoss suite of middleware technologies, this market is growing, and to some extent starting to dominate the landscape. As enterprises continue to push the use of open source software and gain its benefits of higher quality, lower costs, and empowerment for developers and support organizations, the sky is the limit.

These two trends are the only trends that show growth in middleware. Everything else is stagnant in the market. There may be some companies that can show growth with standalone solutions, but it is growth that will be temporary. As open source continues to mature, and big middleware players like IBM continue their march, the middle of the market will get squeezed out.

So when there are two middleware plays left in the market, the large platform or suite vendors (of which there might be two or three), and open source, what happens then?

Open source will continue to commoditize the middleware market, and the large platform players will have to either move out of middleware and up the stack, or join the open source party and get behind existing open source efforts or try to forge their own communities. Mind you, this will take years, as enterprises don't change overnight, but in my opinion it is inevitable!