Saturday, December 30, 2006

Is JBoss Open Source?

I continue to run across people, and written articles, claiming that JBoss is not "true" open source. For the longest time, I just didn't understand what they meant by that.

In some conversations I have had, people don't understand the licensing issues; in others, I just hear corporate blah, blah being repeated without much thought. This mostly comes from IBM employees, who are repeating the party line but don't really understand what it is based on. In still others, I hear confusion between licensing and development models, and this seems to be the heart of the issue, with people claiming that JBoss is "evil" and not "true" open source.

So what is "true" open source to these critics of JBoss? It is simply that they think open source is not just the license, but also the development model that is used. They also believe that the only appropriate development model is one where no one company or entity entirely controls the project.

The fact of the matter is that open source is about the license, not about the development model used. I could write the code completely on my own, release it, and never even accept external contributions, and if the license is an OSI-approved license, then it is still open source. The project may or may not be very successful with that approach, but that doesn't change the fact that code under an OSI license affords everyone the freedoms of open source.

So what is the development model that the critics say make something "true" open source?

They contend that you have to have many companies contributing, and Linux is used as the primary example. The fact of the matter is, in the case of Linux, you have market dynamics that bring companies together because they have a common interest in fighting a monopoly in the operating systems market. This is a unique set of circumstances in comparison to the middleware market.

IBM in particular, which is widely credited with giving legitimacy to Linux, has a huge incentive to support and contribute to it. When they started getting involved with Linux, they had AIX, OS/390, OS/400 and OS/2 as operating systems they were spending considerable resources developing and supporting. Considering the portability of Linux, and its rapidly maturing technology, putting their resources behind it meant they could eventually have a unified OS strategy, with one operating system running across all their various hardware platforms. In fact, today you can run Linux on all of their hardware platforms.

In the case of a company like Oracle, Linux is the hedge against Microsoft in the database market. In order for Oracle to maintain a market share advantage over Microsoft in the database market they need an alternative platform that is popular on commodity hardware that SQL Server doesn't run on.

The dynamic of having a hated monopoly, plus other unique incentives, brings even competitors together to support, contribute and promote Linux. This simply doesn't exist in the standardized middleware market.

Could you imagine IBM and BEA contributing to JBoss? Companies only contribute to open source projects when there is a strategic corporate advantage to doing so. No one should be naive enough to think otherwise.

In the middleware market, there is no one dominant player in terms of market share, and there is considerable revenue tied to traditional closed source products. It is quite impossible for JBoss to have the kind of external contribution that Linux enjoys, given Linux's unique market conditions.

Having said that, JBoss enjoys considerable external contribution from companies. Initially, Novell was a considerable contributor to a couple of the projects, but the Red Hat acquisition put an end to that. We have also had many companies that are users of our technology contribute over the years. Our new Group Bull relationship is another example, and when you look at the folks who work for the JBoss division of Red Hat, all of them were external contributors (developers) at one time.

Under the market circumstances, and the business model of the company, JBoss has as open a development model as is possible. That leads to the other issue of the critics.

The business model of JBoss is one where the core developers all work for the same company. What this enables is a quality of support that simply cannot be matched. While anyone could take the JBoss software, distribute it themselves, and offer support, they simply cannot match the quality of support. We have a two-tier model, where we hire very experienced Java EE developers for tier-one support, and the core developers are tier two. Does this mean that we are not "true" open source?

Open source is about supplying freedoms to all users of the software, and JBoss supplies that, as all of our software is under an OSI-approved license, and most of it is under the LGPL. Secondarily, the business model that has emerged for open source is one based on quality of support. By hiring the core developers, we enable the best possible support, which is certainly in the spirit of open source.

In conclusion, under the market conditions, and what users expect from open source companies, JBoss is as "true" to open source as you can be!

Saturday, December 23, 2006

Java and the GPL!

It's been a long time since I posted, and something that I was looking forward to was Sun's move to open source the Java platform.

Well, they not only followed through with the plan, they completely caught me off guard with their choice of license. I think they caught everyone off guard.

I have always been in favor of putting Java under an open source license, but I never really gave much thought to which license would be appropriate. I have come to believe that the GPL, with the so-called "Classpath" exception, is the ideal choice.

It allows for the virtual machine to be deeply integrated into other GPL software, such as Linux. The JVM has always been a second-class citizen where Linux is concerned, in that there was never very much time spent on optimizing the JVM for Linux. Now, the community can really get involved in optimizing the JVM for Linux, and I think this will have real benefits to the Java community, where Linux distributions are the target deployment platform.

Besides Linux, other projects will also benefit. GNOME will no longer have an excuse to ignore Java as a first-class language, and Java may finally become a reality on the desktop, at least on those desktops that use GNOME, which will no longer have the problem of basing quite a bit of its code on a language without a free-as-in-freedom runtime environment. It also eliminates the need for distribution vendors to do all the engineering to create a distribution with an alternative Java such as GNU Classpath. This means less energy will be expended on non-value-added engineering tasks, and more can be plowed into mainstream development.

I also believe that the knock-on effects of a GPL Java will not be fully realized for many years. This is truly an earth-shattering move by Sun, and they are to be applauded for it!

Wednesday, June 28, 2006

Open Source Java; What does this mean?

I was at JavaOne earlier this year when Jonathan Schwartz was asked whether Java would be open sourced. The revelation that followed was that Java would be open sourced, and that it was no longer a matter of whether, but a matter of how.

This was the buzz of the first day, and I have continued to watch this unfold. Recently, I read some stories saying that Sun would be ready to open source Java within months. Now, this is a pretty broad declaration, and they could go as long as 11 months without having to retract that statement, but still, they seem to be moving down the track as they said they would.

What does this really mean for all of us involved with Java?

I have always been a proponent of open sourcing Java. My main complaint has always been that certain JVM bugs just never get fixed. I would love to have the "freedom", and be empowered, to fix those bugs in a completely open process. There have been many studies and comparisons of quality, in terms of defect density, between closed source and open source software. All of them draw the same conclusion: open source software has fewer defects and is more reliable than closed source software. It's pretty simple. I want fewer defects and a more reliable virtual machine, and we will get that via the open source development model.

Are there other benefits to this?

I once heard Bill Joy, Sun co-founder and former employee, say that innovation happens out there. What he meant by that, at least in my interpretation, is that companies cannot be insular; they have to realize that innovation happens in the broader market, and no one company, no matter how big, can innovate solely on its own. With that in mind, opening up Java to the world can only create additional innovation in and around the Java platform.

In fact, I believe it will accelerate the delivery of innovation for the Java platform in a way that cannot even be fully understood today. Only many years down the road will we be able to look backward and realize the monumental changes that came from this.

I am really hopeful about the open sourcing of Java, and its benefits to all of us who use it and depend on it. I only hope that "months" really means just a few short months. The sooner the better!

Monday, June 19, 2006

Are Users Part of the Open Source Community?

Recently, I have read a number of blogs and articles asserting that users of open source software are not necessarily part of the open source community; they count only contributors as part of the community. To some extent, that can be true, but in other respects I don't believe that it is fair.

To the extent that users are individuals who put the software to good use but never do anything like submit a bug report or help other users, I would agree they are not part of the community. Of course, without users, what is the purpose of the software to begin with? For any open source project to be successful, it must first and foremost be useful, and therefore must attract users. This, in and of itself, makes users the single most important factor for an open source project.

From a best-practice perspective, we all know that small development teams are the most productive, and there is a practical limit to the number of people who can productively contribute from a code perspective anyway. Practically speaking, this means that to have a large-scale community for an open source project, the vast majority of people involved in the project must be users.

Also, contribution shouldn't be viewed through the limited lens of code contribution. There is testing, translation, documentation, answering other users' questions, sharing your user experiences with other potential users, etc. These are all valuable contributions!

I contend that large numbers of users are contributors to the open source projects they use, even if they never write, or are incapable of writing, a line of code. In fact, they are the most important contributors, and they shouldn't be viewed as outside "the community" of open source projects.

Tuesday, June 13, 2006

JBoss Seam!

With the release of JBoss Seam 1.0 this week, I thought it would be interesting to give a high level overview of what Seam is.

First, let me say that I believe Seam will be a huge step forward for developer productivity! Everyone that is thinking about, or toying with things like Ruby on Rails, or Spring, should stop and take a long look at Seam. Why do I say this?

Well, consider that the Java programming models have been too complex, and that the realization of this has brought us some really innovative new technology, such as EJB 3. There is some very good UI technology in JSF, and Facelets in particular (don't do JSP anymore). There are other very useful Java tools, such as jBPM for business process management. But nothing brings all of this together.

That's what Seam does. It unifies all of these great technologies in a seamless programming model that is really compelling. Seam lets you take advantage of the fact that EJB 3 entities are just POJOs, and that they are detached objects that can be used anywhere in an application. In fact, Seam lets you use an EJB 3 entity as the backing bean in a JSF UI! But this is just scratching the surface.
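To make the idea concrete, here is a minimal sketch. This is not actual Seam code: the annotation types below are hand-written stand-ins for Seam's `@Name` and EJB 3's `@Entity` (declared here only so the snippet compiles without any Seam or EJB jars), and the `User` class is a hypothetical example.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-ins for org.jboss.seam.annotations.Name and javax.persistence.Entity,
// declared here only to keep this sketch self-contained.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE) @interface Entity {}
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE) @interface Name { String value(); }

// A plain POJO entity. With the real Seam annotations, @Name("user") would make
// this same object addressable from a JSF page, e.g. value="#{user.username}",
// with no separate backing-bean class and no glue code copying fields around.
@Entity
@Name("user")
class User {
    private String username;

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}
```

In a real Seam application, Seam's contextual container, not this sketch, discovers the annotated component and wires it into the JSF page.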

Seam not only lets you program with a single model, but also integrates business process management. When you consider that all business applications are automated business processes to begin with, a business process management tool like jBPM is a natural fit for any and all applications. Seam allows you to use business process management, and in Seam's case jBPM specifically, without having to be a BPM wizard. In fact, many things can be done without writing a line of jBPM code!

Besides BPM integration, Seam also integrates AJAX-style programming through a Seam remoting capability that allows JavaScript to call EJB components (stateless/stateful session beans, message-driven beans and POJOs) directly! This is powerful!!

When you couple all of these attributes, a single programming model that eliminates all the glue code from your presentation layer, everything a POJO, AJAX integration through JSF components and the Seam remoting capabilities, and finally Eclipse tooling that can generate a fully working Seam application from your database schema, what are you waiting for? Oh, and all of this will become a Java standard through the JCP process. Gavin was able to get a JSR submitted and approved, called Web Beans (JSR 299), that will make this a standard.

Download some of the sample applications off of the JBoss website, along with the code for Seam 1.0, and I think you will be impressed.

Wednesday, May 31, 2006

This isn't your father's EJB!

I had posted about EJB 3.0 some time ago, and based on my JavaOne experience, I thought it was worth revisiting. As I talked to hundreds of people at the JBoss booth, and also went to some of the sessions, I was struck by how little people seemed to know about the new EJB 3.0 specification.

I think so many people either had poor experiences with the complexities of EJB versions prior to 3.0, or heard that it was complex, that they have tuned out the new specification. Granted, EJB 1.0 through 2.1 was not exactly the best example of good engineering, so there certainly is a stigma that has to be overcome.

When I told people at JavaOne that they should really look closely at EJB 3.0, and try it out, they became intrigued as to why. Then, when I explained that you can write just plain old Java objects, their eyes would light up. What? No more heavy component model and deployment descriptors? I can actually unit test this stuff through normal JUnit or TestNG? Really?

When I was able to get into a deeper conversation about the technology, we could discuss things like defaults that actually make sense. When you can follow a simple convention and not have to use an annotation, it gets even simpler. For example, for an entity, you don't have to specify the database table name or the column names if you just give them the same names in the database and the class. What could be easier? Now, the reality is that some corporate naming standards will probably get in the way of some of this ease of use, but I would urge any DBA to check that stuff at the door, and let the conventions be the standards. Besides, the cost of the DBA's time to determine whether something is a table, a view, or a synonym is trivial compared to the cost of many developers' time. Allow sane business names to be used for tables and columns, and everyone will be better off in the long run.
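As a toy illustration of that defaulting rule (my own sketch, not code from any actual persistence provider), this is roughly what configuration by exception amounts to: with no annotations, the table name falls back to the unqualified class name and each column falls back to its field name.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of the EJB 3 defaulting rule: no @Table or @Column annotations
// are needed when the database names and the Java names simply match.
class NamingDefaults {
    // Default table name: the entity's unqualified class name.
    static String defaultTableName(Class<?> entity) {
        return entity.getSimpleName();
    }

    // Default column names: one column per field, named after the field.
    static Map<String, String> defaultColumnNames(Class<?> entity) {
        Map<String, String> columns = new LinkedHashMap<>();
        for (Field f : entity.getDeclaredFields()) {
            columns.put(f.getName(), f.getName());
        }
        return columns;
    }
}

// A hypothetical un-annotated entity: the names just line up.
class Customer {
    long id;
    String name;
}
```

A real provider layers case folding and vendor rules on top of this, but the principle is the same: the common case costs you nothing.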

The extensibility of EJB 3.0 is also wonderful. The ability to create your own annotations and extend things with very little code (an AOP lite) is very powerful indeed. This will allow developers to do many of the things you would use an AOP (Aspect Oriented Programming) framework for within the confines of EJB, without the complete learning curve of AOP. When you consider there are no standards for AOP frameworks, you would be spending that learning curve on something proprietary in nature. That, in and of itself, is enough of a deterrent to me.
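The "AOP lite" idea can be sketched with nothing but the JDK: a custom annotation plus a dynamic proxy that intercepts annotated methods. This is my own illustrative example (the `@Audited` annotation and `OrderService` interface are hypothetical, not part of EJB 3, which would pair your annotation with a container-managed interceptor instead of a hand-rolled proxy), but it shows the shape of annotation-driven interception:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// A hypothetical marker annotation for methods that should be audited.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD) @interface Audited {}

interface OrderService {
    @Audited String placeOrder(String item);
}

// Intercepts every call through the proxy; @Audited methods get an audit record.
class AuditInterceptor implements InvocationHandler {
    private final Object target;
    final StringBuilder auditLog = new StringBuilder();

    AuditInterceptor(Object target) { this.target = target; }

    @Override
    public Object invoke(Object proxy, Method m, Object[] args) throws Exception {
        if (m.isAnnotationPresent(Audited.class)) {
            auditLog.append("audit:").append(m.getName()).append(';');
        }
        return m.invoke(target, args); // delegate to the real object
    }
}

class Demo {
    static String run() {
        OrderService real = item -> "ordered " + item;
        AuditInterceptor handler = new AuditInterceptor(real);
        OrderService proxied = (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[] { OrderService.class }, handler);
        proxied.placeOrder("book"); // intercepted, then delegated
        return handler.auditLog.toString();
    }
}
```

The appeal is that callers see only the interface; the cross-cutting behavior rides along on the annotation.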

The final thing that I have really come to like about EJB 3.0 is that it's embeddable. I have already written two applications that use EJB 3.0 objects from a Java SE environment! No application server required! This is wonderful for small applications and utilities.

I have become a real EJB 3.0 junkie. There is no turning back. Once you have taken that first hit, especially if you have lived through the EJB 1.0 to 2.1 days, you will be hooked. There is no doubt about it. This is the future for enterprise application development!!!

Wednesday, May 24, 2006

It's Official: The Red Hat acquisition is Unanimously Approved!

I haven't blogged in a while, but last week I went to JavaOne for the first time since 1998. This was a completely new experience for me, as I went as an employee of JBoss, and worked in the booth along with attending some of the sessions.

As I talked to hundreds of people at our booth, one thing really stood out for me. Everyone was very happy that we were being acquired by Red Hat. Another thing that really stood out, was that everyone was also very happy that we weren't acquired by Oracle.

So, there you have it! It's official, at least in the eyes of our customers and supporters. The Red Hat acquisition is approved!!!

I must say that I also approve. I believe that the combined company will be stronger, and grow bigger and better. We have already heard from some prospective customers that JBoss is now in consideration, where we would not have been before. The power of just being bigger makes a world of difference. The customer relationships that Red Hat already enjoys will be a big help. Especially, when you consider the reach that Red Hat has globally, that we just haven't been able to establish as of yet.

Anyway, as you can tell, I'm excited, and looking forward to the closing of the acquisition, so we can get on with executing as a combined entity.

Friday, March 31, 2006

Can or Should you Measure Software Development Productivity?

Lots of businesses are trying to measure their productivity these days. In the software development space, I can tell you it is being attempted in many ways. I would have to say at this point that all of them are flawed.

IT leaders, in many cases, are being pressured to measure their development productivity, to show improvement over time, just like other measurements in business. Business leaders measure financials and processes that are repetitive in nature, and they think that IT should be able to do the same thing. This is where the rub is, and where the complete misunderstanding of what software development is lies. It is even where things like CMM go wrong.

If you have a manufacturing process, that process is repeated, with the exact same steps, over and over again. Such processes can easily be measured with throughput metrics and quality metrics based on component, assembly and final-product testing. They produce the exact same thing every time. Other business processes, such as picking product in a warehouse, are similar in nature. A person is instructed, usually through some software-based system, where to go in the warehouse, what to get, and where to put it. Once again, a highly repeatable process that produces the same outcome (at least when done correctly) every time. You can easily measure it and not affect the outcome (or at least not affect it in a negative fashion).

That final phrase, "and not affect the outcome", is very important where software development is concerned. "Repeatable" is also very important to understand. Is software development repeatable? Can you measure it without affecting the outcome?

The answers to those two questions are the key to whether you can even try to measure software development productivity. Let's take the first question and see where it leads us.

Whenever you embark on software development, you always have new requirements. Based on those requirements, the logic has to be different from what has been done before. Based on the people working on the project, their personal experiences and knowledge dictate the implementation choices that are made, even if the requirements have been implemented by someone else in another project. External forces, like technology changes in surrounding hardware and software, as well as things like corporate standards and direction changes, all influence how the software solution will be implemented. I am just scratching the surface here on the myriad forces that act on a software development project. When you take these things into account, along with many other things within the typical software development project, how can anyone expect this to be a repeatable process? I don't believe anyone can! By definition, software development is a creative act by human beings, hence the outcome will be different each and every time it is done.

To draw an analogy: if you took the same person, sat them down in front of the same scene, and asked them to draw or paint it, then had them do it again, would it be the same the next time? The answer is obvious: it wouldn't. Now expand this analogy to include multiple artists working on the same work of art, each dividing the work into some manageable piece. Now what would you expect? Now expand it again, to periodically change out some of the artists for different artists (a common occurrence on software development projects), and what would you expect? I think the answer is clear. At no point would you end up with the exact same drawing or painting. Continue to extend this analogy to include new scene elements every time (like new system requirements), but they have to be incorporated into the same drawing or painting, and you start to get a good picture of what ongoing development on the same code base includes. Go even further, and have some of the scene elements be in direct conflict with others that used to be in the scene. I think you are probably getting the picture (pun intended)!!!!

That leads us to the second question. Can you measure it without affecting the outcome? The most prominent measure of software development productivity is function points. Considering that software development is a human creative act, all the humans being measured will want to understand how the measure is calculated and what is expected of them. Function point counting counts things (e.g. the number of unique interfaces, the number of database tables, etc.): the more of those things you produce, the higher the value, and supposedly the more functionality you have produced for your business. Do you see the inherent conflict?

To truly drive productivity in a process where human beings are engaged in a creative act, you should be striving to do less, not more! The least amount of work that meets the requirements of the system should be the goal. As soon as you put a system in place that incents people to do more, you end up with a much more complicated implementation. Suppose you know that you are being measured on the number of things you produce, and you are confronted with a design decision: one option has fewer of what is being counted, and one has more. Which one do you think will be chosen? So the answer is clear: these types of measurements simply incent the inverse of the behavior you are looking for, and certainly affect the outcome. Not only do they affect the outcome, but they affect it in a negative way, especially where quality is concerned. More software in a system creates more opportunities for errors in the implementation. It is a given that quality will suffer, and probably suffer dramatically. Of course, all the project manager types out there are thinking: we will just do more testing, or better testing. Now you have just elongated your process, and once again are going in the opposite direction from the one you intended.

One final question: how does counting things like interfaces, tables, etc. equate to the value that a software system has in the first place? It doesn't have anything to do with it at all! You could create a huge software system that would have lots of function points, but if your business doesn't find any value in it, then it is not worth anything! It is what the software enables for your business that makes it valuable or not. The center of what we measure should be value to our businesses, nothing more and nothing less!

Tuesday, March 28, 2006

Using Linux on a Day to Day basis

I was sitting here thinking as I was working away, and I started to think about what it is like using Linux on a day to day basis. Is it really that different than using Windows, or Mac OS X? Are there really big differences between these platforms for me?

I have several Linux machines in my household, and I have several Macs. I used to use Windows in my day-to-day work environment, but I have been free of that for quite some time. I use Linux every day, and there really isn't anything that I miss or need that the other platforms offer. Yes, there might be a feature here or there that is on one platform vs. another, but nothing that I just have to have. In fact, when I look at what I need to do my daily job, it boils down to these things.

My day almost always starts with reading e-mail. Well, there are certainly no issues there. Linux has quite a few decent mail clients to choose from. I have been using Evolution, and it has served my purposes quite well. Between the e-mail, calendar and task features, along with filters for sorting through e-mail and filing them in appropriate folders, I have a very productive environment. After going through e-mail, and working through whatever that brings, I usually transition to doing some technology reading.

In this regard, the trusty Firefox is my primary tool. Along with Google Reader (their RSS/Atom feed AJAX client), I can read through all the latest technology news and the technology articles that are relevant to me. After that, I usually do work around process-related items.

In this regard, 2.0.x has been the tool of choice. I have to deal with budgets, products, development processes, etc., and they invariably are encompassed in some form of business document. My co-workers almost all use Microsoft Office, so I have to use the Office formats often. What has been impressive is that I have been working with Excel spreadsheets, PowerPoint presentations and Word documents of almost every kind, including budget and planning spreadsheets with macros. So far, has been able to read and write every one of them, even the ones with macros, with no apparent issues at all. I have even used to publish documentation in DocBook format using the XML transforms for DocBook. The only thing it didn't do correctly is include my embedded images. A quick edit of the XML using Vim, and I had my embedded images. It even intelligently kept my footnotes and appended them to the end of the document. A very clever way of dealing with footnotes. Finally, I usually turn my attention to development tasks.

In this case, I use Eclipse, with JBoss IDE and other Eclipse plugins. Eclipse works beautifully on Linux, and I have no problems working with our CVS repositories through Eclipse. Also, I use MySQL as my database, and have been using the GUI administration and query browser tools as well. The database is rock solid, and runs beautifully on my laptop. The GUI tools have progressed since they first became available, and I have used the query browser to do data analysis on a corporate database. Once I figured out how the bookmarking features work for queries, I was able to save all of my analysis queries, with descriptive names, for later use. It was very nice indeed. One thing that I have yet to put into practice, but will soon, is an application called gvidcap. This application will record what you are doing on your computer, complete with your voice (as long as you have a microphone). Last but not least has been Skype. I have used Skype for work, and I must say that I have been impressed. Conference calls and individual calls work very well. On conference calls, network latency issues and/or CPU issues on the peers involved may sometimes degrade the experience, but overall I have to give it high marks. Besides work, I also use Linux for my personal business as well.

Where personal use of Linux is concerned, I certainly broaden the things I do. For instance, ripping some of my personal CDs and putting together a music library for my own enjoyment. In Fedora Core 5, this is as easy as pie. Other things are burning CDs and DVDs. I have come to like the simplicity of the CD/DVD creator in Fedora Core. In fact, with Fedora Core 5 you can now duplicate CDs and DVDs through it quite easily. Just put a CD in, right-click on the desktop icon, and select "Copy Disc...". It doesn't get much simpler than that. Some of the other day-to-day activities where Linux really helps me out are the new Tomboy and Beagle applications. Tomboy is a very nice and simple note-taking application. I have started to use it, and it has filled a real need. Instead of typing a document and saving it away, and then not having it very accessible, I can just type a quick note, and it is right there within the panel applet, right where I can get at it, so I don't have to remember what I called it and where I saved it. It is cutting down on the document clutter that everyone experiences. And finally, Beagle has been very impressive. When I want to find something, searching is now extremely fast, and complete, since the search technology doesn't just work on file names, but on the content. It has been wonderful, and it finds things in places that sometimes I wouldn't think to look. Very useful!

One last thing about personal use of Linux is playing games. I have become addicted to Chromium. If you are old enough to remember Galaga, then you will like Chromium. Different in its approach, and with much better graphics and sound than Galaga, but similar. Try it out! As far as commercial games, I actually have quite a few. Unreal Tournament 2003 and 2004, Quake 3 Arena and Return to Castle Wolfenstein are just some of the titles that I have that are native Linux ports. They all work great, and have been real fun. I hope the trend of offering native Linux ports for commercial games continues.

In conclusion, I would have to say that using Linux on a day to day basis is easy, productive and fits my needs very well. I would bet that if you spent some time with Linux, you would probably find the same thing.

Wednesday, March 22, 2006

Fedora Core 5: Fits and Starts

I was planning to write this wonderful review of Fedora Core 5. Well, I downloaded the ISO images, and burned them to CD using Fedora Core 4 with no problem. I booted up from the first CD, and started the installation process.

The installation process took about an hour and a half on my HP Pavilion zv5000 laptop. It is an Athlon 64 laptop with 802.11g wireless, 1.2GB of memory and all the typical things like USB 1.1 and 2.0 ports, microphone and headphone jacks, Nvidia graphics (GeForce 420 Go with 32MB) and an SD/PCMCIA slot. I have really grown to like this laptop, and I had everything working beautifully with Fedora Core 4. In fact, I was a little hesitant to jump on the Fedora Core 5 bandwagon so quickly. I usually like to wait until I see the first kernel update for Fedora before upgrading; usually by then, all the major problems have been worked out. Well, my hesitancy was justified.

The installation went without a hitch, installing packages from all five CDs. In fact, I noticed that it upgraded both the i386 and x86_64 architecture packages I had installed. A very nice touch indeed. After rebooting to my shiny new GNOME 2.14 desktop, that is when the trouble began.

The first thing I always do after installing a new kernel is reinstall the Nvidia drivers so I get fully accelerated 3D. This was especially crucial, considering I was really eager to try out the new AIGLX support with all the wonderful 3D stuff. Much to my surprise, the Nvidia kernel module would not build, even though the kernel that comes with Fedora Core 5 is 2.6.15, just like the latest kernel on Fedora Core 4. It turns out that a last-minute change to the kernel broke the Nvidia kernel module. There is a bug report for it, and the plan is to fix the problem in an updated kernel, which according to the note I saw should be released within a few days.

So, this shot down my ability to use the most anticipated feature of Fedora Core 5. Then the kernel problem reared its ugly head again. My wireless card, which is from Broadcom, does not have Linux device drivers. Since my machine is 64-bit and I am running the 64-bit OS, I need to use DriverLoader from Linuxant. I installed the latest version of DriverLoader, and its kernel module also would not load. It turns out to be the same problem that broke the Nvidia kernel module. So, now I have reverted to using a wire. At least I still have network connectivity.

Having endured these problems and not really getting to try out the new 3D stuff was disappointing, but I did have a functioning system. At this point, I decided to move on to making sure all my upgraded applications worked. First, I fired up the new version of Firefox. It detected my old extensions and asked if I wanted to find new compatible ones. I said yes, it found updates for all of my extensions, I installed them, and everything worked great. This area was one of the smoothest of the whole day, and I was very pleased. Then I moved on to Evolution.

Once again, upgrade problems! I use Evolution as a client to a hosted Exchange 2003 environment for work, and it is very important for my everyday productivity. When I fired up Evolution, I could no longer authenticate. Ouch! I deleted the account and exited Evolution. I set up the account again, and seemingly I could authenticate, but I couldn't get into my folders. After several hours of playing around with things, I just gave up and entered a bug report into Bugzilla. At this point, I have had to fall back to using Outlook Web Access, which is not very good. Anyway, at least I can get to my work e-mail, calendar, and tasks, even if the interface is crude.

On the upside, my Skype client still works great, as do my Eclipse environment and MySQL (even with the upgrade to 5.0). The only problem I have on the MySQL front is that the MySQL Query Browser just segfaults now. I tried upgrading it, building it from the source RPM, etc., but to no avail. Once again, I was left with no option but to report a bug to MySQL.

Hopefully, the kernel fix will come soon, and I will have my wireless and Nvidia drivers working again, and that is when we can start having some real fun testing out FC5. I guess the lesson is to not upgrade on the very first day of the release.

Wednesday, March 08, 2006

Are Software Patents True Inventions?

Over the last several years, I have given a lot of thought to software patents. Being involved in open source software for the last six years or so has triggered some of those thoughts, and being involved in a patent infringement case made me think long and hard about how I feel about software patents.

Several years ago, in my previous job, the company I worked for was sued for patent infringement. Now, this was the last thing I ever expected, because the company had no technology of its own for sale or use outside of the company! Why would we be sued for patent infringement? It didn't make sense to me at all. In the early days of this lawsuit, I had to meet with an outside patent attorney who was going to act as an advisor on the case for us. He explained a lot to me about patents and how the system worked. He explained that the mere use of the technology by someone made them liable for infringement. That means users of technology are just as much at risk as technology providers.

In this particular case, the technology in question was provided to us through a vendor, and we used it extensively in our enterprise. That made it all the more scary, because if we were forced, in some way, to stop using the technology, we would essentially cease to be able to operate our business. Fortunately, we were in compliance with the indemnity clause in our contract, so at least the vendor had to take over and defend us. Even so, it was still me who had to go through the process. In that process, I was deposed by the legal counsel of the patent holder.

Eventually, the case was settled without going to trial, and the company I worked for did not have to pay a dime. What I learned from that experience was threefold.

First, the patent did not have to have a working implementation! When software patents were first issued, you had to submit the source code to the working implementation with the patent application. This is no longer true, so you can essentially patent an idea, without a working implementation.

Second, the patent office does not have the skill to determine whether an idea truly meets the bar for a patent. One of the keys to whether something is considered an invention is that it cannot be the logical next step for an engineer competent in the field. In reading the patent involved in the case I talked about, it clearly did not meet that requirement. It is my belief that 99% of software patents do not meet this criterion.

Third, the discovery process for what is called "prior art" is awful. I believe 99% of software patents have relevant prior art as well, but you would never know it by looking at software patents. Of course, one of the issues with prior art is that 95% of all software written is written by organizations that have no intention of ever selling it. IT/IS departments write 95% of the world's software, which means that the search for prior art is only covering 5% of the software spectrum. No wonder this process is so bad.

Is this to say that no software can meet the requirements for patentability? I think there is a narrow band of software that can be considered a true invention. For example, cryptographic algorithms. The mathematical element is such that you are essentially discovering something. The peer review that these algorithms have to go through in order to be proven secure also raises the bar, in my opinion.

Because of these issues, and others that I haven't delved into, I believe that we would all be better off just eliminating the patentability of software. I believe that the notion that software research and development would stop is ridiculous. Just because you can't patent something doesn't mean there isn't money to be made in the market. It is the potential money to be made, and the size of the potential market that drives software research and development, not the ability to protect the work through a patent. In almost all cases, software functionality can be duplicated with an alternative implementation approach anyway! What is really going on with patents, is that companies want the ability to control a market. That is not the free market economy at its best. Healthy competition on implementation in software is what is best for the economy.

Let's just do away with software patents.

What do you think?

Tuesday, March 07, 2006

Predictions for the Future of Middleware

What is the future for middleware, especially in the enterprise? I would say there are two major trends in the industry.

One trend is the consolidation of middleware. The consolidation is reflected in the ever-increasing number of pieces in the portfolios of companies like IBM. Now, IBM's portfolio is a hodgepodge of internally developed and acquired technologies that don't always play well together (or even work, for that matter). Nonetheless, it is still a great example of the fact that a lot of enterprises want fewer vendors to work with, and will buy more from a single source. Of course, this is driven by a desire to have "one throat to choke". As I have said before, this is simply a myth. If you check out one of my previous posts ("The Myth of One Throat to Choke"), you can get my full analysis of why.

This consolidation in the closed source world has the biggest effect on the smaller vendors who have specialized middleware. For example, companies like webMethods, who have been in the EAI space, or Sonic with their ESB product. They simply offer one small piece of what enterprise customers need, so they will either be purchased, or wither and die a slow death as their once standalone market is subsumed by the middleware suite vendors.

The other trend is the proliferation of open source middleware. Whether it is a simple solution like LAMP or a more comprehensive one like the JBoss suite of middleware technologies, this market is growing, and to some extent starting to dominate the landscape. As enterprises continue to push the use of open source software and gain its benefits of higher quality, lower costs, and empowerment for developers and support organizations, the sky is the limit.

These two trends are the only trends that show growth in middleware. Everything else is stagnant in the market. There may be some companies that can show growth with standalone solutions, but it is growth that will be temporary. As open source continues to mature, and big middleware players like IBM continue their march, the middle of the market will get squeezed out.

So when there are two middleware plays left in the market, the large platform or suite vendors (of which there might be two or three), and open source, what happens then?

Open source will continue to commoditize the middleware market, and the large platform players will have to either move out of middleware and up the stack, or join the open source party and get behind existing open source efforts or try to forge their own communities. Mind you, this will take years, as enterprises don't change overnight, but in my opinion it is inevitable!

Monday, February 27, 2006

Does Linux Suck?

I have had numerous discussions around Linux, especially as a desktop OS. A lot of Windows and Mac folks like to say that Linux sucks. A lot of the diatribes against Linux usually have to do with getting laptops to function completely. A lot of people assume that this is a Linux problem, and that Linux must "suck", because some piece of hardware doesn't work, or doesn't work out-of-the-box.

A lot of other complaints about Linux revolve around functionality issues. Either something doesn't work the way Windows or Mac OS X does, or some application isn't available, or they just don't like the alternative application and the way it works. Are these types of issues related to the quality of the operating system?

I would say not! Many of the issues are actually a matter of taste, versus a matter of fact. Simply not liking the way an operating system does something, doesn't mean the operating system sucks. Just that it doesn't work the way you expect, or it doesn't fit your style of working. For every person that doesn't like the way something works, there are many who probably do.

Personally, I have Linux running on an HP laptop with an Athlon 64 processor. When I first purchased it, I installed Fedora Core 3 on it. Did everything work out of the box? No. Did most everything work? Yes. Considering how proprietary laptops are, and considering that the manufacturers are not working directly with a Linux distributor to make sure everything works, it is quite amazing that things work as well as they do! Windows XP came pre-loaded on the laptop, and of course everything worked; HP specifically works with Microsoft to make sure that happens. People take this for granted, and then blame Linux when things don't work. With my HP laptop, everything worked with the exception of my wireless card, and the 16x9 display would get corrupted on the console when using the Nvidia drivers.

The built-in wireless is a Broadcom 802.11b/g 54 Mbps card, and it was just dead. Of course, Broadcom does not produce Linux drivers, except for embedded applications, and you cannot download those, so I started to see what could be done about it. I found the NdisWrapper project, and it looked quite promising, but I was quickly disappointed, because I had put the x86_64 version of Fedora Core on the laptop so I could actually run in 64-bit mode (the Windows XP version on the laptop was 32-bit). I started to work on porting it to 64-bit, but ran into some snags in the code base. It assumes that it is 32-bit and 32-bit only, so the project really started to become hard. So, I decided to see what else might be out there.

Then I found a commercial product that does the same thing that NdisWrapper does, which is load the Windows driver under the Linux kernel. The company is called Linuxant, and the product is DriverLoader. Amazingly, they had 64-bit Linux support, and they had support for my Broadcom chipset with a 64-bit Windows driver! I was elated, and I quickly purchased a copy (very cheap, but I don't remember how much), installed it, installed the driver, and boom! It didn't work! It turns out that the Fedora project had built the kernel with 4k stacks. Ouch! Typical Windows wireless drivers require a 12k to 16k stack size to operate. Dead in the water again. I guess that quite a few people had problems with things like this, so the Fedora project started building the kernel again with a larger stack size. I updated my kernel, and all of a sudden I had working wireless! Yeah!

The next issue to tackle was the screen corruption on the console. Mind you, the graphical screen was fine; this was more of an annoyance than anything. I did a search on the Internet, and wouldn't you know it, someone had created a step-by-step guide to solve this problem with my laptop and published it for all to see. This is one of the real strengths of the Linux community. I followed those instructions, and lo and behold, after restarting X, I now had a perfectly working graphical screen, and text-based virtual consoles that were no longer corrupted!

So, for my two issues, were these problems with the operating system itself? Not really. The fact that Broadcom won't release a Linux driver has nothing to do with the quality of the operating system. The fact that HP's widescreen display needs a different modeline in the X server configuration is also not a quality problem with the OS. It is merely an issue of the hardware vendor not publishing its specifications.
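To illustrate what a modeline fix like that typically looks like: it usually amounts to a small addition to the X server configuration. The timings and identifiers below are assumptions (derived from the standard GTF formula for a generic 1280x800 panel), not the actual values from the guide I followed:

```
# Hypothetical /etc/X11/xorg.conf fragment; real timings come from a tool
# like "gtf 1280 800 60" and depend on the specific panel.
Section "Monitor"
    Identifier "LaptopPanel"
    # Explicit widescreen modeline the driver may not probe on its own:
    Modeline "1280x800" 83.46 1280 1344 1480 1680 800 801 804 828
EndSection

Section "Screen"
    Identifier "Screen0"
    Monitor    "LaptopPanel"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1280x800"
    EndSubSection
EndSection
```

Once the X server is told the exact timings the panel wants, both the graphical screen and the virtual consoles behave.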

Yes, because of these types of issues, Linux is harder to get working. Does this mean that Linux sucks? No, it is just a reflection of the current state of affairs in desktops and laptops. All the OEMs really pay attention to is making Windows work on their machines, and in the case of Apple, Apple controls the hardware and the operating system together, so if their stuff doesn't work, that's a real problem.

For me, I work very productively with Linux everyday, and I don't have to reboot. I use a wide variety of applications, and never find anything wanting. The quality is great, the stability is great, and I love the fact that it changes and matures rapidly. I don't have to wait three years to get my next rev of an operating system, and then have to pay for the privilege to use it.

Linux doesn't suck!

Monday, February 20, 2006

IBM's and BEA's Comments on JBoss

With all the rumors floating around the press lately about JBoss being acquired, there have been some interesting statements about JBoss made by competitors IBM and BEA.

The first one I would like to address is a quote from BEA's Marge Breya. Marge Breya is BEA's Chief Marketing Officer, so we can forgive her a little for her ignorance. Here is the quote:

"JBoss is closed from a contribution standpoint–it's open source with a closed community…a bit like calling Cuba a democracy," said Breya.

Here is the link that contains the quote:

This has got to be one of the most twisted statements I have ever read. First, she claims that JBoss is closed from a contribution standpoint. This is the furthest thing from the truth. Open source contribution is based on a meritocracy. The best contributions are committed, and contributors earn committer status in the projects. JBoss is no different, and I can assure you that JBoss has many contributors outside of JBoss, Inc.'s paid developers. If BEA wanted to contribute, their developers would have to earn that right through valuable and good code contributions. There have been over one thousand contributors to JBoss projects, and that number will continue to grow. If you look at the forums on the JBoss web site, you will see a vibrant community of users and contributors (actually, users are contributors too). There are thousands and thousands of posts on all the various JBoss projects. If this isn't community, then what is she talking about?

Second, she tries to say that we really aren't open source because we don't have a community, and says it is like calling Cuba a democracy! What a ridiculous statement. So let's really look at it. We all know Cuba is not a democracy but a communist state; you would have to be off your rocker to claim otherwise. Is Marge Breya off her rocker for saying that JBoss is not open source? Well, let's ask ourselves some questions. What license is the software licensed under? JBoss is licensed under the LGPL. Is this an approved open source license by the Open Source Initiative? Yes, it is. In fact, the LGPL specifically prohibits the code from ever being closed. We have already gone over the community and contribution questions, so what is her beef? I think it has to do with the license. More on that in a minute.

Next, we have some IBM comments from Steve Mills. Steve Mills is the head of IBM's software group, and he was quoted saying the following:

JBoss' Java application server contains a significant proprietary component even while it adheres to the Java 2 Enterprise Edition standards.

"JBoss has a lot of proprietary JBoss. It's sort of a hybrid model of open source," Mills said.

Here is the link that contains the quote:

Steve Mills claims that JBoss is a hybrid model of open source. What does that mean? Is some of the JBoss code closed source, and some not? Absolutely not! Are some of the JBoss projects not based on an open standard? Yes! Is JBoss not open source just because it has code that is not based on some standard? No! Having said that, are there standards covering the things that are not based on an open standard? No! There are no open standards for object/relational mapping, with the exception of the new EJB 3.0 specifications, of which JBoss was the main contributor, and it has an implementation already. There are no open standards for Aspect Oriented Programming. This doesn't stop IBM from using AspectJ, does it? Is AspectJ not open source because it doesn't adhere to some standard? Of course not! This is just a bunch of hooey! Why would IBM be complaining about JBoss being proprietary? Have they open sourced their WebSphere product line? Do they have any proprietary technology in WebSphere? Of course they do. Just like JBoss, you have to solve real-world customer problems, and that means you have to have technology that is outside of any standards. Standards only cover part of the problem space for developing real-world solutions to real business problems. How many WebSphere and BEA shops use Hibernate for their persistence? Lots of them do, because it simply works better. I believe there are two fundamental reasons for IBM and BEA to cast disparaging remarks at JBoss.

The first reason is the license that JBoss uses. It does not let them fork the code and make it proprietary. BSD and ASF style licenses allow, and to some extent encourage, forking the code and taking it proprietary. This allows traditional software vendors like IBM and BEA to mine the best of open source and take it proprietary. So the open source community does the work, and IBM and BEA get the money. This sounds like exploitation to me. The second reason is that much of the JBoss technology can run inside BEA's WebLogic and IBM's WebSphere. This must be very scary for them. After a while, their customers might start to think: why am I paying BEA and IBM anything, when I am getting the most value from the JBoss software components? Maybe I should just adopt the entire JBoss stack instead.

In summary, they want to make it seem that JBoss is not really open source, because JBoss poses a huge threat to their traditional middleware revenues!

Tuesday, February 07, 2006

Software Development Productivity

I have been caught in a lot of discussions over the past year or so about software development productivity. How to measure it. What improves productivity? Does it come at the expense of quality? All kinds of things to think about. In researching this topic, along with quite a few other folks that I have worked with, I have a set of principles that I would like to discuss.

One area that concerns a lot of enterprises is the requirements process. How do you get good requirements up front? It seems like it should be easy, right? Nope! It isn't easy at all. Software systems are difficult for end users to describe before they see them. I think this is one of the reasons that packaged applications have been popular versus writing the application in-house. So what can you do about requirements?

The only thing that I believe can be done is to follow an iterative process. That is, don't bother trying to gather all the requirements up front, because they will never be right, no matter what process is followed. Define some high-level goals, and write something that illustrates those goals. If the requirements cannot be written down on 3x5 cards, then you are specifying too much. Keep it simple, and get something concrete in a short two-to-four-week cycle that the end user can actually look at. End users will be able to articulate what they like and don't like, and how something should work, only after seeing an implementation. That will get the ball rolling, and you can then do iteration after iteration to refine, add features, and correct things that may otherwise go way off course.

This iterative process shouldn't be dragged on for a long time either. The entire project life-cycle should be complete within twelve to sixteen weeks at the most. Did you know that research has shown that projects that go on for six months have a less than 50% chance of being implemented? If you go all the way out to thirty-six months, the odds drop to zero! That's right, zero! Keep projects small and manageable, and release early and often, not just in your iterations, but to production as well.

For both principles above to succeed, you must work directly with the end users of what you are building. Don't fall into the trap that a lot of organizations fall into, where they have groups dedicated to working with business people who then translate the requirements for developers. This is a huge expense burden on the company, and it only makes matters worse. Think about it. Haven't we all played the telephone game, where one person whispers something in the next person's ear, and so on, and so on? When you get to the last person, is what the first person said ever repeated? No! Of course not! The same principle is at work with layers of organizations that are supposedly there to help, but only serve to scramble the message. Let developers, no matter what you think of their people skills, work directly with end users. You will find that you get a better product, quicker, and you will have happier end users.

Keep team sizes small. In knowledge work, which software development is, you should never have a team larger than seven people. Why do I say that? How did I come up with seven as the magic number? If you look at the research done by Quantitative Software Management on project team sizes, you will see that teams of three to seven people are the most productive. Large teams can cost upwards of 400% more, and deliver 29% later, doing the same amount of work! This is dramatic. If you have a project that just seems too large to deliver with a small group, don't trust your intuition. It is wrong! Keep the team size small, and if something really is very large, break it up into sub-projects that can be delivered to production in a twelve-to-sixteen-week period. You will deliver value quicker, and the overall project will get completed much faster and at a much lower cost.

Where are these principles demonstrated for everyone to see, right out in the open? Well, it's obvious (at least to me). Open source projects work this way by default. They have small sets of contributors at the center. Even large open source communities have divided things into sub-systems that can be worked on by very small groups with committer access. They work directly with their end users. No middle man. Requirements are not expressed in large documents. In fact, designs are not documented at all! Instead, implementations that demonstrate the overall goal of the project are how things get started. The process is iterative, with many small releases, some labeled alphas, betas, or release candidates, while others become a "stable" release. This allows the project to get feedback from users early, and users can give feedback in the form of feature requests, bug reports, etc. See the parallels to what should be happening in the enterprise?

If you adopt the open source model of development in-house, you will find a huge productivity boost for your organization, and you will find that your end users are much happier, because they will actually get what they need and want, in a reasonable amount of time, for a lot lower cost.

Think about it!

Monday, February 06, 2006

GPL 3 and DRM!

I have been coming across a lot of articles lately talking about the new version of the GPL (GNU General Public License), and specifically the issue around DRM, or Digital Rights Management. One thing really strikes me about the articles: in many cases, they mix together multiple issues that shouldn't be mixed.

DRM in the consumer products world is one thing, and DRM in the corporate information world is entirely another. In the first case, we are talking about movie studios, music labels, etc. trying to control the distribution of a copyrighted work so that everyone pays for their copy of that work. While I can understand this, as they want to be compensated for the costs of producing that work, the control mechanisms being used go far beyond ensuring that you paid for the copy. They try to completely control what you do after you have paid for the copy. These copy protection schemes, like the software copy protection schemes of the 1980s, will fail, because people will not tolerate them for long.

In the other case, protecting corporate information, such as trade secrets, is certainly something that open source software needs to support. Corporations of all shapes and sizes, with all kinds of business models, have a need to protect certain information. Anyone who thinks that corporate espionage doesn't exist is naive. Digital signatures, encryption, and permission-based controls all have their place where protecting corporate information is concerned. In fact, these are all things that are in place, in one form or another, in open source software today. We should not mix these two forms of DRM and put them in the same basket. If we do, then we are in jeopardy of losing the very corporations that are helping to make open source software a success.

Software licensing is not the area to be trying to combat DRM!

Monday, January 30, 2006

Benchmarks: Numbers Don't Lie, but Liars Use Numbers

Can industry standard benchmarks, or even application benchmarks like SAP's, be relied upon to make technology choices? I have come to the conclusion that benchmarks are not reliable measuring sticks for decision makers, regardless of whether they are a mythical application, like the SPECjAppServer2004 benchmark, or an ISV-specific application benchmark like SAP's. Why do I believe this is true?

With the industry standard benchmarks, whether from SPEC or the Transaction Processing Performance Council, the applications are far too simple to simulate a real-world workload. Real-world applications have far more complex business logic, and are usually highly data driven. When I say data driven, I mean that the application logic branches are almost always determined by querying a database for what to do under certain business cases. They are really automated business processes. I have seen cases where the customer setup in an application had over 60,000 locations, and another where there were over 100,000 specific products listed in a customer contract. These are but two simple examples, and what they lead to is a read/write ratio in the applications that is heavily tilted to the read side. In two major applications that I have been involved with, the read/write ratios were 98% read, 2% write, and 93% read, 7% write. Industry standard benchmarks do not have such ratios because they don't simulate these types of complex, data-driven, large-dataset applications.

With ISV-specific benchmarks, even though they are running a real business application, they don't represent a customized deployment of the technology. They are specifically crafted to create the highest possible numbers, because they are actually marketing tools, not something that can be relied upon for your own implementation. If you look at some of the SAP benchmark results, they have numbers like 29 million and more dialog steps per hour, and stuff like that! This is a dead giveaway to anyone who has half a brain. Does anyone's SAP implementation in the world do 29 million of anything in one hour? I think not! My entire career has been spent in high-volume, transaction-oriented businesses (until just recently), and believe me, these types of numbers are completely off the chart, and meaningless.

There is one other aspect to both types of benchmarks. The configurations used are ones that no customer in their right mind would deploy in a production environment. You will see things like raw disk being used with RAID level 0 (just striping); undocumented database features being turned on that are specifically for benchmarks, but make the database unsupported by the vendor in a production environment; data being striped over hundreds or even thousands of disk drives; and all logging of any kind, whether for databases, application servers, the OS, etc., being turned off to lower the overhead as much as possible. These are but some of the tricks used in so-called audited benchmark results. Where does that leave us where these benchmarks are concerned?

It leads us to one place and one place only: "numbers don't lie, but liars use numbers"! These benchmarks are marketing tools, and no more. They don't represent anything remotely close to a production deployment, and the numbers will always be higher than what can be achieved in a real-world deployment that can be managed. Don't rely on these marketing ploys to make decisions; instead, run your own workload in a proper production-like configuration, and make your decisions based on facts, not fiction.

Monday, January 23, 2006

A New Beginning

I have recently changed jobs, going from a traditional internal IT shop to an open source company. Friday was my last day at my old job, and today was my first at my new one. What I find most interesting about the differences is that the passion so often drained from employees in traditional IT shops is alive and well in my new position.

People really care about what they are doing, and it shows in everything I have experienced so far. A successful endeavor, no matter what its purpose, has to involve people who care. What a refreshing difference! It is wonderful to be involved in something where people say what they mean, and mean what they say. No hidden agendas, no politics, just a spirit of let's do the right thing.

I think I have found a position where I can turn my passion into my vocation, and you can never go wrong with that.

Monday, January 16, 2006

The Myth of "One Throat to Choke"

When decision makers start to compare various technology solutions, one thing that inevitably comes up is the notion of a single vendor solution, with one support organization, versus a best-of-breed solution with multiple support organizations. The so-called "One Throat to Choke" support model.

I call this a myth for several reasons. While we can all recall situations where multiple vendors pointed fingers at each other instead of helping solve our problem, I can also recall situations where multiple vendors worked quite well together to solve problems. Just as we can all recall a single vendor not addressing a problem even though it was clearly theirs to deal with.

First, most single vendor solutions with anything more than one moving part, so to speak, have proprietary features of the integrated solution that are intended to lock you in. They also make it very difficult to get value out of the solution without using those proprietary features. Once you have landed in that trap, the switching costs start to mount, and for some conservative organizations they become insurmountable. Once they have you in that situation, their support really doesn't have to be very good. So now you have "one throat to choke", and you are just there choking them with no results! This is especially true in the software industry with "stacks" or "suites" that are supposed to save you from all the integration costs, because they are pre-tested and certified together.

Second, most mature technologies today are based on a set of open standards. With open standards, the integration costs aren't as high, and in some cases are downright non-existent. With standardized interfaces and protocols between the various pieces of a best-of-breed solution, it is often quite easy to determine where a problem lies and who needs to be involved to fix it, which lessens the finger-pointing. Also, when vendors are put into a competitive situation, they will often work harder to solve your problems than vendors that have you locked in!

Finally, with many technology combinations in a best-of-breed solution, the vendors have predefined cross support relationships, and if they don't, many times they are willing to put those in place for you.

While it may seem alluring to have "one throat to choke", I think the difference in resolving problems is minimal, at best, when compared to a multi-vendor solution. Also, with the lock-in strategy of "stack" or "suite" products, you are many times left with an inferior solution, with none of the competitive pressure that helps you as a customer.

Saturday, January 14, 2006

When Technology Evaluations Go Awry

Recently, I have been witness to a technology evaluation that has been a real eye opener. What you would like to believe is that individuals involved in an evaluation will have an open mind to all solutions, and that they would not try to hide the weaknesses and problems in one solution versus another.

I guess I shouldn't be surprised that human nature has reared its ugly head in this. Although I would like to think that people will be honest and have everyone's best interests in mind, I have uncovered multiple instances where individuals actually covered things up, and outright rated specific features that they did not even observe. All because they don't believe in an open standards and open source approach to technology. They believe that traditional commercial ISV solutions are inherently better, so they set out to "prove it", and in so doing they intentionally skewed test results, hid problems, or attempted to explain them away.

Now, what is the result of all of this? The ultimate result is that a company will pay more money for a solution that has no additional benefits over the open solution, has poor technical support, doesn't truly work as well as the evaluation seems to state, and will take the entire organization backwards instead of forward!

This is the saddest thing I have ever witnessed in my career: that people would put their own "beliefs" and "pride" ahead of the best interests of the organization they are a part of.

I believe that what I just witnessed marks the beginning of the end of what used to be a successful organization. I can only hope that this kind of behavior is eventually rooted out.