Wednesday, October 03, 2007

Open Source: The .NET Framework? Huh?

It is being reported today that Microsoft is open sourcing their .Net framework class libraries (See this link on InfoQ - Open Source: The .Net Framework). Wow, could this be the start of a sea change at Microsoft?

Well, it turns out that this is really no change at all. When you look closely, they are releasing the source code under the Microsoft Reference License.

This license only gives you permission to use the code in a read-only form! This hardly fits the definition of open source, and the term open source should never be used where this license is concerned.

While this is a nice move by Microsoft for .Net developers, who will now be able to step through the actual .Net library code in a debugger, it has nothing to do with open source, and I can't believe anyone would use the term open source in conjunction with this move by Microsoft.

People need to get their facts straight before titling an article this way. This is very poor journalism, and it gives Microsoft some good PR that it clearly does not deserve in this case.

Monday, August 27, 2007

Microsoft's Behavior in the Standards Process for OOXML

I have been reading for weeks now that Microsoft is manipulating the ISO standards process to get OOXML approved, and I find what I am reading disturbing and ethically wrong! Microsoft seems to promote a culture within the company that says, "Don't explicitly break any rules, but use any leverage possible to get what we want". This reminds me of many discussions I have had over the course of my career about the legality of something versus whether what was being done was ethical. The law doesn't cover everything, and it doesn't define ethics in and of itself. It's up to individuals and the leadership of companies to define what is and is not ethical, and apparently Microsoft uses only the law to determine its ethical values. That is a real shame, and it is truly sad that a company with so much power in the software industry behaves the way it does. So what have they been doing that is so bad?

Well, let's enumerate everything I have seen:

  • Using individual countries' standards bodies' rules to add new members that are Microsoft business partners, so they can stack the vote in their favor.
  • Blocking new members from joining those same bodies when they know the applicants will not vote to approve OOXML.
  • Using the rules within the voting process to ensure that comments do not get forwarded with the vote to the JTC1 committee of the ISO.
  • Giving misleading information about the JTC1 committee process, so that countries will not vote "No, with comments", and instead will vote "Yes, with comments".
    • By the way, if a country votes "Yes, with comments", Microsoft is not committed to actually fix any issues raised by the comments. They can simply address the comment by logging that nothing will be done.
  • Giving misleading information about the voting deadlines, thereby possibly preventing some countries' votes from being counted.
  • Telling certain countries that Microsoft's educational programs in their country would be adversely affected if they didn't vote yes.
  • Calling heads of government agencies to pressure their representatives to vote yes.
While no one has done anything illegal, at least not that we know of yet, Microsoft is crossing an ethical line that shouldn't be crossed.

Instead of being able to sell OOXML as a standard worth considering on its merits, they are subverting the standards process, albeit within its rules, which are very loose, because they know that the standard does not really meet the requirements to become an official ISO standard without that subversion (see my previous post on whether OOXML is open or not).

If OOXML becomes an ISO standard, it will forever damage the standards process that we rely on to create a truly competitive landscape in the market. Microsoft may have won, but we have all lost, because we will never again be able to trust any standard produced through this process!

Wednesday, August 15, 2007

JBoss Enterprise Application Platform

This past Monday, I did a keynote at the SysCon Real World Java event in New York City. In preparing for that event, I realized that since JBoss was acquired, the news about JBoss has lessened substantially, and many people are just not aware of what's new at JBoss. So that is what I titled my presentation, and I focused on several areas. One area was the newly released (July 3, 2007) Enterprise Application Platform 4.2.

Traditionally, the lead product of JBoss was our application server. With the release of the Enterprise Application Platform, we have combined our application server with Hibernate, EJB 3, JSF, and JBoss Seam to deliver an integrated application development platform for the enterprise. Now how is this different, and what does it mean to users of the application server?

Well, of the technologies I listed above, the application server only contains Hibernate by default, and the AS 4.2.x releases from jboss.org are now community releases that do not come with paid support. Of course, community support, through our forums, is always available just like before.

With the Enterprise Application Platform, you also get the most often used technologies, which you previously had to integrate yourself, integrated by us and tested as a whole. No more building your own distribution with what you need to develop enterprise-class applications. The testing of the platform as a whole is also new, and I will highlight the differences.

With the old AS releases, we have a unit test suite that you can download and build, and we would run it in our continuous integration builds each day. When feature development and bug fixing were complete, we would concentrate on getting that test suite to 100% passing, and then release. The test suite was only run with the Sun JVM on a Linux platform (typically RHEL 4 based).

With the new Enterprise Application Platform, we continued to run the unit test suite. But we run it on the Sun JVM, the HP JVM (for HP-UX), and the BEA JRockit JVM, on RHEL 4 and 5 (x86 and x86_64), Solaris 9 and 10 (SPARC), HP-UX 11i for PA-RISC and Itanium, and Windows Server 2003 (x86 and x86_64). This ensures that more combinations of operating systems and JVMs work before we ship, versus having to deal with customer issues after the fact. We also have more operating systems and JVMs teed up for future updates (AIX and the IBM JVM are examples). We also ran the Hibernate test suite on five different databases. We certified on MySQL 5 (5.0.27), PostgreSQL 8.2.3, Oracle 9i (9.2.0.1), Oracle 10g R2 (10.2.0.1), and SQL Server 2005 (version 9.00.2047). Besides the unit test suites, we added significant integration testing that we had never done internally before.

The integration tests covered EJB 3, HTTP session replication, and JBoss Seam, and in the cases of EJB 3 and JBoss Seam we had performance, scalability, and clustering tests. This additional testing led to the discovery of many bugs that, in the past, we would only have discovered through customer deployments. This is exciting for us, in that we have produced a product with the most complete testing we have ever done, and as a result the most hardened distribution we have ever delivered to our customers! This is truly the dawning of a new age at JBoss!

Besides the new process that we take the Enterprise Application Platform through, the support arrangement has changed as well. We offer three years of support for the Enterprise Application Platform, during which we fix all bugs and security errata for customers. For an additional two years we fix only critical bugs and security errata, giving our customers a full five-year support cycle! This is what our customers have been asking for, and we are delivering it to the market now, and will continue to do so with future platform offerings.

So, if you are interested in this new offering, you will have to contact sales, as there is no binary download of the Enterprise Application Platform available (you must be a subscriber). The other way you can get the Enterprise Application Platform is through our developer tools package called RHDS (Red Hat Developer Studio). It includes the full binary distribution of the Enterprise Application Platform, and will install it, ready to run, as part of the new development environment, which was released as a beta on Monday. Here is the link to that:

http://www.redhat.com/developers/rhds/index.html


Download the developer tools, and play around with the technology. I'm sure you will like what you see!

Enjoy!

Tuesday, August 07, 2007

Beagle Correction

A while ago, I posted a comparison of Beagle and the new Google Desktop beta release. I had since removed Beagle and had been using Google Desktop exclusively. I received a comment from a Beagle developer, Joe Shaw, who thought I had simply encountered a bug in Beagle, and that Beagle does indeed index all the text in documents, not just the metadata.

Well, I have since upgraded to Fedora 7, and I became interested to see whether a newer version of Beagle indeed would find files that it couldn't find before.

So today, I installed Beagle again, and I did the same search that I had done before, looking for a PDF file with the search query of "Small is Beautiful". As you can see from below, it is now able to find the file that it couldn't find before, and matches what Google Desktop does.

As you can also see, the indexing is not complete yet, but it's only been a few hours since I reinstalled. At this point, the large index size problem also seems to be gone. I can't report the final size until it's done; right now the index is a little over 50% of the size of the Google Desktop index, but I don't know how much more it has to go. It would be nice if there were an index status page somewhere, like in Google Desktop, so you could see the progress of indexing your content.

So, it seems that Beagle has improved with Fedora 7, which includes Beagle 0.2.16.2. I congratulate the Beagle team, and I will continue to use both on Fedora so I can see how they fare against each other as they both mature.

In the case of RHEL (I have one RHEL 5 desktop), I can only use Google Desktop, as RHEL will not ship with Mono, which Beagle requires. It would be nice if Beagle didn't require Mono, so it could be included with RHEL, because I would love to compare the two there as well.

Thanks to Joe Shaw for commenting and looking into what was going on with my documents that Beagle couldn't find.

UPDATE:

Beagle finished indexing my system, so now I can see the total index size of Beagle compared to Google Desktop. The results are considerably improved from before. As you can see from the image below, Beagle still creates a larger index, but the difference isn't as dramatic. Before, I saw that Google Desktop had an index that was 98% smaller, but now Google Desktop's index is only approximately 23% smaller.


This is very good news indeed, and great work by the Beagle developers!

Thursday, July 26, 2007

My Experience with Fedora 7

Like anyone that has been using Linux for a long time, I really like seeing what's new in my favorite distribution. I have been using Fedora since Fedora Core 3, and have happily upgraded to each successive version, and I finally got around to upgrading to Fedora 7 this week.

A big part of the reason I waited to upgrade is that my laptop uses an ATI Xpress 200M video chip, which doesn't have an open source driver that can do hardware-accelerated 3D. The ATI proprietary driver didn't work with Fedora 7 when it was first released, so I thought it better to wait. Two months later, I read an article saying the new driver release fixed the issues with Fedora 7, so I downloaded the new driver and installed it on Fedora Core 6 first. Everything worked, so I popped in the DVD to do the upgrade and rebooted.

The laptop booted just fine off the DVD, and the upgrade process went smoothly. It took a little longer than I would have liked, but I have a lot of software installed on this laptop, so there were 895 packages that had to be upgraded. The upgrade finished, ejected my DVD, and I rebooted. This is where the fun began.

The first thing I planned on doing after the reboot was installing the ATI proprietary driver. Of course, on the initial boot after the upgrade, with the new kernel in place, the driver wasn't there, and the system booted in text mode, which is expected. At that point, I logged in, ran the ATI installer, and everything installed fine. I then rebooted again, not because I had to, but because I like to make sure the graphical boot works as it is supposed to. This time, when the graphical boot should have kicked in, all I got was a black screen.


So, I booted again, this time using GRUB to boot into single user mode, and then I looked at the Xorg log file and found that the X server was core dumping! Ouch! This release of the driver is supposed to work with Fedora 7. Well, I decided to reconfigure using the 2D open source radeon driver, which worked as expected, and I was able to get X working. I then went online and found out that the driver is not recommended for use with Fedora 7, as others had encountered the same problem I had. So, I find myself still stuck with a 2D-only system, because ATI has not released any fix for this problem. Hopefully, next month's release will finally fix the Fedora 7 compatibility issues once and for all (I am running out of hope for good ATI Linux support).
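In case it helps anyone else, falling back to the open source driver mostly comes down to pointing X back at radeon instead of the proprietary fglrx driver. This is just a sketch, assuming the ATI installer left a Driver "fglrx" entry in xorg.conf; back the file up first:

# keep a copy of the xorg.conf the ATI installer wrote
cp /etc/X11/xorg.conf /etc/X11/xorg.conf.fglrx
# switch the Driver entry from the proprietary fglrx driver to the 2D radeon driver
sed -i 's/"fglrx"/"radeon"/' /etc/X11/xorg.conf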


After that hurdle, I decided to see if the latest Broadcom driver for my wireless chip actually worked. I have a PCMCIA NetGear card that uses the Atheros chipset, which works great, and I ordinarily blacklist the driver for the internal Broadcom chip (bcm4318) because it has never worked reliably. I certainly would prefer to use the internal wireless, so I continue to experiment.
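For those wondering, the blacklisting itself is a one-liner. This is only a sketch; bcm43xx is my assumption for the name of the in-kernel Broadcom module at the time, and the exact modprobe.d file name varies between releases:

# keep the internal Broadcom module from loading so the Atheros card is used instead
echo "blacklist bcm43xx" >> /etc/modprobe.d/blacklist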

As it turns out, after upgrading the kernel to the latest Fedora 2.6.22, the driver loaded successfully, and I was actually able to connect to my access point with WPA. I was surprised and happy to see this working, but the good news didn't last long. The reported connection speed was only 1 Mb/s, and when I tried to open my browser, it couldn't even load my home page. So, with that disappointment behind me, I went back to the Atheros-based NetGear card. I had to use a snapshot release of the code for it to compile with the 2.6.22 kernel, but it works as well as ever, and maybe a little better.

At this point, everything is working quite well, and with some additional patches for Evolution's data server package, I now have a stable working laptop once again, albeit without working 3D. So after working with it the last few days, here are some of my observations.


I really like the fast user switching. The first time I tried it, it complained that it couldn't find the GDM binary, but after the screen saver kicked in and I woke it, I used the "Switch User" function from there, and it worked! I was able to switch back and forth between two accounts without any issues. Since then, it has worked from the panel applet without complaint. I really like this feature, and it's been sorely lacking in Linux for quite some time (I am used to it in Mac OS X).


Another pleasant surprise has been using 32-bit Firefox plugins under the 64-bit Firefox. This laptop uses an AMD Turion 64 processor, and the 64-bit Firefox is the default installation. Up to this point, I have always gone through the trouble of installing the 32-bit Firefox just to get Flash, Adobe Reader, and other plugins working. I had read about some software called nspluginwrapper. It is not in the official Fedora repositories, but it has a build that works perfectly on Fedora 7. This has enabled me to use the 64-bit plugins for Xine and OpenOffice.org, and at the same time use the 32-bit Adobe Reader, Flash 9, and Java plugins. Those, along with the xine-lib-moles package that adds the proprietary codecs to Xine, have opened up all the content on the web that I had not been able to access before. I find my web experience to be so much more pleasurable than before! Of course, I would prefer that these web sites didn't use proprietary formats to begin with, and everyone's lives would be much better.
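In case it helps anyone else, registering a 32-bit plugin with nspluginwrapper is only a command or two. This is just a sketch, and the plugin path below is an example rather than my exact setup:

# register a 32-bit plugin so the 64-bit Firefox can load it through the wrapper
nspluginwrapper -i /usr/lib/mozilla/plugins/libflashplayer.so
# list the plugins that nspluginwrapper currently has wrapped
nspluginwrapper -l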

The final area that seems to have improved dramatically is Firewire. I have a "My Book" external hard drive that I use for backups, and it has both a Firewire interface and a USB 2.0 interface. The Firewire interface has never worked before, so I broke out my Firewire cable and plugged it in, the drive powered up, and it mounted with a nice icon on the desktop, just like it should!


Since this worked, I decided to test the Firewire interface for the performance and reliability of my backups. I needed to take a backup anyway, so I started my backup process, which creates a gzipped tar of my home directory, and then I simply move the archive to the "My Book". I timed the move to the "My Book" and opened the resultant file using File Roller, and then did the same move and open again using the USB 2.0 interface. The Firewire interface was slightly faster at moving the 4.3 GB file, but only by 9 seconds, so there wasn't much of a performance difference. The surprising thing was that only the Firewire transfer resulted in a file I could open successfully; the USB transfer resulted in a file that got CRC errors. Obviously, that isn't a good backup, so I redid the transfer once again using the Firewire interface and was again able to open the backup file on the "My Book" with no issues. This kind of problem has happened intermittently with USB for quite some time, and it gets more prevalent with larger backups. Needless to say, I really like the new Firewire stack in Fedora 7, and soon I'll be testing it with a digital video camera, just to see how far this new stack has come.
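For reference, the timing and the integrity check can both be done from the shell. This is just a sketch with placeholder file names, and I'm using gzip -t here as a stand-in for opening the archive in File Roller:

# time the transfer of the backup archive to the external drive
time mv /tmp/home-backup.tar.gz "/media/My Book/"
# verify the gzip CRC of the transferred archive
gzip -t "/media/My Book/home-backup.tar.gz" && echo "archive OK"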

To sum things up, since getting over the hurdles of my hardware, I have a very stable platform for doing my daily work, and there has been progress on many fronts. The Broadcom drivers are improving rapidly, and I hope to be able to use my internal wireless chip soon. I only wish ATI would get their act together on the video driver, so I can fully exploit my hardware.

Tuesday, July 03, 2007

Google Desktop for Linux vs. Beagle

Recently, Google released Google Desktop for Linux. I have been using Beagle on Fedora Core since it was added, and I am currently running Fedora Core 6. With that, I decided to try out the beta of Google Desktop and compare search results between the two, to see if one was any better than the other.

So, I installed Google Desktop with their RPM for Fedora and set the preferences. I set up my indexing preferences the same as I did for Beagle, so the comparison would be fair on both sides. You can see the settings in the following image:

Most of the settings are the defaults provided, but I added /var, /opt, /etc and /tmp as file systems, because I like to be able to search for things in log files written by syslog, configuration files, etc. I am also indexing all file types and web history, with the only exception being https content.

This pretty much mirrors my Beagle preferences as you can see from below:


So, after setting the preferences, I watched Google Desktop go to work indexing my file systems. What was interesting is that it took a very long time: over two days to do the first pass at indexing. Now, granted, I have a lot of files on my laptop, so this is understandable, but Beagle seemed to index my files a lot faster. I don't have a specific time to compare against, though, because there is no way to monitor the indexing progress of Beagle (at least not that I know of). That brings us to comparing search results.

With Beagle, I have been frustrated at times that it couldn't find files that I knew were there but couldn't remember where I had saved. Isn't that what desktop search is all about? In fact, as a result of trying to find a Portable Document Format (PDF) document that I had saved from the web, I opened a Bugzilla case thinking that Beagle was not indexing PDFs. It turned out that Beagle was indexing the PDFs, but Beagle only indexes a file's metadata, not its entire contents. That explains why it couldn't find the file I was looking for: the search phrase I was using didn't match the file's metadata, only part of its content. So, I had the perfect test case to see whether Google Desktop could find what Beagle couldn't.

I searched with the term "Small is Beautiful", which is part of the subtitle of a document produced by Familiar Metric Management about software development productivity as it relates to team size. As you can see from the image below, this search phrase returns nothing using Beagle.


So, I did the same search with Google Desktop, and you can see the results below. Unfortunately, I couldn't find a way to capture a screen shot of the interface, without losing the results at the bottom, so I did the search from the browser interface instead.

As you can see from my cursor highlight, Google Desktop found the file I was looking for without any problem. This illustrates the major difference between Google Desktop and Beagle: Beagle gains indexing speed by indexing just a document's metadata, while Google Desktop does a full index of the content, thereby taking much longer to index files but giving much better results. I prefer the better results. There is one other difference that I would like to point out between the two.

In backing up my laptop, I noticed that the backup of my home directory was taking longer and longer, and the backup was getting very large. In looking into this, it turned out that a large percentage of my home directory was the Beagle index. That led me to look into how large the Google Desktop index was in comparison. Well, there is no comparison. The Google Desktop index is much, much smaller (see below).


In fact, it's 94% smaller than Beagle's! This is a huge difference, and certainly pays off in disk usage.

In conclusion, I really liked Beagle, but Google Desktop offers better search results, with considerably less disk usage for the index. At this point, I'm ready to turn off Beagle (maybe even uninstall it), and rely on Google Desktop instead.


Thursday, March 15, 2007

Greater than 4GB files on an External USB Hard Drive

Several months ago, I purchased a Western Digital USB/Firewire external hard drive to back up my laptop's home directory. Considering that I was using it with Linux, specifically Fedora Core 6, I wasn't sure how things were going to work.

After plugging it in, and attaching it via the USB 2.0 cable, it mounted and was presented on the GNOME desktop, and I could browse the contents of the disk without issue. Trying to keep things simple, I merely used tar and created a gzipped tar of my home directory, making sure to preserve all the permissions of the files with the following command:

tar -czpf /tmp/[file name with date].tar.gz /home/[my home directory]
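Restoring is just the reverse, for anyone who hasn't done it before. This is only a sketch, using the same placeholder name as above, and it should be run as root so ownership and permissions come back correctly:

# extract the archive back to its original location, preserving permissions
tar -xzpf /tmp/[file name with date].tar.gz -C /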

The backup command works quite well, but it presented me with my very first issue. My home directory is quite large, and the very first tar file I created was larger than 4 GB, so I couldn't write it to the external drive. It couldn't be written for the simple reason that the drive was using the FAT file system, which doesn't support file sizes larger than its 32-bit maximum of 4 GB.

So, I looked through my home directory, found some obvious culprits for my size problem, and deleted those files, because I no longer needed them. Mostly they were old ISO images that I had burned to CD long ago and didn't need anymore. Okay, problem solved, right?

Well, not quite. This worked for several months, but I was still dangerously close to the 4 GB limit. Eventually I spilled over the limit, and really couldn't delete files to get back under it.

With this in mind, I decided to see if I could change the file system to one that supported files larger than 4 GB. Considering that I am only using this drive with Linux, cross-platform compatibility was not an issue for me, so the obvious choice was the ext3 file system from Linux. This would give me the large file support I needed, and it would also be more reliable: ext3 is more robust than FAT and supports journaling, so there is significantly less risk of losing data.

During my investigation of making this change, I found nothing but individuals having problems trying to do it. Many had even rendered their drives unusable. Considering this, I took a step back and wondered whether I should try it, or see if I could think of another resolution.

I really couldn't think of a better way to deal with this problem, and I wanted to keep things simple, so I went ahead and made the file system change. Here is the procedure I used (a rough command-line equivalent is sketched after the list).

  • First, I copied all the backups of my home directory that were currently on the drive to /tmp on my laptop.
  • Second, I fired up GParted, considering that it is a graphical partitioning tool that also formats partitions. This proved to be an excellent choice, because it helped me avoid one pitfall.
    • Considering that the drive was plugged into the USB port, and mounted under /media/My Book, GParted would not let me format the drive until I unmounted it.
    • I used GParted to unmount the drive, and then I selected from the menu "Format to->ext3".
    • I watched as it automatically changed the partition type to the correct one, and then formatted the partition with the ext3 file system.
    • It completed with no issues, but here is where one of the problems reared its ugly head.
      • After formatting, the drive would no longer auto mount, and show itself on the desktop. I could manually mount it with the mount command, and it was working. I even wrote some files to it just to make sure everything was fine, and it was.
      • The guys on the Fedora Core mailing list were most helpful with this problem.
      • As it turns out, I needed to label the new file system with the e2label utility, which I did with the following command:
        • e2label /dev/sdc1 "My Book"
  • Finally, I moved the backups I put in /tmp back to the drive with the new file system.
After these simple steps, I had a newly formatted external USB hard drive that I could write files larger than 4 GB to without issues. It auto mounts just the way it did when it had a FAT file system, and I now have some very large backups on it, without having had to change my very simple backup procedure.
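For anyone who prefers the command line to GParted, the same reformat can be approximated with the standard tools. This is just a sketch; /dev/sdc1 matches my setup but will differ on other systems, and it erases everything on the partition (note that GParted also changed the partition type ID for me, which mkfs alone does not do):

# unmount the automounted FAT partition first
umount "/media/My Book"
# create an ext3 file system on the partition (this destroys the existing data)
mkfs.ext3 /dev/sdc1
# label the file system so it auto mounts under /media/My Book again
e2label /dev/sdc1 "My Book"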

Tuesday, March 13, 2007

Glimmer of Hope for Desktop Linux?

In the last week or so, I have read three different articles that talked about different government agencies that are banning Microsoft's Vista operating system, along with other Microsoft products in some cases. The National Institute of Standards and Technology (NIST) is the latest, and this follows the US Department of Transportation (DOT) and the Federal Aviation Administration (FAA).

So, what makes this a glimmer of hope for desktop Linux? Well, in at least one of those cases, the FAA is seriously looking at a combination of Linux desktops and Google's new enterprise applications as a replacement for Windows and Microsoft Office! When you combine this type of interest with other government initiatives to adopt open standard file formats, you can see a glimmer of hope that the Microsoft lock is being broken by some large government agencies.

You could say, so what! It's only some public sector organizations! What makes this a glimmer of hope, in my mind, is the carry-over effect it could have on the private sector.

If enough government agencies start adopting open technologies like Linux and ODF, then the private sector companies that have to do business with them will have to adopt technologies that interoperate. This in turn loosens the grip that Microsoft has on a larger portion of the market.

I sincerely hope that these government organizations aren't just bluffing to get concessions out of Microsoft. With large-scale adoption of open technologies, such as Linux and ODF, we will all be better off. True competition in the market for desktop operating systems and applications could become a reality someday.

Wednesday, February 14, 2007

Is OOXML Open as Microsoft Claims?

Microsoft recently posted an "open letter" complaining that IBM is not in favor of open standards and that they are all hypocrites. It is noted that IBM was the only member to vote no in the ECMA process for the standardization of OOXML. I find this to be disingenuous, to say the least.

Microsoft claims that OOXML is open because of its acceptance as an ECMA standard. In my opinion, that hardly makes it open. The rules by which ECMA standards are created are very loose indeed, and I don't blame IBM one bit for voting against it. I just can't believe that everyone else involved didn't vote no too!

File formats have become an interesting topic of conversation, ever since ODF (Open Document Format) came on the scene. Before ODF became an OASIS and ISO standard there were no open standards for office document formats. With Microsoft controlling the majority of the market for office productivity applications, their proprietary file format has been lock-in heaven for them, and lock-in hell for their customers.

ODF threatens to break that lock-in and free customers to choose alternatives, without the problems associated with proprietary file formats (lost formatting, inability to edit with a different application, etc.). So, Microsoft had to act to protect its franchise, because they are simply afraid to, or maybe can't, compete on the quality of their implementation of office productivity software. Of course, an open format would also commoditize the market and drive down prices. With Office accounting for almost half of Microsoft's profits, that's a hard pill to swallow.

With that as the backdrop, is OOXML truly open?

The short answer is an emphatic NO!

The reason for this is simple. The specification clearly references proprietary Microsoft Office technology that cannot be implemented by anyone other than Microsoft. A truly open standard needs to be implementable by anyone who desires to do so, and that is simply not the case with OOXML.

Without the ability for competing products to implement the file format, Microsoft can claim to have an open standard file format and keep the lock-in it has enjoyed for years. As they say in the Guinness commercials, "brilliant!".

Of course, I hope the ISO will put an end to this charade, and vote this down as an ISO standard. That is the only just thing that can happen. If Microsoft gets away with this, Microsoft will have won again, and the joke is on us.

What's the old saying? Fool me once, shame on you, fool me twice, shame on me!

Well, if the ISO members are fooled into accepting OOXML as a standard, it will not only be the shame of the ISO members, but a shame on the entire world!

Open Source Whiner Babies!

Since Marc Fleury's retirement from Red Hat, there have been several articles and blogs written about Marc and JBoss. In those articles and blogs, it always seems like the folks who are critical are the guys who left JBoss in the early days to try to create a competing business they called the "Core Developers Network", or CDN.

The thing that strikes me the most about their comments is that they are childish, immature, and lean on the crutch of what "true open source" is.

What these guys are, are whiner babies, and nothing more!

They weren't getting what they thought, in their own minds, was fair as far as a stake in JBoss goes, so they split and tried to form a competitor based on the same project (whose ego was getting in the way here?).

Then, when JBoss moved to protect its business by removing their commit privileges, they cried foul.

What did they expect? Peace and love?

In reality, if they had stuck it out and continued to work, they would have been handsomely rewarded in the end. Now that JBoss has been acquired by Red Hat, and Marc, along with lots of other folks, got big paydays, they are left to cry over their spilled milk.