tag:blogger.com,1999:blog-189749492024-03-29T00:07:22.699-06:00Open Source and Enterprise ArchitectureDiscussion of open source software and enterprise architecture, including middleware, software development, and all manner of hardware.Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.comBlogger47125tag:blogger.com,1999:blog-18974949.post-74637739860255873312010-11-01T14:00:00.000-06:002010-11-01T14:00:55.705-06:00Blaming Microsoft for Your Own Shortcomings!<div style="text-align: justify;">Today, I read an article about enterprises having a difficult time migrating to Windows 7. The cause? IE6, of course!</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">In the article, someone is quoted as saying that Microsoft should do more to help its customers because the problem was caused by Microsoft!</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">I don't normally defend Microsoft, but I think every enterprise that wrote applications that could only run on IE6 deserve what they get! After all, it was never a big secret that Microsoft was trying to make sure that they locked you into Windows, by locking you into proprietary browser technology. No one should be surprised, and Microsoft is not to blame.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">When I worked in IT, during the very time when IE 5, 5.5 and 6 were the predominant IE releases, we tested all our web applications on Firefox, as well as IE, just to make sure that we weren't using proprietary IE features. </div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">As Enterprise architects, you have to think beyond what the developer cares about, and think about the entire enterprise. With that in mind, decisions about proprietary technology have to be weighed very heavily. As anyone that has read this blog knows, I'm no fan of proprietary technology, and this article is just another data point, in a long line of data points, that shows you eventually pay a price for using <i><b>ANY</b></i> proprietary technology. This example is especially egregious, just because the enterprises in question aren't even switching to a competitive operating system solution, but cannot even upgrade to the latest version from the same vendor.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">In this day and age, everyone should know better!</div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com2tag:blogger.com,1999:blog-18974949.post-80595380608833415842010-03-08T11:14:00.000-07:002010-03-08T11:14:20.532-07:00Something Everyone Interested in Open Source Should Read<div style="text-align: justify;"><span style="font-family: Arial,Helvetica,sans-serif;">I just ran across Michael Tiemann's recent blog post, on his OSI (Open Source Initiative) blog. After reading it, I really felt like everyone should read this.</span></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><span style="font-family: Arial,Helvetica,sans-serif;">So, here is the link:</span></div><div style="text-align: justify;"><span style="font-family: Arial,Helvetica,sans-serif;"> </span></div><div style="text-align: justify;"><span style="font-family: Arial,Helvetica,sans-serif;"><a href="http://www.opensource.org/node/511">The OSI Categorically Rejects IIPA's special pleadings against Open Source</a> </span></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">We all need to be on guard against organizations like the IIPA, and what they attempt to do to disrupt open source software's adoption.</div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com1tag:blogger.com,1999:blog-18974949.post-28996465629362988842010-01-28T07:59:00.000-07:002010-01-28T07:59:30.753-07:00What Would Life Be Like Without Windows?<div style="text-align: justify;">Today I read a blog titled the same as what I titled my blog post, from PC World, and giving full credit, were credit is due, by Randall C. Kennedy of InfoWorld. Now, I don't know Randall, but I have to say, it was the most lopsided view of the world I have ever read.<br />
</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Basically, it was an article claiming that the monopoly of Microsoft on the desktop is a good thing. I used to work in telecommunications, and I used to hear the same argument about AT&T.<br />
</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The fact of the matter is, monopolies are never good, but competition is very good for everyone. It brings prices down, improves quality, and expands the market.<br />
</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Besides, I know what life is like without Windows. I haven't run any version of Windows since Windows 3.1! I can tell you from my experience, that it wasn't always peaches and creme, but today I couldn't be happier.<br />
</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">I'm currently running Fedora 12 on the laptop that I'm am using to write this blog post. I have all the software I need for everything I do, and it works great. I can interoperate with anyone out there, even people using Microsoft products. Besides that, I have the most stable environment I can imagine.<br />
</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">I don't spend countless hours fixing my system, but spend countless hours getting work done. No viruses, crashes, hangs, or interoperability issues here. I even have Mac's in my home, and my Linux system and the Mac's interoperate just fine. We share drives across the network, e-mail and IM (including video) between the systems, share documents, you name it, and it all works.<br />
</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Randall's vision of a future without Windows is simply not based on reality.<br />
</div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com5tag:blogger.com,1999:blog-18974949.post-61446871831128088642009-11-19T13:11:00.000-07:002009-11-19T13:11:00.754-07:00Fedora 12 Rocks!<div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;">I just upgraded to Fedora 12, and I have found the release to be very exciting, at least for me. You see, I have an HP laptop, and it was a great deal at Best Buy, but it had some hardware that I knew would probably have problems.<br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;"><br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;">It has two pieces of hardware that have had interesting results under Linux. The first is an ATI (now AMD) HD 3200 graphics card. While the stock ati open source driver worked with it when I first bought it (right around the release of Fedora 10), it had no hardware acceleration of any kind, not even 2D. So it worked and was functional, but I really couldn't do much with it. On the laptop that I replaced, I had an Nvidia card, and really was used to Compiz Fusion, and the way that it enabled me to work. So, I had to settle for what worked. Over time I tried to use the proprietary driver, but it never worked reliably, and soon it stopped working on Fedora at all.<br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;"><br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;">With that, I had to wait and see what would happen in the open source drivers for this card. Well, with Fedora 12, if you install the MESA DRI experimental drivers package, you can now get accelerated 3D, and so far it has worked flawlessly. It's not giving the frame rates I would expect, but its light years ahead of where it was with software rendering. I now have Compiz Fusion installed and configured the way I like it, and even though this package is experimental, so far I have had no reliability problems at all!<br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;"><br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;">Kudo's to AMD for releasing the specifications, and Kudo's to the team of developers working on the ati driver and adding this support. It's simply awesome!<br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;"><br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;">The second piece of hardware that I have had trouble with is the Intel HDA sound card. While playback always worked, the microphone that is built in to the laptop worked sporadically. I would get a kernel or driver update and it would start working. I would get another kernel or driver update and it would be broken again. Weird things like the volume getting adjusted over 100% on Flash video playback would happen, which was very annoying. Well, with Fedora 12, I can successfully use Skype to make audio/video calls, and the built-in microphone actually works. I can also record video with audio in Cheese and that works too! This has been a real pain, because there are times that I really need to do video conference calls and I just couldn't do it. Also, the weird adjustment of the volume level in Flash is gone too, which removes a real annoyance from the equation.<br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;"><br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;">This release finally lets me use all my hardware in this laptop, and so far has been completely stable. I couldn't be more happy. There are lots of other new features that I would like to explore as well, but I haven't had the time as of yet.<br />
</div><div style="font-family: Arial,Helvetica,sans-serif; text-align: justify;"><br />
</div><div style="text-align: justify;"><span style="font-family: Arial,Helvetica,sans-serif;">If you haven't checked out Fedora 12, do it, its been great for me, and the much improved hardware support is pretty awesome!</span><br />
</div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com2tag:blogger.com,1999:blog-18974949.post-23178329604737068532009-01-08T16:43:00.002-07:002009-01-08T16:51:21.935-07:00Time for an Open Source Strategy<div style="text-align: justify;"><span style="font-family: arial;">In looking at the state of things right now, economically speaking, has there ever been a time better suited for adoption of open source?<br /><br />I don't think so. Given today's economic situation, closed source license and maintenance fees can choke off the air supply of any business. I know from personal experience, having had to cut budgets many times over the years when I was in IT, that maintenance fees on closed source software always add up to a significant amount of money in the enterprise. If you find yourself in that situation, and you have been on the sidelines where open source adoption is concerned, it's time to get off the sidelines and into the game!<br /><br />If you want specific advice on adopting open source technologies, please don't hesitate to post here.<br /><br />Good luck to everyone in these very tough economic times.<br /></span></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-51321819248556187212008-10-01T11:57:00.004-06:002008-10-01T13:25:23.353-06:00Are you Stupid if you use "Cloud" based Applications or Services?<div style="text-align: justify;"><span style="font-family: arial;">There has been a lot of recent commentary around Richard Stallman's recent comments about cloud computing, as well as Larry Ellison's comments at Oracle OpenWorld, also ridiculing "cloud computing". With those comments, I started to think about it at a little deeper level than before, and figured it was a good topic to cover for enterprise architecture.<br /><br />As the buzz around software-as-a-service (SaaS), cloud computing, hosted applications, platform-as-a-service (PaaS), call it what you want, has grown, it's become clear that enterprises need to understand these offerings, and determine whether they are right for them. With that, and the fact that some contend this is stupid, let's examine whether it really is or not. Regardless of how you might personally feel about Richard Stallman or Larry Ellison, there is some truth in what they both say.<br /><br />As anyone who has ever read my blog, or known me personally, should know, I am a big proponent of openness: openness in the sense of open standards and open source. If we look at cloud computing through the lens of openness, I can see cases where it can be stupid to depend on it, and cases where it can be very smart indeed. Let's start by looking at the so-called "stupid" cases.<br /><br />In general, Richard Stallman talked about cloud computing being a trap. In a sense, he is correct. Consider the case where you are using a hosted application in the "cloud", and your data is held in a proprietary format, with very high barriers to getting your data out. This is just like buying into the proprietary ERP vendor solutions that have proliferated in IT shops around the globe. Even when you have those in-house, they have your data in a proprietary format, and they make it as difficult as possible to get it out. This makes the barrier to exit very high, which leads to the trap that you can't switch to another vendor's solution without unbearable conversion costs! 
So, the trap isn't really the fact that it's a cloud based solution, but that they have your data in a proprietary format, and the switching costs, once they have your data, are too high for most companies to absorb. By definition, this flies in the face of openness and not being locked into any one vendor. This is something a lot of IT shops work hard to avoid, but fall right into with both in-house and cloud based software. So, what about cloud based platforms?<br /><br />In the case of cloud based platforms there is a trap also. The trap is that you use a proprietary platform, with APIs and features only available from the cloud provider. This is another area that enterprises should avoid. Instead of trapping you with proprietary data formats, they trap you with proprietary application programming interfaces and techniques, rendering your application non-portable in every sense. You can't lift your application up and drop it into another cloud from another vendor, and you can't bring it in-house, without re-writing it! Ouch!!! For many years, I battled against using proprietary APIs in in-house developed applications, only to be told that we would never switch from x to y! Of course, in all those cases, just the opposite ended up happening. In many cases, changing platforms saved the company millions upon millions of dollars. In fact, this strategy saved my last employer over 26 million dollars in the nine years I was employed there (and this figure has continued to grow over time). Don't fall into the trap with proprietary development APIs and features. You will regret it in the long-term. So, that covers cloud based applications and cloud based application development platforms; what else is there?<br /><br />Well, there is one more category of cloud based computing: cloud based infrastructure, where you are provided with a virtualized hardware environment (servers, network and storage), and you can choose to put your choice of operating system, middle-ware and databases in place. This infrastructure can be used both for primary hosting of whatever you want to put on it (whether in-house developed or not), and for dynamic expansion of infrastructure to handle peak loads, now being called "cloud bursting". So, what is this category of cloud computing - smart or stupid?<br /><br />If the infrastructure allows you to choose the operating system, middle-ware and databases, and you can successfully run your application outside of the cloud, then I would say that this is smart indeed. You have all the control you need to keep your application portable, without the infrastructure investment and on-going management costs. Not to mention the ability to dynamically grow the environment.<br /><br /></span><span style="font-family: arial;">In summary, look for cloud based software solutions that are based on open standards (open source as well), with open formats for storing your data, and the ability to easily extract your data through an open interface (preferably with the ability to do high-volume bulk transfer). If you are looking at platforms for development, only accept those platforms that don't depend on proprietary APIs, and keep a running copy on an internal environment somewhere (it doesn't have to be large and expensive), to verify that you can run the same application deployed outside of the vendor's cloud. If you are just looking for infrastructure, stay with vendors that allow you to choose the operating system, middle-ware and databases. 
That will keep what you do there portable, whether that's a primary environment or you are using it to do "cloud bursting".<br /><br />Like most things in life, you can do stupid things and smart things with technology; just try to understand any hidden traps there might be, and keep your solutions open!<br /></span><br /></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-49914313098259484232008-09-11T17:34:00.004-06:002008-09-11T17:55:02.081-06:00Sprint EVDO Card and Linux<div style="text-align: justify;">Recently I upgraded to a new phone, and while I was at it I bought one of those mobile broadband cards for my laptop. I had seen enough blogs and e-mail posts on various mailing lists to know that it would probably work. My carrier is Sprint, so I naturally received an EVDO card from them.<br /><br />Upon my purchase, I noticed from Sprint's own instructions that you couldn't activate the card on Linux. That could only be done from a Mac or Windows based PC. Well, I happen to have Macs in my house along with my Linux systems, so I went to a Mac, and activated the card. It was very simple, as it automatically ran a Mac version of some software built into the card, and it activated, and connected without issue. So, then I wanted to get it configured to work on my Linux laptop. That's where the fun began.<br /><br />I'm running Fedora 9, but it's also a 64-bit AMD system, and I am running the x86_64 version of Fedora on it. When I plugged in the card, I could see that it was recognized by the OS, and the USB product id and vendor id would display. So, that was promising. I wanted to do this the easiest way possible, so I thought I would just go into the Edit Connections... menu item in the Network Manager applet, and configure from there. So, I did that, and I selected the broadband tab, and clicked on Add. Well, nothing happened. Nothing at all. After a while of digging around, I decided to run the same connection application from the command-line. Sometimes applications will spew out errors to standard error that never are displayed in the GUI, so I thought that I might learn something.<br /><br />So, I ran nm-connection-editor from the command-line, instead of from the Network Manager applet, and what do you know, an error message spews out when I try to add a broadband connection. Here it is:<br /><br /><pre class="bz_comment_text" id="comment_text_0">** (nm-connection-editor:4208): WARNING **: create_new_connection_for_type:<br />unhandled connection type 'NMSettingCdma'<br /><br />** (nm-connection-editor:4208): WARNING **: Can't add new connection of type<br />'cdma'</pre>After some searching, I found out that the x86_64 version of Network Manager actually didn't have the broadband code implemented in the version that was in Fedora 9. So, I opened a bugzilla on it, and I ended up getting a response saying it was fixed in a version that was in Fedora 9 updates testing.<br /><br />So, after getting the appropriate link to where these packages were, I decided to install them and try it out. Well, I must say I couldn't be happier. It worked flawlessly: it detected the correct type of card and filled in all the information for it, and I didn't change anything other than the default connection name that the wizard had filled in. At first, I thought no one had ever documented adding a mobile broadband card through the Network Manager interface, so I would do that. 
After thinking about it though, it was so easy that there really isn't any value in documenting it.<br /><br />This is truly the way things should be. Plug in your device, right-click and select Edit Connections..., then select the Mobile Broadband tab, click Add, and click OK! It couldn't get much easier than that! Whoops, I guess I just documented it!<br /><br />So, if you haven't used the Network Manager for doing this, it's certainly a lot easier than setting up PPP scripts that I have seen documented by many people. Give it a try, you'll like it.<br /></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-28539285268567422252008-08-29T13:34:00.002-06:002008-08-29T13:38:11.229-06:00Can Software Patents Get Any Worse?Today, I read an article that states Microsoft has been granted a patent for "Page Up" and "Page Down"! I laughed when I saw the title, and I couldn't believe it. Here is a link to the article:<br /><br /><h1><a href="http://news.zdnet.com/2424-9595_22-218626.html">Microsoft patents 'Page Up' and 'Page Down' </a></h1>It has become amazing what you can get a software patent for. I really don't even know what to say, it's so shocking!<br /><br />Simply amazing!Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-60709170140158207832008-02-19T09:23:00.005-07:002010-06-08T17:55:09.757-06:00JVM Performance Tuning<div style="text-align: justify;"><span style="font-family: arial;">Last week was JBoss World, and it was exciting to be a part of it. I gave a presentation on performance tuning our Enterprise Application Platform or EAP, and it was packed. In fact, people were sitting on the floor in pretty much all available space. What struck me about the presentation, and much of the discussion I had with individuals afterwards, is that JVM tuning is a big topic. So, I thought I would share some of what I learned over the past couple of months as I was preparing for my presentation.<br />
<br />
In preparing for my presentation, I wrote an EJB 3 application, wrote a load test for it, and applied optimizations to various configuration parameters within the EAP, the JVM and the operating system. In particular, one JVM and OS setting really made a huge difference in throughput, and it's something that I wanted to share here.<br />
<br />
When using a 64-bit OS, in my case Fedora 8 and RHEL 5.1, I wanted to investigate the usage of large page memory support, or HugeTLB as it's referred to within the Linux kernel. What I found was very scarce documentation around using this, and the documentation that did exist was too incomplete to actually make it work. What I also found is that it makes a huge difference in the overall throughput and response times of an application when using heap sizes above 2GB.<br />
<br />
So, without further ado, let's dive into how to set this up. These instructions are for Linux, specifically for Fedora 8 and RHEL 5.1, but the results should be generally applicable to any 64-bit OS and 64-bit JVM that supports large page memory (which all the proprietary UNIXes do, and I found an MSDN article describing how to use this on 64-bit Windows).<br />
<br />
You must have root access for these settings. First, you need to set the kernel parameter for shared memory to be at least as big as you need for the amount of memory you want to set aside for the JVM to use as large page memory. Personally, I like to just set it to the maximum amount of memory in the server, so I can play with different heap sizes without having to adjust this every time. You set this by putting the following entry into /etc/sysctl.conf:<br />
<br />
<span style="font-family: courier new;">kernel.shmmax = </span><span style="font-style: italic; font-weight: bold;"><span style="font-family: courier new;">n</span><br />
<span style="font-style: italic;"><span style="font-weight: bold;"><br />
</span></span></span>where <span style="font-style: italic; font-weight: bold;">n</span> is the number of bytes. So, if you have a server with 8GB of RAM, then you would set it to 8589934592, or 1024*1024*1024*8, which is 8GB.<br />
<br />
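For example, on that 8GB server the /etc/sysctl.conf entry would look like the following (just a sketch; size the value to the RAM in your own machine):<br />
<pre>
# /etc/sysctl.conf -- allow shared memory segments up to 8GB (1024*1024*1024*8)
kernel.shmmax = 8589934592
</pre>
<br />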
Second, you need to set a virtual memory kernel parameter to tell the OS how many large memory pages you want to set aside. You set this by putting the following entry into /etc/sysctl.conf:<br />
<br />
<span style="font-family: courier new;">vm.nr_hugepages = <span style="font-style: italic; font-weight: bold;">n</span><span style="font-family: arial;"><br />
<br />
where <span style="font-style: italic; font-weight: bold;">n</span> is the number of pages, based on the page size listed in /proc/meminfo. If you cat /proc/meminfo you will see the large page size of your particular system. This varies depending on the architecture of the system. Mine is an old Opteron system, and it has a page size of 2048 KB, as shown by the following line in /proc/meminfo:<br />
<br />
<span style="font-family: courier new;">Hugepagesize: 2048 kB</span><br />
<br />
So, I wanted to set this to 6GB, and I set the parameter to 3072, which is (1024*1024*1024*6)/(1024*1024*2), or 6GB divided by 2MB (2048 KB is 2MB).<br />
<br />
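If you would rather not do the arithmetic by hand, a small shell calculation (a sketch; it assumes the Hugepagesize line shown above and a 6GB target) produces the same answer:<br />
<pre>
# huge pages needed for 6GB, using the Hugepagesize (in kB) from /proc/meminfo
HPSIZE=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
echo $(( (6 * 1024 * 1024) / HPSIZE ))   # prints 3072 for 2048 kB pages
</pre>
<br />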
After this, you need to set another virtual memory parameter to give your process permission to access the shared memory segment. In /etc/group, I created a new group called hugetlb (you can call it whatever you like, as long as it doesn't collide with any other group name), and it got a value of 501 on my system (the gid will vary depending on what groups you already have defined, and on whether you use the GUI tool, like I did, or do it at the command line). You put that group id in /etc/sysctl.conf as follows:<br />
<br />
<span style="font-family: courier new;">vm.hugetlb_shm_group = <span style="font-style: italic; font-weight: bold;">gid</span></span><br />
<br />
where <span style="font-style: italic; font-weight: bold;">gid</span> in my case was 501. You also add that group to whatever user id the JVM will be running as; in my case this was a user called jboss.<br />
<br />
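If you prefer the command line to the GUI tool, the equivalent steps might look like this (a sketch; the hugetlb group name and the jboss user are just my example values, and your gid will almost certainly differ):<br />
<pre>
groupadd hugetlb             # create the group; the system assigns the gid
grep hugetlb /etc/group      # note the assigned gid, e.g. hugetlb:x:501:
usermod -a -G hugetlb jboss  # add the group to the user the JVM runs as
</pre>
<br />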
Now, that concludes the kernel parameter setup, but there is still one more OS setting, which changes the user's security limits to allow the user to use the memlock system call to access the shared memory. Large page shared memory is locked into memory and cannot be swapped to disk, which is another major advantage of using large page memory: having your heap space swapped to disk can be catastrophic for an application. So, you set this parameter in /etc/security/limits.conf as follows:<br />
<br />
<span style="font-family: courier new;">jboss soft memlock <span style="font-style: italic; font-weight: bold;">n</span></span><br />
<span style="font-family: courier new;">jboss hard memlock <span style="font-style: italic; font-weight: bold;">n</span></span><br />
<br />
where <span style="font-style: italic; font-weight: bold;">n</span> is equal to the number of huge pages, set in vm.nr_hugepages, times the page size from /proc/meminfo, which in my example would be 3072*2048 = 6291456. This concludes the OS setup, and now we can actually configure the JVM.<br />
<br />
The JVM parameter for the Sun JVM is -XX:+UseLargePages (for BEA JRockit it's -XXlargePages, and for IBM's JVM it's -lp). If you have everything set up correctly, then you should be able to look at /proc/meminfo and see that the large pages are being used after starting up the JVM.<br />
<br />
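Putting the JVM side together, a JBoss start might use something like the following (a sketch; the 3.5GB heap is just the size from my tests, and setting -Xms equal to -Xmx makes the whole heap come out of the huge page pool up front):<br />
<pre>
# e.g. in JBoss's run.conf, for the Sun JVM
JAVA_OPTS="-Xms3584m -Xmx3584m -XX:+UseLargePages"
</pre>
<br />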
A couple of additional caveats and warnings. First, you can dynamically have the kernel settings take effect by using sysctl -p. In most cases, if the server has been running for almost any length of time, you may not get all the pages you requested, because large pages require contiguous memory. You may have to reboot to have the settings take effect. Second, when you allocate this memory, it is removed from the general memory pool and is not accessible to applications that don't have explicit support for large page memory and aren't configured to access it.<br />
<br />
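To illustrate that first caveat, you can try to apply the settings live and then check how many pages the kernel actually managed to reserve; after the JVM starts, you can also watch the free count drop as the heap takes its pages:<br />
<pre>
sysctl -p                      # load the new /etc/sysctl.conf values
grep HugePages /proc/meminfo   # HugePages_Total may be less than requested;
                               # HugePages_Free drops once the JVM is running
</pre>
<br />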
So, what kind of results can you expect? Well, in my case, I was able to achieve an over 3x improvement in my EJB 3 application, of which fully 60 to 70% was due to using large page memory with a 3.5GB heap. Now, a 3.5GB heap without the large memory pages didn't provide any benefit over smaller heaps without large pages. Besides the throughput improvements, I also noticed that GC frequency was cut down by two-thirds, and GC time was also cut down by a similar percentage (each individual GC event was much shorter in duration). Of course, your mileage will vary, but this one optimization is worth looking at for any high throughput application.<br />
<br />
Good luck!<br />
</span></span></span></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com63tag:blogger.com,1999:blog-18974949.post-22443741626102905092008-02-04T11:20:00.000-07:002008-02-04T17:33:43.728-07:00Yahoo and Microsoft; Mixing Oil and Water<div style="text-align: justify;"><span style="font-family:arial;">Ever since Microsoft announced its $44.6 billion offer for Yahoo, there have been many articles flying around about the potential merger. What I find most interesting is the lack of coverage of the technology issues around such an integration.<br /><br />I have seen only two articles that have mentioned technology differences between the two companies as an integration challenge. I think this is a huge oversight in the coverage of the acquisition.<br /><br />From what I know of Microsoft, and what I have heard of Yahoo's technology, you simply cannot downplay the challenge of putting these two companies together. They are polar opposites where engineering is concerned, and Microsoft is living in a dream world if they think they are going to get any synergy from combining the two engineering teams.<br /><br />Good software developers tend to be pretty picky about the technologies they work with, and are probably with the company they are with, in large part, because of the technologies employed.<br /><br />In the case of Microsoft, there is no speculation about what technologies will be employed. They will be Microsoft technologies, period. This is illustrated by Microsoft's acquisition of HotMail. HotMail was deployed on an open source infrastructure, and I believe they were using BSD as the operating system. When Microsoft acquired them, the first thing Microsoft wanted to do was move HotMail to a Microsoft platform. Of course, this failed at first, but I believe they eventually did succeed in getting HotMail moved to a Windows platform. Given the difficulties of moving just this one application, you have to consider moving the entire Yahoo portfolio over to a new platform to be an insurmountable task.<br /><br />From everything I've heard about Yahoo's technology platform, it is largely based on open source, just like HotMail was. If I were a Yahoo software developer, and I was asked to move my work to a Microsoft platform, I would simply quit. Now, they will have a Microsoft retention package that will attempt to keep them at the company, but I really don't see this as something that will keep the most talented folks around. Now, Microsoft could decide to allow the Yahoo platform to be the platform that stays, but this is so totally against the Microsoft culture that I don't see it happening. This also poses a lot of problems for all their existing technology and their other acquisitions. Would they truly be willing to throw all the other technology away, or have those engineers move their technology to open source, and into the Yahoo infrastructure? 
Again, I don't see that happening, and they would also risk losing those existing engineers for the same reason that Yahoo engineers would leave.<br /><br />If this merger isn't akin to mixing oil and water, I don't know what is!<br /></span></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-54623132460476790942008-02-01T13:16:00.000-07:002008-02-01T13:50:53.610-07:00The State of JBoss and the Upcoming JBoss World<div style="text-align: justify;"><span style="font-family: arial;">Lately, it seems like the world views JBoss as a failed acquisition by Red Hat. Failed by a couple of measures, in fact: the first being sales of our subscriptions, and the second being that we are no longer innovating. I wanted to take the time to address both of those items.<br /><br />Where sales is concerned, I believe this was the biggest problem with the integration of JBoss into Red Hat. For a time, sales really did lag, and things weren't looking very good. We had lots of experienced middle-ware sales people leave as a result of the integration into the larger Red Hat sales organization. I also believe that Red Hat didn't truly understand that the sales process and cycle were different for middle-ware than they were for RHEL. Having said that, I now see that Red Hat truly does understand the differences (some people just have to learn the hard way), and while the losses of good, experienced middle-ware sales folks probably still hurt some, we are seeing a major turnaround in this area. While I cannot publicly talk about the actual sales figures (we are a public company after all), I can say that sales of JBoss subscriptions are growing, and have been growing for quite a while now. Demand for training and consulting is also strong. So, the picture is looking bright, and brighter all the time. I believe the company has learned a hard lesson, and the resultant actions from that lesson are now paying off, and will continue to pay off in the future.<br /><br />Now, let's turn our focus to innovation. Has innovation really stopped at JBoss, or at least come to a very slow crawl? I really take issue with that. Since the acquisition, we have changed our product model somewhat, which certainly slowed some other things down, but when you look at what we have accomplished, it's actually quite amazing.<br /><br />First, we released our first two products under our new product model, with our Enterprise Application Platform 4.2 (in July of last year), and the very recent Enterprise Application Platform 4.3 (EAP for short). Our EAP 4.3 release contains the very latest in Java Web Services, with fully supported JAX-WS and JSR-181 annotation support, as well as an entirely new JMS implementation based on our JBoss Messaging technology.<br /><br />With JBoss Messaging, you now have a JMS provider that can be horizontally scaled in a cluster, with fully transparent load balancing and fail-over. Also, performance is substantially enhanced over the old (and now retired) JBossMQ. For those of you that have experience with JBossMQ in a cluster, with its band-aid approach to clustering and fail-over, you will definitely appreciate JBoss Messaging. 
This is a world-class messaging system, and certainly a shining example of innovation from JBoss.<br /><br />Second, we have also released a new version of our Portal platform, JBoss Portal 2.6, and it includes much better usability, manageability, and capabilities to support newer technologies, like Google Gadgets.<br /><br />Third, we have released our JBoss Communications Platform, based on our Mobicents open source project, providing the only enterprise class JSLEE implementation in the world. With ongoing enhancements, the next platform release will also support the very latest SIP servlets specification, so you can start out with SIP servlets, and move up to full JSLEE as you need. This is a revolutionary platform for telecommunications, and another shining example of the innovation coming from JBoss.<br /><br />Fourth, we have released our JBoss ESB into the community, and as we speak we are working hard on delivering our SOA Platform product based on JBoss ESB. This product will have a very large impact on the ESB/SOA marketplace, as the first truly enterprise class open source product in the market (and yes, I don't count MuleSource and Service Mix, because they don't have the kind of support organization that we have). This will be a game changer!<br /><br />Last but not least, we also delivered JBoss Developer Studio. I can't say enough about this accomplishment. The developers deserve all the credit for getting this to market. It fills a huge hole in our product portfolio, and makes it that much easier for IT managers to move to JBoss technology.<br /><br />And that's not all. There are many other things in the works at the JBoss division of Red Hat, and for that, you should consider coming to <a href="http://www.jbossworld.com/">JBoss World</a>. We are having a JBoss World in Orlando, starting on February 13th, and I believe there is one more week to register. I would encourage everyone that can to come and check out all the exciting things happening with JBoss, and I think you will be convinced that we are still innovating, and that there are lots of reasons to consider JBoss technology for your projects.<br /></span></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-39579089297636100972008-01-21T11:55:00.000-07:002008-01-21T12:23:15.591-07:00The State of ATI Graphics for Linux<div style="text-align: justify;"><span style="font-family: arial;">Over a year ago, my laptop that I use for work had a meltdown, and would no longer work. I had to get back to work as quickly as possible, so I went to Best Buy to purchase a new one. In that situation, I made a decision to purchase a laptop with the ATI XPress 200M PCI-E integrated graphics chip. Now, at the time I was very hesitant, because I had read many reports of problems with all ATI discrete and integrated graphics under Linux. Well, it turns out that the reports were well founded, and I had lots of problems trying to use the proprietary graphics drivers from ATI. They were buggy, slow, and my laptop's suspend and hibernate functions simply didn't work at all. I had none of these problems with my old laptop, which used an Nvidia chip set. Needless to say, this was frustrating. I would have preferred the open source driver, but ATI doesn't release their specifications, so with it I couldn't actually use the majority of the features of the graphics chip. 
For example, there was no 3D support, no support for proper widescreen resolutions, and generally poor performance. Again, these issues with the open source drivers aren't the fault of the developers, but still fall squarely on ATI as well. So, what to do?<br /><br />At this point, I used whatever seemed to work best at the time, and many times found myself switching between the proprietary driver and the open source driver, depending on which one at the time seemed to work the best. Needless to say, this was a real pain to deal with, but I really didn't have much of a choice in the matter. I still needed to make a living, and didn't want to take the laptop back and try another one, just to find other issues. So, I stayed patient, and kept testing each successive release of the available drivers. Then a breakthrough occurred.<br /><br />First, a couple of months ago, ATI released a new proprietary driver that was based on a new code base, and I have to say, the performance is impressive. While it didn't address all of my issues, at least I had a stable driver that actually had good graphics performance. Many applications that were really frustrating to use suddenly became responsive and a joy to use. Whew! Now, this wasn't the end of the problems, but it certainly was a new beginning that had a lot of promise.<br /><br />Now, the latest release of the driver finally has my laptop usable in all situations. After reading the release notes for the latest release, I noticed that suspend and hibernate fixes were included, which had me intrigued. Maybe I would finally be able to use my laptop without power for longer periods of time? Well, I installed the new release, and after seeing that everything was still stable for all my daily activities, I decided to test the suspend/resume and hibernate functions. Well, I have very good news to report. Both functions work as expected! I couldn't be happier at this point. I have a fully functional laptop where all the graphics features work, and I can use it with or without power in confidence. That's not the entire story either.<br /><br />It turns out that, along with this transition to the new code base, ATI has also started to release the specifications for their newer graphics architecture. While that will not impact me, it certainly is a great step in the right direction. I actually wish that they would simply stop the proprietary driver, and just work in conjunction with the community to produce and support great open source drivers, but at least it's a step in the right direction.<br /><br />While I still have some reservations about using ATI products under Linux, the progress lately has me thinking that ATI products should be something I evaluate when making my next purchase.<br /></span></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-20476000337006220942007-10-03T11:25:00.000-06:002007-10-03T11:34:18.292-06:00Open Source: The .NET Framework? Huh?<div style="text-align: justify;"><span style="font-family: arial;"></span>It is being reported today that Microsoft is open sourcing their .Net framework class libraries (see this link on InfoQ - <a href="http://www.infoq.com/news/2007/10/Dotnet-Open-Source">Open Source: The .Net Framework)</a>. Wow, could this be the start of a sea change at Microsoft?<br /><br />Well, it turns out that this is really no change at all. 
When you look closely, they are releasing the source code under the <a href="http://www.microsoft.com/resources/sharedsource/licensingbasics/referencelicense.mspx">Microsoft Reference License.</a><br /><br />This license only gives you permission to use the code in a read-only form! This hardly fits the definition of open source, and the term open source should never be used where this license is concerned.<br /><br />While this is a nice move by Microsoft for .Net developers, because they will be able to step through the actual .Net library code using a debugger, it has nothing to do with open source, and I can't believe anyone could use the term open source in conjunction with this move by Microsoft.<br /><br />People need to get their facts straight before titling an article in this way. This is very poor journalism, and allows Microsoft to get some good PR that they clearly do not deserve in this case.<br /></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-54258993228482062082007-08-27T15:47:00.000-06:002007-08-27T16:17:47.436-06:00Microsoft's Behavior in the Standards Process for OOXML<div style="text-align: justify;"><span style="font-family:arial;">I have been reading for weeks now that Microsoft is manipulating the ISO standards process to get OOXML approved, and I find what I am reading to be disturbing and ethically wrong!</span> <span style="font-family:arial;">Microsoft seems to promote a culture within the company that says, "Don't explicitly break any rules, but use any leverage possible to get what we want". This reminds me of many discussions I have had over the course of my career about the legality of something versus whether what was being done was ethical. The law doesn't cover all aspects, and doesn't define ethics in and of itself. It's up to individuals and the leadership of companies to define what is and what is not ethical, and apparently Microsoft uses only the law to determine its ethical values. This is a real shame, and it is truly a sad state of affairs when a company with so much power in the software industry behaves the way they behave. So what have they been doing that is so bad?<br /><br />Well, let's enumerate everything I have seen:<br /><br /></span><ul><li>Using individual countries' standards bodies' rules to add new members that are Microsoft business partners, so they can stack the vote in their favor.</li><li>Preventing new members from joining these same countries' bodies when they know those members will not vote to approve OOXML.</li><li>Using the rules within the voting process to make it so that the comments do not get forwarded with the vote to the JTC1 committee of the ISO.</li><li>Giving misleading information about the JTC1 committee process, so that countries will not vote "No, with comments", and instead will vote "Yes, with comments".</li><ul><li>By the way, if a country votes "Yes, with comments", Microsoft is not committed to actually fix any issues raised by the comments. 
They can simply address the comment by logging that nothing will be done.</li></ul><li>Giving misleading information about the voting deadlines, thereby possibly preventing some countries' votes from being counted.</li><li>Telling certain countries that Microsoft's educational programs in their country would be adversely affected if they didn't vote yes.</li><li>Calling heads of government agencies to pressure their representatives to vote yes.</li></ul>While no one has done anything illegal, at least not that we know of yet, Microsoft is crossing an ethical line that shouldn't be crossed.<br /><br />Instead of being able to sell OOXML as a standard worth considering on its merits, they are subverting the standards process, albeit within the rules, which are very loose, because they know that the standard does not really meet the requirements to become an official ISO standard without that subversion (<a href="http://andrigoss.blogspot.com/2007/02/is-ooxml-open-as-microsoft-claims.html">see my previous post on whether OOXML is open or not</a>).<br /><br />If OOXML becomes an ISO standard, it will forever damage the standards process that we rely on to create a truly competitive landscape in the market. Microsoft may have won, but we have all lost, because we will never be able to trust any standards produced through this process again!<br /></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-49968159470508799632007-08-15T12:02:00.000-06:002007-08-15T12:47:20.728-06:00JBoss Enterprise Application Platform<div style="text-align: justify;"><span style="font-family:arial;">This past Monday, I did a keynote at the SysCon Real World Java event in New York City. In preparing for that event, I realized that since JBoss was acquired, the news about JBoss has lessened substantially, and many people are just not aware of what's new at JBoss. So, that is what I titled my presentation, and I focused on several areas. One area was the newly released (July 3, 2007) Enterprise Application Platform 4.2.<br /><br />Traditionally, the lead product of JBoss was our application server. With the release of the Enterprise Application Platform, what we have done is combine our application server with Hibernate, EJB 3, JSF, and JBoss Seam, to deliver an integrated application development platform for the enterprise. Now how is this different, and what does it mean to users of the application server?<br /><br />Well, out of the technologies I listed above, the application server only contains Hibernate by default, and the AS 4.2.x releases from jboss.org are now community releases that do not have paid support. Of course, community support, through our forums, is always available just like before.<br /><br />With the Enterprise Application Platform, you also get the most often used technologies, which used to have to be integrated separately by you, integrated by us and tested as a whole. No more building your own distribution with what you need to develop enterprise class applications. Also, the testing of the platform as a whole is new, and I will highlight the differences.<br /><br />With the old AS releases, we had a unit test suite that you could download and build, and we would run that in our continuous integration builds each day; when the feature set development and bug fixes were complete, we would concentrate on getting that test suite to 100% passing, and then release. 
The test suite was only run with the Sun JVM on a Linux platform (typically RHEL 4 based).<br /><br />With the new Enterprise Application Platform, we continued to run the unit test suite. But we ran it on the Sun JVM, the HP JVM (for HP-UX), and the BEA JRockit JVM, on RHEL 4 and 5, x86 and x86_64, Solaris 9 and 10 (Sparc), HP-UX 11i for PA-RISC and Itanium, and Windows Server 2003, x86 and x86_64. This ensures that more combinations of operating systems and JVMs work before we ship, versus having to deal with customer issues after the fact. We also have more operating systems and JVMs teed up for future updates (AIX and the IBM JVM are examples). We also ran the Hibernate test suite on five different databases. We certified on MySQL 5 (5.0.27), PostgreSQL 8.2.3, Oracle 9i (9.2.0.1), Oracle 10g R2 (10.2.0.1), and SQL Server 2005 (version 9.00.2047). Besides the unit test suites, we added significant integration testing that we had never done internally before.<br /><br />The integration tests contained tests for EJB 3, HTTP Session replication, and JBoss Seam, and in the cases of EJB 3 and JBoss Seam we had performance, scalability and clustering tests. This additional testing led to the discovery of many bugs that, in the past, we would have only discovered through customer deployment. This is exciting for us, in that we have produced a product with the most complete testing we have ever done, and as a result produced the most hardened distribution for our customers ever! This is truly the dawning of a new age at JBoss!<br /><br />Besides the new process that we take the Enterprise Application Platform through, the support arrangement has changed as well. We offer three years of support for the Enterprise Application Platform, where we fix all bugs and security errata for customers. For an additional two years we fix only critical bugs and security errata, giving a full five year support cycle to our customers! This is what our customers have been asking for, and we are delivering that to the market now, and will continue to do so with future platform offerings.<br /><br />So, if you are interested in this new offering, you will have to contact sales, as there is not a binary download of the Enterprise Application Platform available (you must be a subscriber). The other way you can get the Enterprise Application Platform is through our developer tools package called RHDS (Red Hat Developer Studio). It includes the full binary distribution of the Enterprise Application Platform, and will install that, ready to run, as part of the new development environment, which was released as a beta on Monday. Here is the link to that:<br /><a href="http://www.redhat.com/developers/rhds/index.html"><br />http://www.redhat.com/developers/rhds/index.html</a><br /><br />Download the developer tools, and play around with the technology. I'm sure you will like what you see!<br /><br />Enjoy!<br /></span></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com2tag:blogger.com,1999:blog-18974949.post-14949321759233927672007-08-07T20:18:00.001-06:002007-08-08T07:58:34.542-06:00Beagle Correction<div style="text-align: justify;"><span style="font-family: arial;">A while ago, I posted a comparison of Beagle and the new Google Desktop Beta release. I had since removed Beagle, and had been using Google Desktop exclusively. 
I received a comment from a Beagle developer, Joe Shaw, and he thought I simply had encountered a bug in Beagle, and that Beagle does indeed index all the text in documents, not just the metadata.</span><br /></div><div style="text-align: justify;"><div style="text-align: justify;"><br /><span style="font-family:arial;">Well, I have since upgraded to Fedora 7, and I became interested to see whether a newer version of Beagle indeed would find files that it couldn't find before.</span><br /><br /><span style="font-family:arial;">So today, I installed Beagle again, and I did the same search that I had done before, looking for a PDF file with the search query of "Small is Beautiful". As you can see from below, it is now able to find the file that it couldn't find before, and matches what Google Desktop does.</span><br /><br /><a style="font-family: arial;" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1eu3NyyRAR3HR6EY-WzCN4oyXYxnhM-xgDriF0IfaYKh28z-UNaov9kEmyFCKyBMqjkS8OphHSPrvZruGuWDZMM3ZPDBdzvv1X9NZWXzLukl9ImoMgc3hj3iUGHgd3O1M164A/s1600-h/beaglesearch.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1eu3NyyRAR3HR6EY-WzCN4oyXYxnhM-xgDriF0IfaYKh28z-UNaov9kEmyFCKyBMqjkS8OphHSPrvZruGuWDZMM3ZPDBdzvv1X9NZWXzLukl9ImoMgc3hj3iUGHgd3O1M164A/s320/beaglesearch.png" alt="" id="BLOGGER_PHOTO_ID_5096149842260304306" border="0" /></a></div><div style="font-family: arial; text-align: justify;">As you can also see, the indexing is not complete yet, but it's only been a few hours since I reinstalled. At this point, the large index size problem also seems to be gone. I cannot report yet what the final size will be, until it's done, but it is a little over 50% of the size of the Google Desktop index, and I don't know how much more it has to go. It would be nice if there was an index status page somewhere, like in Google Desktop, so you could see what the progress was in indexing your content.<br /><br />So, it seems that Beagle has improved with Fedora 7, which includes Beagle 0.2.16.2. I congratulate the Beagle team, and I will continue to use both on Fedora so I can see how they fare against each other as they both mature.<br /><br />In the case of RHEL (I have one RHEL 5 desktop), I can only use Google Desktop, as RHEL will not ship with Mono, which Beagle requires. It would be nice if Beagle didn't require Mono, so it could be included with RHEL, because I would love to compare it on RHEL as well.<br /><br />Thanks to Joe Shaw for commenting and looking into what was going on with my documents that Beagle couldn't find.<br /></div><div style="text-align: justify;"><br /><span style="font-family:arial; font-weight: bold;">UPDATE:</span><br /><br /><span style="font-family:arial;">Beagle finished indexing my system, and so now I can see the total index size of Beagle compared to Google Desktop. The results are considerably improved from before. As you can see from the below image, Beagle still creates a larger index, but the difference isn't as dramatic. 
Before, I saw that Google Desktop had an index that was 98% smaller, but now Google Desktop's index is only approximately 23% smaller.</span><br /><br /><a style="font-family: arial;" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2HkXRmPv3UAtOM_ubgXo4X0cNkyDfiY0MKSV4cVhdVnEAKrJzLTcwETY4-1tDZA2YD_gJz_Qddc2pVfgpPCOXEBVC1cSmROUO_JmUxnyiHafAv2YhNIlETuB1PHPkdwqcvf8u/s1600-h/beagleindex.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2HkXRmPv3UAtOM_ubgXo4X0cNkyDfiY0MKSV4cVhdVnEAKrJzLTcwETY4-1tDZA2YD_gJz_Qddc2pVfgpPCOXEBVC1cSmROUO_JmUxnyiHafAv2YhNIlETuB1PHPkdwqcvf8u/s320/beagleindex.png" alt="" id="BLOGGER_PHOTO_ID_5096328165007466946" border="0" /></a><br /></div><span style="font-family:arial;">This is very good news indeed, and great work by the Beagle developers!<br /></span></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-13074659832837461642007-07-26T15:10:00.000-06:002007-07-26T16:20:54.596-06:00My Experience with Fedora 7<div style="text-align: justify;"><span style="font-family: arial;">Like anyone that has been using Linux for a long time, I really like seeing what's new in my favorite distribution. I have been using Fedora since Fedora Core 3, and have happily upgraded to each successive version, and I finally got around to upgrading to Fedora 7 this week.</span><br /></div><div style="text-align: justify;"><br /><span style="font-family:arial;">A big part of the reason I waited to upgrade, is my laptop is using an ATI Xpress 200M video chip, that doesn't have an open source driver that can do hardware accelerated 3D. The ATI proprietary driver didn't work with Fedora 7 when it was first released, so I thought it better to wait. Two months later, I read an article saying the new driver release fixed the issues with Fedora 7, so I downloaded the new driver, installed it on Fedora Core 6 first, and everything worked, so I popped in the DVD to do the upgrade and rebooted.<br /></span><br /><span style="font-family:arial;">The laptop booted just fine off the DVD, and the upgrade process went smooth. It took a little longer than I would have liked, but I have a lot of software installed on this laptop, so there were 895 packages that had to be upgraded. This all went smoothly, and the upgrade finished, it ejected my DVD, and I rebooted. This is where the fun began.</span><br /><span style="font-family:arial;"><br />The first thing I planned on doing after the reboot, was install the ATI proprietary driver. Of course, with the initial boot after the upgrade, with the new kernel in place, the driver wasn't there, and it booted in text mode, which is what is expected. At that point, I logged in, and ran the ATI installer, and everything installed fine. I then rebooted again, not specifically because I had to, but I like to make sure the graphical boot works as it is supposed to. At this point, when the graphical boot should kick in, all I get is a black screen.</span><br /><span style="font-family:arial;"><br />So, I boot again, and this time use grub to boot into single user mode, and then I look at the Xorg.log file, and I find that the X Server is core dumping! Ouch! This release of the driver is supposed to work for Fedora 7. 
I then go online, and find out that the driver is not recommended for use with Fedora 7, as others had encountered the same problem I did. So, I find myself still stuck with a 2D-only system, because ATI has not released any fix for this problem. Hopefully, next month's release will finally fix the Fedora 7 compatibility issues once and for all (I am running out of hope for good ATI Linux support).</span><br /><span style="font-family:arial;"><br />After that hurdle, I decided to see if the latest Broadcom driver for my wireless chip actually worked. I have a PCMCIA NetGear card that uses the Atheros chipset, which works great, and I ordinarily blacklist the driver for the internal Broadcom chip (bcm4318) because it has never worked reliably. I certainly would prefer to use the internal wireless, so I continue to experiment.<br /><br />As it turns out, after upgrading the kernel to the latest Fedora 2.6.22, the driver loaded successfully, and I was actually able to connect to my access point with WPA. I was surprised, and happy to see this working, but the good news didn't last long. The connection speed that was being reported was only 1 Mb/s, and when I tried to open</span><span style="font-family:arial;"> my browser, it couldn't even open my home page. So with that disappointment behind me, I went back to the Atheros-based NetGear card. I had to use a snapshot release of the code for it to compile with the 2.6.22 kernel, but it works as well as ever, and maybe a little better.</span><br /><span style="font-family:arial;"><br />At this point, everything is working quite well, and with some additional patches for Evolution's data server package, I now have a stable working laptop once again, albeit without working 3D. So after working with it the last few days, here are some of my observations.</span><br /><span style="font-family:arial;"><br />I really like the fast user switching. The first time I tried it, it complained that it couldn't find the GDM binary, but after the screen saver kicked in, and I awoke it, I decided to use the "Switch User" function from there, and it worked! I was able to switch back and forth between two accounts without any issues. Since then, it has worked from the panel applet without complaint. I really like this feature, and it's been sorely lacking in Linux for quite some time (I am used to this feature in Mac OS X).</span><br /><span style="font-family:arial;"><br />Another area that has been a pleasant surprise is using 32-bit Firefox plugins under 64-bit Firefox. This laptop is using an AMD Turion 64 processor, and the 64-bit Firefox is the default installation. Up to this point, I have always gone through the trouble of installing the 32-bit Firefox, just to get Flash, Adobe Reader, and other plugins working. I had read about some software called nspluginwrapper. This is not in the official Fedora repositories, but it has a build that works perfectly on Fedora 7. This has enabled me to use the 64-bit plugins for Xine and OpenOffice.org, and at the same time use 32-bit Adobe Reader, Flash 9, and Java. Those, along with the xine-lib-moles package that adds the proprietary codecs to Xine, have opened up all the content on the web that I have</span><span style="font-family:arial;"> not been able to access before. I find my web experience to be so much more pleasurable than before!
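<br /><br />For what it's worth, here is roughly what the nspluginwrapper step looks like from the command line. Treat it as a sketch: the plugin path below is illustrative, so point it at wherever the 32-bit plugin actually lives on your system.<br /><br /># wrap the 32-bit Flash plugin so the 64-bit browser can load it<br />nspluginwrapper -i /usr/lib/mozilla/plugins/libflashplayer.so<br /># list the plugins that have been wrapped<br />nspluginwrapper -l<br /><br />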
Of course, I would prefer that these web sites didn't use the proprietary formats to begin with, and everyone's lives would be much better.</span><br /><span style="font-family:arial;"><br />The final area that seems to have improved dramatically is Firewire. I have a "My Book" external hard drive that I use for backups, and it has both a Firewire interface and a USB 2.0 interface. The Firewire interface has never worked, so I broke out my Firewire cable, and plugged it in, and it powered up, and the drive mounted with a nice icon on the desktop, just like it should!<br /><br /></span><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipwRtHEJjKm-Sc25F4ysbUCFXY77OXairYhEBnM9GiKGg-IQvsDmMROjMCWljSX5uakxxxcZES08brKg6yWlv7brqy5-qtNEDOl6vxgc99NC6LPf1bRNFsf0Rqo7i0mIeyElHo/s1600-h/Firewire.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipwRtHEJjKm-Sc25F4ysbUCFXY77OXairYhEBnM9GiKGg-IQvsDmMROjMCWljSX5uakxxxcZES08brKg6yWlv7brqy5-qtNEDOl6vxgc99NC6LPf1bRNFsf0Rqo7i0mIeyElHo/s320/Firewire.png" alt="" id="BLOGGER_PHOTO_ID_5091628839425450402" border="0" /></a><br /><span style="font-family:arial;">Since this worked, I decided to test out the Firewire interface for the performance and reliability of my backups. I needed to take a backup anyway, so I started up my backup process, which creates a gzipped tar of my home directory, and then I simply move it to the "My Book". I timed the move to the "My Book", and also opened the resultant file using File Roller, and did the move and open again using the USB 2.0 interface. The Firewire interface was slightly faster at moving the 4.3 GB file, but only by 9 seconds, so there wasn't much of a performance difference, but the surprising thing was that only the Firewire transfer resulted in a file I could open successfully. The USB transfer resulted in a file that got CRC errors. Obviously, that isn't a good backup, so I redid the transfer once again using the Firewire interface, and was able to open the backup file on the "My Book" with no issues again. This kind of problem has happened intermittently with USB for quite some time, and it gets more prevalent with larger backups. Needless to say, I really like the new Firewire stack in Fedora 7, and soon I'll be testing it with a digital video camera, just to see how far this new stack has come.
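<br /><br />Incidentally, a cheaper way to verify one of these archives than opening it in File Roller is to let gzip and tar check it directly. A quick sketch (the bracketed file name is just a placeholder):<br /><br /># test the integrity of the gzip stream<br />gzip -t "/media/My Book/[file name with date].tar.gz"<br /># or walk the entire archive without extracting anything<br />tar -tzf "/media/My Book/[file name with date].tar.gz" > /dev/null<br /><br />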
To sum things up, since getting over the hurdles of my hardware, I have a very stable platform for doing my daily work, and there has been progress on many fronts. The Broadcom driver is improving rapidly, and I hope to be able to use my internal wireless chip soon. I only wish ATI would get their act together on the video driver, so I can fully exploit my hardware.<br /></span></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-80380140921328567722007-07-03T09:09:00.001-06:002007-07-03T10:28:21.817-06:00Google Desktop for Linux vs. Beagle<div style="text-align: justify;"><span style="font-family:arial;">Recently Google released Google Desktop for Linux. I have been using Beagle on Fedora Core since it was added, and currently am running Fedora Core 6. With that, I decided to try out the beta of Google Desktop, and compare search results between the two, to see if one was any better than the other.</span></div><div style="font-family: arial; text-align: justify;"><br />So, I installed Google Desktop with their RPM for Fedora, and set the preferences. I set up my preferences for indexing the same as I did for Beagle, so the comparison would be fair on both sides. You can see the settings in the following image:</div><div style="text-align: justify;"><br /><a style="font-family: arial;" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMw4JAEVJAZyVgQ3FUwtQtsq9oFpFVEfapVEC9YQw6dfh7tj2UFrClQfr67UqdeT0EzW6p9SaV9tekUA1KIV3gsRQhTh6VwxQTbGtn96I0yLKUHsbiONYwe8OjXHBt7ZvMQH_a/s1600-h/GoogleDesktopPreferences.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMw4JAEVJAZyVgQ3FUwtQtsq9oFpFVEfapVEC9YQw6dfh7tj2UFrClQfr67UqdeT0EzW6p9SaV9tekUA1KIV3gsRQhTh6VwxQTbGtn96I0yLKUHsbiONYwe8OjXHBt7ZvMQH_a/s320/GoogleDesktopPreferences.png" alt="" id="BLOGGER_PHOTO_ID_5082993473889741778" border="0" /></a></div><div style="text-align: justify;"><div style="font-family: arial; text-align: justify;">Most of the settings are the defaults provided, but I added /var, /opt, /etc and /tmp as file systems, because I like to be able to search for things in log files written by syslog, configuration files, etc., and I also am indexing all file types, and web history, with the only exception being https content.</div><div style="text-align: justify;"><br /><span style="font-family:arial;">This pretty much mirrors my Beagle preferences, as you can see below:</span><br /><br /><a style="font-family: arial;" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDALK83jBugwK5NrGqfplRKWojk16MYnahI7zHeErUAKczKrv70WJ_H6durm6sHTezuPe43MtfVQXmccqwr_vvthetFTmYRI8kr1gg5jKtEO6sXYho3fgZLTiWbyuk44l_eXyM/s1600-h/BeaglePreferences.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDALK83jBugwK5NrGqfplRKWojk16MYnahI7zHeErUAKczKrv70WJ_H6durm6sHTezuPe43MtfVQXmccqwr_vvthetFTmYRI8kr1gg5jKtEO6sXYho3fgZLTiWbyuk44l_eXyM/s320/BeaglePreferences.png" alt="" id="BLOGGER_PHOTO_ID_5082994594876206050" border="0" /></a><br /><span style="font-family:arial;">So, after setting the preferences, I watched Google Desktop go to work on indexing my file systems. What was interesting is that it took a very long time: over two days to do the first pass at indexing. Now, granted, I have a lot of files on my laptop, so this is understandable. Beagle seemed to index my files a lot faster, but I don't have a specific time to compare against, because there is no way to monitor the indexing progress of Beagle (at least not that I know of). Now that brings us to comparing search results.</span><br /><br /><span style="font-family:arial;">With Beagle, I have been frustrated at times that it couldn't find files that I knew were there, but couldn't remember where I had saved them. Isn't that what desktop search is all about? In fact, as a result of trying to find a Portable Document Format (PDF) document that I had saved from the web, I opened a Bugzilla case thinking that Beagle was not indexing PDFs.
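<br /><br />(As an aside, this sort of check doesn't require the GUI. Beagle also ships a command-line client, so a quick sketch of the same kind of search, assuming beagle-query is on your path:<br /><br />beagle-query "[search phrase]")<br /><br />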
It turned out that Beagle was indexing the PDFs, but Beagle only indexes based on a file's metadata, not its entire contents. That explains why it couldn't find the file I was looking for: the search phrase I was using didn't match the file's metadata, only part of its content. So, I had the perfect test case to see whether Google Desktop could find what Beagle couldn't.</span><br /><br /><span style="font-family:arial;">I searched with the term "Small is Beautiful", which is part of a subtitle of a document produced by Familiar Metric Management, and it is about software development productivity as it relates to team size. As you can see from the image below, this search phrase returns nothing using Beagle.</span><br /><br /><a style="font-family: arial;" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8G3-XnN5H433LrpEIiRYZZJHzhJuPZ46AN9oC-ofdPLGjREamUlsAhNuhc9frvGCqXbc8sg-TU9AeY0p8yqT3omGaeXhkL-yzxbGPVm7DmmI0kbuVk3YzCIu8vgZ2pgNDZf_x/s1600-h/BeagleSearchResults.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8G3-XnN5H433LrpEIiRYZZJHzhJuPZ46AN9oC-ofdPLGjREamUlsAhNuhc9frvGCqXbc8sg-TU9AeY0p8yqT3omGaeXhkL-yzxbGPVm7DmmI0kbuVk3YzCIu8vgZ2pgNDZf_x/s320/BeagleSearchResults.png" alt="" id="BLOGGER_PHOTO_ID_5082999452484217842" border="0" /></a><br /><span style="font-family:arial;">So, I did the same search with Google Desktop, and you can see the results below. Unfortunately, I couldn't find a way to capture a screenshot of the interface without losing the results at the bottom, so I did the search from the browser interface instead.</span><br /><br /><a style="font-family: arial;" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAKce0lbe1AX2cmPKhTk__XjM1VqDSzjnm_aR4E4CYMqz1qnINon4eV36dQqU8x-m-WVzYOtqmCIAexh6FY79sBUzEguysJHeAwf3mATauyoULG8Qn3gCiJx4b7-RfLxhNN27J/s1600-h/GoogleDesktopSearchResults.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAKce0lbe1AX2cmPKhTk__XjM1VqDSzjnm_aR4E4CYMqz1qnINon4eV36dQqU8x-m-WVzYOtqmCIAexh6FY79sBUzEguysJHeAwf3mATauyoULG8Qn3gCiJx4b7-RfLxhNN27J/s320/GoogleDesktopSearchResults.png" alt="" id="BLOGGER_PHOTO_ID_5083000826873752578" border="0" /></a><span style="font-family:arial;">As you can see from my cursor highlight, Google Desktop found the file I was looking for without any problem. This explains the major difference between Google Desktop and Beagle. Beagle optimizes for indexing speed by indexing just the metadata on documents, while Google Desktop does a full index of the content, thereby taking much longer to index files, but giving much better results. I prefer the better results. There is one other difference that I would like to point out between the two.</span><br /><br /><span style="font-family:arial;">In backing up my laptop, I noticed that my backup of my home directory was taking longer and longer, and the backup was getting very large. In looking into this, it turned out that a large percentage of my home directory was the Beagle index. That led me to look into how large the Google Desktop index was in comparison. Well, there is no comparison.
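<br /><br />Checking this for yourself takes two commands; here is a quick sketch with du, assuming Beagle's index lives under ~/.beagle and Google Desktop's data ends up under ~/.google/desktop (locations may vary by version, so double-check your install):<br /><br /># size of Beagle's index directory<br />du -sh ~/.beagle<br /># size of Google Desktop's data directory<br />du -sh ~/.google/desktop<br /><br />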
The Google Desktop index is much, much smaller (see below).</span><br /><br /><a style="font-family: arial;" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIThbHQLe57C3D1oUF-sTqWyctX0EwG9EyyFDZPt2J9FL1SwXlHzewLzKl5v_YSdRkfqhmVB8k76II1SzxiGqUMO_-HZtzcVPKyTI1JDOaDgyxvZniUh5nygYoqC-LrJoHIBsZ/s1600-h/IndexSize.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIThbHQLe57C3D1oUF-sTqWyctX0EwG9EyyFDZPt2J9FL1SwXlHzewLzKl5v_YSdRkfqhmVB8k76II1SzxiGqUMO_-HZtzcVPKyTI1JDOaDgyxvZniUh5nygYoqC-LrJoHIBsZ/s320/IndexSize.png" alt="" id="BLOGGER_PHOTO_ID_5083005813330783250" border="0" /></a><br /><span style="font-family:arial;">In fact, it's 94% smaller than Beagle's! This is a huge difference, and certainly pays off in disk usage. </span><br /><br /><span style="font-family:arial;">In conclusion, I really liked Beagle, but Google Desktop offers better search results, with considerably less disk usage for the index. At this point, I'm ready to turn off Beagle (maybe even uninstall it), and rely on Google Desktop instead.</span><br /><br /></div><br /></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com2tag:blogger.com,1999:blog-18974949.post-75337414782953008102007-03-15T16:32:00.000-06:002007-03-15T17:04:34.420-06:00Greater than 4GB files on an External USB Hard Drive<div style="text-align: justify;">Several months ago, I purchased a Western Digital USB/Firewire external hard drive to back up my laptop's home directory. Considering that I was using it with Linux, specifically Fedora Core 6, I wasn't sure how things were going to work.<br /><br />After plugging it in, and attaching it via the USB 2.0 cable, it mounted and was presented on the GNOME desktop, and I could browse the contents of the disk without issue. Trying to keep things simple, I merely used tar and created a gzipped tar of my home directory, making sure to preserve all the permissions of the files with the following command:<br /><br />tar -czpf /tmp/[file name with date].tar.gz /home/[my home directory]<br /><br />This works quite well, but it presented me with my very first issue. My home directory is quite large, and the very first tar file I created was larger than 4 GB, so I couldn't write it to the external drive. It couldn't be written for the simple reason that the drive was using the FAT file system, which doesn't support file sizes larger than the 32-bit maximum of 4 GB.<br /><br />So, I looked through my home directory, and I found some obvious culprits for my size problem, and deleted those files, because I no longer needed them. Mostly they were old ISO images that I had burned to CD long ago and didn't need anymore. Okay, problem solved, right?<br /><br />Well, not quite. This worked for several months, but I was still dangerously close to the 4 GB limit. Eventually I spilled over the limit, and really couldn't delete files to get back under it.<br /><br />With this in mind, I decided to see if I could change the file system to one that supported files larger than 4 GB. Considering that I am only using this drive with Linux, cross-platform compatibility was not an issue for me, so the obvious choice was to use the ext3 file system from Linux.
This would give me the large file support I needed, and also be more reliable, as ext3 is more robust than FAT, and it supports journaling, so there is significantly less risk of losing data.<br /><br />During my investigation of making this change, I found nothing but accounts of individuals having problems trying to do this. Many individuals had even rendered their drives unusable. Considering this, I took a step back and wondered whether I should try this, or see if I could think of another resolution.<br /><br />I really couldn't think of a better way to deal with this problem, and I wanted to keep things simple, so I went ahead and tried to make the file system change, and here is the procedure I used (a command-line equivalent is sketched after the list).<br /><br /></div><ul style="text-align: justify;"><li>First, I copied all the backups of my home directory that were currently on the drive to /tmp on my laptop.</li><li>Second, I fired up GParted, considering that it is a graphical partitioning tool that will also format partitions. This proved to be an excellent choice, because it helped me to avoid one pitfall.</li><ul><li>Considering that the drive was plugged into the USB port, and mounted under /media/My Book, GParted would not let me format the drive until I unmounted it.</li><li>I used GParted to unmount the drive, and then I selected from the menu "Format to->ext3".</li><li>I watched as it automatically changed the partition type to the correct one, and then formatted the partition with the ext3 file system.</li><li>It completed with no issues, but here is where one of the problems reared its ugly head.</li><ul><li>After formatting, the drive would no longer auto-mount and show itself on the desktop. I could manually mount it with the mount command, and it was working. I even wrote some files to it just to make sure everything was fine, and it was.</li><li>The guys on the Fedora Core mailing list were most helpful with this problem.</li><li>As it turns out, I needed to label the new file system with the e2label utility, which I did with the following command:</li><ul><li>e2label /dev/sdc1 "My Book"</li></ul></ul></ul><li>Finally, I moved the backups I put in /tmp back to the drive with the new file system.<br /></li></ul><div style="text-align: justify;">After these simple steps, I had a newly formatted external USB hard drive that I could write files larger than 4 GB to without issues. It would auto-mount, just the way it did when it was a FAT file system, and I now have some very large backups on it, and didn't have to change my very simple backup procedure.<br /></div>
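<div style="text-align: justify;">For reference, this is roughly the command-line equivalent of what GParted did for me. It is only a sketch: /dev/sdc1 is where the drive showed up on my system, so double-check the device name on yours (with fdisk -l, for example) before formatting, because pointing mkfs at the wrong device will destroy its contents.<br /><br /># unmount first; the file system cannot be in use while formatting<br />umount "/media/My Book"<br /># create the ext3 file system on the drive's partition<br />mkfs.ext3 /dev/sdc1<br /># label it so it auto-mounts under the same name as before<br />e2label /dev/sdc1 "My Book"<br /></div>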
Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com2tag:blogger.com,1999:blog-18974949.post-18395105930264730852007-03-13T13:14:00.000-06:002007-03-13T14:46:09.496-06:00Glimmer of Hope for Desktop Linux?<div style="text-align: justify;">In the last week or so, I have read three different articles about government agencies that are banning Microsoft's Vista operating system, along with other Microsoft products in some cases. The National Institute of Standards and Technology (NIST) is the latest, and this follows the US Department of Transportation (DOT) and the Federal Aviation Administration (FAA).<br /><br />So, what makes this a glimmer of hope for desktop Linux? Well, at least one of those agencies, the FAA, is seriously looking at a combination of Linux desktops with Google's new enterprise applications as a replacement for Windows and Microsoft Office! When you combine this type of interest with other government initiatives to adopt open standard file formats, you can see a glimmer of hope that the Microsoft lock is being broken by some large government agencies.<br /><br />You could say, so what! It's only some public sector organizations! What makes this a glimmer of hope, in my mind, is the carry-over effect it could have on the private sector.<br /><br />If enough government agencies start adopting open technologies like Linux and ODF, then the private sector companies that have to do business with them will have to adopt technologies that interoperate. This in turn loosens the grip that Microsoft has on a larger portion of the market.<br /><br />I sincerely hope that these government organizations aren't just bluffing to get concessions out of Microsoft. With large-scale adoption of open technologies, such as Linux and ODF, we will all be better off. True competition in the market for desktop operating systems and applications could become a reality someday.<br /></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-82476007862492318612007-02-14T13:57:00.000-07:002007-02-15T15:50:06.442-07:00Is OOXML Open as Microsoft Claims?<div style="text-align: justify;"><span style="font-size:130%;">Microsoft recently posted an "<a href="http://www.microsoft.com/interop/letters/choice.mspx">open letter</a>", complaining that IBM is not in favor of open standards, and that they are all hypocrites. It is noted that IBM was the only one to vote no in the ECMA process for the standardization of OOXML. I find this to be disingenuous, to say the least.<br /><br />Microsoft claims that OOXML is open because of its acceptance as an ECMA standard. In my opinion, that hardly makes it open. The rules by which ECMA standards are created are very loose indeed, and I don't blame IBM one bit for voting against it. I just can't believe that everyone else involved didn't vote no too!<br /><br />File formats have become an interesting topic of conversation ever since ODF (Open Document Format) came on the scene. Before ODF became an OASIS and ISO standard, there were no open standards for office document formats. With Microsoft controlling the majority of the market for office productivity applications, their proprietary file format has been lock-in heaven for them, and lock-in hell for their customers.<br /><br />ODF threatens to break that lock-in, and free customers to choose alternatives, without the problems associated with proprietary file formats (lost formatting, can't edit with a different application, etc.). So, Microsoft had to act to protect its franchise, because they simply are afraid to, or maybe they can't, compete on the quality of their implementation of office productivity software. Of course, it would also commoditize the market, and drive down prices. With Office being almost half of Microsoft's profits, that's a hard pill to swallow.<br /><br />With that as the backdrop, is OOXML truly open?<br /><br />The short answer is an emphatic NO!<br /><br />The reason for this is simple. The specification clearly references proprietary Microsoft Office technology that cannot be implemented by anyone other than Microsoft.
Truly open standards need to be implementable by anyone who desires to do so, and this is simply not the case with OOXML.<br /><br />Without the ability for competing products to implement the file format, Microsoft can claim to have an open standard file format, and keep the lock-in they have enjoyed for years. As they say in the Guinness commercials, "brilliant!".<br /><br />Of course, I hope the ISO will put an end to this charade, and vote this down as an ISO standard. That is the only just thing that can happen. If Microsoft gets away with this, Microsoft will have won again, and the joke is on us.<br /><br />What's the old saying? Fool me once, shame on you; fool me twice, shame on me!<br /><br />Well, if the ISO members are fooled into accepting OOXML as a standard, it will not only be the shame of the ISO members, but a shame on the entire world!<br /></span></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-85980958131709829152007-02-14T10:37:00.000-07:002007-02-14T10:50:21.307-07:00Open Source Whiner Babies!<div style="text-align: justify;">Since Marc Fleury's retirement from Red Hat, there have been several articles and blogs written with regard to Marc and JBoss. In those articles and blogs, it always seems like the folks who are critical are the guys that left JBoss in the early days to try and create a competitive business that they called the "Core Developers Network", or CDN.<br /><br />The thing that strikes me the most about their comments is that they are childish, immature, and lean on a crutch of what "true open source" is.<br /><br />What these guys are, are whiner babies, and nothing more!<br /><br />They weren't getting what they thought in their own minds was fair, as far as a stake in JBoss goes, so they split and tried to form a competitor based on the same project (whose ego was getting in the way here?).<br /><br />Then, when JBoss moved to protect its business by removing their commit privileges, they cried foul.<br /><br />What did they expect? Peace and love?<br /><br />In reality, if they had stuck it out and continued to work, they would have been handsomely rewarded in the end. Now that JBoss has been acquired by Red Hat, and Marc, along with lots of other folks, got big paydays, they are left to cry over their spilled milk.<br /></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-19805000670733638752006-12-30T10:13:00.000-07:002006-12-30T11:06:07.591-07:00Is JBoss Open Source?<div style="text-align: justify;"><span style="font-size:130%;"><span style="font-family: arial;"><span style="font-family: arial;">I continue to run across people, and written articles, claiming that JBoss is not "true" open source. For the longest time, I just didn't understand what they meant by that.<br /><br />In some conversations I have had, people don't understand the licensing issues; in others, I just hear corporate blah, blah being repeated without much thought. This mostly comes from IBM employees, who are repeating the party line, but don't really understand what it is based on. In still others, I hear confusion between licensing and development models, and this seems to be the heart of the issue, with people claiming that JBoss is "evil", and not "true" open source.<br /><br />So what is "true" open source to these critics of JBoss?
It is simply that they think open source is not just the license, but also the development model that is used. They also believe that the only appropriate development model is one where no one company or entity entirely controls the project.<br /><br />The fact of the matter is that open source is about the license, not about the development model used. I could write the code completely on my own, release the code, and never even accept external contribution, and if the license is an OSI approved license, then it is still open source. The project may or may not be very successful with that approach, but it doesn't change the fact that the code being under an OSI license affords everyone the freedoms of open source.<br /><br />So what is the development model that the critics say makes something "true" open source?<br /><br />They contend that you have to have many companies contributing, and Linux is used as a primary example. The fact of the matter is, in the case of Linux, you have market dynamics that bring companies together because they have a common interest in fighting a monopoly in the operating systems market. This is a unique set of circumstances in comparison to the middleware market.<br /><br />IBM in particular, which is widely credited with giving legitimacy to Linux, has a huge incentive to support and contribute to Linux. First, when they started getting involved with Linux, they had AIX, OS/390, OS/400 and OS/2 as operating systems they were spending considerable resources developing and supporting. Considering the portability of Linux, and its rapidly maturing technology, if they put their resources behind it they could eventually have a unified OS strategy, with one operating system running across all their various hardware platforms. In fact, today you can run Linux on all of their hardware platforms.<br /><br />In the case of a company like Oracle, Linux is the hedge against Microsoft in the database market. In order for Oracle to maintain a market share advantage over Microsoft in the database market, they need an alternative platform that is popular on commodity hardware that SQL Server doesn't run on.<br /><br />The dynamic of having a hated monopoly, plus other unique incentives, brings even competitors together to support, contribute to, and promote Linux. This simply doesn't exist in the standardized middleware market.<br /><br />Could you imagine IBM and BEA contributing to JBoss? Companies only contribute to open source projects when there is a strategic corporate advantage to doing so. No one should be naive enough to think otherwise.<br /><br />In the middleware market, there is no one dominant player in terms of market share, and there is considerable revenue tied to traditional closed source products. It is quite impossible for JBoss to have the kind of external contribution that Linux enjoys, due to its unique market conditions.<br /><br />Having said that, JBoss enjoys considerable external contribution from companies. Initially, Novell was a considerable contributor to a couple of the projects, but the Red Hat acquisition put an end to that. We have also had many companies that are users of our technology contribute over the years.
Our new Group Bull relationship is another example, and when you look at the folks that work for the JBoss division of Red Hat, all of them were external contributors (developers) at one time.<br /><br />Under the market circumstances, and the business model of the company, JBoss has as open a development model as is possible. That leads to the other issue of the critics.<br /><br />The business model of JBoss is one where the core developers all work for the same company. What this enables is a quality of support that simply cannot be matched. While anyone could take the JBoss software, distribute it themselves, and offer support, they simply cannot match the quality of support. We have a two-tier model, where we hire very experienced Java EE developers for tier one support, and the core developers are tier two. Does this mean that we are not "true" open source?<br /><br />Open source is about supplying freedoms to all users of the software, and JBoss supplies that, as all of our software is under an OSI approved license, and most of it is under the LGPL. Secondarily, the business model that has emerged for open source is one based on quality of support. By hiring the core developers, we enable the best possible support, which is certainly in the spirit of open source.<br /><br />In conclusion, under the market conditions, and what users expect from open source companies, JBoss is as "true" to open source as you can be!<br /></span></span></span></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-1166909562726599672006-12-23T14:15:00.000-07:002006-12-23T14:32:42.776-07:00Java and the GPL!<div style="text-align: justify;">It's been a long time since I posted, and something that I was looking forward to was Sun's move to open source the Java platform.<br /><br />Well, they not only followed through with the plan, they completely caught me off guard with their choice of license. I think they caught everyone off guard.<br /><br />I have always been in favor of putting Java under an open source license, but I never really gave much thought to which license would be appropriate. The GPL, with the so-called "Classpath" exception, I have come to believe is the ideal choice.<br /><br />It allows the virtual machine to be deeply integrated into other GPL software, such as Linux. The JVM has always been a second-class citizen where Linux is concerned, in that there was never very much time spent on optimizing the JVM for Linux. Now the community can really get involved in optimizing the JVM for Linux, and I think this will have real benefits to the Java community where Linux distributions are the target deployment platform.<br /><br />Besides Linux, other projects will also benefit. GNOME will no longer have an excuse to ignore Java as a first-class language, so Java may finally become a reality where the desktop is concerned, at least on those desktops that use GNOME. OpenOffice.org will no longer have the problem of having quite a bit of its code based on a language without a free-as-in-freedom runtime environment. It also eliminates the need for distribution vendors to do all the engineering to create an OpenOffice.org distribution with an alternative Java such as GNU Classpath.
This means less energy will be expended on non-value-adding engineering tasks, and more can be plowed into mainstream development.<br /><br />I also believe that the knock-on effects of a GPL Java will not be fully realized for many years. This is truly an earth-shattering move by Sun, and they are to be applauded for it!<br /></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0tag:blogger.com,1999:blog-18974949.post-1151520527865531492006-06-28T12:25:00.000-06:002006-06-28T12:48:48.066-06:00Open Source Java; What does this mean?<div style="text-align: justify;">I was at JavaOne earlier this year when Jonathan Schwartz asked whether Java would be open sourced. The revelation that followed was that Java would be open sourced, and that it was not a matter of whether anymore, but a matter of how.<br /><br />This was the buzz of the first day, and I have continued to watch this unfold. Recently, I read some stories that said that Sun would be ready to open source Java in months. Now, this is a pretty broad declaration, and they could go as long as 11 months without having to retract that statement, but still, they seem to be moving down the track as they said they would.<br /><br />What does this really mean for all of us involved with Java?<br /><br />I have always been a proponent of the open sourcing of Java. My main complaint has always been that certain JVM bugs just never get fixed. I would love to have the "freedom", and be empowered, to fix those bugs in a completely open process. There have been many studies and comparisons of the quality, in terms of defect density, between closed source and open source software. All of them draw the same conclusion: open source software has fewer defects, and is more reliable, than closed source software. It's pretty simple. I want fewer defects and a more reliable virtual machine, and we will get that via the open source development model.<br /><br />Are there other benefits to this?<br /><br />I once heard Bill Joy, former Sun employee and co-founder, say that innovation happens out there. What he meant by that, at least in my interpretation, is that companies cannot be insular; they have to realize that innovation happens in the broader market, and no one company, no matter how big, can innovate solely on its own. With that in mind, opening up Java to the world can only create additional innovation in and around the Java platform.<br /><br />In fact, I believe it will accelerate the delivery of innovation for the Java platform in a way that cannot even be fully understood today. Only many years down the road will we be able to look backward and realize the monumental changes that came from this.<br /><br />I am really hopeful about the open sourcing of Java, and its benefits to all of us that use it and depend on it. I only hope that "months" really means just a few short months' time. The sooner the better!<br /></div>Andrig T Millerhttp://www.blogger.com/profile/05386153547711039401noreply@blogger.com0