By Eric Bright
The first few days with Ubuntu 11.10 and its Unity interface
My system crashed completely. Everything in my Win7 installation is useless now. After saving my documents and whatnot from my dead Win7 installation using Ubuntu, I’m now thinking of giving up on Windows. I’m sick of it. I’m sure Linux is going to give me a lot of headaches and nightmares too, but in the long run, I think it’s the right decision to move away from Windows. I’m going to need a lot of help with things in Ubuntu in the near future and I’m counting on you. Wish me luck.
Those were my thoughts and hopes for Linux on October 15, 2011, when I screwed up my Win7 installation and sent the above message to my friends on Google+. Surprisingly, in less than an evening, the tables had turned:
I will give Gnome in Ubuntu a few days’ trial before I switch to Mint.
The Unity user interface has been around since Ubuntu 10.10. It seemed impressive at first, but it turned out to be too counter-intuitive to be a viable desktop-environment option for me. At the time, Gnome seemed just as counter-intuitive as Unity, but its “Activities” view offered more room for selecting applications and getting things done than Unity did, so it could have been the better choice. However, I didn’t think either of these new directions would prove suitable enough.
The Introduction of New Problems
The user interface in the new releases of Ubuntu has changed dramatically. I don’t think the changes solve any of the old problems; I think they introduce new problems instead. A hierarchical menu system, the very thing Unity 3D has moved away from, is the most intuitive arrangement and the one closest to how our memories work, so it makes sense to assume it is also a better way to organize the items in our menus.
I don’t want to be “too dismissive,” as a friend of mine cautioned me not to be, merely “because stuff is different than what [I’m] used to.” I don’t want to say that the Linux community is making a big mistake by moving in the Unity direction, because Ubuntu is not all there is in this field. I tried Mint, for example, and it is awesome and actually usable. At least it did not give me tons of new problems to solve (only a few). So, I decided to give Unity a fair try. But shortly after, I had to give up. I thought to myself:
I used to like Ubuntu, but I’m not that crazy about it anymore. Maybe we should give it another year or two to iron out the problems that the new direction of its GUI development is generating. Nevertheless, Unity is a respectable and brave move. It could have been useful. We could not have known until we actually used it. So, I am all for experimentation.
After a couple of days of playing around with the new interface and the new paradigm, I came to the conclusion that the Unity 3D user interface in Ubuntu 11.10 is a very unpleasant environment compared to the user interfaces of the Ubuntu versions that preceded it. Not only because of Unity itself, but also because the release lacks many of the tools that used to be installed by default. Customizing the system is a nightmare now. It did not use to be like that, and any restriction of that kind is certainly a step backward.
Going Backward to 1995
The way I felt while working in Unity 3D in Ubuntu 11.10 reminded me of how I used to feel when I had to struggle through the user-unfriendly flavors of Linux about ten years ago. Back then, doing the simplest things was a true fight. I always hoped that Linux would learn from Windows and make things less painful.
After a while, Linux caught up. Things started to make sense and, for once, became simple to do. Until, umm… Unity came about, and everything turned around and headed in the opposite direction. Ubuntu had been growing on me, and I could see it becoming a suitable alternative to Windows. Now I had to change my mind. It’s a lot worse than it was a few months ago. Fortunately, the good pieces are reused in Mint (Linux Mint is based on Ubuntu), and many of the horrible parts are avoided. I was glad I had tested the Mint distribution in a VM a few months earlier and could take refuge there (if I was not going to go back to Win7 altogether). But still, Mint too was far from enough.
When I learned that Linux doesn’t use a registry the way Windows does, I was very excited and happy (I still am). That’s awesome! Or so it sounded to me. How awesome would it be? I had to wait and see. After more work in the new environment, I learned that Linux itself was not suitable for what I usually do. The alleged awesomeness of having no registry is outweighed by the primitiveness of the applications available in that world, as well as by the unbearable roughness of almost every edge of almost every application that runs natively under Linux. To this judgment, a friend of mine responded:
Getting over the lack of infinite configurability, I prefer that Unity gives you a highly polished experience out of the box, like OS X and Win7. I found KDE too busy for my liking – I liked the more minimalistic approach in Unity. Like I said, after getting used to it, I don’t want to go back to a Win95 clone.
He is right. Going back to a Win95 clone is not fun. Unity seems to have great potential, although I feel it has a great many shortcomings as well. It seems to me that the idea of human usability is new to the Linux community. If it is not, then they are horrible at it, and it would be safe to assume they don’t know how to do it. Also, the command-line paradigm belongs to the pre-GUI era. The user is no longer the same person as the programmer, as was the case 40 years ago. These two categories have evolved into two different entities, and the tools that are great for coders are no longer great for users. The Linux community seems to have only just begun to realize this difference, although, futilely, they are still proud of their command-line Kung-Fu. It’s fine to have fabulous command-line skills and OS features to utilize, but it is not fine to assume they can compete with modern approaches. The situation resembles the curious legacy of LaTeX. The reason it is not used as much as MS Office is not that it is fantastic. Quite the opposite! It’s that, in comparison to what MS Office can do, it sucks (the end result looks great in LaTeX, but to get it you have to move heaven and earth); and this while MS Office itself is not a particularly amazing product either. If it were all right to work with machine code, we would still be using punch cards. But it is not all right. When better ways exist to do something, those ways will dominate the market.
Mixed Feelings
My friend reminded me that:
“Certainly by…2005, possibly by the end of 2003, Linux will pass Mac OS as the No. 2 operating environment,” said IDC analyst Dan Kusnetzky.
How far-fetched that turned out to be!
Today, I don’t know exactly how I should feel about Linux anymore. I have mixed feelings about it.
On the one hand, I like it so much that I wish it all the best, and I want to see it not as the second but as the first contender in the OS war. I deeply respect the effort and, like everyone else, can see the enormous potential in the Linux community. It can easily be a source of inspiration for me and thousands of others. Many people (surely not everyone) contribute to it for free, spending time on the advancement of the Linux ecosystem that they could otherwise have spent on entertainment or other things. That’s certainly admirable. Also, I must admit that things have improved dramatically since, let’s say, 2005. Back then, Linux was not an option home users could rely upon. Now, it has some potential in that market.
On the other hand, precisely because I like it so much, I cannot stand its bold imperfections. I want it to beat the other OSs (a dream, maybe). I want it to be cool, modern, competitive, and simple to use, even simpler than the others. Actually, I expect it to be so. The reason we are still struggling in the world of proprietary software to achieve the most basic interoperability and compatibility (such as between the file formats of different word processors) is precisely that the architecture of that software is not publicly known, the internals are anyone’s guesswork, and compatibility is always at the mercy of trial and error. Years passed before the world could agree on a document format like ODF, and it is still not in use everywhere (far from it). Seeing this, one would expect that the Linux community must prosper in no time. It must be able to build tools that are completely compatible with one another with elegant ease, because the internal mechanisms of almost all software in this ecosystem are readily known. So everything must work flawlessly and seamlessly. Or must it?!
Experience shows the opposite. Many of the tools made for Linux are far inferior in both quality and usability. They do not do things seamlessly. Sometimes they do not do even the simplest things as easily as they absolutely must.
Let me give you an example: can you find a tool like PeerBlock that works flawlessly under Linux? MoBlock? He he! Sure! Go ahead and install it if you can! Of course, if you are a computer engineer (or a software programmer), you might have no problem building and installing such software. But then you might as well write your own program that does exactly what you need, and you might not need MoBlock in the first place. Most people, however, are not software engineers. And most people don’t know how to read punch cards either. Now, if someone came to me and said, “Hey, there is an elegant program that does so and so, and in order to install it you need to run this truck-load of punch cards through this machine and voilà! You will have that brilliant program,” I would say: “You have come half a century too late!” It could have been an option in the 1950s (although I’m not sure about that), but it certainly is not an option today. But why? Why would we be outraged by such a sincere offer? And more importantly, what does such an offer have in common with what Linux offers today? They are both obsolete!
Is the Linux Community Confused?
Again, my friend reminded me that:
One of the major issues with the Linux desktop has always been that it was designed by engineers, for engineers.
I would ask: why is it promoted to home users, then? Is there confusion amongst the engineers of this marvelous OS as to who its target audience should be and who it actually is? Have they made it for engineers but marketed it to laypeople like me by mistake? If that is the case, then the engineers need new brains. I doubt that it is, though. I don’t think something like Ubuntu is made “for engineers.” The progenitor of what we now know as Ubuntu may well have been made “for” engineers; I understand that. It might also still be developed by engineers. But then, what other gigantic piece of software, as big as Linux, do you know of that is not written by engineers? Great software is almost always written and developed by engineers of one kind or another, so the developers of Linux are not alone in being engineers who write software. The only thing left of such an argument is the claim that this particular community of engineers happens to write code only “for engineers,” which I have a very hard time believing. It might be true on some level, but the marketing campaign around Linux does not echo such a hypothesis at all.
The marketing campaign around the Linux OS ‘claims’ that this OS is ready to be used by everyone to do “everything.” That’s a big claim, and it is no longer limited to engineers, even if it once was. Then shouldn’t it be able to do, seamlessly, the things that can already be done elsewhere? Someone might say:
But, it is free!
Is Free Enough?
Is this any way to appreciate the work of thousands of volunteers who did almost all of this for free? No, criticism is not ingratitude. Due respect and attention must be paid to the stewards of such a great effort. I personally like Linux, evangelize it, and look forward to seeing it sitting on the throne. I do. I have been installing it on the computers of those I knew could use it. And I should say the price tag for such an amazing set of software is not bad at all.
But let’s face reality: free is not enough. Ask me why. People want their jobs done. They already know there are tools that can easily do the kinds of jobs they have at hand, and they are usually ready to pay the price. If that were not the case, every company that sells software would already have failed and been replaced by freeware alternatives. Although there are fantastic free programs out there that easily cast a deep shadow over the performance of their commercial counterparts, not all free software is like that.
It seems to be a lot easier to build a piece of software that does a limited number of tasks well, then polish it and make it even better over the course of several years. An example of such software, and not a particularly small one either, is LibreOffice. It is gigantic, it is capable, and it is getting even better. Not only is it a free office suite, it is also open source. There are countless great programs like LibreOffice that do specific things, do them right, and do them a lot better than their commercial competitors.
But when we come to an operating system, which is also software, the landscape looks totally different. Even amongst the commercial players, there is only one truly seamless and useful option. How do I know? By the number of computers around the world that are running its different flavours. Of course, a first-mover’s advantage and whatnot contributed to its success, but that’s not all of it. We can see latecomers, like Android, that overcame the disadvantage of being last to market through proper project management and engineering, amongst other factors. So it is not merely because an OS had a head start that it dominates the market. Many factors are involved, and being simple and usable are among them. The OS in question is truly usable.
The virus-infection objections are straw-man arguments, already debunked by evidence from the market and by clarifications from the technically educated public. We already know that the rate of virus infection on an OS is directly related to its popularity and has little to do with its inherent security. As soon as an OS becomes popular, it starts to see viruses written for it. Every platform that was once obscure and allegedly virus-free has become virus-infested once it became hugely popular. We also know that claims for the intrinsic security of this or that OS are nothing but a myth, exactly because we have seen such alleged security suddenly disappear once all the attention was drawn to the platform.
Despite all the viruses, “security issues,” and whatnot that an OS might be known for, it can still be a lot simpler to use and a lot more effective at getting the job done. Customers, and certainly non-technical users, appreciate that. They can sense a usability that does not yet exist in the Linux world. An OS can be free and still useless.
I remember once visiting a client to fix her computers (she has three). As I was fixing their issues, I asked her if she was interested in looking into a few free alternatives. At first she was very excited and enthusiastic about the free software I could install on her computer. I did, and everything went very well. Then, for one of her older computers, not too old, but old enough for her, I suggested installing Ubuntu (as I usually do in such circumstances). She agreed and we proceeded. Obviously, the OS was free, and all the alternative applications she wanted were also freely available. But before we could even start doing anything, we hit a dead end. How? I am sure many of you already know why a Linux installation might be rejected by a particular hardware configuration: drivers! We could not get her wireless keyboard and mouse to work properly with the setup; the wireless adapter was not recognized, so we could not go online; the graphics card was running on a generic driver, so some features couldn’t work; and so on and so forth. We got around a few of the issues, but by the time the daunting task of getting the beast to run normally was done, we had come across so many silly obstacles that she changed her mind and said: “I never had any of these problems with my previous OS. I don’t want this new OS. I want the old one back. Would you please remove this from my computer and put the old one back?” No amount of explanation and assurance was enough to convince her that the new OS could work just as well. The reason? Because the new OS indeed could not work just as well. And yet, it was all free!
Usability Issues
This short article is not an exhaustive list of all the usability issues in a Linux distribution such as Ubuntu. I can only focus on a few of them.
The Unity user interface is only one of the many usability issues that Ubuntu and other Linux distributions either inherited from their ancestors or created for themselves from scratch. The other serious problem with the Linux ecosystem is the installation of applications. It is an accepted practice in the Linux world that software is written and then put online for others to build first and install afterwards through the command line (terminal). This procedure, if not universal, is common enough and accepted enough to ruin the very thing the Linux community is so enthusiastic about: user acceptance. As long as this procedure is even allowed and used as an alternative way of obtaining a working application, the Linux community will not see the popularity it is seeking. The only way around this major and serious obstacle is to do away with this crippling experience and disallow raw source code from being delivered as an end product.
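To make the complaint concrete, here is roughly what obtaining such an ‘application’ looks like in practice; a generic sketch of the classic source-build ritual, with a made-up project name and URL:

    # fetch and unpack the source (project name and URL are hypothetical)
    wget http://example.org/foo-1.2.tar.gz
    tar xzf foo-1.2.tar.gz
    cd foo-1.2
    # hunt down the build dependencies by hand, then:
    ./configure        # probe the system and generate a Makefile
    make               # compile everything
    sudo make install  # copy the results into the system

Every one of these steps can fail with an error message written for developers, not for users; one missing header file is enough to stop a layperson cold.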
The Lazy Programmer: The Paradox of the End Product in Linux vs. MS Windows
Think of it this way: when was the last time users were required to fetch the source code of an application, build it on their own machines with their own tools, and then install or use it under any version of Windows? Never! It’s not that such a thing is impossible (what a laughable idea); it’s that the software-delivery philosophy is fixed at a point that happens to be usable to the end user. Everyone, even the open-source community, writes the code, compiles it, and only then delivers it as a product. So an end-user product is defined from the point where the code is already compiled onward. Anything before that point is not considered the end product in the MS Windows world, but the source code. The source code is not meant to be used, built, or compiled by the end user, and no end user is required at any point to be able to do such a thing.
Now look back at the Linux world and give me examples of programs that are called “software” yet are expected to be built and then installed by the end user. I don’t think anyone would have any problem naming many of them. In the Linux mentality, a bunch of uncooked (raw) source code is also called an application. There is nothing wrong with that label in itself. The problem begins when the labeling becomes the more or less standard practice of many lazy programmers, who walk the end user half-way through the job and then leave them suspended in mid-air for the other half, empty-handed with a bunch of code they are expected to build on their own before it is of any use at all. This is a philosophy that is accepted in the Linux world. The end user has to open the terminal over and over, even today.
This is completely unacceptable. The fact that the end user has to do anything at all through the command line qualifies the whole experience for rejection. It also says a lot about the still-lazy mentality of some programmers who write for the Linux environment. I call it lazy programming simply because a great and crucial part of producing many ‘applications’ is left to the end user to perform.
Build It Yourself: Why Does a Linux User So Often Have to “Build” an Application from Source?
Simply because the tools for doing so are included in most Linux distributions, some programmers, and their number is not small, think it’s all right to leave the source code as it is and let end users build the application for themselves. The argument for this practice usually goes something like this: by building the application yourself, you utilize the full potential of your specific CPU, so the software will run on your hardware at its optimum.
While this argument may make sense if you are ‘building’ software for Google’s servers, where every nanosecond saved through such optimization adds up to a huge number over a short period of time and translates into real economic benefit, it has no material meaning for most end users at home or school; none whatsoever. Building from source is a good idea for time-sensitive missions, where every fraction of a second actually counts and no time-saving method is ever enough. The end user, in most cases, will not feel much of a difference.
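For what it is worth, the whole ‘optimized for your CPU’ promise often comes down to little more than a compiler flag; a sketch, assuming GCC:

    # tuned to the exact CPU of the machine doing the compiling:
    gcc -O2 -march=native -o myprogram main.c
    # the generic binary a distributor would ship instead:
    gcc -O2 -mtune=generic -o myprogram main.c

For a desktop application that spends its life waiting for mouse clicks, the difference between the two builds is rarely perceptible.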
Now, if you ask me whether my computer ran faster on a Linux diet than it does now that it is back on a Win7 diet, I would say: certainly on Linux. It was almost twice as fast and as responsive as it is now with Win7 on it. But, unfortunately, that is not the whole story. The Ubuntu version I installed was already compiled, so the “build it yourself to have it optimized” argument does not apply to it. Also, almost all the applications I used from within Ubuntu were those that came with the installation. Whether the installer actually built the applications on the fly or just installed binaries, I have no idea; that is beside my point. My point is that the end user should not be left with a bunch of source code and told to build his or her own applications. That is a simple and clear message. I certainly did not feel as if I were building the Linux installation while installing it on my system, even if it was indeed being built at the time.
I am not sure whether the ‘lazy programmer’ mentality of the Linux community can be changed in any way anytime soon, but I know it is a big problem. If Linux ever wants a place in laypeople’s homes, it has to invent a mechanism that does away with this “build it yourself” nonsense completely.
The “Build It Yourself” Mentality Is Nonsensical
At no point should a computer user be required to build an application from its source code in order to use it. This is an utter blind spot of the Linux ecosystem. It should not happen even once during the typical life-cycle of a computer in the hands of an average user. If it happens several times a day from day one, a lot of things are surely wrong.
How do we depart from the horrible ‘do it yourself’ experience that is still so obviously present in the Linux world? There are many ways. One is to stop including developer tools with Linux distributions as a universal practice. The developer tools can always be downloaded and installed separately by any end user who needs them, but they should not be shipped by default. This would give programmers an incentive to take the last critical step toward a usable application: actually building it, or conjuring up a fully automated build that is produced by clicking an install button, without any intervention from the user (no more than clicking a ‘Next’ or ‘OK’ button, or a few of them, as is the case with most installations under MS Windows).
This automated building can be handled either by the author of the application or by the OS itself. The crucial point is that at no time should the end user need to resort to the terminal and do some command-line Kung-Fu.
But will such a thing ever happen? My answer: absolutely not! The Linux community is already congratulating itself on the ability to build an application yourself. If that is considered a virtue, it is not going to change anytime soon. And because there are many different Linux distributions, none of which agrees with what the others are doing, the prospect is nothing short of grim. It has not happened over the past many years, and I see no sign of it being on the agenda of any Linux distribution for the foreseeable future.
The Myth of Flexibility and the Virtue of Being Obsolete
This alleged flexibility will guarantee the perpetual unpopularity of the Linux environment. The whole current paradigm of application installation is becoming obsolete, and something totally different is taking its place (think of web applications installed inside Google Chrome, iPhone apps installed without user intervention, or Android taking care of every aspect of installing and uninstalling everything an application needs to function properly). While the world shifts to ever more automated and hugely simplified ways of getting an application up and running, the Linux world has not even mastered the basics of the paradigm that came before the dying one. The way we install things in, say, Windows 7 is simple and straightforward. It does not need much user input. An application suite as enormous as Adobe CS5.5, which blows up to several gigabytes and touches thousands of registry keys and thousands of folders and files in hundreds of different places, requires only a click to be installed on a typical Win7 box. And yet even this paradigm is becoming obsolete.
The way we install and uninstall applications in Windows is already obsolete, and an easier substitute for it is long overdue. Look at how Android does things, or how iOS does a similar job. In neither case does the user see anything of the installation of an application beyond clicking an “Install” icon, once. Yet in Windows the user has to go through a series of dialogue boxes, clicking through many pages of “Next,” “Next,” and “Next” buttons, interacting constantly with an installer that is supposedly already automated. It is an obsolete way of installing an application: cumbersome, ugly, hard to follow for many users, and at times intimidating and unwelcoming. The situation is dire enough that a bunch of fine programmers came up with the idea of Portable Applications. The idea there is that installing an application should require nothing more than unzipping a compressed package, and that no registry values or other nonsense should ever be needed for a portable application to work properly. Decompress it and run it. That’s all! It was a hugely successful project that spread the idea of application portability so widely that several similar platforms, such as LiberKey, were built to implement it. This is the definition of a portable application from Wikipedia:
A portable application (portable app), sometimes also called standalone, is a computer software program designed to run independently from an operating system. This type of application is stored on a removable storage device such as a CD, USB flash drive, flash card, or floppy disk – storing its program files, configuration information and data on the storage medium alone.
When you look at a Linux environment, you would expect all applications to be portable and installable without any user intervention whatsoever, if only because there is no registry to set keys in, and dependencies should not be assumed before a typical installation.
While achieving total portability in Linux is relatively easy, the main problem is that a lot of applications do not come in packages that can be readily used. They must be built before they can be installed. So the main difference in portability between the Windows environment and the Linux one is not that portability cannot be achieved, but that in many cases under Linux the packages are not even built and ready to install, portable or not.
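The ‘decompress it and run it’ model does already work under Linux when a project bothers to ship prebuilt binaries. Mozilla, for example, distributes Firefox as a plain tarball that installs in exactly this spirit (the version number here is only illustrative):

    tar xjf firefox-7.0.tar.bz2   # no package manager, no building
    cd firefox
    ./firefox                     # run it straight from the folder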
The invention of the Synaptic package manager, and later the Ubuntu Software Center, is definitely a big step forward. However, one end of the equation is left wide open, and that open end allows the fun to be completely ruined. Ubuntu usually comes with plenty of high-quality software, more than enough for the day-to-day use of a person who only surfs the Internet, opens a few PDF or DOC files every other month, looks at a few pictures, sends out a few images every month, and perhaps listens to a CD once a year by accident. Certainly Ubuntu is capable of a lot more than that; no doubt about it. But up to this point, it needs no configuration, no installation of any application, and no extra effort to be totally useful to such a hypothetical user.
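And to give credit where it is due: when a program is in the repositories, the underlying package system really is a one-line affair. Synaptic and the Software Center are just graphical faces over commands like these (VLC is only an example):

    sudo apt-get install vlc   # fetch, resolve dependencies, install
    sudo apt-get remove vlc    # and the equally painless removal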
But what if the user’s needs suddenly grow beyond the above-mentioned scenario? Let’s say she takes a university course. This is when all the rough edges start to show through. As soon as she starts to investigate any new possibilities, the troubles begin.
Here, I don’t want to push my thought experiment to the limits of what I, personally, require of a computer. Had that been the case, I would have had to report a complete disaster. But in our hypothetical example, a person would very quickly see how raw and user-unfriendly the Linux environment actually is once she steps outside the default, out-of-the-box experience that Linux usually ships with. Here are a few examples of what she cannot do, and will not easily be able to do, in a Linux environment:
She wants to annotate her PDF files, something she could easily do in Windows with any of a vast number of alternative applications; she cannot do that here. The few options available in the Software Center fall short of what she was used to under Windows. Rest assured, there may be a couple of Linux applications with some annotation capabilities that I am not aware of, but they certainly lack many features, as is almost always the case with Linux applications. Digitally signing a PDF file? Don’t even mention it; that is an expert job under Linux. Editing a five-minute home video? Is there anything here you would compare even with VirtualDub, let alone with the better tools that run under Windows? You’ve got to be kidding me! The closest you can get to a basic but usable video-editing tool in Ubuntu is PiTiVi. Installing and configuring a firewall? No way! Linux already comes with one, so you don’t need any. Viewing and organizing your photos? You must be happy with what Shotwell provides for now; don’t expect anything fancy.
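To be fair on the firewall point: Ubuntu does ship one, ufw (the ‘Uncomplicated Firewall’), but it is disabled by default, and configuring it means, once again, the terminal. A minimal sketch:

    sudo ufw enable                 # turn the firewall on
    sudo ufw default deny incoming  # drop unsolicited inbound traffic
    sudo ufw allow 22/tcp           # e.g. still let SSH through
    sudo ufw status verbose         # confirm the rules

Which rather proves the point: the capability exists, but the layperson will never find it.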
These examples cover the obvious needs, and the applications mentioned are among the best of their kind available under Linux. Anything more serious is either not in the Software Center to be easily installed and has to be obtained elsewhere, or is not available at all (as in the case of a PeerBlock-like application; and I’m not even mentioning the Adobe products, most of which do not run even under Wine).
As Long As It’s Not Windows
When I look at the Linux landscape, I cannot help but see that the philosophy of the OS sometimes errs towards one and only one justification for its existence: as long as it is not Windows, we are happy. If the community has this spirit, or is driven by any such motivation, consciously or subconsciously, it will not be able to become a truly viable alternative to MS Windows (unless Microsoft screws up somewhere along the way, as it actually might; Microsoft is famous for being the giant that easily loses foresight and gets lost in its own imaginings while others seize the opportunities and run). If Linux ever becomes more popular than it is today while keeping all the flaws it already has, it certainly will not be because of the quality of its work, but because of the decline of its competitors. Android (with its mixed origins) caught up with the competition and recently surpassed its competitors’ market share in mobile devices, although it entered the scene really late. The reason? Not merely that a multi-billion-dollar company like Google was behind it. There are other OSs with multi-billion-dollar companies behind them that all but failed (I’m pointing at you, MeeGo and webOS; that’s right, and curiously enough both are based on Linux). The reason is that Android (which inherited a lot from Linux) has a far superior design to mainline Linux. Whether the two will merge, as Linus Torvalds predicted in 2011, is another issue.
Although Android has a heavy Linux heritage, it does not even attempt to be Linux, let alone MS Windows (it was blamed back then for not being a Linux copy, but that same decision not to be Linux saved its future). In comparison, for more than 18 years since the inception of Wine, Linux has been trying to run Windows applications. So Linux may pride itself on not being Windows, yet the applications that run under MS Windows are apparently too good to be totally abandoned. If not, and if Linux applications are simply sufficient, then why has the community been working so hard to make it possible to run MS Windows applications under Linux? To become more competitive, or to give end users more reasons to love Linux, one might say. But Mac OS is not MS Windows, and it has its own devotees (some people swear by it). Android is also not Linux, and it is gaining unprecedented momentum in the market.
These observations all point to one thing: the Linux community is confused (it might call this being diverse), and its core principles are not user-friendly enough. Had it been 1991, a product like today’s Ubuntu would have been a heaven-sent marvel beyond imagination, with no parallel whatsoever. But there is a little problem here: it is not 1991.
Resisting a Paradigm Shift?
It all started with Unity. It repelled me so badly that it made me rethink the entire philosophy of an open-source operating system like Linux and all of its derivatives. Am I in anaphylactic shock, showing a severe allergy to Unity and to the paradigm shift in user interfaces? Am I being too judgmental and too quick in my conclusions? I don’t think so, and here is why.
I feel as eager to see the Linux community prosper beyond my imagination as I was before trying out the Unity disaster. I still dislike the closed-source philosophy as much as I like the open-source one. I am all for free applications (free as in “free speech,” not necessarily as in “free doughnuts”). I still think that, by taking the right turns, the Linux community can easily become the one and only viable long-term option for many end users, far more than it is today. I believe in the great potential behind the development of the operating system. And I firmly believe that every human being has the right to a freely available and fully capable operating system within his or her reach, should he or she need it.
Nonetheless, the current state of the Linux operating system, despite all of its great server-side glories, is not ready for the average end user. There are architectural problems, usability problems, and problems with software availability and delivery in all flavors of Linux. The vast ocean of free software in the Software Center and the like is one millimeter deep. The user-friendliness of the operating system, and its ability to facilitate the tasks a modern operating system handles seamlessly, is also less than a millimeter deep. The slightest scratch at the surface brings up the same monster under the hood that, a few years ago, we used to see above the hood in a typical Linux environment.
Why?
It may have something to do with how R & D (research and development) is done in successful organizations compared to unsuccessful ones. A large part of effective R & D depends on feedback, and on how that feedback gets processed or discarded.
A dynamic organism, like a company that supports the development of an operating system, must have an efficient feedback system, or else all the effort that goes into developing a product can go to waste (compare how MS Windows Vista was developed and marketed with how Windows 7 was released: the latter enjoyed a large amount of feedback from end users, whereas the former did not. The lesson has been carried into the development of Windows 8, with its preview releases). The point is not to compile a lot of feedback and then do something totally different (many software companies don’t give a dime for what their customers want; they “know it all”). Not listening to customers is a deadly sin in any strategic plan. Even Apple has to listen to its customers when the pressure builds up (the case in point is Apple’s Final Cut Pro X v10.0).
There is no shortage of companies who think they “know” what is good for their customers. Apple is one of them. Apple believes it knows what customers must like or dislike; if the customers choose to differ, they can move on to the next product and the next manufacturer. It’s a “my way or the highway” mentality, and it can rarely succeed. Apple is a very rare exception. Another was Ford (“Any customer can have a car painted any colour that he wants so long as it is black.” -Henry Ford). But the number of companies that failed because their motto was like Apple’s or Ford’s is astronomical. This is generally a recipe for failure. Companies with such mottoes can succeed only while they enjoy a monopoly; as soon as healthy competition builds up, they start to lose their grip on their customers. That is what finally happened to Ford, and it is happening to Apple now. In a few years, Apple will be remembered as ‘once the most valuable company on earth,’ but nothing more.
Companies that started listening to their customers also started to see great improvements in their strategies for meeting those customers’ needs. There are countless successful companies that still believe “the customer is king.” Cliché? Call it whatever you want; the effect of implementing this “cliché” is beyond words, and it is going to stay that way for the foreseeable future. The end user will always be the key to the success or failure of any market strategy. No product will achieve wide acceptance in a competitive market while remaining deaf to the input of its customers.
The Linux community, therefore, needs to unify its efforts and find a way to use feedback from its users to regain its sight and make correct design decisions. Without such a feedback system, they are flying blind. And, unfortunately, that is exactly how they look from every vantage point.