The openSUSE adventure: Installing Packages

Well, I learned some more today in trying to install the software I use all of the time. As a long-time Kubuntu user I was used to how they set up their repositories and knew all of the command-line tricks for using apt, and none of that is any use to me now. :) But I knew that would be the case, so I persevered.

The first thing I learned is that there are a lot of repositories. I was trying to install a password manager I like, KeePassX, and finding which repository it was in took forever. openSUSE has a whole collection of repositories from its Build Service, and there are a bunch of them. As it happened, none of them had what I wanted. Then I found one called Packman, but that didn’t have it either. Finally I added opensuse-contrib, and that had what I wanted. So now I have about 20 repositories configured. I can’t tell whether that is a huge mistake. I did notice that for some reason every repository has a GPG key that is untrusted.
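For anyone else doing this kind of repository hunting, the same juggling can be done from the command line with zypper. A rough sketch; the project path and alias below are illustrative placeholders, not the exact repositories I used:

```shell
# List the configured repositories, with details like refresh settings
zypper lr -d

# Add a Build Service repository with auto-refresh enabled
# (<project> and the "myrepo" alias are placeholders)
sudo zypper ar -f http://download.opensuse.org/repositories/<project>/openSUSE_12.1/ myrepo

# Refresh the metadata; this is where you get prompted to trust
# each repository's GPG key
sudo zypper ref

# Search every configured repository for a package
zypper se keepassx
```

This would at least have saved me some clicking around while figuring out which of the 20 repositories actually carried the package.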

Then I had one piece of software that had to be compiled. Again, everything is different. Instead of a package with everything included, you have to install the components, like gcc and make, separately. Or at least that is what got me going. If you know better, please share the knowledge.
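I have since read that openSUSE groups related packages into "patterns," which gets you closer to the all-in-one experience I was missing. A sketch, assuming the devel_basis pattern (which bundles gcc, make, and friends) is what you want:

```shell
# Install the basic development toolchain in one shot via a pattern,
# instead of hunting down gcc, make, etc. individually
sudo zypper install -t pattern devel_basis

# Then the usual source-build dance for the software in question
./configure && make && sudo make install
```

If that pattern is not the right one for a particular build, `zypper se -t pattern` lists the others.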

My one big unsolved problem from today is that my favorite Chrome extension, G+me for Google Plus, is not working. I have installed it, removed it, reinstalled it, and it just isn’t working. At this point I have to call it a day, but I will try to get it going again. All of my other Firefox and Chrome extensions/add-ons seem to work fine. So, the day is mostly a success, but a few things to work on yet.

Calling All 2011 Ohio LinuxFest Attendees!

We would really like to gather a teeny little bit of information about what you liked. I promise this can’t take more than a couple of minutes:

And to make it worth your while, we will select one respondent at random to get a free Professional pass to the 2012 event, which gets you: A day of training with the OLF Institute, which includes lunch on Friday, a t-shirt, and admission to the main OLF conference on Saturday. These normally run $350, so it is a valuable prize.

So go to and fill out the survey.

And thanks from all of us at OLF.

openSUSE 12.1

Well, I just installed openSUSE 12.1 on one of my computers. Another one of my New Year’s resolutions was to try some other distros. I picked openSUSE because it is a KDE distro, and I definitely prefer KDE. I am hoping to find that RPM is better than it used to be. So far, YaST looks pretty good to me. I used it to install Samba and connect the computer to the rest of my network, and it was really easy to do. In fact, I thought it was easier than in Kubuntu, which has been my distro for the last 5 years.

I figured out how to add a repository, KDE:Extras, and I installed a plasmoid I really like, Veromix. But I cannot figure out how to get it onto a panel. I really like this particular plasmoid, so if anyone can explain this I would appreciate it. I tried installing it directly on the panel using Add Widget, but that errored out on me, hence my attempt to add the repository and get it that way.

I know I will stumble over how to do things in this new distro, but I am going to keep with it for at least a few months and give it a fair chance. It is time to learn some new things.

Freedom is Never Free

The words we use to describe what we do can matter a lot in how we in the FOSS community think about what we do. Once upon a time there was Free Software, as defined by Richard Stallman in the famous Four Freedoms:

  1. The freedom to run the program, for any purpose (freedom 0).
  2. The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  3. The freedom to redistribute copies so you can help your neighbor (freedom 2).
  4. The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

Now, I happen to be a big supporter of this. I love the idea of Free Software. And I have noticed that some people I greatly respect, such as Jon ‘maddog’ Hall, are always careful to refer to it as Free Software. Nonetheless, there are problems with this terminology. If you have been around FOSS for very long you have noticed that the word “free” admits of several meanings, one of which has to do with cost. And that was never the point in FOSS. There is nothing in the definition of FOSS or in the GPL that says you are prohibited from charging for your software. And because of the ambiguity in “free” we have to be careful to use “Free As In Freedom” to denote what Stallman meant by the Four Freedoms, as distinct from “Free As In Beer” to denote lack of a monetary price.

A later term was developed called Open Source, which put the focus on making the source code freely available. Now, it is clear from the Four Freedoms above that this is essential to Free Software, so I am not sure just how big a difference this makes. But if you want to explain to the average user why any of this matters, you have to acknowledge that the average user really doesn’t care if the source code is available since they can never imagine themselves trying to modify the code. In point of fact, I would expect that it is highly likely that I will go to my grave without ever attempting to modify the code of any software I use. I am not a programmer, and I don’t have any desire to be one. I like programmers, some of my best friends are programmers, and the world is undoubtedly a better place because of programmers, but I don’t think that is my role in FOSS. So I don’t have strong interest in looking at the source code. And to you in the back with your hand up, I agree that it would be silly to buy a car that had the hood welded shut, but I don’t repair my own cars either. Instead I support the economy by helping a mechanic to earn a semi-honest living.

The term I have adopted for this purpose is to call what we do “Community-Supported Software” because I think that puts the emphasis where it more properly belongs, at least for some uses. If we value this software, I think we all have a responsibility to support it in whatever way we can. Some do that as programmers, but the rest of us have a role to play. And I want to explore some of those options (and maybe motivate some people to get involved) in some posts over the next few weeks. And if you find this discussion at all useful, please feel free to forward to anyone you think will be interested. Because I think it is true that freedom is never free. It requires all of us to take part in defending and supporting it.

Listen to the audio version on Hacker Public Radio!

It’s Just Semantics

Yesterday I began my morning with a meeting involving members of various departments who are dealing with a major change to our IT systems. We are replacing a system from Vendor A with another from Vendor B, and just about everything changes. As a result, we have a lot of meetings. But I didn’t bring this up to get sympathy. Everyone has pain in their lives, and mine is not particularly more impressive than yours.

But in yesterday’s meeting, we got to a discussion of terminology. You see, Vendor B sold us a system that uses different names for a wide range of our data fields, and we needed to agree on the names we would use in our reporting systems. Should we use the new vendor’s names, use the ones we had traditionally used, or some combination of the two? Now, at this point I’m sure you’re thinking “Gosh, that sounds like fun. I wish I could have been in that meeting!”

But what got me thinking was when one of the IT folks said “That is just semantics. I don’t care what you call them.” This statement was so profoundly wrong that I nearly admired it for the awesome scope of its wrongness. The first level of wrongness comes when you consider that all of us at this particular site, in all of the different departments, need to talk to each other. And that means we all need to understand what we are talking about. I wondered if this IT person had ever heard the term “naming convention”, and if so, did he comprehend why that was important.

Then I got to thinking about that phrase “It’s just semantics.” This is where the real problem lies, I realized. It is a common phrase, usually used to imply that the meaning of the words is not important to understanding the issues at stake. In this colloquial sense it says that people sometimes use weasel words to avoid a truth. For example, a politician trying to explain away an embarrassing situation, like Clinton saying “I did not have sexual relations with that woman.” We correctly see that people who do this are misusing language to confuse the situation.

But saying that this is semantics is profoundly wrong. What is really happening when people use this phrase is that they are saying that words and their meanings do not matter. And when you go down that road you have a serious problem. I doubt you can even think intelligently if you cannot use words with a certain degree of precision. And communication becomes pretty much impossible if we cannot use words and agree what we mean by them. That is what semantics is really about. So if someone accuses me of using semantics, I thank them for the compliment. What they have said is that I care about what I say and try to use the best words to convey the meaning I have in mind. Of course, they don’t realize that is what they said.

Trent Reznor on Social

This courtesy of the current issue of Wired magazine: “I don’t care what my friends are listening to. Because I’m cooler than they are.”

So, can I claim that is why I’m not interested in social recommendations for music?

Android, Apple, and Market Dynamics

Or Why Tim Cook may be the world’s unluckiest man

Please understand that I don’t wish anything bad to Tim Cook. I’ve never met the man. But I am observing something about the market dynamics in the smartphone and tablet market that I have not yet seen anyone else talk about. Eric Raymond in his blog Armed and Dangerous has covered the idea of price pressures from Android affecting Apple, and I consider his blog required reading for anyone interested in this topic. But I think I can offer a slightly different take on the issue.

If we start with smartphones, Apple really kicked off this market, and raced to an early lead. The first iPhone was unveiled in January 2007, and there was nothing like it. This phone got 2.7% of the mobile phone market in 2007, 9.6% in 2008, and 15.1% in 2009. On November 5, 2007 the Open Handset Alliance was announced, and the very first version of Android was unveiled. On September 23, 2008 the G1 was released with Android 1.0. In 2008 this gained Android only 0.5% of the mobile phone market, but this increased to 4.7% in 2009. So in 2009 we have a situation where Android’s market share is less than one-third of Apple’s. Yet by November 2010 Android pulled ahead (slightly) at 26% to Apple’s 25%. And by September 2011, 10 months later, Android is at 44.8% to Apple’s 27.4%. What makes this even more significant is that these share numbers are for the U.S., and it appears that Android is even more dominant in other countries.

If we look at the timeline, it looks like Apple is first to market, and holds a lead for nearly three years before the competition catches it. I think this may be significant for the tablet market. The iPad was introduced in January 2010. In the most recent figure I could find, which is for September 2011, it looks like 75% of the market is held by iPad, and 25% by Android. I think these numbers are pretty comparable to what we saw in the smartphone market if you allow for the fact that the market share numbers were for all mobile phones. Nokia was still selling candy bar phones in 2009, for instance. If we take the smartphone market in 2009 as a two-horse race between Apple and Android, it really looks very close to 75% Apple and 25% Android then as well. I point this out because even as Android was starting to dominate sales in the smartphone market, I heard a number of people claim that the tablet market was different. But I never heard anyone give a compelling argument as to why the tablet market would be different. Maybe there will be a different outcome this time, but I’d like to hear a sane evidence-based argument before I believe it. If the pattern from the smartphone market is repeated, we could see something like a 50/50 split by the end of 2012, with Android pulling ahead to dominant position by the end of 2013.
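The analogy above can be made into a back-of-the-envelope calculation. The share figures and dates are the ones quoted above; the assumption of linear share growth is purely illustrative, not a real forecasting model:

```python
import math

def months_to_parity(start_share, monthly_gain):
    """Months for a linearly growing market share to reach 50%."""
    return math.ceil((50 - start_share) / monthly_gain)

# Smartphones: Android at ~25% (on the two-horse-race basis) in late 2009,
# parity with Apple by Nov 2010 -- call it 12 months, so roughly
# (50 - 25) / 12, about 2 points of share gained per month.
smartphone_gain = (50 - 25) / 12

# Tablets: Android at 25% in Sep 2011.  At the same pace, parity arrives
# about 12 months later, i.e. around the end of 2012.
print(months_to_parity(25, smartphone_gain))  # → 12
```

Crude as it is, this is all the prediction in the text amounts to: same starting split, same pace, so roughly the same one-year march to parity.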

Now at this point I have mostly recapped some numbers, but not added anything significant to Eric Raymond’s analysis. I think I can do that now by adding something based on the history of the consumer electronics market. Back in 1990 Professor Michael Porter at the Harvard Business School published a very important work called The Competitive Advantage of Nations. I used this book with some of my more advanced students because it had some great insights. Prof. Porter started with the insight that national economies are not the appropriate level of analysis, and that in specific markets a country might have an advantage while not having it in other markets. So he looked at a number of specific markets where one or another country dominated and asked why that was the case. In the market for consumer electronics, Japan was clearly dominant (remember that in 1990 Sony was still a major force, not a bumbling also-ran). And why was that? Because of the intense competition within the domestic Japanese market. Japanese consumers would purchase the newest products with great fervor, and always demand newer, better products. The product cycles were a matter of months, while comparable US firms, for instance, were still operating with product cycles of years. As Prof. Porter noted, this placed huge pressure on Japanese companies, such that if they could succeed in the Japanese market they would find competition in the global market relatively “a piece of cake”. And we now know that they basically eliminated the American firms in this market and took it over.

I think this example is relevant to the smartphone and tablet markets as well. I expect that the combination of rapid innovation, short product cycles, and price pressures will create further challenges for Apple. The competition is not from Japan this time, but from the countries that learned from Japan, which would be South Korea, Taiwan, and China. This is where you find the companies like Samsung, HTC, LG, Huawei, etc. HTC, for instance, doubled its shipments of phones from the first half of 2010 to the first half of 2011, and its product cycle is in the area of 6-12 months between concept and a product in the hands of the consumer, according to its COO, Matthew Costello (The Economist, 10/8/11). This is rapid, and it is only one company. Together, these companies represent the next wave of Asian Tigers, and they came to the top by out-competing the Japanese. Already in China, which by any measure is the biggest growth market for mobile, Samsung has a larger market share than Apple. Add in Motorola, and new entrants looking for a foothold, like Acer, and you have a lot of competition. These companies are releasing a new phone every month, and sometimes more than one. And any time a feature proves popular, it is quickly adopted by every manufacturer. As a result, the Android phones, which started out playing catch-up, are now moving ahead of Apple. Already the technical specs for Android phones exceed those for Apple, and in terms of software it was notable that the latest version of iOS mostly played catch-up to Android.

One way to think about this is the contrast between Intelligent Design and Evolution. Apple is a very centralized tightly controlled ecosystem that represents the Intelligent Design side, while Android represents the Evolutionary approach. From this perspective, the fragmentation that some people complain about is not a failure, it is Android’s greatest strength. This is what lets Android move into every conceivable market segment, and is a central reason for Android having double Apple’s share in the smartphone market. And when you conceive of this as an Intelligent Design vs. Evolution competition, it is worth noting Leslie Orgel’s Second Rule: “Evolution is cleverer than you are.” Even if Apple’s designers are, pound-for-pound, better than anyone else’s designers, they can’t beat the frenzy of experimentation that comes from the Android market.

The next area worth looking at is China. This country is just starting to gear up for massive smartphone usage, and there is no doubt that large numbers of Chinese consumers appreciate the Apple products. After all, look at the many Apple stores there that are doing great business. Of course, Apple never heard of these stores until recently because they are all fakes. Still, imitation is the sincerest form of flattery. But the largest growth in the Chinese market is going to come from less expensive phones, and that is simply not in Apple’s DNA. A Wall Street Journal article analyzing the Chinese market found that right now Samsung leads Apple slightly (15% to 13% respectively). But the really interesting development is the charge by Huawei, ZTE, and other Chinese companies to develop smartphones aimed at the price point of 1000 yuan (around $157 at the time of the article). ZTE, for instance, has an order for 2 million of its Blade smartphone, priced at 999 yuan, or free with a 2-year contract. And these phones run Android. Computer maker Lenovo has also jumped in with an Android phone (the A60) with a price of 959 yuan. This kind of competition at the low end will be just as challenging for Samsung as for Apple, but it represents gains for Android.

In the tablet market, I expect a similar dynamic for the same reasons. Too many analysts have looked at current market share and declared “Game Over,” using statements like “There is no market for tablets, there is only a market for iPads.” This completely ignores the history of smartphones, and is, I believe, fundamentally mistaken. Android already has approximately 25% of the global tablet market. And just within the past couple of days (as I write this) comes the news that Amazon has increased its orders for the Kindle Fire for the second time, up to 5 million units. And this has not even been officially released yet, so they must be getting a lot of pre-orders. So if this follows the same pattern, and I see no reason why it shouldn’t, expect rough parity between Android and iOS in the tablet space by the end of 2012, with Android pulling ahead in 2013.

To get back to the subhead of this article, all of these factors were in play when Steve Jobs was still the CEO of Apple. If he had not gotten cancer, if he was still alive and running things at Apple, everything we have talked about would have happened pretty much on schedule. In fact, the dominant market share in smartphones going to Android did happen while Steve was in charge. For now, Apple is able to generate huge profits even as its market share is eroding, but that cannot hold up indefinitely either. Electronics companies all over the world can see the importance of the mobile space and want to be there. And Android is pretty much freely available to all of them. Chinese companies that focus on driving down manufacturing costs (the story goes that if you ask any Chinese manufacturer what they plan to have as a marketing advantage, they all answer “Price”) will adopt Android because it gives them a software stack free of charge. Combine this innovation with Moore’s Law, and ask yourself what happens when the equivalent of a Samsung Galaxy Nexus is free with a contract, or perhaps $150-200 outright with no contract. There is no room for Apple’s margins in that scenario. And that is why I think Tim Cook may be the unluckiest guy in the world. When these things happen, no one will say that it was due to market forces that no one could have prevented. They will say that somehow Steve Jobs would have prevented it, and that Tim Cook just was not up to the job.

Ohio LinuxFest Registration is Open for Business!

The premier Linux event in the Mid-West USA will run Sept. 9 through Sept. 11 in Columbus, Ohio, and registration is now open to all. Keynoters include Cathy Malmrose, Bradley Kuhn, and Jon ‘maddog’ Hall. There is an extensive Medical track focusing on the use of Open Source in various aspects of medicine, training from the Ohio LinuxFest Institute, and a great slate of presentations. Register now and reserve your place.

As always, we have an “Enthusiast” category for those short on funds. If you pre-register at the Web site, you can join us free of charge. Walk-ins will be charged a small fee.

Wikipedia and The Consumer Internet

I was thinking today about something I have thought about before, but a new connection happened in my mind. It all started with Wikipedia. I use Wikipedia a lot, in fact I use it enough that I recently felt compelled to make a small donation, as I usually do for open projects I rely on that need support. I have found Wikipedia to be generally pretty accurate, particularly if the topic is a technical one. The way I assess accuracy in any type of source is to take a topic I happen to know a lot about, look at what the source says, and ask “Did they get it right?” When I do this with Wikipedia, I tend to find that they do get it right, and in fact I find they do a lot better than most of the media on this particular test. At the same time, I often encounter people who say they don’t trust Wikipedia, and it has become common to hear that teachers, for instance, will prohibit students from using Wikipedia at all as a source of information. To my mind this is a very interesting disconnect, and I think there may be larger implications we can tease out about this.

The first thing that comes to my mind is that this rejection is a lazy, minimum effort path to feeling sophisticated. It is minimum effort because you don’t have to actually assess the quality of the information. You may even be discouraged from attempting to do an assessment by a school policy or by a consensus of the establishment. Going along with everyone else is always the path of least resistance. But I also think there is a seductive pull to the idea that you can appear sophisticated by giving a sad but knowing look while saying “I would never trust a source like Wikipedia.” After all, they let anyone write and edit Wikipedia. This is so unlike the highly reliable media which only lets very competent people like Judith Miller, Jayson Blair, and Glenn Beck present information to the rest of us.

Reliability of information

Of course, the above is somewhat oversimplified, but in both directions. As to Wikipedia, they do not, in fact, let just anyone do whatever they want on the site. There are controls and safeguards in place that catch a lot of the problems, and the ones that don’t get caught right away are usually on pages no one was looking at to begin with. As with any open system that relies on “many eyeballs make bugs shallow”, you must first make sure many eyeballs are indeed looking. Every open project has this problem, and how you solve it is probably worthy of a good book in itself. But anyone who uses Wikipedia regularly knows that in fact if you edit a page, your edit will probably go on a list to be reviewed by someone else with a lot of Wikipedia experience. And they have rules in place, such as that you cannot edit your own page in Wikipedia. (This only applies to celebrities, of course, since you and I probably don’t qualify to even have a page on Wikipedia.) They also tend to require independent corroboration from other sources. As an example, please look at the page for Earned Value Management. This topic may look like all Greek to you, but it happens to be a page I have referred to more than once since I am in this line of work. The information is very accurate. And if you scroll down to the bottom of the page, you will see they have a good list of references, and they are the right references. They used Quentin Fleming’s book as a reference, for instance; he really did write the book on this topic.

Now as regards the so-called “Main Stream Media”, picking on Judith Miller, Jayson Blair, and Glenn Beck does involve a certain amount of snark, but it’s my blog and I’ll be snarky when I feel like it. But the deeper issue is that the press is not nearly as open, and therefore is less likely to catch and correct errors. I frequently find a Wikipedia page where an editor has posted, at the top of the page, a notice to the effect that the page needs more independent sources before it can be acceptable. Have you ever seen this on a story in your newspaper? Of course not. We are supposed to assume that somehow there is someone in the background who is doing this, but clearly this does not always happen. In fact, any media outlet that is in business to make money has a strong incentive to push the other way. Being the first to break a story matters, and getting independent verification only adds time. And then who are the sources for those stories? Most newspapers, for instance, will have policies about limiting or not using unnamed sources, but they manage to prevent such policies from interfering with a good story. And that means they can be manipulated to publish stories that are either not true (Iraq has nuclear weapons!!), or seriously slanted.

Now, the point of this analysis is not that Wikipedia is a better source than the Washington Post, though if you catch me on the right day I might be interested in that discussion. The real issue is that you should not trust anything you read or anything you see on television or anything you hear on the radio without first doing some thinking and testing. And that is why I called the rejection of Wikipedia by many teachers “lazy”. The real point that any good teacher should be making is that you need to assess the validity of all sources, to question the internal consistency of their reports, to see how they match up with other sources. That is the only way to have an intelligent understanding of what they are saying. It isn’t fool-proof of course, but it gives you a fighting chance.

The Consumer Internet

And that leads to the final connection in this essay. Doing what I have suggested is not easy; it demands engagement with the material and genuine thought. That much is fairly obvious. But the more subtle point is that it starts moving you in the direction of being a participant/producer rather than a passive consumer when it comes to information. And there are powerful forces that very much want to make all of us into passive consumers. And that would mean losing one of the great opportunities that this technology gives us.

When the Internet was first developed, no one thought it was particularly important, so no one bothered with the fact that the Internet is inherently a much more participatory medium. After all, if it is just a toy for a few geeks, who cares? So things like Web sites, then blogs, could flourish without anyone noticing. But as the Internet became more popular and therefore more important, those powerful forces had to take notice, and devise ways to get control. Many of the intellectual property arguments are really about this, when you look at it. Remix is inherent in how the Internet works now, and interest groups are working hard to sue it into oblivion. If you quote from an AP story or a newspaper article, you get a cease-and-desist or even a suit. Same thing if you take a small bit from a song, or a movie, or a TV show. This is certainly part of the insanity of the “Culture of Ownership” and of copyright run amok, but do not overlook that it is an attack on people being productive with the information around them. Bach and Beethoven would be criminals in the current regime because they too were remixers of the music around them. In one form of this principle, attributed to Picasso, it reads “good artists copy, great artists steal.” In fact, almost by definition to be a creative and productive participant in society you have to engage with the cultural material around you. This was always understood until such time as a few corporations found they had a financial interest in tying up everything.

So now we find ourselves fighting to keep a medium of creativity and participation. That is one of the major issues with network neutrality. The carriers and the corporate producers of culture want to regain control and turn all of us back into passive consumers of culture rather than active producers. Why is it that everyone has faster download speeds than upload speeds? The carriers will mumble about technical issues, but these are not the real point. Equal upload and download capacity is just as technically feasible as the system we have now. But the truth comes out when they say that “no one needs that much upload capacity”. Well, we do if we are equal participants in the generation of culture, and that is the point. If we start generating our culture, maybe we don’t have as much need to buy it from the RIAA or from Hollywood. And that is something that matters when we talk about Wikipedia. For all of its faults, and there are many faults, it is in the last analysis an expression of creativity that comes from people, not from anointed gate-keepers.

I think we all need to keep this in mind and fight to keep a participatory, creative, and generative Internet. And while you are at it, support places like Wikipedia and the EFF that are trying to keep it that way.

Indiana LinuxFest

We have a new arrival on the scene for Linux and FOSS fans. It is Indiana LinuxFest, and it is running from 3/25 through 3/27/11 in Indianapolis. The Web site with more details is at This looks like a nice addition and it comes at a time when we are coming out of our winter hibernation. I will probably be going, and I may even do a presentation there. So get your hotel room and make your plans.