Category Archives: Editorial

Proper Date Formats

Something you quickly run into if you correspond with people in both the U.S. and Europe, which I have done over my career as well as in my personal life, is that we don’t write dates the same way. If you think March 14th is Pi Day because in the U.S. it is written as 3/14, people in most of Europe will wonder why you think there is a 14th month of the year. And if you want to make a joke about May 4th, as in “May the fourth be with you”, it is 5/4 in the U.S. and 4/5 in most of Europe. It gets even more complicated once you drag in the rest of the world; there is simply no uniformity, as you can see on this page at Wikipedia. And we are not even consistent in how we talk about dates. In the U.S. we might well say “May 4th”, and that does indeed match how we write dates. But then we insist that our independence day is the “Fourth of July”, almost as if to say we are no longer a British colony, yet we use the British format for one of our most important dates.

In my experience, each side thinks the other is a bit odd, but regards it as a harmless eccentricity. But which side is correct in this? The answer, of course, is neither. The one absolutely correct date format has been defined, and you can find it in the ISO 8601 standard. The correct date format is YYYY-MM-DD, because that puts the elements of the date in a logical order. Why is this the logical order? Well, suppose you were filing documents by date. Would you start by putting all of the documents from the 4th day (without regard to month or year) into a group? Or would you first collect all documents for a given year? Now, you might argue that filing documents is something people don’t do as much of these days. We have computers and digital documents, we don’t need any filing cabinets. But that only strengthens my argument, as you can easily verify. For example, I am writing this on February 13, 2025. If I use a date code for my digital file, and I make it 02132025, what happens if I later create a file on January 6, 2026? That would then be 01062026. Try this, and you will see that in your file manager 01062026 will appear before 02132025, because computers sort file names character by character, from left to right.

But if you follow the ISO 8601 standard, the most significant part of the date is on the left, and all of your files will be in order. And once you get used to it, your life is easier. An example of this is photos. My wife and I like to travel, and we take a lot of photos using our smartphones. Every photo we take uses a date/time stamp as part of the file name, and the dates all follow the ISO 8601 standard, so I can easily sort my photos in the order in which they were taken. And since I have over 13,000 photos in my Flickr Pro account, a little help with sorting them is really nice. I now use this format not just for digital file names, but for most purposes where I write dates. It just makes sense.
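
To see the difference concretely, here is a minimal Python sketch (the file names are hypothetical examples of mine) showing how the two naming schemes sort:

```python
# File managers sort names character by character, left to right.
us_style = ["02132025.txt", "01062026.txt"]       # Feb 13, 2025 and Jan 6, 2026
iso_style = ["2025-02-13.txt", "2026-01-06.txt"]  # same dates, ISO 8601 style

print(sorted(us_style))   # ['01062026.txt', '02132025.txt'] -- the 2026 file sorts first!
print(sorted(iso_style))  # ['2025-02-13.txt', '2026-01-06.txt'] -- chronological order
```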


Why I Gave Away a 3-D Printer

Ken Fallon posted a request on the Hacker Public Radio mailing list for shows about 3-D printers, and I innocently replied that I couldn’t help since I gave mine away. This of course led to Ken saying he would love to have a show about why a hacker-type person would give away a perfectly good 3-D printer, so I was trapped.

In October of 2017 I went to Ohio LinuxFest, which I have done many times. I spent a few years running publicity for them after all, and it is a good convention for open source folks. Now how did a guy from Michigan get involved in an Ohio event? For those who are not from the Midwest of the United States, Michigan and Ohio are “friendly enemies”. There was a border war in the early 19th century, which Michigan won when Ohio was forced to take Toledo. (That is a joke. Actually we have a family membership at the Toledo Museum of Art.) And the University of Michigan and Ohio State University are football rivals that close out their seasons each year with the rivalry matchup. But the joining of this University of Michigan alumnus with the Ohio LinuxFest came about because of Penguicon, which I had been going to for some time, and where I became the Tech Track programmer for a few years after I stepped down from my position at Ohio LinuxFest.

I had gone to a panel at Penguicon where Jorge Castro of Canonical talked about how to get help with your Linux install. (Jorge recently left Canonical to join VMware, where he is Community Manager.) It was good, but I noticed something missing: he never mentioned Linux User Groups! I was at the time the leader of the Washtenaw Linux User Group, and we helped people all the time at our monthly meetings. And I was certain that there were lots of other groups out there doing the same thing. So I spoke up and asked Jorge to “correct the record”, which of course he graciously did. But then in the hallway I was approached by Beth Lynn Eicher, who said they needed someone at Ohio LinuxFest to be the liaison with the Linux User Groups. So I agreed to take that on, working under Joe “Zonker” Brockmeier, who was in charge of publicity. The following year Zonker stepped down (he is now Editorial Director at Red Hat), and I became the head of publicity.

Even after I stepped down a few years later, I continued to attend each year; I think 2019 was the first year I missed since 2008, my first year attending. I had retired, my wife and I had a trip for our 40th wedding anniversary, and other family matters just filled up my schedule. And this year the event is virtual, for obvious reasons. But one of the ways Ohio LinuxFest raised some cash (and it takes a lot of money to put on an event like this) was by having a raffle. Corporate sponsors would donate items to be raffled off, and attendees would buy raffle tickets. So of course I did what I usually do and bought something like $20 worth of tickets. And when they got to the main prize, a 3-D printer, my name was the one they called out! So that is the story of how I obtained the printer. But how did I give it away?

That takes me back to Penguicon. Penguicon chooses a charity each year to receive both attention and some money raised through raffles and such. And in 2016 this was an organization called e-NABLE, which uses 3-D printers to create prosthetic limbs for children who are missing limbs due to things like birth defects. I thought this was a very good thing to be doing, and I was proud that Penguicon was promoting it. So when my name was called at the OLF raffle, I knew almost immediately what I would do. My choices were either to have a neat toy I could play with, or maybe make lives better for some children, and that was no contest at all.

The reason e-NABLE was the charity that year at Penguicon was that one of the organizers was involved with the group and was making limbs. So when the printer was delivered to my home, I messaged him to see if he could use it. It turns out the one I won was a much better printer than the one he had been using, so he could do even more good work with it. And it is not as if I lack for toys in my life. I know I did the right thing, and I have never regretted it.


Statistics and Polling

This is a bit of a change of pace, but I got some inquiries about this and thought I would offer my own two cents on something that often confuses people. My qualifications for this are two-fold:

  1. In my past life I was a professor who taught classes in Statistics;
  2. I have worked for a political consulting company that among other things performed polling for clients.

So you can use this in deciding if you want to pay any attention to what I have to say on the subject. 🙂

To get started, the basic question of epistemology: how do we know what we say we know? In the case of statistics, the basic mathematics began to be developed as a way of analyzing gambling. When you play poker, a hand with three of a kind beats a hand with two pair because two pair (which shows up 4.75% of the time) is more likely than three of a kind (which shows up 2.11% of the time). But after its start in gambling, statistics took a big step during the Napoleonic wars, when for the first time large armies met and the casualties mounted up. Some doctors realized that gathering evidence about wounds and their treatment would lead them to select the best treatments. But the key factor is that this is all based on probability. And the best way to think about probability is to think about what would happen if you did the same thing over and over. You might well get a range of outcomes, but some outcomes would show up more often. And this is the first thing that throws a lot of people, because they often have this sense that if something is unlikely, it won’t happen at all. And that is simply untrue. Unlikely things will happen, just not as often. As a joke has it, if you are one in a million, there are 1,500 people in China exactly like you. But the heritage of gambling persists in the technique called Monte Carlo simulation, which runs an experiment many, many times, often via a computer algorithm that generates random data, to test theories. John von Neumann understood the significance of this approach, and programmed one of the first computers, ENIAC, to carry out Monte Carlo simulations.
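
The poker figures above can be derived exactly with combinatorics, but a minimal Monte Carlo sketch in Python illustrates the “do it over and over” idea by simply dealing a lot of random hands and counting:

```python
import random
from collections import Counter

# A Monte Carlo check of the poker figures quoted above: deal random
# five-card hands and count how often each rank pattern shows up.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]

def classify(hand):
    # Sorted rank counts: [2, 2, 1] is two pair, [3, 1, 1] is three of a kind.
    counts = sorted(Counter(rank for rank, _ in hand).values(), reverse=True)
    if counts[:2] == [2, 2]:
        return "two pair"
    if counts[:2] == [3, 1]:
        return "three of a kind"
    return "other"

trials = 200_000
tally = Counter(classify(random.sample(deck, 5)) for _ in range(trials))
for name in ("two pair", "three of a kind"):
    print(f"{name}: {100 * tally[name] / trials:.2f}%")
# Should land close to the exact values: 4.75% and 2.11%.
```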

The next key concept is called the Law of Large Numbers, which in layman’s terms says that if you repeat an experiment many times, the average result should approach the expected result. Note that it is the average we are talking about here. Any particular experiment could give weird results that are nothing like the expected result, and that is to be expected in a distribution of results. But when you average across experiments, the occasional high ones are offset by the occasional low ones, and the average result is pretty good. To get this, though, you need to do it many, many times. The more times you repeat the experiment, the closer your average should be to the expected result.

Our third key concept is Random Sampling. This says that every member of a population has an equal chance of being selected for a sample. And the population is whatever group you want to make a claim about. If you want to make a claim about left-handed Mormons, your sample should exclude any right-handed people and any Lutherans, but it should afford an equal chance of selection to every left-handed Mormon. This is where a lot of problems can arise. For instance, many medical studies in the 20th century included all or mostly men, but the results were applied to all adults. This is now recognized as a big problem in medicine. When this happens we call the problem sampling bias.

So, with these basic concepts (and see, I did not use any math yet!) we can start to look at polling, and just how good it is or isn’t as the case may be. And it is often very good, but history does show some big blunders along the way.

The first thing to get out of the way is that sampling, done properly, works. This is a mathematical fact and has been proven many times over. You may have trouble believing that 1,000 people are an accurate measure of what a million people, or even 100 million people, will do, but in fact it does work. When there are problems it is usually because someone made a mistake, such as drawing a sample that is not truly an unbiased sample from the population in question. This does happen, and you need to be careful about it in examining polling results. In the earlier part of the twentieth century there were some polls done via telephone surveys, but because telephones were not universally available at that time, these polls overstated the views of more affluent people, who were more likely to have phones. By the latter part of the century, however, telephone surveys were perfectly valid because almost everyone had a phone (and the few who didn’t were not likely to be voters anyway). But now we have a different problem, in that many people (myself included) have gone to using mobile phones exclusively, while the sampling methods in many cases relied solely on landline telephones. Polling outfits are adjusting for this, so it should not remain a problem. But you need to watch out for ways pollsters will limit the sample. A big issue is whether you should include all registered voters (in the U.S. you need to be registered before you can vote; I am not familiar with how other countries handle this), or whether you want to limit it to “likely voters”. Deciding who is a “likely voter” is a place where some serious bias can creep in, since it is purely a judgment call by the pollster.

So how do we know that samples work? We have two strong pieces of evidence. First, we know from Monte Carlo simulations how well samples compare to the underlying populations in controlled experiments. You create a population with known parameters, pull a bunch of samples, and see how well they match up to the known population. Second, we have the results of many surveys which we can compare to what actually happens when an election (for instance) is held. Both of these give us confidence that we understand the fundamental mathematics involved.
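
Here is a sketch of what such a controlled experiment might look like in Python (the 52% figure is a made-up parameter for illustration): build a population where we know exactly what fraction holds some opinion, then pull repeated samples of 1,000 and see how close the estimates come.

```python
import random

# Hypothetical population of one million where we KNOW 52% hold some opinion.
population = [1] * 520_000 + [0] * 480_000

# Pull many random samples of 1,000 and record each sample's estimate.
estimates = [100 * sum(random.sample(population, 1000)) / 1000
             for _ in range(1000)]

print("true value: 52.0%")
print(f"mean of the sample estimates: {sum(estimates) / len(estimates):.2f}%")
print(f"range of estimates: {min(estimates):.1f}% to {max(estimates):.1f}%")
# The estimates cluster around 52%, nearly all within about +/- 3
# percentage points -- from samples of just 1,000 out of a million.
```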

The next concept to understand is the Confidence Interval. This comes from the fact that even an unbiased sample will not match the population exactly. To see what I mean, consider what happens if you toss a fair (unbiased) coin. If it is a truly fair coin, you should get heads 50% of the time, on average, and tails 50% of the time. But the key here is “on average”. If you tossed this coin 100 times, would you always get exactly 50 heads and 50 tails? Of course not. You might get 48 heads and 52 tails the first time, 53 heads and 47 tails the second time, etc. If you did this a whole bunch of times and averaged your results, you would get ever closer to that 50/50 split, but probably not hit it exactly. And what this means is that your results will be close to what is in the population most of the time, but terms like “close” and “most of the time” are very imprecise. How close, and how often, really should be specified more precisely. And we can do that with the Confidence Interval. This starts with the “how often” question, and the standard usually used is 95% of the time. This is called a 95% confidence interval, but sometimes the complement is used and it gets referred to as “accurate to the .05 level”. These are essentially the same thing for our purposes. And if you are a real statistician, please remember that this podcast is not intended to be a graduate-level statistics course, but rather a guide for the intelligent lay person who wants to understand the subject. The 95% level of confidence is kind of arbitrary, and in some scientific applications it can be raised or lowered, but in polling you can think of it as the “best practice” industry standard.
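
A quick sketch makes the point: repeat the 100-toss experiment 10,000 times and see how often you hit exactly 50/50, versus how often you land in a reasonable band around it.

```python
import random

# Toss a fair coin 100 times; repeat the whole experiment 10,000 times.
runs = 10_000
heads = [sum(random.random() < 0.5 for _ in range(100)) for _ in range(runs)]

exactly_50 = sum(1 for h in heads if h == 50)
within_band = sum(1 for h in heads if 40 <= h <= 60)

print(f"exactly 50 heads: {100 * exactly_50 / runs:.1f}% of runs")   # only ~8%
print(f"40 to 60 heads:   {100 * within_band / runs:.1f}% of runs")  # ~96%
```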

The other part, the “how close” question, is not at all arbitrary. It is formally called the Margin of Error, and once you have chosen the level of confidence, it is a pretty straightforward function of the sample size. In other words, if you toss a coin ten times, getting six heads and four tails is very likely. But if you toss it 100 times, getting 60 heads and 40 tails is much less likely. So the bigger the sample size, the closer it should match the population. You might think that pollsters would therefore use very large sample sizes to get better accuracy, but you run into a problem. Sampling has a linear cost: if you double the sample size, you double the cost of the survey. If that resulted in double the accuracy it might be worth it, but it doesn’t. The margin of error shrinks only with the square root of the sample size, so doubling the sample size cuts the margin of error by only about 30%, and is that worth spending twice the money? Not really. So you are looking for a sweet spot where the cost of the survey is not too much, but the accuracy is acceptable.
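
For a proportion near 50%, the usual formula at the 95% confidence level is margin of error ≈ 1.96 × √(p(1 − p)/n). A short sketch shows the familiar “plus or minus 3 points for a sample of 1,000” and how slowly the error shrinks as the sample grows:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for a sample proportion; p = 0.5 is the worst case.
    return z * sqrt(p * (1 - p) / n)

for n in (1000, 2000, 4000):
    print(f"n = {n}: +/- {100 * margin_of_error(n):.1f} percentage points")
# n = 1000: +/- 3.1 points
# n = 2000: +/- 2.2 points (twice the cost for ~30% less error)
# n = 4000: +/- 1.5 points
```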

Any reputable poll should make available some basic information about the survey. The facts that should be reported include:

  • When the poll was taken. Timing can mean a lot. If one candidate was caught having sex with a live man or a dead woman, as the joke has it, it matters a lot whether the poll was taken before or after that fact came out in the news.
  • How big a sample was it?
  • What kinds of people were sampled? Was there an attempt to limit it to likely voters?
  • What is the margin of error?
  • What is the confidence interval?

Now a reputable pollster will make these available, but that does not mean they will be reported in a newspaper or television story about the poll. Or they may be buried in a footnote. But these factors all affect how you should interpret the poll.

Example: http://www.politico.com/story/2013/12/polls-obamacare-100967.html

In this brief news report we don’t get everything, but we do get a lot of it. This story is about two polls just done (as I write this) on people’s opinions regarding “Obamacare”.

The Pew survey of 2,001 adults was conducted Dec. 3 to Dec. 8 and has a margin of error of plus-or-minus 2.6 percentage points.

The Quinnipiac survey of 2,692 voters was conducted from Dec. 3 to Dec. 9 and has a margin of error of plus-or-minus 1.9 percentage points.

What I would note is that the first poll says it was a poll of “adults”, while the second poll was one of “voters”. That makes me wonder about any differences in the results (and the polls did indeed have different results). They were sampling different populations, so the results are not directly comparable. If the purpose of a survey is to look at how people in general feel, a survey of adults would probably make sense. If the purpose is to forecast how this will affect candidates in the 2014 elections, the second poll may be more relevant.

Second, note that the survey with the larger sample size had a slightly smaller margin of error. That is what we should expect to see.

Third, note that the second poll was “in the field” as we say for one more day than the first poll. Does that matter? It might if some very significant news event happened on the 9th of December that might affect the results.

What I don’t see in this report is any explanation of how the people were contacted, but when I went to the pollsters’ web sites, here is what I found on the Quinnipiac site:

From December 3 – 9, Quinnipiac University surveyed 2,692 registered voters nationwide with a margin of error of +/- 1.9 percentage points. Live interviewers call land lines and cell phones.

So if you dig you can get all of this. And note that they specifically mentioned calling cellphones as part of their sample.

One final thing to point out is that if you accept a 95% confidence level, that means by definition that approximately one out of every 20 polls will be, to use the technical term, “batcrap crazy”. That is why you should never assign too much significance to any one poll, particularly if it gives you results different from all other polls. You are probably looking at that one-out-of-twenty poll that should be ignored. There is a human tendency to seize on it if it tells you what you want to hear, but that is usually a mistake. It is when a number of pollsters do a number of polls and get roughly the same result that you should start to believe it. That does not mean they will agree exactly; there is still the usual margin of error. That is why a poll that shows one candidate getting 51% of the vote and her opponent getting 49% will be described as a “dead heat”. With a margin of error of two points, the leading candidate could be getting anywhere between 49% and 53%, assuming the poll is accurate and unbiased.

Listen to the audio version of this post on Hacker Public Radio!


What’s Wrong With Free, Anyway?

So I am at my LUG meeting the other night listening to a spirited discussion, which is pretty normal for us. We have a lot of very opinionated people there, and there is never a lack of discussion. The trick is getting a word in edgewise, and normally three people are all talking at once trying to grab the floor. In this case, the discussion got to piracy, the music industry, BitTorrent, etc. One person tried to make the argument that BitTorrent promotes piracy and is harming the industry, and seemed genuinely surprised that no one in the room agreed with him. But we all agreed that the music business had changed irrevocably, and that there would never again be a group as big as The Beatles. But why is that? I tend to think a necessary precondition for anyone getting that big is that they would first have to be that good, and in my own curmudgeonly way I don’t think any of the current acts are that good. Now, if you like to discuss the current music scene and the music business, I always recommend you read The Lefsetz Letter, by Bob Lefsetz. He is constantly explaining that the music world is different now, that you can’t just go into the studio, cut an album, and let the riches roll in.

I think the new music business is about the relationship the artist has with the fans. And it does not rely on mass media in any way. One of the things the Internet has done is kill broadcasting and bring us instead narrowcasting. By this I mean that instead of attracting a mass audience, you go after a niche audience that wants what you offer. And to get that audience you need to work on your relationships. A very eloquent explanation is given by Amanda Palmer in her TED talk. She frames the question beautifully by saying that the industry is focused on how to make people pay for music, while she focuses on how to let people pay for music. Notice how the language changes when you do this, and what it implies. When you talk about making people pay, you are using the language of force, the language you use with enemies, the language of conflict and confrontation. Is it any wonder the industry is imploding? Any business that treats its customers like the enemy does not have a long future in front of it. But if you follow Amanda Palmer and talk about letting people help you, this is the language of trust, of mutual respect.

This has implications beyond the obvious one of treating your customers better. Amanda Palmer recorded an album on a traditional music label, sold 25,000 copies, and was considered a failure. Then she left the label, started a kickstarter campaign to fund her next recording project, and raised $1.2 million. From whom? About 25,000 fans. In other words, she has a hard-core audience of about 25,000 who love what she does and will support it. For record labels, that is not enough. And for certain rock stars with a sense of entitlement that is not enough since they want mansions and expensive sports cars. But it seems to be enough for someone who just wants to make an honest living. This is the niche audience you get in an environment of narrowcasting, not the mass audience we used to get from broadcasting.

I see this in my own music tastes. There are a half-dozen artists from whom I will buy any product they put out, and I bet you haven’t heard of them. They are not mass artists. One of them, Jonatha Brooke, just did a campaign on PledgeMusic to raise the money for her next album, and I was happy to make my own pledge in return for a CD when it is done, plus updates and photos while it is being made. And you can be sure I will buy a ticket to her show any time she is in town. That is not to say I don’t enjoy music from some of the “big” acts. About 7 years ago I bought tickets for The Who. What I got was 2 tickets that cost over $100 each, and seats so far from the stage that I had trouble even seeing the JumboTron. When Jonatha comes to town, she will play a local club that seats about 400 max, the tickets will cost about $25, and I will be maybe 20′ away from her. And she will stay after the show to sell and sign CDs and talk to her fans. It is artists like this that I support with my money, because I feel some relationship with them. But by the same token, if they don’t make enough money to keep going, these artists will stop doing what they do. So my feeling is that I support you, and you give me something I want. Amanda Palmer puts her music out on the Internet without DRM, but she asks people to pay her for it, and they do.

I think this is something we can learn from in the Free Software community. If you focus on getting something for nothing, that is not sustainable as a model. Not only do developers have to eat, I think they need to know that people value their work and are willing to support it. And I think that can happen with small-scale applications, and in the age of narrowcasting that is viable, but only if the support is there. All too many people are looking for something free of charge, and get outraged when they can’t get it. This showed up recently when Google decided to end Google Reader. This free-of-charge application was cancelled because the market was not large enough to make it viable. And that explanation does make sense. Google is one of the world’s largest corporations, and they operate at a very large scale. They simply cannot afford to put resources into small projects. I have heard that the user base for Reader was in the neighborhood of 10-20 million. A petition to keep it gathered 150,000 signatures. And while those may sound like large numbers, for Google they are tiny. They need something like 100 million users to make a product worth their while.

But, for a smaller developer, a market of 1-2 million might be plenty. Imagine this developer could provide a “cloud” service, similar to what Google offered, that would cost $2 per month. That would be $24 per year, and from 1 million customers it would be $24 million. That is quite enough to run a good RSS reader service, and it is completely sustainable. The service would have sufficient predictable income to maintain and develop the product. And they could develop a community of users who are passionate about the product. And the same reasoning would apply to downloadable software, even “free software”, if you use that term as I do, to denote software that gives you the Four Freedoms the Free Software Foundation has published. But the key is to understand that you need to support software that you rely on. If you only want “free-of-charge” software, you will probably pay for it with your personal information or by watching ads. And you will be at the mercy of companies that will drop the product any time it suits them. I think you will find that this rarely happens in the free software community as long as a project has a passionate community that supports it, the way Amanda Palmer’s fans support her.

So what software are you passionate about? And how do you support it?

Listen to the audio version of this post on Hacker Public Radio!


Tablet share numbers prove I was right

In November 2011 I made the claim here, and on my blog, that by the end of 2012 Apple and Android would have essentially equal shares of the tablet market. And the fourth-quarter 2012 numbers show that I was correct. I doubt there are any wonderful prizes for this, but there it is.

My prediction was based on one simple observation: at the time I made it, the relative market shares of iOS and Android in the tablet market had tracked, with a lag, their market shares in the smartphone market. So I looked at how long it took for Android to achieve parity in the smartphone market and predicted it would do similarly in the tablet market. And there is no reason to think this won’t continue. The point of rough parity in the smartphone market came in November 2010, and since then Android’s share has only grown, to the point that the worldwide share of Android is now around 4x that of iOS. So I expect a firm lead to develop by the end of 2013, and by the end of 2014 total dominance for Android.


Data-Driven Objectivity

I recently had an exchange online with someone I tend to like, and it was about self-driving cars. My correspondent said that he would never, under any circumstances, get into a self-driven car. In fact, he seemed to think that self-driven cars would lead to carnage on the roads. My own opinion is that human-driven cars have already led to very demonstrable carnage, and that in all likelihood computers would do a better job. As you might imagine, this impressed my correspondent not in the least. When I observed that his objections were irrational, he said I should choose my words more carefully, but that he would overlook the insult this time.

Possibly that is a bad way to phrase my objection, but it is also, in the strict sense of the term, precisely the proper word to use. What I was saying is that his view had no basis in data or facts, and was purely an emotional response. We all have those, and I’m not claiming any superiority on that ground. But when the Enlightenment philosophers talked of reason, it was in contrast to religion and superstition, and really did mean thinking in terms of data, facts, and logic. It is my own view that this type of thinking bears the major responsibility for the progress the human race has made in science and technology over the last few centuries. And it is also my view that this type of thinking is being severely attacked these days.

The hallmark of rational thinking is that it starts from a basis in observed facts, but always keeps a willingness to revise the conclusion if new facts come to light. If that seems reasonable to you, good. Now think of how the worst insult you can pin on a politician is flip-flopping. The great 20th century economist John Maynard Keynes was accused of this and responded “When my information changes, I alter my conclusions. What do you do, sir?” That is how a rational person thinks. There are people who attack science for being of no use because occasionally scientists change their minds about what is going on. But that is an uninformed (to be most charitable about it) view. Science is a process of deriving the best possible explanations for the data we have, while always being ready to discard them in favor of other explanations when new data comes in. That may bother people who insist on iron-clad certainty in everything, but in fact it does work. If it didn’t work you wouldn’t be reading this. (Did you ever notice the irony of television commentators attacking scientists? You might think the plans for television were found in the Bible/Koran/etc.)

One of the biggest obstacles to clear, rational thinking is what is termed confirmation bias. This is the tendency of people to see the evidence that supports their view, while simultaneously ignoring any evidence that does not support it. This is why the only studies that are given credibility are what we call “double-blind” studies. An example is a drug trial. We know there is a tendency for people to get better because they believe they are being given a new drug. In addition, we know that just being given attention helps. So we take great care (in a good study) to divide the sample into two groups, with one group getting the great new drug, and the other group getting something that looks exactly like it but has no active ingredient. It may be a sugar pill, or an injection of saline solution, just so long as the patient cannot tell which group they are in. But the bias can also be on the experimenter side. If a team of doctors has devoted years to developing a new drug, they will naturally have some investment in wanting it to succeed. And that can lead to seeing results that are not there, or even to “suggesting” in subconscious ways to the patient that they are getting the drug or not. So none of those doctors can be a part of it either. Clinicians are recruited who only know that they have two groups, A & B, and have no idea which is which. This is the classic double-blind study: neither the patient nor the experimenter has any idea who is getting the drug and who isn’t.

The reason we need to be this careful is that people are, by and large, irrational. People will be afraid of flying in an airplane but think nothing of getting into a car and driving, even though every bit of data says that driving is far more dangerous. People are far more afraid of sharks than they are of the food they eat, though more people die every year from food poisoning than are ever killed by sharks. And we all suffer from a massive case of the Lake Wobegon effect, in that we all tend to think we are above average, even though by definition roughly half of us are below the median on any given characteristic. We just are not good judges of our own capabilities in most cases. In fact, the Dunning-Kruger effect suggests that we are frequently wrong in self-assessments.

But the worst case is the person who is absolutely certain, no matter what he is certain of. Certainty is the great enemy of rationality. Years ago, Jacob Bronowski filmed a series called The Ascent of Man. In one scene, he stood in a pond at Auschwitz and talked about people who had certainty, and said “I beg of you to consider the possibility that you may be wrong.” This is the hallmark of a rational person; this is the standard by which every scientist is judged. If you know anyone who can say “This is what I think, but I might be wrong,” you will have found the rarest kind of person, and you should cultivate their acquaintance. This type of wisdom is all too rare. And if you ever find a politician who says that, please vote for them, no matter what their party affiliation. They are worth infinitely more than a hundred of the kind that have never changed their minds about anything.


Stop Buying from Dell Computer

At a recent company meeting in Denmark, something very disturbing happened. An “entertainer” was brought on stage right after Michael Dell and proceeded to compliment the crowd on not having many women in it. He then said IT should remain a bastion of male privilege, and that the way to address women is “Shut up, bitches.” You can read an account here. I think the only reasonable response from people with more than three functioning brain cells is to stop all business with Dell Computer.


Freedom and Licensing – Some Thoughts

A few weeks back there was a small tempest in a teacup when Linux Action Show invited Richard Stallman (RMS) onto their show, and the hosts were astonished that he refused to compromise his views. This led Bryan Lunduke to accuse Stallman of wanting to starve the Lunduke family because he would not give his imprimatur to Mr. Lunduke writing proprietary software. My initial thoughts were along the lines of “Lunduke is an idiot”, which are thoughts I have had before. Full disclosure: I find him annoying and grating. For that reason, I did not comment at the time. But I just read an interview with Michael Meeks, the LibreOffice developer, that brought up some of the thoughts I had previously, and I decided to write them out.

The essence of the dispute between Bryan Lunduke and RMS was that RMS argued, as he has consistently done, that proprietary software takes away the freedom of the user, and is therefore evil. Lunduke was arguing that he makes his living by writing proprietary software, and therefore deserved some kind of exemption from RMS, and was very upset that he didn’t get it. The immediate reaction I had was “Dude, have you ever listened to RMS?” Lunduke getting RMS to say that was about as likely as getting the Pope to say “You know, that Ten Commandments thing? Totally optional.” Then Lunduke just went nuts and accused RMS of trying to starve the Lunduke children. So it became a reason for me to once again unsubscribe from that podcast, as I have done before. (Though this time it will probably stick.)

But the point of interest is that Lunduke accused RMS of being against freedom, in this case the freedom of Lunduke to write proprietary software. And this is worth a closer look, since arguments about freedom often get bogged down in similar dichotomies. To understand this, I think there are some fundamental truths that need to be pointed out and incorporated into the discussion. The first is that freedom is never absolute if you are living in a society. There are always conflicts and constraints in how you exercise freedom, because what you do can impact others. As Oliver Wendell Holmes put it in a Supreme Court decision, you cannot falsely shout “Fire!” in a crowded theater. Or as another legal scholar put it, your right to swing your hand ends where my nose begins. In fact, a good many court cases are argued to decide between two different freedoms as to where the line will be drawn. This means that to say “I am in favor of freedom” is to make a mostly meaningless statement. It doesn’t become meaningful until you clarify whose freedom, and in what circumstances. And when you do clarify, you should not be surprised if someone says, probably correctly, “But you are taking away my freedom to…” Yes, we are, and that is the point. Does my freedom to breathe clean air trump your freedom to pollute?

In this case, the conflict was between the freedom to make a living by writing proprietary software, versus the freedom of the software user to use software that grants the Four Freedoms. Now to be clear, RMS never claimed he was in a position to actually stop Lunduke. He merely refused to countenance it as a legitimate practice. So the real issue boiled down to “He called me names!” But it is worth looking at this carefully, because there is a real issue here worth exploring. And the issue is whether we should be more concerned with the freedom of the software user, or the freedom of the software producer. RMS is clearly on the side of the user. Lunduke was clearly on the side of the producer. And because of how these are related, you cannot simultaneously maximize both. If users have all of the freedom, there is nothing left for producers, and vice versa. And that is why I want to turn this discussion to the topic of software licensing. For this is where the decisions are often made on where we draw the line.

In the case of proprietary software, the rights of the user are as minimal as companies can get away with. The road to evil began when someone got the bright idea that you don’t own the software you buy, you only license it, and the producer of the software can decide what you are allowed to do with it. And they can revoke your license to use the software any time they decide it violates their license, and even prevent you from selling it to someone else when you are done. Frankly, I am with RMS on this one. It is evil, and we should fight it. The answer he and others came up with was the GPL. This pushes the balance pretty far in the direction of the rights of the user, as defined by the Four Freedoms:

  • The freedom to run the program, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

I think of these as opposite ends of a spectrum. What is in the middle? The “less restrictive” licenses. Now, some would argue that these licenses are even “more free” than the GPL, but that just repeats the fallacy of thinking of freedom as an absolute without context. I am thinking of it as the balance between producers and users, and in important ways these “less restrictive” licenses move the balance back towards producers. The way this happens is through how the software ultimately gets used. For instance, it is a matter of record that important parts of BSD form the basis of the Apple OS X operating system. Apple no doubt used this software because there were essentially no restrictions on what they could do with it. And what they did was create a tightly-controlled OS that severely restricts what the user can do with it. I think that when you look at how the software offered under these “less restrictive” licenses is used, you will see far too many examples of it being used to restrict the rights of users when incorporated into corporate products. You may be of the opinion that what is wrong in the software arena is that companies just don’t have enough power, but I don’t see that on the planet I live on.

And that brings me back to Michael Meeks and the interview I read. He was talking about a huge increase in energy and activity in the LibreOffice project since it split off from OpenOffice. And the major reason he saw for this was that they went to the GPL! I think that makes sense. If I had worked hard on software code that I wanted people to use freely, I would want to know that it was under a license that guaranteed that freedom through all derivative works. And that is what the GPL does. I think that is why so many proprietary software creators hate it so much. They are just fine with something like the BSD license, which says they can take code and do whatever they want with it. But with the GPL they can’t do that. And one thing I find kind of funny is that they could just not use the code if it is that big of a deal, but they don’t seem to think that is a good idea. Their software is licensed to people on a “you do what we let you, or you can’t use it” basis and they have no problem with that, but if free software developers throw it back at them, suddenly it becomes a “cancer”, the “first step to communism”, etc. When you hear these arguments, consider whose interests they are protecting. Yours?

Listen to the audio of this post on Hacker Public Radio!


Supporting Free Software – Getting Involved

I started this particular series of posts on January 5th, and now I am going to finish it on March 4th, so it has been just about two months. In that time we have explored some of the ways everyone can support Free Software, such as by filing bugs, writing documentation, and providing financial support. I want to wrap it up by exploring what may be the best way of all to get started, and that is to get involved. Join a group. Help out.

The first place you might want to look is your local Linux User Group (LUG). This is where you can meet people in your community who are also interested in Free Software. You might think that only Linux gets discussed there, but I’d bet you would be surprised. I know my local LUG has speakers covering a wide range of topics in Free Software. Last month we learned about SourceForge, for instance, which hosts a bunch of different Free Software projects. LUGs also provide community outreach, such as by doing install fests and by cooperating with local schools and organizations. I always suggest to people that this is the first place to go both to get help and to get involved.

The next place you might want to look is your Linux distro of choice. Mine is Kubuntu, which is a variant of Ubuntu that uses the KDE desktop. So I have joined my Ubuntu Local Community (i.e. LoCo), which in my case is Michigan. This group organizes Bug Jams, where people get together to file and work on bugs. And it organizes release parties twice a year when new releases come out. I know that Fedora has what they call the Fedora Ambassadors program, and many other distros have similar opportunities to get involved. You have only to ask.

Finally, I am going to mention the various Linux and Free Software conferences. I am involved with one called Ohio LinuxFest, where I am the Publicity Director. I just finished writing a page for our web site where I listed 8 major positions we are trying to fill, as well as a bunch of day-of-event positions for volunteers. If you have never been involved with an event like this, you might not realize just how much work goes into making the magic happen each year. It is hard work, and every one of these events is looking for volunteers to help put it on. And this is something you can do even if you don’t feel like you can file bugs or write documentation, or you don’t have the money to provide financial support. You can always provide help at these events. Chances are there is one not too far from you.

What really matters, though, is that you make a contribution of some kind. As we said when we started this series of posts, Free Software means Community-supported Software. When it stops getting community support, it dies. If you value Free Software, then you have a responsibility to support it in one way or another. My role in this series is to give you ideas on how you can do that.

Listen to the audio version of this post on Hacker Public Radio!


Why I Am An Optimist

This is going to be a little bit different from what I usually post here, but it’s my blog. If you don’t like it, just click away.

It is easy to list many things going wrong these days, and I think most of us tend to look at the negative side. Regardless of political or social persuasion, most people would agree that the world is going to hell in various interesting ways. The politicians are so bad they aren’t even worth the bullets it would take to get rid of them. Giant corporations are raping us all. And don’t even get me started on kids today.

But I am going to take a contrary point of view and say that things are getting better all the time.

I was born in 1951, which makes me 60 right now. I am part of that “Baby Boom” group that arrived after World War II and the Great Depression had created havoc with lives all over the world. And that is the first thing to point out: we have not made war entirely an anachronism, but after two major wars within 20 years of each other, we have not had any conflict like those since. And as Steven Pinker pointed out in his book The Better Angels of Our Nature: Why Violence Has Declined, this is part of a general trend of declining warfare.

Nor is it confined to just war. Because of the nature of news in our electronic age, and the relentless use of violence as a form of entertainment (for which Hollywood is responsible), we miss the fact that violence within our country has gone down. Because we see it on television without letup, we think it is rising, but in fact it is falling.

When I was born, a number of US states had what were called “anti-miscegenation” laws, which made it a crime for a white person and a black person to marry. The last of these laws was not struck down until 1967, when the US Supreme Court decided the case of Loving v. Virginia. This was the same year as the movie Guess Who’s Coming To Dinner?, which addressed the social discomfort felt by a white family over their daughter bringing home a black fiancé. Now, I know there are still racists in this country, a fact that is abundantly clear in the frothing at the mouth over Obama, but for most people this is simply not an issue they would even notice.

In a related vein, when I was born Brown v. Board of Education was still 3 years in the future. I grew up watching on television as Bull Connor used fire hoses on black citizens trying to get their civil rights. And of course I lived through the assassination of Martin Luther King, Jr. But in my lifetime I have seen a black man elected President of the United States in a free and fair election, and he is favored to win re-election this fall. I can say that even as late as the 1990s I would not have believed that could happen in my lifetime.

When I was born, the roles of men and women were quite separate. My mother, who was by any measure a very liberal and forward-thinking woman, taught me and my brothers how to wash dishes and do laundry because, as she put it, until we got married we would have to do it for ourselves. But my wife and I both have demanding careers, though you will have to ask her about the housework division (some of you wouldn’t believe me if I said we split it). And I have had a number of female bosses and many female co-workers. That is a change just in my lifetime. If you younger folks want to know what it used to be like, find some old episodes of Ozzie and Harriet on YouTube. That was the world I was born into. And come to think of it, if we had not elected a black man as President in 2008, we would have elected a woman, since the only serious opposition to Barack Obama was Hillary Clinton.

One more thing I will point out. When I was a boy, I don’t think I had ever heard of homosexuals. But there were laws in effect making homosexual behavior illegal, preventing homosexuals from immigrating to this country, and for a time even preventing homosexual literature from being sent through the mail. And now we have marriage equality in an increasing number of states (most recently Maryland), and 22 Democratic Senators have called for endorsing it in the official party platform for the 2012 election.

So when you think everything is going to hell in a handbasket, take another look. While change is sometimes slow and maddening, it is definitely happening.

And Linux on the desktop grew 64% in the last 9 months. See, I didn’t forget all about technology.
