It’s Just Semantics

Yesterday I began my morning with a meeting involving members of various departments who are dealing with a major change to our IT systems. We are replacing a system from Vendor A with another from Vendor B, and just about everything changes. As a result, we have a lot of meetings. But I didn’t bring this up to get sympathy. Everyone has pain in their lives, and mine is not particularly more impressive than yours.

But in yesterday’s meeting, we got to a discussion of terminology. You see, Vendor B sold us a system that uses different names for a large number of our data fields, and we needed to agree on the names we would use in our reporting systems. Should we use the new vendor’s names, the ones we had traditionally used, or some combination of the two? Now, at this point I’m sure you’re thinking “Gosh, that sounds like fun. I wish I could have been in that meeting!”

But what got me thinking was when one of the IT folks said “That is just semantics. I don’t care what you call them.” This statement was so profoundly wrong that I nearly admired it for the awesome scope of its wrongness. The first level of wrongness comes when you consider that all of us at this particular site, in all of the different departments, need to talk to each other. And that means we all need to understand what we are talking about. I wondered if this IT person had ever heard the term “naming convention”, and if so, did he comprehend why that was important.

Then I got to thinking about that phrase “It’s just semantics.” This is where the real problem lies, I realized. It is a common phrase, usually used to imply that the meaning of the words is not important to understanding the issues at stake. In this colloquial sense it says that people sometimes use weasel words to avoid a truth. For example, a politician trying to explain away an embarrassing situation, like Clinton saying “I did not have sexual relations with that woman.” We correctly see that people who do this are misusing language to confuse the situation.

But saying that this is semantics is profoundly wrong. What is really happening when people use this phrase is that they are saying that words and their meanings do not matter. And when you go down that road you have a serious problem. I doubt you can even think intelligently if you cannot use words with a certain degree of precision. And communication becomes pretty much impossible if we cannot use words and agree what we mean by them. That is what semantics is really about. So if someone accuses me of using semantics, I thank them for the compliment. What they have said is that I care about what I say and try to use the best words to convey the meaning I have in mind. Of course, they don’t realize that is what they said.

Trent Reznor on Social

This courtesy of the current issue of Wired magazine: “I don’t care what my friends are listening to. Because I’m cooler than they are.”

So, can I claim that is why I’m not interested in social recommendations for music?

Android, Apple, and Market Dynamics

Or Why Tim Cook may be the world’s unluckiest man

Please understand that I don’t wish anything bad to Tim Cook. I’ve never met the man. But I am observing something about the market dynamics in the smartphone and tablet market that I have not yet seen anyone else talk about. Eric Raymond in his blog Armed and Dangerous has covered the idea of price pressures from Android affecting Apple, and I consider his blog required reading for anyone interested in this topic. But I think I can offer a slightly different take on the issue.

If we start with smartphones, Apple really kicked off this market, and raced to an early lead. The first iPhone was unveiled in January 2007, and there was nothing like it. This phone got 2.7% of the mobile phone market in 2007, 9.6% in 2008, and 15.1% in 2009. On November 5, 2007 the Open Handset Alliance was announced, and the very first version of Android was unveiled. On September 23, 2008 the G1 was released with Android 1.0. In 2008 this gained Android only 0.5% of the mobile phone market, but this increased to 4.7% in 2009. So in 2009 we have a situation where Android’s market share is less than one-third of Apple’s. Yet by November 2010 Android pulled ahead (slightly) at 26% to Apple’s 25%. And by September 2011, 10 months later, Android is at 44.8% to Apple’s 27.4%. What makes this even more significant is that these share numbers are for the U.S., and it appears that Android is even more dominant in other countries.

If we look at the timeline, it looks like Apple is first to market, and holds a lead for nearly three years before the competition catches it. I think this may be significant for the tablet market. The iPad was introduced in January 2010. In the most recent figures I could find, which are for September 2011, it looks like 75% of the market is held by the iPad, and 25% by Android. I think these numbers are pretty comparable to what we saw in the smartphone market if you allow for the fact that the market share numbers were for all mobile phones. Nokia was still selling candy bar phones in 2009, for instance. If we take the smartphone market in 2009 as a two-horse race between Apple and Android, it really looks very close to 75% Apple and 25% Android then as well. I point this out because even as Android was starting to dominate sales in the smartphone market, I heard a number of people claim that the tablet market was different. But I never heard anyone give a compelling argument as to why the tablet market would be different. Maybe there will be a different outcome this time, but I’d like to hear a sane, evidence-based argument before I believe it. If the pattern from the smartphone market is repeated, we could see something like a 50/50 split by the end of 2012, with Android pulling ahead to a dominant position by the end of 2013.

Now at this point I have mostly recapped some numbers, but not added anything significant to Eric Raymond’s analysis. I think I can do that now by adding something based on the history of the consumer electronics market. Back in 1990 Professor Michael Porter at the Harvard Business School published a very important work called The Competitive Advantage of Nations. I used this book with some of my more advanced students because it had some great insights. Prof. Porter started with the insight that national economies are not the appropriate level of analysis, and that in specific markets a country might have an advantage while not having it in other markets. So he looked at a number of specific markets where one or another country dominated and asked why that was the case. In the market for consumer electronics, Japan was clearly dominant (remember that in 1990 Sony was still a major force, not a bumbling also-ran). And why was that? Because of the intense competition within the domestic Japanese market. Japanese consumers would purchase the newest products with great fervor, and always demand newer, better products. The product cycles were a matter of months, while comparable US firms, for instance, were still operating with product cycles of years. As Prof. Porter noted, this placed huge pressure on Japanese companies, such that if they could succeed in the Japanese market they would find competition in the global market relatively “a piece of cake”. And we now know that they basically eliminated the American firms in this market and took it over.

I think this example is relevant to the smartphone and tablet markets as well. I expect that the combination of rapid innovation, short product cycles, and price pressures will create further challenges for Apple. The competition is not from Japan this time, but from the countries that learned from Japan, which would be South Korea, Taiwan, and China. This is where you find companies like Samsung, HTC, LG, Huawei, etc. HTC, for instance, doubled its shipments of phones from the first half of 2010 to the first half of 2011, and its product cycle is in the area of 6-12 months between concept and a product in the hands of the consumer, according to its COO, Matthew Costello (The Economist, 10/8/11). This is rapid, and it is only one company. Together, these companies represent the next wave of Asian Tigers, and they came to the top by out-competing the Japanese. Already in China, which by any measure is the biggest growth market for mobile, Samsung has a larger market share than Apple. Add in Motorola, and new entrants looking for a foothold, like Acer, and you have a lot of competition. These companies are releasing a new phone every month, and sometimes more than one. And any time a feature proves popular, it is quickly adopted by every manufacturer. As a result, the Android phones, which started out playing catch-up, are now moving ahead of Apple. Already the technical specs for Android phones exceed those for Apple, and in terms of software it was notable that the latest version of iOS mostly played catch-up to Android.

One way to think about this is the contrast between Intelligent Design and Evolution. Apple is a very centralized tightly controlled ecosystem that represents the Intelligent Design side, while Android represents the Evolutionary approach. From this perspective, the fragmentation that some people complain about is not a failure, it is Android’s greatest strength. This is what lets Android move into every conceivable market segment, and is a central reason for Android having double Apple’s share in the smartphone market. And when you conceive of this as an Intelligent Design vs. Evolution competition, it is worth noting Leslie Orgel’s Second Rule: “Evolution is cleverer than you are.” Even if Apple’s designers are, pound-for-pound, better than anyone else’s designers, they can’t beat the frenzy of experimentation that comes from the Android market.

The next area worth looking at is China. This country is just starting to gear up for massive smartphone usage, and there is no doubt that large numbers of Chinese consumers appreciate Apple products. After all, look at the many Apple stores there that are doing great business. Of course, Apple never heard of these stores until recently because they are all fakes. Still, imitation is the sincerest form of flattery. But the largest growth in the Chinese market is going to come from less expensive phones, and that is simply not in Apple’s DNA. A Wall Street Journal article analyzing the Chinese market found that right now Samsung leads Apple slightly (15% to 13%, respectively). But the really interesting development is the charge by Huawei, ZTE, and other Chinese companies to develop smartphones aimed at the price point of 1,000 yuan (around $157 at the time of the article). ZTE, for instance, has an order for 2 million of its Blade smartphones, priced at 999 yuan, or free with a 2-year contract. And these phones run Android. Computer maker Lenovo has also jumped in with an Android phone (the A60) at a price of 959 yuan. This kind of competition at the low end will be just as challenging for Samsung as for Apple, but it represents gains for Android.

In the tablet market, I expect a similar dynamic for the same reasons. Too many analysts have looked at current market share and declared “Game Over”, using statements like “There is no market for tablets, there is only a market for iPads.” This completely ignores the history of smartphones, and is, I believe, fundamentally mistaken. Android already has approximately 25% of the global tablet market. And just within the past couple of days (as I write this) comes the news that Amazon has increased its orders for the Kindle Fire for the second time, up to 5 million units. And this has not even been officially released yet, so they must be getting a lot of pre-orders. So if this follows the same pattern, and I see no reason why it shouldn’t, expect rough parity between Android and iOS in the tablet space by the end of 2012, with Android pulling ahead in 2013.

To get back to the subhead of this article, all of these factors were in play when Steve Jobs was still the CEO of Apple. If he had not gotten cancer, if he were still alive and running things at Apple, everything we have talked about would have happened pretty much on schedule. In fact, the dominant market share in smartphones going to Android did happen while Steve was in charge. For now, Apple is able to generate huge profits even as its market share is eroding, but that cannot hold up indefinitely either. Electronics companies all over the world can see the importance of the mobile space and want to be there. And Android is pretty much freely available to all of them. Chinese companies that focus on driving down manufacturing costs (the story goes that if you ask any Chinese manufacturer what they plan to have as a marketing advantage, they all answer “Price”) will adopt Android because it gives them a software stack free of charge. Combine this innovation with Moore’s Law, and ask yourself what happens when the equivalent of a Samsung Galaxy Nexus is free with a contract, or perhaps $150-200 outright with no contract. There is no room for Apple’s margins in that scenario. And that is why I think Tim Cook may be the unluckiest guy in the world. When these things happen, no one will say that it was due to market forces that no one could have prevented. They will say that somehow Steve Jobs would have prevented it, and that Tim Cook just was not up to the job.

Ohio LinuxFest Registration is Open for Business!

The premier Linux event in the Midwest USA will run Sept. 9 through Sept. 11 in Columbus, Ohio, and registration is now open to all. Keynoters include Cathy Malmrose, Bradley Kuhn, and Jon ‘maddog’ Hall. There is an extensive medical track focusing on the use of Open Source in various aspects of medicine, training from the Ohio LinuxFest Institute, and a great slate of presentations. Register now and reserve your place.

As always, we have an “Enthusiast” category for those short on funds. If you pre-register at the Web site, you can join us free of charge. Walk-ins will be charged a small fee.

Wikipedia and The Consumer Internet

I was thinking today about something I have thought about before, but a new connection happened in my mind. It all started with Wikipedia. I use Wikipedia a lot; in fact, I use it enough that I recently felt compelled to make a small donation, as I usually do for open projects I rely on that need support. I have found Wikipedia to be generally pretty accurate, particularly if the topic is a technical one. The way I assess accuracy in any type of source is to take a topic I happen to know a lot about, look at what the source says, and ask “Did they get it right?” When I do this with Wikipedia, I tend to find that they do get it right, and in fact I find they do a lot better than most of the media on this particular test. At the same time, I often encounter people who say they don’t trust Wikipedia, and it has become common to hear that teachers, for instance, will prohibit students from using Wikipedia at all as a source of information. To my mind this is a very interesting disconnect, and I think there may be larger implications we can tease out of this.

The first thing that comes to my mind is that this rejection is a lazy, minimum-effort path to feeling sophisticated. It is minimum effort because you don’t have to actually assess the quality of the information. You may even be discouraged from attempting an assessment by a school policy or by a consensus of the establishment. Going along with everyone else is always the path of least resistance. But I also think there is a seductive pull to the idea that you can appear sophisticated by giving a sad but knowing look while saying “I would never trust a source like Wikipedia.” After all, they let anyone write and edit Wikipedia. This is so unlike the highly reliable media which only lets very competent people like Judith Miller, Jayson Blair, and Glenn Beck present information to the rest of us.

Reliability of information

Of course, the above is somewhat oversimplified, but in both directions. As to Wikipedia, they do not, in fact, let just anyone do whatever they want on the site. There are controls and safeguards in place that catch a lot of the problems, and the ones that don’t get caught right away usually survive because no one was looking at the page to begin with. As with any open system that relies on “many eyeballs make bugs shallow”, you must first make sure many eyeballs are indeed looking. Every open project has this problem, and how you solve it is probably worthy of a good book in itself. But anyone who uses Wikipedia regularly knows that in fact if you edit a page, your edit will probably go on a list to be reviewed by someone else with a lot of Wikipedia experience. And they have rules in place, such as that you cannot edit your own page on Wikipedia. (This only applies to celebrities, of course, since you and I probably don’t qualify to even have a page on Wikipedia.) They also tend to require independent corroboration from other sources. As an example, please look at the page for Earned Value Management. This topic may look like all Greek to you, but it happens to be a page I have referred to more than once since I am in this line of work. The information is very accurate. And if you scroll down to the bottom of the page, you will see they have a good list of references, and they are the right references. They used Quentin Fleming’s book as a reference, for instance; he really did write the book on this topic.

Now as regards the so-called “Main Stream Media”, picking on Judith Miller, Jayson Blair, and Glenn Beck does involve a certain amount of snark, but it’s my blog and I’ll be snarky when I feel like it. But the deeper issue is that the press is not nearly as open, and therefore is less likely to catch and correct errors. I frequently find a Wikipedia page where an editor has posted, at the top of the page, a notice to the effect that the page needs more independent sources before it can be acceptable. Have you ever seen this on a story in your newspaper? Of course not. We are supposed to assume that somehow there is someone in the background who is doing this, but clearly this does not always happen. In fact, any media outlet that is in business to make money has a strong incentive to push the other way. Being the first to break a story matters, and getting independent verification only adds time. And then who are the sources for those stories? Most newspapers, for instance, will have policies about limiting or not using unnamed sources, but they manage to prevent such policies from interfering with a good story. And that means they can be manipulated into publishing stories that are either not true (Iraq has nuclear weapons!!), or seriously slanted.

Now, the point of this analysis is not that Wikipedia is a better source than the Washington Post, though if you catch me on the right day I might be interested in that discussion. The real issue is that you should not trust anything you read or anything you see on television or anything you hear on the radio without first doing some thinking and testing. And that is why I called the rejection of Wikipedia by many teachers “lazy”. The real point that any good teacher should be making is that you need to assess the validity of all sources, to question the internal consistency of their reports, and to see how they match up with other sources. That is the only way to have an intelligent understanding of what they are saying. It isn’t fool-proof of course, but it gives you a fighting chance.

The Consumer Internet

And that leads to the final connection in this essay. Doing what I have suggested is not easy; it demands engagement with the material and genuine thought. That much is fairly obvious. But the more subtle point is that it starts moving you in the direction of being a participant/producer rather than a passive consumer when it comes to information. And there are powerful forces that very much want to make all of us into passive consumers. And that would mean losing one of the great opportunities that this technology gives us.

When the Internet was first developed, no one thought it was particularly important, so no one bothered with the fact that the Internet is inherently a much more participatory medium. After all, if it is just a toy for a few geeks, who cares? So things like Web sites, then blogs, could flourish without anyone noticing. But as the Internet became more popular and therefore more important, those powerful forces had to take notice, and devise ways to get control. Many of the intellectual property arguments are really about this, when you look at it. Remix is inherent in how the Internet works now, and interest groups are working hard to sue it into oblivion. If you quote from an AP story or a newspaper article, you get a cease-and-desist or even a suit. Same thing if you take a small bit from a song, or a movie, or a TV show. This is certainly part of the insanity of the “Culture of Ownership” and of copyright run amok, but do not overlook that it is an attack on people being productive with the information around them. Bach and Beethoven would be criminals in the current regime because they too were remixers of the music around them. In one form of this principle, attributed to Picasso, it reads “good artists copy, great artists steal.” In fact, almost by definition to be a creative and productive participant in society you have to engage with the cultural material around you. This was always understood until such time as a few corporations found they had a financial interest in tying up everything.

So now we find ourselves fighting to keep a medium of creativity and participation. That is one of the major issues with network neutrality. The carriers and the corporate producers of culture want to regain control and turn all of us back into passive consumers of culture rather than active producers. Why is it that everyone has faster download speeds than upload speeds? The carriers will mumble about technical issues, but these are not the real point. Equal upload and download capacity is just as technically feasible as the system we have now. But the truth comes out when they say that “no one needs that much upload capacity”. Well, we do if we are equal participants in the generation of culture, and that is the point. If we start generating our culture, maybe we don’t have as much need to buy it from the RIAA or from Hollywood. And that is something that matters when we talk about Wikipedia. For all of its faults, and there are many faults, it is in the last analysis an expression of creativity that comes from people, not from anointed gate-keepers.

I think we all need to keep this in mind and fight to keep a participatory, creative, and generative Internet. And while you are at it, support places like Wikipedia and the EFF that are trying to keep it that way.

Indiana LinuxFest

We have a new arrival on the scene for Linux and FOSS fans. It is Indiana LinuxFest, and it is running from 3/25 through 3/27/11 in Indianapolis. The Web site with more details is at http://www.indianalinux.org/cms/. This looks like a nice addition and it comes at a time when we are coming out of our winter hibernation. I will probably be going, and I may even do a presentation there. So get your hotel room and make your plans.

New Slide Show created

It has been a while since I posted here, but I just finished a new slide show, called “Help, My Computer Is Sluggish!”. I have added it to the Slide Show section, where you can download the ODP file or just run the presentation in your Web browser.

Ohio LinuxFest Registration and Contest Deadline Extended

Registration for the 2010 Ohio LinuxFest has been extended through September 8th, and the registration contest has also been extended until the 1,000th registration has been reached.
One lucky registrant will win an upgrade to the Supporter Pass, or a Professional Pass registration for Ohio LinuxFest 2011 worth $350, at the choice of the winner. Sign up today and have a chance to win!
Online registration also qualifies attendees for door prizes and giveaways the day of the conference.
As always, the main schedule takes place on Saturday. The schedule kicks off with a keynote from GNOME Foundation Executive Director Stormy Peters, followed by five tracks of talks from open source and Linux experts like Tarus Balog, Amber Graner, Catherine Devlin, Dru Lavigne, Paul Frields, and Jon ‘maddog’ Hall. This year’s OLF also features a special medical track for those interested in the use of free and open source software in medicine.
The final keynote will be a real treat for Linux and open source enthusiasts interested in free media. Christopher “Monty” Montgomery of Xiph.org will be talking about next generation open source media formats.
Once again the Ohio LinuxFest is free to all, but space is limited. A Supporter registration is available for $65 that includes lunch and an OLF t-shirt. For those who want to attend Friday’s OLF University sessions, a professional pass is also available for $350.
The Ohio LinuxFest is a grassroots conference for the open source community that started in 2003 as an inter-LUG meeting and has grown steadily since to become the Midwest’s largest open source event. It’s an annual event for Linux and open source enthusiasts to gather, share information, and socialize.

Ohio LinuxFest is coming!

I just want to make sure everyone knows that Ohio LinuxFest is not that far off. It runs 9/10-12, with some great speakers such as Jon “maddog” Hall, Stormy Peters of the Gnome Project, and “Monty” Montgomery, creator of Ogg and founder of Xiph.org, plus many others.

This year the theme is “How Will Free Change the World”, and in the wake of the SCOracle lawsuit I think the idea of free software is more important than ever.

Another very timely focus this year is on the use of free software in the medical field. You may not be aware of just how much of the Health Care Reform effort is focused on IT, and we want to make sure that free software gets every chance to be a part of it. So there is a full track of medical-oriented talks looking at how free software is impacting the practice of medicine. You won’t want to miss it.

You can register for the conference here, and book a hotel room here. But don’t delay, because it is filling up fast!

Hardware Discovery Commands

Did you ever find that you did not remember exactly what hardware you installed on your computer? Well, that happened to me just now. I had a situation about 6 months ago where I was trying to build a computer and things were going wrong. After buying various replacement parts and not fixing the situation, I finally took everything to a local shop and had them fix it. They did, but mostly by installing the motherboard and processor they wanted to use. I got a great computer out of it, so I am not complaining, but I had this stack of hardware left over. I thought I would see what I could use in replacing an older computer that is starting to look pretty long in the tooth. As I pulled out various boxes, I found things like an ASUS motherboard in a Gigabyte box. But does that mean the Gigabyte motherboard was installed somewhere? And if so, where? I suppose I could start opening up cases, but the current issue (August 2010) of Linux Journal has an article entitled “What Hardware Do I Have?”, which seemed perfect. So I thought I would try it out.

The first command they recommended is lshw. This command LiSts HardWare, as you might suspect. You can run this as a normal user, but it warns you that you really ought to have root privileges when you run it. The output is immense, but here are the first few sections:
kevin@kimball:~$ sudo lshw
[sudo] password for kevin:
kimball
description: Desktop Computer
product: GA-MA785GT-UD3H
vendor: Gigabyte Technology Co., Ltd.
width: 64 bits
capabilities: smbios-2.4 dmi-2.4 vsyscall64 vsyscall32
configuration: boot=normal chassis=desktop uuid=30303234-3144-3846-4339-4232FFFFFFFF
*-core
description: Motherboard
product: GA-MA785GT-UD3H
vendor: Gigabyte Technology Co., Ltd.
physical id: 0
version: x.x
*-firmware
description: BIOS
vendor: Award Software International, Inc.
physical id: 0
version: F1 (07/03/2009)
size: 128KiB
capacity: 960KiB

So, I definitely found my Gigabyte motherboard. The next section told me my CPU:
*-cpu
description: CPU
product: AMD Athlon(tm) II X2 240 Processor
vendor: Advanced Micro Devices [AMD]
physical id: 4
bus info: cpu@0
version: AMD Athlon(tm) II X2 240 Processor
slot: Socket M2
size: 800MHz
capacity: 3GHz
width: 64 bits
clock: 200MHz

So far, so good. But suppose you wanted more information about your CPU. You could try the lscpu command:

kevin@kimball:~$ sudo lscpu
Architecture: x86_64
CPU op-mode(s): 64-bit
CPU(s): 2
Thread(s) per core: 1
Core(s) per socket: 2
CPU socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 16
Model: 6
Stepping: 2
CPU MHz: 800.000
Virtualization: AMD-V
L1d cache: 64K
L1i cache: 64K
L2 cache: 1024K

And for the video card try lspci, which gives info on all devices plugged in to the PCI bus.

01:05.0 VGA compatible controller: ATI Technologies Inc RS880 [Radeon HD 4200]

In this case, it is picking up the onboard video because I didn’t need anything more than that. I just play music and video podcasts downloaded from the Internet, so there was no good reason to invest in a video card.
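
Since lspci prints one device per line, a simple grep is enough to pick out a device class. A sketch, using the sample line above plus a second, invented line for contrast (on a real machine you would pipe lspci directly):

```shell
# Filter lspci-style output for a device class, case-insensitively.
# The second sample line is invented, just to show the filter working.
sample='01:05.0 VGA compatible controller: ATI Technologies Inc RS880 [Radeon HD 4200]
00:14.2 Audio device: (illustrative second entry)'
vga=$(printf '%s\n' "$sample" | grep -i 'vga')
echo "$vga"
```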

To look at USB devices, try (what else?) lsusb:

Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 002: ID 0d3d:0001 Tangtop Technology Co., Ltd HID Keyboard
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 050d:0234 Belkin Components F5U234 USB 2.0 4-Port Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
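
The vendor:product ID on each line is the stable way to find a particular device, since bus and device numbers change between boots. A small sketch, with two lines from the listing above standing in for a live lsusb run:

```shell
# Look up a USB device by its vendor:product ID.
# On a live system, pipe `lsusb` instead of the sample lines.
sample='Bus 003 Device 002: ID 0d3d:0001 Tangtop Technology Co., Ltd HID Keyboard
Bus 001 Device 003: ID 050d:0234 Belkin Components F5U234 USB 2.0 4-Port Hub'
hub=$(printf '%s\n' "$sample" | grep '050d:0234')
echo "$hub"
```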

For hard drives and other block devices, use blkid:

/dev/sda1: UUID="2c395ab8-1a12-4ce1-94d6-38942a6fadc6" TYPE="ext4"
/dev/sda5: UUID="1fb964d6-6c0e-4b89-bc05-eef44ae1397a" TYPE="ext4"
/dev/sda6: UUID="5205a37e-bab5-4db8-9e75-1ce70f8059db" TYPE="ext4"
/dev/sda7: UUID="1a021347-4e03-40fb-84c6-44c306e02c0c" TYPE="swap"
/dev/sdb1: UUID="750d70ef-74bc-4fbd-8a3b-21fc8f1cb5a0" TYPE="ext4"

You can see that my first hard drive has 3 data partitions and a swap partition, and I have a second drive configured as one large partition. That is pretty standard for me, so I saw what I expected to see. I usually configure my first hard drive to have a root partition (/), a /var partition, and the rest as /home. Then the second drive is set up as /data.
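
Because blkid puts the device name before the first colon, you can pull out, say, the swap partition with a one-line awk. A sketch using two of the lines shown above (on a live machine, pipe sudo blkid instead):

```shell
# Find the swap partition in blkid-style output.
# The sample lines stand in for a live `sudo blkid` run.
sample='/dev/sda1: UUID="2c395ab8-1a12-4ce1-94d6-38942a6fadc6" TYPE="ext4"
/dev/sda7: UUID="1a021347-4e03-40fb-84c6-44c306e02c0c" TYPE="swap"'
# Split on the colon and print the device name for the swap line.
swapdev=$(printf '%s\n' "$sample" | awk -F: '/TYPE="swap"/ {print $1}')
echo "$swapdev"
```

Here that prints /dev/sda7, which matches the listing above.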

The last thing to look at is whether your kernel is using all of this lovely hardware. The kernel uses hardware by using kernel modules, so you can see those modules using lsmod:

kevin@kimball:~$ sudo lsmod
Module Size Used by
binfmt_misc 7960 1
ppdev 6375 0
vboxdrv 1792375 0
snd_hda_codec_atihdmi 3023 1
snd_hda_codec_realtek 279040 1
fbcon 39270 71
tileblit 2487 1 fbcon
font 8053 1 fbcon
bitblit 5811 1 fbcon
softcursor 1565 1 bitblit
snd_hda_intel 25677 5
snd_hda_codec 85759 3 snd_hda_codec_atihdmi,snd_hda_codec_realtek,snd_hda_intel
snd_hwdep 6924 1 snd_hda_codec
snd_pcm_oss 41394 0
snd_mixer_oss 16299 1 snd_pcm_oss
vga16fb 12757 0
vgastate 9857 1 vga16fb
snd_seq_dummy 1782 0
snd_pcm 87882 3 snd_hda_intel,snd_hda_codec,snd_pcm_oss
snd_seq_oss 31219 0
snd_seq_midi 5829 0
snd_rawmidi 23420 1 snd_seq_midi
snd_seq_midi_event 7267 2 snd_seq_oss,snd_seq_midi
snd_seq 57481 7 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_timer 23649 2 snd_pcm,snd_seq
snd_seq_device 6888 5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
radeon 740390 2
ttm 60847 1 radeon
drm_kms_helper 30742 1 radeon
snd 71106 23 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
soundcore 8052 1 snd
drm 199204 4 radeon,ttm,drm_kms_helper
i2c_algo_bit 6024 1 radeon
snd_page_alloc 8500 2 snd_hda_intel,snd_pcm
i2c_piix4 9639 0
edac_core 45423 0
shpchp 33711 0
edac_mce_amd 9278 0
psmouse 64576 0
serio_raw 4918 0
lp 9336 0
parport 37160 2 ppdev,lp
usbhid 41084 0
hid 83440 1 usbhid
ohci1394 30260 0
ieee1394 94771 1 ohci1394
ahci 37838 5
pata_atiixp 4209 0
r8169 39650 0
mii 5237 1 r8169
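
A common follow-up question is whether one specific module is loaded, and since the module name is the first column of lsmod output, that is an easy match. A sketch, with two lines from the listing above standing in for a live lsmod run:

```shell
# Check whether a particular module is loaded by matching the first
# column of lsmod-style output. Pipe `lsmod` itself on a live system.
sample='radeon 740390 2
r8169 39650 0'
loaded=$(printf '%s\n' "$sample" | awk '$1 == "radeon" {print "yes"}')
echo "$loaded"
```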

So, there you have it. And thanks to Linux Journal for putting all of this together. If you do not have a subscription to Linux Journal, you might want to get one. They frequently have useful articles like this.
