Mostly Mozart


Logic 2
Posted August 5, 2001

Welcome to another episode of Mostly Mozart, on Nelson's own Kootenay Coop Radio, cjly 93.5 fm. Comfort and Joy is happy to sponsor Mostly Mozart. My name is Tom Clegg. I've got lots of music for you to listen to, but I'm sure you don't care about that, because you're so excited to hear what I have to say about logic.

But I already know exactly what I have to say about logic, and I want to listen to some music, so music is what it will be.


You're listening to Mostly Mozart on Kootenay Coop Radio 93.5fm cjly in Nelson. The show is sponsored by Comfort and Joy, a unique children's store. Always has been, always will be.

Last week I said a few things about logic in general, and this week I'm going to give you some more details about modern symbolic logic.

Modern symbolic logic is a language, in the mathematical sense of the word. A language is a set of symbols and a set of allowable ways to string the symbols together. It's obviously an analogy to human languages, which also have symbols and certain allowable ways of stringing them together.

But, unlike human languages, math languages can be well-defined, which means that it's possible for every single string of symbols to be categorized as either allowable or not. For example, in the language of arithmetic, the string 2+1=3 is allowable, but 2+1=4 is not allowable. Neither is 2+1=.
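
If you want to see what a decision procedure looks like, here's a toy one in Python. The language and the grammar are made up for this example -- true single-digit addition facts -- but the idea is exactly the one above: every string is either in or out.

```python
import re

def allowable(s):
    """Decide whether a string belongs to a toy language:
    true single-digit addition facts like '2+1=3'."""
    m = re.fullmatch(r"(\d)\+(\d)=(\d+)", s)
    if m is None:
        return False             # not even well-formed, like '2+1='
    a, b, c = (int(g) for g in m.groups())
    return a + b == c            # well-formed but wrong, like '2+1=4'

print(allowable("2+1=3"))  # True
print(allowable("2+1=4"))  # False
print(allowable("2+1="))   # False
```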

In case you're wondering, these strings have nothing to do with string theory, but they are conceptually related to the strings in programming languages like C and Perl.

Symbolic logic refers to strings as sentences. You're allowed to make lots of fascinating sentences, like "A or B", or perhaps, "A and B implies C". But you're not allowed to say "A and not A".

And that's why they call it logic. "A and not A" is an unreasonable sentence, no matter what A stands for. It doesn't mean anything, unless you have some bizarre definition of the words "and" and "not".

There's a set of rules that you can use to decide whether a given sentence is allowed in the language of symbolic logic. When you use the rules to show that a sentence is allowed, you're proving that sentence. You start with some sentences which have already been proved, and you apply the rules in a sequence which is called your argument.

Or, if you want to deal with things in the real world where nothing is certain, then you have to start your argument with some premises, and show in the end that if the premises are true, then every sentence in your argument is true.

Either way, you use the same language, and you start with the same set of rules. I say you start with the same set of rules because there's a rule that lets you make new rules. It's called modus ponens: if "A" is true, and "A implies B" is true, then "B" is true. You can make new rules, like, "not (A and B) implies (not A or not B)". And vice versa.
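
That new rule, by the way, is one of De Morgan's laws, and you don't have to take my word for it. Since A and B can each only be true or false, you can check every case by brute force -- here's a sketch in Python:

```python
from itertools import product

# Brute-force check of the new rule: "not (A and B)" and
# "(not A) or (not B)" agree under every truth assignment,
# so each one implies the other.
for A, B in product([True, False], repeat=2):
    assert (not (A and B)) == ((not A) or (not B))
print("the rule holds in all four cases")
```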

If all this sounds too obvious, that's because it's meant to. The whole point is that all allowable logic sentences are true beyond debate, and all non-allowable logic sentences are false beyond debate. It's hard to argue with someone who tells you that if A and B are both true, then A is true.

Of course, things get much more complicated if you try to add meaning to a logical argument. Let's try that. If Wally World isn't allowed to build a new store, then they will seek revenge by closing their existing store. If they close the existing store, then hundreds of employees will be put out of work. Therefore, if hundreds of employees are not to be put out of work, then Wally World must be allowed to build a new store.

There may be lots of problems with that argument, but the logic is perfectly sound. It uses the modus tollens rule: if "A implies B" is true, and "B" is false, then "A" is false.
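
You can even check the whole Wally World argument mechanically. Pick some letters -- mine, made up for this sketch: S for "Wally World is allowed to build a new store", C for "they close the existing store", L for "hundreds of employees are put out of work" -- and test every truth assignment:

```python
from itertools import product

def implies(p, q):
    return (not p) or q   # "p implies q": false only when p is true and q is false

# S: Wally World is allowed to build a new store
# C: they close the existing store
# L: hundreds of employees are put out of work
valid = all(
    implies(implies(not S, C) and implies(C, L),  # the two premises...
            implies(not L, S))                    # ...force the conclusion
    for S, C, L in product([True, False], repeat=3)
)
print(valid)  # True
```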

Fortunately, there are perfectly valid ways to argue against a logically sound argument: you can start by claiming that the premises are false, or that the conclusion is irrelevant. In this case, you would probably do well with both methods.

But I don't want to talk about Wally World, I want to talk about computers. Digital computers are just logic machines. Where a logical argument has a set of logic rules, some premises, and a conclusion, a computer has instructions, input, and output. The interesting kind of computer is the stored-program digital computer. That's the one Alan Turing described in the 1930s in order to settle a question about mathematical logic.

The "stored-program" feature sets it apart from Charles Babbage's Analytical Engine, which was designed about a hundred years earlier and kept its instructions separate from its data. A stored-program machine doesn't enforce a distinction between input, output, and instructions. They're all just strings of symbols, and any given string of symbols can be reinterpreted as input, output, or instructions as the machine runs. You can write a program whose output is itself a program, and the purpose of the output program might be to print statistics about the original program.
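
Here's a small sketch of that last trick in Python. The "original" program is made up for the example; the point is that the same string of symbols gets treated as data in one place and as instructions in another:

```python
# A program whose output is itself a program. The output
# program, when run, prints statistics about the original --
# the same string of symbols, reinterpreted as instructions.
original = 'print("hello, Nelson")\n'

output_program = (
    f"source = {original!r}\n"
    "print('symbols:', len(source))\n"
    "print('lines:', source.count('\\n'))\n"
)
print(output_program)  # a perfectly runnable Python program
exec(output_program)   # prints: symbols: 23, then: lines: 1
```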

That's just like using modus ponens to make new rules. Rules, premises, arguments, and conclusions are all just strings of symbols. For example, if you take it out of context, you can't tell whether "A implies B" is supposed to be a rule, a premise, or a conclusion. It all depends on where it appears in the argument.

Before I run out of time I want to tell you what "or" means, but first I'll say a little about the process of proving things. A logical theorem is not the same thing as a scientific theory. A theorem is a sentence that has been proved. If it's been proved, then it's valid to use it as part of any argument. Not surprisingly, a proof is an argument that proves a theorem.

Proof by contradiction involves starting an argument with a bunch of premises and proving something that's obviously false, like "A and not A". If you can prove that "A and not A", then one of your premises is false. If you've already proved all of your premises except one, then that one must be the false one.
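
You can watch that happen by brute force. Take the premises "A implies B", "A", and "not B": together they prove "B and not B", and sure enough, no truth assignment makes all three premises true at once. A quick Python sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# The premises "A implies B", "A", and "not B" prove the
# contradiction "B and not B", so no assignment of truth
# values can make all three premises true at once.
satisfiable = any(
    implies(A, B) and A and (not B)
    for A, B in product([True, False], repeat=2)
)
print(satisfiable)  # False
```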

OK, I've used the words "if" and "or" and "and" enough times without defining them. It's important to be unambiguous when it comes to logic, and you might be surprised at how ambiguous the words "if" and "or" are.

I'll start with "if". If you're in Nelson, you can listen to KCR. That means that at least one of two things is true: either you can listen to KCR, or you're not in Nelson. There are two things to watch out for when I tell you, "if A then B", or more formally, "A implies B". First of all, I might know that A is false, in which case I imply nothing about B. For example, if it's raining chairs, then I can bench press 120 pounds. If it stops raining chairs, then I will end up stubbing my toe all day.

The other thing to watch for is that the word "implies" doesn't signify causality, or even relevance. For example: If the Queen of Great Britain lives in England, then America refuses to trade with Cuba. If you jump up and down and say "nutbar" three times, then I will have long hair tomorrow. Post hoc ergo propter hoc.

"A implies B" just means that if A is true, then B is true. That's all.

There's an English word with the same spelling and pronunciation as "if". At the liquor store, they might tell you that they will sell you alcohol if you prove that you are over 19. According to the definition I just gave you, that doesn't say anything about what happens if you don't show your ID. So next time that happens, you can ask, "do you really mean if and only if?" And maybe they'll sell you the stuff just to get rid of you. But probably not.

"A if and only if B" just means "A if B and B if A". In other words, "A = B".

I have a little story about "if and only if". I once took a course called discrete mathematics -- mainly because of the fact that it was called discrete mathematics. That's discrete "e-t-e", having finite intervals, not discreet "e-e-t", but I always like to think of it as the kind of math that you do in the dark when nobody's looking.

My prof was writing a paper about logic. Apparently he was really good at logic, but he wasn't very good at stuff like spelling, so he gave his manuscript to his wife for proofreading. She didn't know a whole lot about logic, but she was good at grammar and so on. When she gave it back to him, she said, "I knew you were a bad speller, but I'm surprised that you can't even spell 'if' properly". It turned out that he had written "i-f-f" as an abbreviation for "if and only if"; regular "if"s were just written "i-f". She had carefully erased all of the second "f"s from the "i-f-f"s, so he had to go back over the whole paper and figure out which ones were supposed to be "if and only if" and which were supposed to be just "if".

Well, it seemed funny at the time. Maybe you had to be there. Sitting with 200 other people learning about Cantor's diagonalization argument, so you can prove that there are more real numbers between 0 and 1 than you could ever hope to count. If the need ever arises.

Maybe I'll cover Cantor on some other day.

For now I'll just tell you a little bit that you already knew about the word "or". "Or" has two different meanings in English. Usually it means one or the other, but not both; for example, "is that for here or to go?" When someone asks you that, you're not supposed to say "both." The logical equivalent is called "exclusive or". The other meaning of "or" is inclusive. "This ain't a library; buy or get out." You can buy something; or you can get out; or you can do both. You could recode the sentence "buy or get out" as, "not buying implies leaving." "Inclusive or" is what "or" means in symbolic logic. "A or B" is true if A is true, it's true if B is true, and it's true if both A and B are true.
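
The two meanings line up side by side nicely in Python:

```python
print("A      B      inclusive  exclusive")
for A in (True, False):
    for B in (True, False):
        inclusive = A or B   # symbolic logic's "or"
        exclusive = A != B   # "for here or to go?"
        print(A, B, inclusive, exclusive)
# The two disagree in exactly one row: A and B both true.
```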

I've used the words "true" and "false" many times, after telling you last week that logic doesn't deal in truth. Well, it doesn't deal in metaphysical truth. Logic statements are "true" and "false" the same way that electrical charge is "positive" and "negative". Other than the fact that they're opposites of each other, they have no philosophical significance, except in the eye of the beholder.

In case you missed my show about NP-complete problems, or in case your memory has faded at all since January, I'd like to mention some other things about computers and mathematical languages.

Any finite set of symbols can be encoded as a string of binary numbers. If a language uses 100 different symbols, then each symbol can be encoded as a 7-bit binary number. So a string of 8 symbols can be encoded as 56 bits, each of which is either 1 or 0. Assuming the language is well-defined, there is some algorithm, or deterministic process, that lets you decide whether that string of 56 bits is part of the language.
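
Here's that arithmetic as a sketch in Python. The alphabet and the sample string are made up for the example; any fixed assignment of code numbers would do:

```python
# Seven bits are enough for up to 2**7 = 128 distinct symbols.
symbols = "AB+=()or"                       # 8 symbols from a pretend language
code = {s: i for i, s in enumerate(symbols)}

def encode(string):
    return "".join(format(code[s], "07b") for s in string)

bits = encode("A+B=(or)")
print(len(bits))   # 56 bits for a string of 8 symbols
print(bits[:7])    # '0000000', the code number for 'A'
```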

In my opinion, one of the most interesting areas of mathematics is the question of whether a given language is well-defined, and if so, whether there is a reasonable decision algorithm for it. By reasonable, I mean that it shouldn't take 100 centuries to decide whether a 100-bit string is part of the language.

And that's the difference between arithmetic and mathematics. Figuring out whether the 56-bit string is part of the language is arithmetic. Figuring out how long it takes to decide all 56-bit strings is mathematics.
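
To make "reasonable" concrete, here's a back-of-the-envelope calculation. The rate is an assumption I pulled out of the air; the point is only how fast the numbers grow:

```python
# Even checking a billion strings per second, deciding every
# 56-bit string one at a time takes a couple of years.
strings = 2 ** 56
rate = 10 ** 9                          # checks per second (assumed)
seconds = strings / rate
years = seconds / (60 * 60 * 24 * 365)
print(round(years, 1))  # about 2.3 years
```

And a 100-bit language is 2**44 times worse again, which is why a brute-force decision algorithm doesn't count as reasonable.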