Introduction to the Study of Logic

1. What is Logic?

 We use our minds for many different purposes, but the most significant of these is to extend our knowledge and understanding of the world. While there is no precise formula for accomplishing this, it certainly doesn't happen by accident. Significant strides in learning come about only when we think carefully, systematically, or, one might say, logically.  In the broadest sense of the term, then, logic can be understood as the study of how we can reliably increase our knowledge and understanding of the world.

 Everyone will agree that logical thinking involves connecting our thoughts in some way.  We say our thoughts are logically connected when one or more of them can be seen to reliably support or lead to others.  These metaphors are where we begin.  The idea of thoughts reliably supporting others draws on an architectural metaphor in which some thoughts provide a strong foundation for others.  The idea of one thought reliably leading to another draws on a mapping metaphor: our thoughts are organized in a way that makes it possible to navigate reliable routes between them. Logicians use the word "imply" to capture these metaphors.  When they want to say that one thought supports or leads to another, they say that the first logically implies the second. When we study logic, then, we are mainly trying to define, extend, and apply the idea of logical implication.

2.  Pure and Applied Science

 The study of logic occurs at two different levels which can be usefully distinguished as pure vs. applied.  Our concern in this class is with the latter, but it is very important to begin our inquiry with a clear understanding of how these two levels are related.

 In general, the difference between a pure and applied science is that the former pursues knowledge for its own sake whereas the latter seeks to put that knowledge to some use.  Physics, for example, is a pure science in the sense that it seeks to understand the behavior of matter without regard to whether it will afford any practical benefit. Engineering is the correlative applied science in which physical theories are put to some specific use, such as building a bridge, a nuclear reactor, or a computer.

 Engineers obviously rely heavily on the discoveries of physicists, but an engineer's knowledge of the world is not the same as the physicist's knowledge.  In fact, an engineer's know-how will often depend on physical theories that, from the point of view of pure physics, are false. There are a couple of reasons for this.  First, theories that are false in the purest and strictest sense are still sometimes very good approximations to the true ones, and often have the added virtue of being much easier to work with.  Second, sometimes the true theories apply only under highly idealized conditions that can be created only in controlled experimental settings. The engineer does not operate under such conditions, and sometimes finds that in the real world theories rejected by physicists yield more accurate predictions than the ones they accept.

 The relation between pure and applied logic can be understood similarly.  The pure logician investigates logical relationships simply to learn more about them, and those relationships apply most accurately to highly idealized thinking. In trying to put some of this knowledge to practical use, the applied logician will borrow heavily from the knowledge amassed by the pure logician, but the applied logician will also employ concepts that are practically useful, even if they are not relevant to the pure study of logic.

3. Thoughts, Statements and Reasons

 You'll notice that we began by defining logic as a certain way of thinking. We can begin to understand what that way of thinking is when we realize that logical thinking essentially involves language.  You may think this is an obvious point, but it is not obvious to everyone (including some philosophers).  Thinking, understood broadly as any sort of mental activity, does not always involve the use of a natural language, like English.  The vast majority of our thinking (e.g., that which takes place when your brain is processing all its visual, auditory, and tactile stimuli into a coherent representation of your environment) is unconscious, and not even all of our conscious thinking is linguistic in nature, since much of that seems to occur in pictures or images.

 Of the thoughts that can be expressed in language, logicians are mainly interested in those that express statements.  A statement is just a sentence that it makes sense to evaluate as either true or false.  Pure logic studies the logical relations between statements.  In order to do this we divide statements into two categories: premises and conclusions, where the premises are the statements given in support of the conclusions.  In this context the fundamental kind of question that can be posed is:  "Do the premises logically imply the conclusions?"

In applied logic we are very interested in questions of implication, but we also take into account the fact that people assert logical relationships for very different purposes. One way of expressing this is to say that we are not only interested in the question whether premise P implies conclusion C, but also in the question whether P is a reason for C.

 A reason is a statement made with the intention of accomplishing one of two aims.  One of these aims we call argument, the other we call explanation. These are pivotal concepts in applied logic and we will elaborate them in detail below.  But for now they can be briefly distinguished as follows.  Argument occurs when a certain statement is not obviously true and involves the attempt to produce reasons for believing it.  Explanation occurs when a statement is accepted as expressing a fact, and involves the attempt to produce the reasons for how or why that fact came to be.

 The distinction between argument and explanation does not interest the pure logician.  The question whether P implies C is totally unaffected by whether the statements involved are explanatory or argumentative in nature.  But, practically speaking, there is a huge difference between establishing that something actually did occur and understanding why it occurred, and this is one of the differences that gives rise to the field of applied logic.

4.  Deductive Implication

 The importance of the distinction between pure and applied logic becomes more apparent as we begin to answer the question of what it means for one statement to imply another.  For this question can really be understood in two ways.  We might mean, does P absolutely imply C, such that knowing P we can always and with complete confidence conclude that C?  Or we might mean, does P practically imply C, such that knowing P, it is reasonable to conclude C for all practical purposes? Absolute implication is what we call deductive implication.  (Practical implication is what we call inductive implication and will be discussed in the next section.)  Deductive implication can be defined as follows:

"P deductively implies C" means "It is impossible for P to be true and C to be false."
 

Another way of saying that P deductively implies C is that the inference from P to C is deductively valid.

 Deductive implication is not too difficult to understand.  The important thing to see is that it doesn't actually require either P or C to be true.  What it requires, basically, is that if P is true, then C absolutely has to be true.  Here is a simple example of a P that deductively implies a C.

P:  The sun will never shine again.

C:  The sun will not shine next Tuesday.

We would say that P deductively implies C because if it's true that the sun will never shine again, then it has to be true that the sun will not shine next Tuesday.  We can see that P deductively implies C even though we don't agree that P is true.
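To make the definition concrete, here is a minimal sketch in Python (an illustration added for this discussion, not part of the standard treatment). It checks the definition by brute force: list every way of assigning true and false to the simple statements involved, and confirm that there is no assignment on which the premise comes out true while the conclusion comes out false. The sun example above isn't built out of separable parts, so the sketch uses a hypothetical propositional stand-in (P = "A and B", C = "A"); the function and variable names are likewise made up for the example.

    from itertools import product

    def deductively_implies(premise, conclusion, atoms):
        # The definition: it is impossible for the premise to be true
        # and the conclusion false.  For statements built from atomic
        # parts, "impossible" can be checked by trying every assignment
        # of True/False to the atoms.
        for values in product([True, False], repeat=len(atoms)):
            assignment = dict(zip(atoms, values))
            if premise(assignment) and not conclusion(assignment):
                return False   # found a case where P is true and C is false
        return True            # no such case exists

    # Hypothetical example: P = "A and B", C = "A".
    atoms = ["A", "B"]
    P = lambda v: v["A"] and v["B"]
    C = lambda v: v["A"]

    print(deductively_implies(P, C, atoms))   # True: P deductively implies C
    print(deductively_implies(C, P, atoms))   # False: C does not imply P

Notice that the check never asks whether P or C is in fact true; it only asks whether any combination makes P true and C false, which is exactly the point just made.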

 Deductive implication is the central concept in pure logic, but there are many cases in which P deductively implies C that strike most people as peculiar.  For example,

P:  The sun will never shine again.

C:  The sun will never shine again.

Here P and C are obviously the same, but P does imply C.  Just check the definition if you don't agree: since P and C are identical, it is impossible for P to be true and C to be false.  Here is another example of P deductively implying C.

P:  The sun will never shine again.

C:  Parallel lines never meet.

Here C is a statement that is always true. We say it is true by definition because "never meeting" is just part of what it means for two lines to be parallel.  But, since C is always true, it is impossible for P to be true and C to be false, so P deductively implies C here as well.
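Continuing the earlier sketch (and reusing the hypothetical deductively_implies, P, and atoms defined above), both peculiar cases check out under the definition: a statement implies itself, and any statement implies one that comes out true under every assignment.

    identity_C = P                                  # C is literally the same statement as P
    tautology_C = lambda v: v["A"] or not v["A"]    # true under every assignment

    print(deductively_implies(P, identity_C, atoms))   # True: P implies itself
    print(deductively_implies(P, tautology_C, atoms))  # True: P implies a statement that is always true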

 Many people object to saying that P implies C in either of the above examples because these cases don't satisfy their basic intuition about what it is for one statement to logically support another.  But that is because they are thinking like applied logicians. One way of putting this objection is to say that, even though P deductively implies C, it doesn't make sense to give P as a reason for C. In the first example P just is C, and something can't be a reason for itself.  In the second example P has nothing to do with C, and P can't be a reason for something to which it is totally unrelated.  We will talk more about reasons below.  For now just note that we have found it useful to introduce the concept of a reason in an attempt to characterize the practical limitations of deductive implication.

5.  Inductive Implication

 Deductive implication is still an extremely useful concept.  If you adhere to it strictly, you will never make the mistake of inferring false statements from true ones.  But we have just seen that the concept of deductive implication does not completely capture what we mean for one statement to imply another for all practical purposes. The failing noted above shows that there are some cases in which P deductively implies C but P is not a reason for C.  But there is another problem, too:  some perfectly good reasons do not satisfy the conditions of deductive implication.  To see this, consider the following example:

P:  The sun has risen every day for the past 4.5 billion years.

C:  The sun will rise again sometime within the next 24 hours.

Most people would say that P is a pretty good reason for inferring that C.  But is this a deductively valid inference?  The answer is no.  P seems to make C highly probable, but it is conceivable that within the next 24 hours some bizarre physical phenomenon will remove the earth from its customary orbit. Therefore it is not impossible for P to be true and C to be false.

The concept of inductive implication is customarily introduced in order to deal with this problem.  Inductive implication is usually defined something like this:

"P inductively implies C"  means "It is very unlikely for P to be true and C to be false.

Another way to say that P inductively implies C is that the inference from P to C is inductively valid.

The concept of inductive implication does not adequately characterize what we mean by P implying C. It does successfully formalize the point that some non-deductive inferences are reasonable, but "very unlikely" is extremely vague and, unfortunately, we can't be much more precise than that.  We could try substituting "95%" for "very", but there is really no absolute reason to choose 95% rather than 90% or 99.5%.  The degree of likelihood required really depends on a lot of fundamentally practical considerations which vary from context to context.
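To see how much hangs on the choice of threshold, here is a small continuation of the Python illustration (again, a hypothetical sketch: the probability figure is invented, and treating "very unlikely" as a numerical cutoff is just one way of making the definition precise). The very same inference counts as inductively valid or not depending solely on which cutoff we happen to pick.

    def inductively_implies(prob_C_given_P, threshold):
        # One way to make "very unlikely for P to be true and C false"
        # precise: require the probability of C given P to reach some
        # chosen cutoff.
        return prob_C_given_P >= threshold

    # An invented probability for the sunrise inference, for illustration only.
    prob_sunrise_given_history = 0.999

    for threshold in (0.90, 0.95, 0.995, 0.9999):
        verdict = inductively_implies(prob_sunrise_given_history, threshold)
        print(f"cutoff {threshold}: inductively valid? {verdict}")

With cutoffs of 90%, 95%, or 99.5% the inference passes; with a 99.99% cutoff it fails, even though nothing about the inference itself has changed. That is exactly the practical, context-dependent judgment described above.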

 The concept of inductive implication also does not adequately characterize what it is for something to be a reason.  This is because all deductively valid inferences are also inductively valid inferences.  (Think about it: if it's impossible for P to be true and C to be false, it's also very unlikely for P to be true and C to be false.) So we still have the problem of P implying C when P is not a reason for C.