Author Topic: Deduction versus Induction  (Read 1279 times)

Offline Eric Bright

  • Administrator
  • Full Member
  • *****
  • Posts: 135
Re: Deduction versus Induction
« Reply #15 on: March 09, 2014, 06:44 PM »
Excellent points Alain! Thank you very much for the clarifications.

I believe the case is already settled by your clear explanations. But, in case we need to really flesh it out, I am going to try to put what you said into more formal formulations. Please note that I might make mistakes here, and I would expect you to point out any errors in my articulations. I thank all of you in advance for doing so.


How to make a hypothesis

Quote
A hypothesis involves an explanation of currently available observations that also has implications for observations that have not yet been made. [...]

I am going to take Alain’s recipe for hypothesis making. Later, if you find it not accurate enough, please point out the problems and I will adjust my articulations accordingly.

So, let us assume that Alain has accurately explained how a hypothesis is made in science. Then, let us consider the opening paragraph of the article The Problem of Induction in the SEP:

Quote
[...] in Hume’s words, that “instances of which we have had no experience resemble those of which we have had experience” (THN, 89). Such methods are clearly essential in scientific reasoning as well as in the conduct of our everyday affairs. [...]

And compare it with Carnap’s taxonomy of Induction explained in the same SEP article:

Quote
3.2 Carnap’s inductive logic

Carnap’s taxonomy of the varieties of inductive inference (LFP 207f) may help to appreciate the complexity of the contemporary concept.
   
  • Direct inference typically infers the relative frequency of a trait in a sample from its relative frequency in the population from which the sample is drawn.
  • Predictive inference is inference from one sample to another sample not overlapping the first. This, according to Carnap, is “the most important and fundamental kind of inductive inference” (LFP, 207). It includes the special case, known as singular predictive inference, in which the second sample consists of just one individual.
  • Inference by analogy is inference from the traits of one individual to those of another on the basis of traits that they share.
  • Inverse inference infers something about a population on the basis of premises about a sample from that population.
  • Universal inference, mentioned in the opening sentence of this article, is inference from a sample to a hypothesis of universal form.
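Carnap's distinctions can be made concrete with a toy simulation. Nothing below is from Carnap or the SEP; the population size, sample sizes, and the 30% trait frequency are invented purely for illustration:

```python
import random

random.seed(0)
# A toy "population" in which roughly 30% of individuals have some trait.
population = [random.random() < 0.3 for _ in range(10_000)]
pop_freq = sum(population) / len(population)

# Direct inference: from the population frequency, expect a similar
# relative frequency of the trait in a sample drawn from it.
sample = random.sample(population, 200)
sample_freq = sum(sample) / len(sample)

# Inverse inference: go the other way, estimating the population
# frequency from the sample alone.
estimated_pop_freq = sample_freq

# Predictive inference: from one sample, expect a similar frequency in a
# second sample (non-overlap is ignored here for simplicity).
second_sample = random.sample(population, 200)
second_freq = sum(second_sample) / len(second_sample)

print(pop_freq, sample_freq, second_freq)
```

All three inferences are probabilistic: the sample frequencies cluster around the population frequency but are not guaranteed to equal it, which is exactly the point of calling them inductive.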

Alain’s description of how a hypothesis is made seems eerily similar to the way induction is defined by different logicians. Given that observation, the following assertion needs an explanation.

Quote
If we’re talking about the scientific method, as in the minimal parts that are required for hypothesis forming, then inductive weighing of alternative explanations is however not a part of science, [...]

How can that be reconciled with what was said above about how a hypothesis is made? Once this point is explained, I guess we will have a fairly consistent view of the role of induction in science.

The next things that need a bit of elaboration here are these:

(a) Deduction does not generate new knowledge
(b) “Does that last step, i.e. the comparison of the observed results with the expected results, or more specifically determining what the expected results would be, qualify as deductive reasoning? If so, then I would agree that deduction plays an essential role in the scientific process”

Point (a) is pretty uncontroversial, so I should not spend too much time on it. Deduction works just fine on zeros and ones, ONs and OFFs, or any other indicators that can distinguish at least two different states of being (I say at least because it can be extended to multi-valued systems too). Deduction does not tell us anything out of nothing. It does not generate knowledge, and it does not claim to do so. It is the study of the possible, necessary relationships between concepts. Those concepts can be anything. Since this is not controversial, I will leave it at that.

Point (b) is extremely important and essential to our understanding of the issue at hand. I am going to suggest that (b) is actually the case. Here is a simple example:

(1) Assume that hypothesis Z is true
(2) IF Z, THEN Q (because Z predicts the occurrence of Q)

From this point on, to see if Z is true, we need to make observations and record them in order to compare them with the prediction that Z had made. Now, at least two things might happen: either Q or ~Q. In case the result of our observation is ~Q, then the hypothesis is shown to be false in its current form.

(3) We observed ~Q
(4) THEREFORE ~Z is true (i.e. Z is false)

This is one outcome.

Also, deduction predicts, or dictates, that if we observe Q instead of ~Q, we cannot say that Z is certainly true. So, it is not possible to say:

(3)' We observed Q
(4)' THEREFORE Z is true

And that makes sense. The reason we cannot do that is that Q can co-occur with or follow Z and yet not be causally connected to Z in any way. Something else could have caused Q, and the coincidence could give the false impression that Z was the explanation for why Q occurred when, in reality, it was not. (This is the classic fallacy of affirming the consequent.) All of this is dictated by deduction, and our day-to-day experience agrees with it (how could it not?).
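The two schemas above, the valid modus tollens and the invalid converse, can be checked mechanically by enumerating all truth assignments. A small sketch (the function and variable names are mine, not from the thread):

```python
from itertools import product

def implies(a, b):
    """Material implication: A -> B is false only when A is true and B is false."""
    return (not a) or b

# Modus tollens: from (Z -> Q) and ~Q, conclude ~Z. Valid: no assignment
# makes both premises true while the conclusion (~Z) is false.
mt_counterexamples = [
    (z, q) for z, q in product([True, False], repeat=2)
    if implies(z, q) and not q and z  # premises hold, yet Z is true
]
assert mt_counterexamples == []

# Affirming the consequent: from (Z -> Q) and Q, conclude Z. Invalid: the
# assignment Z=False, Q=True satisfies both premises but not the conclusion.
ac_counterexamples = [
    (z, q) for z, q in product([True, False], repeat=2)
    if implies(z, q) and q and not z  # premises hold, yet Z is false
]
assert ac_counterexamples == [(False, True)]
```

The single counterexample row is exactly the "something else caused Q" scenario described above: Q is true while Z is false.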

So, I agree with Alain on almost all regards when the points are detailed in simple language.

Here, again, I might have made mistakes or might have been confused. I would like to know where I made a mistake, if any, and also what the correct way of articulating the points mentioned above could be. So, please be kind enough to point out the errors in this post. Also, if I sound vague, confusing, or unclear, I would sincerely appreciate it if you mention that too, so I can try again to clarify my thoughts. If I cannot explain it clearly, the chances are that I don’t understand it myself either.


Now, I am going to re-read The Problem of Induction in the SEP one more time in case I missed something. Please direct me to any other article, book, lecture, video, or presentation that you think might enlighten me or benefit others as well.





Reference

Vickers, John, “The Problem of Induction”, The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2013/entries/induction-problem/>.
“Don’t speak unless you can improve on the silence.”

Offline Andreas Geisler

  • Moderator
  • Full Member
  • *****
  • Posts: 117
Re: Deduction versus Induction
« Reply #16 on: March 10, 2014, 02:08 AM »
First of all, universal induction is how we form knowledge: it is how we know our own names, and it is how we form our notion of the ontological, which we only realize to be probabilistic on deep analysis.

A problem with the quoted Carnap passage is the use of the word hypothetical, which seems to refer to hypothesis. But remember, hypothetical is also a way to express that a conclusion is probabilistic, not factually certain (as it turns out, this qualifier is superfluous, since all human knowledge is probabilistic and conditional). Induction forms probabilistic inferences; they are "could be" or "might be" or "probably are" depending on the strength of the basis.

Secondly, to see why I insist Abduction is what hypothesis is about, refer to the definition by Peirce, from Princeton:
Quote
Peirce said that to abduce a hypothetical explanation a from an observed surprising circumstance b is to surmise that a may be true because then b would be a matter of course.[2] Thus, to abduce a from b involves determining that a is sufficient (or nearly sufficient), but not necessary, for b.

For example, the lawn is wet. But if it rained last night, then it would be unsurprising that the lawn is wet. Therefore, by abductive reasoning, it rained last night. (But note that Peirce did not remain convinced that a single logical form covers all abduction.)[3]

Peirce argues that good abductive reasoning from P to Q involves not simply a determination that, e.g., Q is sufficient for P, but also that Q is among the most economical explanations for P. Simplification and economy are what call for the 'leap' of abduction.[4]
http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Abductive_reasoning.html

In other words, abduction is the formulation of explanations, against the basic criteria of simplicity and economy.

And that not only smacks of hypothesis, it IS hypothesis.

Offline Alain Van Hout

  • Moderator
  • Newbie
  • *****
  • Posts: 34
Re: Deduction versus Induction
« Reply #17 on: March 10, 2014, 04:22 PM »
Quote
Excellent points Alain! Thank you very much for the clarifications.

You're welcome, though given that there would have been no need to thank me if I had put more effort into the first attempt, there is no actual need to thank me ;).

Quote
How can that be reconciled with what was said above about how a hypothesis is made? Once this point is explained, I guess we will have a fairly consistent view of the role of induction in science.

The issue here is that there is a very big difference between things that are inherently essential and things that are simply extremely useful. I've already discussed this, but I'll try to summarize it more clearly: hypothesis forming by induction allows us to very quickly narrow in on that subset of in potentia hypotheses that have the largest probability of offering useful results. For instance, if we see a wet meadow in the morning, we can form a multitude of hypotheses regarding the cause, including (a) rain, (b) moisture condensing out of the air, (c) a tsunami, (d) pixies that go around with watering cans, etc. It's obvious that the first two are far more likely than the third (based on induction from past experience), which is in turn much, much more likely than the fourth.

Through induction we can order this sort of list by probability estimate and take the ones with the highest probability (that this technique is useful is itself a matter of induction). That's induction-based hypothesis forming (it's in fact really 'selection' rather than 'forming'). There is, however, no inherent necessity to do it this way: we could first test the tsunami hypothesis and the pixie hypothesis, and only then the other two. That would cripple the efficiency of the scientific process (because induction is very useful), but it would not break it (because induction is not essential) ... that's the difference I was talking about: between how scientists tend to form hypotheses and what the scientific method inherently requires with regard to hypothesis formation.
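Alain's 'selection rather than forming' point can be sketched in a few lines of code. The hypotheses are his wet-meadow examples, but the prior probabilities are made-up numbers, purely for illustration:

```python
# Hypothetical priors based on past experience; the numbers are illustrative only.
priors = {
    "rain": 0.55,
    "condensation (dew)": 0.40,
    "tsunami": 0.0499,
    "pixies with watering cans": 0.0001,
}

# Induction-based hypothesis *selection*: test the most probable candidates first.
test_order = sorted(priors, key=priors.get, reverse=True)
print(test_order)

# Testing in any other order (even reversed) would still be science, just a
# far less efficient search through the same candidate list.
```

The sort makes the search efficient; the scientific method itself only requires that each candidate eventually be tested against observation.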


Quote
(a) Deduction does not generate new knowledge
[...]
Point (b) is extremely important and essential to our understanding of the issue in hand. I am going to suggest that (b) is actually the case. Here is a simple example:

(1) Assume that hypothesis Z is true
(2) IF Z, THEN Q (because Z predicts the occurrence of Q)

From this point on, to see if Z is true, we need to make observations and record them in order to compare them with the prediction that Z had made. Now, at least two things might happen: either Q or ~Q. In case the result of our observation is ~Q, then the hypothesis is shown to be false in its current form.

(3) We observed ~Q
(4) THEREFORE ~Z is true (i.e. Z is false)

This is one outcome.

Also, deduction predicts, or dictates, that if we observe Q instead of ~Q, we cannot say that Z is certainly true. So, it is not possible to say:

(3)' We observed Q
(4)' THEREFORE Z is true

And that makes sense. The reason why we cannot do that, is because Q can co-occur or follow Z, and yet not be causally connected to Z in any way. Something else could have caused Q and this coincidence could give the false impression that Z was the explanation for why Q occured, whereas, in reality, it was not the case. These are all dictated by deduction and our day-to-day experiences also agree with them (how could they not to?).

When looking at how science is actually performed, there is a problem with regard to (3) and (4): scientific results involve probabilities in nearly all cases. To be representative, I would have to rewrite them as:

(3) We observed ~Q
(4) THEREFORE Z is less likely

Similarly, with regard to (3)' and (4)', actual application of science would involve:

(3)' We observed Q
(4)' THEREFORE Z is more likely

An additional requirement here would of course be that 'observing Q' qualifies as a 'test', i.e. that the observation is not the one that helped generate the hypothesis in question.
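This probabilistic rewrite of (3)/(4) and (3)'/(4)' behaves exactly like a Bayesian update. A minimal sketch, with invented likelihoods (0.95 and 0.30) that are not from the thread:

```python
def update(prior, p_q_given_z, p_q_given_not_z, observed_q):
    """One Bayesian update of P(Z) after observing Q or ~Q."""
    if observed_q:
        likelihood_z, likelihood_not_z = p_q_given_z, p_q_given_not_z
    else:
        likelihood_z, likelihood_not_z = 1 - p_q_given_z, 1 - p_q_given_not_z
    numerator = prior * likelihood_z
    return numerator / (numerator + (1 - prior) * likelihood_not_z)

# Assume Z strongly predicts Q (0.95), while Q is less likely otherwise (0.30).
prior = 0.5
after_q = update(prior, 0.95, 0.30, observed_q=True)       # Z becomes more likely
after_not_q = update(prior, 0.95, 0.30, observed_q=False)  # Z becomes less likely
print(after_q, after_not_q)
```

Observing Q raises P(Z) without ever driving it to 1, and observing ~Q lowers it without driving it to 0 (unless a likelihood is exactly 1), which matches the "more likely / less likely" phrasing above.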

I don't think my changes resolve the issue, but I hope they help illustrate that the originals are not sufficient(ly accurate) as a representation of deduction at work in science.

Offline Andreas Geisler

  • Moderator
  • Full Member
  • *****
  • Posts: 117
Re: Deduction versus Induction
« Reply #18 on: March 11, 2014, 06:49 AM »
I am suddenly unsure of something. It seems to me that induction forms theories, whereas abduction forms hypotheses.
For the moment, I find myself unable to examine whether this thought is correct.

Offline Eric Bright

  • Administrator
  • Full Member
  • *****
  • Posts: 135
Re: Deduction versus Induction
« Reply #19 on: March 11, 2014, 04:22 PM »
Your precision was very helpful here, Alain. I need to add a few things to this discussion for the sake of clarity.

1- There is no such thing as deduction in science versus deduction in other fields. There is only one kind of deduction. (When people interpret holey texts, they come up with one thousand and one different interpretations of the same text, and who can say whose interpretation is THE interpretation? I am glad that we are not talking about the thousands of different interpretations of a holey text. We are talking about natural deductive logic (interestingly enough, it is also known as “propositional calculus”) and first-order logic (again interestingly, it is also known as “first-order predicate calculus”). For more information, see the References.)

2- Alain and I are talking about two different things, for sure. The “correction” that was made actually applies to a specific way that science is done, not to the principle of scientific investigation. Alain’s example explains something that is at work in scientific observations and measurements, but it does not refute the underlying principle that those measurements are trying to follow. The ideal is to follow deduction as closely as possible; but, as Alain pointed out, that is impossible in practice. Therefore, (a) whatever science does is not logical deduction as it is known to logicians but an approximation of it, and (b) whatever it is, it is not a counterexample to the logical deduction it is trying to follow in principle (it can be seen as an approximated version of the original, i.e. natural deductive logic). It shows that science tries to follow deduction as closely as possible in observations and measurements, and all it can do is approximate the principles. That is fair to say, and I think it is an accurate observation.

3- Since we are discussing two separate things, it might help if I give an analogy:

What I am saying is this: in mathematics, when we define the notion of numbers, such as the numbers one and two (represented by the symbols 1 and 2 respectively), and when we define notions such as addition and equality (and identity, etc.), then the following sentence can be made, which is true by definition and deductive in nature: 1 + 1 = 2

What Alain is saying is that in science, when we observe p and then q, we can only talk about the approximations, representations, and interpretations of our supposed observations. As such, when we try to add two measured quantities, what we can actually do is nothing more than adding two imprecise numbers together, like two percentages, two probabilities, and such. As a result, what we get in the end is not a precise sum of the two numbers, but merely another probability with a margin of error.

If that is what you are trying to say, and please correct me if it is not, then what I say and what you say are two different things. What you say is not a refutation of the importance and fundamentality of deduction in science.

In other words, I am talking about deduction the way it is understood in logic. You are talking about observations and measurements the way they can be understood and done in science. I say that the underlying principles of science, or of anything that can ever hope to make sense, to initiate an interpretation in the mind of a human being, follow the laws of deductive logic. What you say is that science is imprecise, approximate, and probabilistic. I say that probability does not and cannot exist if deductive logic does not exist (denying which is silly, because it is a logical impossibility anyway). You say probability is what science uses to do its work. I say: imagine this. You need a language, or some other means of communication, to communicate; without a means to communicate, communication would be impossible (which is a tautology anyway). You say: ‘Look! Science uses probability to convey its findings.’

I cannot understand how anyone can go about proving or disproving anything, if they don’t think deduction is what they are actually using to do so. I cannot understand how one can use deduction to reject deduction, deductively.

So, the next natural step in the evolution of our conversation here, I hope, would be to explain how one can prove anything of any sort, such as any of the probability theorems or any mathematical concepts used in science, while not thinking that it is done by deduction (don’t forget that mathematical arguments are all deductively proven (Goldrei, 2005), and therefore anything that uses mathematics is using deduction; science, I am looking at you). I submit that no such act would be possible.

Now, to understand this a bit more deeply, I would like to investigate this: mathematics is the language of science, and modern science is impossible without mathematics. The question is: how is anything in mathematics proven (see Goldrei, 2005)?
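As an aside, in a modern proof assistant the deductive proof of 1 + 1 = 2 is a one-liner, because the definitions of the natural numbers and of addition carry the weight that Whitehead and Russell had to build by hand. A Lean 4 sketch:

```lean
-- With the natural numbers and addition already defined in the library,
-- 1 + 1 = 2 holds by definitional unfolding:
example : 1 + 1 = 2 := rfl
```

The brevity is not magic: `rfl` succeeds only because both sides reduce to the same term under the definitions, which is itself a purely deductive fact.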



References

Goldrei, D. (2005). Propositional and predicate calculus: A model of argument. London: Springer.

Sakharov, Alex. “First-Order Logic.” From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. http://mathworld.wolfram.com/First-OrderLogic.html
 
Sakharov, Alex and Weisstein, Eric W. “Propositional Calculus.” From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/PropositionalCalculus.html

Offline Andreas Geisler

  • Moderator
  • Full Member
  • *****
  • Posts: 117
Re: Deduction versus Induction
« Reply #20 on: March 12, 2014, 05:45 AM »

Quote
What I am saying is this: in mathematics, when you define the notion of numbers, such as number one and number two (to show them with these symbols correspondingly: 1, 2), and when we define the notions such as addition and equality (and identity, etc.), then the following sentence can be made which is true by definition, which is deductive in nature: 1 + 1 = 2


There is a problem with that. 1+1=2 is very simple to show inductively. But it took thousands of years and a multitude of theoretical frameworks to prove it mathematically. And even then it took about 200 pages of text.

If anything is true by definition, and is not directly tautological, it is actually fallacious, circular, to be exact.

I am talking about fundamental cognitive processes, of which there are three kinds.

Offline Pat Johnston

  • Moderator
  • Newbie
  • *****
  • Posts: 37
Re: Deduction versus Induction
« Reply #21 on: March 14, 2014, 01:35 AM »
I've been following several of these discussions through different threads on logical reasoning, and my take is this: it seems that a lot of effort is going into overcomplicating them, and, whether intentional or not, at least some of that appears to be obfuscation. Is there an intent to lose the layperson here? These really are just different styles of reasoning; each has merits of applicability, as well as degrees of ineffectiveness to be reckoned with depending on circumstances, and they can be used in a complementary manner with each other.
 
Deduction attempts to answer a discrete question or local unknown, based on given and discernible facts - e.g. who killed Colonel Mustard in the library with the candlestick (the facts being that he was found dead in the library from an apparent blow to the head, with the bloody candlestick lying next to him). The deductive method proceeds by analysis of facts, taking propositions to validation or elimination. As such, forensic analysis would further deduce whether the blow to the head was the cause of death, whether the candlestick was in fact the murder weapon, who was last to touch it, etc. It connects, reveals, and also rules out relationships between facts, so it can and does actually reveal new truths in this manner, previously unknown to the observer. (Only a universal observer could possibly be in a position to know all knowable facts.) Deduction has been called "top-down" reasoning, but I prefer to just call it "from here to there" factual analysis and validation.
 
Induction begins with a given (i.e. accepted) fact or phenomenon that lacks underlying clarity as to how it came about. It reflects contextually on what is known about it, then lays out a set of connecting assumptions towards a probable, but not yet validated, notion (i.e. a hypothesis) that best explains it. Once the likeliest set of connecting probabilities is ascertained, deductive investigation can be used to validate, further refine, or refute the assumptions, thus corroborating or undermining them (if effectively deduced, turning them into proven or disproven facts) and either strengthening or weakening the logical basis of your induced reasoning.
 
Abduction works similarly to induction but works back from a final inferred premise (a theory) AND seeks to validate a set of necessary probable assumptions to confirm or refute that final premise (to 'begin with the end in mind', as Covey would put it) - i.e., suggest an encompassing theory to explain some ready phenomenon, break it down into elemental hypotheses that can be tested for verification, then validate via deductive investigation, as with induction.
 
It is actually hard to say whether or not there is a little (or a lot) of abduction included with any inductive line of reasoning, and vice versa. My personal feeling is that the process of visualization allows for a fairly liberal overlapping of the two, such that you may be coming up with a plausible schema of both means and rule near simultaneously.
 
I agree that human nature is such that we naturally induct and abduct a lot of what we think directly from associations with experience, and I would assert further that it takes a formal rigor of discipline (i.e. education) to learn how to effectively qualify one's thoughts through deductive means.
 
...by the way, I disagree that the act of counting physical objects (fruit, etc.) in demonstration of math (2 apples + 2 oranges = 4 fruit, etc.) is induction, as was suggested earlier in this thread. This is unquestionably direct observation of clear facts before you - deduction (no inference involved or required.)  e.g. - oh look - Martha has a bowl of fruit on her table. Observation: there are 4 of us visiting Martha, and we are all hungry and like fruit. Question to Martha: how many fruit are in the bowl?  Martha says, "well, let us deduce this by counting them - 1 banana, 2 bananas, etc..."  If however every time you come over to Martha's house you see a similar bowl of fruit on the same table, you can begin to abductively guess how many are in the bowl (e.g., "hey, it looks like there's enough for all of us this time, or not, etc.") Still, only deduction will corroborate or refute your guess.  Abductively, you might also start to worry that they look to be the same fruit as were there before, so perhaps less inclined to ask for a portion and more inclined to inductively reason that they are made of plastic.  Again, only by deductively going over and checking, will you be able to prove or disprove this conjecture.

Offline Andreas Geisler

  • Moderator
  • Full Member
  • *****
  • Posts: 117
Re: Deduction versus Induction
« Reply #22 on: March 14, 2014, 04:25 AM »
Losing the layperson seems like the least concern, considering the initial posts in the free will forums, just sayin.

However, your portrayal of deduction is both problematic and revealing: Cluedo is a game designed to reward deductive reasoning. There are many party games of that kind, and they all share one property: they have been vetted so that A) all the required premises are artificially provided, and B) the unknowable fallacies have been eliminated. The unknowable fallacies are things like relying on a dichotomy that may or may not be a false dichotomy. Mostly we are using induced facts to do our deduction, meaning that deduction, at best, is a way to speed up the process of learning.
And the problem arises with the fundamental dishonesty of this: deduction claims that the conclusion must be true if the premises are sound and the arguments are valid (and sufficient). In reality, when we perform a linear combination (which deduction is) of probabilistic premises, the uncertainty associated with the premises is compounded during the process of combination. So, instead of giving better results than induction, it gives more uncertain results, with a false sense of certainty.
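The compounding of uncertainty described here is easy to see numerically, assuming (as a simplification not stated in the post) that the premises are independent and the conclusion requires all of them:

```python
from math import prod

# If each premise is only probabilistically true, and the conclusion needs
# every one of them, the conclusion's probability is at best the product of
# the premises' probabilities.
premise_probs = [0.9, 0.9, 0.9, 0.9, 0.9]
conclusion_prob = prod(premise_probs)

# Five fairly solid premises (90% each) already drop below 0.6: 0.9**5 ≈ 0.59.
print(conclusion_prob)
```

A chain of "valid" deductive steps over such premises can therefore end up less certain than any single premise, which is the false-sense-of-certainty point being made.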

Look at cogito for example. Cogito is a quite good deductive argument, which goes from a subjective observation that observation occurs, and leads to the conclusion (still subjective), that the one doing the observing exists. That's fine. But it defines "I" as just that, the fact that observation is going on.

Meanwhile, the inductive argument for "I" involves assuming that "I exist as I seem to exist in a world that seems to be like my senses tell me it is".
Which, ultimately, is far more useful than cogito's anemic result.
Because cogito can't lead anywhere at all. It is our one certain fact, and it is subjective. There are no arguments that can take that anywhere.

Offline Pat Johnston

  • Moderator
  • Newbie
  • *****
  • Posts: 37
Re: Deduction versus Induction
« Reply #23 on: March 14, 2014, 10:24 AM »
My opening statement intentionally contained a mix of abductive/inductive inference, in the form of supposition, to emphasize a point:

Quote
...it seems that a lot of effort is going into over complicating them, and whether intentional or not, at least some of that appears to be obfuscation. Is there an intent to lose the layperson here?

- (implicit, inductive) the previous efforts made were not necessary for each of you to make your points; inducing that the points are overstated and unnecessarily complex, whereas a more deductive review of the posts and referential links and citations might logically lead one to a different conclusion.

- (implicit, abductive) there are motives towards complication and obfuscation behind the posts; this of course is not substantiated directly and overtly in them. It is my narrator's 'read' into them, and should reveal a clear bias in my own thinking as presented.


Wherever one uses terms like "seems", it can be anticipated that a form of inductive/abductive inference will follow. Acumen would require a deductive analysis of it to demonstrate corroboration, else a potential retraction is in order. Having gotten to my point, I hereby retract the above said inference.

Cogito is incomplete because it is only half of a rationalization of objectivity from subjectivity. The other half is manifest corroboration - i.e. validation of the continuity and context of reality, through direct observation and communal correlation of observation and its analysis with that of other discrete observer perspectives. That's 'me' checking my observations and assumptions with 'you' and 'you' and 'you' out there. Deduction then gets into a comparison of findings in the sampling as part of the analysis.

At this point I should state emphatically that a counting of apples and oranges sitting on the table before us is in no way "induced facts" or a representation of "unknowable fallacies" or "false dichotomies". 1 is 1, not 2 or other. 2 is always 2, not 1 or 3 or other. There are many rationales that might lead one to a false sense of certainty; by the set rules of logic, this is not one of them. If it were, then we should be prepared to throw all logic, and reason itself, to the wind, because nothing would make any sense. Purple might arbitrarily be said to be brown, and all progressive thinking would grind to a halt. Yes, this is some more supposition/inferred thinking, coming from this discrete observer perspective...

Deductive reasoning is not infallible and always contains an 'if' for a reason. If ever there is a flaw, it is to be found in the substantiation of the premise, not in the deductive mechanism itself. All forms of reasoning may be susceptible to inefficiencies, since there's a difference between the rules and the variable content of its application.




 

Offline Andreas Geisler

  • Moderator
  • Full Member
  • *****
  • Posts: 117
Re: Deduction versus Induction
« Reply #24 on: March 14, 2014, 06:11 PM »
Quote
Cogito is incomplete because it is only half of a rationalization of objectivity from subjectivity. The other half is manifest corroboration - i.e. validation of the continuity and context of reality, through direct observation and communal correlation of observation and its analysis with that of other discrete observer perspectives. That's 'me' checking my observations and assumptions with 'you' and 'you' and 'you' out there. Deduction then gets into a comparison of findings in the sampling as part of the analysis.
No it doesn't. Nobody should propose cogito as a valid basis for realizing the existence of the world, as it very clearly is not involved in how any living person has formed their realization of the existence of the world.
Indeed, cogito would be an incredibly odd choice for such a basis, since it specifically involves admitting that all the sensory input could be false. Yes. It could.
Now, what you then describe is observation, and that is inductive. It's not "philosopher inductive", it's cognitive-inductive.
We see a stream of input. It might be true or false. But there is a persistence to it. The present segues into the past, if you'll allow the metaphor. And I am sure you've already noticed where I am going: the only cognitive process that can assume an ontological existence from a series of segueing sensory inputs IS induction, whether or not it is conscious. Because, unlike deduction, which we are not very good at, induction is something we can do subconsciously.

And counting apples is also inductive. That is in fact how we learn arithmetic; as I said, it took thousands of years and hundreds of pages to formally prove that 1+1=2, but inductively it is incredibly simple to realize. Just take one object, place it next to another, then count them. If you do it enough times, you will inductively know that 1 and 1 is 2 - not just those times, but every time.
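That inductive route to 1 + 1 = 2 can be caricatured in a few lines; a toy illustration of repeated counting, not a claim about actual cognition:

```python
# Repeated "place one object next to another and count" trials. Every trial
# yields the same count; induction generalizes from that observed regularity.
def count_together(pile_a, pile_b):
    return len(pile_a + pile_b)

results = {count_together(["apple"], ["apple"]) for _ in range(1000)}
print(results)  # the single observed outcome that grounds the inductive leap
```

The formal deductive proof guarantees the result for all cases; the inductive version only ever observes that it has held so far, which is precisely the distinction being drawn.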

The idea that deduction can be involved with that is clearly preposterous, given the complexity of the proof for 1+1=2.
It is simply not possible to deductively formulate a theory of ontology without first forming an inductive theory of ontology.
And in that case, it is fairly clear that it is the inductive one we're really referring to, despite our claims of deductive rigor.

We mustn't fool ourselves. And we are the easiest ones to fool.

Offline Pat Johnston

  • Moderator
  • Newbie
  • *****
  • Posts: 37
Re: Deduction versus Induction
« Reply #25 on: March 30, 2014, 12:42 AM »
(apologies for the time it’s taken to prepare this response, but I wished to take great care to properly think it through…)
 
Andreas, to begin, I am thankful that you, Alain, Eric and others regularly challenge me, as I aim to be very careful, and extremely skeptical of every idea I explore, whether it is mine or someone else’s. These challenges are a part of the balancing equation for my ongoing rationalization of reality.  Though my disposition as a finite being - my humanity - accounts for the probability that I may vary in the effectiveness of this from day to day, I am self-assured that my investment in dogmatic rigor to this discipline above all else will in most circumstances set me aright. I am likewise self-assured that this same rigor will also challenge me to let go of ideas unequivocally proven implausible. It is to this latter ideal that I recognize the regular need to ‘bite my tongue’ and withhold response until I’m well past the ‘impulse thinking’ that is typically generated by the emotional qualities of some discourse.
 
I also continue to struggle with what appears to me to be a bit of ‘fuzziness’ and cross-over in terms and expressions – lines of definition seem to be arbitrarily crossed - so if I may, please allow me to offer the following working corollaries for consideration, and upon which I may anchor some foundational distinctions between deduction and induction/abduction. They may still need a little more tuning, but I think I have the gist in place, at least.  Thereafter I will attempt to further address your responses:
 
Corollary number one: ‘observation’ is observation, not anything more. It is sensorial data, potentially discernible in terms of pattern, but otherwise distinct from categorical associations and causal inference.  I take what my senses tell me at face value, regardless of coming from a waking reality, a dream induced state, or some other form of illusion. The experience of waking up from a dream state becomes new sensorial data, its own ‘observation’, to add to the set.  It is important to note that data is just that – data, not information (i.e., data plus context), and as I’ve said often, “data without the context is (no different than) noise”.
 
Corollary number two: Human beings are learned beings, in that we construct relative fact around discrete observation using socially acquired tools.   Human beings are not born learned beings but observation of discernible pattern indicates that, where not impeded in some fashion, we all from inception become learned beings through our social interactions and efforts. This of course is not to say we all have the same degree and form of knowledge – far from it. Just that some level of ‘general and functional saturation’ is reached where a commonality of abilities, terms and understandings are in the majority.  At each person’s birth, a pre-established coalescence of societies already exists, and begins infusing every new person from inception with current and local data in providing context as information. I am a learned being of some fashion, so I may draw on established association for classification and categorization. This correlation of observed pattern and learned registry is the basis of necessary inference. This process is the acquiring of relative facts, that is, facts placed in a context relative to my learning experiences. These then are also socially relevant facts, given the communal background to their formation.
 
Corollary number three: there is a clear distinction between necessary and unnecessary inference. Unnecessary inference is all that is not immediately self-evident by direct observation of discernible pattern and established corroboration.  I see an object before me, sitting on a table. It is red, roundish, and has all the observational qualities defining it as an apple.  Taste and smell further corroborate this observation.  The formation of the necessary inference that it is a perceptively real red apple is established.  Having acquired both the descriptive associations and basic math through learned experience, I observe that there are not one but two such objects of near identical nature before me.  All of this is necessary inference.  Note that I could make this observation at, say, ages 5 or 20 or 50, but I could not make this observation at 1 month of age.  These concepts must all at one point or another be introduced.  I observe further that there is a bruise on the apple on my right.  Another necessary inference that is corroborated by additional analysis – it is indeed a bruise.  This is all valid deduction.  I then surmise that this apple must have been dropped or banged at some point in finding its way to this table – this is not self-evident by any direct observation and consideration of the observant facts available to me – I have not been attendant to this apple from the time of its growth on a tree, to its picking & shipping to market, and its procurement and final manner of placement on the table. It is unnecessary inference, an assumption, and a clear distinction from observable facts. Inferring the possible ways it was bruised is a shift in thinking over to induction.
 
Corollary number four: deduction must be an open and iterative process.   Its discontinuation is an interruption in the deductive process, and a deviation from deductive reasoning. Beginning as an unlearned being, indications are that we have no innate knowledge – no factual inventory.  Through stimulus and response, attention is brought into focus, syncing up with our physiology, including various forms of concentration and motor control.  Through attention and retention, pattern is registered.  Registration allows for comparison and association to occur.  Repetition of observed experiences generates broader associations – observable coherence emerges. Through this coherent registry, axiomatic classification and relational categorization can be built up and retained, but it holds no absolute as it is open to iterative substantiation and re-examination, and, more importantly, it does not emerge of its own accord – all classification and categorization arrives from a dependency on social interaction, beginning with language, and with that, our earliest forms of dialectic exchange.
 
Corollary number five: inductive thinking is an innate trait – a natural habit.   Observation informs us that we have a natural predisposition for intuitive thinking – the more familiar we are with a given pattern, the better able we are to predict additional associations not yet corroborated by direct observation. A logical review of our nature indicates that we are physiologically evolved to do this. Survival was a predominant cause in the formation of much of our physiology.  The fight/flight response needs to be triggered ahead of an observed threat, hence the natural selection of abilities favoring optimal fight/flight capability would dominate.
 
Corollary number six: deduction cannot be used to formulate predictive outcomes. It can only make judgments of what is known.  Any inference into the future becomes unnecessary inference, and in so doing, switches reasoning to inductive logic. In this regard it is quite correct to say that induction plays a dominant role in predictive reasoning.
 
Corollary number seven: ‘critical thinking’ must involve an overt and effective balance of inductive and deductive reasoning.   Or more accurately, to be optimal it must find an effective balance of the two, relative to every purpose it undertakes.  The rationale for this is simple – too much inductive reasoning without deductive corroborating may lead one far afield of reality. This may be more or less acceptable, depending on the objective, as say in exploring theoretical possibilities, versus analyzing unidentified details.  Likewise, too much emphasis on deductive corroboration may impede the intended objective, halt progress, and ultimately make ineffectual the effort of analysis.  Consider risk/reward scenarios where fit-for-purpose and timely response are more critical to an objective outcome than the degree of exacting accuracy.
 
Andreas said:
Quote
No it doesn't. Nobody should propose cogito as a valid basis for realizing the existence of the world, as it very clearly is not involved in how every living person has formed their realization of the existence of the world.
Indeed, cogito would be an incredibly odd choice for such a basis, since it specifically involves admitting that all the sensory input could be false. Yes. It could.

Ok, given the corollaries I’ve outlined, and contrary to your somewhat inflammatory insistence to the contrary (…and putting aside for a moment the claim that you somehow have intimate knowledge of how every person in the world thinks?), it does actually make logical sense for a person to rationalize their existence beginning with the idea of ‘cogito’ (the philosophic principle that one's existence is demonstrated by the fact that one thinks) as a starting point – one side of the balancing equation between perception and corroboration - given the most immediate, direct and intimate (i.e. first person) validation we have is our own conscious awareness and memory thereof. But reflecting upon these corollaries, one realizes that the obvious gap between raw ‘awareness’ and critical ‘thinking’ is only bridged by our external and socially corroborating nature.  I’ll come back to this premise in a moment.
 
From Wikipedia:
Quote
Deductive reasoning (top-down logic) contrasts with inductive reasoning . . . in the following way: In deductive reasoning, a conclusion is reached reductively by applying general rules that hold over the entirety of a closed domain of discourse, narrowing the range under consideration until only the conclusion is left.

“1+1=2” because general rules are in place. Whether it took 1 day, 1 month, 1 year or 2,000 years to establish this is irrelevant, given that it is now a part of established, socially contexting information.  The general rule that "1" is representative of singularity, along with the general rules for the other digits and symbols, represents the entirety of the closed domain of discourse for this formulation. If your argument is to break it down further to first cognition of a rule, as in the first time we learn to count objects before us, then my submission still stands: deduction plays a role, as it involves external (social) corroboration through first registration of pre-established context. The cognition of a symbol alone won't suffice. Both the symbols and the sensorial information would be meaningless without established context - there must be learned association.   It is impossible to know intent of meaning separate from random and disassociated perception without the necessary ‘confirmation bias’.  I intentionally use this term, in contrast to the negative association given it in psychological circles – the very point is that attention bias singles out pattern familiarity for better referencing context – so in this context, familiarity does not breed false validation (nor, I would hope, contempt!). In human terms absolute solitude as a basis for learning – acquiring contextual association for pattern recognition – can't work - inter-social corroboration is elemental and essential. If it’s the peace from solitude one requires to think creatively, then that’s a different matter, as going solo to observe, induce and formulate in a meaningfully relevant way can only happen once you’ve acquired the necessary toolsets. General rule builds from inter-social experience into axiomatic premise, beginning with our first interactions with our parents and/or primary caregivers.
And there is an enormous amount of this interaction that occurs through our maturation, working towards the aforesaid ‘general and functional saturation’ of corollary number two.
 
"There's nothing more elusive than an obvious fact." - Sherlock Holmes, via A.C. Doyle.
 
The deductive analysis - the efforts towards corroboration - involves secondary validation with purported entities (such as yourself) of this perceived objective reality. So it is a critical comparison of perceptions and precepts, in copious dialog and inquiry with others using general rule forms of established language, communication and logic, just as you and Alain and others routinely do throughout your inter-social existence. Wouldn't you also say it is "incredibly odd", if not a bit hypocritical, to discount this in the very forum predicated on it?
 
A few entries back, you said:
Quote
However, your portrayal of deduction is both problematic and revealing: Cluedo <sic> is a game designed to reward deductive reasoning. There are many of that kind of party games, and they all share one property: They have been vetted so that A) all the required premises are artificially provided, and B) the unknowable fallacies have been eliminated. The unknowable fallacies are things like relying on a dichotomy that may or may not be a false dichotomy. Mostly we are using induced facts to do our deduction, meaning that deduction, at best, is a way to speed up the process of learning.
And, the problem arises with the fundamental dishonesty of this: Deduction claims that it must be true if the premises are sound and the arguments are valid (and sufficient). In reality, when we perform linear combination (which deduction is) of probabilistic premises, the uncertainty associated with the premises is multiplied during the process of combination. So, instead of giving better results than induction, it gives more uncertain results, with a false sense of certainty.

On my use of a Clue board game example: this was of course intentionally simplistic for illustrative purposes. The example premises, and the fact that such devices are contrived to an intended outcome, are obvious but otherwise completely irrelevant, other than to simplistically demonstrate the deductive process.
 
On the unnecessary inference that to “reward deductive reason” is somehow a bad thing:  it can be alternatively predicated that exercises that strengthen the soundness of one’s deductive reasoning are a very good thing, as I’ve suggested in the above corollaries that true deduction takes great care and utmost rigor of discipline to know when one is deviating from it and venturing into induction. I do not mean to say induction is bad here, rather, it is the undisciplined and often unnoted wandering between the two that should be suspect.
 
On unknowable fallacies: being, as you say “…things like relying on a dichotomy that may or may not be a false dichotomy.” Please explain how it is that something ‘unknowable’ can be known and addressed – that is, “eliminated” from consideration in a given premise, as you say.  To be frank, the term “unknowable fallacy” is itself an oxymoron.
 
On the unnecessary inference of a “fundamental dishonesty”: And yet again, deduction is based on necessary inference. This is a hard rule, no exceptions.  "Probabilistic premises" are unnecessary (not necessary) inferences. If you're reasoning in this way, it is no longer deduction. If this greatly reduces the actual deductive elements of someone’s reasoning, then so be it.  The uncertain results may arise precisely because it is a switch to inductive thinking. The rules of deductive reasoning are not flawed - only the premises may be flawed. If a line of deductive reasoning generates a completely false outcome, then the reasoning is not corroborated, and the premises must be revisited. For this reason, corollary number four is required: deduction must be open to scrutiny and iteratively revalidated.
 
Inductive/Abductive thinking – reasoning by unnecessary inference - could just as easily lead you much further afield, and into completely un-predicated territory. Especially as you yourself say – where inductive premises are compounded through combination, the level of uncertainty as to outcomes is multiplied.  It would be to accept it on blind faith - to be "charitable" that, for example, you are not an AI program, or that your credentials are just as you present them to be, or that some other learned individual having additional training in this specific area is not typing under your name and account, and that therefore all your posits wrt the question of existence that you claim to infer them from, are not to be taken at face value as you present them. I can with ease inductively imagine numerous scenarios by which you selectively give me false information or misinformation, or even lazy (that is to say, unsubstantiated/uninvestigated/uncorroborated) information, or likewise imagine numerous scenarios where you didn’t simply buy into your premises but were misled by someone else that you held in high intellectual regard, and so still through your language be able to effectively infer to me with personal certainty that they are all affirmed de facto facts. Or even, that you have not (as Alain put it over in free will) used 'cherry picking’ to selectively focus in on just the premise and points you wish, so as to egocentrically ensure from your perspective the discussion is led a certain way.
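The multiplication of uncertainty we are both pointing at is easy to make concrete. A minimal sketch, assuming (purely for illustration) independent premises with made-up certainties:

```python
def chained_certainty(premise_probs):
    """Certainty of a conclusion that needs ALL premises to hold,
    under the simplifying assumption that the premises are independent."""
    total = 1.0
    for p in premise_probs:
        total *= p   # each combined premise multiplies the uncertainty in
    return total

# Five premises, each individually 90% certain:
print(round(chained_certainty([0.9] * 5), 3))   # 0.59
```

Five premises that are each individually quite plausible leave the chained conclusion barely better than a coin flip - which is exactly why the certainty claimed for the combined result can be false.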
 
I might also therefore abductively reason, from this wholly unnecessary array of inferences, a greatly compounded unnecessary inference: that you have some ulterior motive, some hidden agenda, for doing so. That for example you may be seeking to build an army of ‘like-minded’ beings, and dissuade and muddle the thinking of all those who might oppose you.  This may be ‘true’, or it may be utterly 'preposterous' - such is the wide-open range of outcomes to which the amplifiable uncertainties of unnecessary inference can lead us.
 
(…and over to the heart of my current free will question at hand – a potential bias not acknowledged?  That you would argue for the basis of an absolute dichotomy there, and yet allow for the connotation of an unreliable dichotomy as an unknowable fallacy here, is potential evidence of topical confirmation bias that should perhaps be explored?  It is as I said and intended it in that free will post: the current question being a challenge to all, to identify and reveal such bias in an act of mutual honesty – so as not to “fool ourselves”, but fairly also, not to try and fool each other.  Taking a cue from your/Alain’s exchange in the post on “what is Truth?”, either we have access to evidentiary validation of absolute states in reality, or we do not, and can only surmise such absolutes as theoretical abstraction, working from the discrete and relative knowledge our finite subjective nature affords us. To situationally criss-cross betwixt these two positions is to risk contravening the law of non-contradiction.   ...stating this, I will therefore commit to attempt to address any misconceptions of my own propositions on relative ‘free will’ versus absolute ‘Free Will’ over in that thread…)
 
I am compelled to mention, through a hardened sense of irony, that this seems to be about a greater struggle as to how we might all agree on the first few steps to be taken...
 
The point of suggesting cogito's possible use *as a starting point*, is that it would be no different than reasoning that, though I am certain that I experience something, I've no certainty that the basis of that experience is itself real, given the counter arguments set against such veracity, whether being in a potential state of hypnotism or dreaming or other illusionary forms of false perception. I can attest that the words you present make up sentences and inferences - utterances. But of their veracity I've no assurance. Our only recourse is that deductive investigation must take over, if we mutually intend it and enact it correctly, to work out corroborative validations of any claims.  I thus might manage to sustain some degree of coherency of reality for the duration of my existence, knowing that much of that upheld coherence may die with me, and/or with all other corroborating sources.
 
So how then to explain your apparent ‘selective attention’ to what I say?  Looking back through my posts, I clearly stated that I see deduction's purpose in corroboration of hypothesis and theory, not their "formulation" – the validation of (or rejection of, or indecision upon) the inferred suppositions of inductive/abductive reasoning.  And it would be for this reason that I suggest a person would then of logical necessity go beyond cogito, to seek evidence in the perceived world that would verify or dispel one’s reasoning of it.  Also noting that, with every test case, we through learned discipline remain aware of the "if" rider that goes with it.

At no point did I say it played a direct role in ‘formulation’. This is the basis for corollary number six - all ‘what ifs’ come inductively.  Deduction may reveal factual gaps, and it may also lack access to the necessary inferences required to fill those gaps. But formulations are not at all constrained by necessary inference – they run the gamut of unnecessary inferences as well. Deduction is strictly the ‘litmus test’, and direct observation of repeatedly testable results under mutually agreed general rule, the logical standard form of corroboration.
 
Another bias of my own to reveal - I have a personal investment in all of this, in particular as it relates to the case for cogito’s possible role in a balanced reasoning of existence. Was it through some form of synchronicity that you introduced cogito in working your premise against the said ‘trustworthiness’ of deduction?  No idea, but nonetheless, it offered a 'door' for me to open. To make the case that the balancing relation between perception and inter-social corroboration is of critical importance.  If through good reason this relation is deconstructed and proven unreliable - thrown out as it were – then I am left wide open to an impossible irrationality about perceived reality itself to deal with, as defined by a personal experience that itself contradicts all known rational explanations of reality (that I am aware of), as formulated, corroborated and conveyed by all other learned beings to this point.


…this added context may help explain why my emotional response to these types of discussions is sometimes a little stronger than one might anticipate. Comes with having a bit of ‘skin’ in the game, I guess…
 

Offline Andreas Geisler

  • Moderator
  • Full Member
  • *****
  • Posts: 117
Re: Deduction versus Induction
« Reply #26 on: March 31, 2014, 04:28 AM »
Quote
(apologies for the time it’s taken to prepare this response, but I wished to take great care to properly think it through…)
 
Andreas, to begin, I am thankful that you, Alain, Eric and others regularly challenge me, as I aim to be very careful, and extremely skeptical of every idea I explore, whether it is mine or someone else’s. These challenges are a part of the balancing equation for my ongoing rationalization of reality.  Though my disposition as a finite being - my humanity - accounts for the probability that I may vary in the effectiveness of this from day to day, I am self-assured that my investment in dogmatic rigor to this discipline above all else will in most circumstances set me aright. I am likewise self-assured that this same rigor will also challenge me to let go of ideas unequivocally proven implausible. It is to this latter ideal that I recognize the regular need to ‘bite my tongue’ and withhold response until I’m well past the ‘impulse thinking’ that is typically generated by the emotional qualities of some discourse.
Likewise, it is refreshing to have a real challenge.

Quote
I also continue to struggle with what appears to me to be a bit of ‘fuzziness’ and cross-over in terms and expressions – lines of definition seem to be arbitrarily crossed - so if I may, please allow me to offer the following working corollaries for consideration, and upon which I may anchor some foundational distinctions between deduction and induction/abduction. They may still need a little more tuning, but I think I have the gist in place, at least.  Thereafter I will attempt to further address your responses:
One possible cause for this is that some definitions may be circular in a deductive system built on an inductive basis; it is good to hammer out the basis to make sure this is not the case. And it is good to short-circuit the definitions, in order to test for undesirable circularities.
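One way to make that test concrete: treat the glossary as a directed graph of "term defined in terms of" and search it for cycles. A minimal Python sketch, with an invented mini-glossary (the entries are placeholders, not definitions anyone in this thread has committed to):

```python
def find_circular(definitions):
    """definitions: term -> set of terms used to define it.
    Returns every term that sits on a definitional cycle."""
    circular = set()

    def visit(term, path):
        if term in path:                           # came back to a term: cycle
            circular.update(path[path.index(term):])
            return
        for used in definitions.get(term, ()):
            visit(used, path + [term])

    for term in definitions:
        visit(term, [])
    return circular

# Invented mini-glossary: 'pattern' and 'induction' define each other.
defs = {
    "observation": {"sensory data"},
    "induction": {"observation", "pattern"},
    "pattern": {"induction"},
}
print(sorted(find_circular(defs)))   # ['induction', 'pattern']
```

Substituting each term's definition in place of the term itself ("short-circuiting") either bottoms out in primitives, or walks the same cycle this search detects.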

Quote
Corollary number one: ‘observation’ is observation, not anything more. It is sensorial data, potentially discernible in terms of pattern, but otherwise distinct from categorical associations and causal inference.  I take what my senses tell me at face value, regardless of coming from a waking reality, a dream induced state, or some other form of illusion. The experience of waking up from a dream state becomes new sensorial data, its own ‘observation’, to add to the set.  It is important to note that data is just that – data, not information (i.e., data plus context), and as I’ve said often, “data without the context is (no different than) noise”.
This is a point of contention. You see, the sensorial data doesn't make any sense at all, as such. It takes the infant a long time to piece together a coordination between the different dimensions of sense. So we must distinguish between Sensory data and Perception. Perception is the categorized digest of the sensory input. And that makes a huge difference to this, would you not agree?


Quote
Corollary number two: Human beings are learned beings, in that we construct relative fact around discrete observation using socially acquired tools.   Human beings are not born learned beings but observation of discernible pattern indicates that, where not impeded in some fashion, we all from inception become learned beings through our social interactions and efforts. This of course is not to say we all have the same degree and form of knowledge – far from it. Just that some level of ‘general and functional saturation’ is reached where a commonality of abilities, terms and understandings are in the majority.  At each person’s birth, a pre-established coalescence of societies already exists, and begins infusing every new person from inception with current and local data in providing context as information. I am a learned being of some fashion, so I may draw on established association for classification and categorization. This correlation of observed pattern and learned registry is the basis of necessary inference. This process is the acquiring of relative facts, that is, facts placed in a context relative to my learning experiences. These then are also socially relevant facts, given the communal background to their formation.
I think this is a somewhat dangerous assumption to make; the risk of inadvertently introducing circularities is enormous. How do you receive information from the society without a language? And once you have acquired language, there is this: Of course the language has already codified a lot of stuff, but how can you understand what it means if you do not have access to the reasons for its codification? I.e., if the language doesn't show its work, how can you learn from it?
 
Quote
Corollary number three: there is a clear distinction between necessary and unnecessary inference. Unnecessary inference is all that is not immediately self-evident by direct observation of discernible pattern and established corroboration.  I see an object before me, sitting on a table. It is red, roundish, and has all the observational qualities defining it is an apple.  Taste and smell further corroborate this observation.  The formation of the necessary inference that it is a perceptively real red apple is established.  Having acquired both the descriptive associations and basic math through learned experience, I observe that there are not one but two such objects of near identical nature before me.  All of this is necessary inference.  Note that I could make this observation at, say, ages 5 or 20 or 50, but I could not make this observation at 1 month of age.  These concepts must all at one point or another be introduced.  I observe further that there is a bruise on the apple on my right.  Another necessary inference that is corroborated by additional analysis – it is indeed a bruise.  This is all valid deduction.  I then surmise that this apple must have been dropped or banged at some point in finding its way to this table – this is not self-evident by any direct observation and consideration of the observant facts available to me – I have not been attendant to this apple from the time of its growth on a tree, to its picking & shipping to market, and its procurement and final manner of placement on the table. It is unnecessary inference, an assumption, and a clear distinction from observable facts. Inferring the possible ways it was bruised is a shift in thinking over to induction.
I am afraid that's another source of circularity, this time definitely active, not merely potential. "Apple" is an inductive abstraction. So is "bruise". What you are saying is that your perceptions conform with some of your expectations, formed of earlier perceptions. This goes all the way down to the physical level: Your neurons learn in an inductive fashion: The connections that correspond to the various aspects of the context are either strengthened or left to deteriorate, based on whether the same stimulus recurs with a similar context. What you were actually doing was recognizing the inductive categories, "apple" and "bruise on apple". That is not deduction. Deduction only comes into play when you start to consider why there is a bruise there.

I apologize for leaving the rest of your post for later, but I feel there are already too many points of contention for it to make sense to continue before you have given your opinion of my objections. Our differences seem to be very basic, so taking small steps is crucial.
Also, it won't all fit as quoted text because of the 25000 character limit (apparently).

About 1+1=2, though, I reject the idea that the formal proof is part of anything. Nobody needs the formal proof for anything, so the fact that it now exists is little more than a curiosity for after-dinner conversation. Again, the inductive learning of arithmetic is far too compelling to avoid.
Of course, maths is (by now) an axiomatic system, where all the possible discoveries are implicitly included in the axioms, so that "discoveries" can be made using deduction. These are, funnily enough, all things we would have to say we already could have known from the axioms, but showing how requires deduction.

The only cognitive processes that enter new information into the cognition engine are inductive processes. So, learning is inductive. Extracting the full benefit of what one has learned can involve deduction, though.

Also, you needn't worry about life without an illusory reliance on deduction and absolute fact, reality is, I assure you, incredibly robust.
Actually, make that credibly robust.
Reality being our inductive assumption that our sensory input is indicative of an ontological existence. Remember, the odds of a STRONG inductive basis leading you astray are far, far smaller than the odds of accidentally doing deduction with a weak inductive inference, which is guaranteed to corrupt your deduction forever.

Offline Pat Johnston

  • Moderator
  • Newbie
  • *****
  • Posts: 37
Re: Deduction versus Induction
« Reply #27 on: April 03, 2014, 01:14 AM »

Quote
One possible cause for this is that some definitions may be circular in a deductive system built on inductive basis, it is good to hammer out the basis to make sure this is not the case. And it is good to short-circuit the definitions, in order to test for undesirable circularities.

Andreas, I think if you had read through my entire post you would have noted that I challenge the premise of deduction built upon induction, and see them as distinctly different.  Consider the manner in which you use ‘induction’ in your inferences – try this ‘short-circuiting’ of the definition: replace the word with “unnecessary inference” wherever you use it, to see what I’m getting at.  When discussing forms of reasoning, ‘unnecessary inference’ simply means proceeding a line of thinking beyond what is known – so taking the apple’s bruise that you do see, and imagining ways it might have received such a bruise that you don’t have explicit knowledge of.  When the rule is applied this way, the difference is easy to distinguish.

It's an unnecessary leap to go straight from the mechanics of abstract reasoning to how all forms of ‘thinking’ take place technically/physiologically inside our brains.

If a first impression of an object never experienced before – e.g. ‘apple’ – is captured in a unique synaptic configuration that likewise never stood out before (by virtue of its first ‘pruning’), then there is nothing pre-existent from which to draw the basis of any unnecessary inference. For the brain to perform the function of induction – i.e., to produce contextually coherent examples of unnecessary inference – it must have BOTH sensory data AND referential context, the latter of which supplies some recognizable pattern from which an unnecessary inference could be drawn. Without the latter, you have nothing but intractable noise and no discernible pattern to leverage inductively. Hence the argument ‘falls down’.

Check out this link, and note in particular what it says regarding memory: https://www.childwelfare.gov/pubs/issue_briefs/brain_development/how.cfm

Maybe what you may consider inductive process at the neural level, is herein referred to as ‘implicit memory’ (e.g. the limited examples of a baby’s innate recognition of its mother’s voice, or human faces, etc…) whereas a much larger and growing expanse of explicit memories are formed of necessity, built up through repeated experiences interacting with the environment and people.

Similarly, in this link, note how critical the child’s interaction with their environment and social sphere is in supporting its brain development: http://www.urbanchildinstitute.org/why-0-3/baby-and-brain

…from this 2nd link:

Quote
“The brain’s ability to shape itself – called plasticity – lets humans adapt more readily and more quickly than we could if genes alone determined our wiring. The process of blooming and pruning, far from being wasteful, is actually an efficient way for the brain to achieve optimal development.”

Point being that it needs that social and environmental interaction in order to affect the ‘pruning’ and ‘blooming’.

…and from the same link:

Quote
At about three months, an infant’s power of recognition improves dramatically; this coincides with significant growth in the hippocampus, the limbic structure related to recognition memory. Language circuits in the frontal and temporal lobes become consolidated in the first year, influenced strongly by the language an infant hears. For the first few months, a baby in an English-speaking home can distinguish between the sounds of a foreign language. She loses this ability by the end of her first year: the language she hears at home has wired her brain for English.

It seems clear that the brain begins as a relatively blank slate, carrying over through birth a small stock of implicit memories that, it could be loosely argued, are uniquely inductive.  But it thereafter ramps up contextual association by processing comparative impressions – capturing and reinforcing sensorial data in synaptic configurations and routing them to different places for comparison and retention.

At the start, the first observations of the first apple leave the first tenuous and unassociated imprint of it in memory – just an unidentified object of certain noted ‘qualia’.  This sensorial data is a continuous stream of varying duration, so the build-up of referential context can happen very fast, just from a prolonged look at the object.  Subsequent distinct experiences of the same or a similar object will then be associated back to this original configuration, by the now established necessary association with the first impression of the object – the qualia match – that is to say, the inbound sense impression is associated with the established one, reinforcing the relevant synapses in the process.

This is a form of ‘circularity’ as it must be for a feedback loop to work, in that it is constantly coming around to compare, strengthen, or weaken the various synaptic associations,  and it is also utterly necessary – the iterative recording is what builds up, reinforces the neural configuration – greater exposure builds stronger associations – and it is as close as you can get to a neurologically natural representation of ‘necessary inference’, to the extent that the qualia contexting the impression better defines it in exactitude with each iteration.  But I won’t go so far as to say that this process is identical to our abstracted forms of rationalized thinking – neuro-chemical processes do what they do by their own natural genetic design. Formal logic historically evolved from pure abstracted concepts, themselves built up from root language and the formalization of meaning, also through copious dialectic – meme constructs that were worked out and could be encoded external to the mind/brain, and referred back to independently of the original thinkers.
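The strengthen-or-weaken feedback loop described above can be caricatured in code. This is a minimal sketch of my own – the learning and decay rates, the feature names, and the update rule are all invented for illustration, not drawn from neuroscience:

```python
from collections import defaultdict

LEARN = 0.2   # reinforcement per co-occurrence (assumed rate)
DECAY = 0.05  # weakening per absence (assumed rate)

# (stimulus, feature) -> association strength, starting from nothing
weights = defaultdict(float)

def experience(stimulus: str, features: set, known: set) -> None:
    """One pass of the feedback loop: strengthen associations for
    features present in this experience, let previously noted but
    now absent features decay."""
    for f in features:
        weights[(stimulus, f)] += LEARN * (1.0 - weights[(stimulus, f)])
    for f in known - features:
        weights[(stimulus, f)] *= (1.0 - DECAY)

known_features = {"round", "red", "stem", "blue"}
for _ in range(10):  # ten encounters with apples
    experience("apple", {"round", "red", "stem"}, known_features)

# Recurring features end up strongly associated; the never-recurring
# candidate ("blue") stays at zero.
print(sorted(weights.items(), key=lambda kv: -kv[1]))
```

Greater exposure builds stronger associations, as in the post: the iterative recording is what builds up the configuration, and associations that are never corroborated simply fail to take hold.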

This pruning & blooming will be dynamic, in that certain uniquenesses of each experience alter that first synaptic configuration as well.  This essentially grows the associative repertoire of the concept of ‘apple’, and can even happen well before it ever gets the language label ‘apple’ assigned to it.  Going through life and acquiring additional associative experiences – encountering apples in the first pieces given to us by our parents, then at the store, in bags in the fridge or in bowls on the table, alone in someone’s hand, hanging still fresh on a tree, or sitting in baskets in the back yard, as well as seeing different types of apples (Spartans, Galas, Granny Smiths, BC Macs, etc.) – will all certainly enrich the overall conceptual theme that is ‘apple’, causing it to ‘bloom’ further in associative concept.

Now take a similar though opposite scenario with, say, ‘cheese’ – if one never iteratively experiences ‘cheese’, in appearance or taste or narrative description or any other reference, then on walking into a cheese factory one day, one would be at a loss to place what this product is all about. Inductive thinking will be of no help here – there would be nothing contextually relevant to draw the conjecture from.

Andreas said:
Quote
This is a point of contention. You see, the sensorial data doesn't make any sense at all, as such. It takes the infant a long time to piece together a coordination between the different dimensions of sense. So we must distinguish between Sensory data, and Perception. Perception is the categorized digest of the sensory input. And that makes a huge difference to this, would you not agree?

Repetition at a minimum will introduce reinforced synaptic representation, ergo the introduction of a ‘potentially discernible pattern’ as I called it.  Where these experiences are also relevant to instinctual needs – hunger, fatigue, pain response, etc. – it’s logical to expect additional associations first building up around the sensorial data relevant to these first needs.  I agree that brain development and the maturation of coordinated senses take time, but the point of this first corollary is to establish a basis from which you or I, as adult thinking beings, might begin this reflection.  The premise is simply that, foundationally, sensorial data is distinct from ‘perception’ as you call it (or as I call it information – data with context), and this supplementary ‘categorical digestion’ is the focus of corollary number two.

As an aside, the infant is piecing nothing together on its own – that is the point: the socializing of what the infant experiences is ‘streaming’ in with all the rest of its sensorial feed – recognition begins from inception with the mother as a profound constant,  standing out from all other sense perceptions. Most importantly because she brings critical associative experiences related to those first instinctual needs – so the mother/parent is key, being both a part of the earliest sensorial stream, but also feeding & reinforcing concepts and associations into it.

Quote
re: corollary number two;
I think this is a somewhat dangerous assumption to make, the risk of inadvertently introducing circularities is enormous. How do you receive information from the society, without a language? And once you have acquired language, there is this : Of course the language already has codified a lot of stuff, but how can you understand what it means, if you do not have access to the reasons for its codification: I.e. if the language doesn't show its work, how can you learn from it?

(…if you’re trying to lead me to your ‘ostension’ as the basis of learning, I won’t go there.)  My meaning of ‘society’ here is any external human contact.  Language arrives first verbally, through the people in closest contact with the infant, as spoken word (perhaps, as some argue, even with the mother before birth) and accompanied by tactile interactions – connecting that to written language comes much later, with letter blocks and Sesame Street and flash cards and Dick and Jane readers and such.

Babies aren’t sitting in some isolation lab somewhere discovering shit on their own – they are at the center of a fairly constant micro attention and socially engaged by their families from inception. The understanding emerges as consistency of pattern in meaning that is strengthened through iterative reinforcement with people around us.  The essence of this inherent social consistency is that in imparting understanding on children, people always refer to apples as apples, and not arbitrarily switch to calling them bananas, then cats, then cardboard, then peat moss, etc.  Simply put, the child becomes familiar with the socially established meaning of things because there is a significant degree of established consistency to it, acting as normative reinforcement.

There is also no circularity in my logic, as I see such thinking as necessary to understanding the emergence of thinking itself – perhaps what you see as ‘circularity’, I see as iterative growth. There is a distinct starting point to it – the first sensorial impressions – and from there, each experience noted in the infant brain is itself temporally unique, and so while finding a contextual match, it may also add distinctly unique qualia to each iterative experience of it, enriching and expanding what has built up with each pass, until complex associative structure becomes the norm.

Any need to have ‘access to the reasons for its codification’ is only relevant for an already learned adult who intentionally undertakes to formally deconstruct and understand how language historically came to be.

Quote
re: corollary number three:
I am afraid that's another source of circularity, this time definitely active, not merely potential. "Apple" is an inductive abstraction. So is "bruise". What you are saying is that your perceptions conform with some of your expectations, formed of earlier perceptions. This goes all the way down to the physical level: Your neurons learn in an inductive fashion: The connections that correspond to the various aspects of the context are either strengthened or left to deteriorate, based on whether the same stimulus recurs with a similar context. What you were actually doing was recognizing the inductive categories, "apple" and "bruise on apple". That is not deduction. Deduction only comes into play when you start to consider why there is a bruise there.

No, it is not an "inductive abstraction".  Substitute ‘inductive’ and ‘abstraction’ with synonyms for clarity: it is not an ‘unnecessary inference’ pulled from a ‘concept not founded in reality’.  As already described here, it is an associative connection to an established synaptic configuration.  And I am definitely not saying that perceptions conform with expectations (where ‘expectation’ is a form of projection of what ‘should be’) – I am saying that perceptions conform with established synaptic associations, but only where the synaptic comparison determines it (i.e. existing mapped details = context = necessary inference, in so far as ‘this’ has been established to be just like ‘that’).  The point is that this is built up through the very contexting circularity of iterative reflection.  The associations made are therefore somewhat like ‘deductive’ steps, in their limited contextual sense, and through this same ‘deduction’ it will also identify & register notable differences, where ‘this’ is not like ‘that’.  If there is another neural configuration to explain what this distinct ‘not like’ difference means, then that too could be applied conceptually, but now this becomes the imagined unnecessary inference.

So then, moving to induction, the idea that someone painted a false bruise on the apple, could be established as a discrete comparative image, but only because we also have neural patterns that represent ‘painting an object’, etc. Without that or similar notions, we’d have no idea how an image of a bruise landed on the side of an apple… The idea of ‘thinking it so’ is true only as a sketch of an ‘idea’, whose synaptic elements are not yet corroborated by external sensoria, such that establishing its ‘literal truth’ to this point is conjecture – inductive unnecessary inference – until some additional case-specific information of such an actual event is obtained. 

With both examples, this is a more realistic way to use forms of reasoning as metaphor (not literal explanations) for actual synaptic process.

Likewise, the affirmation as to whether it is a real or painted bruise can only be established through additional fact gathering – deductive corroboration – a targeted investigation to gather additional specific facts about the bruise, and determine if it was painted on or not. This becomes its own new (or adjusted) stored synaptic information, and potential basis of additional deductive reasoning about what it is (a fake bruise on a real apple), and inductive reasoning about what else it might imply. (e.g. that there is some crazy painter,  going around painting bruises on apples?!!)

Quote
The connections that correspond to the various aspects of the context are either strengthened or left to deteriorate, based on whether the same stimulus recurs with a similar context. What you were actually doing was recognizing the inductive categories, "apple" and "bruise on apple". That is not deduction. Deduction only comes into play when you start to consider why there is a bruise there.

I disagree – as described above, this connecting and strengthening by recurring stimuli I see as deductive-like corroboration, whereas the considering of “why there is a bruise there” is the segue into induction (unnecessary inference), as it then becomes ‘what if’ speculation as to the numerous probable/possible and not-so-probable/possible causes behind it. Then the switch back to deduction to test the elements of each theory.  As you put it in this response, I am clearly using the definitions of induction and deduction in the opposite way to your usage.  Once an inductive premise as to its cause is put forth, deductive steps can again be followed to help substantiate or refute the new claim. This dance between inductive theory and deductive validation is what I meant by corollary number seven – critical thinking.

Quote
About 1+1=2, though, I reject the idea that the formal proof is part of anything. Nobody needs the formal proof for anything, so the fact that it now exists is little more than a curiosity for after-dinner conversation. Again, the inductive learning of arithmetic is far too compelling to avoid.

To be clear, you were the one who made a point that >1,000 years of effort toward a formal proof of this had some bearing on how we deductively prove it.  The assertion that arithmetic is inductively learned circles back to the first argument of how symbols, relations – all contextual information – get initially established.  As such, I refer back to my previous submission on this, from corollaries one and two.

Quote
The only cognitive processes that enter new information into the cognition engine are inductive processes. So, learning is inductive. Extracting the full benefit of what one has learned can involve deduction, though.

Again, I disagree, and assertion is not proof.  Simply saying it is this way doesn’t make it so. I am mindful of my own errors in this regard, so I circle back many times to ensure I give a logical anchor to compelling reason.  I’ve given reasons above as to why unnecessary inference holds only a partial role in establishing knowledge, with a necessity to couple with deductive process.  The aspect of learning that is inductive is purely the element of conjecture into ‘what ifs’ atop established knowledge.  That is, unless you can counter my point: how can a first impression – a new synaptic map of new sensorial datum – of itself generate an unnecessary inference of contextual association, without a necessary basis in established context?

Another example: a being is familiar with all types of balls, but has never before experienced an apple. The association of the first ‘apple’ with the established meme ‘ball’ is partially inevitable, as a limited necessary inference due to similarities in size and ‘roundness’ qualities, but can tell us nothing further about what ‘apple’ may be beyond ‘ball’. The uniquely ‘apple-like’ characteristics that stand out from ‘ball-ishness’, need to be explored, discovered and contextually mapped. Again, no infants go through this without copious amounts of parental/guardian prompting, coaching and social support.

Quote
Also, you needn't worry about life without an illusory reliance on deduction and absolute fact, reality is, I assure you, incredibly robust.

Actually, make that credibly robust.

…there’s that irony again – an unnecessary inference (induction) that I should trust you, that what you say is, per se, factual. From my limited familiarity with your thinking styles and beliefs exhibited elsewhere, I would not have predicted that you’d plead for a ‘leap of faith’.

Quote
Reality being our inductive assumption that our sensory input is indicative of an ontological existence. Remember, the odds of STRONG inductive basis leading you astray is far far smaller than the odds of accidentally doing deduction with a weak inductive inference, which is guaranteed to corrupt your deduction forever.

(“…odds of STRONG inductive basis…”??? – I get a kick out of how you’re using unnecessary inference to rationalize the necessity of it!)  Accidental or otherwise, if a premise is proven unreliable, it can no longer be called deduction. Period.

…and yet again – that’s “sensory input” corroborated substantially through our ‘social interactions’.  There is nothing for me to remember – this is your assertion, and it doesn’t even align with what you’ve said before about the compounding inaccuracies of unnecessary inferences. The point of my corollary number three is to not mislabel inductive reasoning as deduction.  Full stop. Compounding unnecessary premises, as you rightly pointed out a few posts back, can only lead to runaway conjecture. Iterative deduction, or full suspension of the theory, are the only two means to derail such conjecture, and by virtue of its iterative necessity (corollary number four) no such corruption of deduction is possible (except by breaking the rules and stepping outside of the deductive process). Said another way – the real corruption is to arbitrarily switch from deduction to induction, but still call it all ‘deduction’.

Offline Andreas Geisler

  • Moderator
  • Full Member
  • *****
  • Posts: 117
Re: Deduction versus Induction
« Reply #28 on: April 03, 2014, 05:27 AM »
Quote
When discussing forms of reasoning, ‘unnecessary inference’ simply means proceeding a line of thinking beyond what is known – so taking the apple’s bruise that you do see, and imagining ways it might have received such a bruise that you don’t have explicit knowledge of.  When applying this rule as such, it’s easy to distinguish the difference.
But if we never go to "unnecessary inference" we will never get anywhere. The world is not available to us in such a way that we have a basis for forming necessary inferences. Unless we first go to unnecessary inference - but once we go there, we will no longer have fully necessary inferences, as they are built on unnecessary ones.
This is the problem. Cogito is circular. We have no necessary basis for inference at all.

And laugh all you want about my suggestion that induction is inductively strongly justified: WE HAVE NO ALTERNATIVE.

If you want a less spontaneous explanation, try this one: http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/

Offline Pat Johnston

  • Moderator
  • Newbie
  • *****
  • Posts: 37
Re: Deduction versus Induction
« Reply #29 on: April 03, 2014, 11:30 AM »

At no point did I say “…we can never go to ‘unnecessary inference’…” – quite the contrary – I’ve said (repeatedly now) that ‘induction’ is something we habitually do. I’ve also said ‘induction’ and ‘deduction’ work together (corollary number seven), and also that you can’t fault the process of deduction itself for human error in substituting it (knowingly or not) with inductive thinking. 

“…less spontaneous explanation…”??  I am left to wonder at your saying this immediately after my last post – is that an example of not getting past emotionally charged ‘impulse thinking’? Should I take it as evidence of myopic thinking?

And the world is definitely “available to us” to some substantial degree, as far as relativity and proximity allow it, to corroborate necessary inference, in many different ways. 

Another example: driving to work one morning, it was still dark out, and all of the street lights were on.  Then, just up ahead, one of the street lights blinked out. My immediate and unnecessary inference is that it burned out. This is fine as conjecture, but how do I know this for sure?  At that moment, I don’t – insufficient evidence, other than that I have seen other lights randomly blink out as well, suggesting a possible consistency – that these light bulbs have a finite ‘operating window’, and tend to burn out after being on all night.  However, the next day I note that the same light is back on, and therefore I must cycle back on my original inference and re-corroborate – at a minimum, the new corroboration – the necessary inference – is that my previous inference was premature and did not fully consider all possibilities.  It may not have burned out – it may have a defective or incorrectly set sensor, causing it to turn off prematurely before full daybreak. Or there is defective wiring running to it.  Or, alternatively, a city maintenance technician showed up during the day and replaced the bulb, or the defective sensor, or serviced it in some other way. The flexibility of our unnecessary inferences aside, two points stand out: that at a minimum, we can know what our inferences do and don’t entail; and that with sufficient effort, subsequent investigation could determine the underlying truth of a relevant claim.
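The street-light episode can be sketched as hypothesis pruning. This is a toy model of my own – the hypothesis list and the single yes/no entailment are deliberate simplifications of the richer possibilities described above:

```python
# Induction proposes candidate explanations for the light blinking out.
hypotheses = {
    "bulb burned out",
    "defective or mis-set light sensor",
    "defective wiring",
    "briefly serviced by a technician",
}

# What each hypothesis entails about the next morning, absent repair:
# does it predict the light stays off?
entails_off_next_day = {
    "bulb burned out": True,
    "defective or mis-set light sensor": False,  # may cycle on again
    "defective wiring": False,                   # intermittent
    "briefly serviced by a technician": False,   # fixed, so on again
}

observed_on_next_day = True  # the light is back on

# Deduction eliminates candidates whose entailments contradict
# the new observation.
surviving = {
    h for h in hypotheses
    if not (entails_off_next_day[h] and observed_on_next_day)
}
print(surviving)  # "bulb burned out" (without repair) is ruled out
```

The model exaggerates, of course – a burned-out bulb that was replaced would also explain the observation – but it captures the two points above: we can know what our inferences do and don't entail, and further observation prunes the conjecture space.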

I think the real concern one should be addressing is that induction is habitual and rampant in our ways of thinking – so, quite the contrary to saying we ‘never go there’ – we always go there!!   Induction therefore actually happens over-abundantly, and often lacks disciplined reflection, whereas deduction does not come so naturally for many – usually only from the extended pains of learning by trial and error – and so requires significant rigor and discipline to pull off effectively.

Is there an apparent tendency towards myopia? Else you’d by now acknowledge my repeated premise – that my use of cogito is only half of a fuller rationale; if it will make you happy, call it the inductive side (i.e. the self-assured inference that all sensory input is de facto real), and the other half is the ongoing corroboration through engagement with our environment and with other humans, using contextual language and reason (which I myself did not create from nothing).

By the way, your link/blog is full of unnecessary inferences, assumptions, framing arguments and some pretty clear ulterior motive/pre-established bias (…else why the unnecessary segues into challenging theism??).  For example, the assumption that everyone thinks the future will resemble the past – I certainly don’t, especially if I can will it otherwise; or that we need to “end in unexaminable assumption” – I likewise don’t expect this either.

Selective arguments framed to emphasize only cases of extreme abstraction are their own vicious circles, and don’t discount other, more practical arguments that simply put forth the necessity of both relativity in the consideration of facts and the common-sense recursive introspection of them.  A thinking human being may naturally want to be cocky and confident and self-assured in all their premises, but our finite nature denies us this right.  Arguments focused on infinite regress would only be relevant to infinite beings, and it is irrational for finite beings to claim them – we are finite beings (as my ongoing corroboration with all you guys in the ‘real’ world keeps telling me) and so we can’t assume to have unexaminable assumptions – from our perspective it can only be open-ended, and it is only the finite recursive discipline of ongoing corroboration that we should be concerned about. And like breathing, hope that it should only stop for each of us when we lose our thinking faculties and die. The sadness is that the former often happens well before the latter. So put aside these assertions of absolutes and learn to find and maintain the balance in critical thinking for each situation.

The wholesale dismissal of deduction is akin to throwing the baby out with the bath water, when the ‘baby’, as metaphor for ‘relative truth’,  is the object of greatest concern.