Critical Thinking: Understanding Inductive Arguments

Inductive arguments apply what is known about some objects or concepts to objects and concepts that are unknown. An inductive argument attempts to support its conclusion via probability. A statement is considered probable if it is more than 50% likely to be true; that is, it is more likely to be true than false. Inductive arguments, however, are unable to establish their conclusions with full certainty; there is always some degree of uncertainty about the veracity of the conclusion. Inductive arguments are therefore fallible: no matter how strong the argument is, there is always the possibility that the conclusion is false. They cannot be completely certain because their supporting premises rely on empirical, or observational, evidence, and this kind of evidence is never fully reliable.

Inductive reasoning is usually taught alongside deductive reasoning. Deductive reasoning begins with a general statement and then moves toward a specific conclusion. In deductive reasoning, what is true of a category is true of every member of that category. For example: all women are female; Elizabeth is a woman; therefore, Elizabeth is female.

Inductive arguments are evaluated in terms of strength and cogency. Strength pertains to the argument's structure: an inductive argument is strong if, assuming the premises are true, the conclusion is probable. That is, it is unlikely that the premises would be true and the conclusion false. A cogent inductive argument has a strong structure and all of its premises are true.

Inductive arguments are used to identify patterns and relationships. They allow us to formulate and support empirical generalizations and observations. They are, in essence, crude forms of the scientific method, which moves through observation, experimentation and conclusion. Inductive arguments can support conclusions that go beyond the data that has been observed; however, they can only be used to support generalizations about empirical data.

There are four kinds of inductive arguments:

1. Inductive generalizations: make a claim about an entire group based on observations of some of that group's members

2. Analogical arguments: infer that because something shares one feature with another thing, it also shares a second feature with it

3. Causal arguments: use correlations to identify cause-effect relationships

4. Abductions: use empirical evidence to evaluate explanations

Inductive generalizations argue from empirical evidence about a subset of a given population. An inductive generalization has only one premise and one conclusion, and it has three basic parts:

1. Sample: described in the premise; consists of the members of the given population that have been observed

2. Population: the group that is described in the conclusion


3. Property: the feature that is being measured in the premise

To determine whether an inductive argument is cogent (that is, whether it succeeds as an argument), you must first assume its premises are true. Then you must focus on the structure of the argument and the connections between the premises and the conclusion. To determine how probable the conclusion is, you must look at:

1. The size of the sample being used in the argument: this refers to the number of members of the population that were observed, essentially the sample size of an experiment. If the sample size is too small, the data is likely to be inaccurate, and the inductive generalization would be an unfit generalization, much as an experiment with too small a sample would be considered weak and lacking statistical power. Self-selected samples consist of participants who choose themselves rather than being screened or randomly selected; such samples are very biased.
2. The representativeness of the sample: a sample is supposed to represent the given population as a whole. An unrepresentative sample is a biased sample. Samples cannot be cherry-picked, as this undermines the conclusion and produces unfit generalizations. Cherry-picking occurs when the individual constructing the sample picks and chooses data that supports the conclusion they want to support, and leaves out data that supports a different conclusion.
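Both points can be illustrated with a short simulation. The sketch below (hypothetical numbers, Python standard library only) estimates how widespread a property is in a population, first from a fair random sample and then from a cherry-picked one:

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

# Hypothetical population: 10,000 members, 40% of whom have the property.
population = [True] * 4000 + [False] * 6000
random.shuffle(population)

def estimate_rate(sample):
    """Inductive generalization: project the sample's rate onto the population."""
    return sum(sample) / len(sample)

# A reasonably sized random sample tracks the true rate of 0.40.
fair_sample = random.sample(population, 500)

# A cherry-picked sample keeps only members that have the property,
# so the resulting generalization is unfit.
biased_sample = [x for x in population if x][:500]

print(round(estimate_rate(fair_sample), 2))   # close to 0.40
print(round(estimate_rate(biased_sample), 2)) # exactly 1.0
```

The biased sample reports the property as universal even though only 40% of the population has it, which is exactly why a cherry-picked sample cannot support a generalization.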

Analogical arguments take observations about one concept, idea or object and infer that they apply to another. That is, they form analogies, or relationships, between concepts, ideas or objects.

This is different from causal arguments, which infer cause-effect relationships from observed correlations. In analogical arguments, it is not always clear why the first similarity should be related to the second, even though the argument posits that the relationship exists.

Analogical arguments are based on analogies, as the name indicates. An analogy is a comparison between two or more things, usually with the intention of indicating the type of relationship between them. Analogies are helpful for identifying shared features and pointing out similarities between concepts, ideas or objects that may, at first glance, seem dissimilar or unrelated. As stated previously, analogical arguments infer that one thing shares features with a second thing.

An analogical argument, therefore, uses one similarity to establish a second similarity. Analogical arguments are versatile and can be used in many instances to identify relationships. They can also help in determining probable conclusions about the properties of concepts, objects or ideas.

Of course, when evaluating analogical arguments, you must evaluate the validity of the analogies themselves that are being used to compare things. An analogical argument can be evaluated in ways similar to evaluating inductive generalizations.

One thing that must be considered is the number of examples of concept, object or idea A described within the argument; this is analogous to sample size. Further, the sample must be representative: the more diversity within the group, the stronger the argument. Even more important, concept, object or idea A should be relevantly similar to concept, object or idea B: the more relevant a similarity is, the stronger the analogy, and therefore the argument. A false analogy leaves out relevant differences when things are compared, which makes the argument misleading, as it will seem stronger than it is.
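These criteria can be sketched as a crude scoring heuristic. The function and feature sets below are hypothetical illustrations, not a formal measure from the logic literature; the score is simply the fraction of A's relevant features that B shares:

```python
def analogy_strength(a_features, b_features, relevant):
    """Fraction of A's relevant features that B also has (0 = none, 1 = all)."""
    relevant_in_a = a_features & relevant
    shared = relevant_in_a & b_features
    return len(shared) / len(relevant_in_a)

# Hypothetical example: arguing from Earth to Mars about habitability.
earth = {"liquid water", "atmosphere", "moderate temperatures", "one moon"}
mars = {"atmosphere", "moderate temperatures", "two moons"}
relevant_to_life = {"liquid water", "atmosphere", "moderate temperatures"}

print(round(analogy_strength(earth, mars, relevant_to_life), 2))  # 0.67
```

Note that features outside the relevant set play no role in the score, mirroring the point that only relevant similarities strengthen an analogy; a false analogy would inflate the score by quietly dropping a relevant difference from the comparison.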

Of course, an analogical argument can never truly be considered certain: there can always be relevant differences that we have failed to identify. Further, it is not always possible to prove that the similarities are actually relevant. That is, it is not always possible to affirm with complete certainty that the first analogy supports the second analogy.

The third type of inductive argument is causal reasoning. Causal arguments allow us to determine which causes are most likely responsible for an effect. A cause is something that brings an effect into existence; an effect is the result of a cause.

Causes are divided into categories based on the relationships they have with effects:

1. Necessary condition: a cause that is required to bring about an effect. That is, the effect is only possible via that specific cause.
2. Sufficient condition: a cause that brings about an effect on its own, every time it (the cause) occurs.

A cause may also be both necessary and sufficient. For example, to be a sister, you must be both female and a sibling: being a female sibling is necessary and sufficient for being a sister. Causes may also be neither necessary nor sufficient; this occurs when they can be replaced by other causes and must work together with other causes to bring about the effect.
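The two conditions can be checked mechanically against a list of observations. In this sketch (hypothetical data), each observation is a pair recording whether the cause and the effect were present:

```python
def is_necessary(observations):
    """Necessary condition: the effect never occurs without the cause."""
    return all(cause for cause, effect in observations if effect)

def is_sufficient(observations):
    """Sufficient condition: the cause never occurs without the effect."""
    return all(effect for cause, effect in observations if cause)

# (cause_present, effect_present) pairs -- hypothetical observations.
obs = [(True, True), (False, False), (True, True), (False, False)]
print(is_necessary(obs), is_sufficient(obs))  # True True

# Here the effect also occurred without the cause, so the cause is
# sufficient but not necessary: it can be replaced by another cause.
obs2 = [(True, True), (False, True)]
print(is_necessary(obs2), is_sufficient(obs2))  # False True
```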

The goal of a causal argument is to help determine the most probable cause of a specific effect. Causal arguments do this by analyzing the correlation between the effect and each of the likely causes. A basic principle is as follows: the cause should be present when the effect is present, and absent when the effect is absent.

There are several ways to judge causation:

1. Agreement: if only one of the probable causes is always present when the effect is present, then that cause is probably the driving force of that effect.

2. Difference: if only one of the probable causes is always absent when the effect is absent, then that cause is probably the driving force of that effect.

3. Joint Method: combines both Agreement and Difference. If only one of the probable causes is always present when the effect is present, and always absent when the effect is absent, then it is most likely the driving force of that effect. This method is the strongest determinant of causality. It is the logic behind the controlled experiment, a standard of scientific research: a study sample in which the cause is present is compared with a control group in which the cause is absent. This is very similar to how scientific and psychological experiments are set up.

4. Correlation: if the presence or absence of a cause is highly correlated with the presence or absence of an effect, then it is probably the cause. This is the weakest method to use, and pertains more to imperfect conditions, where the cause is typically present or absent when the effect is typically present or absent. One thing to remember is that correlation does not mean causation. This will be explored further below.
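The Joint Method in particular can be expressed as a small filtering procedure. In the sketch below, the cases and candidate causes are hypothetical; a candidate survives only if it is present exactly when the effect is present and absent exactly when the effect is absent:

```python
def joint_method(cases, candidates):
    """Return the candidate causes whose presence matches the effect's
    presence in every observed case (Agreement + Difference combined)."""
    return [c for c in candidates
            if all(case[c] == case["effect"] for case in cases)]

# Hypothetical cases: which suspected causes were present, and did the effect occur?
cases = [
    {"damp": True,  "cold": True,  "mold": True,  "effect": True},
    {"damp": True,  "cold": False, "mold": True,  "effect": True},
    {"damp": False, "cold": True,  "mold": False, "effect": False},
    {"damp": False, "cold": False, "mold": True,  "effect": False},
]

print(joint_method(cases, ["damp", "cold", "mold"]))  # ['damp']
```

"cold" is ruled out because the effect once occurred without it, and "mold" because it was once present without the effect; only "damp" tracks the effect in every case.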

Causal reasoning is used to support a causal claim, that is, a claim that something is probably the cause of an effect. Like inductive generalizations, causal arguments apply to empirical evidence. Causal conclusions are only probable because it is always possible that we have overlooked something. We can never be certain that our list of potential causes is complete; there could always be other potential causes. Further, correlations may be weakened by exceptions we have overlooked.

An important consideration is that a high correlation between two things does not mean that one caused the other. This maxim is known as "correlation does not imply causation." It could be that the reverse of what we determined is true: what we thought was the effect was actually the cause, and vice versa. Or what we took to be cause and effect may both be effects of some other cause. Or the correlation we saw may be mere coincidence.
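The common-cause case can be demonstrated with a simulation. In this hypothetical sketch, hot weather drives both ice-cream consumption and sunburns; neither effect causes the other, yet they end up strongly correlated:

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

records = []
for _ in range(10_000):
    hot = random.random() < 0.5               # the hidden common cause
    ice_cream = hot and random.random() < 0.9  # effect 1 of hot weather
    sunburn = hot and random.random() < 0.8    # effect 2 of hot weather
    records.append((ice_cream, sunburn))

with_ice = [s for i, s in records if i]
without_ice = [s for i, s in records if not i]

# Sunburn is far more frequent on ice-cream days, despite no causal link
# between ice cream and sunburn: both are effects of the hot weather.
print(round(sum(with_ice) / len(with_ice), 2))
print(round(sum(without_ice) / len(without_ice), 2))
```

The first rate comes out near 0.8 and the second far lower, so anyone who saw only the two effects could easily mistake one for the cause of the other.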

So how can correlation and causation be distinguished? You can do the same as you did with inductive generalizations and analogical reasoning: add premises regarding correlations and explanations, indicating that the sample used is representative of the population and that Property 1 is indeed relevant to Property 2. With causal arguments, you must also add a premise about explanation: explain the relationship between the cause and the effect, that is, how the cause would bring about the effect.

Abductions are one of the most widely used types of arguments. Abductions are used to evaluate explanations. An explanation is a group of statements that attempts to indicate how or why something is the case in a particular instance. Explanations are also known as theories or hypotheses. Explanations are not the same as arguments, since they do not try to prove that something is the case or that a particular conclusion is true. Rather, explanations take the veracity of what they are trying to explain for granted and instead work to clarify how and why it came to be. That is, an explanation does not work to prove what it is trying to explain, or that the explanation itself is true. An explanation can, however, be supported by an argument, and an abduction is one of the best ways to show that an explanation is probably true.