So you have a particular topic you would like to study and have narrowed down the perspective you wish to employ from one or a combination of the three sociological concepts: functionalism, conflict, or interactionism. Now the question is what methodology will you use in order to monitor your subjects, collect relevant information, analyze, and effectively group or summarize your findings?
The majority of studies conducted employ what are known as social research or evaluation methods, such as surveys, focus groups, and sampling.
All of these tactics are categorized under the heading of qualitative methods because they produce more than just statistics and head counts; they deliver descriptive answers made by subjects in response to pointed questions.
B. Primary Methodologies for Gathering Research
In short, there are two main types of methodologies used in the research-gathering process: general and evaluation.
To clarify, the first relates to the gathering of information from a particular group or subset, whereas the latter entails comparing one set of data to another. Hence, the evaluation process requires the additional steps of collecting data from a second group and then mirroring the findings against those of the first group.
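The "evaluation" approach described above can be sketched in a few lines: collect responses from two groups and mirror one set of findings against the other. The group names and ratings below are invented purely for illustration.

```python
from statistics import mean

# Minimal sketch of the evaluation methodology: data is gathered
# from two groups and the findings are compared directly.
# Both response lists are hypothetical.
group_a = [4, 5, 3, 4, 4]  # e.g., survey ratings from the first group
group_b = [2, 3, 3, 2, 4]  # ratings from the comparison group

# Mirror the first group's findings against the second group's.
difference = mean(group_a) - mean(group_b)
print(round(difference, 1))  # 1.2
```

In practice a researcher would apply a formal significance test to the difference, but the core of the evaluation approach is exactly this second round of collection and comparison.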
A further distinction can be drawn by specifying whether the information to be collected in a particular study should be empirical data or categorical data.
Empirical studies rely upon or are derived purely from observation or experiment. In these types of studies, there is no manipulation of the subjects to produce preconceived results.
As quoted from the Web site, answerpool.com: "Empirical usually implies that there has either been no attempt to fit the results to a theory or that the attempts have not been successful. Sometimes, however, our theoretical knowledge is not sufficient to explain the results."
The ideal study formulation entails:
(1) Theory predicts the results of the experiment.
- or -
(2) Fault found in the experiment and remedied.
- or -
(3) Fault found in the theory and remedied.
In contrast, categorical studies seek out subjects and information that conform to a particular preset agenda; for example, a study of persons with eating disorders indicated that in order to be classified as an anorexic, a person had to categorically eat fewer than 600 calories per day. In this instance, "categorical" is used in place of the phrase "across the board." In other words, all subjects must fit this description in order to use the assigned classification term.
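A categorical criterion of this kind amounts to a single yes/no test applied "across the board" to every subject. The sketch below illustrates this with the 600-calorie threshold described above; the subject labels and intake figures are invented for illustration.

```python
# Hypothetical sketch of applying a categorical inclusion criterion:
# every subject must satisfy the same test to receive the classification.
ANOREXIA_CALORIE_THRESHOLD = 600  # criterion cited in the study above

def meets_criterion(daily_calories):
    """Return True if the subject fits the categorical classification."""
    return daily_calories < ANOREXIA_CALORIE_THRESHOLD

# Invented daily-calorie figures for three anonymous subjects.
subjects = {"A": 550, "B": 1200, "C": 480}
classified = {name: meets_criterion(cal) for name, cal in subjects.items()}
print(classified)  # {'A': True, 'B': False, 'C': True}
```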
C. Survey Creation
While we have laid out the basic types of research-gathering strategies, we have not explained the in-depth process of creating a questionnaire or another useful tool from which data pertinent to the study may be collected and analyzed.
Though there are many different methodologies available, we will focus on the creation of a survey because it is the most basic form to explain and design.
While some surveys that you have filled out in the past may have appeared highly simplistic in nature, there probably was a great deal of forethought put into the creation of each and every question, regardless of the basic wording structure or childlike tone.
In the design of a survey, the initial planning and drafting of questions is extremely important in terms of the ultimate data that is to be obtained.
Once the surveys have been distributed, it becomes very difficult, if not impossible, to make adjustments to the basic set of research questions. "Standardization" is the term used for ensuring that all subjects are responding to an identical set of questions. Should questions differ from group to group, findings cannot be accurately assessed or compared.
When designing a survey, the two major considerations are validity and reliability.
According to the American Psychological Association, the term "validity"
refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores.
In layman's terms, if you want your findings to be appropriate, meaningful, and useful, then the validity of your findings is paramount.
In order to ascertain whether your survey questions will produce valid findings, you need to consider the purpose and context of the survey in order to determine whether or not the assumptions that you have made are relevant.
While using a particular survey question may produce valid findings for the focus of one particular study, it may not yield the same results for a study focused on a different topic. For example, take a survey designed to measure employee morale at Company XYZ. The same questions may or may not be able to also accurately assess employee productivity.
In order to determine whether the questions of a particular survey would generate valid results, you first need to assess whether the questions have been purposely and accurately designed in line with the objectives of the study.
While validity is a central component of any study, there are actually three main types of validity: content, criterion-related, and construct.
Each of these types is a variant approach one can take in the design of a survey.
Content validity evaluates whether the survey questions accurately represent the topic being addressed.
Criterion-related validity entails the calculation of a validity coefficient (more commonly, a correlation coefficient) found by correlating the survey questions with another measure connected to the objective of the study, such as service satisfaction or the number of referrals.
Construct validity looks at what is being measured by focusing on the relationship, or correlation, between components, such as motivation or satisfaction.
Of the three, content validity is the simplest to assess, while criterion-related and construct validity involve more complex analyses. With construct validity, for instance, one needs a theoretical model of how the data should correlate, often expressed mathematically, in order to identify how one construct relates to another within the study. Relationship patterns need to be predicted, and those predictions then need to be examined against the actual data.
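The correlation coefficient at the heart of criterion-related and construct validity can be computed directly. The sketch below implements the standard Pearson formula; the survey scores and referral counts are invented for illustration.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: per-respondent survey scores correlated against
# an external criterion measure (here, number of referrals).
survey_scores = [3, 4, 2, 5, 4]
referrals     = [1, 3, 0, 4, 2]
print(round(pearson_r(survey_scores, referrals), 2))  # 0.97
```

A coefficient this close to 1.00 would suggest the survey items track the criterion measure closely; values near 0.00 would suggest the items measure something else.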
While these validity concepts can be highly useful, they also take some statistical understanding and analytical know-how to be implemented properly.
The second component we identified as integral to the design of a survey is reliability. By definition, reliability is synonymous with repeatability and accuracy. When applied to survey design, reliability is the extent to which a survey will provide the same consistent results with repeated measurements.
When estimates are obtained from a smaller sample of survey data, the possibility always exists that those estimates may differ from the true figures that would be obtained through a formal census of the entire target population. This anticipated difference is referred to as the "sampling error" (or margin of error, or standard error, if strict statistical analysis is implemented).
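For a survey question with proportion-style answers, the margin of error can be approximated with the standard formula for a proportion at 95 percent confidence. The sketch below uses the conventional worst-case proportion of 0.5; the sample sizes are illustrative.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate sampling error for a proportion p from a sample of n,
    using the normal approximation (z = 1.96 for ~95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 respondents yields roughly a +/-3% margin of error:
print(round(margin_of_error(1000) * 100, 1))  # 3.1
```

Note how the error shrinks only with the square root of the sample size: quadrupling the respondents merely halves the margin, which is why the cost considerations discussed below matter so much.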
The general rule of thumb is that the more individuals who are surveyed or observed, the greater the accuracy of the survey results. Because surveying an entire population is usually impossible or cost-prohibitive, statisticians and researchers must make do with samples that attempt to accurately reflect the views of the wider population.
Traditionally, the goal of most surveys is to contact the minimum number of people necessary to produce the most accurate estimates of sentiments that will reflect the wider target population.
The specific number of respondents needed depends upon the size of the target population and the anticipated variance in data, or sampling error rate. To measure reliability, there are a number of methods from which you can choose, based on your objectives.
Test and retest, split-halves, and internal consistency are only a small sampling of the methods that exist, each of which results in a number between 0.00 and 1.00, with the higher numbers indicative of increased reliability. This number is called a correlation coefficient; if it attained a value of 1.00, the correlation of the data would be 100 percent, or perfect.
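One of the methods named above, split-halves, can be sketched concretely: each respondent's odd-numbered items are totaled and correlated against their even-numbered items, and the result is stepped up with the Spearman-Brown correction. The response matrix below is invented for illustration.

```python
from statistics import mean

def split_half_reliability(item_scores):
    """Split-half reliability coefficient: correlate each respondent's
    odd-item total with their even-item total, then apply the
    Spearman-Brown correction to estimate full-length reliability."""
    odd  = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    mo, me = mean(odd), mean(even)
    cov = sum((o - mo) * (e - me) for o, e in zip(odd, even))
    r = cov / (sum((o - mo) ** 2 for o in odd) *
               sum((e - me) ** 2 for e in even)) ** 0.5
    return 2 * r / (1 + r)  # Spearman-Brown step-up

# Hypothetical responses: four respondents, four survey items each.
responses = [
    [5, 4, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
    [1, 2, 1, 1],
]
print(round(split_half_reliability(responses), 2))  # 0.97
```

As the text notes, the resulting coefficient falls between 0.00 and 1.00, with higher values indicating greater reliability.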
E. Validity and Reliability
Because a survey can be no more valid than it is reliable, these two core components of any survey or study should each receive equal amounts of time and energy to ensure that both meet the necessary standards.
F. Survey Areas Needing Attention
In going back to the general design of your survey, there are four principal issues that need to be addressed:
- Respondent attitude. This refers to how much of an imposition a respondent may feel when asked to answer a couple of questions.
- Nature of questions. Bear in mind, some people find it offensive to be asked personal questions. Even questions that are not especially personal, such as political party affiliation, may strike respondents as matters not for public knowledge.
- Cost. Typically, a company or organization has allocated only so much money to designing a particular survey.
- Ability of the research tool to produce results relevant to the objectives of the study. A sensible link needs to be in place between the questions and the study objective. If you need only brief, snippet-type answers, a short questionnaire may work incredibly well. If you are seeking longer, more in-depth responses, you may need to switch your methodology to an in-person interview.
Thus, there are many aspects of conducting survey research, including survey design, useful constructs, gathering of empirical versus categorical information, and ensuring the study's validity and reliability. It is important to take into consideration the range of factors that may potentially affect findings and, based upon such, adjust applicable questions before distributing the survey.
Once the survey has gone out, it is best to stay with your original questions, as changes will create an inordinate number of problems, including a lack of standardization and reliability.