One key aspect of research is measurement. If we want to know if Dutch men are taller than Chinese men, then we can measure them and find out. One tradition within social science research is to measure things. This is based upon a belief that things can be measured, and measured reliably. Of course, some people disagree with this, arguing that certain aspects of human behaviour and experience cannot be objectively measured. These people adopt a qualitative approach to research, more of which in the next chapter. For now, we will focus on quantitative approaches.
So, what kind of measurements can we make? First of all, we can make physical measurements. Examples would include the height or width of something, or how much it weighs. We might even wish to include measuring age in this category. We can also measure less physical things, such as how often something happens, or people’s attitudes towards something. In the first case we use observations, and measure the number of times something occurs (or is observed, as the observer may miss things). In the second case, we might use scales. In this instance we would ask a series of questions about a person’s attitudes, and then convert the answers into a numerical value expressing a positive or negative attitude. Some social scientists claim that we can also measure how much we love someone, or how strongly we feel about an event!

The important thing about a measurement is that it is reliable and valid. To be reliable means that if we measure the same thing again on another day (or if another person takes the measurement), then we obtain the same result. If someone is 6’2” today, and only 5’8” the following day, then either something very strange is going on or our measurement is not reliable! Valid means that it measures what it purports to measure. If you develop a series of questions to measure Deaf people’s attitudes towards mainstream education policies, then you need to make sure that those questions do actually measure that. Whilst there are statistical techniques for checking reliability, validity is harder to assess. Most of the time you will assess face validity, i.e. on inspection, does the scale appear to be valid?
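One common statistical check on reliability is test-retest correlation: measure the same people twice and correlate the two sets of scores. The Python sketch below illustrates the idea with invented height data.

```python
# Test-retest reliability: measure the same people twice and correlate
# the two sets of scores. A high correlation suggests the measurement
# is reliable. The heights here are invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Heights (cm) of five people measured on two different days.
day1 = [188, 173, 180, 165, 177]
day2 = [187, 174, 180, 166, 176]

print(round(pearson_r(day1, day2), 3))  # close to 1.0 => reliable
```

A correlation near 1 suggests the measurement is reliable; a low correlation would mean something is going wrong, as in the 6’2”/5’8” example above.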
A concept that you need to be familiar with is levels of measurement. There are 4 different levels:
1. Nominal. This refers to naming things, hence ‘nominal’. An example would be placing things into categories based upon observations. If you observe that there are 7 cows in the field, of which 3 are spotted and 4 are plain coloured, then you have used a nominal measurement. All we know about any one cow is into which category it falls. We do not know how many spots it has (unless it is in the plain category, of course).
2. Ordinal. This means that we know about the order of things, but not the differences between them. As an example, let us consider measuring performance in a 100-metre sprint race. An ordinal measurement would be who came 1st, 2nd, 3rd and so on. We can rank the runners in terms of their performance, but we do not know how much faster the gold-medal winner was than the silver-medal winner.
3. Interval. Here we have some information about the gap between two scores or measurements. Consider the temperature of a room. We measure 3 rooms, and record temperatures of 10°C (Room A), 20°C (Room B) and 30°C (Room C). As with the ordinal measurement, we know which room is the warmest, and which the coldest – we can rank the rooms in order of temperature. However, we also know that the difference in temperature between Room A and Room B is the same as that between Room B and Room C (i.e. a difference of 10°C).
4. Ratio. This is very similar to interval measurement, with only one difference. With a ratio scale, a score of 0 means the absence of something. Using the example of temperature again, 0°C does not mean an absence of temperature! It is very cold, but it is possible for temperatures to fall as low as -273°C. So measuring something using a Celsius (C) scale is an interval measurement, and not ratio. A weight scale using grammes would be ratio, as 0 gm indicates a complete lack of weight. The difference is subtle, and not one you need to worry about.
The level of measurement you employ will determine the kind of statistics you can use to analyse your data. For statistical reasons it is best to go for an interval or ratio level of measurement – certain tests can only be used if you measure using these levels.
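As a small illustration of why the level of measurement matters, here is a Python sketch (with invented data) showing the summary statistic that makes sense at each level: the mode for nominal data, the median for ordinal data, and the mean for interval or ratio data.

```python
from statistics import mode, median, mean

# Nominal: categories only, so the mode (the most common category) is
# the only sensible "average".
cows = ["spotted", "plain", "plain", "spotted", "plain", "plain", "spotted"]
print(mode(cows))                    # "plain"

# Ordinal: ranks carry order but not distance, so use the median.
finishing_positions = [1, 2, 3, 4, 5]
print(median(finishing_positions))   # 3

# Interval/ratio: equal gaps between scores, so the mean is meaningful.
room_temperatures_c = [10, 20, 30]
print(mean(room_temperatures_c))     # 20
```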
How are you going to make your measurements? The way in which you choose to measure something is your methodology. Quantitative methodologies include experiments, observation and structured interviews. Although there are others, these three are those you are most likely to employ in a quantitative dissertation, so we will focus upon them here.
How can we define an experiment? One helpful definition is as follows:
An experiment tries to measure the effects of X on Y by controlling X and measuring Y, while at the same time keeping everything else constant.
What does this mean? It means that experiments are conducted in very controlled environments. If you want to know the effect of consultation time with a doctor (X) on patient satisfaction (Y), then in an experiment you would control the length of time of the appointment and measure the satisfaction of the patients. At the same time, you would try to keep constant other things that may also affect satisfaction. To do this you may select only males aged between 35 and 40 years who have muscle injuries. Why? Well maybe men and women differ in terms of their satisfaction, as do younger and older people. Those with muscle injuries may need less time with the doctor than those with more serious illnesses. You may also decide to use only one doctor, as doctor personality may also affect satisfaction! At the end of your experiment you want to say whether or not longer appointments lead to more satisfaction. You cannot do this if a doctor’s personality or the illness of the patient also may have affected your results.
This is obviously a contrived example, and shows that experimentation is not always appropriate. To what extent could you control the consultation situation? Experiments are, however, very powerful research tools. So, if it is appropriate, then an experiment is a good bet.
If you decide to run an experimental study, you need to decide what X and Y are. X is usually called the independent variable (IV), and Y is called the dependent variable (DV). You will control the IV. From the example above, you may decide to have consultations of 10 minutes and 20 minutes. These are called the levels of an IV. Here the IV (consultation time) has 2 levels (10 minutes and 20 minutes). These levels will form the conditions of your experiment – condition 1 is a consultation of 10 minutes, and condition 2 is a consultation of 20 minutes. What about the DV? Well, you may decide to create a questionnaire that asks different questions about how satisfied the patient is. After the consultation, you ask the patient to fill in the questionnaire. From this you can then calculate a satisfaction measure.
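Once the data are collected, the analysis begins by comparing the DV across the levels of the IV. A minimal Python sketch, with invented satisfaction scores, might look like this:

```python
# Comparing the dependent variable (satisfaction score) across the two
# levels of the independent variable (consultation time). The scores
# are invented: one per patient, higher = more satisfied.

condition_scores = {
    "10 minutes": [3.1, 2.8, 3.5, 3.0, 2.9],
    "20 minutes": [4.2, 3.9, 4.5, 4.0, 4.4],
}

for condition, scores in condition_scores.items():
    avg = sum(scores) / len(scores)
    print(f"{condition}: mean satisfaction = {avg:.2f}")

# A formal analysis would then apply a statistical test (e.g. a t-test)
# to check whether the difference could have arisen by chance.
```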
Sometimes you cannot control a situation, and an experiment is not a viable option. However, it may be possible to observe what goes on. Take the example of children’s play. You may want to know what kind of play behaviours children of different ages engage in. This could be determined by observation. You could place a video camera in a nursery school/playgroup, or observe a child playing from behind a one-way mirror. Every time the child engaged in a certain type of play, you place a tick in an appropriate box on an observation record sheet. After observing several children of different ages, it is then possible to compare them in terms of how many times they engaged in different types of play.
Possibly the most important concept for observational research is that of operational definitions. Let us say that you consider ‘small-scale construction’ to be a type of play. You observe children and note down how many times they engage in ‘small-scale construction’ play, and possibly for how long. But what is ‘small-scale construction’, and how do you know it when you see it? Well, you provide an operational definition. In other words, you say that for the purpose of this research project ‘small-scale construction’ play is when a child ‘combines two or three separate things to make one larger thing’. Every time the child does this, you record the event. If the definition is good, then another (independent) observer should come up with the same data as yours. Statistically, it is possible to check this and confirm whether your operational definition is a reliable one (see the previous discussion on reliability). The validity of the definition is another matter (again, see the previous discussion of validity) and a possible basis upon which your research may be criticised. So you would have to defend your choice of ‘combines two or three separate things to make one larger thing’ as a definition of ‘small-scale construction’ play. Why 2 or 3 things, and not 4 or 5? When does it become ‘medium-scale’ or ‘large-scale’?
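The reliability check mentioned above can be as simple as counting how often two independent observers assign the same code to the same events. The Python sketch below uses invented observation codes; in practice you would go on to a chance-corrected statistic such as Cohen's kappa.

```python
# Two observers independently code the same ten play events using the
# operational definition. Simple percent agreement is the most basic
# reliability check. The codes are invented for illustration.

observer_a = ["construction", "other", "construction", "other", "other",
              "construction", "construction", "other", "construction", "other"]
observer_b = ["construction", "other", "construction", "construction", "other",
              "construction", "construction", "other", "construction", "other"]

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
percent_agreement = 100 * agreements / len(observer_a)
print(f"{percent_agreement:.0f}% agreement")  # 90% here
```

High agreement supports the claim that the operational definition is reliable; it says nothing about whether the definition is valid.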
If you want to know what people think about a topic, ask them! This is the principle behind interviews. Structured interviews are just that: interviews which have a structure that the interviewer follows. Open-ended interviews are at the other extreme; there is no guide for the interviewer apart from the participant’s responses – the interview flows like a conversation. Open-ended interviews are hard to quantify and are discussed in the chapter on qualitative methods. Structured interviews are usually composed of closed questions. These restrict the range of responses allowed, although an ‘other’ option is sometimes included to incorporate responses that do not fit within the categories provided. The responses can then be quantified – such-and-such a percentage felt this, and such-and-such a percentage felt that; responses from different groups can be compared; and responses to questions can be combined to form a scale that measures a construct, such as ‘satisfaction’ or ‘enjoyment of work’.
It is important to pilot all research methodologies, but this holds doubly true for structured interviews. Trying the interview out with a few people, before running the main study, can highlight troublesome questions that are unanswerable or often misinterpreted, or questions that need different responses allowed on the interview form. The interview form can then be amended before running the main study.
[Jim Kyle prepared this section.]
By far the most common starting idea in research projects in the field of Deaf Studies is the proposal to produce a questionnaire. There is a general belief that we can find out the answers to our questions with a questionnaire. There are plenty of models in every magazine, every Sunday colour supplement. Answer these questions and you will know your skills, your stress level, your appeal to the dream partner in your life, and so on. We see the results of questionnaires all the time. In Wales, 25% of people said they wanted to have independence, but 56% said so in Scotland. Ninety per cent of Deaf people go to the Deaf club twice a week. Questions, it seems, lead to results that tell us the truth. In research projects, and in dissertations, the main aim is to obtain results quickly and easily: just ask people in a questionnaire and there will be the answer. These are common views, brought on by the over-use and misuse of questionnaires.
But what should you ask? And how? And where?
The basic assumption of questionnaires is that people tell the truth, that they will say the same thing no matter their mood or where they are asked the questions or by whom. Such beliefs are false.
Questionnaires are used in many settings. They can involve scales, can be open-ended, can involve coding and counting, and can be purely descriptive. What makes them one or the other is not chance or personal preference but assumptions about the data and the research question itself. Consider the following questions - what sort of study might each require? Which can be examined through questionnaires?
1. How many deaf people attend the Bristol Deaf Club on Wednesdays?
2. What is the level of satisfaction of deaf people with the bar at the deaf club?
3. What is the favourite beer of deaf men and women in the UK?
4. In deaf clubs, what is the proportion of deaf men and deaf women who order drinks for their partner?
5. Are deaf people or hearing people more likely to say please and thank you when asking for their beer or drink?
6. How long does the average beer last in a deaf club, once someone buys it?
7. What improvements would people like to see in the bar area in the Bristol Deaf Club?
How would you deal with these questions? Which call for observation, which for interviews, and so on? You can probably expect Q1, Q4, Q5 and Q6 to be best dealt with by observation. They require facts to be established - patterns of behaviour which are observable. You could ask people, but you would not expect the outcomes to be accurate. Even Q1 is a form of observation, although the counting may be done by someone else: you could ask the secretary, ‘How many people were registered as entering each Wednesday for the last two months?’ - but this is still observation, albeit indirect. We can also say that some of the questions are not clear, and this affects the way in which we might obtain results.
However, we can expect Q3 to be done by survey, and probably Q2 as well. That leaves only the last question to be dealt with by interview - a set of questions asked in a face-to-face setting.
For the moment we need to focus on the construction of the questions in a way that will give an answer to the question that has been set. To help with this we will plan a study to investigate …
We have the basic underlying assumption that people can respond to questions consistently, and that they will find the questions non-threatening.
In order to proceed with this, let us assume we have solved all the questions about sampling and other hypotheses and focus completely on how we might ask the questions. So we need to be direct, concise, clear and unbiased. Simple. But …
Often people ask the wrong questions, or questions that seem right but later turn out to be limiting or inconclusive. Many textbooks start by explaining what can go wrong.

1. Avoid leading questions:

"You do have double glazing, don't you?"

"Do you prefer to have the double glazing explained to you simply and clearly, or to read about it in a book?"

"Do you think deaf people get a fair deal on double glazing if they can't communicate with the salesman?"

These are all likely to push the person to answer in a specific way. Make the questions neutral:

"Do you have double glazing?"

"What kind of glazing do you have at home?"

2. Avoid fancy, academic questions:

"What is the coefficient of heat loss in your double glazing system?"

"What are the conditions of the guarantee on your double glazing?"

Although you can ask about these topics in plainer terms:

"When you bought your double glazing, did you receive information on the coefficient of heat loss?"

"Do you prefer the installers to carry out the work in a short period, say two weekend days and into the evening, or are you happy to have them work in short bursts over a couple of weeks (such as every morning)?"

3. Avoid annoying questions or obscure responses:

"What helped you decide on the double glazing? For example, mark a cross in the box 'it was very cheap' and also in the box 'most important consideration'."

4. Avoid questions with negatives:

"Are you the type of person who does not like to disagree with the salesman?"

"How strongly did you feel about not choosing double glazing when you met the salesman?"

5. Avoid two questions when one would do:

"Do you check your double glazing?"

when you can simply ask,

"How often do you check your double glazing?"

6. Avoid vague questions, which will be answered unreliably or according to mood:

"What do you think of double glazing salesmen?"

7. Avoid sexist, racist, etc. language:

"Do you think it is better to have a man fixing the double glazing?"

"Do you prefer to have a fitter who is from this country?"

[Unless you have a clear intention to investigate this area and have told the person filling in the questionnaire.]

8. Avoid two questions in one:

"How long did you save up and plan for the double glazing?"

9. Avoid hypothetical questions:

"If you decided to change your double glazing, would you choose the biggest firm to do it?"

These questions usually produce "don't know" or "hadn't thought about it."

10. Avoid personal questions which will alter the attitude of the person and lead to termination of the interview or "switching off":

"Do you think deaf people are too mean with money to have double glazing?"

"Do you think that if deaf people could learn to speak more clearly, they would get a better deal from the double glazing salesman?"

11. Avoid casual questions:

"Do you happen to know how heavy the glass might be?"

12. Avoid questions which are too matter of fact:

"Nowadays, everybody in your street has double glazing; is there some reason why you haven't got yours in yet?"

13. Make sure people have the knowledge to answer the question:

"Did you choose Paraflax, Penzagon or double durable for their insulation qualities?"

14. Avoid questions which test memory:

"What was the salesman wearing?"

"When the salesman came, what was the first question you asked him?"

15. Avoid overlapping response categories:

"How old are you? Up to 30 years, 30-40 years, …"

"How often did he come to your house: up to once a week, at least once a week, …"

16. Make sure the questions can be taken seriously:

"Do you really believe that deaf people can communicate with salesmen?"

17. Make sure you do not invade the person's privacy:

"Did your wife ever break a window when you were arguing?"

"Do you have a police record?"

18. Avoid asking questions just because you can - make sure they are completely relevant to your study:

"What kind of car do you have?"

19. Avoid options/choices in a single question which are on different scales:

"Did you choose this glazing because it was the best available or because you were tired of looking?"

"After the installation, were you satisfied, not bothered, or unhappy?"
As you can see, the list is a long one. Many of these mistakes are apparent in questionnaires in everyday use, and sometimes even in published research. It is not so easy to design a questionnaire.
As well as avoiding all the points above, you need to make some basic decisions and then create the questions appropriately.
A basic decision is between questionnaires that will be sent to people and questionnaires that will be used in an interview. In both cases, you have to follow all the rules above, but when the completion is done at a distance there is less chance to catch errors and to correct misunderstandings.
So questions have to be simple and logical and unbiased if they are to be sent to the person as in a survey.
Often people are drawn to open-ended questions, since they seem very simple to ask. Unfortunately, this simplicity can be their weakness when it comes to analysing the results.
"Tell me about what made you want to install double glazing" is not very helpful when it comes to analysis, unless the researcher is confident in the use of qualitative methods. There are also problems in collecting qualitative responses at a distance, as people are often reluctant to write long essays about their feelings. Often we cannot determine whether people are simply not prepared to write or genuinely have no particular opinion.
Closed Questions or Fixed-choice Questions
These are more common in surveys or postal questionnaires. They can be of different types:
Yes/no/don't know responses where the researcher just needs to know if something was present or not. They are rather crude.
Classification questions - male/female; detached house/semi-detached/bungalow/…
Ranking questions where you ask the person to place numbers beside a series of ideas, statements, objects and so on. These can be more complex to analyse.
Scale questions, such as: "How satisfied were you? Very satisfied / satisfied / OK / not satisfied / very dissatisfied."
Scale questions are usually simple and on a single scale. Avoid offering different adjectives in the same question. The convention is to use equal intervals and 3 or 5 or 7 points.
Or “What percentage score would you give to the installers for their efficiency? … %”
Or just simple numbers, “How many windows did you double glaze in the last 12 months? _______”
They can also be rating scales or attitude scales:
All deaf people should buy double glazing
Strongly agree / Agree / Don’t Know / Disagree / Strongly Disagree
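To turn such responses into a measure, each response category is mapped onto a number and the item scores are summed. A minimal Python sketch, with an invented set of responses to a four-item scale:

```python
# Converting Likert-style responses into a numerical attitude score.
# Each response is mapped onto 1-5 and the items are summed. The
# scale items and the responses here are invented.

SCORES = {
    "Strongly agree": 5,
    "Agree": 4,
    "Don't know": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
}

# One respondent's answers to a four-item satisfaction scale.
responses = ["Agree", "Strongly agree", "Don't know", "Agree"]

total = sum(SCORES[r] for r in responses)
print(total)  # out of a maximum of 20
```

In practice, negatively worded items are reverse-scored before summing, so that a high total always means the same direction of attitude.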
At the point where you start to generate the questions based on these guidelines, you are highly likely to forget your original intention. So go back to your original aim and make sure every question that you ask has some relevance to the original aim. This is so often a weakness of the whole process.
Now having constructed a questionnaire, try it out. Pilot work is vital and it has to cover both the content and the administration itself. Not only does the questionnaire have to work in terms of the content and the questions, but also you the researcher have to be able to use it effectively, smoothly and with complete confidence. You have to know exactly which question follows which and must be clear about what you can explain and what you cannot.
Usually researchers can repeat questions, but are not allowed to give examples ("for instances"). The limits of the information that can be given have to be set out in advance, and you have to follow your own guidelines. Make sure the questionnaire is simple to administer.
A common problem is making an interview too long or too short. It can be too long because it is the first time you have used it - you can expect it to take twice as long on the first use. Or it can be too long, because the questions are too complex. Use the pilot study to reduce the overall length.
It can be too short if you find that the questions are irrelevant to the person. Job questions might not be appropriate if a person is a student, still at school or retired. Make sure you target your interviewees properly. More about survey sampling later.
Perhaps the biggest issue for us is how to translate a questionnaire into BSL. We have no fixed rules but after 20 years of doing this, I can explain what we believe to be "best practice".
In order to use a survey, the researcher should have a clear idea of the type of data and the underlying rationale for the approach. In simple terms, surveys are positivistic. This means that they imply a reality in the data. They assume that what you are told is the truth and is an objective and verifiable reality. In some ways, this is too simple but as a starting point it is important.
The approach is scientific in that it expects the work to arise from a theory. A theory is a set of expectations based on unifying principles - rules or laws. The scientist using a survey generates a hypothesis from the research question - Deaf women are more likely than Deaf men to move from one part of the country to another; or Deaf young people are more likely to smoke if they have been to a deaf school than if they have been in a mainstream unit. The survey is then constructed around certain assumptions. These are quite simple: that the answers people give are reliable (they would be produced in the same way if requested in consecutive weeks); that they are mostly independent of subject variables like mood and illness; and that the differences which occur are due to the real effects of the underlying variables, e.g. gender or job. These are assumptions, not truths. How strongly you can adhere to them varies according to your purpose and the context.
There are 3 principles that we can identify:
This aspect is a whole science on its own. It is highly statistical and often governed by complex mathematical theory and calculation. Much more on it can be found in books in the library, with titles such as "Sampling Theory."
For our purposes, there are some simple divisions. Sampling is designed to ensure representativeness and to allow us to claim reliability.
Where a population is available and easy to access, a random sample can be constructed. The key here is accessibility. In a village, all the householders can be contacted from the voter's roll (the list of people eligible to vote) stored in the public library. The researcher decides on a fixed number of people, and then selects them by matching a set of random numbers against positions on the voter's roll.
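The selection step can be sketched in Python. The roll itself is invented, and `random.sample` plays the role of the set of random numbers:

```python
import random

# Drawing a random sample from an accessible population list, such as
# a voter's roll. The names are invented; random.sample picks each
# person with equal probability and without repetition.

voters_roll = [f"Householder {i}" for i in range(1, 201)]  # 200 entries

random.seed(42)  # fixed seed so the draw is repeatable
sample = random.sample(voters_roll, 20)

print(len(sample))       # 20 people drawn
print(len(set(sample)))  # 20 distinct people: no one chosen twice
```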
Random samples are vital in physical science and are achievable because the variables are under complete control. When we deal with people, the same is not the case and it becomes much harder to use this form of sampling.
To get around this, stratified sampling is sometimes used. This uses information about the population to make the randomisation less broad - perhaps there are more people who are professionals than who are unemployed or in unskilled jobs. Random samples are then chosen inside these strata rather than across the whole population, and the sampling is balanced to represent the different types of job.
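A sketch of proportional stratified sampling in Python, with an invented population of 100 people split across three job types:

```python
import random

# Stratified sampling: random sampling inside each job-type stratum,
# with the stratum sample sizes chosen to mirror the population
# proportions. The population here is invented.

population = (
    [("professional", i) for i in range(60)]
    + [("unskilled", i) for i in range(30)]
    + [("unemployed", i) for i in range(10)]
)

random.seed(1)
sample_size = 20
total = len(population)
sample = []
for stratum in ("professional", "unskilled", "unemployed"):
    members = [p for p in population if p[0] == stratum]
    n = round(sample_size * len(members) / total)  # proportional quota
    sample.extend(random.sample(members, n))

print(len(sample))  # 20: 12 professional, 6 unskilled, 2 unemployed
```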
There are various types of non-probability sampling. A common one, and one that is appropriate in the case of the Deaf community, is quota sampling. This is a technique often used in market research, where the background characteristics of the population are known and these are used to extract a quota in order to make specific comparisons. It is like stratified sampling, except specific numbers of each type within the population or the types which are of interest, are chosen and the researcher targets those people alone. An example might be a target of 20 mothers of children aged 5 and under and 20 mothers with children aged 6 years and up - in order to examine their pattern of Christmas present buying. The Sign on Europe and DPIC projects are good examples of quota sampling.
The disadvantage of this method is that the interviewers tend to pick the easiest to reach and the quota is filled up with friends or people known to the researcher.
Another technique that is appropriate for minority groups living in unusual circumstances (not in fixed geographical locations) is to use a spreading search. In this method, targeted key people provide access to other members of the community. For example, Deaf people list the names of the people they know in that group, and those people in turn tell of the people they know. So the process snowballs.
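The snowball process is essentially a graph traversal: keep a list of people still to contact, and add each newly named person to it. A Python sketch with an invented who-knows-whom network:

```python
# A "spreading search" (snowball sample): start from a key contact and
# repeatedly add the people they name, until no new names appear.
# The who-knows-whom lists are invented.

knows = {
    "Ann":  ["Ben", "Cal"],
    "Ben":  ["Ann", "Dee"],
    "Cal":  ["Eve"],
    "Dee":  [],
    "Eve":  ["Ben"],
    "Finn": ["Gil"],   # Finn's circle is never named by Ann's circle
    "Gil":  [],
}

reached = set()
frontier = ["Ann"]  # the initial key person
while frontier:
    person = frontier.pop()
    if person not in reached:
        reached.add(person)
        frontier.extend(knows[person])

print(sorted(reached))
```

Note that Finn and Gil are never reached: a spreading search only finds people connected to the starting contacts, which is one of the method's known limitations.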
Surveys are of different types. Traditionally, they were enumerations, in which hard facts were obtained - the Census is a good example, and the General Household Survey is another. These evolved, and surveys began to measure attitudes - people's perceptions and beliefs. These were taken to reflect some inner or social reality, and the groups of people who were chosen could be compared in terms of their expressed opinions.
This principle evolved further, to imply that attitude and behaviour could be linked causally: people who believe one thing are more likely to behave consistently with that belief. These are very strong ideas underpinning the survey approach.
There are three main approaches to take: mail/self-completion; telephone and face to face/interviews.
The mail/self-completion approach has the advantage of simplicity and ease of data collection. It is also fraught with problems. Nevertheless, it is very common and is a useful way to obtain a large amount of data.
Its advantages include:
Nevertheless, postal questionnaires are common and an important means of obtaining information. One aspect of postal questionnaires is incentives. These are not so common in large-scale surveys, but they are a vital component of smaller project work. Whether or not there is an "upfront" reward, it is the respondent's sense of obligation that determines the return rate. To create this, the covering letter has to convey the significance and relevance of the activity. Sometimes you use an influential figure, or explain the significance; sometimes you can link to the person's particular situation - someone with a disabled child, or someone with relatives who have had a particular experience, and so on. Without these specific tags, it may be possible to provide a reward that is not linked to payment - though this won't work unless it is something useful, such as a pen or a book of stamps. However, rewards must be tested out carefully in order to check that they have some facilitating effect.
Telephone interviewing is a relatively new format but has become acceptable. People are chosen according to mailing-list or database information, and are telephoned at times believed to be suitable - calls are made in the evenings or at weekends. All calls can be logged or recorded.
Advantages of telephone interviewing include:
There are, of course, disadvantages:
Clearly, it is more difficult to use telephone interviewing in the deafness field, but the DPIC project has begun some initial work on this with Nokia Communicators and text messages.
For face-to-face interviews, the advantages are usually taken to be flexibility and adaptability. The disadvantages are interviewer effects and the fact that the method is time-consuming.
Once questionnaires have arrived, the data will have to be coded for ease of use and usually will be entered into a computer. There are various database options, but the simplest approach is to use a spreadsheet.
A coding sheet that is formal and thorough is essential when there is more than one encoder. To ensure that encoders enter the data appropriately and consistently, there has to be an agreed code sheet. There is a range of ways of setting this up. The easiest is often to include the codes on the questionnaire form itself, so that the encoders can work directly from the questionnaires as they are returned. It is also possible to set up a separate sheet to guide the people carrying out the coding. Either way, it should be in a form that is suitable for entry into a computer spreadsheet.
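A sketch of the coding step in Python, using an invented code sheet; an in-memory buffer stands in for the spreadsheet file:

```python
import csv
import io

# Coding questionnaire responses with an agreed code sheet before
# entry into a spreadsheet. The codes and responses are invented;
# 9 is used as a conventional "don't know" code.

CODE_SHEET = {
    "sex": {"male": 1, "female": 2},
    "satisfied": {"yes": 1, "no": 2, "don't know": 9},
}

responses = [
    {"sex": "female", "satisfied": "yes"},
    {"sex": "male", "satisfied": "don't know"},
]

buffer = io.StringIO()  # stands in for a .csv file on disk
writer = csv.DictWriter(buffer, fieldnames=["sex", "satisfied"])
writer.writeheader()
for r in responses:
    writer.writerow({q: CODE_SHEET[q][a] for q, a in r.items()})

print(buffer.getvalue())
```

Because every encoder applies the same code sheet, two people coding the same pile of questionnaires should produce identical rows.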
Surveys need to have all of the above components discussed in order to provide valid and useable data.
Reliability & Validity http://trochim.human.cornell.edu/tutorial/colosi/lcolosi1.htm
Observational Field Research http://trochim.human.cornell.edu/tutorial/brown/LauraTP.htm