RESEARCH DIARIES: ENTRY 2: SOME BASIC TERMS

1. Research Problem (Problem Statement/Problem Definition): A research problem statement is the precise identification of what the research is about. It can be based on a problem that needs to be addressed, an assumption that needs to be validated, a curious observation with no explanation (the apple falling from the tree...), and so on. While writing a research problem statement, one should generally look for an area where there is a gap in the existing research. For that, one needs to do a literature review, on which I will write a separate entry/blog.

The research problem should focus on one theme/area only and avoid trying to combine two areas.
For example: What is the effect of discounts on consumer behavior and brand image? A better one (I am not implying it is perfect, but it is better than the previous one) would be: What is the effect of discounts on brand image?

The research problem should not have any bias towards a particular product, solution, process, etc. For example: Why are mutual funds better than PMS? Here the researcher has already assumed that mutual funds are better than PMS, which is not acceptable in any research. The researcher is supposed to discover the truth, not reinforce his/her own assumptions.

The scope of the research problem should also be practically limited. For example: What contributes to the job satisfaction of Unilever employees? The problem here is: which level of employees, or which department's/plant's employees, are you referring to? You cannot study thousands of employees at the upper, middle and operational levels, across finance, marketing and HR, across the whole world, in one study! A better one would be: What is the job satisfaction level of clerical workers at the XYZ plant of Unilever Limited?

2. Reliability: Reliability is the degree to which an instrument (like a questionnaire) provides consistent and stable results over a period of time.

2.1 Test-Retest Reliability: For example, suppose students fill out a questionnaire that measures their stress level today. If the same students reappear for the same stress test (using the same questionnaire) after ten to fifteen days, they should score in the same range. This is called test-retest reliability; a small sketch of how it is typically quantified follows below.
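One common way to quantify test-retest reliability is simply to correlate the scores from the two administrations (the same approach works for parallel forms reliability, described next, by correlating the scores on the two versions). A minimal sketch in Python, with made-up scores:

import numpy as np
from scipy.stats import pearsonr

# Hypothetical stress scores of the same ten students,
# measured today and again about two weeks later
test = np.array([32, 45, 28, 51, 39, 47, 30, 42, 36, 49])
retest = np.array([34, 43, 27, 53, 40, 45, 31, 44, 35, 50])

r, _ = pearsonr(test, retest)
print(f"Test-retest reliability: r = {r:.2f}")  # values near 1 indicate stable scores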

2.2 Parallel Forms Reliability: When the same phenomenon is tested using different versions of the same instrument, the results should be consistent or similar. For example, if a company interviews one student using a structured questionnaire, it might want to ask another student questions of a similar level of difficulty, so that the results of both can be compared while the first student is unable to "share" the questions with the second. Here, it is important that both sets of questions, when answered by the same student, lead to marks in a very narrow range (the correlation sketch above applies here as well).

2.3 Inter-Rater (Interobserver) Reliability: When different observers observe the same thing and agree in their ratings on the various items of the instrument, this is called inter-rater reliability. For example, if students are being observed for their keenness to learn in the classroom, the observers might rate items such as: the level of focus a student pays to what the teacher is saying, the "exit behavior" a student displays (for example, looking at the class door or wrist watch again and again), etc. Now, if the three observers' scores have a high correlation for each of these items, then inter-rater reliability is established; if the correlation is not high, it is not. A small sketch follows below.
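As a rough illustration (again with invented numbers), the pairwise correlations between observers' ratings on one item can be checked like this:

import pandas as pd

# Hypothetical "level of focus" ratings (1-10) given by three
# observers to the same eight students
ratings = pd.DataFrame({
    "observer_1": [7, 4, 9, 6, 3, 8, 5, 7],
    "observer_2": [8, 4, 9, 5, 3, 7, 6, 7],
    "observer_3": [7, 5, 8, 6, 2, 8, 5, 6],
})

# Pairwise Pearson correlations; off-diagonal values near 1
# suggest the observers broadly agree
print(ratings.corr().round(2))

In practice, dedicated statistics such as Cohen's kappa or the intraclass correlation coefficient are also used for this purpose.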

2.4 Internal Consistency: All the questions asked in, say, a job satisfaction survey should measure or ask something related to the job satisfaction construct only. The items of the questionnaire, such as the questions on pay, work culture, promotion opportunities, benefits, etc., must be strongly correlated with one another. Cronbach's alpha is one measure of this. Sometimes researchers ask three questions to measure one dimension of the construct. For example, to measure satisfaction with pay, they may ask:

1. I am happy with my salary
2. The monthly payout needs to be increased
3. The other firm (competing with the employer) has a better pay structure/salary

Here each respondent should answer the three questions consistently. Note that items 2 and 3 are worded negatively relative to item 1: a respondent who is unhappy with the salary would disagree with item 1 but agree with items 2 and 3. Such negatively worded items are therefore reverse-scored before checking the correlations, so that after reverse-scoring, respondent A (unhappy with the salary) scores low on all three items and respondent B (happy with the salary) scores high on all three.
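For illustration, here is a minimal sketch of computing Cronbach's alpha in Python; the responses are invented, coded on a 1-5 scale, and items 2 and 3 are assumed to be already reverse-scored:

import numpy as np

def cronbach_alpha(items):
    # Cronbach's alpha for a (respondents x items) score matrix
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 responses of five respondents to the three pay items
scores = np.array([
    [4, 4, 5],
    [2, 1, 2],
    [5, 4, 4],
    [3, 3, 2],
    [1, 2, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")  # 0.93 here

By convention, alpha values above roughly 0.7 are taken to indicate acceptable internal consistency.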

3. Validity: Validity of a research instrument (for example, a questionnaire) is about whether it measures what it is supposed to/intended to measure. Does it study what you are actually trying to study? It concerns the accuracy of the measurement and the results of the research. There are four types of validity.

3.1 Construct Validity: A construct is an extension of the idea of a concept. A construct may have dimensions or indicators that help us measure or define it. For example, the construct of job satisfaction may have indicators like pay, benefits, incentives, etc. Now, when you design a questionnaire to measure the construct of job satisfaction, it should be amply clear that the questionnaire is measuring only job satisfaction and not organizational commitment or OCB (organizational citizenship behavior).

3.2 Content Validity: For example, if a question paper is set, it should have questions related to that subject only. The "content" of the questions must not be from another subject: a paper on accountancy, say, should not have questions related to law or communication.

3.3 Face Validity: This is a type of validity that exists at the surface level only. For example, you and your mentor may look through the questionnaire and satisfy yourselves that the questions measure the construct they are intended to measure. It is a weak type of validity, but it provides a good start when you are trying to establish the validity of your instrument.

3.4 Criterion Validity: For example, suppose I want to select a few fresher management students, so I design a test. It is a new test whose validity has not been established yet. So I will ask the fresher management students to appear for two tests: the first is the new test I have designed, and the other is a test that has already been validated to measure the managerial knowledge of freshers (and to screen fresh management graduates). Then I will compare the results of the two tests. If the results are similar, my new test has criterion validity. A small sketch follows below.
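As with reliability, this comparison is usually quantified as a correlation between the two sets of scores (when both tests are taken at around the same time, this is called concurrent validity, a sub-type of criterion validity). A minimal sketch with invented scores:

import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores of the same eight students on the new test
# and on the already-validated managerial knowledge test
new_test = np.array([62, 75, 58, 80, 67, 71, 55, 78])
validated_test = np.array([65, 72, 60, 83, 64, 70, 58, 76])

r, _ = pearsonr(new_test, validated_test)
print(f"Criterion validity: r = {r:.2f}")  # a high r supports criterion validity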
