Tool Development in Quantitative Research


Developing a valid and reliable data-collection tool, particularly in a quantitative study, requires a series of steps that can be time-consuming. This article outlines the step-by-step process of creating and evaluating such an instrument.

Tool Development in Research: A Quantitative Approach

Every step in the process builds on careful adjustment and evaluation of the previous steps; each one should be fully completed before moving on to the next.

Figure 1: The five steps of instrument construction. A brief description of each of the five steps follows.

Step 1 Understanding Background

In this first step, the objectives, research questions, and hypotheses of the proposed study are examined. Understanding the audience’s demographics, including educational levels and accessibility, as well as the process of selecting respondents, is an important aspect of this step. It is essential to develop a comprehensive understanding of the problem through thorough literature searches and reading. A solid grasp of Step 1 is crucial for moving successfully on to Step 2.

Step 2 Conceptualization

Once the research is well understood, the next stage in tool development is to create statements or questions for the instrument. During this step, content from the literature or theoretical framework is converted into statements or questions.

Furthermore, a connection is made between the study’s objectives and how they are reflected in the content. For instance, the researcher should specify what the instrument is measuring: knowledge, attitudes, perceptions, opinions, factual recall, behavior change, and so on. Major variables (independent, dependent, and moderator variables) are identified and defined in this step.

Step 3 Format and Data Analysis

In Step 3, the emphasis is on writing statements or questions, selecting appropriate measurement scales, and deciding on the instrument’s layout, format, question order, font size, front and back cover, and proposed data analysis. The order of questions in a questionnaire is important because it affects the quality of the information collected.

There are two approaches to ordering questions: ask them in random order, or follow a logical progression based on the objectives of the study. Scales are tools that help measure a person’s reaction to a specific factor. It is crucial to understand the connection between the level of measurement and the appropriate data analysis.

For instance, if ANOVA (analysis of variance) is used for data analysis, the independent variable needs to be measured on a nominal scale with two or more levels (such as yes, no, or not sure), while the dependent variable should be measured on an interval/ratio scale (for example, a scale ranging from strongly disagree to strongly agree).
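
To make this pairing concrete, here is a minimal Python sketch (an illustrative addition, not taken from the original article) that runs a one-way ANOVA with a nominal grouping variable of three levels and a Likert-type outcome treated as interval; the variable names and data are hypothetical.

    # Hypothetical example: one-way ANOVA pairing a nominal independent
    # variable (yes / no / not sure) with an interval-scaled dependent variable
    # (Likert-type scores, 1 = strongly disagree ... 5 = strongly agree).
    from scipy import stats

    scores_yes = [4, 5, 4, 3, 5, 4]
    scores_no = [2, 3, 2, 1, 3, 2]
    scores_not_sure = [3, 3, 4, 2, 3, 3]

    # f_oneway tests whether the mean scores differ across the three groups.
    f_stat, p_value = stats.f_oneway(scores_yes, scores_no, scores_not_sure)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")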

Step 4 Establishing Validity

After completing Steps 1-3, a draft instrument is ready to have its validity determined. Validity is concerned with the degree of systematic or inherent error in measurement, and it is established through the involvement of professionals and a practical examination. The choice among the types of validity depends on the objectives of the study. Step 4 covers the following questions:

  • Is the questionnaire valid? In other words, does it measure what it is intended to measure?
  • Does it accurately represent the content?
  • Is it appropriate for the sample/population?
  • Is the instrument comprehensive enough to gather all the information needed to fulfill the purpose and objectives of the study?

Considering these questions and conducting a readability test improves the validity of the instrument. Obtaining approval from the Institutional Review Board (IRB) is also necessary. After receiving IRB approval, the next step involves conducting a field test with individuals who were not part of the initial sample. Changes in the tool are based on both practical experience and professional advice. The questionnaire is now prepared for the pilot test.

Step 5 Establishing Reliability

This final step in tool development establishes the instrument’s reliability, which is evaluated through a pilot test. Reliability is concerned with random error in measurement and refers to the accuracy and precision of the measuring instrument. The pilot test aims to answer the question of whether the instrument consistently measures its intended target.

The choice among the types of reliability (i.e., test-retest, split-half, alternate-form, internal consistency) is influenced by the characteristics of the data (nominal, ordinal, interval/ratio). For example, internal consistency is appropriate for assessing the reliability of questions measured on an interval/ratio scale, whereas test-retest or split-half methods are suitable for knowledge-based questions.

Reliability can be determined through a pilot test, in which data are collected from 20-30 individuals who are not part of the sample. The pilot data are analyzed using SPSS or other software. SPSS offers two important sources of information: the “correlation matrix” and the “alpha if item deleted” column. Remove any items or statements whose correlations are 0, 1, or negative. Then examine the “alpha if item deleted” column to determine whether alpha can be raised by deleting items.

Delete items whose removal substantially improves reliability, but to preserve content, delete no more than 20% of the items. The reliability coefficient (alpha) ranges from 0 to 1, with 0 indicating an instrument full of error and 1 indicating a complete absence of error. A reliability coefficient (alpha) of .70 or higher indicates an acceptable level of reliability.
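
For readers analyzing pilot data outside SPSS, the following Python sketch (an illustrative addition, not part of the original procedure; the item names and responses are hypothetical) computes Cronbach’s alpha and the “alpha if item deleted” values from pilot responses stored with one column per item.

    # Illustrative sketch: Cronbach's alpha and "alpha if item deleted"
    # for pilot-test data, with one column per questionnaire item.
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical pilot data: 8 respondents, 4 Likert-scale items.
    pilot = pd.DataFrame({
        "q1": [4, 5, 3, 4, 5, 2, 4, 3],
        "q2": [4, 4, 3, 5, 5, 2, 4, 3],
        "q3": [3, 5, 2, 4, 4, 1, 5, 3],
        "q4": [5, 4, 3, 4, 5, 2, 3, 3],
    })

    print(f"Overall alpha: {cronbach_alpha(pilot):.3f}")
    for item in pilot.columns:
        print(f"alpha if {item} deleted: {cronbach_alpha(pilot.drop(columns=item)):.3f}")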

Avoiding Instrumentation Bias

Researchers can take steps to reduce bias in their surveys or questionnaires and ensure the data collected is accurate and unbiased. Here are some guidelines and principles for tool development in research that help to avoid biases in research instrumentation, particularly for surveys or questionnaires:

Use Simple Language

One principle of tool development in research is to use simple, clear language and avoid complex terminology or vocabulary. This ensures that participants comprehend the questions and can provide accurate responses.

Avoid Leading Questions

It is important in tool development to avoid leading questions that may influence the participant’s response. Leading questions steer respondents toward a specific answer; for example, instead of asking, “Do you believe education is necessary?”, ask, “How important is education to you?”

Avoid Loaded Questions

It is also important in tool development to avoid loaded questions. These questions play on respondents’ emotions and rest on assumptions that put respondents on the defensive. For example, instead of asking, “How can you support an organization that exploits child labor?”, ask, “What is your opinion of organizations that employ children as laborers?”

Use Different Questions

It is essential in the process of tool development to use a range of question types, including multiple-choice, Likert-scale, and open-ended questions, in order to minimize potential bias in responses. This helps create an environment in which participants feel comfortable and supported in freely sharing their opinions and thoughts.

Randomize Question Order

It is important in the process of tool development to randomize the order of questions in order to prevent order effects or bias arising from the placement of the questions. This helps ensure that responses are not influenced by the sequence in which the questions are asked.
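
As a simple illustration (an added sketch, not from the original article; the questions and function name are hypothetical), the Python snippet below presents the questions in a different, reproducible random order for each respondent.

    # Illustrative sketch: randomize question order per respondent
    # to reduce order effects.
    import random

    questions = [
        "How satisfied are you with the course materials?",
        "How satisfied are you with the instructor's feedback?",
        "How satisfied are you with the pace of the course?",
    ]

    def questions_for_respondent(respondent_id: int) -> list:
        # Seed with the respondent ID so each respondent gets a
        # different but reproducible ordering.
        rng = random.Random(respondent_id)
        shuffled = list(questions)
        rng.shuffle(shuffled)
        return shuffled

    print(questions_for_respondent(42))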

Avoid Socially Desirable Responses

Participants may sometimes respond in a manner that aligns with societal expectations or what is considered desirable, rather than expressing their genuine opinions or experiences. This phenomenon is known as social desirability bias. To address this concern, the tool development process should reassure participants that their responses will be kept confidential and anonymous. Additionally, it is advisable to avoid questions that invite socially desirable responses.

Pilot-testing

It is important to pilot-test the survey or questionnaire with a small sample of participants in order to identify any potential issues or areas of bias. This step is necessary regardless of whether the survey is based on previously published scholarly publications or developed by the researcher. This will enable the researcher to make any required changes before distributing the survey or questionnaire to a larger group.

Consider Linguistic Differences

It is essential in the process of tool development in research to take into account cultural or linguistic differences. Ensure that the questions are suitable and pertinent for the population under investigation, and if needed, translate the survey or questionnaire into the native language of the respondents.

Monitor Biases

It is important to consistently monitor for bias at every stage of the instrument’s planning, construction, and validation.

Conclusions

Systematic development of the data-collection questionnaire is essential to reduce measurement errors arising from the questionnaire’s content, its design and format, and the respondents themselves. Developing a thoughtful understanding of the material and converting it into well-constructed questions is crucial for reducing potential measurement inaccuracies.

A keen eye for detail and a deep understanding of the tool development process are incredibly valuable for educators, graduate students, and faculty members. Failing to follow appropriate and systematic procedures during tool development, testing, and evaluation can undermine the quality and use of the data. Anyone involved in educational and evaluation research should follow these five steps to establish a valid and reliable questionnaire and enhance the quality of their research.
