Monday, 15 June 2009

Comparison between Historical Research and Evaluation Research

Reference:
Borg, W. R., & Gall, M. D. (1999). Educational Research: An Introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Chapters 16 and 17

Summary by Nelson Dordelly-Rosales, June 20th, 2009

Historical Research: What is it?

• Historical research is the systematic search for facts relating to questions about the past, and the interpretation of these facts. By studying the past, the historian hopes to achieve a better understanding of present institutions, practices and issues in education.
• There is no single, definable method of historical inquiry (Edson, 1988).

What does historical research mean from the qualitative and quantitative perspectives?

• From the qualitative perspective, historical research means historical inquiry. It proposes to learn from past discoveries and mistakes, and provides a moral framework for understanding the present and predicting future trends.
• From the quantitative perspective, historical research is the systematic collection and objective evaluation of data related to past occurrences in order to test hypotheses concerning causes, effects, or trends of these events that may help to explain present events and anticipate future events.


How to conduct historical research?


– Definition of a problem: topic(s) or questions to be investigated
– Formulation of questions to be answered, hypotheses to be tested or topics to be investigated.
– Systematic collection and analysis of historical data
– Summary and evaluation of data and the historical sources
– Interpretation: present the pertinent facts within an interpretive framework
– Production of a synthesis of findings or confirmation/disconfirmation of hypotheses or questions (Borg & Gall, 1999, pp. 811-831)


What are the types of historical sources?


• Preliminary sources: published aids for identifying the secondary source literature in history. An important requirement is to list key descriptors for one’s problem or topic, e.g., bibliographies and reference works.
• Primary: those documents in which the individual describing the event was present when it occurred, e.g., diaries, manuscripts.
• Secondary: documents in which the individual describing the event was not present but obtained a description from someone else, who may or may not have directly observed the event, e.g., historians’ interpretations (Borg & Gall, 1999, pp. 815-817).


How to record information from historical sources?


• Examining availability and deciding what information to record from:
- Documents: diaries, memoirs, legal records, court testimony, newspapers, periodicals, business records, notebooks, yearbooks, diplomas, committee reports, memos, institutional files, textbooks, tests.
- Quantitative records: census records, school budgets, school attendance records, test scores.
- Oral history: e.g., oral records and interviews.
- Relics: objects whose physical or visual properties provide information about the past.
• Summarizing quantitative data (Borg & Gall, 1999, pp. 818-819)

How to evaluate the worth and meaning of historical sources?


• External criticism: evaluation of the nature of the source, e.g., Is it genuine? Is it the original copy? Who wrote it? Under what conditions?
• Internal criticism: the evaluation of the information contained in the source, e.g., Is it probable that people would act in the way described by the author? Do the budget figures mentioned by the writer seem reasonable? (Borg & Gall, 1999, pp. 821-823).

How to interpret historical research?


• Use of concepts to interpret historical information:
- Concepts are indispensable for organizing the phenomena that occurred in the past.
- Group together those persons, events, or objects that share a common set of attributes.
- Place limits on the interpretation of the past.
• Researchers’ biases, values, and personal interests lead them to interpret or “reconstruct” certain aspects of past events but not others, and to interpret past events using concepts and perspectives that originated in more recent times; being aware of these influences is essential to sound interpretation.

What is the role of the historical researcher?


• Historians cannot ‘prove’ that one event in the past caused another, but they can be aware of, and make explicit, the assumptions that underlie the act of ascribing causality to sequences of historical events (Borg & Gall, 1999, p. 831).
• Generalizing from historical evidence means looking for consistency across subjects, or within an individual across different circumstances (Borg & Gall, 1999, p. 834).
• Causal inference in historical research is the process of reaching the conclusion that one set of events brought about, directly or indirectly, a subsequent set of events (Borg & Gall, 1999, p. 836).


What is Evaluation Research?


• Educational evaluation is the process of making judgments about the merit, value, or worth of educational programs (Borg & Gall, 1999, p. 781).
• Evaluation research is usually initiated by someone’s need for a decision to be made concerning policy, management, or political strategy. Its purpose is to collect data that will facilitate decision making (Borg & Gall, 1999, p. 782).
• Educational research is usually initiated by a hypothesis about the relationship between two or more variables. The research is conducted to reach a conclusion about the hypothesis: to accept or reject it (Borg & Gall, 1999, p. 783).


How to conduct an ‘Evaluation Study’?


• Clarifying reasons for doing the evaluation
• Identifying the stakeholders
• Deciding what is to be evaluated:
- Program goals
- Resources and procedures
- Program management
• Identifying evaluation questions
• Developing an evaluation design and timeline
• Collecting and analyzing evaluation data
• Reporting the evaluation results (Borg & Gall, 1999, pp. 744-753).


What are the criteria of a good evaluation study?


• Utility: the evaluation is informative, timely, and useful to the affected persons.
• Feasibility: the evaluation design is appropriate to the setting in which the study is to be conducted, and the design is cost-effective.
• Propriety: the rights of persons affected by the evaluation are protected.
• Accuracy: the extent to which the evaluation has produced valid, reliable, and comprehensible information about the entity being evaluated (Borg & Gall, 1999, p. 755).


What is involved in ‘quantitatively oriented evaluation’ models?


• Evaluation of the individual.
• Objectives-based evaluation for determining the merits of a curriculum or an educational program.
• Needs assessment.
• Formative and summative evaluation.

(Borg & Gall, 1999, pp. 758-767).

Evaluation of the individual

• This type of research involves the assessment of students’ individual differences in intelligence and school achievement.
• It also involves evaluation of teachers, administrators, and other school personnel.
• Like assessment of students, personnel evaluation focuses on measurement of individual differences, and judgments are made by comparing the individual with a set of norms or criteria (Borg & Gall, 1999, p. 759).


Objectives-based evaluation: Four Models


• Discrepancy evaluation: measuring the discrepancy between the objectives of a program and students’ actual achievement of those objectives (Provus, 1971).
• Cost-benefit evaluation: determining the relationship between the costs of a program and the objectives it has achieved. Comparisons are made to determine which program yields the greatest benefits for each unit of resource expenditure (Levin, 1983).
• Behavioral objectives: measuring the learner’s achievement (Tyler, 1960).
• Goal-free evaluation: discovering the actual effects of the program in operation, which may differ from the program developers’ stated goals (Scriven, 1973).
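Levin’s cost-benefit comparison reduces to simple arithmetic: divide each program’s measured benefit by its cost and compare the ratios. A minimal sketch, with invented program names and figures purely for illustration:

```python
# Hypothetical illustration of a cost-benefit comparison (Levin, 1983):
# benefit per unit of expenditure = achievement gain / cost per student.
# All names and numbers below are invented for the example.

programs = {
    "Program A": {"cost_per_student": 300.0, "achievement_gain": 12.0},
    "Program B": {"cost_per_student": 450.0, "achievement_gain": 15.0},
}

def gain_per_dollar(p):
    """Benefit obtained for each unit of resource expenditure."""
    return p["achievement_gain"] / p["cost_per_student"]

# Rank programs by benefit per dollar, highest first.
ranked = sorted(programs, key=lambda name: gain_per_dollar(programs[name]),
                reverse=True)
for name in ranked:
    print(f"{name}: {gain_per_dollar(programs[name]):.4f} gain points per dollar")
```

Here Program A delivers 0.04 gain points per dollar versus 0.033 for Program B, so it ranks first despite the smaller absolute gain.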

Needs assessment

• This type of research aims to determine a discrepancy between an existing set of conditions and a desired set of conditions.
• Educational needs can be assessed systematically using quantitative research methods.
• Personal values and standards are important determinants of needs, and they should be assessed to round out one’s understanding of needs among the groups being studied.
• Needs assessment data are usually reported as group trends (Borg & Gall, 1999, p. 763)
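Since a “need” is defined here as the discrepancy between existing and desired conditions, the quantitative core of a needs assessment can be sketched in a few lines. The indicators and scores below are invented for illustration:

```python
# Hypothetical needs-assessment sketch: need = desired condition - existing
# condition, reported as a group trend. Indicator names and scores are
# invented for the example.

existing = {"reading_proficiency": 62, "graduation_rate": 78, "attendance": 91}
desired  = {"reading_proficiency": 80, "graduation_rate": 90, "attendance": 95}

# Discrepancy (gap) for each indicator.
needs = {k: desired[k] - existing[k] for k in existing}

# Report the largest discrepancies first, as priorities for action.
for indicator, gap in sorted(needs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{indicator}: gap of {gap} points")
```

Sorting by gap size surfaces the priorities: here the 18-point reading gap would head the report, with attendance last.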

Formative and summative evaluation

• The function of formative evaluation is to collect data about educational products while they are still being developed. The evaluative data can be used by developers to design and modify the product (Borg & Gall, 1999).

• The summative function of evaluation occurs after the product has been fully developed. It is conducted to determine how worthwhile the final product is, especially in comparison with other competing products. Summative data are useful to educators who must make purchase or adoption decisions (Borg & Gall, 1999).

Evaluation to guide program management

• The CIPP model comprises context evaluation, input evaluation, process evaluation, and product evaluation. It shows how evaluation can contribute to the decision-making process in program management (Stufflebeam et al., 1971).
• Context evaluation involves identification of problems and needs in a specific setting.
• Input evaluation concerns judgments about the resources and strategies needed to accomplish program goals and objectives.
• Process evaluation involves the collection of evaluative data once the program has been designed and put into operation.
• Product evaluation aims to determine the extent to which the goals of the program have been achieved.

What does a ‘qualitatively oriented evaluation’ model mean?


• The worth of an educational program or product depends heavily on the values and perspectives of those doing the judging.
• For example, the following three models:
- Responsive evaluation (Stake, 1967)
- Adversary evaluation (positive and negative judgments about the program) (Wolf, 1975)
- Expertise-based evaluation (Eisner, 1979)

Responsive evaluation


• Focuses on the concerns, issues and values affecting the stakeholders or persons involved in the program (Stake, 1967)
• Guba and Lincoln (1989) identified four major phases that occur in responsive evaluation:
- Initiation and organization: negotiation between the evaluator and the client.
- Identifying the concerns, issues, and values of the stakeholders using questionnaires and interviews.
- Collecting descriptive evaluation data using observations, tests, interviews, etc.
- Preparing reports of results and recommendations.


Adversary evaluation


• Adversary evaluation relates in certain respects to responsive evaluation; it involves both positive and negative judgments about the program (Wolf, 1975) and draws on a wide array of data.
• Four major stages:
- Generating a broad range of issues: the evaluation team surveys the various groups involved in the program (users, managers, funding agencies, etc.).
- Reducing the list of issues to a manageable number.
- Forming two opposing evaluation teams (the adversaries) and providing them an opportunity to prepare arguments for or against the program on each issue.
- Conducting prehearing sessions and a formal hearing in which the adversarial teams present their arguments and evidence before the program’s decision makers (Borg & Gall, 1999, p. 774).

Expertise-based evaluation

• Expertise-based evaluation, also known as educational connoisseurship and criticism, relies on judgments about the worth of a program made by experts (Eisner, 1979).
• One aspect of connoisseurship is the process of appreciating (in the sense of becoming aware of) the qualities of an educational program and their meaning. This expertise is similar to that of an art critic who has special appreciation of an art work because of intensive study of related art works and of art theory.
• The other aspect of the method is criticism, which is the process of describing and evaluating that which has been appreciated. The validity of educational criticism depends heavily on the expertise of the evaluator.

Differences between Historical and Evaluation Research


• Historical research aims to assess the worth and meaning of historical sources: documents, records, relics, oral history, etc. The search is for facts relating to questions about the past, the interpretation of these facts, and their significance for the present.
• Evaluation research aims to assess the merit, value, or worth of educational programs and materials of any level of schooling. It facilitates decision-making concerning policy, management, or political strategy to improve educational matters.

Conclusion
• Each type of research addresses different types of questions, and each is necessary for advancing the field of education. The decision to undertake one of these types of research will depend primarily on the aims of the study. However, both historical and evaluation research draw, to varying degrees, on the qualitative and quantitative traditions of research.
• In quantitative evaluation research, objectives provide the criteria for judging the merits of the product, e.g., publication and cost, physical properties, content, instructional properties, etc. In qualitative research, the worth of an educational program or product depends heavily on the values and perspectives of researchers.
• In historical research the historian discovers objective data but also can interpret and critique, making personal observations on the worth & value of findings.
