Summary #1: Statistical Techniques (for processing and analysis of data), by Nelson Dordelly-Rosales
References:
Gall, M. D., Gall, J. P., & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Taylor, J. K., & Cihon, C. (2007). Statistical techniques for data analysis (2nd ed.). New York: Chapman & Hall/CRC.
Key terms and definitions
Research Design: refers to all the procedures selected by a researcher for studying a particular set of questions or hypotheses. In this process the researcher creates an empirical test to support or refute a hypothesis. The process of designing a research study has several steps: conclusions from previous studies, rationale or theory, questions and hypotheses (a suggested explanation or a reasoned proposal predicting a cause), design, gathering the data, summarizing the data and determining the statistical significance of the results, and conclusions and the beginning of the next study.
Quantitative research: research using statistical methods typically begins with the collection of data based on a theory or hypothesis, followed by the application of descriptive or inferential statistical methods. Descriptive statistics: also called summary statistics, these are used to “describe” the data we have collected on a research sample. The main descriptive statistics are the mean, median, and standard deviation; they are used to indicate the average score and the variability of scores for the sample. Inferential statistics: these are used to make inferences from sample statistics to population parameters. They include sampling distributions and confidence intervals, one- and two-sample topics (comparison of means, ratio of variances), propagation of error in a derived or calculated value, regression analysis, testing hypotheses, and drawing inferences.
Types of Quantitative Research Designs: descriptive studies and studies aimed at discovering causal relationships (causal-comparative, correlational, or experimental). Causal-comparative research: refers to causal or functional relationships between variables (the way in which variables influence or affect each other). The causal-comparative method is aimed at the discovery of possible causes for the phenomenon being studied by comparing subjects in whom a characteristic is present with similar subjects in whom it is absent or present to a lesser degree. Experimental research design: is ideally suited to establishing causal relationships if proper controls are used. The key feature of experimental research is that a treatment variable is manipulated. Correlational studies: include all research projects in which an attempt is made to discover or clarify relationships through the use of correlation coefficients. A correlation coefficient tells the researcher the magnitude of the relationship between two variables (a short illustrative sketch of these statistics follows these definitions).
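To make these definitions concrete, here is a minimal sketch in Python (my own illustration, not from the chapter; all data and variable names are hypothetical) of the descriptive statistics named above, one simple inferential step (an approximate 95% confidence interval for the population mean), and a correlation coefficient:

    # Illustrative only: made-up test scores for a small hypothetical sample.
    import math
    import statistics

    scores = [72, 85, 90, 66, 78, 88, 95, 70, 82, 91]

    # Descriptive (summary) statistics: average score and variability.
    mean = statistics.mean(scores)
    median = statistics.median(scores)
    sd = statistics.stdev(scores)          # sample standard deviation

    # Inferential step: approximate 95% confidence interval for the population mean
    # (1.96 is the normal critical value; a t value would be more exact for n = 10).
    se = sd / math.sqrt(len(scores))       # standard error of the mean
    ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se

    # Correlational step: Pearson r between two hypothetical variables,
    # e.g. hours of study and exam score.
    hours = [2, 4, 5, 7, 8, 10, 12]
    exam = [55, 60, 62, 70, 75, 82, 90]
    mx, my = sum(hours) / len(hours), sum(exam) / len(exam)
    cov = sum((x - mx) * (y - my) for x, y in zip(hours, exam))
    r = cov / (math.sqrt(sum((x - mx) ** 2 for x in hours))
               * math.sqrt(sum((y - my) ** 2 for y in exam)))

    print(f"mean = {mean:.2f}, median = {median}, sd = {sd:.2f}")
    print(f"approximate 95% CI for the mean: ({ci_low:.2f}, {ci_high:.2f})")
    print(f"Pearson r = {r:.3f}  (magnitude indicates strength of the relationship)")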
Response to questions
The chapter aims to describe and explain the main statistical techniques for processing and analysis of data. It helps the reader become familiar not only with the language, principles, reasoning, and methodologies of quantitative research (research that is rooted in the positivistic approach to scientific inquiry) but also with those of qualitative research methods (observation, ethnographic interview, survey).
The specific topic of the chapter is the description of the three main types of statistical techniques: descriptive statistics, inferential statistics, and test statistics. The chapter also deals with measurement in educational research, which is usually expressed in different types of scores.
The overall purpose is to help researchers understand four kinds of information about statistical tools: (1) what they should know about statistics and what statistical tools are available, (2) under what conditions each tool is used, (3) what the statistical results mean, and (4) how the statistical calculations are made.
In general, the authors are saying that we need to analyze research results effectively. They suggest that we make maximum use of the data collected and apply appropriate statistical techniques when analyzing our research data.
This information is interesting because statistical techniques are used to a) describe educational phenomena; b) make inferences from samples to populations; c) identify psychometric properties of tests; and d) apply the mathematical procedures involved in the use of statistical formulas: measures of central tendency, measures of variability, correlation, tests, etc. A sound research plan is one that specifies the statistical tools to be used in the data analysis. Statistical tools should be decided upon before data have been collected, because different tools may require that the data be collected in different forms.
Closing summary
As researchers we should know that a statistical research project aims to investigate causality, and in particular to draw a conclusion about the effect of independent variables (predictors) on dependent variables (responses), and that there are two types of causal statistical studies: experimental and observational. An experimental study involves taking measurements, manipulating the system, and then taking additional measurements using the same procedure to determine whether the manipulation has modified the values of the measurements. In an observational study we simply gather data and investigate correlations between predictors and responses. The basic steps of an experiment are: planning, design, summary (descriptive statistics), reaching consensus (inferential statistics), and documenting and presenting results. We should be careful in choosing the right statistical tools to be used in the data analysis (see visual graphic), because occasionally a statistical tool is used when the data to be analyzed do not meet the conditions required for the tool in question. After appropriate statistical tools have been selected and applied to the research data, the next step is to interpret the results. Interpretation must be done with care. Fortunately, as researchers today we have access to the techniques and technology we need to analyze statistical data. Computers can help with data analysis techniques that were once beyond the calculation reach of even professional statisticians. All we need is practical guidance on how to use them. For example, measurement analysis can be performed with the MINITAB statistical software, which improves the presentation of the results.
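As a small aside, the inferential step of the simple experiment described above can be sketched in a few lines of Python with SciPy rather than MINITAB; the two groups and their scores below are hypothetical, and the two-sample t-test merely stands in for whatever procedure a real study would justify:

    # Hypothetical two-group experiment: did the manipulated treatment change scores?
    from scipy import stats

    control   = [70, 68, 75, 72, 66, 71, 69, 74]   # measurements without the treatment
    treatment = [78, 74, 80, 77, 72, 79, 76, 81]   # measurements after the manipulation

    t_stat, p_value = stats.ttest_ind(treatment, control)   # two-sample t-test

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    print("significant at the 0.05 level" if p_value < 0.05 else "not significant at the 0.05 level")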
Reflection
The purpose, structure, and general principles of educational research methodology, quantitative and qualitative measurement analysis, and, most importantly, the statistical techniques are valuable to everyone who produces, uses, or evaluates data. Descriptive statistics help us summarize the data we have collected on a research sample, and inferential statistics are important in educational research because they allow us to generalize from a sample or samples in order to reach conclusions about large populations. We must be aware of misuses and abuses of statistics in research. When the variables are the values of experimental measurements, they have uncertainties due to measurement limitations (e.g., instrument precision), and these uncertainties propagate to any quantity derived from the combination of those variables; statistical techniques help us examine this propagation of error. Statistical techniques are useful tools for collecting, classifying, and using systematically gathered numerical facts in research. They are tools for designing research, processing and analyzing data, and drawing inferences or conclusions.
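The propagation of measurement error mentioned above can be illustrated with a short, hedged sketch (my own, not from the readings): for a quantity derived as the product of two independent measurements, relative uncertainties combine in quadrature, as below; the measurements and their uncertainties are invented for the example:

    import math

    # Hypothetical measurements with instrument uncertainties (e.g. centimetres).
    length, d_length = 12.0, 0.2
    width,  d_width  = 5.0, 0.1

    area = length * width
    # For a product of independent measurements, relative errors add in quadrature.
    rel_error = math.sqrt((d_length / length) ** 2 + (d_width / width) ** 2)
    d_area = area * rel_error

    print(f"area = {area:.1f} +/- {d_area:.1f}")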
A few years ago, I had an excellent research experience using statistics in a survey. I worked on a survey of the teaching methods of History teachers at the Ministry of Education in Venezuela. The survey system included the most commonly used descriptive statistics, including percentages, medians, means, and standard deviations. The results were presented in tables, and we interpreted them and drew some conclusions. We established significant differences between data points. Statistical software was used to improve the quality of the presentation of the final results. We interpreted some similarities and some differences between the teachers of public and private schools. We found that about 60% involved use of the traditional “lecture” method. As a result, the Ministry of Education developed training workshops on a variety of teaching methods.
Summary # 2: Collecting Research Data
References:
Gall, M. D., Gall, J. P., & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Wallen, E., & Fraenkel, J. C. (2007). Educational Research: A Guide to the Process (2nd ed.). London: Lawrence Erlbaum Associates.
Key terms and definitions
Data-Collection Tools: questionnaires, interviews, and observations, all aimed at gathering similar kinds of data, are the most common instruments for data collection in survey research. Other techniques for collecting survey information are tests, self-report measures, and examination of records.
Survey research is a distinctive research methodology for systematic data collection. Surveys are often used simply to collect information, such as the percentage of respondents who hold or do not hold a certain opinion. Surveys can also be used to explore relationships between different variables. The cross-sectional survey: standard information is collected at one point in time from a sample drawn from a predetermined population. When information is collected from the entire population, the survey is called a census. The longitudinal survey: in the longitudinal survey, data are collected from respondents at different points in time in order to study changes or explore time-ordered associations. Three longitudinal designs are commonly employed in survey research: trend studies, cohort studies, and panel studies. Trend studies: in this design a given general population is sampled at each data-collection point. The same individuals are not surveyed, but each sample represents the same population. For example, we might survey History teachers each year and compare results from year to year. Cohort studies: in this design a specific population is followed over a period of time. Panel studies: in this design the researcher selects a sample at the outset of the study and then surveys the same individuals at each subsequent data-collection point.
Survey Interview: involves the collection of data through direct verbal interaction between individuals. It permits direct follow-up (in person, by telephone, by computer, or by recording), can yield more data (for example, by using a self-check test), and offers greater clarity than questionnaires.
Collecting observational data: three types of observational variables may be distinguished: descriptive, inferential and evaluative.
Content Synthesis
The aim of this chapter is to explain different techniques for collecting research data or information. Some of these methods depend on the methodology and the theoretical assumptions used in the research. There is a tendency for researchers in the functionalist, positivist or ‘scientific’ paradigm to collect hard, objective numbers through observation, experimentation, extraction from published sources, questionnaires, and structured interviews. They emphasise quantitative techniques over qualitative methods. Law and humanistic researchers in the interpretative and radical humanist paradigms use qualitative methods. However, matching methodologies and methods is the current tendency in educational research. The mixed-methods research paradigm and triangulation studies are ways to make research studies more robust and rigorous by verifying results through different methods, thus ensuring that the results are not a function of the research method.
The specific topic of the chapter is data-collection tools in surveys, used to obtain standardized information from all subjects in the sample. The focus is on survey research, a distinctive research methodology for systematic data collection. The information to be collected is assumed to be quantifiable. The chapter helps graduate students in education learn the steps needed to carry out the data-collection process. The overall purpose is the improvement of educational research through appropriate data collection.
What are the authors saying? The chapter provides an explanation of the techniques for preparing and using the tools of survey research, considering the various types of knowledge that can be generated by analysis of survey data. Collecting research data properly is worth doing. Survey research leads to new knowledge, and this knowledge contributes to improving education in different ways.
By selecting and using data-gathering techniques and survey research in an appropriate way, we can avoid mistakes sometimes made by researchers. Cautions include several threats to the validity of the instrumentation process. For example, an extraneous event may cause the respondents to answer differently. The chapter also warns that for our theses we need to obtain university IRB approval for the collection of data from human subjects.
Closing summary
In general, research data may be categorised as primary and secondary data. Primary data are data generated by the researcher using data-gathering techniques (questionnaires, interviews, etc.). Secondary data are those that have been generated by others and are included in data-sets, case materials, computer or manual databases, or published by various private organisations (e.g. annual reports of companies), public organisations or government departments (official statistics by the Statistical Office), and international organisations such as the International Monetary Fund, the World Bank, and the United Nations, among others. The chapter mainly focuses on what survey research is, what the data-collection tools are, and what the types of survey are. It describes the cross-sectional survey and the longitudinal survey. It explains three ways of collecting research data through a longitudinal survey, specifically trend studies, cohort studies, and panel studies. It also provides excellent examples to illustrate the major characteristics of each type of survey and describes the advantages and disadvantages of each one.
Reflection
The major purpose of surveys is to describe certain characteristics or variables of a population. Some characteristics are: 1) information is collected from a group of people (rather than from every member of the population) in order to describe some aspects (such as abilities, opinions, attitudes, beliefs, and/or knowledge) of the population of which that group is a part.
2) The main way in which the information is collected is through asking questions, using questionnaires and/or interviews. The answers given by respondents constitute the data of the study. Among the major advantages of survey research are reduced cost and the fact that the information collected can be of various types. Among the major disadvantages are biases inherent in the data-collection process and possible security or confidentiality concerns. In 2001 I was able to participate in a longitudinal survey (a cohort study) at the Catholic University in Venezuela. In this design, the College sampled the graduating class over a couple of years using questionnaires. I realized that there are unique problems and pressures that affect longitudinal studies because of the extended period of time over which data are collected, in comparison to cross-sectional studies. One danger is that the issues studied, and the measures and theories used, may become obsolete over the course of the study. Also, the survey was too long, and a number of participants left the last questions unanswered. Reading Borg & Gall (1999) made me reflect on the importance of carefully planning research surveys (and short-term uses for the data should be planned ahead). Indeed, success depends on clearly defining long-term goals, specific variables, and the limitations and delimitations in the generalizability of findings. In order to guard against obsolescence, longitudinal research should be theoretically broad-minded and mixed. It is also important to select respondents carefully and to use a large enough sample size. In order to make legitimate conclusions about the specified population, sampling must be representative and valid statistical assumptions must be met.
Summary # 3: Collecting Research Data with Questionnaires
References:
Gall, M. D., Gall, J. P., & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Fleming, C. M., & Bowden, M. (2009). Web-based surveys as an alternative to traditional mail methods. Journal of Environmental Management, 90(1), 284-292.
Key terms and definitions
Questionnaire: can be defined as a set of questions to which participants record their answers, usually within closely defined alternatives (Fleming & Bowden, 2009). There are three main types: postal or mail questionnaires, online questionnaires, and personally administered questionnaires.
Mail Questionnaires: the questionnaires are sent (through the post office) to the sample participants, usually with a pre-paid self-addressed envelope to encourage response. Some advantages are low cost and anonymity; the respondents can give more thought to the questions; and researcher bias is lower than with personally administered questionnaires. Some disadvantages are possible misinterpretation of questions, possible problems with language, and a lower response rate (which is usually small and often requires a second or even a third mailing).
Hand-delivered or personally administered questionnaires: the researchers personally administer the questionnaire to the participants, usually at the participants’ workplace, residence, or another adequate location. Some advantages are a faster response compared to mail questionnaires, the researcher can clarify questions for the participants, the researcher can motivate honest answers by emphasising the participants’ contribution, and personal persuasion increases the response rate. A possible disadvantage is that the researcher may introduce his/her personal bias, so the responses may vary compared to mail questionnaires.
Online questionnaires: using online questionnaires enables the researcher to collect large volumes of data quickly and at low cost, provides direct access to research populations, and makes it possible to design forms that are friendly and attractive, thus encouraging higher response rates; data-entry errors are also often low. Some disadvantages are sample bias, the technological knowledge required of respondents, and concerns about anonymity, privacy, and confidentiality (Fleming & Bowden, 2009).
Content synthesis
The chapter aims to help the reader to understand the main steps in conducting questionnaire surveys, the rules that researchers should follow to guarantee high-quality surveys and the importance of careful planning and sound methodology.
The steps in conducting a questionnaire survey are the following: (1) defining objectives, (2) selecting a sample, (3) writing items, (4) constructing the questionnaire, (5) pretesting, (6) preparing a letter of transmittal, (7) sending out the questionnaire and follow-ups, and (8) analyzing the results and preparing the research report. Surveyors should try to make questionnaires attractive and easy to complete. Also, they should number the questionnaire items and pages, give the name and address of the person to whom the form should be returned, include brief, clear instructions, use examples before any items, etc.
The overall purpose of the chapter is to guide readers, especially graduate students of education and teachers, in how to apply the necessary tools, procedures, and techniques for effectively designing and conducting a survey questionnaire in educational research.
The authors are saying that given the objectives of a survey, we as graduate students should know the rules related to questionnaire format and how to write both closed-form and open-ended questionnaire items to measure them.
The information provided is interesting because questionnaires are useful instruments for obtaining access to organisations and, more specifically, for obtaining evidence of consensus among respondents on different issues. With careful planning and sound methodology, the mail questionnaire can be a very valuable research tool in education.
Closing summary
Chapter 8 provides a clear explanation of the steps in conducting a questionnaire survey and the set of rules that researchers should apply when conducting it. Among the rules that we should apply when conducting a questionnaire survey are: define the problem clearly, list the objectives, construct neat items, make the questionnaire attractive, and give the name and address of the person to whom the form should be returned. Regarding the form, the questionnaire should include brief, clear instructions, use examples before any items, organize the questions in a logical sequence, and be easy to complete. In relation to the organization of content, when the questionnaire moves to a new topic it should include a transitional sentence to help respondents switch their trains of thought; it should begin with a few interesting and nonthreatening items; important items should not be placed at the end, while threatening or difficult questions should be placed near the end; and items should be meaningful to the respondents. Finally, if there are attitude measurements, the researcher should investigate respondents’ familiarity (by trying the questionnaire out beforehand with a small sample) and pay attention to anonymity, since anonymous non-respondents cannot be identified (whether anonymity is needed depends on the specific goals). The authors recommend pre-testing the questionnaire, which requires doing the following: select a sample of individuals from a population similar to our subjects and ask them to restate their understanding of the meaning of each question in their own words, to make sure the items are clearly stated; apply the questionnaire to a sample to check the percentage of replies; read the subjects’ comments and make the changes necessary to improve it; make a brief analysis of the pre-test results; and make any necessary changes (adding questions, correcting wording, etc.). Then prepare a letter of transmittal; the authors say that it is important to pre-contact the sample to assure cooperation. The letter must be brief and precise, explain good reasons, assure privacy and confidentiality if possible, and be associated with a professional institution or organization (an authority symbol).
Reflection
The questionnaire can be a very valuable research tool in education. It is a data collection tool in which written questions are presented that are to be answered by a selected sample of respondents. Collecting research data with questionnaires requires careful planning and sound methodology. The authors describe in detail all the steps that must be taken to carry out a successful questionnaire survey.
The key to carrying out a satisfactory questionnaire study is to begin by clearly defining the research problem and listing specific objectives or hypotheses. That is, the researcher needs to have a clear understanding of what s/he hopes to obtain from the results. Otherwise, it will be very difficult to make the right decisions regarding the selection of a sample, the construction of the questionnaire, and the methods for analyzing the data. Identifying the target population and selecting a sample are also keys to success in conducting a questionnaire survey.
The researcher also must be very careful in designing or constructing items. The qualities of a good questionnaire survey are the following: clarity; short items; avoiding items that include two separate ideas in the same item; not using technical terms, jargon, or confusing words; asking general questions first and then specific questions; avoiding biased or leading questions (the subject is eager to please); and avoiding questions that may be psychologically threatening (low morale). The authors suggest sending questionnaires and follow-ups by special delivery mail. The questionnaires must be neat and carefully planned.
Below are two diagrams that synthesize statistical techniques and a glossary that can help in understanding some of the terms that are useful in conducting research.
Reference:
Lindberg, V. (2000). Uncertainties and error propagation. Retrieved from http://www.rit.edu/cos/uphysics/uncertainties/Uncertaintiespart1.html#range
Saturday, 23 May 2009
About Surveys
Surveys provide participants with the chance to express their ideas, and we can get precise answers to our questions. I think the secret of an excellent survey is the structure of the questions and having the goals of the study very clear. One of the best surveys I have reviewed is the following:
Rosales-Dordelly, C. L., & Short, E. C. (1985). Curriculum Professors’ Specialized Knowledge. Lanham, MD: University Press of America. This book reports a survey of curriculum professors in Canada and the US.
# 1 Validity, Reliability, Trustworthiness
A. Summary by Nelson Dordelly-Rosales: McMillan (2007) summarizes and provides suggestions for monitoring the threats that put internal validity at risk in randomized field trials or control trials, or in quasi-experiments in which there is “equating” of pre-test differences: unit of randomization and local history (equivalence of the groups), intervention-treatment (fidelity, consistency with theory), differential attrition (mortality-tracking participants), testing (instrumentation variations and procedures), subject effects (selection-maturation interaction), diffusion of intervention (treatment, asking appropriate questions), experimenter effects (checking values, biases, needs), and novelty effects (changes to normal routines). It is the responsibility of researchers “to include design features that will lessen the probability that the threat is plausible” (p. 5).
B. Experience: As a graduate student, my recent research activity is in the area of interpretation/construction (my thesis). But I recently reviewed an excellent study by Chauncey Monte-Sano (2005) which resembles another comparative study done at the Ministry of Education in Venezuela. The researchers monitored and supervised the threats to validity and trustworthiness through intervention fidelity (pre- and post-test essays, interviews, observations, teacher feedback, assignments, and readings; analysis of students’ progress within each classroom and between both classrooms, assessing any changes observed in the students’ work).
C. Suggestions: I think that in studies in which there are pre-test differences, it is necessary to include design features to monitor all plausible threats. Particularly, I would suggest increasing the number of homogeneous comparative groups.
# 2 Different types of sampling methods
Summary: Cui, Wei Wei (2003) aims to help readers understand what sampling is (a technique for selecting a representative part of a population for the purpose of drawing conclusions about the whole population), the different types of sampling (probabilistic, non-probabilistic, simple, systematic, stratified, cluster, purposeful), the potential sources of error (sampling error, non-coverage error, non-response error, and measurement error), and how to reduce error in mail surveys and interviews (avoiding an unrepresentative sample by enlarging the sample size; avoiding bias by the interviewer or survey researcher in favouring the selection of units that have specific characteristics; improving survey return rates, etc.).
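To make one of these ideas concrete, here is a minimal sketch in Python of proportional stratified random sampling; the sampling frame, strata, and sizes are invented purely for illustration and are not from Cui's article:

    import random

    # Hypothetical sampling frame: teachers labelled by school type (the strata).
    population = [("public", i) for i in range(600)] + [("private", i) for i in range(400)]
    sample_size = 100

    # Group the frame by stratum.
    strata = {}
    for school_type, teacher_id in population:
        strata.setdefault(school_type, []).append((school_type, teacher_id))

    # Draw from each stratum in proportion to its share of the population.
    sample = []
    for school_type, members in strata.items():
        share = round(sample_size * len(members) / len(population))
        sample.extend(random.sample(members, share))

    counts = {s: sum(1 for t, _ in sample if t == s) for s in strata}
    print(counts)   # expected: {'public': 60, 'private': 40}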
The value for me as an educator and as a consumer of educational research: I should use appropriate sampling methods and an adequate response rate for a representative sample. However, I should also evaluate different factors that may affect the quality of data from a research study, for example, procedures, questions asked, validity of questionnaire, among others.
# 3 Two concepts regarding data analysis
a. Helberg (1996) warns researchers about the paradox that statistics can produce dissimilar or contradictory results. The author provides suggestions about how to cope with sources of bias, errors in methodology, and misinterpretation of results. To that end, he explains how to assure representative sampling and valid statistical assumptions, recommends using the methods available for taking measurement error into account in some statistical models, and recommends applying more precision and accuracy in the interpretation of results.
b. Oliver-Hoyo and Dee Dee (2006) reviewed data collection through three qualitative methods for studying qualitative variables (meaning constructed by individuals): surveys, journal responses, and field notes. The authors are persuasive in arguing that relying on more than two methods is invaluable for avoiding gross errors when drawing conclusions in surveys.
c. How the articles’ information could be of value to you as an educator and consumer: From these readings, I learned that multiple methods of data collection and analysis (quantitative and qualitative) help us to develop a more complete view of the problem and the solution. So, in the example provided regarding accountability, I think that the right approach is integrating different “assessment strategies” so that educators can take advantage of all the information.
Article Review by Nelson Dordelly-Rosales:
1. Krauss, S. (2005). Research Paradigms and Meaning Making: A Primer. The Qualitative Report, 10(4), 758-770.
The paper Research Paradigms and Meaning Making: A Primer provides an introduction to some of the basic issues in attempting to work with both quantitative and qualitative research methods. It explains how qualitative data analysis can be used to organize and categorize different levels and forms of meaning. It argues that the heart of the quantitative vs. qualitative “debate” is philosophical, not methodological, and it offers an overview of the epistemological differences of quantitative and qualitative research methodologies. The article introduces the notion of meaning making in social sciences research and how it actually occurs through qualitative data analysis. It defines meaning as “the underlying motivation behind thoughts, actions and even the interpretation and application of knowledge” (Krauss, 2005, p. 763). The task of constructing meaning through qualitative data analysis is explained through a variety of perspectives and approaches.
Problem/Issue and the Importance/Significance
The focus is on the task of constructing meaning through qualitative data analysis. This paper is significant because it examines the concept of the philosophical realist paradigm and introduces the notion of “meaning making” in research methods and how meaning is generated from qualitative data analysis specifically. To that end, it explains epistemological differences between quantitative and qualitative research that allows us to understand phenomena and to get more realistic results. Some examples are also provided.
Research Question(s)
What are the epistemological similarities and differences between quantitative and qualitative research paradigms? How can the realist philosophical paradigm accommodate both quantitative and qualitative research paradigms? How can meaning be constructed and organized using a qualitative data-analysis approach?
Sample and sample selection process
Data selection in qualitative research is intuitive, aiming to discover (not measure) potentially important insights. The author explains the need to make use of multiple research methods to optimize the data-selection process and to increase both the breadth and depth of data selection.
Data Collection Methods
Krauss (2005) explains that in qualitative data collection, meaning is constructed on “a variety of levels of daily life through the exchange of ideas, interaction, and agreement between the researcher and the participants” (p. 764). The author supports his point of view through examples from the literature within the social sciences about how meaning can be constructed and organized using a qualitative data-analysis approach (interpretivism). The author also cites a multi-year religiosity initiative as a case in which he was involved in conducting both qualitative and quantitative research to assess religiosity in the lives of young people.
Data Analysis Method
Krauss (2005) argues that data analyses in qualitative research are guided by a reflective paradigm in an attempt to acquire social knowledge. In this sense, according to the author, meaning is constructed in a variety of ways; that is, “through construction, the researcher is not a blank slate; rather s/he is an active participant in the process” (Krauss, 2005, p. 767). This means that, epistemologically, the researcher is “engaged in the setting, participating in the act of ‘being with’ the respondents in their lives to generate meaning of them” (p. 769). In addition, developing themes and storylines featuring the words and experiences of the participants themselves adds richness to the findings.
Limitations/Delimitations/Assumptions
Krauss (2005) explains that the realist paradigm has the unique goal of facilitating the meaning-making process, which is an important learning facilitator that has the power to encourage transformative learning. The realist philosophical paradigm attempts to accommodate both quantitative and qualitative research methods. In the area of religion, for example: “the result of the process was a major study that tapped into the richness of individual religious experience, along with a broader understanding of religious behaviors and knowledge levels across large groups of young people” (p. 758). As a whole, the realist paradigm has fewer limitations than either approach on its own.
Trustworthiness/Validity Considerations
According to the author, realist researchers reject the framework of validity that is commonly accepted in more quantitative research in the social sciences. Nevertheless, realist research inherently assumes that there is some reality that can be observed with greater or lesser accuracy or validity. In this sense, “rigor in qualitative data analysis is a necessary element for maximizing the potential for generating meaning” (Krauss, 2005, p. 765). This rigor provides trustworthiness to the results.
Ethical Issues
Qualitative researchers can operate under different epistemological assumptions from quantitative researchers. Ethical issues can sometimes result in confusion and uncertainty among researchers. In qualitative research, as well as in quantitative research, researchers are expected to employ high standards of academic rigor, and to behave with honesty and integrity. Ethics can emerge from value conflicts. I think that being a ‘purist’ researcher looking only at one small portion of a reality that cannot be split or unitized “without losing the importance of the whole phenomenon brings an ethical issue to the research process” (Krauss, 2005, p. 767).
Reflective Assessment
The concept of meaning making in research methods and how meaning is generated from qualitative data analysis are the most important contributions of this paper. The article discusses the philosophical differences between quantitative and qualitative research. Quantitative research is positivist, objective, and scientific, and its analysis can be accomplished with the statistical software packages commonly used for quantitative (descriptive) data. Qualitative researchers operate under naturalist, constructivist, eclectic, and subjective assumptions (researcher interpretation). Qualitative research is a highly intuitive activity that contributes greatly to the construction of meaning. As researchers, we should focus on “the significance of different levels of meaning such as worldviews or philosophies of life, and the importance of meaning as a critical element to human existence and learning” (Krauss, 2005, p. 767). I think that the author makes a good point regarding the need to make use of multiple research methods to optimize the data collection and analysis process; that is, a mixed approach increases both the breadth and depth of data collection and data analysis. The author provides an excellent overview of the basic issues in attempting to work with both quantitative and qualitative research methods toward the goal of generating meaning. Using both methods together contributes to a better understanding of phenomena. I think it is important to make use of multiple research methods because it means a broader understanding of behaviors and knowledge levels across large groups of people. Different philosophical assumptions or theoretical paradigms about the nature of reality are essential to understanding the overall perspective from which a study is designed and carried out. Within this holistic approach, a critical realism framework, both qualitative and quantitative methodologies together are appropriate toward the goal of generating meaning and understanding thinking, behavior, and worldview formation. Indeed, the heart of the quantitative-qualitative “debate” is philosophical, not methodological.
Article Review
2. Gephart, R. (1999, November 14). Paradigms and Research Methods. Retrieved May 21, 2009, from Academy of Management, Research Methods Division: http://division.aomonline.org/rm/1999_RMD_Forum_Paradigms_and_Research_Methods.htm
Gephart (1999) explains three prevailing paradigms or views of the world which are currently shaping research: Positivism, Post-positivism or Interpretivism, and Critical Postmodernism. The author introduces the concept of each paradigm, describes the key features of each worldview or “forms of scholarship,” the nature of knowledge pursued, and the different means by which knowledge is produced and assessed within each paradigm or worldview. Very briefly, the three strands of thinking and researching differ in the following way: Positivism assumes an objective world which scientific methods (for example, experimental and survey) can more or less readily represent and measure statistically, seeking to predict and explain causal relations among key variables. Post-positivists or interpretivists assert that these methods “impose a view of the world on subjects;” their concern is the interplay between objective and subjective meanings. Critical postmodernists, on the other hand, argue that these imposed views or measures implicitly “support forms of scientific knowledge that explicitly reproduce capitalist structures and associated hierarchies of inequality” (p. 5). In this sense, the goal is “social transformation involving the displacement of existing structures of domination” (p. 6). The author concludes by saying that these paradigms or theories are somewhat “separate but not greatly distant from one another” (p. 7). In general, the goal of research is an adequate reflection of people’s experience when making inquiries from one or more theoretical frameworks.
Problem/Issue and the Importance/Significance
The need to help readers understand some of the basic assumptions underlying forms of research present in the field: Positivism, Post-positivism or Interpretivism and Critical Postmodernism. The author attempts to make us understand the key features, the usefulness of each paradigm, and how the three paradigms can be interwoven into research.
Research Question(s)
What are the epistemological similarities and differences between positivism, interpretivism, and critical postmodernism? What are the main assumptions, key ideas, figures, goals, theories, criteria, units of analysis, and research methods of each paradigm?
Sample and Sample Selection Process
Sample and sample selection vary according to the specific paradigm. Positivism uses quantitative criteria; interpretivism and critical postmodernism apply qualitative criteria and are therefore more intuitive and flexible; however, rigor and specific principles are essential for good research.
Data Collection Methods
In this article, the author uses grounded theory development and suggests a mixture of the quantitative and qualitative paradigms. The author explains, however, that the data collection and the research methods, goals, criteria, and unit of analysis of each paradigm are different: (a) Positivist research uses experiments, questionnaires, secondary data analysis, quantitatively coded documents, Likert scaling, and structural equation modeling, and on the qualitative side it uses grounded theory testing, among others. (b) Interpretivism uses ethnography, participant observation, interviews, conversational analysis, and grounded theory development; case studies, conversational and textual analysis, and expansion analysis. (c) Critical Postmodernism uses field research, historical analysis, and dialectical analysis.
Data Analysis Method
The author explains that the unit of analysis of positivist research is the variable, and the unit of analysis of interpretivism is the meaning. Critical theory-Postmodernism (PM) uses deconstruction and textual analysis as its units of analysis.
Limitations/Delimitations/Assumptions
Among the limitations, delimitations, and assumptions of each paradigm are the following: positivist research takes as its goal uncovering truth and facts as quantitatively specified relations among variables. Interpretivism/Constructivism is a related approach, which is based on analysis and looks for persuasion using ‘sensitizing’ concepts. Its goals are to describe meanings, understand members' definitions of the situation, and examine how objective realities are produced. Critical theory-Postmodernism (PM) aims to investigate and uncover hidden interests, enable a more informed consciousness, displace ideology with scientific insights, and bring about change.
Trustworthiness/Validity Considerations
Criteria of validity vary according to each paradigm; the author explains that positivist research uses, in place of prediction, explanation, rigor, internal and external validity, and reliability. Interpretivism uses trustworthiness and authenticity. Critical theory-Postmodernism (PM) uses theoretical consistency, historical insights, transcendent interpretations, a basis for action, change potential, and mobilization. Its units of analysis are contradictions, ‘incidents of exploitation’, and the sign. In general, realist researchers reject the framework of validity that is commonly accepted in just one method of research.
Ethical Issues
Currently there is a re-examination of ethical standards to better protect the rights of research participants within each paradigm. Among the principles are voluntary participation, informed consent, confidentiality, anonymity, and prevention of risk of harm.
Reflective Assessment
Gephart (1999) provides an excellent overview of the three important and prevailing paradigms, views of the world, or philosophies of research, namely Positivism, Post-positivism or Interpretivism, and Critical Postmodernism. I think that we might consider each school eclectic in the sense that it chooses the best from all sources. I also think that the three strands are different but can be interwoven, integrated, or mixed into research with the purpose of improving the process of inquiry and its results.
I regard the mixed paradigm as a more reasonable strategy for research. Why? Because I think that by understanding the main differences and weaknesses of each of the current leading theories, and by demonstrating that each one, separately, provides just one view of the world, the promise of impartiality of each one might be criticized as illusory. An integrated perspective is a strategic, open-minded approach, which can tolerate all the different theories and points of view coming from different directions and helps us better understand research problems and their solutions. Positivism provides a partial view of reality (its inductive view of quantitative data). Interpretivism and critical studies also provide a partial view of life; each one is subjective (a deductive view of qualitative data). Given the progressive changes evidenced in contemporary society, a holistic understanding of phenomena is imperative (viewing each problem from different angles or from an interdisciplinary perspective). In this sense, “toleration of others requires broken confidence in the finality of our own truth” (Tuck, 1988, p. 21). So, I think it is important to synthesize into a unity all of the best components of the various philosophical theories. My idea of being integrative is to retain the scientific commitment, but also to combine it with the assumptions of qualitative research.
Article Review
3. Mackenzie, N., & Knipe, S. (2006). Research Dilemmas: Paradigms, methods and methodology. Retrieved May 21, 2009, from Issues in Educational Research: http://www.iier.org.au/iier16/mackenzie.html
Mackenzie and Knipe (2006) criticize the perceived dichotomy between qualitative and quantitative research methods in research textbooks and journal articles: “considerable literature supports the use of mixed methods” (p. 1). The authors begin with a definition of the leading paradigms in educational research, which are the following: positivist, post-positivist, interpretive/constructivist, transformative, and pragmatic. They discuss the language commonly associated with each major research paradigm. The focus is on the basic issues in attempting to work with mixed research methods and how “the research paradigm and methodology work together to form a research study” (p. 1). To that end, they clarify the difference between methodology and method: “The most common definitions suggest that methodology is the overall approach to research linked to the paradigm or theoretical framework while the method refers to systematic modes, procedures or tools used for collection and analysis of data” (p. 4). The authors also clarify the difference between paradigms, methodologies, and the traditional “dichotomy” of quantitative and qualitative research methods and data-collection tools. They conclude with a discussion and explanation of how to combine paradigms and methods.
Problem/Issue and the Importance/Significance
The paper examines the features of each paradigm, and the authors’ main argument is to “demystify” the role of paradigms in research. It questions the quantitative-qualitative “dichotomy” as a way of teaching research methodology. Research texts and university courses “can create confusion to undergraduate, graduate and early career researchers” (Mackenzie & Knipe, 2006, p. 2). The paper suggests teaching a combination of both quantitative and qualitative methods of research, making use of the most valuable features of each. That is, “research methods in research texts and university courses should include mixed methods and should address the perceived dichotomy between qualitative and quantitative research methodology” (p. 1).
Research Questions
Why should qualitative and quantitative methods be combined? How do the research paradigm and methodology work together to form a research study? Is there a difference between methodology and methods? How can paradigms and methods be matched?
Sample/Sample selection process and Data Collection/Analysis Method
According to Mackenzie and Knipe (2006), each research paradigm, framework, or methodology applies a different sample/sample-selection process and different data collection/analysis methods. Each overall framework or methodology of research is consistent with the definition of its paradigm and holds unique features that are specific to its particular approach. For example, the positivist and post-positivist paradigm usually applies experimental, quasi-experimental, and correlational designs, among others. The interpretivist/constructivist paradigm applies naturalistic, phenomenological, hermeneutic, interpretivist, and ethnographic approaches, multiple participant meanings, and social and historical construction. The transformative paradigm applies critical theory, neo-Marxist, feminist, critical race theory, Freirean, and participatory or emancipatory approaches, among others. The pragmatic paradigm applies, among others, consequences of actions and problem-centered, pluralistic, and political approaches. In a mixed research paradigm the decision-making process does not necessarily follow a linear path; the process is more realistically cyclical, and the researcher can mix methods or make changes as the research progresses.
Limitations/Delimitations/Assumptions
In this article, the authors assume that educational research should be taught as a mixed paradigm. They argue that a combination of paradigms, methods, and tools is the right way to teach research methodology. The mixed method is itself a statement of what could be, rather than a groundbreaking notion, especially in the instance of educational research.
Trustworthiness/Validity Considerations
The rejection of reliability and validity in qualitative research has resulted in a shift of the standards for “ensuring rigor” away from the researcher’s actions during the course of the research. Each researcher's theoretical orientation has implications for every decision made in the research process, which obviously has implications for trustworthiness/validity considerations. The emphasis on strategies that are implemented during the research process has been replaced by strategies for evaluating trustworthiness and utility, which are implemented once a study is completed.
Ethical Issues
According to the authors, many writers fail to adequately define research terminology and sometimes use terminology in ways that are not consistent with its intended meaning, omitting significant concepts and leaving the reader with only part of the picture. In this sense, confusion can be created when authors use different terms with different meanings to discuss paradigms; for instance, methodology and methods are usually used interchangeably even though they have different meanings. The authors conclude by stating that the mixed method, like all research approaches, needs to be viewed through a critical lens while at the same time its contribution to the field of research is recognized as valid.
Reflective Assessment
From a philosophical perspective, showing how much reflection it takes to start an investigation, the article discusses different types of research and the language associated with them. The terms qualitative and quantitative refer to the data-collection methods, analysis, and reporting modes, rather than to the theoretical approach to the research, which is the methodology (the overall approach to research linked to the paradigm or theoretical framework, while the method refers to the systematic modes, procedures, or tools used for the collection and analysis of data).
This article applies almost directly to our situation as young researchers and is very useful for distinguishing the type of methodology to apply in our own research. It explains the strengths of each leading methodology or theory of research: positivism and post-positivism, interpretivism/constructionism, the transformative paradigm, pragmatism, and a mixed-methods approach to research, which are excellent theoretical frameworks. In a mixed research paradigm the decision-making process does not necessarily follow a linear path; the process is more realistically cyclical, and the researcher can mix methods or make changes as the research progresses. This view applies to constitutional interpretation. This past year I proposed the mixed paradigm in constitutional interpretation at the Annual Graduate Conference at King’s College, University of London:
http://www.iglrc.com/2008website/sessions.html
I'm delighted to say that my paper on “Eclecticism in Constitutional Interpretation” was selected for publication in the book Law and Outsiders: Norms, Processes and 'Othering' in the 21st Century, edited by Cian C. Murphy and Penny Green (Oxford: Hart Publishing, 2009).
http://www.hartpub.co.uk/books/search.asp?s=Legal+Theory&st=0&cp=6
4. Article Review
Monte-Sano, C. (2008). Qualities of Historical Writing Instruction: A Comparative Case Study of Two Teachers’ Practices. American Educational Research Journal, 45(4), 1045-1079.
The author analyzed qualitative and quantitative differences of the practices of two high school teachers of U.S. History, in real-world History and writing instruction over time (seven months). The analysis included forty two students’ performances that resulted from these practices rather than researcher interventions. The author demonstrated that both, teachers and students need training in the work of “writing evidence-based historical essays that involves sifting through evidence and constructing an interpretation in writing” (Monte-Sano, 2008, p. 1046). The results show that the teacher that supported students’ development in writing evidence-based historical essays was more successful in improving students’ growth than the other. The author aims to help high school teachers of U.S. history become more acquainted of different qualities of instruction to help students to learn how to read, write and think historically. She explains that there are different qualities of instruction that support students’ growth in writing evidence-based historical essays. These qualities are the following: approaching history as evidence-based interpretation; reading historical texts and considering them as interpretations; supporting reading comprehension and historical thinking; asking students to develop interpretations and support them with evidence; and using direct instruction, guided practice, independent practice, and feedback to teach evidence-based writing. According to the author, “the act of writing alone is not sufficient for growth in evidence-based historical writing” (Monte-Sano, 2008, p.1045).
Problem/Issue and the Importance/Significance
Students and teachers tend to have difficulties integrating documentary evidence into written accounts of past events. The author proved that the above mentioned qualities of teaching would foster growth in evidence-based historical writing. This study is important because history educators still know little about the relationships between teaching and learning with regard to evidence-based writing and reasoning (Monte-Sano, 2008).
Research Question(s)
How do teachers prepare students to write evidence-based historical essays? What messages about history, evidence, and writing do teachers’ practices convey? What opportunities to think and write historically do these teachers provide? How do teachers think about their subject matter, students, and pedagogy? In what ways do teachers’ practices coincide with improvements in students’ evidence-based historical writing?
Sample and sample selection process
Two teachers were selected in two urban high schools in Northern California. One class period per teacher – selection was based on class size in U.S history course. A total of 42 students from these classes participated in pre and post assessments of their historical learning. Over 7 months the researcher identified patterns of growth (or lack thereof).
Data Collection Methods
Data were collected from four sources: interviews, observations, feedback, and classroom artifacts (assignments and materials). Interview questions asked teachers their view of students’ progress and needs, and the reasoning behind their instructional decisions. Observations focused on what students did during class, how the teacher represented history and what opportunities there were to learn evidence-based reasoning, argumentation and writing. Field notes and data summary charts where completed during and after every observation. Feedback included teachers’ oral assessment on homework and essays.
Data Analysis Method
The author used mixed methods in an embedded multiple-case design that included teacher and student data analysis. Regarding teacher data, she organized field notes and interview data chronologically, transcribed them into codes, and used memos to track key ideas, to highlight illustrative excerpts of class, and to note what to look for in future observations. Data showed the amount of time that each teacher devoted to a particular topic, the agreement in the number of assignments and the number of readings per topics and key components of assignments. With respect to student data, the author measured, through pre and post test instruments, how students composed arguments that recognize historical perspectives from multiple documents.
Limitations/Delimitations/Assumptions
The reduced number of teachers and students was one of the limitations. The researcher had to create a matrix of questions and possible answers and to ensure that both instruments were appropriate (in terms of age of participants). Each instrument presented several points of agreement between sources and so allowed for multiple responses to the questions. Each one asked a why question that prompts students to make a supporting argument explaining why an action was taken in the past.
Trustworthiness/Validity Considerations
The author created specific instruments to study historical reasoning and writing in history. In terms of content validity, the pre- and post-test instruments were consistent with the following variables: the notions of historical reasoning as analysis of evidence, the use of evidence to construct interpretations of the past, and communication of arguments in writing. According to the author, the strength of these instruments lied in their ecological validity (Monte-Sano, 2008, p. 1051). Even so, the author noticed that contextual changes over the course of multiple administrations of the tests can influence results. For example, the constraints of working in the classrooms led to certain agreement on the essay topics.
Ethical Issues
Comparing two teachers with different approaches (one teacher worked in groups, the other worked with lectures in which students listened to lectures and worked independently), “was not entirely fair” (Monte-Sano, 2008, p.1079); however, the author explains that comparison is instructive when considering how to develop students’ historical thinking and writing.
Reflective Assessment
The report is a comparative case study of teaching and it uses student performance as a backdrop for claims of teaching effectiveness. The target was to examine two teachers’ practices with regard to the learning outcome of writing evidence-based essays. The strength of the body of the article lies in four main aspects, (1) the notions of historical reasoning as analysis of evidence; use of evidence to construct interpretations of the past, and communication of arguments in writing, (2) the list of qualities of instruction that support students’ growth in writing evidence-based historical essays, (3) the list of questions that teachers of history can ask of research, and (4) the use of multiple research methods to optimize the data collection and the analysis process. The results show the usefulness of qualitative and quantitative comparisons of students’ work to determine how each class improves in writing evidence-based history essays.
Traditionally, teachers and students tend to view history as established fact (literal meaning of documents), not analysis or interpretation. Monte-Sano shows that there are creative ways that teachers implement approaches to history writing. This entails synthesizing and organizing information to suit the writer’s purposes; problem-based writing tasks, encouraging historical thinking, and transformation of knowledge already in the mind.
This article is an excellent example of how to work with both quantitative and qualitative research methods toward the goal of generating a new way to teach and learn History writing. To explore further, I read other articles and books, which expand the basic theme and I came to the conclusion that teachers that embark on such a study of History must feel the passion of teaching and be prepared to devote time and energy to the endeavor. In my future endeavor, I will be teaching Law History in Venezuela through Court cases, using high Court precedents. I think it affords creative alternatives such as reading historical texts and considering them as interpretations; asking students to develop interpretations and support them with evidence, etc.
1. Krauss, S. E. (2005). Research Paradigms and Meaning Making: A Primer. The Qualitative Report, 10(4), 758-770.
The paper Research Paradigms and Meaning Making: A Primer provides an introduction to some of the basic issues in attempting to work with both quantitative and qualitative research methods. It explains how qualitative data analysis can be used to organize and categorize different levels and forms of meaning. It argues that the heart of the quantitative vs. qualitative “debate” is philosophical, not methodological, and it offers an overview of the epistemological differences of quantitative and qualitative research methodologies. The article introduces the notion of meaning making in social sciences research and how it actually occurs through qualitative data analysis. It defines meaning as “the underlying motivation behind thoughts, actions and even the interpretation and application of knowledge” (Krauss, 2005, p. 763). The task of constructing meaning through qualitative data analysis is explained through a variety of perspectives and approaches.
Problem/Issue and the Importance/Significance
The focus is on the task of constructing meaning through qualitative data analysis. This paper is significant because it examines the philosophical realist paradigm and introduces the notion of “meaning making” in research methods, and specifically how meaning is generated from qualitative data analysis. To that end, it explains epistemological differences between quantitative and qualitative research that allow us to understand phenomena and to obtain more realistic results. Some examples are also provided.
Research Question(s)
What are the epistemological similarities and differences between the quantitative and qualitative research paradigms? How can the realist philosophical paradigm accommodate both quantitative and qualitative research? How can meaning be constructed and organized using a qualitative data analysis approach?
Sample and sample selection process
In qualitative research, data selection is intuitive: the aim is to discover (not measure) potentially important insights. The author explains the need to use multiple research methods to optimize the data selection process and to increase both the breadth and depth of the data selected.
Data Collection Methods
Krauss (2005) found that in qualitative data collection, meaning is constructed on “a variety of levels of daily life through the exchange of ideas, interaction, and agreement between the researcher and the participants” (p. 764). The author supports his point of view with examples from the social sciences literature of how meaning can be constructed and organized using a qualitative data analysis approach (interpretivism). The author also cites a multi-year religiosity initiative, in which he was involved, as a case of conducting both qualitative and quantitative research to assess religiosity in the lives of young people.
Data Analysis Method
Krauss (2005) argues that qualitative data analyses are guided by a reflective paradigm in an attempt to acquire social knowledge. In this sense, according to the author, meaning is constructed in a variety of ways; that is, “through construction, the researcher is not a blank slate; rather s/he is an active participant in the process” (Krauss, 2005, p. 767). This means that, epistemologically, the researcher is “engaged in the setting, participating in the act of ‘being with’ the respondents in their lives to generate meaning of them” (p. 769). In addition, developing themes and storylines featuring the words and experiences of the participants themselves adds richness to the findings.
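To make concrete how coded qualitative data can be organized into broader levels of meaning, here is a minimal sketch in Python. It is not Krauss's procedure; the participants, codes, excerpts, and theme groupings are invented purely for illustration.

from collections import defaultdict

# Hypothetical coded excerpts: (participant, code, excerpt) tuples.
# Codes and groupings are invented for illustration only.
coded_excerpts = [
    ("P1", "belonging", "I feel part of something larger than myself."),
    ("P2", "purpose", "My faith gives my studies a direction."),
    ("P1", "ritual", "Praying every morning keeps me grounded."),
    ("P3", "belonging", "The youth group is like a second family."),
]

# Map low-level codes to higher-level themes (levels of meaning).
theme_of_code = {
    "belonging": "community and connection",
    "ritual": "daily practice",
    "purpose": "meaning and direction",
}

# Group excerpts by theme so each theme can be written up with the
# participants' own words as supporting evidence.
excerpts_by_theme = defaultdict(list)
for participant, code, excerpt in coded_excerpts:
    theme = theme_of_code.get(code, "uncategorized")
    excerpts_by_theme[theme].append((participant, excerpt))

for theme, items in excerpts_by_theme.items():
    print(f"Theme: {theme} ({len(items)} excerpt(s))")
    for participant, excerpt in items:
        print(f"  {participant}: {excerpt}")

Organizing the material this way keeps each participant's words attached to the theme they support, which echoes the point above about featuring participants' own words in themes and storylines.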
Limitations/Delimitations/Assumptions
Krauss (2005) explains that the realist paradigm has the unique goal of facilitating the meaning-making process, which is an important learning facilitator with the power to encourage transformative learning. The realist philosophical paradigm attempts to accommodate both quantitative and qualitative research methods. In the area of religion, for example: “the result of the process was a major study that tapped into the richness of individual religious experience, along with a broader understanding of religious behaviors and knowledge levels across large groups of young people” (p. 758). As a whole, the realist paradigm has fewer limitations than either approach used on its own.
Trustworthiness/Validity Considerations
According to the author, realist researchers reject the framework of validity that is commonly accepted in more quantitative research in the social sciences. Nevertheless, realist research inherently assumes that there is some reality that can be observed with greater or less accuracy or validity. In this sense, “rigor in qualitative data analysis is a necessary element for maximizing the potential for generating meaning” (Krauss, 2005, p.765). This rigor provides trustworthiness to the results.
Ethical Issues
Qualitative researchers can operate under different epistemological assumptions from quantitative researchers. Ethical issues can sometimes result in confusion and uncertainty among researchers. In qualitative research, as in quantitative research, researchers are expected to employ high standards of academic rigor and to behave with honesty and integrity. Ethical issues can also emerge from value conflicts. I think that being a ‘purist’ researcher, looking only at one small portion of a reality that cannot be split or unitized “without losing the importance of the whole phenomenon” (Krauss, 2005, p. 767), brings an ethical issue to the research process.
Reflective Assessment
The concept of meaning making in research methods and how meaning is generated from qualitative data analysis are the most important contributions of this paper. The article discusses the philosophical differences between quantitative and qualitative research. Quantitative research is positivist, objective, and scientific; its analysis can be carried out with the statistical software packages commonly used for quantitative (descriptive) data. Qualitative researchers operate under naturalist, constructivist, eclectic, and subjective assumptions (researcher interpretation). Qualitative research is a highly intuitive activity that contributes greatly to the construction of meaning. As researchers, we should focus on “the significance of different levels of meaning such as worldviews or philosophies of life, and the importance of meaning as a critical element to human existence and learning” (Krauss, 2005, p. 767). I think the author makes a good point about the need to use multiple research methods to optimize data collection and analysis; a mixed approach increases both the breadth and depth of data collection and data analysis. The author provides an excellent overview of the basic issues in attempting to work with both quantitative and qualitative research methods toward the goal of generating meaning. Using both methods together contributes to a better understanding of phenomena. I think it is important to use multiple research methods because doing so yields a broader understanding of behaviors and knowledge levels across large groups of people. Different philosophical assumptions, or theoretical paradigms, about the nature of reality are essential to understanding the overall perspective from which a study is designed and carried out. Within this holistic approach, the critical realism framework, qualitative and quantitative methodologies together are appropriate toward the goal of generating meaning and understanding thinking, behavior, and worldview formation. Indeed, the heart of the quantitative-qualitative “debate” is philosophical, not methodological.
Article Review
2. Gephart, R. (1999, November 14). Paradigms and Research Methods. Retrieved May 21, 2009, from Academy of Management, Research Methods Division: http://division.aomonline.org/rm/1999_RMD_Forum_Paradigms_and_Research_Methods.htm
Gephart (1999) explains three prevailing paradigms or views of the world which are currently shaping research: Positivism, Post-positivism or Interpretivism, and Critical Postmodernism. The author introduces the concept of each paradigm, describes the key features of each worldview or “form of scholarship,” the nature of knowledge pursued, and the different means by which knowledge is produced and assessed within each paradigm or worldview. In brief, the three strands of thinking and researching differ in the following way: Positivism assumes an objective world which scientific methods (for example, experimental and survey) can more or less readily represent and measure statistically, seeking to predict and explain causal relations among key variables. Post-positivists or interpretivists assert that these methods “impose a view of the world on subjects;” their concern is the interplay between objective and subjective meanings. Critical postmodernists, on the other hand, argue that these imposed views or measures implicitly “support forms of scientific knowledge that explicitly reproduce capitalist structures and associated hierarchies of inequality” (p. 5). In this sense, the goal is “social transformation involving the displacement of existing structures of domination” (p. 6). The author concludes by saying that these paradigms or theories are somewhat “separate but not greatly distant from one another” (p. 7). In general, the goal of research is an adequate reflection of people’s experience in making inquiries from one or more theoretical frameworks.
Problem/Issue and the Importance/Significance
The need to help readers understand some of the basic assumptions underlying forms of research present in the field: Positivism, Post-positivism or Interpretivism and Critical Postmodernism. The author attempts to make us understand the key features, the usefulness of each paradigm, and how the three paradigms can be interwoven into research.
Research Question(s)
What are the epistemological similarities and differences between positivism, interpretivism, and critical postmodernism? What are the main assumptions, key ideas, theories, figures, goals, criteria, units of analysis, and research methods of each paradigm?
Sample and Sample Selection Process
Sample and sample selection vary according to the specific paradigm. Positivism uses quantitative criteria; interpretivism and critical postmodernism apply qualitative criteria and are therefore more intuitive and flexible; however, rigor and specific principles are essential for good research in any paradigm.
Data Collection Methods
In this article, the author uses grounded theory development and suggests mixing the quantitative and qualitative paradigms. The author explains, however, that the data collection and research methods, goals, criteria, and units of analysis of each paradigm are different: (a) positivist research uses experiments, questionnaires, secondary data analysis, quantitatively coded documents, Likert scaling, and structural equation modeling and, on the qualitative side, grounded theory testing, among others; (b) interpretivism uses ethnography, participant observation, interviews, conversational analysis, and grounded theory development, as well as case studies, conversational and textual analysis, and expansion analysis; (c) critical postmodernism uses field research, historical analysis, and dialectical analysis.
Data Analysis Method
The author explains that the unit of analysis of positivist research is the variable, and the unit of analysis of interpretivism is the meaning. Critical theory-postmodernism uses deconstruction and textual analysis as its units of analysis.
Limitations/Delimitations/Assumptions
Among the limitations and delimitations of each paradigm are the following: positivist research assumes as its goal the uncovering of truth and facts as quantitatively specified relations among variables. Interpretivism/constructivism is a related approach, which is based on analysis and looks for persuasion using ‘sensitizing’ concepts; its goals are to describe meanings, understand members' definitions of the situation, and examine how objective realities are produced. Critical theory-postmodernism aims to investigate and uncover hidden interests, enable a more informed consciousness, displace ideology with scientific insights, and bring about change.
Trustworthiness/Validity Considerations
Criteria of validity vary according to each paradigm. The author explains that positivist research relies on prediction and explanation, rigor, internal and external validity, and reliability. Interpretivism uses trustworthiness and authenticity. Critical theory-postmodernism uses theoretical consistency, historical insights, transcendent interpretations, basis for action, change potential, and mobilization; its units of analysis are contradictions, ‘incidents of exploitation,’ and the sign. In general, realist researchers reject a framework of validity that is tied to just one method of research.
Ethical Issues
Currently there is a reexamination of ethical standards to better protect the rights of research participants within each paradigm. Among the principles are voluntary participation, informed consent, confidentiality, anonymity, and prevention of risk of harm.
Reflective Assessment
Gephart (1999) provides an excellent overview of three important and prevailing paradigms, views of the world, or philosophies of research, namely Positivism, Post-positivism or Interpretivism, and Critical Postmodernism. I think that we might consider each school eclectic in the sense that it chooses the best from all sources. I also think that the three strands are different but can be interwoven, integrated, or mixed into research with the purpose of improving the process of inquiry and its results.
I regard the mixed paradigm as a more reasonable strategy for research. Why? Because by understanding the main differences and weaknesses of each of the current leading theories, and by demonstrating that each one, separately, provides just one view of the world, the promise of impartiality of each can be criticized as illusory. An integrated perspective is a strategically open-minded approach, one that can tolerate different theories and points of view coming from all directions and that helps us better understand research problems and their solutions. Positivism provides a partial view of reality (its inductive view of quantitative data). Interpretivism and critical studies also provide a partial view of life; each is subjective (a deductive view of qualitative data). Given the progressive changes evident in contemporary society, a holistic understanding of phenomena is imperative (viewing each problem from different angles or from an interdisciplinary perspective). In this sense, “toleration of others requires broken confidence in the finality of our own truth” (Tuck, 1988, p. 21). So I think it is important to synthesize into a unity the best components of the various philosophical theories. My idea of being integrative is to retain the scientific commitment but to combine it with the assumptions of qualitative research.
Article Review
3. Mackenzie, N., & Knipe, S. (2006). Research Dilemmas: Paradigms, methods and methodology. Issues in Educational Research. Retrieved May 21, 2009, from http://www.iier.org.au/iier16/mackenzie.html
Mackenzie and Knipe (2006) criticize the perceived dichotomy between qualitative and quantitative research methods in research textbooks and journal articles: “considerable literature supports the use of mixed methods” (p.1). The authors begin with a definition of the leading paradigms in educational research: positivist, post-positivist, interpretive/constructivist, transformative, and pragmatic. They discuss the language commonly associated with each major research paradigm. The focus is on the basic issues in attempting to work with mixed research methods and how “the research paradigm and methodology work together to form a research study” (p.1). To that end, they clarify the difference between methodology and method: “The most common definitions suggest that methodology is the overall approach to research linked to the paradigm or theoretical framework while the method refers to systematic modes, procedures or tools used for collection and analysis of data” (p.4). The authors also clarify the difference between paradigms, methodologies, and the traditional “dichotomy” of quantitative and qualitative research methods and data collection tools. They conclude with a discussion and explanation of how to combine paradigms and methods.
Problem/Issue and the Importance/Significance
The paper examines the features of each paradigm; the authors’ main aim is to “demystify” the role of paradigms in research. It questions the quantitative-qualitative “dichotomy” as a way to teach research methodology: research texts and university courses “can create confusion to undergraduate, graduate and early career researchers” (Mackenzie & Knipe, 2006, p.2). It suggests teaching a combination of quantitative and qualitative methods of research, making use of the most valuable features of each. That is, “research methods in research texts and university courses should include mixed methods and should address the perceived dichotomy between qualitative and quantitative research methodology” (p.1).
Research Questions
Why should qualitative and quantitative methods be combined? How do the research paradigm and methodology work together to form a research study? Is there a difference between methodology and methods? How can paradigms and methods be matched?
Sample/Sample selection process and Data Collection/Analysis Method
According to Mackenzie and Knipe (2006), each research paradigm, framework, or methodology applies a different sample/sample selection process and different data collection/analysis methods. Each overall framework or methodology of research is consistent with the definition of its paradigm and holds unique features specific to its particular approach. For example, the positivist and post-positivist paradigm usually applies experimental, quasi-experimental, and correlational designs, among others. The interpretivist/constructivist paradigm applies naturalistic, phenomenological, hermeneutic, interpretivist, and ethnographic approaches attentive to multiple participant meanings and to social and historical construction. The transformative paradigm applies critical theory, neo-Marxist, feminist, critical race theory, Freirean, and participatory or emancipatory approaches, among others. The pragmatic paradigm applies, among others, approaches that are oriented to the consequences of actions, problem-centered, pluralistic, and political. In a mixed research paradigm the decision-making process does not necessarily follow a linear path; the process is more realistically cyclical, and the researcher can mix methods or make changes as the research progresses.
Limitations/Delimitations/Assumptions
In this article, the authors assume that educational research should be taught within a mixed paradigm. They argue that combining paradigms, methods, and tools is the right way to teach research methodology. Mixed methods is itself a statement of what could be, rather than a groundbreaking notion, especially in the instance of educational research.
Trustworthiness/Validity Considerations
The rejection of reliability and validity in qualitative research has resulted in a shift in how rigor is “ensured,” away from the researcher’s actions during the course of the research. Each researcher's theoretical orientation has implications for every decision made in the research process, which in turn has implications for trustworthiness/validity considerations. The emphasis on strategies implemented during the research process has been replaced by strategies for evaluating trustworthiness and utility once a study is completed.
Ethical Issues
According to the authors, many writers fail to adequately define research terminology and sometimes use terminology in ways that are not compatible with its intent, omitting significant concepts and leaving the reader with only part of the picture. Confusion can be created when authors use different terms with different meanings to discuss paradigms; for instance, methodology and methods are often used interchangeably even though they have different meanings. The authors conclude by stating that mixed methods, like all research approaches, needs to be viewed through a critical lens while at the same time its contribution to the field of research is recognized as valid.
Reflective Assessment
From a philosophical perspective, showing how much reflection it takes to start an investigation, the article discusses different types of research and the language associated with them. The terms qualitative and quantitative refer to data collection methods, analysis, and reporting modes rather than to the theoretical approach to the research, which is the methodology (the overall approach to research linked to the paradigm or theoretical framework); the method, by contrast, refers to the systematic modes, procedures, or tools used for the collection and analysis of data.
This article applies almost directly to our situation as young researchers and is very useful for distinguishing the type of methodology to apply in our own research. It explains the strengths of each leading methodology or theory of research: positivism and post-positivism, interpretivism/constructionism, the transformative paradigm, pragmatism, and a mixed-methods approach, all of which are excellent theoretical frameworks. In a mixed research paradigm the decision-making process does not necessarily follow a linear path; the process is more realistically cyclical, and the researcher can mix methods or make changes as the research progresses. This view applies to constitutional interpretation. This past year I proposed the mixed paradigm in constitutional interpretation at the Annual Graduate Conference at King’s College, University of London:
http://www.iglrc.com/2008website/sessions.html
I am delighted to say that my paper on “Eclecticism in Constitutional Interpretation” was selected for publication in the book Law and Outsiders: Norms, Processes and 'Othering' in the 21st Century, edited by Cian C. Murphy and Penny Green (Oxford: Hart Publishing, 2009).
http://www.hartpub.co.uk/books/search.asp?s=Legal+Theory&st=0&cp=6
4. Article Review
Monte-Sano, C. (2008). Qualities of Historical Writing Instruction: A Comparative Case Study of Two Teachers’ Practices. American Educational Research Journal, 45(4), 1045-1079.
The author analyzed qualitative and quantitative differences in the practices of two high school teachers of U.S. History, examining real-world history and writing instruction over seven months. The analysis included forty-two students’ performances that resulted from these practices rather than from researcher interventions. The author demonstrated that both teachers and students need training in the work of “writing evidence-based historical essays that involves sifting through evidence and constructing an interpretation in writing” (Monte-Sano, 2008, p. 1046). The results show that the teacher who supported students’ development in writing evidence-based historical essays was more successful in fostering students’ growth than the other. The author aims to help high school teachers of U.S. History become better acquainted with the qualities of instruction that help students learn how to read, write, and think historically. She explains that there are different qualities of instruction that support students’ growth in writing evidence-based historical essays: approaching history as evidence-based interpretation; reading historical texts and considering them as interpretations; supporting reading comprehension and historical thinking; asking students to develop interpretations and support them with evidence; and using direct instruction, guided practice, independent practice, and feedback to teach evidence-based writing. According to the author, “the act of writing alone is not sufficient for growth in evidence-based historical writing” (Monte-Sano, 2008, p. 1045).
Problem/Issue and the Importance/Significance
Students and teachers tend to have difficulties integrating documentary evidence into written accounts of past events. The author shows that the above-mentioned qualities of teaching foster growth in evidence-based historical writing. This study is important because history educators still know little about the relationships between teaching and learning with regard to evidence-based writing and reasoning (Monte-Sano, 2008).
Research Question(s)
How do teachers prepare students to write evidence-based historical essays? What messages about history, evidence, and writing do teachers’ practices convey? What opportunities to think and write historically do these teachers provide? How do teachers think about their subject matter, students, and pedagogy? In what ways do teachers’ practices coincide with improvements in students’ evidence-based historical writing?
Sample and sample selection process
Two teachers were selected in two urban high schools in Northern California. One class period per teacher was selected, based on class size, in a U.S. History course. A total of 42 students from these classes participated in pre- and post-assessments of their historical learning. Over seven months, the researcher identified patterns of growth (or lack thereof).
Data Collection Methods
Data were collected from four sources: interviews, observations, feedback, and classroom artifacts (assignments and materials). Interview questions asked teachers their views of students’ progress and needs, and the reasoning behind their instructional decisions. Observations focused on what students did during class, how the teacher represented history, and what opportunities there were to learn evidence-based reasoning, argumentation, and writing. Field notes and data summary charts were completed during and after every observation. Feedback included teachers’ oral assessment of homework and essays.
Data Analysis Method
The author used mixed methods in an embedded multiple-case design that included both teacher and student data. Regarding the teacher data, she organized field notes and interview data chronologically, coded them, and used memos to track key ideas, highlight illustrative excerpts of class, and note what to look for in future observations. The data showed the amount of time each teacher devoted to a particular topic, the agreement in the number of assignments and readings per topic, and the key components of assignments. With respect to the student data, the author measured, through pre- and post-test instruments, how students composed arguments that recognize historical perspectives from multiple documents.
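As a rough sketch of what the quantitative strand of such an embedded design can look like, the Python snippet below compares mean pre- and post-essay scores for two classes. This is not Monte-Sano's actual instrument, rubric, or data; the class names, scale, and every score are invented for illustration only.

from statistics import mean

# Hypothetical rubric scores (e.g., on a 0-6 scale) for each class's
# pre- and post-assessment essays; all values are invented.
scores = {
    "Class A": {"pre": [2, 3, 2, 4, 3], "post": [4, 5, 4, 5, 4]},
    "Class B": {"pre": [3, 2, 3, 3, 2], "post": [3, 3, 4, 3, 3]},
}

for class_name, s in scores.items():
    pre_mean = mean(s["pre"])
    post_mean = mean(s["post"])
    gain = post_mean - pre_mean
    print(f"{class_name}: pre {pre_mean:.2f}, post {post_mean:.2f}, mean gain {gain:+.2f}")

Comparing mean gains in this way only summarizes growth; in a study like this one the scores would come from a rubric applied to actual essays and would be read alongside the qualitative evidence about each teacher's practice.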
Limitations/Delimitations/Assumptions
The small number of teachers and students was one of the limitations. The researcher had to create a matrix of questions and possible answers and ensure that both instruments were appropriate for the age of the participants. Each instrument presented several points of agreement between sources and so allowed for multiple responses to the questions. Each one asked a “why” question that prompted students to make a supporting argument explaining why an action was taken in the past.
Trustworthiness/Validity Considerations
The author created specific instruments to study historical reasoning and writing in history. In terms of content validity, the pre- and post-test instruments were consistent with the following variables: the notion of historical reasoning as analysis of evidence, the use of evidence to construct interpretations of the past, and the communication of arguments in writing. According to the author, the strength of these instruments lay in their ecological validity (Monte-Sano, 2008, p. 1051). Even so, the author noted that contextual changes over the course of multiple administrations of the tests can influence results. For example, the constraints of working in the classrooms led to certain agreement on the essay topics.
Ethical Issues
Comparing two teachers with different approaches (one teacher had students work in groups; the other lectured while students listened and worked independently) “was not entirely fair” (Monte-Sano, 2008, p. 1079); however, the author explains that the comparison is instructive when considering how to develop students’ historical thinking and writing.
Reflective Assessment
The report is a comparative case study of teaching, and it uses student performance as a backdrop for claims of teaching effectiveness. The target was to examine two teachers’ practices with regard to the learning outcome of writing evidence-based essays. The strength of the article lies in four main aspects: (1) the notions of historical reasoning as analysis of evidence, the use of evidence to construct interpretations of the past, and the communication of arguments in writing; (2) the list of qualities of instruction that support students’ growth in writing evidence-based historical essays; (3) the list of questions that teachers of history can ask of research; and (4) the use of multiple research methods to optimize the data collection and analysis process. The results show the usefulness of qualitative and quantitative comparisons of students’ work in determining how each class improves in writing evidence-based history essays.
Traditionally, teachers and students tend to view history as established fact (the literal meaning of documents), not as analysis or interpretation. Monte-Sano shows that there are creative ways in which teachers implement approaches to history writing. These entail synthesizing and organizing information to suit the writer’s purposes, setting problem-based writing tasks, encouraging historical thinking, and transforming knowledge already in the mind.
This article is an excellent example of how to work with both quantitative and qualitative research methods toward the goal of generating a new way to teach and learn history writing. To explore further, I read other articles and books that expand on the basic theme, and I came to the conclusion that teachers who embark on such a study of history must feel the passion of teaching and be prepared to devote time and energy to the endeavor. In my own future work, I will be teaching legal history in Venezuela through court cases, using high court precedents. I think this approach affords creative alternatives such as reading historical texts and considering them as interpretations, and asking students to develop interpretations and support them with evidence.
Wednesday, 6 May 2009
Quantitative and Qualitative Research by Nelson Dordelly-Rosales
Reference # 1: Krauss, S. E. (2005). Research Paradigms and Meaning Making: A Primer. The Qualitative Report, 10(4), 758-770.
In this article Krauss offers an overview of the epistemological differences between quantitative and qualitative research methodologies and proposes the realist philosophical paradigm. The realist paradigm is discussed as a “middle ground” between the poles of positivism and constructivism. Within a critical realism framework, both qualitative and quantitative methodologies are seen as appropriate. Krauss used mixed methods (illustrated through examples) toward the goal of generating meaning. The article introduces the notion of ‘meaning making’ in research methods within the social sciences and looks at how it actually occurs through qualitative data analysis. The task of constructing meaning through qualitative data analysis is described through a variety of perspectives and approaches. Overall, the article provides an introduction to some of the basic issues in attempting to work with both quantitative and qualitative research methods, and explains how qualitative data analysis can be used to organize and categorize different levels and forms of meaning.
Reference # 2: Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed Methods Research: A Research Paradigm Whose Time Has Come. Educational Researcher, 33(7), 14–26.
The authors provide arguments against the polarization between quantitative and qualitative research, which they regard as a “fallacious” dichotomy. The “paradigm wars” debate “is not meaningful or productive for education research.” It distorts the conception of education and has serious implications for the quality of present educational research practice. The “subjective” versus the “objective” is the wrong question. Consequently, the authors propose an “integrated approach to education research inquiry.” How can we integrate both paradigms? Researchers should focus on the construction of “good research questions and conducting of good research.” The questions asked should determine “the modes of inquiry that are used to answer them.”
I certainly believe the authors make a good enough case to suggest integrated methods. Indeed, phenomena are quantitative and qualitative at the same time; researchers follow similar interpretation processes for all educational research. I think integration is the authentic paradigm. We may accomplish “the integration of different modes of inquiry by collaborating with researchers with expertise.”
Comparison/contrast: there are similarities as well as differences between article 1 and article 2. Both agree that there is a third option for conducting good research; neither the “purist” stance nor the “fallacious dichotomy” helps in guiding productive research. Both articles argue that educational research practice should focus not on just one paradigm but on the mix or integration of both paradigms. As for the differences, the first article focuses on the commonalities and proposes mixing the methods; the second focuses on the strengths of both paradigms and suggests their integration. I think the second complements the first. Thanks!
Thursday, 5 March 2009
Educational Research - Analysis of Research Journals by Nelson Dordelly-Rosales
The idea is to promote educational research and its practices.
What is educational research? It refers to research conducted to investigate behavioral, social and strategic patterns in students, teachers and other participants in schools and other educational institutions.
Suggested Textbook:
Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8th ed.). Toronto, ON: Allyn & Bacon.
Research Journals:
In Education there is particular value in the journals published by the American Educational Research Association including, but not limited to, the American Educational Research Journal, Review of Educational Research, and Educational Researcher. Examples of Canadian educational research journals include The Canadian Journal of Education, The Alberta Journal of Educational Research, Curriculum Inquiry, and The Canadian Journal of School Psychology.
Example of educational research:
Book Review by Nelson Dordelly-Rosales
Kathleen M. Iverson, E-Learning Games: Interactive Learning Strategies for Digital Delivery (NJ: Pearson Prentice Hall, 2005)
This book is about (main discussion)
• Classes of interaction: learner-interface interaction, learner-content interaction, learner-facilitator interaction, learner-learner interactions.
• Constructivist E-learning design steps: (1) identify course goals and objectives, (2) assess learner pre-knowledge and characteristics (use the appropriate language, consider learner preparation, adjust course pace, provide additional support, assess pre-training environment and learner motivation, assess available technology, consider learner’s capability of working in virtual teams or groups), (3) build motivational elements, (4) select a grounded instructional strategy (Gagne’s nine events of instruction), (5) define events, (6) select appropriate technological delivery tools (asynchronous delivery, synchronous delivery, delivery media) and interactive approach(es).
• Use of e-learning “session openers” to make a positive first impression and set course expectations, and to facilitate confidence in using new technology. Examples of icebreakers: the use of the personal blog, talk about each learner’s particular area of expertise or about favourite picture, sports, songs, movies, etc.
• Use of “scenario-based” e-learning, which consists of a highly engaging, authentic learning environment that allows trainees to solve authentic, work-based problems collaboratively anytime, anywhere. It involves key role play, including case studies, problem-based learning, and goal-based scenarios; i.e., our course 874.
• Use of “peer learning” support: belonging to a network, or community of learners is vital in a virtual environment. Opportunities for connection must be embedded in the course design to overcome the feelings of loneliness, i.e., working in pairs.
• Use of “content review and practice” to engage learners in higher order thinking tasks or in doing things and thinking about what they are doing, such as analysis, synthesis and evaluation, interpretation, problem solving, enhancing affective area, i.e., multimedia scrapbook, virtual field trip, webquests, and blog.
• Use of “group discussions” to explore issues and topics relating to the course content, express opinions, draw upon prior knowledge and construct new knowledge; i.e., the jigsaw (online chat, e-mail, board), the projector and screen, the fishbowl, etc.
• Use of “idea generation” or brainstorming to quickly develop and communicate new ideas for problem development, process revision, and problem resolution; i.e., the top-ten lists, defining excellence as it relates to the topic under study, etc.
• Use of “closers,” a bit of ceremony at the end that allows learners to revisit the course, record their ideas, and provide a link to the workplace; i.e., websites or webpages with a guest book, e-mail check-up, virtual reunion, etc.
____________________________________________________________________
The author argues that:
• Until recently, most interaction in web-based training environments was technologically driven. Intelligent tutors, video, audio, and animated graphics were the accepted vehicles for adding interest and excitement to otherwise bland and boring script-based training. Although these advances are valuable, they come with a price in both development time and dollars.
• E-Learning Games contains ideas and practices that will add excitement to courseware without considerable expenditure of resources. Relying primarily on low-tech vehicles such as synchronous and asynchronous chat, e-mail, and instant messaging, the activities described in this textbook can be implemented in web-based training and educational courses alike.
______________________________________________________________________
The author makes the following statements or cites the following references in support of her argument (2-3 quotes):
• What exactly is interaction in e-learning? Interaction is an interplay and exchange in which individuals and groups influence each other. Thus, “interaction is when there are reciprocal events requiring two objects and two actions.” (G. Moore, “Three Types of Interaction,” The American Journal of Distance Education 3 (1989):6)
• Our role as instructional designers is to move from merely sequencing material to creating highly interactive online environments in which constructivist learning may occur, by creating rich contexts, authentic tasks, collaboration, an abundance of tools to enhance communication and access to real-world examples and problem solving, and mentoring relationships to guide learning. (T. Duffy & D. Jonassen, Constructivism and the Technology of Instruction: A Conversation (Hillsdale, NJ: Lawrence Erlbaum Associates, 1996), p. 67)
______________________________________________________________________
The author concludes that:
• It is much more effective to place learners in groups where they receive guidance on how to use web resources to explore the topic, discuss their findings with others, work together to locate answers, create their own model of motivation, and receive feedback and further guidance from the facilitator. “Building ties to highly connected, central other is more efficient than links to peripheral others who are not well connected” (Iverson, 2005, p. 187).
• The author includes a long list of software resources that facilitate the delivery of some activities included in the book: virtual greeting cards, weblog hosting, desktop collaboration, MOOs, visual diagramming, digital photo albums, storyboarding, multimedia scrapbooks, virtual field trips, guest books, virtual meetings, and miscellaneous free software trials (Iverson, 2005, pp. 175-178).
• The following strategies are useful in e-learning for digital delivery: (1) use the e-learning design checklist (pp. 179-180), (2) use a checklist to adapt and create e-learning games that fit the needs of learners (model on pp. 181-183), and (3) use a variety of examples of learning activities, such as the ones provided in the book and in Addendum D (pp. 185-188).
Article Review by Nelson Dordelly-Rosales
Janice Redish and Dana Chisnell (2004). “Designing Web Sites for Older Adults: A Review of Recent Research.” AARP, Washington, D.C. 67 pages. Online:
http://search.yahoo.com/search?p=book+designing+web+sites+instructional+design&fr=yfp-t-501&toggle=1&cop=mss&ei=UTF-8&fp_ip=CA&vc=
_________________________________
This article is about (main discussion)
• A review of recent, relevant research about Web site design and older adults or users. From the research reviewed in this article, the authors developed a set of heuristics to use in person-based, task-based reviews of 50 sites that older adult users are likely to visit.
• It concentrates on research from the disciplines of interaction and navigation, information architecture, presentation or visual design, and information design. The article first discusses issues such as who is an “older adult,” what factors besides age must be considered, how these factors have been used in research studies, and what must be kept in mind about older adults. It then deals with interaction design (designing the way users work with the site), information architecture (organizing the content), visual design (designing the pages), and information design (writing and formatting the content), and finally explains how to conduct research and usability studies with older adults.
• The authors conducted this literature review to (a) better understand the “older adult” audience, (b) identify common usability and design issues specific to older Web users, (c) provide guidance to designers and developers of any Web site or Web-based application who have older adults in their audiences, and (d) add information about e-commerce Web sites and Web transactions to AARP’s Older Wiser Wired (OWW) Web site (www.aarp.org/olderwiserwired).
_________________________________________________________________
The authors argue that:
• Older adults are more diverse than younger people are. Within this group, older adults have different experiences and different needs, habits, thoughts, and beliefs. Because of this diversity, it is extremely difficult to generalize performance, behaviours, and preferences across the millions of people in this group. Some older adults take technology for granted, but for others using the Web is new territory. People in their 50s and 60s are more likely to have used computers at work. But many older adults – even those who are middle aged – are learning to use computers and the Web on their own.
• The authors propose a new tool that could be used by Web design teams to help them make decisions about where their users fall along these dimensions and thus how best to serve their audiences. The authors’ approach looks at the four factors: (a) age: including chronological age, but taking into account life experiences (b) ability: cognitive and physical (c) aptitude: expertise with the technology (d) attitude: confidence levels and emotional state of mind.
• The implications are that those attributes can be used to judge the need for support and training and the level of complexity of features and functions that different users can be expected to handle. That is, increased age is likely to require less complexity, but increased aptitude allows for more complexity. Higher ability (that is, physical and mental fitness) allows for more complexity, and higher ability is likely also to correlate with lower age.
• “User experience” seems to include these qualities: a clear understanding by the site designers and content providers of who the users are (including demographics, domain knowledge, technical expertise, and frame of mind) and why they come to the Web site (tasks, triggers, and motivations); plain and immediate communication of the purpose and scope of the Web site (as shown through the visual design, information architecture, and interaction design); and compelling, usable, desirable, useful, and possibly delightful content (including tone, style, and depth of content).
______________________________________________________________________
The authors make the following statements or cite the following references in support of their argument (2-3 quotes):
• It takes many roles to design a Web site for older adults. DUX, a conference organized by a convergence of professional organizations, suggests that all of these roles (and probably more) contribute to designing the user experience. The authors suggest viewing the following site: www.dux2005.org
• The authors suggest viewing Interaction Design Group at http://interactiondesigners.com. Interaction design is “defining the complex dialogues that occur between people and interactive devices of many types— from computers to mobile communications devices to appliances.” Humans and technology act on each other. In the case of Web sites, interaction design determines how a Web site behaves. This behaviour manifests as navigation elements: scrolling, links, buttons, and other widgets, along with how they are placed on a page, what their relationships are to each other on the page, and how easily users can recognize the elements and what the elements will do for them.
• Older participants were very likely to include widgets that were obviously clickable and visually looked like buttons (Chadwick-Dias, Ann with Michelle McNulty and Tom Tullis. “Web usability and age: How design changes can improve performance.” Conference paper, ACM SIGCAPH Computers and the Physically Handicapped, Proceedings of the 2003 conference on universal usability, Issue 73-74).
• The authors quoted 57 references. Among them: Bailey, Koyani, et al. (Bailey, Bob with Sanjay Koyani, Michael Ahmadi, Marcia Changkit, and Kim Harley (NCI). “Older Users and the Web.” Article, Usability University July 2004; jointly sponsored by GSA, HHS and AARP) that found that older users tended to get lost on Web sites much more quickly than younger users “because they were penalized much more by poor labels and headers than were the younger users” and seemed less able to recover from these types of selection mistakes. Because their research shows that Web users skim or scan pages and are attracted to visual elements such as links, Theofanos and Redish suggest using highly descriptive link labels, ensuring that a link will be understandable and useful on its own. They also suggest starting links with relevant keywords and avoiding multiple links that start with the same words. This should help all types of users, not only those who use screen readers or talking versions of Web sites. Theofanos, Mary and Janice Redish. “Guidelines for accessible and usable websites: Observing users who work with screen readers.” Article, Interactions, X (6), November- December 2003, pp 38-51. ACM, the Association for Computing Machinery.
______________________________________________________________
The authors conclude that:
Further research is needed to assess the relative importance of the different dimensions in designing Web sites. Older adults exhibit different usage behaviours, and designers should keep in mind that many older adults have cognitive and other medical limitations.
________________________________________________________________
PROGRAM EVALUATION
Modules:
Module 1 September 5 in Room 2001 at the College of Education. It will focus on the basics of program evaluation.
Module 2 September 26 in Room 2001 at the College of Education. It will cover the specific techniques involved in conducting an evaluation; pre-planning, logic models, and resources will be discussed.
Module 3 October 17 in Room 2001 at the College of Education. This module will focus on data collection and analysis. The understanding and application of focus groups and online survey techniques will also be addressed.
Module 4 November 21 in Room 2001 at the College of Education. It will deal with ethics in evaluation and a review of the final project in the course.
Assignments
One: Choose a completed evaluation; any kind, your choice. Explain the model or process used in the evaluation and identify in your mind the strengths and weaknesses of the evaluation and the approach that was taken. The finished piece should be 500 words in length. You will share your ideas on your blog.
Due Sept 12 Value - 10 marks
Two: A simulated program case study will be e-mailed to you. You will choose a model or approach that you feel is appropriate to evaluate this program and explain why you think it would work. This will be a one-page document that you will post on your blog.
Due Sept 19 Value – 10 Marks
Three: Using your test organization or program you will perform an evaluation assessment. This step is used to determine the feasibility and direction of your evaluation. You will post your assessment on your blog.
Due October 15 Value - 10 marks
Four: Objectives: to become familiar with logic models as a method for understanding the workings of an organization. To map out and get a thorough overview of your chosen organization or program, you need to create a logic model. It can be in the form of a flow chart or any of the other models we have reviewed in the course. The assignment will consist of a logic model (generally a single page) and a description of the model. This will also be posted on your blog. It is due October 15. Value - 10 marks
Assignment Five: You will design and test a short survey. Include a variety of question types such as scale rating, short answer, and open-ended. You will submit the original version and the modified version based on the testing of the survey with four individuals. You will post your information on your blog.
This assignment is due on November 20. Value - 10 marks
Major assignment: Evaluation Plan
Objective: To demonstrate the ability to integrate the different tools and theories addressed in the class into an evaluation plan.
You will design an evaluation plan for the organization or program of your choice. Your final assignment will be a culmination of all we have done in the course. The plan will be a theoretical paper that outlines the program to be evaluated and the goals or objectives to be evaluated. It will demonstrate your ability to analyze a program, determine a suitable evaluation plan and create the instruments you would use to conduct the analysis. Essentially the purpose of an evaluation plan is to convince someone that you should be the evaluator for the evaluation. Hence, you want to convince an agency/institution/individual that you have the “best” team to perform the evaluation. So, an important piece of the evaluation plan is for you to describe, or elaborate upon, your reasons for selecting particular foci and approaches. We will address the specifics of this plan later in the course.
Due December 11, 2009. Value - 50 marks
Introduction to Program Evaluation
Course Description:
This course examines current models for the evaluation of educational programs. The emphasis is on exploring the range of options that is available to the program evaluator and on developing an awareness of the strengths and limitations of the models and techniques. Problems in carrying out educational evaluations are also studied: examples of such problems are the utilization of evaluation results and the ethics of evaluation. The course will use the Blackboard learning management system. You can access the course material by logging into http://webct6.usask.ca. Students will be required to create and maintain a blog to share their experiences and assignments with the others in the class (We will review suitable blog choices on the first class day).
Class Times, Appointments and Office Hours
This course will be taught in modules. If you are unable to attend any of the modules you will be able to join via the Internet using a program called Elluminate. Please contact the instructor for details.
The first module will be held on September 5 in Room 2001 at the College of Education. It will focus on the basics of program evaluation.
The second module will be held on September 26 in Room 2001 at the College of Education. It will cover the specific techniques involved in conducting an evaluation; pre-planning, logic models, and resources will be discussed.
The third module will be held on October 17 in Room 2001 at the College of Education. This module will focus on data collection and analysis. The understanding and application of focus groups and online survey techniques will also be addressed.
The fourth and final module will be held on November 21 in Room 2001 at the College of Education and will deal with ethics in evaluation and a review of the final project in the course.
I will be available to see you at any time by appointment. I will always be available to you through e-mail without an appointment.
Text: The course will not have a required textbook. If you wish to supplement the resources I have offered you in the course, Owen and Rogers's book or McDavid and Hawthorne's text would be useful additions to your professional library.
http://www.amazon.ca/Program-Evaluation-Approaches-John-Owen/dp/076196178X
Program Evaluation and Performance Measurement: An Introduction to Practice (McDavid and Hawthorne, 2005)
Course Objectives
• To define and understand “What is program evaluation?”
• To understand the historical foundations of program evaluation.
• To identify and develop appropriate evaluation assessment techniques used in educational and other program settings.
• To understand appropriate data gathering techniques for evaluation purposes.
• To demonstrate the ability to create data gathering instruments.
• To understand the process and procedures involved in data analysis.
• To understand the unique roles and responsibilities of the various members of an evaluation team.
• To become aware of the ethical responsibilities of evaluators and the political implications of evaluations.
• To prepare for learning in a variety of authentic situations.
http://www.schoolofed.nova.edu/arc/research_courses/sylpep.pdf
http://www.epa.gov/evaluate/whatis.htm
www.epa.gov/evaluate/whatis.pdf
http://cde.athabascau.ca/syllabi/mdde617.php
www.gsociology.icaap.org/methods/evaluationbeginnersguide.pdf
www.ocde.k12.ca.us/downloads/assessment/WHAT_IS_Program_Evaluation.pdf
www.en.wikipedia.org/wiki/Program_evaluation
Assignments
Assignment One: Choose a completed evaluation; any kind, your choice. Explain the model or process used in the evaluation and identify in your mind the strengths and weaknesses of the evaluation and the approach that was taken. The finished piece should be 500 words in length. You will share your ideas on your blog.
Due Sept 12.
Value - 10 marks
Assignment Two: I will e-mail you a simulated program case study. You will choose a model or approach that you feel is appropriate to evaluate this program and explain why you think it would work. This will be a one-page document that you will post on your blog.
Due Sept 19.
Value – 10 Marks
Assignment Three:
Using your test organization or program you will perform an evaluation assessment. This step is used to determine the feasibility and direction of your evaluation. You will post your assessment on your blog.
Due October 15.
Value - 10 marks
Assignment Four:
Objectives: to become familiar with logic models as a method for understanding the workings of an organization.
To map out and get a thorough overview of your chosen organization or program, you need to create a logic model. It can be in the form of a flow chart or any of the other models we have reviewed in the course. The assignment will consist of a logic model (generally a single page) and a description of the model. This will also be posted on your blog. It is due October 15. Value - 10 marks
Assignment Five:
You will design and test a short survey. Include a variety of question types such as scale rating, short answer, and open-ended. You will submit the original version and the modified version based on the testing of the survey with four individuals. You will post your information on your blog.
This assignment is due on November 20. Value - 10 marks
Major assignment: Evaluation Plan
Objective: To demonstrate the ability to integrate the different tools and theories addressed in the class into an evaluation plan.
You will design an evaluation plan for the organization or program of your choice. Your final assignment will be a culmination of all we have done in the course. The plan will be a theoretical paper that outlines the program to be evaluated and the goals or objectives to be evaluated. It will demonstrate your ability to analyze a program, determine a suitable evaluation plan and create the instruments you would use to conduct the analysis. Essentially the purpose of an evaluation plan is to convince someone that you should be the evaluator for the evaluation. Hence, you want to convince an agency/institution/individual that you have the “best” team to perform the evaluation. So, an important piece of the evaluation plan is for you to describe, or elaborate upon, your reasons for selecting particular foci and approaches. We will address the specifics of this plan later in the course.
Due December 11, 2009. Value - 50 marks
Module 1 What is evaluation?
What is Program Evaluation?
Please review the following material before we meet on Sept 5. It will give you a grounding in the concepts behind program evaluation.
http://www.managementhelp.org/evaluatn/fnl_eval.htm
http://pathwayscourses.samhsa.gov/eval101/eval101_toc.htm
This module is intended to introduce you to the concepts of Program Evaluation. It is not program or content specific. It does not matter in what area you are most knowledgeable; PE is a tool that you can apply to generate a better understanding of what is happening. The program evaluation approach you choose may be based on your personal approach to a situation, or the situation itself may point to a particular method. There are a number of approaches that will fit any given setting. Most program evaluations are short term: they are a snapshot of what is happening at a particular point in time. Longitudinal evaluations are difficult to conduct as they are more time consuming and costly. Essentially you are trying to answer the question, "Does the program do what it says it does?" Because evaluation is on-going, your evaluation may steer your client in a particular direction, and it will also be used to inform the next evaluation.
PE is essentially research into an organization, program, or process.
As you will learn when we study logic modelling, four aspects of evaluation may include:
1. Input
2. Output
3. Outcome
4. Impact
The Canadian government views evaluation as:
1. Planning
2. Evaluating
3. Recommending
You will develop your own approach to evaluation. It may be based on an existing model or on a combination of different factors that suit the type of evaluator you are and the situation you are involved in. The following section introduces you to some of the formalized approaches to evaluation.
Major theoretical concepts behind Program Evaluation
Evaluations can be formative, intended to provide feedback on the modification of an on-going program, or summative, designed to determine if a process or program was effective, not necessarily to change it. Here is a comparison of the two approaches.
http://jan.ucc.nau.edu/edtech/etc667/proposal/evaluation/summative_vs._formative.htm
Many models have been developed by those who have studied PE over the years. A quick overview of the major models and the theorists who developed them is presented in this pdf document by Michael Scriven, one of the leading academics in the area of program evaluation. It is important to understand that a variety of models exist and that program evaluation has evolved in much the same way that research models in general have changed. Some of the more well-known models are CIPP, Discrepancy, Adversary, goal-free, and transactional. Here is an overview of the history and the major theoretical models in program evaluation.
Resources
809 Delicious account: http://delicious.com/wi11y0/809
Canadian Evaluation Society
http://www.evaluationcanada.ca/site.cgi?s=1
American evaluation association
http://www.eval.org/
Helpful textbooks
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R.(2004). Program evaluation: Alternative approaches and practical guidelines. White Plains, NY: Longman.
Owen, J. M., & Rogers, P. J. (1999). Program evaluation: Forms and approaches. Thousand Oaks, CA: Sage.
Posovac, E., & Carey, R. (2003). Program evaluation: Methods and case studies (6th ed.). New Jersey: Prentice Hall.
ISBN #: 0130409669
Evaluation cookbook
http://www.icbl.hw.ac.uk/ltdi/cookbook/
Assignments for this module
1. Choose a completed evaluation; any kind, your choice. Determine the model used and identify in your mind the strengths and weaknesses of the evaluation and the approach that was taken. The finished piece should be 500 words in length. You will share your ideas on your blog. Due Sept 12
2. I will e-mail you a simulated program case study. You will choose a model or approach that you feel is appropriate to evaluate this program and explain why you think it would work. This will be a one page document that you will post on your blog. Due Sept 19
Module 2 - The process of evaluation
Before you conduct an evaluation you need to have as complete of an understanding of the focus of your evaluation as possible. You need to learn all that you can about the program, the purpose and the people that you will be working with. This means generating a thorough understanding of the organization that is connected to your evaluation. A good place to start is with any previous evaluations. This information will let you know how the organization has dealt with evaluations in the past and may help you determine if there is a willingness to put into practice the results of a study.
The following resource is a systematic look at the steps that are involved in an evaluation.
http://www.uwex.edu/ces/pdande/evaluation/index.html
Designing Evaluations : http://www.wmich.edu/evalctr/jc/DesigningEval.htm
Pdf version from the University of Wisconsin
Here is the checklist to get you through the process as a Word file.
A next step is to design a flow chart or a model of the organization you are working with that shows how the organization operates and how what you are evaluating fits into the big picture. This is done to cast a wide net, to see where you will look for input, and to determine who will be affected by the outcomes of your evaluation. This can be done with a flow chart or what is known as a logic model. Logic models give a thorough breakdown of an organization.
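To make this concrete before you follow the links below, here is a minimal, purely illustrative sketch in Python (the program, stages, and entries are hypothetical, not drawn from any real evaluation) of how a simple logic model could be captured as structured data before you draw it as a flow chart:

```python
# A hypothetical logic model for an imaginary after-school tutoring program,
# recorded as plain data so it can be reviewed with a client and later drawn
# as a flow chart. The stage names follow the common input/activity/output/
# outcome/impact breakdown discussed in Module 1.
logic_model = {
    "inputs": ["volunteer tutors", "classroom space", "annual grant funding"],
    "activities": ["weekly tutoring sessions", "tutor training workshops"],
    "outputs": ["number of sessions delivered", "number of students served"],
    "outcomes": ["improved homework completion", "higher reading scores"],
    "impact": ["stronger school completion rates in the community"],
}

def print_logic_model(model):
    """Print each stage of the model in order, one item per line."""
    for stage in ["inputs", "activities", "outputs", "outcomes", "impact"]:
        print(stage.upper())
        for item in model.get(stage, []):
            print(f"  - {item}")

print_logic_model(logic_model)
```

Keeping the model as plain data like this makes it easy to revise with the client before committing to a polished diagram.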
Follow this link to learn about logic models
http://www.tbs-sct.gc.ca/eval/tools_outils/RBM_GAR_cour/Bas/module_02/module_0201_e.asp
http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html
Here is a helpful checklist for preparing to begin your evaluation
http://www.managementhelp.org/evaluatn/chklist.htm
Working with your clients
It is important for those you are working with to understand what you will and will not do. They must also understand what is needed from them and their organization. This is where the art and science meet. You will need to carefully judge the political climate and the willingness of the organization to actually change. The case may be that the higher-ups in an organization are implementing an evaluation without the support of the members of the organization. You may be seen as a threat and it may make sense for you to spend time working on the relationship component of the evaluation.
Assignments for this module
Case study
You will need to select an organization or program to use as a model for the rest of the course. It can be an educational program, a government program, or a particular organization that has a specific mandate. It may be beneficial to choose an organization in your local community so that you can access individuals for input into your course work. Once you have decided on who you would like to use, please e-mail me your choice and why you chose the program or organization.
Assignment #3
Using your test organization or program you will perform an evaluation assessment. This step is used to determine the feasibility and direction of your evaluation. You will post your assessment on your blog. Due October 15.
Assignment #4
To map out and get a thorough overview of your chosen organization or program, you need to create a logic model. It can be in the form of a flow chart or any of the other models we have reviewed in the course. This will also be posted on your blog. It is due October 15.
Module 3 - Gathering and evaluating data
You will now be confident that you can proceed with the evaluation based on the results of your evaluation assessment. At this point you will need to create a set of instruments to generate data that will answer your questions about the chosen program or organization.
Designing your evaluation
Once you have done the preliminary work with the client and the focus of the evaluation, you need to develop the measures and instruments that you will use to answer your questions. This means choosing the format and type, and then testing the instruments to ensure that they will work properly. Here is an overview of some of the different options you have for gathering data. You may want to begin by looking at any information that has already been gathered by an organization. This may be survey data, graduation rates, or financial records. You will likely create a survey of the major stakeholders or interview them individually or in a focus group. This file will give you a good grounding in designing surveys and working with focus groups.
Creating surveys
A survey is a common way to generate data from stakeholders, employees, and clients connected to a program or policy. Having clear, well-written questions presented in a variety of formats will go a long way toward generating reliable data. You can use existing surveys and modify them to work with the specifics of your particular evaluation. Here is a sample survey for you to review. Traditionally this has been done using a paper form. This has worked well, but there is now the option of using the Internet. Using the Internet allows for data to be in a format that can be more easily collected and analyzed. The U of S has an online survey tool available for you to use. It can be accessed at http://www.usask.ca/its/services/websurvey_tool/
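As a rough, hypothetical illustration of mixing question types (these questions and field names are invented; this is not the sample survey or the U of S tool mentioned above), a short survey can be drafted as structured data and then rendered for paper-based pilot testing:

```python
# A hypothetical draft survey mixing the three question types suggested for
# Assignment Five: scale rating, short answer, and open-ended.
survey = [
    {"type": "scale", "text": "The program met my needs.", "scale": (1, 5)},
    {"type": "scale", "text": "Staff responded promptly to my questions.", "scale": (1, 5)},
    {"type": "short_answer", "text": "How did you first hear about the program?"},
    {"type": "open_ended", "text": "What one change would most improve the program?"},
]

def render_plain_text(questions):
    """Render the draft as numbered plain text for a paper pilot."""
    lines = []
    for number, q in enumerate(questions, start=1):
        if q["type"] == "scale":
            low, high = q["scale"]
            lines.append(f"{number}. {q['text']} (circle one: {low} to {high})")
        else:
            lines.append(f"{number}. {q['text']}")
            lines.append("   _____________________________________________")
    return "\n".join(lines)

print(render_plain_text(survey))
```

Drafting the questions in one place like this also makes it easy to produce both the original and the revised version you will submit after testing.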
Here are two other useful resources for creating surveys.
Creating a paper survey
Getting better results from online surveys
Focus groups
Focus groups allow you to meet with many people at once to discuss issues and collectively generate data. They will allow you to get homogeneous or mixed groups to share and feed off one another.
Here is an example of a document used to organize and conduct a focus group.
Validity of your instruments
Having confidence in your data gathering instruments is very important. You cannot have any useful results if they are based on flawed data. This is why evaluators will often use instruments that have been used and tested by others. If possible, taking an existing survey and modifying it slightly to fit your client's needs will give you peace of mind and will be a better measure of what you are trying to assess. If you are designing a survey from scratch you need to make sure that what you are asking and how you are asking it is correct. This means sharing your instrument with others in the know or with experts in measurement. Pilot testing and usability testing your survey with a group similar to the one you will be surveying is also very important.
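One common quantitative check after pilot testing a multi-item rating scale is internal-consistency reliability, often reported as Cronbach's alpha. The sketch below uses invented pilot responses and is only meant to show the arithmetic; it is not a substitute for expert review of the items:

```python
# Cronbach's alpha on hypothetical pilot data: each row is one respondent,
# each column is one item from the same 1-5 rating scale.
pilot_responses = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
]

def variance(values):
    """Population variance of a list of numbers."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])                 # number of items
    items = list(zip(*rows))         # transpose: one tuple of scores per item
    item_variances = sum(variance(item) for item in items)
    total_variance = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Values around 0.7 or higher are often read as acceptable consistency,
# though interpretation still depends on the purpose of the instrument.
print(round(cronbach_alpha(pilot_responses), 2))
```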
Once you are confident that your instruments are valid and reliable then you can gather your data.
Data analysis
Results must be shared for your evaluation to be of use to anyone. You should make recommendations to those who offer the program. This cannot be done without a careful analysis of the data that you have collected. Once you have gathered enough data you will have to compile the results and compare them with the original objectives. This link gives you some insights into the process of data analysis http://www.uwex.edu/ces/tobaccoeval/resources/surveynotes28aug2001.html#defs
http://hsc.uwe.ac.uk/dataanalysis/qualTextDataEx.asp
(qualitative analysis)
Here is an example of an interview transcript that has been analyzed. Read through it and then test your own skills with what the researcher discovered. http://hsc.uwe.ac.uk/dataanalysis/qualTextDataEx.asp
From the same website here is a look at quantitative data analysis. http://hsc.uwe.ac.uk/dataanalysis/quantWhat.asp
Don't be scared, you will not have to become an expert in this type of analysis (At least not for this class).
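At the level this course expects, the quantitative side can be as simple as summarizing a scale question and checking it against a stated objective. The ratings and the target below are invented purely for illustration:

```python
from statistics import mean, median, stdev

# Hypothetical responses to one 1-5 scale question, plus a hypothetical
# objective taken from the program's own goals.
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
objective_text = "At least 70% of participants rate the program 4 or higher."
target_percent = 70

summary = {
    "n": len(ratings),
    "mean": round(mean(ratings), 2),
    "median": median(ratings),
    "std_dev": round(stdev(ratings), 2),
    "percent_4_or_higher": round(100 * sum(r >= 4 for r in ratings) / len(ratings)),
}

print(summary)
print(objective_text)
print("Objective met:", summary["percent_4_or_higher"] >= target_percent)
```

Laying the result next to the program's own objective is what turns a descriptive summary into an evaluative statement you can bring back to the client.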
Assignment #5
You will design and test a short survey. I have included an example for you to use as a guide. Include a variety of question types such as scale rating, short answer, and open-ended. You will submit the original version and the modified version based on testing the survey with a group of four different individuals. You will post your information on your blog. This assignment is due on November 16, 2009.
Module 4 - Ethics of evaluation
It is important that, as an evaluator, you be objective. Your primary purpose is to serve the needs of your client. That being said, you must design and conduct your evaluation with the needs and protection of all those impacted by your results in mind. There is often fear associated with the evaluation of one's performance. This is especially true when the evaluator is someone who is coming from the outside and does not have the chance to take a longitudinal look at programs or organizations.
Program evaluation standards are put forth by the American Evaluation Association to guide evaluators in their conduct.
This PowerPoint presentation looks at the guiding principles of evaluation.
Sharing the Evaluation Results ( I found this and modified it for our class). http://www.busreslab.com/ESATsharingresults.htm
It is critical to share results in a timely manner for at least two reasons:
1. Everyone must know where the organization as a whole and their individual areas stand if you are going to fully leverage the creativity and efforts of the employee base.
2. Participants need to know that the time they spent in completing the survey was worthwhile.
Each organization has its own information-sharing culture in place. In some cases, particularly if the survey showed communication to be a problem, the process will need some adjustment; still, each organization has an approach to information dissemination that it typically leverages, so modifications to our recommended approach may be in order to account for that culture.
The Basic Principles of Sharing Survey Results
1. Be honest. An organization must be willing to share both its strengths and its areas in need of improvement. Employees will see through attempts to hide or "spin" information.
2. Be timely. The sooner you release results, the sooner the organization can begin to move toward positive change.
3. Share appropriate information at each level. Senior management will need encapsulated results and access to detailed results for the organization as a whole and differences between divisions/departments. Division managers will need to know how their division compares to the organization as a whole and how departments in the division compare to each other. Department managers will need to know how their results compare to the organization as a whole and to the division to which they belong.
4. Don't embarrass people in front of their peers. Teamwork and morale can be harmed if, for example, low-scoring departments are singled out in front of the whole management group. Rather than pointing out low-scoring departments to all department managers, let each department manager know how they fared compared to other departments via one-on-one meetings.
5. Discuss what happens next. After the results have been presented, let the audience know what steps will be taken to improve those items in need of improvement.
6. Respect confidentiality. Don't present information that would make people feel that their responses are not confidential. For example, it would not be appropriate for anyone in the organization to have access to comments for a small department, since some people may be able to accurately guess who made what comment. Your research supplier should assist in this by not providing information that could breach, or could be perceived to breach, confidentiality.
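As a rough sketch of principles 3 and 6 above (share appropriate information at each level, and respect confidentiality), the roll-up below reports department averages only when a department has enough respondents to protect anonymity. The scores, department names, and threshold are all invented:

```python
# Hypothetical overall-satisfaction scores keyed by (division, department).
scores = {
    ("Programs", "Tutoring"): [4, 5, 3, 4, 4],
    ("Programs", "Outreach"): [2, 3],           # too few responses to report alone
    ("Operations", "Finance"): [4, 4, 5],
}
MIN_RESPONSES = 3  # suppress department-level results below this threshold

def average(values):
    return round(sum(values) / len(values), 2)

# Organization-wide result: appropriate to share with everyone.
all_scores = [score for dept in scores.values() for score in dept]
print("Organization average:", average(all_scores))

# Department-level results: shared only where the group is large enough
# that individual respondents cannot be identified.
for (division, department), dept_scores in scores.items():
    if len(dept_scores) >= MIN_RESPONSES:
        print(f"{division} / {department}: {average(dept_scores)}")
    else:
        print(f"{division} / {department}: suppressed "
              f"(fewer than {MIN_RESPONSES} responses)")
```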
Process Considerations
Have a plan in place to disseminate information before the survey has been completed.
1. The CEO/president should be briefed by the internal project manager and/or the research supplier.
2. The CEO/president should share the results with division managers. Overall results should be shared in a group setting. Individual results should be shared in one-on-one meetings.
3. Key findings and implications should be highlighted in each presentation. Detailed results also should be presented. However, take care to avoid drowning people in information. This can be done by relying more heavily on graphics than on detailed tables to communicate.
4. Give employees an overview of overall results through the best means possible. For some organizations, this will be in a group setting. For others, it will be via email, Intranet, or newsletter. Consider using multiple methods.
5. Department managers should share departmental results with employees in a group meeting. It may be helpful to have an HR manager assist in the presentation. If HR managers will be part of this process, planning ahead will help the meetings to proceed smoothly and take place in a timely manner.
In all communications, make sure the communication is "two way." Questions should be encouraged.
Assignment
The final assignment in this class will be a proposed evaluation of the program of your choosing. Consult the syllabus and other material I have shared with you for details.
Here are some sample proposals.
Proposal 1
Proposal 2
These are examples of requests for proposals.
This is when organizations solicit proposals for evaluations.
Request for proposal 1
RFP 2
RFP 3
Evaluation reports
Sask Aboriginal Literacy Report pdf
Sask Literacy Report pdf
Saskhealth evaluations http://docs.google.com/gview?a=v&q=cache:V5tQS5tiZQgJ:www.health.gov.sk.ca/hen-newsletter-072006+evaluation+proposal+government+saskatchewan&hl=en
http://www.nrcan.gc.ca/evaluation/reprap/2006/e06003-eng.php
________________________________________________________________
What is educational research? It refers to research conducted to investigate behavioral, social and strategic patterns in students, teachers and other participants in schools and other educational institutions.
Suggested Textbook:
Gall, M. D., Gall, J. P. & Borg, W. R. (2007). Educational research: An introduction (8th ed.) Toronto, ON: Allyn & Bacon.
Research Journals:
In Education there is particular value in journals published by the American Educational Research Association including, but not limited to: American Educational Research Journal, Review of Educational Research, and Educational Researcher. Examples of Canadian educational research journals include: The Canadian Journal of Education, The Alberta Journal of Educational Research, Curriculum Inquiry, and The Canadian Journal of School Psychology.
Example of educational research:
Book Review by Nelson Dordelly-Rosales
Kathleen M. Iverson, E-Learning Games: Interactive Learning Strategies for Digital Delivery (NJ: Pearson Prentice Hall, 2005)
This book is about (main discussion)
• Classes of interaction: learner-interface interaction, learner-content interaction, learner-facilitator interaction, learner-learner interactions.
• Constructivist E-learning design steps: (1) identify course goals and objectives, (2) assess learner pre-knowledge and characteristics (use the appropriate language, consider learner preparation, adjust course pace, provide additional support, assess pre-training environment and learner motivation, assess available technology, consider learner’s capability of working in virtual teams or groups), (3) build motivational elements, (4) select a grounded instructional strategy (Gagne’s nine events of instruction), (5) define events, (6) select appropriate technological delivery tools (asynchronous delivery, synchronous delivery, delivery media) and interactive approach(es).
• Use of e-learning “session openers” to make a positive first impression and set course expectations, and to facilitate confidence in using new technology. Examples of icebreakers: the use of a personal blog, or talking about each learner’s particular area of expertise or about favourite pictures, sports, songs, movies, etc.
• Use of “scenario-based” e-learning, which consists of a highly engaging, authentic learning environment that allows trainees to solve authentic, work-based problems collaboratively anytime, anywhere. It involves key role play, including case studies, problem-based learning and goal-based scenarios; i.e., our course 874.
• Use of “peer learning” support: belonging to a network or community of learners is vital in a virtual environment. Opportunities for connection must be embedded in the course design to overcome feelings of loneliness, i.e., working in pairs.
• Use of “content review and practice” to engage learners in higher order thinking tasks or in doing things and thinking about what they are doing, such as analysis, synthesis and evaluation, interpretation, problem solving, enhancing affective area, i.e., multimedia scrapbook, virtual field trip, webquests, and blog.
• Use of “group discussions” to explore issues and topics relating to the course content, express opinions, draw upon prior knowledge and construct a new one, i.e., jigsaw (online chat, e-mail, board), the projector and screen, the fishbowl, etc
• Use of “idea generation” or brainstorming to quickly develop and communicate new ideas for problem development, process revision, and problem resolution; i.e., the top ten lists, defining excellence as it relates to the topic under study, etc.
• Use of “closers,” a bit of ceremony at the end that allows learners to revisit the course, record their ideas, and provide a link to the workplace; i.e., websites or webpages with a guest book, e-mail check up, virtual reunion, etc.
____________________________________________________________________
The author argues that:
• Until recently, most interaction in web-based training environments was technologically driven. Intelligent tutors, video, audio, and animated graphics were the accepted vehicles for adding interest and excitement to otherwise bland and boring script-based training. Although these advances are valuable, they come with a price in both development time and dollars.
• E-Learning Games contains ideas and practices that will add excitement to courseware without considerable expenditure of resources. Relying primarily on low-tech vehicles such as synchronous and asynchronous chat, e-mail, and instant messaging, the activities described in this textbook can be implemented in web-based training and educational courses alike.
______________________________________________________________________
The author makes the following statements or cites the following references in support of her argument (provide 2-3 quotes):
• What exactly is interaction in e-learning? Interaction is an interplay and exchange in which individuals and groups influence each other. Thus, “interaction is when there are reciprocal events requiring two objects and two actions.” (G. Moore, “Three Types of Interaction,” The American Journal of Distance Education 3 (1989):6)
• Our role as instructional designers is to move from merely sequencing material to creating highly interactive online environments in which constructivist learning may occur, by creating rich contexts, authentic tasks, collaboration and an abundance of tools to enhance communication and access to real world examples and problem solving, and mentoring relationships to guide learning. (T. Duffy & D. Jonassen, Constructivism and the Technology of Instruction: A Conversation (Hillsdale, NJ: Lawrence Erlbaum Associates, 1996), p. 67)
______________________________________________________________________
The author concludes that:
• It is much more effective to place learners in groups where they receive guidance on how to use web resources to explore the topic, discuss their findings with others, work together to locate answers, create their own model of motivation, and receive feedback and further guidance from the facilitator. “Building ties to highly connected, central other is more efficient than links to peripheral others who are not well connected” (Iverson, 2005, p. 187)
• The author includes a long list of software resources that facilitate the delivery of some of the activities included in the book: virtual greeting cards, weblog hosting, desktop collaboration, MOOs, visual diagramming, digital photo albums, storyboarding, multimedia scrapbooks, virtual field trips, guest books, virtual meetings, and miscellaneous free software trials (Iverson, 2005, pp. 175-178).
• The following strategies are useful in e-learning for digital delivery: (1) use the e-learning design checklist (pp. 179-180), (2) use a checklist to adapt and create e-learning games that fit the needs of learners (model on pp. 181-183), and (3) use a variety of examples of learning activities (such as the ones provided in the book and in addendum D, pages 185-188).
PROGRAM EVALUATION
Modules:
Module 1 September 5 in Room 2001 at the College of Education. It will focus on the basics of program evaluation.
Module 2 September 26 in Room 2001 at the College of Education. The specific techniques involved in conducting an evaluation. Pre-planning, logic models and resources will be discussed.
Module 3 October 17 in Room 2001 at the College of Education. This module will focus data collection and analysis. The understanding and application of focus groups and online survey techniques will also be addressed.
Module 4 on November 21 in Room 2001 at the College of Education and deal with ethics in evaluation and review of the final project in the course
Assignments
One: Choose a completed evaluation; any kind, your choice. Explain the model or process used in the evaluation and identify in your mind the strengths and weaknesses of the evaluation and the approach that was taken. The finished piece should be 500 words in length. You will share your ideas on your blog.
Due Sept 12 Value - 10 marks
Two: a simulated program case study. You will choose a model or approach that you feel is appropriate to evaluate this program and explain why you think it would work. This will be a one-page document that you will post on your blog.
Due Sept 19 Value – 10 Marks
Three: Using your test organization or program you will perform an evaluation assessment. This step is used to determine the feasibility and direction of your evaluation. You will post your assessment on your blog.
Due October 15? Value - 10 marks
Four: Objectives: to become familiar with logic models as a method for understanding the workings of an organization. You will map out and get a thorough overview of your chosen organization or program you need to create a logic model. It can be in the form of a flow chart or any of the other models we have reviewed in the course. The assignment will consist of a logic model (generally a single page) and a description of the model. This will also be posted on your blog. It is due October 15. Value - 10 marks
Assignment Five: You will design and test a short survey. Include a variety of question types such as scale rating, short answer, and open-ended. You will submit the original version and the modified version based on the testing of the survey with four individuals. You will post your information on your blog.
This assignment is due on November 20. Value - 10 marks
Major assignment: Evaluation Plan
Objective: To demonstrate the ability to integrate the different tools and theories addressed in the class into an evaluation plan.
You will design an evaluation plan for the organization or program of your choice. Your final assignment will be a culmination of all we have done in the course. The plan will be a theoretical paper that outlines the program to be evaluated and the goals or objectives to be evaluated. It will demonstrate your ability to analyze a program, determine a suitable evaluation plan and create the instruments you would use to conduct the analysis. Essentially the purpose of an evaluation plan is to convince someone that you should be the evaluator for the evaluation. Hence, you want to convince an agency/institution/individual that you have the “best” team to perform the evaluation. So, an important piece of the evaluation plan is for you to describe, or elaborate upon, your reasons for selecting particular foci and approaches. We will address the specifics of this plan later in the course.
Due December 11, 2009. Value - 50 marks
Introduction to Program Evaluation
Course Description:
This course examines current models for the evaluation of educational programs. The emphasis is on exploring the range of options that is available to the program evaluator and on developing an awareness of the strengths and limitations of the models and techniques. Problems in carrying out educational evaluations are also studied: examples of such problems are the utilization of evaluation results and the ethics of evaluation. The course will use the Blackboard learning management system. You can access the course material by logging into http://webct6.usask.ca. Students will be required to create and maintain a blog to share their experiences and assignments with the others in the class (We will review suitable blog choices on the first class day).
Class Times, Appointments and Office Hours
This course will be taught in modules. If you are unable to attend any of the module you will be able to join via the Internet using a program called Elluminate. Please contact the instructor for details.
The first module will be held on September 5 in Room 2001 at the College of Education. It will focus on the basics of program evaluation.
The second module will be held on September 26 in Room 2001 at the College of Education. The specific techniques involved in conducting an evaluation. Pre-planning, logic models and resources will be discussed.
The third module will be held on October 17 in Room 2001 at the College of Education. This module will focus data collection and analysis. The understanding and application of focus groups and online survey techniques will also be addressed.
The fourth and final module will be held on November 21 in Room 2001 at the College of Education and deal with ethics in evaluation and review of the final project in the course.
I will be available to see you at any time by appointment. I will always be available to you through e-mail without an appointment.
Text: The course will not have a required textbook. If you wish to supplement the resources I have offered you in the course Owen and Roger’s book or McDavid and Hawthorne’s text would be useful additions to your professional library.
http://www.amazon.ca/Program-Evaluation-Approaches-John-Owen/dp/076196178X
Program Evaluation and Performance Measurement: An Introduction to Practice (McDavid and Hawthorne, 2005
Course Objectives
• To define and understand “What is program evaluation?”
• To understand the historical foundations of program evaluation.
• To identify and develop appropriate evaluation assessment techniques used in educational and other program settings.
• To understand appropriate data gathering techniques for evaluation purposes.
• To demonstrate the ability to create data gathering instruments.
• To understand the process and procedures involved in data analysis.
• To understand the unique roles and responsibilities of the various members of an evaluation team.
• To become aware of the ethical responsibilities of evaluators and the political implications of evaluations.
• To prepare for learning in a variety of authentic situations
http://www.schoolofed.nova.edu/arc/research_courses/sylpep.pdf
http://www.epa.gov/evaluate/whatis.htm
www.epa.gov/evaluate/whatis.pdf
http://cde.athabascau.ca/syllabi/mdde617.php
www.gsociology.icaap.org/methods/evaluationbeginnersguide.pdf
www.ocde.k12.ca.us/downloads/assessment/WHAT_IS_Program_Evaluation.pdf
www.en.wikipedia.org/wiki/Program_evaluation
Assignments
Assignment One: Choose a completed evaluation; any kind, your choice. Explain the model or process used in the evaluation and identify in your mind the strengths and weaknesses of the evaluation and the approach that was taken. The finished piece should be 500 words in length. You will share your ideas on your blog.
Due Sept 12.
Value - 10 marks
Assignment Two: I will e-mail you a simulated program case study. You will choose a model or approach that you feel is appropriate to evaluate this program and explain why you think it would work. This will be a one-page document that you will post on your blog.
Due Sept 19.
Value – 10 Marks
Assignment Three:
Using your test organization or program you will perform an evaluation assessment. This step is used to determine the feasibility and direction of your evaluation. You will post your assessment on your blog.
Due October 15.
Value - 10 marks
Assignment Four:
Objectives: to become familiar with logic models as a method for understanding the workings of an organization.
You will map out and get a thorough overview of your chosen organization or program you need to create a logic model. It can be in the form of a flow chart or any of the other models we have reviewed in the course. The assignment will consist of a logic model (generally a single page) and a description of the model. This will also be posted on your blog. It is due October 15. Value - 10 marks
Assignment Five:
You will design and test a short survey. Include a variety of question types such as scale rating, short answer, and open-ended. You will submit the original version and the modified version based on the testing of the survey with four individuals. You will post your information on your blog.
This assignment is due on November 20. Value - 10 marks
Major assignment: Evaluation Plan
Objective: To demonstrate the ability to integrate the different tools and theories addressed in the class into an evaluation plan.
You will design an evaluation plan for the organization or program of their choice. Your final assignment will be a culmination of all we have done in the course. The plan will be a theoretical paper that outlines the program to be evaluated and the goals or objectives to be evaluated. It will demonstrate your ability to analyze a program, determine a suitable evaluation plan and create the instruments you would use to conduct the analysis. Essentially the purpose of an evaluation plan is to convince someone that you should be the evaluator for the evaluation. Hence, you want to convince an agency/institution/individual that you have the “best” team to perform the evaluation. So, an important piece of the evaluation plan is for you to describe, or elaborate upon, your reasons for selecting particular foci and approaches. We will address the specifics of this plan later in the course.
Due December 11, 2009. Value - 50 marks
Module 1 What is evaluation?
What is Program Evaluation?
Please review the following material before we meet on Sept 5. They will give you a grounding in the concepts behind program evaluation.
http://www.managementhelp.org/evaluatn/fnl_eval.htm
http://pathwayscourses.samhsa.gov/eval101/eval101_toc.htm
This module is intended to introduce you to the concepts of Program Evaluation.It is not program or content specific. It does not matter what area you are most knowledgeable PE is a tool that you can apply to generate a better understanding of what is happening. The program evaluation you choose may based on your personal approach to a situation or the situation itself may point to a particular method. There are a number of approaches that will fit any given setting. Most program evaluations are short term. They are a snapshot of what is happening at a particular point in time. Longitudinal evaluations are difficult to conduct as they are more time consuming and costly. Essentially you are trying to answer the question, "Does the program do what it says it does?". Because evaluation is on-going your evaluation may steer your client in a particular direction and it will also be used to inform the next evaluation.
PE is essentially research into an organization program or process.
As you will learn when we study logic modelling four aspects of evaluation may include:
1. Input
2. Output
3. Outcome
4. Impact
The Canadian government views evaluation as:
1. Planning
2. Evaluating
3. Reccomending
You will develop your own approach to evaluation. It may be based on an existing model a combination of different factors that suit they type of evaluator you are and the situation you are involved in. The following section introduces you to some of the formatlized appraoches to evaluation.
Major theoretical concepts behind Program Evaluation
Evaluations can be formative, intended to provide feedback on the modification of an on-going program or summative, designed to determine if a process or program was effective not necessarily to change it. Here is a comparison of the two approaches.
http://jan.ucc.nau.edu/edtech/etc667/proposal/evaluation/summative_vs._formative.htm
Many modules have been developed by those who have studied PE over the years. A quick overview of the major models and the theorists who developed them is presented in this pdf document by Michael Scriven, one of the leading academics in the area of program evaluation. It is important to understand that a variety of models exist and the program evaluation has evolved in much the same way that research models in general have changed. Some of the more well-known models are the CIPP, Discrepancy, Adversary, goal-free, transactional. Here is an overview of the history and the major theoretical models in program evaluation.
Resources
809 Delicious account: http://delicious.com/wi11y0/809
Canadian Evaluation Society
http://www.evaluationcanada.ca/site.cgi?s=1
American Evaluation Association
http://www.eval.org/
Helpful textbooks
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R.(2004). Program evaluation: Alternative approaches and practical guidelines. White Plains, NY: Longman.
Owen, J. M., & Rogers, P. J. (1999). Program evaluation: Forms and approaches. Thousand Oaks, CA: Sage.
Posovac, E., & Carey, R. (2003). Program Evaluation รข€“ Methods and Case Studies. (6th edition). New Jersey: Prentice Hall
ISBN #: 0130409669
Evaluation cookbook
http://www.icbl.hw.ac.uk/ltdi/cookbook/
Assignments for this module
1. Choose a completed evaluation; any kind, your choice. Determine the model used and identify in your mind the strengths and weaknesses of the evaluation and the approach that was taken. The finished piece should be 500 words in length. You will share your ideas on your blog. Due Sept 12
2. I will e-mail you a simulated program case study. You will choose a model or approach that you feel is appropriate to evaluate this program and explain why you think it would work. This will be a one page document that you will post on your blog. Due Sept 19
Module 2 - The process of evaluation
Before you conduct an evaluation you need to have as complete an understanding of the focus of your evaluation as possible. You need to learn all that you can about the program, its purpose, and the people that you will be working with. This means generating a thorough understanding of the organization that is connected to your evaluation. A good place to start is with any previous evaluations. This information will let you know how the organization has dealt with evaluations in the past and may help you determine if there is a willingness to put into practice the results of a study.
The following resources give a systematic look at the steps that are involved in an evaluation.
http://www.uwex.edu/ces/pdande/evaluation/index.html
Designing Evaluations : http://www.wmich.edu/evalctr/jc/DesigningEval.htm
Pdf version from the University of Wisconsin
Here is the checklist to get you through the process as a Word file.
A next step is to design a flow chart or a model of the organization you are working with that shows how the organization operates and how what you are evaluating fits into the big picture. This is done to cast a wide net, to see where you will look for input as well as to determine who will be affected by the outcomes of your evaluation. This can be done with a flow chart or what is known as a logic model. Logic models give a thorough breakdown of an organization.
Follow this link to learn about logic models
http://www.tbs-sct.gc.ca/eval/tools_outils/RBM_GAR_cour/Bas/module_02/module_0201_e.asp
http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html
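To make the idea concrete, here is a minimal sketch of how the pieces of a simple logic model could be written down before you draw the flow chart. The program, its components, and every entry below are invented for illustration only; a real model would be built from the organization's documents and from interviews with its people.

```python
# A minimal sketch of a logic model for a hypothetical after-school literacy program.
# Every entry is invented for illustration; replace them with what you learn
# about the organization you are evaluating.
logic_model = {
    "inputs": ["two part-time tutors", "donated books", "classroom space"],
    "activities": ["weekly tutoring sessions", "family reading nights"],
    "outputs": ["number of sessions delivered", "number of students attending"],
    "outcomes": ["improved reading scores", "more time spent reading at home"],
    "impact": ["stronger literacy in the community over the long term"],
}

# Print the model as a simple flow so it can be pasted into a report or blog post.
for stage, items in logic_model.items():
    print(f"{stage.upper():<10} -> " + "; ".join(items))
```

However you record it, the point is the same as in the flow-chart version: each stage should connect to the next, so you can see where to gather data and who is touched by the program.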
Here is a helpful checklist for preparing to begin your evaluation
http://www.managementhelp.org/evaluatn/chklist.htm
Working with your clients
It is important for those you are working with to understand what you will and will not do. They must also understand what is needed from them and their organization. This is where the art and science meet. You will need to carefully judge the political climate and the willingness of the organization to actually change. The case may be that the higher-ups in an organization are implementing an evaluation without the support of the members of the organization. You may be seen as a threat and it may make sense for you to spend time working on the relationship component of the evaluation.
Assignments for this module
Case study
You will need to select an organization or program to use as a model for the rest of the course. It can be an educational program, a government program, or a particular organization that has a specific mandate. It may be beneficial to choose an organization in your local community so that you can access individuals for input in your school work. Once you have decided on who you would like to use, please e-mail me your choice and why you chose the program or organization.
Assignment #3
Using your test organization or program, you will perform an evaluation assessment. This step is used to determine the feasibility and direction of your evaluation. You will post your assessment on your blog. Due October.
Assignment #4
To map out and get a thorough overview of your chosen organization or program you need to create a logic model. It can be in the form of a flow chart or any of the other models we have reviewed in the course. This will also be posted on your blog. It is due October
Module 3 - Gathering and evaluating data
You will now be confident that you can proceed with the evaluation based on the results of your evaluation assessment. At this point you will need to create a set of instruments to generate data that will answer your questions about the chosen program or organization.
Designing your evaluation
Once you have done the preliminary work with the client and established the focus of the evaluation, you need to develop the measures and instruments that you will use to answer your questions. This means choosing the format and type, and then testing the instruments to ensure that they will work properly. Here is an overview of some of the different options you have for gathering data. You may want to begin by looking at any information that has already been gathered by an organization. This may be survey data, graduation rates, or financial records. You will likely create a survey of the major stakeholders or interview them individually or in a focus group. This file will give you a good grounding in designing surveys and working with focus groups.
Creating surveys
A survey is a common way to generate data from stakeholders, employees, and clients connected to a program or policy. Having clear, well-written questions presented in a variety of formats will go a long way toward generating reliable data. You can use existing surveys and modify them to work with the specifics of your particular evaluation. Here is a sample survey for you to review. Traditionally this has been done using a paper form. This has worked well, but there is now the option of using the Internet. Using the Internet allows for data to be in a format that can be more easily collected and analyzed. The U of S has an online survey tool available for you to use. It can be accessed at http://www.usask.ca/its/services/websurvey_tool/
Here are two other useful resources for creating surveys.
Creating a paper survey
Getting better results from online surveys
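Before you move your questions onto paper or into the U of S online tool, it can help to sketch them in a structured form so you can see at a glance how many of each question type you have. The questions below are invented examples, not taken from the sample survey or any course material.

```python
# A sketch of a short survey recorded as structured data. The wording and the
# 1-5 scale are invented examples; adapt them to your own evaluation focus.
survey = [
    {"type": "scale", "text": "The program met my expectations.", "scale": (1, 5)},
    {"type": "short_answer", "text": "Which part of the program did you use most?"},
    {"type": "open_ended", "text": "What would you change about the program?"},
]

# Print the questions the way they might appear on the form.
for i, q in enumerate(survey, start=1):
    label = f"Q{i} [{q['type']}]"
    if q["type"] == "scale":
        low, high = q["scale"]
        print(f"{label} {q['text']} (rate {low}-{high})")
    else:
        print(f"{label} {q['text']}")
```

Keeping the questions in one place like this also makes it easier to revise them after pilot testing, which you will do for Assignment #5.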
Focus groups
Focus groups allow you to meet with many people at once to discuss issues and collectively generate data. They will allow you to get homogeneous or mixed groups to share and feed off one another.
Here is an example of a document used to organize and conduct a focus group.
Validity of your instruments
Having confidence in your data gathering instruments is very important. You cannot have any useful results if they are based on flawed data. This is why evaluators will often use instruments that have been used and tested by others. If possible, taking an existing survey and modifying it slightly to fit your client's needs will give you peace of mind and will likely be a better measure of what you are trying to assess. If you are designing a survey from scratch you need to make sure that what you are asking and how you are asking it is correct. This means sharing your instrument with others in the know or with experts in measurement. Pilot testing and usability testing your survey with a group similar to the one you will be surveying is also very important.
Once you are confident that your instruments are valid and reliable then you can gather your data.
Data analysis
Results must be shared for your evaluation to be of use to anyone. You should make recommendations to those who offer the program. This cannot be done without a careful analysis of the data that you have collected. Once you have gathered enough data you will have to compile and compare the results with the original objectives. This link gives you some insights into the process of data analysis http://www.uwex.edu/ces/tobaccoeval/resources/surveynotes28aug2001.html#defs
Here is an example of an interview transcript that has been analyzed (qualitative analysis). Read through it and then test your own skills against what the researcher discovered. http://hsc.uwe.ac.uk/dataanalysis/qualTextDataEx.asp
From the same website here is a look at quantitative data analysis. http://hsc.uwe.ac.uk/dataanalysis/quantWhat.asp
Don't be scared; you will not have to become an expert in this type of analysis (at least not for this class).
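To give you a taste of how basic the quantitative side can be, here is a small sketch that summarizes the ratings from a single survey question. The numbers are made up; the idea is simply that a mean, a median, and a standard deviation are usually enough to describe one item in a report.

```python
import statistics

# Made-up ratings (1-5 scale) for one survey question, used only to illustrate
# the kind of descriptive summary an evaluation report might include.
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

mean = statistics.mean(ratings)      # average rating
median = statistics.median(ratings)  # middle rating when sorted
stdev = statistics.stdev(ratings)    # spread of ratings around the mean

print(f"n = {len(ratings)}")
print(f"mean = {mean:.2f}, median = {median}, standard deviation = {stdev:.2f}")
```

A summary like this for each question, set against the program's original objectives, is often all the quantitative analysis your evaluation plan will need.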
Assignment #5
You will design and test a short survey. I have included an example for you to use as a guide. Include a variety of question types such as scale rating, short answer, and open-ended. You will submit the original version and the modified version based on the testing of the survey with a group of 4 different individuals. You will post your information on your blog. This assignment is due on November 16, 2009.
Module 4 - Ethics of evaluation
It is important that, as an evaluator, you be objective. Your primary purpose is to serve the needs of your client. That being said, you must design and conduct your evaluation with the needs and protection of all those impacted by your results in mind. There is often fear associated with the evaluation of one's performance. This is especially true when the evaluator is someone who is coming from the outside and does not have the chance to have a longitudinal look at programs or organizations.
Program evaluation standards are put forth by the American Evaluation Association to guide evaluators in their conduct.
This PowerPoint presentation looks at the guiding principles of evaluation.
Sharing the Evaluation Results (I found this and modified it for our class). http://www.busreslab.com/ESATsharingresults.htm
It is critical to share results in a timely manner for at least two reasons:
1. Everyone must know where the organization as a whole and their individual areas stand if you are going to fully leverage the creativity and efforts of the employee base.
2. Participants need to know that the time they spent in completing the survey was worthwhile.
Each organization has its own information-sharing culture in place. While in some cases, particularly if the survey showed communication to be a problem, the process will need some adjustment, we recognize that each organization will have an approach to information dissemination that it typically leverages. As such, modifications to our recommended approach may be in order to account for an organization's information-sharing culture.
The Basic Principles of Sharing Survey Results
1. Be honest. An organization must be willing to share both its strengths and its areas in need of improvement. Employees will see through attempts to hide or "spin" information.
2. Be timely. The sooner you release results, the sooner the organization can begin to move toward positive change.
3. Share appropriate information at each level. Senior management will need encapsulated results and access to detailed results for the organization as a whole and differences between divisions/departments. Division managers will need to know how their division compares to the organization as a whole and how departments in the division compare to each other. Department managers will need to know how their results compare to the organization as a whole and to the division to which they belong.
4. Don't embarrass people in front of their peers. Teamwork and morale can be harmed if, for example, low-scoring departments are singled out in front of the whole management group. Rather than pointing out low-scoring departments to all department managers, let each department manager know how they fared compared to other departments via one-on-one meetings.
5. Discuss what happens next. After the results have been presented, let the audience know what steps will be taken to improve those items in need of improvement.
6. Respect confidentiality. Don't present information that would make people feel that their responses are not confidential. For example, it would not be appropriate for anyone in the organization to have access to comments for a small department, since some people may be able to accurately guess who made what comment. Your research supplier should assist in this by not providing information that could breach, or could be perceived to breach, confidentiality.
Process Considerations
Have a plan in place to disseminate information before the survey has been completed.
1. The CEO/president should be briefed by the internal project manager and/or the research supplier.
2. The CEO/president should share the results with division managers. Overall results should be shared in a group setting. Individual results should be shared in one-on-one meetings.
3. Key findings and implications should be highlighted in each presentation. Detailed results also should be presented. However, take care to avoid drowning people in information. This can be done by relying more heavily on graphics than on detailed tables to communicate.
4. Give employees an overview of overall results through the best means possible. For some organizations, this will be in a group setting. For others, it will be via email, Intranet, or newsletter. Consider using multiple methods.
5. Department managers should share departmental results with employees in a group meeting. It may be helpful to have an HR manager assist in the presentation. If HR managers will be part of this process, planning ahead will help the meetings to proceed smoothly and take place in a timely manner.
In all communications, make sure the communication is "two way." Questions should be encouraged.
Assignment
The final assignment in this class will be a proposed evaluation of the program of your choosing. Consult the syllabus and other material I have shared with you for details.
Here are some sample proposals.
Proposal 1
Proposal 2
These are examples of requests for proposals (RFPs), which organizations issue when they solicit proposals for evaluations.
Request for proposal 1
RFP 2
RFP 3
Evaluation reports
Sask Aboriginal Literacy Report pdf
Sask Literacy Report pdf
Saskhealth evaluations http://docs.google.com/gview?a=v&q=cache:V5tQS5tiZQgJ:www.health.gov.sk.ca/hen-newsletter-072006+evaluation+proposal+government+saskatchewan&hl=en
http://www.nrcan.gc.ca/evaluation/reprap/2006/e06003-eng.php