Thursday 5 March 2009

Educational Research - Analysis of Research Journals by Nelson Dordelly-Rosales

The idea is to promote educational research and its practice.
What is educational research? It refers to research conducted to investigate behavioral, social and strategic patterns in students, teachers and other participants in schools and other educational institutions.

Suggested Textbook:
Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8th ed.). Toronto, ON: Allyn & Bacon.

Research Journals:
In education, there is particular value in the journals published by the American Educational Research Association, including, but not limited to: American Educational Research Journal, Review of Educational Research, and Educational Researcher. Examples of Canadian educational research journals include: The Canadian Journal of Education, The Alberta Journal of Educational Research, Curriculum Inquiry, and The Canadian Journal of School Psychology.


Example of educational research:

Book Review by Nelson Dordelly-Rosales
Kathleen M. Iverson, E-Learning Games: Interactive Learning Strategies for Digital Delivery (NJ: Pearson Prentice Hall, 2005)

This book is about (main discussion)

• Classes of interaction: learner-interface interaction, learner-content interaction, learner-facilitator interaction, learner-learner interactions.

• Constructivist E-learning design steps: (1) identify course goals and objectives, (2) assess learner pre-knowledge and characteristics (use the appropriate language, consider learner preparation, adjust course pace, provide additional support, assess pre-training environment and learner motivation, assess available technology, consider learner’s capability of working in virtual teams or groups), (3) build motivational elements, (4) select a grounded instructional strategy (Gagne’s nine events of instruction), (5) define events, (6) select appropriate technological delivery tools (asynchronous delivery, synchronous delivery, delivery media) and interactive approach(es).

• Use of e-learning “session openers” to make a positive first impression, set course expectations, and build confidence in using new technology. Examples of icebreakers: a personal blog, or talk about each learner’s particular area of expertise or about a favourite picture, sport, song, or movie.

• Use of “scenario-based” e-learning, which consists of a highly engaging, authentic learning environment that allows trainees to solve authentic, work-based problems collaboratively anytime, anywhere. It involves key role play, including case studies, problem-based learning, and goal-based scenarios; e.g., our course 874.

• Use of “peer learning” support: belonging to a network, or community of learners, is vital in a virtual environment. Opportunities for connection must be embedded in the course design to overcome feelings of loneliness; e.g., working in pairs.

• Use of “content review and practice” to engage learners in higher-order thinking tasks, or in doing things and thinking about what they are doing, such as analysis, synthesis and evaluation, interpretation, problem solving, and work in the affective domain; e.g., multimedia scrapbooks, virtual field trips, webquests, and blogs.

• Use of “group discussions” to explore issues and topics relating to the course content, express opinions, draw upon prior knowledge, and construct new knowledge; e.g., the jigsaw (via online chat, e-mail, or discussion board), the projector and screen, the fishbowl, etc.

• Use of “idea generation” or brainstorming to quickly develop and communicate new ideas for problem development, process revision, and problem resolution; e.g., top-ten lists, defining excellence as it relates to the topic under study, etc.

• Use of “closers,” a bit of ceremony at the end that allows learners to revisit the course, record their ideas, and provide a link to the workplace; e.g., websites or webpages with a guest book, an e-mail check-up, a virtual reunion, etc.
____________________________________________________________________
The author argues that:

• Until recently, most interaction in web-based training environments was technologically driven. Intelligent tutors, video, audio, and animated graphics were the accepted vehicles for adding interest and excitement to otherwise bland and boring script-based training. Although these advances are valuable, they come with a price in both development time and dollars.

• E-Learning Games contains ideas and practices that will add excitement to courseware without considerable expenditure of resources. Relying primarily on low-tech vehicles such as synchronous and asynchronous chat, e-mail, and instant messaging, the activities described in this textbook can be implemented in web-based training and educational courses alike.
______________________________________________________________________
The author makes the following statements or cites the following references in support of her argument (2-3 quotes):

• What exactly is interaction in e-learning? Interaction is an interplay and exchange in which individuals and groups influence each other. Thus, “interaction is when there are reciprocal events requiring two objects and two actions.” (G. Moore, “Three Types of Interaction,” The American Journal of Distance Education 3 (1989): 6)

• Our role as instructional designers is to move from merely sequencing material to creating highly interactive online environments in which constructivist learning may occur: rich contexts, authentic tasks, collaboration, an abundance of tools to enhance communication, access to real-world examples and problem solving, and mentoring relationships to guide learning. (T. Duffy & D. Jonassen, Constructivism and the Technology of Instruction: A Conversation (Hillsdale, NJ: Lawrence Erlbaum Associates, 1996), p. 67)
______________________________________________________________________
The author concludes that:

• It is much more effective to place learners in groups where they receive guidance on how to use web resources to explore the topic, discuss their findings with others, work together to locate answers, create their own model of motivation, and receive feedback and further guidance from the facilitator. “Building ties to highly connected, central others is more efficient than links to peripheral others who are not well connected” (Iverson, 2005, p. 187).

• The author includes a long list of software resources that facilitate the delivery of some activities included in the book: virtual greeting cards, weblog hosting, desktop collaboration, MOOs, visual diagramming, digital photo albums, storyboarding, multimedia scrapbooks, virtual field trips, guest books, virtual meetings, and miscellaneous free software trials (Iverson, 2005, pp. 175-178).

• The following strategies are useful in e-learning for digital delivery: (1) use the e-learning design checklist (pp. 179-180); (2) use a checklist to adapt and create e-learning games that fit the needs of learners (model on pp. 181-183); and (3) use a variety of examples of learning activities, such as the ones provided in the book and in Addendum D (pp. 185-188).

Article Review by Nelson Dordelly-Rosales

Janice Redish and Dana Chisnell (2004). “Designing Web Sites for Older Adults: A Review of Recent Research.” AARP, Washington, D.C. 67 pages. Online:
http://search.yahoo.com/search?p=book+designing+web+sites+instructional+design&fr=yfp-t-501&toggle=1&cop=mss&ei=UTF-8&fp_ip=CA&vc=
_________________________________

This article is about (main discussion)
• Reviews recent, relevant research about Web site design and older adult users. From the research reviewed in this article, the authors developed a set of heuristics to use in persona-based, task-based reviews of 50 sites that older adult users are likely to visit.
• It concentrates on research in interaction and navigation design, information architecture, presentation or visual design, and information design. The article first discusses who counts as an “older adult,” what factors besides age must be considered, how those factors have been used in research studies, and what must be kept in mind about older adults. It then covers “Interaction Design: Designing the way users work with the site,” “Information Architecture: Organizing the content,” “Visual Design: Designing the pages,” and “Information Design: Writing and formatting the content,” and it finishes by explaining how to conduct research and usability studies with older adults.
• The authors conducted this literature review to (a) better understand the “older adult” audience, (b) identify common usability and design issues specific to older Web users, (c) provide guidance to designers and developers of any Web site or Web-based application who have older adults in their audiences, and (d) add information about e-commerce Web sites and Web transactions to AARP’s Older Wiser Wired (OWW) Web site (www.aarp.org/olderwiserwired).
_________________________________________________________________
The authors argue that:

• Older adults are more diverse than younger people are. Within this group, individuals have different experiences and different needs, habits, thoughts, and beliefs. Because of this diversity, it is extremely difficult to generalize performance, behaviours, and preferences across millions of people. Some older adults take technology for granted, but for others, using the Web is new territory. People in their 50s and 60s are more likely to have used computers at work, but many older adults, even those who are middle-aged, are learning to use computers and the Web on their own.
• The authors propose a new tool that Web design teams could use to help them decide where their users fall along these dimensions and thus how best to serve their audiences. The approach looks at four factors: (a) age, including chronological age but taking life experiences into account; (b) ability, cognitive and physical; (c) aptitude, expertise with the technology; and (d) attitude, confidence level and emotional state of mind. (A minimal sketch of this tool as a data structure follows this list.)
• The implications of those attributes: they can be used to judge the need for support and training and the level of complexity of features and functions that different users can be expected to handle. That is, increased age is likely to require less complexity, but increased aptitude allows for more complexity. Higher ability (that is, physical and mental fitness) allows for more complexity and is also likely to correlate with lower age.
• “User experience” seems to include these qualities: a clear understanding by the site designers and content providers of who the users are (including demographics, domain knowledge, technical expertise, and frame of mind) and why they come to the Web site (tasks, triggers, and motivations); plain and immediate communication of the purpose and scope of the Web site (as shown through the visual design, information architecture, and interaction design); and compelling, usable, desirable, useful, and possibly delightful content (including tone, style, and depth of content).
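
Redish and Chisnell's four-factor tool can be expressed as a small data structure. The Python sketch below is my own hypothetical illustration, not the authors' instrument: the field names, the 1-5 scales, and the scoring rule are invented, to show how a design team might record where a user falls along the four dimensions and apply the rule of thumb above (higher ability and aptitude allow more complexity; advanced age suggests less).

```python
from dataclasses import dataclass

@dataclass
class AudienceProfile:
    """One user's position on the four dimensions.

    The scales are invented for illustration: each runs 1 (low) to 5 (high),
    except age, which is recorded in years.
    """
    age: int       # chronological age, tempered by life experience
    ability: int   # cognitive and physical fitness, 1-5
    aptitude: int  # expertise with the technology, 1-5
    attitude: int  # confidence and emotional state of mind, 1-5

def suggested_complexity(profile: AudienceProfile) -> str:
    """Apply the review's rule of thumb: ability and aptitude raise the
    complexity a user can handle; advanced age lowers it."""
    score = (profile.ability + profile.aptitude + profile.attitude
             - profile.age // 20)
    if score >= 10:
        return "full-featured design"
    if score >= 6:
        return "moderate complexity, with extra support"
    return "simple design, with strong guidance and training"

# Example: a 72-year-old with good ability but little Web experience.
print(suggested_complexity(AudienceProfile(age=72, ability=4, aptitude=2, attitude=3)))
```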
______________________________________________________________________
The authors make the following statements or cite the following references in support of their argument (2-3 quotes):
• It takes many roles to design a Web site for older adults. DUX, a conference organized by a convergence of professional organizations, suggests that all of these roles (and probably more) contribute to designing the user experience. The authors suggest viewing the conference site: www.dux2005.org
• The authors suggest viewing the Interaction Design Group at http://interactiondesigners.com. Interaction design is “defining the complex dialogues that occur between people and interactive devices of many types, from computers to mobile communications devices to appliances.” Humans and technology act on each other. In the case of Web sites, interaction design determines how a Web site behaves. This behaviour manifests as navigation elements: scrolling, links, buttons, and other widgets, along with how they are placed on a page, what their relationships are to each other on the page, and how easily users can recognize the elements and what the elements will do for them.
• For older participants, it is best to include widgets that are obviously clickable and visually look like buttons (Chadwick-Dias, Ann, with Michelle McNulty and Tom Tullis. “Web usability and age: How design changes can improve performance.” Conference paper, ACM SIGCAPH Computers and the Physically Handicapped, Proceedings of the 2003 conference on universal usability, Issue 73-74).
• The authors cite 57 references. Among them is Bailey, Koyani, et al. (Bailey, Bob, with Sanjay Koyani, Michael Ahmadi, Marcia Changkit, and Kim Harley (NCI). “Older Users and the Web.” Article, Usability University, July 2004; jointly sponsored by GSA, HHS, and AARP), who found that older users tended to get lost on Web sites much more quickly than younger users “because they were penalized much more by poor labels and headers than were the younger users” and seemed less able to recover from these types of selection mistakes. Because their research shows that Web users skim or scan pages and are attracted to visual elements such as links, Theofanos and Redish suggest using highly descriptive link labels, ensuring that a link will be understandable and useful on its own. They also suggest starting links with relevant keywords and avoiding multiple links that start with the same words. This should help all types of users, not only those who use screen readers or talking versions of Web sites. (Theofanos, Mary, and Janice Redish. “Guidelines for accessible and usable websites: Observing users who work with screen readers.” Interactions, X(6), November-December 2003, pp. 38-51. ACM, the Association for Computing Machinery.)
______________________________________________________________
The authors conclude that:

Further research is needed to assess the relative importance of the different dimensions in designing Web sites. Older adults exhibit different usage behaviours, and designers must keep in mind that many older adults have cognitive and other medical limitations.
________________________________________________________________
PROGRAM EVALUATION

Introduction to Program Evaluation

Course Description:
This course examines current models for the evaluation of educational programs. The emphasis is on exploring the range of options that is available to the program evaluator and on developing an awareness of the strengths and limitations of the models and techniques. Problems in carrying out educational evaluations are also studied: examples of such problems are the utilization of evaluation results and the ethics of evaluation. The course will use the Blackboard learning management system. You can access the course material by logging into http://webct6.usask.ca. Students will be required to create and maintain a blog to share their experiences and assignments with the others in the class (We will review suitable blog choices on the first class day).

Class Times, Appointments and Office Hours

This course will be taught in modules. If you are unable to attend any of the modules, you will be able to join via the Internet using a program called Elluminate. Please contact the instructor for details.

The first module will be held on September 5 in Room 2001 at the College of Education. It will focus on the basics of program evaluation.

The second module will be held on September 26 in Room 2001 at the College of Education. It will cover the specific techniques involved in conducting an evaluation; pre-planning, logic models, and resources will be discussed.

The third module will be held on October 17 in Room 2001 at the College of Education. This module will focus on data collection and analysis. The understanding and application of focus groups and online survey techniques will also be addressed.

The fourth and final module will be held on November 21 in Room 2001 at the College of Education and will deal with ethics in evaluation and a review of the final project in the course.

I will be available to see you at any time by appointment. I will always be available to you through e-mail without an appointment.


Text: The course will not have a required textbook. If you wish to supplement the resources I have offered you in the course, Owen and Rogers's book or McDavid and Hawthorn's text would be useful additions to your professional library.

http://www.amazon.ca/Program-Evaluation-Approaches-John-Owen/dp/076196178X

Program Evaluation and Performance Measurement: An Introduction to Practice (McDavid and Hawthorn, 2005).

Course Objectives
• To define and understand “What is program evaluation?”

• To understand the historical foundations of program evaluation.

• To identify and develop appropriate evaluation assessment techniques used in educational and other program settings.

• To understand appropriate data gathering techniques for evaluation purposes.

• To demonstrate the ability to create data gathering instruments.

• To understand the process and procedures involved in data analysis.

• To understand the unique roles and responsibilities of the various members of an evaluation team.

• To become aware of the ethical responsibilities of evaluators and the political implications of evaluations.

• To prepare for learning in a variety of authentic situations.

http://www.schoolofed.nova.edu/arc/research_courses/sylpep.pdf

http://www.epa.gov/evaluate/whatis.htm
www.epa.gov/evaluate/whatis.pdf
http://cde.athabascau.ca/syllabi/mdde617.php
www.gsociology.icaap.org/methods/evaluationbeginnersguide.pdf
www.ocde.k12.ca.us/downloads/assessment/WHAT_IS_Program_Evaluation.pdf
www.en.wikipedia.org/wiki/Program_evaluation



Assignments


Assignment One: Choose a completed evaluation; any kind, your choice. Explain the model or process used in the evaluation and identify what you consider the strengths and weaknesses of the evaluation and of the approach that was taken. The finished piece should be 500 words in length. You will share your ideas on your blog.
Due Sept 12.
Value - 10 marks

Assignment Two: I will e-mail you a simulated program case study. You will choose a model or approach that you feel is appropriate to evaluate this program and explain why you think it would work. This will be a one-page document that you will post on your blog.
Due Sept 19.

Value – 10 Marks

Assignment Three:
Using your test organization or program, you will perform an evaluation assessment. This step is used to determine the feasibility and direction of your evaluation. You will post your assessment on your blog.
Due October 15.
Value - 10 marks

Assignment Four:
Objectives: to become familiar with logic models as a method for understanding the workings of an organization.
To map out and get a thorough overview of your chosen organization or program, you need to create a logic model. It can be in the form of a flow chart or any of the other models we have reviewed in the course. The assignment will consist of a logic model (generally a single page) and a description of the model. This will also be posted on your blog. It is due October 15. Value - 10 marks

Assignment Five:
You will design and test a short survey. Include a variety of question types such as scale rating, short answer, and open-ended. You will submit the original version and the modified version based on the testing of the survey with four individuals. You will post your information on your blog.

This assignment is due on November 20. Value - 10 marks

Major assignment: Evaluation Plan

Objective: To demonstrate the ability to integrate the different tools and theories addressed in the class into an evaluation plan.

You will design an evaluation plan for the organization or program of your choice. Your final assignment will be a culmination of all we have done in the course. The plan will be a theoretical paper that outlines the program to be evaluated and the goals or objectives to be evaluated. It will demonstrate your ability to analyze a program, determine a suitable evaluation plan, and create the instruments you would use to conduct the analysis. Essentially, the purpose of an evaluation plan is to convince someone that you should be the evaluator for the evaluation. Hence, you want to convince an agency/institution/individual that you have the “best” team to perform the evaluation. So, an important piece of the evaluation plan is for you to describe, or elaborate upon, your reasons for selecting particular foci and approaches. We will address the specifics of this plan later in the course.

Due December 11, 2009. Value - 50 marks

Module 1
What is evaluation?
What is Program Evaluation?
Please review the following material before we meet on Sept 5; it will give you a grounding in the concepts behind program evaluation.
http://www.managementhelp.org/evaluatn/fnl_eval.htm
http://pathwayscourses.samhsa.gov/eval101/eval101_toc.htm
This module is intended to introduce you to the concepts of program evaluation. It is not program- or content-specific: whatever area you are most knowledgeable in, PE is a tool you can apply to generate a better understanding of what is happening. The evaluation approach you choose may be based on your personal approach to a situation, or the situation itself may point to a particular method; a number of approaches will fit any given setting.
Most program evaluations are short term. They are a snapshot of what is happening at a particular point in time. Longitudinal evaluations are difficult to conduct, as they are more time consuming and costly. Essentially you are trying to answer the question, "Does the program do what it says it does?" Because evaluation is ongoing, your evaluation may steer your client in a particular direction, and it will also be used to inform the next evaluation.
PE is essentially research into an organization, program, or process.
As you will learn when we study logic modelling, four aspects of evaluation may include (see the sketch after this list):
1. Input
2. Output
3. Outcome
4. Impact
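
To make these four aspects concrete, here is a minimal sketch in Python. The after-school literacy program and every entry in it are invented for illustration; they are not from the course material.

```python
# A logic model reduced to its four core aspects, filled in for a
# hypothetical after-school literacy program.
logic_model = {
    "input":   ["two tutors", "donated books", "a $5,000 grant"],
    "output":  ["40 tutoring sessions delivered", "25 students enrolled"],
    "outcome": ["participants' reading scores improve by one grade level"],
    "impact":  ["higher long-term graduation rates in the community"],
}

for aspect, entries in logic_model.items():
    print(f"{aspect.upper()}:")
    for entry in entries:
        print(f"  - {entry}")
```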

The Canadian government views evaluation as:
1. Planning
2. Evaluating
3. Recommending

You will develop your own approach to evaluation. It may be based on an existing model or on a combination of different factors that suit the type of evaluator you are and the situation you are involved in. The following section introduces you to some of the formalized approaches to evaluation.
Major theoretical concepts behind Program Evaluation
Evaluations can be formative, intended to provide feedback for modifying an ongoing program, or summative, designed to determine whether a process or program was effective, not necessarily to change it. Here is a comparison of the two approaches.
http://jan.ucc.nau.edu/edtech/etc667/proposal/evaluation/summative_vs._formative.htm

Many models have been developed by those who have studied PE over the years. A quick overview of the major models and the theorists who developed them is presented in this PDF document by Michael Scriven, one of the leading academics in the area of program evaluation. It is important to understand that a variety of models exist and that program evaluation has evolved in much the same way that research models in general have changed. Some of the better-known models are the CIPP, Discrepancy, Adversary, Goal-Free, and Transactional models. Here is an overview of the history and the major theoretical models in program evaluation.
Resources
809 Delicious account: http://delicious.com/wi11y0/809
Canadian Evaluation Society
http://www.evaluationcanada.ca/site.cgi?s=1
American Evaluation Association
http://www.eval.org/
Helpful textbooks
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines. White Plains, NY: Longman.
Owen, J. M., & Rogers, P. J. (1999). Program evaluation: Forms and approaches. Thousand Oaks, CA: Sage.
Posavac, E., & Carey, R. (2003). Program evaluation: Methods and case studies (6th ed.). New Jersey: Prentice Hall.
ISBN #: 0130409669
Evaluation cookbook
http://www.icbl.hw.ac.uk/ltdi/cookbook/
Assignments for this module
1. Choose a completed evaluation; any kind, your choice. Determine the model used and identify what you consider the strengths and weaknesses of the evaluation and of the approach that was taken. The finished piece should be 500 words in length. You will share your ideas on your blog. Due Sept 12
2. I will e-mail you a simulated program case study. You will choose a model or approach that you feel is appropriate to evaluate this program and explain why you think it would work. This will be a one-page document that you will post on your blog. Due Sept 19

Module 2 - The process of evaluation

Before you conduct an evaluation, you need as complete an understanding of the focus of your evaluation as possible. You need to learn all that you can about the program, its purpose, and the people you will be working with. This means generating a thorough understanding of the organization connected to your evaluation. A good place to start is with any previous evaluations. This information will let you know how the organization has dealt with evaluations in the past and may help you determine whether there is a willingness to put the results of a study into practice.
The following resources offer a systematic look at the steps involved in an evaluation.
http://www.uwex.edu/ces/pdande/evaluation/index.html
Designing Evaluations: http://www.wmich.edu/evalctr/jc/DesigningEval.htm
Pdf version from the University of Wisconsin
Here is the checklist to get you through the process as a Word file.
A next step is to design a flow chart or a model of the organization you are working with that shows how the organization operates and how what you are evaluating fits into the big picture. This casts a wide net: it shows where you will look for input and who will be affected by the outcomes of your evaluation. This can be done with a flow chart or with what is known as a logic model. Logic models give a thorough breakdown of an organization.

Follow this link to learn about logic models
http://www.tbs-sct.gc.ca/eval/tools_outils/RBM_GAR_cour/Bas/module_02/module_0201_e.asp
http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html

Here is a helpful checklist for preparing to begin your evaluation
http://www.managementhelp.org/evaluatn/chklist.htm

Working with your clients
It is important for those you are working with to understand what you will and will not do. They must also understand what is needed from them and their organization. This is where the art and the science meet. You will need to carefully judge the political climate and the organization's willingness to actually change. It may be that the higher-ups in an organization are implementing an evaluation without the support of the members of the organization. You may be seen as a threat, and it may make sense for you to spend time working on the relationship component of the evaluation.

Assignments for this module

Case study
You will need to select an organization or program to use as a model for the rest of the course. It can be an educational program, a government program, or a particular organization that has a specific mandate. It may be beneficial to choose an organization in your local community so that you can access individuals for input on your course work. Once you have decided whom you would like to use, please e-mail me your choice and why you chose the program or organization.

Assignment #3
Using your test organization or program, you will perform an evaluation assessment. This step is used to determine the feasibility and direction of your evaluation. You will post your assessment on your blog. Due October 15.

Assignment #4
To map out and get a thorough overview of your chosen organization or program, you need to create a logic model. It can be in the form of a flow chart or any of the other models we have reviewed in the course. This will also be posted on your blog. It is due October 15.

Module 3 - Gathering and evaluating data
By this point you should be confident that you can proceed with the evaluation, based on the results of your evaluation assessment. You will now need to create a set of instruments to generate data that will answer your questions about the chosen program or organization.

Designing your evaluation
Once you have done the preliminary work with the client and settled the focus of the evaluation, you need to develop the measures and instruments you will use to answer your questions. This means choosing the format and type of each instrument and then testing the instruments to ensure that they will work properly. Here is an overview of some of the different options you have for gathering data. You may want to begin by looking at any information that has already been gathered by the organization; this may be survey data, graduation rates, or financial records. You will likely create a survey of the major stakeholders or interview them individually or in a focus group. This file will give you a good grounding in designing surveys and working with focus groups.
Creating surveys
A survey is a common way to generate data from stakeholders, employees, and clients connected to a program or policy. Clear, well-written questions presented in a variety of formats will go a long way toward generating reliable data. You can use existing surveys and modify them to work with the specifics of your particular evaluation. Here is a sample survey for you to review. Traditionally this has been done using a paper form. That has worked well, but there is now the option of using the Internet, which allows data to be collected and analyzed more easily. The U of S has an online survey tool available for you to use. It can be accessed at http://www.usask.ca/its/services/websurvey_tool/
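
If you want to prototype a survey before committing it to paper or to an online tool, the structure is simple enough to sketch in code. The Python example below is a hypothetical illustration mixing the three question types this module mentions (scale rating, short answer, and open-ended); the questions themselves are invented.

```python
# A tiny survey prototype mixing the question types covered in this module.
survey = [
    {"type": "scale", "text": "The program met my needs.", "scale": (1, 5)},
    {"type": "short", "text": "How many sessions did you attend?"},
    {"type": "open",  "text": "What would you change about the program?"},
]

def administer(questions):
    """Ask each question at the console and collect the answers."""
    responses = []
    for q in questions:
        if q["type"] == "scale":
            lo, hi = q["scale"]
            prompt = f"{q['text']} ({lo}=strongly disagree, {hi}=strongly agree): "
        else:
            prompt = f"{q['text']} "
        responses.append(input(prompt))
    return responses

if __name__ == "__main__":
    print(administer(survey))
```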

Here are two other useful resources for creating surveys.
Creating a paper survey
Getting better results from online surveys
Focus groups
Focus groups, where you gather people together to discuss issues, allow you to meet with many people at once and collectively generate data. They let you bring together homogeneous or mixed groups whose members share and feed off one another.
Here is an example of a document used to organize and conduct a focus group.
Validity of your instruments
Having confidence in your data gathering instruments is very important: you cannot have useful results if they are based on flawed data. This is why evaluators often use instruments that have been used and tested by others. If possible, taking an existing survey and modifying it slightly to fit your client's needs will give you peace of mind and a better measure of what you are trying to assess. If you are designing a survey from scratch, you need to make sure that what you are asking, and how you are asking it, is correct. This means sharing your instrument with others in the know, or with experts in measurement. Pilot testing and usability testing your survey with a group similar to the one you will be surveying is also very important.
Once you are confident that your instruments are valid and reliable then you can gather your data.


Data analysis

Results must be shared for your evaluation to be of use to anyone. You should make recommendations to those who offer the program, and this cannot be done without a careful analysis of the data that you have collected. Once you have gathered enough data, you will have to compile the results and compare them with the original objectives. This link gives you some insights into the process of data analysis: http://www.uwex.edu/ces/tobaccoeval/resources/surveynotes28aug2001.html#defs
Here is an example of an interview transcript that has been analyzed (qualitative analysis). Read through it and then test your own skills against what the researcher discovered. http://hsc.uwe.ac.uk/dataanalysis/qualTextDataEx.asp
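
To give a feel for what analyzing a transcript involves, here is a deliberately simple Python sketch. The themes, keywords, and transcript lines are all invented, and keyword matching captures only the bookkeeping; real qualitative analysis is interpretive and iterative.

```python
# A toy illustration of qualitative coding: tagging interview sentences
# with themes based on keyword matches.
themes = {
    "access":     ["waitlist", "distance", "cost", "transport"],
    "staff":      ["tutor", "teacher", "facilitator", "staff"],
    "confidence": ["confident", "nervous", "comfortable", "afraid"],
}

transcript = [
    "The tutors were patient and the staff made me feel welcome.",
    "I was nervous at first but grew more confident each week.",
    "Getting there was hard because of the cost of transport.",
]

for sentence in transcript:
    matched = [theme for theme, words in themes.items()
               if any(word in sentence.lower() for word in words)]
    print(f"{matched or ['(uncoded)']}: {sentence}")
```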

From the same website here is a look at quantitative data analysis. http://hsc.uwe.ac.uk/dataanalysis/quantWhat.asp
Don't be scared, you will not have to become an expert in this type of analysis (At least not for this class).
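
As a small taste of the quantitative side, the sketch below compiles hypothetical scale-rating responses into the kind of summary you would compare against a program's original objectives. The ratings and the objective threshold are invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical 1-5 ratings for "The program met my needs."
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

# Compile the results into a simple summary.
summary = {
    "n": len(ratings),
    "mean": round(mean(ratings), 2),
    "stdev": round(stdev(ratings), 2),
    "percent_favourable": round(100 * sum(r >= 4 for r in ratings) / len(ratings)),
}
print(summary)

# Compare against an (invented) program objective: a mean rating of at least 4.0.
print("Objective met" if summary["mean"] >= 4.0 else "Objective not met")
```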

Assignment #5
You will design and test a short survey. I have included an example for you to use as a guide. Include a variety of question types, such as scale rating, short answer, and open-ended. You will submit the original version and a modified version based on testing the survey with a group of four different individuals. You will post your information on your blog. This assignment is due on November 16, 2009.

Module 4 - Ethics of evaluation
As an evaluator, you need to be objective. Your primary purpose is to serve the needs of your client; that being said, you must design and conduct your evaluation with the needs and protection of all those impacted by your results in mind. There is often fear associated with the evaluation of one's performance. This is especially true when the evaluator is someone coming from outside who does not have the chance to take a longitudinal look at the program or organization.
Program evaluation standards are put forth by the American Evaluation Association to guide evaluators in their conduct.

This PowerPoint presentation looks at the guiding principles of evaluation.
Sharing the Evaluation Results (I found this and modified it for our class): http://www.busreslab.com/ESATsharingresults.htm


It is critical to share results in a timely manner for at least two reasons:

1. Everyone must know where the organization as a whole and their individual areas stand if you are going to fully leverage the creativity and efforts of the employee base.

2. Participants need to know that the time they spent in completing the survey was worthwhile.

Each organization has its own information-sharing culture in place. In some cases, particularly if the survey showed communication to be a problem, the process will need some adjustment; still, each organization will have an approach to information dissemination that it typically leverages. As such, modifications to our recommended approach may be in order to account for an organization's information-sharing culture.

The Basic Principles of Sharing Survey Results

1. Be honest. An organization must be willing to share both its strengths and its areas in need of improvement. Employees will see through attempts to hide or "spin" information.

2. Be timely. The sooner you release results, the sooner the organization can begin to move toward positive change.

3. Share appropriate information at each level. Senior management will need encapsulated results and access to detailed results for the organization as a whole and differences between divisions/departments. Division managers will need to know how their division compares to the organization as a whole and how departments in the division compare to each other. Department managers will need to know how their results compare to the organization as a whole and to the division to which they belong. (A minimal aggregation sketch follows this list.)

4. Don't embarrass people in front of their peers. Teamwork and morale can be harmed if, for example, low-scoring departments are pointed out in front of all department managers. Instead, let each department manager know how they fared compared to other departments via one-on-one meetings.

5. Discuss what happens next. After the results have been presented, let the audience know what steps will be taken to improve those items in need of improvement.

6. Respect confidentiality. Don't present information that would make people feel that their responses are not confidential. For example, it would not be appropriate for anyone in the organization to have access to comments for a small department, since some people may be able to accurately guess who made what comment. Your research supplier should assist in this by not providing information that could breach, or could be perceived to breach, confidentiality.
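
Principle 3 above is straightforward to operationalize once each response is tagged with the respondent's unit. This Python sketch (divisions, departments, and scores all invented) rolls the same ratings up to the department, division, and organization levels, so that each audience gets the comparison it needs.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses tagged with division and department.
responses = [
    ("Sales", "Inside Sales", 4), ("Sales", "Inside Sales", 3),
    ("Sales", "Field Sales", 5),  ("Sales", "Field Sales", 4),
    ("Operations", "Shipping", 2), ("Operations", "Shipping", 3),
]

by_division = defaultdict(list)
by_department = defaultdict(list)
overall = []
for division, department, score in responses:
    by_division[division].append(score)
    by_department[(division, department)].append(score)
    overall.append(score)

# Each audience gets the level of detail it needs (principle 3).
print("Organization overall:", round(mean(overall), 2))
for division, scores in by_division.items():
    print(f"Division {division}: {round(mean(scores), 2)}")
for (division, department), scores in by_department.items():
    print(f"  Department {department} ({division}): {round(mean(scores), 2)}")
```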

Process Considerations

Have a plan in place to disseminate information before the survey has been completed.

1. The CEO/president should be briefed by the internal project manager and/or the research supplier.
2. The CEO/president should share the results with division managers. Overall results should be shared in a group setting. Individual results should be shared in one-on-one meetings.
3. Key findings and implications should be highlighted in each presentation. Detailed results also should be presented. However, take care to avoid drowning people in information. This can be done by relying more heavily on graphics than on detailed tables to communicate.
4. Give employees an overview of overall results through the best means possible. For some organizations, this will be in a group setting. For others, it will be via email, Intranet, or newsletter. Consider using multiple methods.
5. Department managers should share departmental results with employees in a group meeting. It may be helpful to have an HR manager assist in the presentation. If HR managers will be part of this process, planning ahead will help the meetings to proceed smoothly and take place in a timely manner.

In all communications, make sure the communication is "two way." Questions should be encouraged.

Assignment
The final assignment in this class will be a proposed evaluation of the program of your choosing. Consult the syllabus and other material I have shared with you for details.

Here are some sample proposals.
Proposal 1
Proposal 2

These are examples of requests for proposals.
This is how organizations solicit proposals for evaluations.

Request for proposal 1
RFP 2
RFP 3
Evaluation reports
Sask Aboriginal Literacy Report pdf
Sask Literacy Report pdf

Saskhealth evaluations http://docs.google.com/gview?a=v&q=cache:V5tQS5tiZQgJ:www.health.gov.sk.ca/hen-newsletter-072006+evaluation+proposal+government+saskatchewan&hl=en

http://www.nrcan.gc.ca/evaluation/reprap/2006/e06003-eng.php
________________________________________________________________