Aligning With and Across Courses


We report the results of our measurable outcomes upward from the course level, where assessment occurs, to the program and university levels. These results show us how well we are delivering our programs and, in turn, the impact we are making on student learning and development. To make it easier to connect course outcomes to program outcomes, try the strategy of mapping (aligning) to:

  • Identify opportunities for learning & points for assessment.
  • Identify strengths, gaps, redundancies, & misalignments.
  • Provide evidence of meeting standards, goals & outcomes.
  • Illustrate how the curriculum, co-curriculum, and extracurricular activities integrate and align.

Explore these Mapping Examples, which show how to map course outcomes to assessments, map courses to program outcomes, and map to undergraduate campus-wide learning outcomes.


Writing Learning Outcomes

Start with the end in mind… What knowledge, skills, attitudes, competencies, and habits of mind are expected of students as a result of participating in your course and in a program of study? Be transparent about these expectations by placing learning outcomes on each course syllabus and sharing program outcomes on all program websites.

What makes for a good outcome? An outcome must be measurable, meaningful and manageable. It specifies what you want the student to know or do. A good outcome statement also uses active verbs. An outcome consists of three components:

  • Audience (A) = the person doing or expressing
  • Behavior (B) = what the student will do or report
  • Condition (C) = what the student needs to do to succeed

Examples of learning outcomes include:

  • Students in an introductory science course will be able to recall at least 5 of the 7 elements of the periodic table.
  • Students in the Psychology Program will design a research experiment that they will carry out in their capstone course.
  • Students in the Service Learning Leadership Program will demonstrate increased leadership skills by successfully completing a leadership skills inventory, as indicated by a score of at least 80% correct.
  • Instructional design candidates will prepare for success in school settings as skilled professionals through demonstrated competency with multiple technologies.

Resources

A helpful and frequently used resource when writing student learning outcomes is Bloom's Taxonomy of Cognitive Skills. Bloom's taxonomy provides verbs associated with a ranking of thinking skills, moving from less complex thinking skills at the knowledge level to more complex thinking at the evaluation level. Make sure to set the level of the outcome to match the level at which the content is taught.


Assessment Types

It is best practice to collect direct evidence of student learning as much as possible. Direct evidence requires students to demonstrate their learning, through writing samples, performances, projects, and tests, to name a few. Self-reports provide useful information about students' perceptions of their experiences. Assessment conducted during a course or program of study is called formative assessment; assessment conducted at the end of a course or program is referred to as summative.

There are so many ways to measure what students know and can do, so why limit yourself to just one or two methods? Here are some tips to get you started!

  • Choose assessments that engage the student in meaningful ways. 
  • Select methods that answer your questions and provide evidence to show whether students have achieved the expected learning outcomes related to educational objectives and goals. 
  • Collect direct evidence, rather than self-reports, of student learning. 
  • Use a combination of assessment approaches to measure student learning. This includes gathering evidence through research papers and other process reports, multiple choice or essay examinations, personal essays, journals, computational exercises and problems, case studies, audiotapes, videotapes, and short-answer quizzes. This information may be gathered from in-class or out-of-class assignments.

Are there existing data that can be used or must new data be collected?

  • Alumni, employer, student surveys
  • Exit interviews with graduates
  • Graduate follow-up studies
  • Percentage of students who go on to graduate school
  • Retention and transfer studies
  • Job placement statistics
  • Activities selected or elected by students
  • Faculty/student ratios
  • Percentage of students who study abroad
  • Enrollment trends
  • Percentage of students who graduate within five-six years
  • Diversity of student body

Classroom Assessment Techniques examples from Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques (2nd ed.). San Francisco: Jossey-Bass.

Here is a detailed list of assessment measures to consider.


Rubrics

Rubrics are used to assess a variety of student work and activities, including capstone projects, collections of student work (e.g., portfolios), direct observations of student behavior, evaluations of performance, external juried reviews of student projects, and photo and music analysis, to name a few. The main advantages of rubrics are that they help standardize assessment of more subjective learning outcomes, such as critical thinking or interpersonal skills, and that they are easy for practitioners to use and understand. Rubrics clearly articulate the criteria used to evaluate students.

Rubric Examples

  • Texas A&M University has rubrics for 15 different leadership competency areas, including communication, diversity, and critical thinking. View Texas A&M’s full set of rubrics
  • American Association of Colleges and Universities (AAC&U) has rubrics that cover areas such as information literacy, teamwork, and civic engagement, to name a few. Each VALUE rubric contains the most common and broadly shared criteria or core characteristics considered critical for judging the quality of student work in that outcome area. View AAC&U Value Rubrics
  • PSU University Studies Rubrics for 4 UNST Goals: Inquiry and Critical Thinking, Communication, Diversity, Equity and Social Justice, and Ethics, Agency, & Community. View PSU University Studies goals and rubrics

Quizzes and Tests

There are two primary testing alternatives: locally developed/faculty-generated tests and quizzes, and commercially produced standardized tests and examinations.

Locally Developed/Faculty-Generated Tests and Quizzes

Locally developed tests and quizzes are the most widely used method for evaluating students. Pre-test/post-test assessments are locally developed and administered at the beginning and end of a course or academic program. These test results enable faculty to monitor student progression and learning over prescribed periods of time. The advantages of locally developed tests include the closest match to course/program content, faster feedback because faculty do the grading, and faculty control over the interpretation and use of results.
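
For illustration, here is a minimal sketch of summarizing pre-test/post-test results for a course; the scores and the way gains are summarized below are hypothetical, not a prescribed method.

```python
from statistics import mean

# Hypothetical matched pre- and post-test scores (percent correct) for one course.
pre_scores = [55, 62, 48, 70, 66]
post_scores = [78, 85, 60, 88, 79]

# Per-student gain shows how much each student improved over the term.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
print(f"Mean pre-test:  {mean(pre_scores):.1f}")
print(f"Mean post-test: {mean(post_scores):.1f}")
print(f"Mean gain:      {mean(gains):.1f} points")
```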

Commercially Produced Standardized Tests and Examinations

Commercially produced tests and examinations are used to measure student competencies under controlled conditions. These tests are developed and normed nationally to determine the level of learning that students have acquired in specific fields of study. The advantages include the ability to compare results from year to year or to other groups; a national standard that can serve as the program's performance criterion; convenience; and, for well-developed tests, available reliability and validity information.


Survey

A survey is an indirect method of assessment that can measure specific goals or learning outcomes related to the teaching and learning in academic programs.  For example, through students’ self-reports a survey can assess:

  • the impact of educational programming and experiences on students' learning, development, and satisfaction
  • students' preparedness for the workforce and continuing education

Best research practices indicate that a good survey is short and contains a combination of quantitative (closed-ended) and qualitative (open-ended) questions.

How do I develop a survey?

Below are general steps modified from DeVellis (2012).

Step 1: Decide what to measure. In this step you decide what constructs to measure. For example, you might decide to ask students about their learning experiences in relation to program goals.

Step 2: Generate items. Once you have defined your construct(s) of interest, you need to write survey items. Typically, you generate more items than you will ultimately use in your survey. There are two main reasons for this:

  • Upon review of the items, you will find that some do not work as well as you would like.
  • It is easier to find 10 good items measuring a construct if you have a pool of 30 items than it is to come up with 10 great items from the start.

In creating your survey questions make sure to do the following:

  • Avoid exceptionally lengthy items.
  • Choose an appropriate reading difficulty level (between the fifth and seventh grades for most scales).
  • Avoid ambiguity.
  • Avoid double-barreled items where the item could be affirmed or denied for multiple reasons.
  • Follow conventional rules of grammar.
  • Decide if items will be positively or negatively worded. Remember, negative wording can be confusing for respondents, and negatively worded items will need to be reverse scored (see the sketch after this list).
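
As a minimal sketch, assuming a 5-point Likert scale and a hypothetical negatively worded item, reverse scoring might look like this:

```python
# Reverse-score a negatively worded item on a 5-point Likert scale:
# reversed value = (scale maximum + 1) - original response.
SCALE_MAX = 5

def reverse_score(response: int) -> int:
    return (SCALE_MAX + 1) - response

# Hypothetical responses to a negatively worded item.
raw_responses = [1, 2, 5, 4, 3]
reversed_responses = [reverse_score(r) for r in raw_responses]
print(reversed_responses)  # [5, 4, 1, 2, 3]
```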

Step 3: Determine the response scale. The third step is to determine the response scale you will use to measure the construct(s) of interest. Will you use a dichotomous scale (e.g., yes or no) or a polytomous scale (e.g., Likert agreement scale)? Will you include open-ended items?

Step 4: Review the survey. It is always best practice to have your items reviewed by an outside person to determine whether your survey has face validity, in that it measures what it purports to measure. Survey items should also be reviewed for quality, watching for items that are too lengthy, negatively worded, double-barreled, jargon-laden, ambiguous, or otherwise unclear. Make sure to have students review the survey for clarity of language and ease of use as well. This could be done through a pilot study or by involving students in the survey development process.

Considerations when collecting data for your survey

Make sure to collect relevant student background information to help make sense of the data. When examining survey data, it is important to disaggregate students' ratings by any variables that might influence their experience. For example, students might differ in their experiences based on their major, gender identity, or enrollment status (full- or part-time), to name a few.
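
As a rough sketch, assuming responses have been exported to a table with background columns (the column names and data below are hypothetical), disaggregation might look like this:

```python
import pandas as pd

# Hypothetical survey export: one row per respondent, with background
# variables and a 1-5 rating on a program-goal item.
df = pd.DataFrame({
    "major": ["Psychology", "Biology", "Psychology", "Biology"],
    "enrollment_status": ["Full-time", "Part-time", "Full-time", "Full-time"],
    "goal1_rating": [4, 3, 5, 2],
})

# Disaggregate the rating by each background variable of interest.
for group_var in ["major", "enrollment_status"]:
    print(df.groupby(group_var)["goal1_rating"].agg(["count", "mean"]))
```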

How do I make sense of findings?

Make sure to analyze your data at the level of the question scale. This means that if you used a dichotomous scale (e.g., yes or no), the level of analysis is limited to counts and percentages. If you used an interval-level scale (e.g., a Likert-type agreement scale), you can report means and standard deviations.
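
A minimal sketch of this distinction, assuming responses are stored in simple Python lists:

```python
from collections import Counter
from statistics import mean, stdev

# Dichotomous item (yes/no): report counts and percentages only.
yes_no = ["yes", "no", "yes", "yes", "no"]
counts = Counter(yes_no)
for answer, n in counts.items():
    print(f"{answer}: {n} ({100 * n / len(yes_no):.0f}%)")

# Likert-type agreement item treated as interval (1-5): means and
# standard deviations are appropriate.
likert = [4, 5, 3, 4, 2]
print(f"M = {mean(likert):.2f}, SD = {stdev(likert):.2f}")
```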

Survey tools

There is a variety of tools you could use to create and administer your survey. The two most common survey tools used at PSU are:

  • Qualtrics: If you are looking for more advanced data collection, this is the best tool to use. This software allows you to both collect and analyze data. The platform is easy to learn and offers a variety of training webinars. Visit the PSU OIT Qualtrics resource page.

  • Google Forms: If you are looking to obtain quick and easy feedback or suggestions regarding a course or meeting, this type of data collection is a great choice. Data are saved to a file that can be exported for more detailed analyses. Visit the PSU OIT Google Forms resource page.

For more information on how to create surveys with Google Forms or Qualtrics, visit the PSU OIT resource page.

When should I consider doing a survey?

As you decide which survey fits your needs, it is important to evaluate beforehand the length, the number of participants, the costs and benefits, and the time and effort invested in the process (Wise & Barham, 2012).

Types of surveys

Student exit survey/Alumni survey. Information about students' collegiate experiences can be gleaned through surveys and exit interviews. Surveys are often conducted on exit from the university and then again as an alumni survey at scheduled intervals (e.g., six-month or one-year, three-year, five-year, and 10-year follow-ups). Student interviews are typically conducted on exit.

These methods indirectly assess, through students' self-reports, the impact of educational programming and experiences on students' learning, development, satisfaction, and preparedness for the workforce and continuing education, to name a few.

There are myriad aspects of the students’ university experience one can assess:

  1. Employment history
  2. Career development (career preparedness)
  3. Graduate school preparedness
  4. Learning outcomes achievement related to campus-wide learning outcomes and/or program-level learning outcomes
  5. Educational experiences related to:
  • Reflections on classroom and faculty experiences
  • Participation in and satisfaction with academic support services
  • Participation in and satisfaction with student services and programs
  • Involvement and satisfaction with co-curricular and out-of-class activities and experiences
  • Perceptions of and satisfaction with the overall collegiate environment

Looking for more assessment resources and guidance?

The Office of Academic Innovation is here to support you. Don’t hesitate to contact Raiza Dottin, Ed.D., Associate Director of Teaching, Learning, and Assessment at dottin@pdx.edu.


Determining Data Quality

Consistency. If you are using a rubric, is there a process to check for inter-rater reliability?
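
One simple check, sketched below assuming two raters have scored the same set of artifacts with a rubric (the scores are hypothetical), is percent agreement; a statistic such as Cohen's kappa is a more rigorous option.

```python
# Percent agreement between two raters scoring the same six artifacts
# with a 4-level rubric (scores are hypothetical).
rater_a = [3, 2, 4, 1, 3, 2]
rater_b = [3, 2, 3, 1, 3, 2]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
print(f"Percent agreement: {percent_agreement:.0f}%")  # 83%
```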

Quality. Basically, you want to know whether your assessment method is credible. Here are some ways to check:

Quantitative Assessment

  1. Content Validity: Is there a match between test (assessment) questions and the content or subject area assessed?
  2. Face Validity: Does the assessment appear to measure a particular construct as viewed by an outside person?
  3. Content-related Validity: Does an expert in the testing of that particular content area think it is credible?
  4. Curricular Validity: Does the content of an assessment tool match the objectives of a specific curriculum (course or program) as it is formally described?
  5. Construct Validity: Does the measure assess the underlying theoretical construct it is supposed to measure (i.e., is the test measuring what it purports to measure)?
  6. Consequential Validity: Have you thought about the social consequences of using a particular test for a particular purpose?

Qualitative Assessment

  1. Have you accurately identified and described the students for whom data were collected?
  2. Can the findings be transferred (applied to) to another similar context?
  3. Is there dependability in your accounting of the changes inherent in any setting as well as changes to the assessment process as learning unfolded?
  4. Can the findings be confirmed by another?

Sampling. For program review, we ideally want a combination of assessment evidence to address program goals. This evidence includes assessing all students in the program at some times and only a subset of students at others. This difference often shows up in the choice between quantitative and qualitative assessment methods.

Quantitative Methods

A random sample drawn from a larger group or population gives every individual in that group an equal chance of being chosen. In a simple random sample, individuals are chosen at random and no more than once, which prevents bias that would undermine the validity of the results. In sampling, we strive for a sample that is representative of the population from which it was drawn.
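
A minimal sketch of drawing a simple random sample without replacement, using a hypothetical roster of student IDs:

```python
import random

# Hypothetical roster of student IDs for the program.
roster = [f"student_{i:03d}" for i in range(1, 201)]

# random.sample draws without replacement, so no student is chosen twice
# and every student has an equal chance of selection.
random.seed(42)  # fixed seed only so the example is reproducible
sample = random.sample(roster, k=30)
print(len(sample), sample[:5])
```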

Qualitative Methods

Having a large number of students is not essential when using qualitative methods, because the goal may be to 1) explore a topic in depth, 2) try a new method that explores a topic of interest, or 3) use an assessment method that is labor intensive (e.g., portfolio reviews).


Using Assessment Results

  • How do your results provide evidence for your outcomes? 
  • What do your results say about your program process and the impact of the program on students’ learning and development?
  • Based on the results, what decisions will you make or what action will you take regarding programs, policies, and services as well as improvements/refinements to the assessment process?

Transparency and reporting

Transparency means making meaningful, understandable information about student learning and institutional performance (assessment) readily available to stakeholders. Information is meaningful and understandable when it is contextualized and tied to program and institutional goals for student learning. Practicing transparent assessment motivates us to be ethical and to draw conclusions that are well supported and clearly reasoned. Here are a few recommendations for creating more transparency:

  • Highlighting assessment results in your annual report
  • Sharing assessment findings on program websites
  • Engaging participants or stakeholders in analyzing data
  • Sharing conclusions with board or committee members
  • Presenting the results and process at a conference
  • Sharing findings in briefings and in marketing materials

Feedback loops

Information from your data analysis and interpretation should be fed back into the assessment planning process. Based on the results, what decisions will you make or what actions will you take regarding programs, policies, and services, as well as improvements or refinements to the assessment process? Make sure to assess the effectiveness of these decisions or actions at a later date.

If results were not as you hoped, then explore whether the outcomes are well matched to the program and whether the assessments are aligned with the outcomes. Do changes or improvements need to be made to the program in order to reach its goals? Is there differential performance, in that some groups of students benefitted and others did not?