EPISTEMOLOGICAL BELIEFS ASSESSMENT FOR PHYSICAL SCIENCE (EBAPS)

Table of contents

  1. Introduction
  2. Intended 'audience'
  3. Why another epistemological assessment?
  4. Subscales
  5. Items (Go here for the actual survey)
  6. Logistics and scoring

 


Introduction

EBAPS is a forced-choice instrument designed to probe students' epistemologies: their views about the nature of knowledge and learning in the physical sciences. It was initially developed and validated by Andrew Elby, John Frederiksen, Christina Schwarz, and Barbara White at the University of California, Berkeley.

 

Intended 'audience'

EBAPS is aimed at high school and college students taking introductory physics, chemistry, or physical science. It's optimized for algebra-based courses. A version of EBAPS suitable for purely conceptual courses (often aimed at liberal arts majors) is under development.

 

Why another epistemological assessment?

The Maryland Physics Expectations Survey (MPEX), developed by the Physics Education Research Group at the University of Maryland, and the Views About Science Survey (VASS), developed by Halloun and Hestenes at Arizona State University, probe a combination of students' epistemological beliefs and their course-specific expectations and study habits. In addition, those surveys work best if students' intuitive epistemologies take the form of consistent, articulate beliefs. Although epistemology and expectations cannot be completely disentangled, EBAPS attempts to focus on epistemology to the extent possible, and also attempts to probe tacit, contextualized epistemological knowledge that may affect students' learning behavior. For more details, including the justification for, development of, and validation of EBAPS, please see the Idea Behind EBAPS, a mini-paper. Section 3 of that paper discusses validity and reliability.

 

Subscales

EBAPS probes students' views along five non-orthogonal dimensions:

1. Structure of scientific knowledge. Is physics and chemistry knowledge a bunch of weakly connected pieces, consisting mainly of facts and formulas, without much overall structure? Or is it a coherent, conceptual, highly structured, unified whole?

2. Nature of knowing and learning. Does learning science consist mainly of absorbing information? Or, does it rely crucially on constructing one's own understanding by working through the material actively, by relating new material to prior experiences, intuitions, and knowledge, and by reflecting upon and monitoring one's understanding?

3. Real-life applicability. Are scientific knowledge and scientific ways of thinking applicable only in restricted spheres, such as a classroom or laboratory? Or, does science apply more generally to real life? These items tease out students' views about the applicability of scientific knowledge, as distinct from their own desire to apply science to real life, which depends on their interests, goals, and other non-epistemological factors.

4. Evolving knowledge. This dimension probes the extent to which students navigate between the twin perils of absolutism (thinking all scientific knowledge is set in stone) and extreme relativism (making no distinctions between evidence-based reasoning and mere opinion).

5. Source of ability to learn. Is being good at science mostly a matter of fixed natural ability? Or, can most people become better at learning (and doing) science? As much as possible, these items probe students' epistemological views about the efficacy of hard work and good study strategies, as distinct from their self-confidence and other beliefs about themselves.


 

Items

You can view all 30 EBAPS items on the web, color-coded by subscale, or sorted by subscale. (The subscale sort also includes the scoring scheme, discussed below.) You can also download a student-usable version of the survey in Microsoft Word format.

 

Logistics and scoring

Most students need 15 to 22 minutes to complete EBAPS. Scantron forms are recommended.

Each item is scored on a scale of 0 (least sophisticated) to 4 (most sophisticated). The scoring scheme is non-linear, to account for question-by-question variation in, for instance, how sophisticated a neutral response is. A subscale score is simply the average of the student's scores on every item in that subscale. (When an item within a given subscale is left blank, the average is calculated without that item.) Sometimes we multiply through by 25 in order to report subscale scores on a scale of 0 to 100.
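The subscale arithmetic is simple enough to sketch in code. Below is a minimal illustration in Python; it is a hypothetical helper, not part of the official Excel template, and it assumes each response has already been converted to its 0-to-4 sophistication score by the item-specific, non-linear scoring scheme, with blank items represented as None.

    # Hypothetical sketch of EBAPS subscale scoring (not the official
    # Excel template). Assumes each response has already been mapped to
    # its 0-4 sophistication score; blank items are represented as None.
    def subscale_score(item_scores, scale_of_100=False):
        answered = [s for s in item_scores if s is not None]
        if not answered:
            return None  # the student skipped every item in this subscale
        average = sum(answered) / len(answered)
        return 25 * average if scale_of_100 else average

    # One item left blank: average the remaining three scores.
    print(subscale_score([4, 2, None, 3]))        # 3.0
    print(subscale_score([4, 2, None, 3], True))  # 75.0

For example, a student who scores 4, 2, and 3 on three items of a subscale and leaves a fourth blank receives (4 + 2 + 3) / 3 = 3.0, or 75 on the 0-to-100 scale.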

To automate the scoring using Microsoft Excel, see the instructions and download the Excel scoring template.