During my graduate studies at the University of California, Santa Cruz, I attended the Professional Development Program (PDP) of the Institute for Scientist & Engineer Educators (ISEE) to improve my teaching. Here is the PDP model description:

The PDP encompasses a suite of workshops organized into:

  • Two intensives: The Inquiry Institute and a Design Institute.
  • A practical teaching experience in a STEM venue (usually a 1- or 2-day workshop).
  • Reflection on the PDP design and facilitation experience.

The program begins in March or April each year, and participants complete their teaching experience sometime between May and November. Participants often return to lead teaching teams or take on other leadership roles in our community.  These returners receive leadership and project management training as they adopt advanced roles. The overall program experience includes:

  • The Inquiry Institute. A 4-day series of workshops provides background in the research that supports effective and inclusive teaching, as well as training in professional skills. A major component of the Inquiry Institute is working through an authentic inquiry activity as a learner; participants engage in one of two activities, such as “Light & Shadow” or “Analog To Digital”. In teams led by a Design Team Leader (DTL), participants then take early steps toward designing their own inquiry activity. The PDP Inquiry Institute is required for all participants and is followed by attendance at a Design Institute.
  • Design Institutes. A 2.5-day series of workshops where participants work in teams led by a DTL to further develop their inquiry activity, with a focus on employing research-based teaching methods and creating equitable learning environments. The goals of the Design Institutes are for teams to: work together to design an inquiry activity, establish a process for putting into practice the equitable and inclusive strategies in their design, and plan for teaching their designs in their venues.
  • Independent Design & Preparing to Teach. After leaving the Design Institute, teams work independently and with coaching to finish designing their activity. Teams meet in person or remotely to write up the details of their design and facilitation plan. Closer to the teaching date, the teams meet with an ISEE instructor in a final workshop that transitions them from designing their activity to preparing for in-the-moment teaching.
  • Teaching Experience. PDP participants teach, as a team, the activity that they have designed. This is an essential part of the PDP experience, and it is often transformative: participants get to put the concepts and strategies learned through the PDP into practice, now with actual learners.
  • Reflection and documentation. The PDP experience culminates with each PDP team debriefing their experience. This is an important part of the program and a process that many alumni have come to value and use in their post-PDP work. ISEE provides a format that facilitates the debrief and reflection, encouraging participants to consider ways in which they could improve and redesign. Each participant completes an individual post-teaching report, which can serve as a basis for future teaching statements and job application materials.

I began in 2015, joining a biology group due to the absence of CS-related inquiry activities. Over the next three years (2016 – 2018), I returned as a Design Team Leader (DTL) and developed CS-related workshops for transfer students. What follows is a general description of our 2017 and 2018 PDP inquiry activities.

In the following sections, “Inquiry Activity” presents the actual information shared during the inquiry activity. The subsequent sections, “Design Document”, “Teaching Plan”, and “List of STEM Practices”, detail our curriculum design process.

Inquiry Activity

Previous Experience Questionnaire

We asked students to complete a questionnaire about their previous experience so that we could divide up the students and make sure that, on each day, at least half of the students knew how to program. Here are the questions:

  • A Google email address – this is needed so that students can run the notebook on Google Colab.
  • Do you have experience programming? Lots; some; minimal; none.
    • Most students voted “some” and “minimal”.
  • If you have some programming experience, which languages have you used? Python; C; Java; Matlab; None; Other.
    • Mostly Python, C, and Matlab. One student even mentioned Fortran!
  • How comfortable do you feel programming in Python?
    • Choices:
      • Very comfortable, I program with it daily.
      • Medium comfortable, I’ve done a few homework assignments using it.
      • Minimally comfortable, I’ve written a few programs/pieces of code.
      • Uncomfortable, I’ve heard that it’s an awesome scientific language but have yet to play around with it.
    • Mostly minimally comfortable and uncomfortable.
  • How much experience do you have with ML? Lots; some; minimal; none.
    • Most students voted “none”.
  • Anything else you’d like us to know?
    • A couple of students had ML experience.

Presentation Slides

This is the main presentation for our inquiry activity:

  • The goal is to help learners gain a more intuitive understanding of creating and using models, and to create an environment that allows learners to comfortably ask questions and try out ideas.
  • The inquiry is based on a machine learning activity, where students build a decision tree model to accurately predict a desired attribute of a dataset.
  • Students work in pairs, with at least one person in each pair having programming experience.
  • The activity is split into three investigations: building a decision tree model by hand, building a decision tree model using Python’s scikit-learn library with real-world data, and making predictions on new data (a minimal code sketch of the scikit-learn workflow follows this list).
  • The presentation also emphasizes the importance of understanding the potential biases in data and the social impact of machine learning applications.
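
For readers who want a feel for the coding side of the second and third investigations, here is a minimal sketch of the kind of scikit-learn workflow the activity builds up to. The dataset (scikit-learn’s bundled iris data), the max_depth value, and the “new” measurements are stand-ins for illustration only; the actual notebook and data are in the download linked further below.

```python
# Minimal sketch (not the actual workshop notebook) of fitting a decision tree
# with scikit-learn and predicting the target attribute for new, unseen rows.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a sample dataset and hold out a test set so accuracy reflects unseen data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Investigation 2: build the model and check its accuracy.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Investigation 3: predict the attribute for new data the model has never seen.
new_samples = [[5.0, 3.4, 1.5, 0.2], [6.7, 3.0, 5.2, 2.3]]  # made-up measurements
print("predicted classes:", model.predict(new_samples))
```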

Here’s our class presentation in PDF. Note that the content has been modified to remove contact information and certain links for privacy reasons. Enjoy:

Code, Data, and Handouts for Facilitators and Students

Here are our class handouts in PDF. They include the day’s timeline, how to calculate model accuracy, and a prompt for the student presentations:

For the full data and code, download the zip here:

Post-Teaching Assessment

Assessment of Learners’ Understanding of Main Content Learning Outcome

Content Goal and Its Importance. Our primary goal was to teach students how to build a decision tree to accurately predict a dataset attribute, explaining and justifying their solutions. Decision trees were chosen for their simplicity and effectiveness in introducing basic machine learning concepts, providing students with a strong foundation in predictive modeling.

Common Challenges Learners Face. Students often struggle with selecting the best features for tree splits and calculating the accuracy of their decision trees. Tracking data as the tree grows and understanding that accuracy calculations apply only to leaf nodes are common difficulties, highlighting the need for detailed, step-by-step guidance.

How We Assessed Understanding. We used a rubric focusing on attribute prediction, feature selection, accuracy as a metric, and balancing accuracy with model complexity. Many students had trouble with accuracy calculations, so we provided a handout with a simple example of calculating accuracy with a small decision tree. This helped students build their skills and confidence by applying the method to smaller versions of their trees before scaling up.
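
To make the leaf-node accuracy calculation concrete, here is a toy example in the same spirit as the handout; the data and the hand-built tree are invented for illustration and are not the handout’s actual example. Each leaf predicts one label, and accuracy is the fraction of rows whose true label matches the prediction of the leaf they end up in.

```python
# Toy illustration of calculating accuracy for a small hand-built decision tree.
# Each row: (feature_1, feature_2, true_label) -- made-up data.
rows = [
    (1, 0, "yes"),
    (1, 1, "yes"),
    (0, 1, "no"),
    (0, 0, "yes"),
    (1, 1, "no"),
]

def predict(feature_1, feature_2):
    """Hand-built tree: split on feature_1 first, then on feature_2."""
    if feature_1 == 1:
        return "yes"   # leaf 1
    elif feature_2 == 1:
        return "no"    # leaf 2
    else:
        return "yes"   # leaf 3

# Accuracy = correctly predicted rows / total rows, counted over the leaves.
correct = sum(1 for f1, f2, label in rows if predict(f1, f2) == label)
print(f"accuracy = {correct}/{len(rows)} = {correct / len(rows):.2f}")  # 4/5 = 0.80
```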

Improving Future Teaching. The assessment highlighted the need for clear, practical examples and more detailed handouts. Moving forward, we’ll tailor the learning experience to match students’ prior knowledge and experience levels, ensuring better understanding and engagement.

Assessment of Learners’ Proficiency with Main STEM Practice

Our activity focused on the STEM practice of optimization. This practice is crucial because engineers often need to make the best decisions within given constraints, and being able to justify these decisions enhances their credibility within the community. Optimization involves balancing trade-offs to achieve the best possible performance, which in our case is between model complexity and accuracy. To gauge proficiency, we used formative assessments during investigations, asking students to restate the reasons behind their decisions in their own words. This approach required them to re-examine their work closely, helping to solidify their understanding.

We assessed learners using a rubric that included:

  1. Describing the metric used to determine model goodness (accuracy in our case).
  2. Identifying and justifying important features (deciding which feature to split on).
  3. Performing trade-offs between model complexity and accuracy to optimize performance.

One example of proficiency was when a student clearly articulated the trade-offs between model complexity and accuracy. After struggling initially, they were able to explain why they chose specific features for splitting and how these choices affected the model’s performance. This demonstrated a deep understanding of the optimization process and the ability to apply it effectively.
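
One way to make the complexity-versus-accuracy trade-off tangible, assuming a scikit-learn workflow like the one in our activity, is to grow trees of increasing depth and compare training accuracy against accuracy on held-out data. The dataset and depth values below are placeholders for illustration; deeper trees tend to fit the training data better while gaining little (or even losing) accuracy on unseen data.

```python
# Sketch: compare training vs. held-out accuracy as the tree is allowed to grow deeper.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in [1, 2, 3, 5, 10, None]:  # None lets the tree grow fully
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train={tree.score(X_train, y_train):.2f}, "
          f"test={tree.score(X_test, y_test):.2f}")
```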

Equitable and Inclusive Teaching

One key idea from the Equity & Inclusion theme is the growth mindset, as discussed in “Intelligence as a Malleable Construct” by Blackwell et al. This concept, which suggests that intelligence can develop through effort, is crucial for learners from marginalized groups. Encouraging a growth mindset fosters resilience and persistence, helping students stay motivated despite challenges.

Our team’s design emphasized the growth mindset by creating activities that increased in complexity and allowed for multiple iterations. This approach enabled students to experiment and receive feedback, reinforcing that their abilities can improve with effort. We also used pre-activity questionnaires to balance programming skills within groups and conducted daily surveys to adjust content based on feedback.

Moving forward, we would adjust the level of guidance provided, gradually reducing it to encourage more independent problem-solving. This aligns with growth mindset principles, helping students build confidence and autonomy. We will also integrate more culturally relevant examples and provide opportunities for students to connect the material to their own lives. This not only enriches the learning experience but also helps marginalized students see the relevance of STEM in their contexts, fostering a more inclusive environment.

Students’ Feedback on the Workshop

At the end of the first day, we conducted a survey and found that participants were frustrated by the lack of hands-on coding, as they were just clicking through the code. To address this on the second day, we removed a few crucial lines of code (the ones that create the model), so that participants had to look up the documentation and write the missing code themselves. While this decision was generally beneficial, some students expressed frustration about having to actually code, as noted in the second day’s evaluation.
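
To illustrate the scaffold (this is not the actual notebook cell), the idea is to leave the data preparation and the evaluation code in place and blank out only the model-creation lines, which participants reconstruct from the scikit-learn documentation. The dataset and parameters below are placeholders.

```python
# Illustration of the "fill in the missing code" scaffold, shown here with the
# removed lines filled back in and marked.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Data preparation is provided to participants as-is.
X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The two lines below are the ones blanked out in the student notebook;
# participants look up DecisionTreeClassifier in the scikit-learn docs and write them.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

# Evaluation code stays in place so participants get immediate feedback.
predictions = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, predictions))
```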

The survey also indicated that the amount of material was appropriate and the difficulty level was manageable. However, students’ backgrounds varied, causing some to struggle with the coding more than others, even though most of the code was provided. It might be more effective to split the group by programming experience into a less experienced and a more experienced section, though this would require additional preparation work.

Here are the results for some of the questions:

  • How much material did we present? Most people voted “just right”.
  • How did you feel about the pace? Most people voted “Just right”.
  • Going through Jupyter notebook on your own can be boring, even if it’s realistic. How engaging did you find the activity? Most people voted “medium engaging”, followed by “very engaging”.

Suggestions for Improvement

The workshop was well-received for its content and interactive approach, though it would benefit from adjustments to better accommodate varying levels of programming experience and provide clearer guidance on coding tasks.

  • Provide more background on Python syntax before diving into complex tasks.
  • Include clear instructions and prompts on where code needs to be written.
  • Consider having a tutor or assistant available to help with syntax and coding difficulties.
  • Offer more hands-on coding opportunities and reduce pre-written code.
  • Introduce ice-breakers or short activities to break up long sessions.

Future Workshops

We also asked students about future CS-related workshops. They seemed enthusiastic about workshops that integrate hardware and software, particularly those involving robot programming and practical applications using Raspberry Pi. They also expressed interest in more foundational programming courses and advanced machine learning topics. Engaging and interactive topics that provide immediate feedback and real-world relevance were strongly suggested.

Design Document

The PDP Design Notebook contains information to assist teams in developing a lesson plan for an inquiry activity. The lesson plan should incorporate the themes of inquiry, equity and inclusion, and assessment. It should also draw on research-based understandings of teaching and learning. The Design Notebook includes the following components:

  • General Information: Contains details such as activity name, team members, teaching venue, discipline, and the expected number and background knowledge of learners.
  • Content Design Tool: Outlines the core concept of the activity, its importance, and the learners’ expected thinking process.
  • Inquiry Institute: Provides space for notes from the Inquiry Institute sessions, focusing on topics like equity and inclusion, assessment-driven design, and designing inquiry investigations.
  • Design Institute: Includes tools for working with goals and rubrics, focusing on investigations, raising questions, designing synthesis/reflection on practices, introducing the activity, and designing thinking tools.
  • Teaching Plan Milestones: Specifies deadlines for updating the Teaching Plan with the team’s most current design ideas.

The E&I Design Approach focuses on the following areas:

  • Developing an identity as a person in STEM: Acknowledge students’ work and avoid patronizing them.
  • Learners’ goals, interests, and values: Use datasets relevant to Hawaii and the students’ internship skills. Pair students with different backgrounds.
  • Students’ interests; ownership; identity: Provide different datasets or questions within the same dataset. Use the same dataset for the toy problem so students can check their work.
  • Beliefs and biases about learning, achievement, and teaching:  Emphasize that machine learning is applicable across disciplines and that a lack of programming experience is okay.
  • Multiple ways to productively participate: Mix students with and without a computer science background.
  • E&I in Facilitation: Address teams as a group, ask open-ended questions, praise effort, provide hints, encourage peer teaching, and avoid being patronizing.

Here’s our design notebook in PDF. Note that the content has been modified to remove contact information and certain links for privacy reasons. Enjoy:

Teaching Plan

This document outlines a teaching plan for our machine learning activity focused on decision tree learning, designed for STEM undergraduates at the Akamai PREP Computer Science event. The activity aims to:

  • Teach students how to build decision trees
  • Use them to predict dataset attributes
  • Optimize decision-making processes

The plan emphasizes equity and inclusion by promoting growth mindsets and encouraging collaboration through pair or team-based investigations. The activity is structured into several phases:

  • Introduction: Provides context, goals, and an overview of the activity timeline. It also emphasizes the iterative nature of the learning process and encourages a growth mindset.
  • Raising Questions: Encourages students to ask questions about a toy dataset, fostering critical thinking and setting the stage for investigation.
  • Investigations:
    • Students work in teams to create decision trees, calculate accuracy, and work with real-world data using Python and scikit-learn.
    • The investigation progresses from a toy problem to real-world data, with checkpoints to assess understanding and provide guidance.
  • Culminating Assessment Task: Assesses students’ understanding through individual presentations, requiring them to explain their decision tree models, predictions, and the rationale behind their choices.
  • Synthesis: Consolidates learnings, highlights different teams’ approaches, and encourages reflection on the optimization practice. It also introduces students to other machine learning algorithms and resources for further exploration.

The document also includes facilitation notes and rubrics to guide instructors in delivering the activity and assessing student performance.

Here’s the teaching plan in PDF. Note that the content has been modified to remove contact information and certain links for privacy reasons. Enjoy:

Recommended List of STEM Practices

The materials in this section were originally sourced from the PDP. I am presenting them here because I find them quite useful for designing inquiry activities.

Introduction

Here’s a list of essential STEM practices that are important across many STEM fields and have been a focus for many PDP teams when designing their inquiry activities. These recommended practices are based on extensive lists and frameworks from the literature (see references below). While PDP teams are welcome to choose other practices if they have a strong reason, we’ve found that focusing on the practices listed here has led to the most success. Some of these practices may be more science-oriented or engineering-oriented, but most are relevant to both fields.

List of Practices

Each practice is listed below with example dimensions of practice and the references it draws on.

  • Generating research questions. (References: [1], [2], [3])
    • Defining a question that expands beyond one’s existing knowledge.
    • Defining a question that can be investigated.
    • Defining a question that is of appropriate scope and scale for the time allotted for study.
  • Defining problems. (References: [2], [4])
    • Stating a problem/need in a solvable way (e.g., if a “better” solution is needed, articulate what “better/best” means in this case).
    • Defining functional requirements: what would a solution to this problem need to do.
    • Stating requirements in a testable way.
    • Distinguishing between constraints and requirements.
    • Stating why a solution is needed.
  • Hypothesizing and making predictions. (References: [1], [3])
    • Turning a question into a hypothesis.
    • Stating a prediction in a testable way.
    • Refining the hypothesis and prediction based on the outcome of prior experiment(s).
  • Designing and carrying out investigations. (References: [1], [2], [3], [5])
    • Selecting variables that are most relevant to the scientific question or engineering problem.
    • Controlling variables and/or defining control samples.
    • Planning procedures that will allow relevant measurements to be made with the tools/technology at hand.
    • Planning follow-up procedures to confirm results.
  • Developing and using models. (References: [2], [6]) Note: sometimes this practice overlaps with explanation, since explanations often reference models.
    • Articulating which aspects of a phenomenon are important to include in a model.
    • Distinguishing between a model and the actual phenomenon that the model represents.
    • Recognizing limitations of a model, and how they may affect the resulting explanation or solution.
    • Adjusting a model to incorporate new evidence.
  • Building algorithms. (Relates to some skills discussed in [2] and [4].)
    • Identifying and defining relevant variables.
    • Mapping the logical flow of the algorithm.
    • Determining if there are different “cases” or options to be worked through.
    • Defining “chunks” of code that perform specific operations.
  • Designing solutions within requirements. (References: [4], [7])
    • Evaluating solutions based on requirements.
    • Identifying whether trade-offs are independent or interdependent.
    • Understanding how trade-offs work within and between solutions.
  • Explaining results and/or solutions based on evidence. (References: [1], [2], [8], [9])
    • Making a claim.
    • Connecting claim and evidence through reasoning.
    • Interpreting whether observations and/or data are in support of the claim.
    • Finding flaws in models or data.
    • Causal coherence: using chains or networks of inferences.
    • Coordinating results from multiple studies.

References for general frameworks of STEM practices

[1] Chinn, C.A., & Malhotra, B.A., 2002, “Epistemologically authentic inquiry in schools: A theoretical framework for evaluating inquiry tasks.” Science Education, 86: 175-218. 

  • We recommend this paper highly not because the framework of science practices is particularly better, but because the discussion of what the practices look like in “authentic inquiry” and in a range of “simple tasks” is very valuable.

[2] National Research Council, 2012, A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas. Committee on Conceptual Framework for the New K-12 Science Education Standards. Washington DC: The National Academies Press.

  • This framework has significant advantages and disadvantages. It has really pushed on the term “practices” as the right term, which is a push we tend to agree with. A “practice” is something one can get better at with practice, and “practices” are also culturally-embedded and influenced. The NGSS framework tries to be very even-handed between science and engineering, which is another appealing feature. On the other hand, this framework was explicitly developed for the K-12 context (not that that means it doesn’t apply in higher ed). The choice of “constructing explanations” and “engaging in argument from evidence” as separate practices can be difficult to parse. “Modeling” could instead be seen as a set of practices that may be used in conjunction with any of the others: using a model to develop a hypothesis, using a model to plan an experiment, constructing a model as an explanation, etc. While the attempt to treat engineering fairly is valiant, the actual treatment largely fails to break engineering down into sub-practices.

[3] Exploratorium, 2006. Process Skills. Institute for Inquiry, San Francisco.

  • This framework is a straightforward and familiar view of science. It avoids the trap of an overly-simplified presentation of a step-by-step “scientific method” but is easily accessible since it refers to many of the same terms.

[4] Seagroves, S., & Hunter, L., 2010. “An engineering technology skills framework that reflects workforce needs on Maui and the Big Island of Hawai’i” in Learning from Inquiry in Practice, L. Hunter & A.J. Metevier, eds., Astronomical Society of the Pacific 436: 434–448.

References related to specific STEM practices

[5] Dasgupta et al., 2014. “Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties”, CBE-Life Sciences Education, 13: 265-284.

[6] Schwarz, C., et al., 2009. “Models: defining a learning progression for scientific modeling”, Learning Progressions in Science (LeaPS) conference proceedings.

  • Page 5 of this paper may be helpful to PDP teams that are focusing their inquiries on modeling.

[7] Arnberg, N., 2014, “Supporting the articulation of engineering solutions: An operational definition of engineering requirements”, Chapter from Ph.D. dissertation, University of California, Santa Cruz.

  • Pages 100-107 describe three main challenges that interns faced when defining requirements for an engineering problem.

[8] Ryu & Sandoval, 2012. “Improvements to Elementary Children’s Epistemic Understanding from Sustained Argumentation”, Science Education, 96: 488-526.

  • This paper is a study of elementary-age children and is quite a read; however, the authors identify 4 criteria they consider central to scientific argumentation (bottom of page 494). PDP participants focusing their activity on the core STEM practice of argumentation may find these criteria useful without needing to read the entire paper.

[9] The claim-evidence-reasoning framework; the work of McNeill & Krajcik, such as Inquiry and Scientific Explanations: Helping Students Use Evidence and Reasoning or Supporting Students’ Construction of Scientific Explanations by Fading Scaffolds in Instructional Materials.