In most cases, there are far fewer instructors than students. When I worked as a teaching assistant for Basics of Computer Systems at the University of British Columbia, my colleagues and I spent a great amount of time grading assignments submitted by hundreds of students, and we had a hard time giving each student quality feedback every week. I believe program synthesis can play a great role here: it can automatically generate and solve problems (even with difficulty customized to each student) as well as provide detailed, tailored feedback to students.

Here I leave my very subjective notes and comments on each piece of material so that I can remember them and easily refer back to them at any point in the future. Hence, this page is *constantly updated* as I continue exploring the field of programming languages, especially program synthesis in the context of computer-aided education!

DISCLAIMER: This post does *not* claim to be comprehensive or exhaustive :)

## General Guide to Computer-Aided Education

**A great starting point to get a sense of what computer-aided education is.**

Example-Based Learning in Computer-Aided STEM Education

CACM 2014, Sumit Gulwani

**Types of Problems**

- Procedural Problems
  - Require students to follow a specific procedure
  - Students are expected to memorize and apply concepts
  - Examples: mathematical procedures (addition, greatest common divisor computation, etc.) and algorithmic procedures (breadth-first search, insertion sort, Dijkstra’s shortest-path algorithm, regular expression to automaton conversion, etc.)
- Conceptual Problems
  - Require creative thinking from students
  - Students are expected to make educated guesses
  - Examples: proof problems (natural deduction proofs, proofs of algebraic theorems, proofs of non-regularity of languages, etc.) and construction problems (geometric constructions, automata, algorithmic procedures, bit-vector circuits, etc.)
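As a toy illustration of the procedural side (my own sketch, not from the paper): once a problem's procedure is written down as code, executing it while recording each step yields both the answer and a worked solution that can be shown to students.

```python
def gcd_with_trace(a, b):
    """Euclid's algorithm, returning the answer plus a step-by-step trace."""
    trace = []
    while b != 0:
        trace.append(f"gcd({a}, {b}) = gcd({b}, {a % b})")
        a, b = b, a % b
    trace.append(f"gcd({a}, 0) = {a}")
    return a, trace

answer, steps = gcd_with_trace(48, 18)
# answer == 6; each line of `steps` spells out one step of the procedure
```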
**Example-Based Learning**

- Problem Generation
- Solution Generation
  - Automatically generating solutions when given a problem description in some subject domain
  - Uses: generating solutions for automatically generated problems, completing students’ incomplete solutions, generating hints
  - Procedural problems
    - Writing down and executing the corresponding procedure for a given problem
    - Programming by examples (PBE)
      - Mostly applied to end-user applications
      - Examples: spreadsheet tasks including string transformations and table layout transformations (these can be framed as the problem of learning mathematical procedures from example traces)

- Conceptual problems
  - Searching over the underlying solution space
    - Perform reasoning over examples (as opposed to abstract symbolic reasoning)
    - Reduce the solution space to solutions of small length
  - Examples
    - Geometry constructions
      - Intuition: regard geometry constructions as straight-line programs that manipulate geometric objects
      - Goal: synthesize a straight-line program that realizes the relational specification between inputs and outputs
      - Approach: generate random input-output examples and perform brute-force search to find a construction
    - Natural deduction proofs

- Feedback Generation
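The search-based approach to conceptual problems can be made concrete. Below is a minimal, hypothetical sketch (using arithmetic instead of geometry for brevity): enumerate straight-line programs of bounded length and test them against random input-output examples drawn from the specification, rather than reasoning symbolically.

```python
import itertools
import random

OPS = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}

def run(prog, x, y):
    """Execute a straight-line program; each line appends op(slot_i, slot_j)."""
    env = [x, y]                      # slots 0 and 1 hold the inputs
    for op, i, j in prog:
        env.append(OPS[op](env[i], env[j]))
    return env[-1]

def synthesize(spec, length=2, trials=20):
    """Brute-force search for a program agreeing with spec on random examples."""
    random.seed(0)
    tests = [(random.randint(0, 9), random.randint(0, 9)) for _ in range(trials)]
    lines = [(op, i, j) for op in OPS
             for i in range(2 + length) for j in range(2 + length)]
    for prog in itertools.product(lines, repeat=length):
        # skip programs that read a slot before it has been computed
        if any(i >= 2 + k or j >= 2 + k for k, (_, i, j) in enumerate(prog)):
            continue
        if all(run(prog, x, y) == spec(x, y) for x, y in tests):
            return prog
    return None

prog = synthesize(lambda x, y: (x + y) * x)   # target: (x + y) * x
```

Bounding the program length and checking random examples are exactly the two solution-space reductions mentioned above; a candidate that survives all the random tests is accepted as a plausible construction.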

**More emphasis on computing education research (CER)**

The Importance of Computing Education Research

CoRR 2016, Steve Cooper, Jeff Forbes, Armando Fox, Susanne E. Hambrusch, Andrew Ko, Beth Simon

**Automated Feedback Generation**

Automated Feedback Generation for Introductory Programming Assignments

PLDI 2013, Rishabh Singh, Sumit Gulwani, Armando Solar-Lezama

**Contributions**

- Reduced the problem of finding minimal corrections in an incorrect program to the problem of synthesizing a correct program from a sketch (check)
- Defined an error model language to ease the burden of providing correction rules

**Evaluation**

- Appropriate corrections: 64%
- Average time: 10 seconds
- Limitations
  - Unable to deal with completely incorrect answers, big conceptual errors, unimplemented features, timeouts, structural requirements, etc.
  - Diversity of different answers is not taken care of (check)
  - Requires instructors to provide correction rules as well as bounds on the input size, the number of loop unrollings, and the recursion depth

**Approach**

- System
  - Prerequisites: reference implementation and correction rules
  - Input: students’ answers
  - Output: feedback with minimum corrections
- Techniques
  - Program Rewriter
    - Input: a program and correction rules
    - Output: a program with set-expressions and set-statements (potential corrections applied, with a cost model)
  - Sketch Translator
    - Input: output from the previous step
    - Output: programs with holes
    - Uses Sketch to leave parts of programs unspecified
  - Sketch Solver
    - Input: output from the previous step
    - Output: a program with minimum corrections (check)
    - Uses an incremental solving algorithm, CEGISMIN, to solve the minimization problem
  - Feedback Generator
    - Input: output from the previous step
    - Output: feedback in natural language
    - Associates each correction rule with a feedback message
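A drastically simplified sketch of this pipeline (my own toy, not the paper's Sketch/CEGISMIN implementation): each correction rule expands a program location into a set of alternatives with costs, and the solver's job is to find the cheapest combination that agrees with the reference implementation.

```python
import itertools

def reference(n):                 # instructor's solution: sum of 1..n
    return n * (n + 1) // 2

# Student wrote `sum(range(0, n))`, which is off by one in the upper bound.
# Correction rules turn each bound into a set of (alternative, cost) pairs;
# cost 0 means "leave the student's code unchanged".
start_choices = [(0, 0), (1, 1)]
stop_choices = [(lambda n: n, 0), (lambda n: n + 1, 1)]

def candidate(start, stop, n):
    return sum(range(start, stop(n)))

best = None
for (start, c1), (stop, c2) in itertools.product(start_choices, stop_choices):
    if all(candidate(start, stop, n) == reference(n) for n in range(10)):
        if best is None or c1 + c2 < best[0]:
            best = (c1 + c2, start, stop)

# best[0] == 1: a single correction (extending the upper bound) suffices;
# feedback maps that rule to a message like "increase the loop bound by one".
```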

**Automated (Partial) Grading**

Thanks to Loris D’Antoni, I was able to get more information about automated grading of basic problems on concepts in the theory of computation, as well as about Automata Tutor, the actual implementation of the work with a web-based interface for end users. After I received a reply from him this afternoon, I spent the rest of the day reading his follow-up papers, and I must say, I truly enjoyed the time. I am really grateful for his work and appreciate his endeavor to make the service available online. You can find the original paper as well as the follow-up papers listed below on his website.

Automated Grading of DFA Constructions

IJCAI 2013, Rajeev Alur, Loris D’Antoni, Sumit Gulwani, Dileep Kini, Mahesh Viswanathan

How Can Automatic Feedback Help Students Construct Automata?

TOCHI 2015, L. D’Antoni, D. Kini, R. Alur, S. Gulwani, M. Viswanathan, B. Hartmann

Automata Tutor and What We Learned from Building an Online Teaching Tool

Bulletin of EATCS, L. D’Antoni, M. Weaver, A. Weinert, R. Alur

**Functionalities**

**Automatic grading**

- DFA constructions
- NFA constructions (naive)
- NFA to DFA transformation (naive)
- Regular expression constructions (naive)
- Pumping lemma constructions (under development)

**Personalized feedback**

- Binary feedback
- Counterexamples
- Hints

**Course management**

- Creating problems, problem sets, and courses
- Posting assignments
- Collecting grades

**Case Study**

**Goal**

- Determine whether feedback can improve the learnability of DFA constructions
- Identify strengths and weaknesses of different feedback types

**Setting**

- Participants: 377 students
- Method: Automata Tutor
  - Part A: 4 problems with no restriction on the number of attempts
  - Part B: 1 problem with only one attempt
- Feedback: randomly assigned among binary feedback, counterexamples, and hints

**Results**

- Total number of attempts: 3,085
- Average attempts: 3.3
- Average points: 3.53 / 5
- Overall evaluation
  - Binary feedback: least preferred
  - Counterexamples: most preferred
    - + Serve as a specific test case
  - Hints
    - + Provide a general idea of what is wrong
    - – Can be confusing, complex, or verbose

**Future Work**

- Regular expressions
  - Define new metrics and feedback tailored to regular expressions
- Automata meta-constructions
  - Could use Coq to effectively grade complex assignments ★
  - Example: given two DFAs, define their intersection
- Proof of non-regularity
- Proof of DFA correctness
- Context-free languages
- Turing machines
- MOOC deployment

**Summary**

**Contributions**

- Classified common mistakes made when constructing DFAs and suggested three metrics for measuring how far off students’ solutions are from the correct answer
- Extended Monadic Second-Order (MSO) logic to describe regular languages easily and implemented algorithms for transforming it to DFAs and vice versa

**Evaluation**

- Total attempts: 800
- Results: the tool graded better than two instructors; no mistakes in correctness, a standard grading scheme, and consistency
- Limitations
  - Fails to grade some particular examples
  - Not scalable to DFAs with large alphabets or many states

**Approach**

- System
  - Prerequisites: descriptions of regular languages (i.e., problems)
  - Input: student attempts
  - Output: (partial) grade with feedback
- Technique: an algorithm for grading DFA constructions based on three distance metrics
  - Problem syntactic distance
    - Technique: synthesize a logic description of the DFA, represent the MOSEL formula as an ordered tree, and compute the tree edit distance to the solution’s ordered tree
    - Feedback: English description of a DFA using the logic description
  - Problem semantic distance
    - Technique: use regular language density to compare the size of the difference between two languages when viewed as sets
    - Feedback: English description of misclassified strings, or counterexamples
  - Solution syntactic distance
    - Technique: use DFA edit distance, which reflects the smallest number of edits (an edit script) needed to turn the given DFA into a correct one
    - Feedback: English description of how to fix the DFA using the edit script
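As a small, hypothetical sketch of the counterexample-style feedback above: compare the student's DFA with the reference DFA on all strings up to a bounded length and report the misclassified ones. (The actual tool uses language density and DFA edit distance; exhaustive checking of short strings is just the simplest stand-in.)

```python
from itertools import product

def accepts(dfa, word):
    """dfa = (transitions, start, accepting); transitions[(state, ch)] -> state."""
    state = dfa[1]
    for ch in word:
        state = dfa[0][(state, ch)]
    return state in dfa[2]

def counterexamples(student, reference, alphabet="ab", max_len=4):
    """All strings up to max_len on which the two DFAs disagree."""
    bad = []
    for n in range(max_len + 1):
        for w in product(alphabet, repeat=n):
            word = "".join(w)
            if accepts(student, word) != accepts(reference, word):
                bad.append(word)
    return bad

# Target language: strings over {a, b} with an even number of a's.
reference = ({(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}, 0, {0})
# Student's attempt accepts an odd number of a's instead (wrong accepting state).
student = ({(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}, 0, {1})

print(counterexamples(student, reference)[:3])   # → ['', 'a', 'b']
```

The shortest misclassified strings make natural feedback: each one is a concrete test case the student can run through their own automaton by hand.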

## Conferences

Intelligent Tutoring Systems (ITS) 2016

The Intelligent Tutoring Systems (ITS) 2016 conference is the 13th in an ongoing series of top-flight international conferences on the use of advanced computer technologies and interdisciplinary research for enabling, supporting and enhancing human learning.

The theme of the conference is “Adaptive Learning in real world contexts”. It wishes to stress the need for devising learning systems that can adapt adequately to users, furnishing them the knowledge they are seeking in real world contexts, i.e. systems that are effectively usable in everyday learning situations such as courses in schools or training programs in companies, but also in informal situations such as Web or App provided help in using new technologies. The above theme encourages conference participants to think about this educational need in our increasingly complex everyday world.

Topics of interest to the conference include, but are not limited to:

- Intelligent tutoring
- Informal learning environments, learning as a side effect of interactions
- Collaborative and group learning, communities of practice and social networks
- Simulation-based learning and serious games
- Dialogue and discourse during learning interactions
- Co-adaptation between technologies and human learning
- Ubiquitous and mobile learning environments
- Empirical studies of learning with technologies, understanding human learning on the Web
- Adaptive support for learning, models of learners, diagnosis and feedback
- Modeling of motivation, metacognition, and affect aspects of learning
- Recommender systems for learning
- Virtual pedagogical agents and learning companions
- Ontological modeling, semantic web technologies and standards for learning
- Multi-agent and service oriented architectures for learning and tutoring environments
- Educational exploitation of data mining and machine learning techniques
- Instructional design principles or design patterns for educational environments
- Authoring tools and development methodologies for advanced learning technologies
- Domain-specific learning technologies, e.g. language, mathematics, reading, science, medicine, military, and industry.
- Non conventional interactions between artificial intelligence and human learning
- Privacy and security in e-learning environments

Innovation and Technology in Computer Science Education (ITiCSE 2016)

ITiCSE 2016, the 21st Annual Conference on Innovation and Technology in Computer Science Education, will normally address aspects of computing education, that is, the education of students who are studying computing.

Within the domain of computing and computing engineering education, papers might cover:

- Specific educational subject matter, such as programming, database systems, or computer security
- Specific groups of students, such as women, minorities, or school students
- Broader topics, such as curriculum, groupwork, or class infrastructure

Computer-Supported Cooperative Work and Social Computing (CSCW 2017)

The 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2017) is the premier venue for presenting research in the design and use of technologies that affect groups, organizations, communities, and networks. Bringing together top researchers and practitioners from academia and industry, CSCW explores the technical, social, material, and theoretical challenges of designing technology to support collaborative work and life activities.

World Conference on Computers in Education (WCCE 2017)

The World Conference on Computers in Education (WCCE 2017) is organized by the International Federation for Information Processing and hosted by the Irish Computer Society. The conference theme, “Tomorrow’s Learning: Involving Everyone”, reflects the long-standing commitment to both learning and technology that has concerned members of the Technical Committee 3 (TC3) and its working groups of the International Federation for Information Processing (IFIP).

The conference will focus on the latest uses of technologies, on computing and technology learning resources, highlighted educational practice, thought-leadership for the future and cutting edge educational research. This event will bring together professionals and novices, experienced teachers and learners, to dream about and discuss future learning environments.

Artificial Intelligence in Education (AIED 2017)

The 18th International Conference on Artificial Intelligence in Education (AIED 2017) is the next in a longstanding series of biennial international conferences for high quality research in intelligent systems and cognitive science for educational computing applications. Empirical and theoretical papers are solicited for submission.

