Teaching Artificial Intelligence as a Lab Science

Our Project

Introduction

The Teaching Artificial Intelligence as a Laboratory Science (TAILS) project is designed to develop a new paradigm for teaching introductory artificial intelligence (AI) concepts by implementing an experimental approach modeled after the lab sciences. It explores whether structured labs with exercises that are completed in teams before students leave the classroom can build a sense of accomplishment, confidence, community, and collaboration among students, characteristics that have been shown to be critical to retaining women and non-traditional computer science students in the field.

TAILS presents to students an array of fundamental AI algorithms as a set of hands-on activities made available through a database of lab activities, including software exercises and experiments that provide experience with concepts from multiple perspectives and multiple modes of representation. Best practices in software engineering will be reinforced in students through careful design and documentation of the modules.

The proposed activities are designed to engage the kinetic learner and provide the “big picture” that model-driven learners need to assimilate course material. Existing research has shown that structured labs with exercises that can be completed before students leave the classroom build a sense of accomplishment and confidence. Progressively sophisticated experiments teach inexperienced students and challenge more advanced students.

TAILS contributes two components to STEM education: a set of lab experiments to promote student retention of concepts and retention of majors, and insight into student learning through the labs. TAILS contributes to exemplary STEM education by creating learning materials and strategies, implementing new instructional strategies, and assessing and evaluating student achievement.

This project addresses learning outcomes in five categories: skills (students will demonstrate the ability to solve problems collaboratively as they work in pairs to complete lab activities), concepts (students will demonstrate knowledge of artificial intelligence and software engineering concepts), communication (students will be able to describe course concepts at multiple levels of abstraction), application (students will be able to identify applications of AI concepts), and research (students will demonstrate curiosity about course material). Assessments will draw from standard classroom assessments described by Angelo and Cross (1993) and the Online Evaluation Resource Library. They will include in part a teamwork attitude questionnaire, a team process log to record perceptions about collaboration, exam questions and pre- and post-tests, explanation and implementation of software code, concept maps, contrasts of multiple concepts, specification of requirements for a software program, domain- and implementation-level design of software programs, descriptions of algorithms geared toward non-computer scientists and technical managers, application cards, explanations of the objectives and significance of experiments, and enhancement of algorithms.

TAILS contributes to the STEM education knowledge base by promoting individual efforts to solve a programming assignment while building an education community through laboratory work that encourages cooperation and teamwork among students. The paradigm can be adapted to computer science courses at all academic levels and is expected to increase participation in the field by shortening the time required to prepare undergraduates to engage in research.


August, Stephanie E. (2012). Enhancing Expertise, Sociability and Literacy through Teaching Artificial Intelligence as a Lab Science. Proceedings, 119th ASEE Annual Conference, San Antonio, TX, June 10–13, 2012.

Our Team

Current Members

Gustavo Peres Dias, Information Systems, University of São Paulo, Brazil

Gustavo Dias is an undergraduate student at the University of São Paulo in Brazil studying Information Systems. His interests are artificial intelligence solutions focused on data mining for business intelligence and information security with risk management. As a grantee of the Brazil Scientific Mobility Program/CAPES, he worked during summer 2016 with James Yen and Elizabeth Shen on the TAILS project organizing and standardizing the current modules, updating and making improvements to the algorithms and redesigning the applications’ web pages.

Elizabeth Shen, Department of Electrical Engineering and Computer Science, LMU|LA

Elizabeth Shen is an undergraduate student at Loyola Marymount University, pursuing a degree in Computer Science. She enjoys doing Brazilian Jiu-Jitsu in her free time, and is a former collegiate boxer for the University of San Francisco. Elizabeth worked during the summer of 2016 with James Yen and Gustavo Peres Dias on the TAILS project, making adjustments to improve the websites and documentation, and developing learning modules that teach Artificial Intelligence concepts using methods modeled by laboratory sciences.

James Hao Yen, Department of Electrical Engineering and Computer Science, LMU|LA

James Yen is an undergraduate student at Loyola Marymount University studying Computer Science. He is due to graduate in May of 2017 (God willing). He enjoys studying theology in his spare time, particularly the Nuṣayrī-ʿAlawīs and their doctrine of the trinity. Beyond theology, James doubts the possibility of creating a truly artificially intelligent machine, arguing that any written code will reveal an obvious pattern over time. He believes that only human beings can participate in time, unlike machines, which do not reveal intelligence. James worked with Gustavo and Elizabeth in the summer of 2016 to update the TAILS project websites and their documentation. He currently lives at home with his parents and his cat.

Stephanie E. August, Ph.D., Department of Electrical Engineering and Computer Science, LMU|LA

Stephanie August is an Associate Professor for Graduate Education at Loyola Marymount University, Los Angeles. She teaches courses in artificial intelligence, database management systems, and software engineering. Her research interests include applications of artificial intelligence including interdisciplinary new media applications, natural language understanding, argumentation, and analogical reasoning. She has several publications in these areas. Dr. August is actively involved in the Scholarship of Teaching and Learning community and is a 2006 CASTL Institute Scholar (Carnegie Academy for the Scholarship of Teaching and Learning). She is currently directing graduate and undergraduate students on two NSF-funded projects, to develop materials for teaching artificial intelligence through an experimental approach modeled after the lab sciences, and to develop a Virtual Engineering Sciences Learning Lab in Second Life to provide an immersive learning environment for introductory engineering and computer science courses. Her industry experience includes software and system engineering for several defense C3I programs, and applied artificial intelligence research for military and medical applications.

Past Members

Poulomi Chatterjee, Systems Engineering and Computer Science, LMU|LA

Poulomi Chatterjee studied Systems Engineering with a technical focus in Computer Science as a graduate student at Loyola Marymount University. She was responsible for testing the TAILS modules and designing exercises and quizzes for the students.

Michael Fraser, Department of Electrical Engineering and Computer Science, LMU|LA

Michael Fraser studied Electrical Engineering with an emphasis in Computer Engineering while also pursuing minors in Applied Math and Computer Science as an undergraduate student at Loyola Marymount University. He is involved with the LMU Chapters of Engineers Without Borders, Society of Hispanic Professional Engineers, and Institute of Electrical and Electronics Engineers. Michael worked with Miguel during the summer of 2013 on the Search and Agents applications. Michael also developed exercises for the students and updated the website.

Robert Quinlan Thames, Department of Electrical Engineering and Computer Science, LMU|LA

Robert "Quin" Thames double-majored in Computer Science and Electrical Engineering as an undergraduate student at Loyola Marymount University. Quin has created a game engine and artificial intelligence agent that plays the board game Nine Mens Morris against a user. Students learn about the AI algorithm by modifying the agent’s code to make the agent smarter and better at the game. He also worked on the VESLL (Virtual Engineering Sciences Learning Lab) project, developing a virtual simulation tool in Second Life using LSL to help students better grasp the concept of a differential equations problem.

Miguel Vazquez, Department of Electrical Engineering and Computer Science, LMU|LA

Miguel Vazquez studied Electrical Engineering with an emphasis in Computer Engineering while also pursuing minors in Applied Math and Computer Science as an undergraduate student at Loyola Marymount University. He is involved with the LMU Chapters of Engineers Without Borders, Society of Hispanic Professional Engineers, and Institute of Electrical and Electronics Engineers. Miguel worked with Michael during the summer of 2013 on the Search and Agents applications. Miguel also developed exercises for the students and created UML diagrams to document the code design.

Andrew Won, Department of Electrical Engineering and Computer Science, LMU|LA

Andrew Won was an undergraduate student at Loyola Marymount University working towards a bachelor's degree in computer science. He worked on the Teaching AI as a Lab Science (TAILS) project developing a basic search lab module. As part of the TAILS team he initiated a Git version control repository and encouraged version control for collaboration in lab module development. Andrew has experience programming with Java, JavaScript, and C, and used HTML, CSS, and LaTeX for the basic search module. Andrew's interests lay in software development for mobile platforms and in expanding access to educational materials for people who might otherwise not have the opportunity to use them. In addition to the TAILS project, Andrew worked in a law office specializing in software patent prosecution, was developing a video game for the Android operating system, and worked on a project implementing an IEEE 754 conversion library in JavaScript.

Publications

(August 2010) August, Stephanie E. CCLI: Enhancing Expertise, Sociability and Literacy through Teaching Artificial Intelligence as a Lab Science. NSF Grant No. 0942454, 2010.

(August 2010b) August, S. E.; Neyer, A.; Shields, M. J.; Vales, J. I.; Hammers, M. L. Co-opting Games and Social Media for Education. AI and Fun Workshop at the 24th Association for the Advancement of Artificial Intelligence (AAAI) Conference, July 11–15, 2010, Atlanta, GA, and Alelo University seminar series, Alelo, Inc., August 27, 2010. https://research.cc.gatech.edu/aifun/content/presentations.

(August 2012) August, Stephanie E. Enhancing Expertise, Sociability and Literacy through Teaching Artificial Intelligence as a Lab Science. Proceedings, 119th ASEE Annual Conference, San Antonio, TX, June 10–13, 2012. Poster.

(August, Fraser, Vazquez 2014) August, S. E.; Fraser, M. A.; Vazquez, M. A. Teaching Artificial Intelligence as a Lab Science: Basic and Informed Search. Poster accepted for presentation at the 45th ACM Technical Symposium on Computer Science Education, March 5–8, 2014, Atlanta, Georgia, USA.

Announcements

Stakeholders' Meeting

The TAILS project stakeholders’ meeting is tentatively scheduled for the last week of June. Interested parties from academia and industry are welcome to attend and provide input and guidance for the project. Contact Stephanie August (saugust@lmu.edu, 310-338-5973) for details.

Assessment Workshop

A formative assessment workshop presenting a subset of the TAILS learning modules is planned for Saturday, May 21, 2016. A small stipend is available to those participating in the workshop. Interested faculty and undergraduate and graduate students are invited to apply for the workshop by contacting Stephanie August (saugust@lmu.edu, 310-338-5973).

TAILS Process

Artificial intelligence and software engineering course material can be interwoven and presented in a lab experiment paradigm to provide experiential learning opportunities in which students collaboratively solve problems. This approach has the potential to increase retention of women and non-traditional computer science students in computer science courses, while reinforcing best practices in software engineering.


Overview

The Teaching Artificial Intelligence as a Laboratory Science1 (TAILS) project is designed to develop a new paradigm for teaching introductory artificial intelligence (AI) concepts by implementing an experiment-based approach modeled after the lab sciences. It explores whether structured labs with exercises that are completed in teams before students leave the classroom can build a sense of accomplishment, confidence, community, and collaboration among students, characteristics that have been shown to be critical to retaining women and non-traditional computer science students in the field.

TAILS presents to students an array of fundamental AI algorithms as a set of hands-on activities made available through a database of lab activities, including software exercises and experiments that provide experience with concepts from multiple perspectives and multiple modes of representation. Best practices in software engineering will be reinforced in students through careful design and documentation of the modules.

The proposed activities are designed to engage the kinetic learner and provide the “big picture” that model-driven learners need to assimilate course material. Existing research has shown that structured labs with exercises that can be completed before students leave the classroom build a sense of accomplishment and confidence2, 3. Progressively sophisticated experiments teach inexperienced students and challenge more advanced students.

TAILS contributes two components to STEM education: a set of lab experiments to promote student retention of concepts and retention of majors, and insight into student learning through the labs. TAILS contributes to exemplary STEM education by creating learning materials and strategies, implementing new instructional strategies, and assessing and evaluating student achievement.

This paper describes the components, presents an example from adversarial search, and identifies a mapping of outcomes and objectives to assessments.

This material is based upon work supported by the National Science Foundation under Course, Curriculum, and Laboratory Improvement (CCLI) Grant No. 0942454. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Components of TAILS Lab Experiments

TAILS will deliver the tale of each AI algorithm or concept through a story with nine parts, including a description of the concept, relevant applications, sample test data, design description, exercises that guide the student in implementation, a test driver, suggested experiments, source code that implements the algorithm, and complexity analysis. This choice of components is patterned after the organization found in the files of software support that accompany Winston's approach4 and standard software engineering practice. Previous work5 identified components that model for the student an array of abstractions which they can use for presenting an algorithm or concept, each geared toward a different audience.

The Idea provides an overview of the concept to be presented, a functional description that avoids implementation details. This component contains references to additional sources of information on the topic. When an algorithm is presented in the context of an application, such as a game that uses heuristic search, the rules of the game will be included to orient the user. This description of the concept is especially well suited to the non-computer scientist or when a general introduction to the concept is needed. Students will learn that an extended version is appropriate when the focus is on the big picture, while a minimal, one-sentence version would preface a presentation of additional information, such as a UML model of the algorithm.

The Applications section sets the algorithm in context and provides descriptions of real world applications that use the concept. Graphics, such as a soccer-playing robot or a game board, and related references support the visual learner and provide additional avenues of exploration. Knowledge of applications enables students to explain the significance of the algorithm to a general audience.

Sample Input/Process/Output contains an annotated trace of the program in execution, including system input, a description of the processing taking place, and display of the resulting output. Interactive demonstrations of the concept will be included where feasible to allow the student to run live demonstrations, as well as experiment with various forms of input. Sample Input/Process/Output (IPO) corresponds to the concept of operations and user manuals, and supports black box testing of the concept using the related source code. A readme file describing the procedure for running the demonstration will be included. An IPO example is useful for establishing the scope of the program, and as the preface to a detailed design description.
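To make the format concrete, a hypothetical annotated IPO fragment for a search module (illustrative only, not taken from an actual TAILS lab) might read:

    Input:    start node A, goal node D
    Process:  expand A, placing children B and C on the agenda;
              expand B, placing child D on the agenda
    Output:   goal D found; solution path A -> B -> D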

Implementation-independent Design Description provides an abstract, high-level view of the system implementing the algorithm. Both textual descriptions and diagrams allow the user to explore the design in a top-down manner. This description is language-independent for clarity and longevity. Programming languages used to implement algorithms and programming environments change far more rapidly than the ideas implemented and TAILS code is not restricted to a particular software platform. Having the design in addition to the code enables the student to evolve a system at the design level, rather than at the code level. A developer would present this model when reviewing the overall design of the algorithm before a technical manager or as a preface to a review of the actual code.

Implementation-specific “HINT” File(s) contain part of the code needed to implement an algorithm or concept in a particular programming language, and guidelines or HINTs the user can use to implement the remainder of the code. For example, a program for visiting the nodes of a tree would be provided with the enqueuing technique stubbed out. The student would be instructed to write the code that would transform the program into one that does a depth-first, breadth-first, or non-deterministic search. A HINT file for at least one language will be included for each concept and will include file header and comment blocks, and follow an established coding style, reinforcing coding standards. A readme file describing the procedure for compiling and running the student's code will be included for each HINT file.
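As a minimal sketch of what such a HINT file might look like (an illustration rather than an actual TAILS module; the class and method names here are hypothetical), the tree-visiting example could be stubbed in Java with the agenda strategy left to the student:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    // Hypothetical HINT-file sketch: a general tree search whose behavior
    // (depth-first vs. breadth-first) depends entirely on how the student
    // implements enqueue().
    public class TreeSearchHint {

        static class Node {
            final String label;
            final List<Node> children;
            Node(String label, List<Node> children) {
                this.label = label;
                this.children = children;
            }
        }

        // HINT (student exercise): add each child to the agenda. Adding to
        // the front yields depth-first search; adding to the back yields
        // breadth-first search; inserting at random positions yields a
        // non-deterministic search.
        static void enqueue(Deque<Node> agenda, List<Node> children) {
            throw new UnsupportedOperationException("student exercise");
        }

        static void visit(Node root) {
            Deque<Node> agenda = new ArrayDeque<>();
            agenda.addLast(root);
            while (!agenda.isEmpty()) {
                Node current = agenda.removeFirst(); // always take from the front
                System.out.println("visiting " + current.label);
                enqueue(agenda, current.children);   // student-supplied strategy
            }
        }
    }

A one-line solution such as children.forEach(agenda::addLast) turns this into breadth-first search, while adding children to the front of the agenda produces depth-first search.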

Test Suite and Driver(s) are provided for each implementation-specific HINT file; the driver runs the student's program and tests it using the data in the test suite. The student can then compare the expected results with the results generated. At least one of the tests will correspond to running the example in the IPO discussed above. A readme file describing the procedure for running the test will be included for each driver, along with directions for downloading the specific programming environment needed, since TAILS source code will be written for a variety of platforms.
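A driver in the same hedged spirit (again a sketch with hypothetical names; studentVisitOrder is a stand-in for the student's completed HINT-file code) might run one test case from the suite and report pass or fail:

    import java.util.Arrays;
    import java.util.List;

    // Hypothetical test-driver sketch: runs the student's search on one
    // test case from the suite and compares the visit order against the
    // expected results, mirroring the IPO example described above.
    public class TreeSearchDriver {

        // Stand-in for the student's completed HINT-file code.
        static List<String> studentVisitOrder(String testFile) {
            throw new UnsupportedOperationException("plug in the student's search here");
        }

        public static void main(String[] args) {
            List<String> expected = Arrays.asList("A", "B", "C", "D"); // from the test suite
            List<String> actual = studentVisitOrder("testdata/tree1.txt");
            if (expected.equals(actual)) {
                System.out.println("PASS: visit order matches expected results");
            } else {
                System.out.println("FAIL: expected " + expected + " but got " + actual);
            }
        }
    }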

Experiments give students a starting point for interacting with both the concept and the code. This section lays out the preparation needed to complete the experiments and provides a set of increasingly complex tasks for students to complete collaboratively in pairs. Some tasks can be completed by students with any level of sophistication during a single class period, while others are suitable as course projects for advanced students. The experiment assignments include an implementation-independent set of test data and expected results, as well as ideas for enhancements and extensions, and go beyond the basic testing outlined for the test driver above. They also include an implementation language-dependent test driver with trial input and expected output for the simpler experiments, which is useful to verify that the code has been correctly implemented. The HINT files, test drivers and experiment drivers will be repeated for each implementation. For example, there might be a Java version, a C++ version, and a Common Lisp version of an implemented concept.

Source Code will be provided because learning by example is a powerful paradigm for mastering a new subject. We offer the ending of the tale: solutions to the exercises in the HINT files, as well as more extensive implementations readily available from other sources. Many students, especially non-majors, benefit from having solutions provided in order to fully understand the material and gain a sense of accomplishment as they experiment with the code. In the case of an algorithm implemented within a game, having access to a fully functioning version of the game will facilitate learning for those students not familiar with the game. Majors have ample opportunity to write original code in software engineering, database, and capstone courses. Providing the implementation-independent design description together with the corresponding source code models best practices in software engineering. Providing an executable version of a program, such as a game that uses heuristic search, allows students to understand its functionality first hand before trying to implement it.

Complexity Analysis complements the work done in a data structures or algorithms class, and reveals the various ways that complexity can be measured for this particular problem. Students can analyze the changes they have made in the experiments measuring, for example, time to execute vs. memory required. A developer presents the complexity analysis to an audience considering whether the design meets performance requirements.

Sample Course Module: Adversarial Search - Implementation of the Minimax Algorithm for Nine Men's Morris6

The Tale of the minimax algorithm is told in the context of the zero-sum game Nine Men’s Morris (NMM), a strategy board game using adversarial search. Highlights of module components are summarized here.

The Idea provides an overview of the minimax algorithm, alpha-beta pruning, and heuristics. These are illustrated with examples from tic-tac-toe. The history of Nine Men's Morris, which traces its roots back to 2000 BCE, gives the student a context for the game; the module also outlines its rules and strategies for play. Details of the human machine interface (HMI) for NMM appear in figure 1 in the gallery.
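For readers who want the Idea in code form, a textbook-style sketch of minimax with alpha-beta pruning follows (a generic illustration, not the NMM module's actual implementation; the GameState interface and its methods are hypothetical stand-ins for a concrete game):

    // Textbook-style minimax with alpha-beta pruning. GameState and its
    // methods are hypothetical stand-ins for a real game implementation.
    public class Minimax {

        interface GameState {
            boolean isTerminal();
            int evaluate();                    // heuristic score from MAX's perspective
            Iterable<GameState> successors();  // legal moves from this state
        }

        static int alphaBeta(GameState s, int depth, int alpha, int beta,
                             boolean maximizing) {
            if (depth == 0 || s.isTerminal()) return s.evaluate();
            if (maximizing) {
                int best = Integer.MIN_VALUE;
                for (GameState child : s.successors()) {
                    best = Math.max(best, alphaBeta(child, depth - 1, alpha, beta, false));
                    alpha = Math.max(alpha, best);
                    if (beta <= alpha) break;  // prune: MIN would never allow this line
                }
                return best;
            } else {
                int best = Integer.MAX_VALUE;
                for (GameState child : s.successors()) {
                    best = Math.min(best, alphaBeta(child, depth - 1, alpha, beta, true));
                    beta = Math.min(beta, best);
                    if (beta <= alpha) break;  // prune: MAX would never allow this line
                }
                return best;
            }
        }
    }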

Several Applications of minimax with alpha-beta pruning are described, conveying a sense of its broad range of usefulness. These include the automated chess player Deep Blue, envelope-constrained filters used in radar pulse compression, and real-time pursuit-evasion algorithms.

Sample Input/Process/Output illustrates the user interface for NMM game play and modes of play (human vs. human, human vs. computer, computer vs. computer) and briefly describes the reasoning used to make automated plays.

Implementation-independent Design Description includes system overview and architecture diagrams, as well as class and sequence diagrams using the Unified Modeling Language, along with narrative to augment the diagrams. Figures 2-4 provide examples of the code and data design documentation, which can be found in the gallery.

Implementation-specific “HINTs” corresponding to each of the experiments outline the details of code changes required by the exercises and offer illustrative snippets of code to guide the student.

Test Suite and Driver(s) identifies the test preparations, test descriptions, and process used to test the NMM minimax agent and its graphical user interface (GUI). The test environment is defined, including details about workstation requirements, software requirements, and the test environment setup. Test descriptions define prerequisite conditions, test inputs, expected test results, criteria for evaluating results, the test procedure, and assumptions and constraints for testing. Lastly, the features to be tested and those which are not tested are identified.

The Experiments section, as shown in figure 5, identifies the prerequisite knowledge and preparation and provides an overview of experiments that range from learning how to play NMM to enhancing its performance. Figure 6 provides additional detail for experiment #2. Students complete a subset of the exercises according to the course in which the module is used, their maturity as programmers, and course requirements.

Well-designed and commented Source Code includes an implementation for the GUI, game board, and computer player agent.

Complexity Analysis reviews the time and space requirements for the minimax algorithm and minimax with alpha-beta pruning. The supporting figures can be found in the gallery.
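As a point of orientation (a standard result from the AI literature rather than a finding specific to this module): for branching factor b and search depth d, plain minimax examines on the order of b^d game states, while alpha-beta pruning with ideal move ordering reduces this to roughly b^(d/2), in effect doubling the depth that can be searched with the same effort.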

Conclusion and Future Work

TAILS contributes to the STEM education knowledge base by promoting individual efforts to solve a programming assignment while building an education community through laboratory work that encourages cooperation and teamwork among students. The paradigm can be adapted to computer science courses at all academic levels and is expected to increase participation in the field by shortening the time required to prepare undergraduates to engage in research.

Computing and software are ubiquitous. There is a compelling need for software engineering education in computer science7, 8 and engineering9, 10, 11, 12, as well as in animation, biology, and other disciplines in which computing plays an ever increasing role. The TAILS model demonstrates a technique for integrating software engineering concepts that can be used in computing-intensive courses beyond traditional computer science programs.

Alpha testing is underway on the initial version of the adversarial search/Nine Men’s Morris module. Work has begun on developing course materials for unification, basic and informed search, and conceptual clustering algorithms, and we continue to define rubrics used to grade student work and assess outcomes in a consistent manner. Future considerations include the possibility of building upon laboratory projects developed as part of the Machine Learning Experiences in AI framework13 or the Model AI Assignments presented at the Symposium on Educational Advances in Artificial Intelligence14.


References

[1] August, Stephanie E. CCLI: Enhancing Expertise, Sociability and Literacy through Teaching Artificial Intelligence as a Lab Science. NSF Grant no.0942454, 2010.
[2] Beyer, S., Rynes, K., Perrault, J., Hay, K., Haller, S. Gender differences in computer science students. SIGCSE ’03, 2003, pp.49-53.
[3] Strok, D. Women in AI. IEEE Expert, 7:4, August 1992, pp.7-22.
[4] Winston, Patrick Henry. Artificial Intelligence. 3rd edition. Addison-Wesley, Reading MA, 1992.
[5] August, S.E. Integrating Artificial Intelligence and Software Engineering: An Effective Interactive AI Resource... does more than teach AI. In Mehdi Khosrowpour (Ed.), Proceedings of the 2003 Information Resource Management Association International Conference). Hershey PA: Information Resource Management Association, 2003, pp. 17-19.
[6] Shields, Matthew. Adversarial search: An implementation of the minimax algorithm for Nine Men's Morris. CMSI 677 class project, LMU, spring 2009.
[7] Pour, Gilda; Griss, Martin L.; and Lutz, Michael. The push to make software engineering respectable. Computer, May 2000, pp.35-43.
[8] Lethbridge, Timothy C. What knowledge is important to a software professional? Computer, May 2000, 44-50.
[9] Long, L.N. The Critical Need for Software Engineering Education. CROSSTALK, The Journal of Defense Software Engineering, 10(1), January 2008, pp.6-10.
[10] IEEE Computer Society and the ACM. “Curriculum Guidelines for Undergraduate Degree Programs in Software Engineering.” http://sites.computer.org/ccse/SE2004Volume.pdf. 2004. (last accessed 10 January 2012)
[11] Sanders, P. Improving Software Engineering Practice. CROSSTALK, The Journal of Defense Software Engineering, January 1999, pp. 4-7.
[12] Vaughn, R. Software Engineering Degree Programs. CROSSTALK, The Journal of Defense Software Engineering, 13(3), March 2000, pp. 7-9.
[13] MLeXAI: Machine Learning Experiences in AI: A Multi-Institutional Project. NSF DUE 0716338. http://uhaweb.hartford.edu/compsci/ccli/index.htm (last accessed 11 January 2012)
[14] Model AI Assignments, Symposium on Educational Advances in Artificial Intelligence. http://eaai.stanford.edu/ (last accessed 11 January 2011)
[15] OERL: Online Evaluation Resource Library. http://oerl.sri.com/home.html (last accessed 10 January 2012)

TAILS OUTCOMES, OBJECTIVES, AND ASSESSMENTS

The TAILS project addresses learning outcomes in five categories: skills, concepts, communication, application, and research. The learning outcomes, specific objectives for each outcome, and their planned assessments are shown in table 1. We will rely in part on the Dewar-Bennett Knowledge Expertise Grid1 to analyze our data. The grid defines criteria for summative evaluation that can be adapted for evaluating knowledge of engineering-related content and rates a student's affective and cognitive knowledge in terms of the student's level of expertise. We are in the process of defining the rubrics that can be used to grade student work and assess the outcomes in a reliable manner. Scoring rubrics will be developed for products produced by students, student writing, and open-ended responses on exams. Information learned about student accomplishment of the outcomes will be used to improve the course, both as it is in progress and for future offerings of the course.

Table 1. TAILS learning outcomes, objectives, and assessments.

Category | Outcome | Objective | Assessment
Skills | Students will demonstrate the ability to solve problems collaboratively | Students will demonstrate collaboration and teamwork skills | Students will work in pairs to complete the lab activities, then complete a teamwork attitude questionnaire2 and write a team process log to record perceptions about collaboration2
Concepts | Students will demonstrate knowledge of artificial intelligence concepts | Students will demonstrate recall and general understanding of AI concepts | Answer exam questions; complete pre- and post-tests; explain and write software code; draw a concept map3
Concepts | Students will demonstrate knowledge of artificial intelligence concepts | Students will demonstrate a deep understanding of course concepts | Contrast multiple concepts4; define and give one example of a course concept5
Concepts | Students will demonstrate knowledge of software engineering practices | Students will demonstrate proficiency in software engineering practices at a background-appropriate (grade- and major-appropriate) level | Specify requirements for a software program; complete a domain-level design for a software program; design an algorithm at an implementation-specific level; reverse engineer software for an algorithm
Communication | Students will be able to describe course concepts at multiple levels of abstraction | Students will be able to describe course concepts clearly and without technical jargon | Write an elevator statement6 geared toward the student's grandmother to describe the concept
Communication | Students will be able to describe course concepts at multiple levels of abstraction | Students will be able to describe course concepts for a classmate or technical manager | Write an algorithm in pseudocode to describe the concept for a technical manager
Application | Students will be able to identify applications of AI concepts | Students will be able to identify real-world applications for AI concepts beyond those provided in course materials | Complete application cards7
Research | Students will demonstrate curiosity about course material | Students will demonstrate the ability to extend course concepts | Describe one new experiment that can be used in conjunction with each algorithm studied, explaining its objective and why that objective is worthwhile; describe one enhancement to the algorithm studied and explain why the enhancement is worthwhile

The PI will also work with the Director of Assessment at LMU during the 2010-2011 academic year to design rubrics that can be used to grade student work and assess the outcomes in a reliable manner.

1Based on (Dewar-Bennett Knowledge Expertise Grid)
2Based on (OERL: Online Evaluation Resource Library)
3Based on (Angelo and Cross 1993, pp.197-202)
4Based on (Angelo and Cross 1993, p.168)
5Based on (Angelo and Cross 1993, p.38)
6Based on (Angelo and Cross 1993, pp.183-187)
7Based on (Angelo and Cross 1993, pp.236-239)

References
Dewar, Jackie and Bennett, Curtis. 8-dimensional Mathematical Knowledge-expertise Grid. http://myweb.lmu.edu/carnegie/webport/knowgrid.htm, 2004, Loyola Marymount University (last accessed 21 May 2009).
OERL: Online Evaluation Resource Library. http://oerl.sri.com/home.html (last accessed 10 January 2012)
Angelo, Thomas A. and Cross, K. Patricia. Classroom Assessment Techniques; A Handbook for College Teachers. 2nd edition. San Francisco: Jossey-Bass, 1993.

TAILS Student Groups

The course designation in which the student group would encounter TAILS material is included under Level (Course).

Level (Course) | Major | Background | Preparatory Coursework
Undergraduate (CMSI 182) | Non-computer science | Varies | No formal preparation in mathematics or computer science assumed
Undergraduate (CMSI 485) | Non-computer science | Computer science minor | Programming; Data structures and analysis; Computer systems organization
Undergraduate (CMSI 485) | Computer science | 2.5 years of the major completed; junior level | Programming; Data structures and analysis; Programming languages; Calculus
Undergraduate (CMSI 485) | Computer science | 3.5 years of the major completed; senior level | Programming; Data structures and analysis; Programming languages; Graphics; Calculus; Microprocessors; Database management systems; Software engineering group project
Graduate (CMSI 677, CMSI 682, CMSI 698 SS:MASDAI) | Computer science | Minimum equivalent of a computer science minor; prior work experience in industry | Programming; Data structures and analysis; Computer systems organization; Programming languages
Graduate (CMSI 677, CMSI 682, CMSI 698 SS:MASDAI) | Electrical engineering | Undergraduate degree in electrical engineering | Some programming experience
Graduate (CMSI 677, CMSI 682, CMSI 698 SS:MASDAI) | Systems engineering | Three years' experience in industry; often but not always an undergraduate degree in a technical field | No formal preparation in mathematics or computer science assumed
N/A | N/A | Software engineer working in industry | Undergraduate or graduate degree in computer science; experienced software developer


CMSI 182 Introduction to Computer Science (for non-majors)
CMSI 485 Introduction to Artificial Intelligence
CMSI 677 Artificial Intelligence
CMSI 682 Knowledge-based Systems
CMSI 698 Special Studies: Multi-agent Systems and Distributed AI

TAILS EVALUATION PLAN

Formative evaluation will take place in summer workshops and in the classroom where TAILS materials are used. Summative evaluation will consist of three components. First, workshop attendees will be assessed at the end of the workshop. Second, students in undergraduate AI courses at the PI's home institution will be assessed at module boundaries, at the end of the semester, and after graduation. Third, faculty members teaching AI courses at LMU and other institutions using TAILS will be asked to judge its effectiveness in developing community and collaborative relationships among students and in terms of the students' ability to communicate concepts from multiple perspectives and at multiple levels of abstraction. Table 2 outlines the activities related to the project evaluation plan, the estimated time required for each activity, and the participants, resources, and outcomes. Involvement of the outside evaluator is indicated in bold in the Who's Involved column.

Table 2. TAILS project evaluation plan.

Project Stage | Time Allocation | Planned Activity | Who's Involved | Resources Needed | Planned Outcome
Exercise Preparation | For the duration of the project | Develop assessments and rubrics for class presentation of TAILS materials to evaluate effectiveness | PI, Graduate Assistant | Exercises, sample rubrics | Rubrics to use in evaluating effectiveness of course activities
Exercise Preparation | 1 day annually | Review assessments and rubrics for class presentation of TAILS materials | PI, Evaluator, LMU Director of Assessment | Exercises, rubrics | Improved rubrics to use in evaluating effectiveness of course activities
Workshop Preparation | 1 day annually | Review planned assessments for summer workshops | PI, Evaluator | Exercises, rubrics | Enhancement of assessments to ensure they will collect appropriate data in the workshops
Summer Workshop Y1 | 1 day | TAILS exercises for experiments #1 and 2 presented in lecture/lab setting | PI, Graduate Assistant, 10 computer science students working in pairs | TAILS exercises, student workstations, classroom | Outcomes as listed in Outcomes and Assessments documents
Summer Workshop Y2, Y3 | 1 day each year | TAILS exercises for experiments #3 and 4, and #5 and 6, respectively, presented in lecture/lab setting | PI, Graduate Assistant, 10 computer science students working in pairs, 5 computer science faculty | TAILS exercises, student and faculty workstations, classroom | Outcomes as listed in Outcomes and Assessments documents
Implementation and Progress Evaluation | 0.5 day each year | Annual project presentation | PI, interested faculty, industry stakeholders | Project results to date | Monitor and determine progress in achieving project goals and expected outcomes
Implementation and Progress Evaluation | 1 hour each week | Project team meeting | PI, Graduate Assistant, Undergraduate Assistant | Log of project activities, status reports, project objectives and activity timeline | Monitor progress, establish short-term action items, identify risks and strategies for mitigating them
Final Summative Evaluation | 5 days | Revise survey of LMU computer science graduates from the classes of 2011 through 2013 and prepare contact list | PI, Graduate Assistant | Survey of computer science graduates from the classes of 2007 through 2010 | Revised survey and distribution list
Final Summative Evaluation | 5 days | Distribute survey to computer science graduates from the classes of 2011 through 2013 and collect and tabulate the results | PI, Graduate Assistant | Alumni survey for LMU computer science graduates from the classes of 2011 through 2013 | Data collected from the survey
Final Summative Evaluation | 2 days | Compare results of 2007-2010 survey to results of 2011-2013 survey | PI, Graduate Assistant | 2007-2010 survey and results; 2011-2013 survey and results | Qualitative and quantitative summary of results
Project Evaluation | For the duration of the project | Provide TAILS materials and assessments to interested faculty from other institutions upon request | PI, faculty at other institutions | Exercises, assessments, rubrics | Faculty at other institutions will be able to incorporate TAILS modules into their courses
Project Evaluation | For the duration of the project | Perform assessments in classes and workshops presenting TAILS materials | PI, Graduate Assistant, students and faculty enrolled or participating in classes and workshops presenting TAILS materials | Exercises, assessments | Collection of data to be analyzed
Project Evaluation | For the duration of the project, at the end of each academic term | Request assessment results from faculty using TAILS materials at other institutions | PI, Graduate Assistant, faculty from other institutions using TAILS materials | Assessments used by participating faculty and assessment results collected | Additional data collected
Project Evaluation | 1 day each academic term | Review assessment results to ensure that rubrics were applied in a manner consistent with that used by the PI | PI, Graduate Assistant | Assessments used by participating faculty and raw assessment results | Validated data
Project Evaluation | 10 days each year | Apply rubrics to data collected from classes and workshops presenting TAILS materials | PI, Graduate Assistant | Assessments and rubrics | Data derived from applying rubrics to assessments
Project Evaluation | 5 days each year | Tabulate and summarize data collected from classes and workshops presenting TAILS materials | PI, Graduate Assistant | Collected data | Organized results to be used in evaluation activities
Project Evaluation | 3 days each summer | Review data collected from classes using TAILS and summer workshops | PI, Evaluator, Graduate Assistant | Exercises, rubrics, tabulation and summary of results collected | Evaluation of effectiveness and action items for revisions to course content and presentation of materials
Project Review | 6 months | Overall project evaluation | PI, Graduate Assistant | Summarized reports from Project Evaluation activities | Assessment of project success
Project Review | 1 month | Write final report | PI, Graduate Assistant | Project materials, assessment of project success | Final report

Photo Gallery

Location

Loyola Marymount University
Department of Electrical Engineering and Computer Science
1 LMU Drive
Los Angeles, CA 90045-2658


Contact Information

For questions about the project, please contact

Stephanie E. August
Principal Investigator and Project Director

Department of Electrical Engineering and Computer Science
Frank R. Seaver College of Science and Engineering
Doolan Hall Room 201b
Loyola Marymount University
1 LMU Drive
Los Angeles, CA 90045-2659
Phone: (310) 338-5973
Fax: (310) 338-2782
E-mail: saugust@lmu.edu