# SYSC 575, ECE 455/555: Neural Networks I

Winter 2012, MW, 16:40-18:30, CH 321

Neural networks are a computational and engineering methodology based on emulating how nature has implemented the biological brain (in particular, the brain's massively parallel processing and learning aspects). As such, the methodology holds promise for significant impact on how important classes of scientific and engineering problems are solved. The objective of this course is to have students obtain a working knowledge of this forefront technology, which is in the midst of a (second) renaissance.

This course covers basic ideas of the neural network (NN) methodology, a computing paradigm whose design is based on models taken from neurobiology and on the notion of "learning." A variety of NN architectures and associated computational algorithms for accomplishing learning are studied. Experiments with various NN architectures are performed via a (commercial) simulation package. Students also use the simulator to complete a classification project.

## Topics:

- introduction and overview of artificial intelligence and neural networks
- "black-box" representations
- universal approximation
- learning methods: supervised, unsupervised, and reinforcement
- Hebbian learning, delta rule, and generalized delta rule
- multilayer perceptrons (MLP)
- radial basis functions (RBF)
- probabilistic neural networks (PNN) and general regression neural networks (GRNN)
- Hopfield networks and bidirectional associative memory (BAM)
- recurrent neural networks (RNN)
- learning vector quantizer (LVQ) and self-organizing map (SOM)
- adaptive resonance theory (ART)
- adaptive critic methods and adaptive dynamic programming (ADP)
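
Of the learning rules listed above, the delta rule is the simplest to state concretely: a unit's weights are adjusted in proportion to the output error and the corresponding input. The following is a minimal illustrative sketch, not course material; the AND-gate training data, learning rate, and threshold activation (the perceptron-style variant of the delta rule) are assumptions chosen for brevity:

```python
# Perceptron-style delta-rule learning for a single threshold unit.
# Illustrative sketch: the AND-gate data and learning rate are assumed,
# not taken from the course.

def step(x):
    return 1.0 if x >= 0.0 else 0.0

# Training set: logical AND, with a bias input fixed at 1.0.
data = [
    ((1.0, 0.0, 0.0), 0.0),
    ((1.0, 0.0, 1.0), 0.0),
    ((1.0, 1.0, 0.0), 0.0),
    ((1.0, 1.0, 1.0), 1.0),
]

w = [0.0, 0.0, 0.0]   # weights: bias, x1, x2
eta = 0.1             # learning rate (assumed value)

for epoch in range(100):
    for inputs, target in data:
        y = step(sum(wi * xi for wi, xi in zip(w, inputs)))
        error = target - y
        # Delta rule: w_i <- w_i + eta * (target - y) * x_i
        w = [wi + eta * error * xi for wi, xi in zip(w, inputs)]

# After training, the unit reproduces the AND function.
for inputs, target in data:
    y = step(sum(wi * xi for wi, xi in zip(w, inputs)))
    print(inputs[1:], "->", y)
```

Because the AND function is linearly separable, this update converges to a separating weight vector; the generalized delta rule (backpropagation) extends the same error-driven idea to multilayer perceptrons.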

## Course requirements:

- reading assignments: short reading assignments to inform class discussions
- NN simulator experiments and write-ups: short homework assignments using the NN simulator
- course project (during the last three to four weeks of the term): based on data and a problem context that will be provided, students will select an NN paradigm, perform experiments using the NN simulator, and complete a project report
- in-class midterm and final exams

## Texts:

- Haykin, Simon, *Neural Networks: A Comprehensive Foundation*, Prentice Hall (2nd or 3rd edition).
- *Neural Computing* (tutorial volume of the manual for the NeuralWorks simulation package), NeuralWare, Inc., 1993.

## Prerequisites:

Senior standing in EE or CS or Graduate standing

Note: The $160 "lab fee" charged for this course entitles the student to a download package that contains A) the neural network simulation package NeuralWorks Professional II/Plus (list price $1995), via a site license agreement with NeuralWare, Inc. and B) the User Guide, which includes text #2 above.

## Handouts:

- Syllabus
- Tentative Course Schedule (02/29/2012)
- Rumelhart, D.E., Hinton, G.E., and Williams, R.J. (1987), "Learning Internal Representation by Error Propagation"
- Lendaris, G.G. (1991), "Role of error-surfaces in weight space for weight-modification rules"
- Everly, B. (2006), NeuralWare checkpoint bug fix email
- Lendaris, G.G. (1998), "Notes on Radial Basis Function approach to Neural Networks"
- "BAM Tips for NeuralWorks BAM user-interface"
- Wasserman, P.D. (1989), "Adaptive Resonance Theory," Ch. 8 in *Neural Computing: Theory and Practice*
- Lendaris, G.G. and Stanley, G.L. (1970), "Diffraction-Pattern Sampling for Automatic Pattern Recognition"
- Lawrence, J. (1991), "Data Preparation for a Neural Network," *AI Expert*
- Caudill, M. (1991), "Avoiding the Backprop Trap," *AI Expert*
- Rumelhart, D.E., Hinton, G.E., and Williams, R.J. (1987), "Distributed Representations" [excerpt on coarse coding]
- Kosko, B. (1992), "Fuzzy Associative Memories"

## Lecture Notes:

Selected lecture notes will be posted on PSU's electronic reserve library site within approximately 48 hours after the lecture.

## Assignments:

- Reading 1 (01/09/2012)
- Readings 2 and 3 (01/23/2012)
- Homework 1 (01/23/2012)
- Homework 2 (02/13/2012)
- Project Parts A and B (02/20/2012)

## Slides:

- Lendaris, G.G. (2006), DHP Tutorial
- Hughes, J.G. (2010), Contextual Reinforcement Learning [note DHP material at end]

## Instructor Information:

Joshua G. Hughes, Systems Science Ph.D. Program

Harder House, Room 207

hughesjg@pdx.edu

office hours: Thursdays 1630-1830 (also available by appointment)

*There is no TA for this class.*