
B.Sc. Computer Science and Information Technology

Institute of Science and Technology, TU

Nature of the Course: Theory + Lab

Full Marks: 60 + 20 + 20    Pass Marks: 24 + 8 + 8

Credit Hrs: 3

Neural Networks [CSC383]
Course Objective
i. To demonstrate the concepts of supervised learning and unsupervised learning in conjunction with different architectures of neural networks.
Course Description

The course introduces the underlying principles and design of neural networks. It covers the basic concepts of neural networks, including their architectures, learning processes, and single-layer and multilayer perceptrons, followed by recurrent neural networks.

S1: Introduction to Neural Networks [4]
Basics of neural networks and the human brain, Models of a Neuron, Neural Networks Viewed as Directed Graphs, Feedback, Network Architectures, Knowledge Representation, Learning Processes, Learning Tasks
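To make the model of a neuron concrete, here is a minimal NumPy sketch of the weighted-sum-plus-activation model discussed in this unit; the particular weights, bias, and tanh activation are illustrative choices, not prescribed by the syllabus.

```python
import numpy as np

def neuron(x, w, b, phi=np.tanh):
    """One neuron: a weighted sum of the inputs plus a bias, passed through an activation."""
    v = np.dot(w, x) + b   # induced local field
    return phi(v)          # activation output

# Illustrative 3-input neuron with arbitrary weights and bias
x = np.array([1.0, -0.5, 2.0])
w = np.array([0.4, 0.1, -0.3])
print(neuron(x, w, b=0.2))
```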
S2: Rosenblatt’s Perceptron [3]
Introduction, Perceptron, The Perceptron Convergence Theorem, Relation between the Perceptron and Bayes Classifier for a Gaussian Environment, The Batch Perceptron Algorithm
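A minimal sketch of the perceptron’s error-correction learning rule, assuming a hard-limiter activation and bipolar targets in {-1, +1}; the learning rate, epoch count, and toy AND data are illustrative.

```python
import numpy as np

def train_perceptron(X, d, eta=0.1, epochs=20):
    """Rosenblatt's perceptron rule for linearly separable data.
    X: (n_samples, n_features); d: desired outputs in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = 1 if np.dot(w, x) + b >= 0 else -1   # hard-limiter output
            w += eta * (target - y) * x              # error-correction update
            b += eta * (target - y)
    return w, b

# Toy linearly separable example (AND function with bipolar targets)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([-1, -1, -1, 1])
print(train_perceptron(X, d))
```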
S3: Model Building through Regression [5]
Introduction, Linear Regression Model: Preliminary Considerations, Maximum a Posteriori Estimation of the Parameter Vector, Relationship Between Regularized Least-Squares Estimation and MAP Estimation, Computer Experiment: Pattern Classification, The Minimum-Description-Length Principle, Finite Sample-Size Considerations, The Instrumental-Variables Method
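The relationship between regularized least-squares estimation and MAP estimation can be illustrated with the closed-form ridge solution w = (XᵀX + λI)⁻¹Xᵀd, which coincides with the MAP estimate under a zero-mean Gaussian prior on the weights. A short sketch, with an illustrative regularization parameter and toy data:

```python
import numpy as np

def ridge_regression(X, d, lam=0.1):
    """Regularized least-squares estimate of the parameter vector
    (equivalently, MAP estimation with a Gaussian prior on the weights)."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ d)

# Toy data: d = 2*x + noise; the estimate should be close to 2
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1))
d = 2.0 * X[:, 0] + 0.1 * rng.normal(size=50)
print(ridge_regression(X, d, lam=0.1))
```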
S4: The Least-Mean-Square Algorithm [5]
Introduction, Filtering Structure of the LMS Algorithm, Unconstrained Optimization: A Review, The Wiener Filter, The Least-Mean-Square Algorithm, Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter, The Langevin Equation: Characterization of Brownian Motion, Kushner’s Direct-Averaging Method, Statistical LMS Learning Theory for Small Learning-Rate Parameter, Virtues and Limitations of the LMS Algorithm, Learning-Rate Annealing Schedules
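A minimal sketch of the LMS (Widrow-Hoff) update w(n+1) = w(n) + η e(n) x(n), which drives the weight vector toward the Wiener solution; the learning-rate parameter and the toy system-identification data below are illustrative.

```python
import numpy as np

def lms(X, d, eta=0.01, epochs=10):
    """Least-Mean-Square adaptive filtering: update the tap weights
    with the instantaneous error (a stochastic-gradient step)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - np.dot(w, x)   # instantaneous error
            w = w + eta * e * x         # LMS weight update
    return w

# Toy example: identify a 2-tap linear system d = [1.5, -0.7] . x
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
d = X @ np.array([1.5, -0.7])
print(lms(X, d))   # should approach [1.5, -0.7]
```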
S5: Multilayer Perceptron [8]
Introduction, Batch Learning and On-Line Learning, The Back-Propagation Algorithm, XOR Problem, Heuristics for Making the Back-Propagation Algorithm Perform Better, Back-Propagation and Differentiation, The Hessian and Its Role in On-Line Learning, Optimal Annealing and Adaptive Control of the Learning Rate, Generalization, Approximations of Functions, Cross-Validation, Complexity Regularization and Network Pruning, Virtues and Limitations of Back-Propagation Learning, Supervised Learning Viewed as an Optimization Problem, Convolutional Networks, Nonlinear Filtering, Small-Scale Versus Large-Scale Learning Problems
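As a minimal illustration of on-line back-propagation, the sketch below trains a small sigmoid network on the XOR problem; the network size (2-3-1), learning rate, epoch count, and random seed are illustrative choices and may need adjusting.

```python
import numpy as np

# On-line back-propagation on XOR (2 inputs, 3 hidden sigmoid units, 1 output)
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # input-to-hidden weights
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden-to-output weights
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
eta = 0.5                                       # learning-rate parameter

for epoch in range(5000):
    for x, t in zip(X, d):
        h = sigmoid(x @ W1 + b1)                      # forward pass: hidden layer
        y = sigmoid(h @ W2 + b2)                      # forward pass: output layer
        delta_out = (y - t) * y * (1 - y)             # local gradient at the output
        delta_hid = (delta_out @ W2.T) * h * (1 - h)  # back-propagated local gradient
        W2 -= eta * np.outer(h, delta_out); b2 -= eta * delta_out
        W1 -= eta * np.outer(x, delta_hid); b1 -= eta * delta_hid

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))   # should approach [0, 1, 1, 0]
```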
S6: Kernel Methods and Radial-Basis Function Networks [7]
Introduction, Cover’s Theorem on the Separability of Patterns, The Interpolation Problem, Radial-Basis-Function Networks, K-Means Clustering, Recursive Least-Squares Estimation of the Weight Vector, Hybrid Learning Procedure for RBF Networks, Kernel Regression and Its Relation to RBF Networks
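A minimal sketch of a hybrid learning procedure for an RBF network: K-means places the centres (unsupervised stage), and the linear output weights are then fitted by least squares (supervised stage). The number of centres, the Gaussian width, and the toy sine-fitting data are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means used to place the RBF centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def rbf_design(X, centres, sigma=1.0):
    """Gaussian radial-basis functions evaluated at each centre."""
    d2 = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Unsupervised centres + least-squares output weights on a toy sine-fitting task
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
d = np.sin(X[:, 0])
centres = kmeans(X, k=10)
Phi = rbf_design(X, centres)
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)
print(np.abs(Phi @ w - d).max())   # residual should be small
```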
S7: Self-Organizing Maps [6]
Introduction, Two Basic Feature-Mapping Models, Self-Organizing Map, Properties of the Feature Map, Contextual Maps, Hierarchical Vector Quantization, Kernel Self-Organizing Map, Relationship between Kernel SOM and Kullback-Leibler Divergence
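A minimal sketch of the self-organizing map’s three processes: competition (finding the best-matching unit), cooperation (a Gaussian neighbourhood function), and synaptic adaptation. The map size, decay schedules, and toy data are illustrative.

```python
import numpy as np

def train_som(X, n_units=10, epochs=100, eta0=0.5, sigma0=3.0, seed=0):
    """1-D self-organizing map trained on the data matrix X."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(X.min(), X.max(), size=(n_units, X.shape[1]))
    grid = np.arange(n_units)                       # positions on the 1-D lattice
    for t in range(epochs):
        eta = eta0 * np.exp(-t / epochs)            # learning-rate decay
        sigma = sigma0 * np.exp(-t / epochs)        # neighbourhood shrinkage
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))          # competition
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))  # cooperation
            W += eta * h[:, None] * (x - W)                      # adaptation
    return W

X = np.random.default_rng(1).uniform(0, 1, size=(200, 2))
print(train_som(X).round(2))
```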
S8: Dynamically Driven Recurrent Networks [7]
Introduction, Recurrent Network Architectures, Universal Approximation Theorem, Controllability and Observability, Computational Power of Recurrent Networks, Learning Algorithms, Back-Propagation Through Time, Real-Time Recurrent Learning, Vanishing Gradients in Recurrent Networks, Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators, Adaptivity Considerations, Case Study: Model Reference Applied to Neurocontrol
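For orientation, the sketch below shows only the forward state recursion of a simple (Elman-type) recurrent network; training algorithms such as back-propagation through time unfold exactly this recursion over the sequence, which is also where vanishing gradients arise. The layer sizes and random weights are illustrative.

```python
import numpy as np

def simple_recurrent_forward(X, W_in, W_rec, W_out, phi=np.tanh):
    """Forward pass of a simple recurrent network: the hidden state
    feeds back into itself at every time step of the input sequence X."""
    h = np.zeros(W_rec.shape[0])
    outputs = []
    for x_t in X:                              # X is a sequence of input vectors
        h = phi(W_in @ x_t + W_rec @ h)        # state recursion
        outputs.append(W_out @ h)              # linear read-out
    return np.array(outputs)

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 2))                 # input-to-state weights
W_rec = 0.5 * rng.normal(size=(4, 4))          # recurrent (feedback) weights
W_out = rng.normal(size=(1, 4))                # state-to-output weights
X = rng.normal(size=(5, 2))                    # a length-5 input sequence
print(simple_recurrent_forward(X, W_in, W_rec, W_out))
```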
References
1. Simon Haykin, Neural Networks and Learning Machines, 3rd Edition, Pearson
2. Christopher M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 2003
3. Martin T. Hagan, Neural Network Design, 2nd Edition, PWS Publishing Co.
Laboratory Work
Practical work should focus on the Single-Layer Perceptron, Multilayer Perceptron, Supervised Learning, Unsupervised Learning, Recurrent Neural Networks, Linear Prediction, and Pattern Classification.