04-Mar-01


Presentation Transcript

Introduction to Radial Basis Function Networks : 

Introduction to Radial Basis Function Networks. Lecturer: 虞台文

Content : 

Content
- Overview
- The Models of Function Approximator
- The Radial Basis Function Networks
- RBFN’s for Function Approximation
- Learning the Kernels
- Model Selection

Introduction to Radial Basis Function Networks : 

Introduction to Radial Basis Function Networks Overview

Typical Applications of NN : 

Typical Applications of NN
- Pattern Classification
- Function Approximation
- Time-Series Forecasting

Function Approximation : 

Function Approximation Unknown Approximator

Supervised Learning : 

Supervised Learning: a neural network is trained from examples to mimic an unknown function.

Neural Networks as Universal Approximators : 

Neural Networks as Universal Approximators

Feedforward neural networks with a single hidden layer of sigmoidal units are capable of approximating uniformly any continuous multivariate function, to any desired degree of accuracy.
- Hornik, K., Stinchcombe, M., and White, H. (1989). "Multilayer Feedforward Networks are Universal Approximators," Neural Networks, 2(5), 359-366.

Like feedforward neural networks with a single hidden layer of sigmoidal units, RBF networks can be shown to be universal approximators.
- Park, J. and Sandberg, I. W. (1991). "Universal Approximation Using Radial-Basis-Function Networks," Neural Computation, 3(2), 246-257.
- Park, J. and Sandberg, I. W. (1993). "Approximation and Radial-Basis-Function Networks," Neural Computation, 5(2), 305-316.

Statistics vs. Neural Networks : 

Statistics vs. Neural Networks

Introduction to Radial Basis Function Networks : 

Introduction to Radial Basis Function Networks The Model of Function Approximator

Linear Models : 

Linear Models Fixed Basis Functions Weights

Linear Models : 

Linear Models Feature Vectors Inputs Hidden Units Output Units Decomposition Feature Extraction Transformation Linearly weighted output

Linear Models : 

Linear Models. Network diagram: inputs x1, …, xn feed hidden units φ1, …, φm (feature extraction / transformation), whose responses are combined through weights w1, …, wm into the linearly weighted output y. Can you name some bases?

Example Linear Models : 

Example Linear Models
- Polynomial
- Fourier Series
Are they orthogonal bases?
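A minimal sketch of the two example bases as linear models y = Φw fitted by least squares. The function names and the target below are illustrative choices, not from the slides.

```python
import numpy as np

def polynomial_basis(x, m):
    """Monomial basis phi_j(x) = x**j, j = 0..m-1 (not an orthogonal basis)."""
    return np.vstack([x**j for j in range(m)]).T          # shape (N, m)

def fourier_basis(x, m):
    """Truncated Fourier basis 1, sin x, cos x, sin 2x, ... (orthogonal on [-pi, pi])."""
    cols = [np.ones_like(x)]
    k = 1
    while len(cols) < m:
        cols += [np.sin(k * x), np.cos(k * x)]
        k += 1
    return np.vstack(cols[:m]).T

# Fit the linearly weighted output y = Phi @ w by least squares.
x = np.linspace(-np.pi, np.pi, 200)
y = np.sign(np.sin(x))                                    # a target to approximate
Phi = fourier_basis(x, 7)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)               # optimal weights
y_hat = Phi @ w
```

Note that only the weights are learned; the basis functions themselves stay fixed, which is what makes the model linear in its parameters.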

Single-Layer Perceptrons as Universal Approximators : 

Single-Layer Perceptrons as Universal Approximators. Hidden Units: with a sufficient number of sigmoidal units, it can be a universal approximator.

Radial Basis Function Networks as Universal Approximators : 

Radial Basis Function Networks as Universal Approximators. Hidden Units: with a sufficient number of radial-basis-function units, it can also be a universal approximator.
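A small numerical illustration of this claim (not a proof): the training error of a Gaussian RBFN on a smooth target shrinks as the number of hidden units grows. Target function and width rule are illustrative assumptions.

```python
import numpy as np

def rbf_design(x, centers, sigma):
    """Design matrix Phi[k, i] = exp(-(x_k - c_i)^2 / (2 sigma^2))."""
    d = x[:, None] - centers[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x)          # stand-in for the unknown function

mses = []
for m in (3, 10, 30):              # number of hidden units
    centers = np.linspace(0, 1, m)
    Phi = rbf_design(x, centers, sigma=1.0 / m)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    mses.append(np.mean((Phi @ w - y)**2))
```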

Non-Linear Models : 

Non-Linear Models: the basis functions themselves are adjusted by the learning process, in addition to the weights.

Introduction to Radial Basis Function Networks : 

Introduction to Radial Basis Function Networks The Radial Basis Function Networks

Radial Basis Functions : 

Radial Basis Functions. Three parameters define a radial function φi(x) = φ(||x − xi||):
- Center: xi
- Distance measure: r = ||x − xi||
- Shape: φ

Typical Radial Functions : 

Typical Radial Functions
- Gaussian
- Hardy's Multiquadric
- Inverse Multiquadric
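The three radial functions named above, written over the distance r = ||x − xi||; a minimal sketch, with c and sigma as the shape parameters.

```python
import numpy as np

def gaussian(r, sigma=1.0):
    """Gaussian: localized, decays to 0 as r grows."""
    return np.exp(-r**2 / (2 * sigma**2))

def multiquadric(r, c=1.0):
    """Hardy's multiquadric: grows without bound as r grows."""
    return np.sqrt(r**2 + c**2)

def inverse_multiquadric(r, c=1.0):
    """Inverse multiquadric: localized, like the Gaussian."""
    return 1.0 / np.sqrt(r**2 + c**2)
```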

Gaussian Basis Function (σ = 0.5, 1.0, 1.5) : 

Gaussian Basis Function for widths σ = 0.5, 1.0, 1.5.

Inverse Multiquadric : 

Inverse Multiquadric for c = 1, 2, 3, 4, 5.

Most General RBF : 

Most General RBF. The basis {φi : i = 1, 2, …} is 'nearly' orthogonal.

Properties of RBF’s : 

Properties of RBF's: on-center, off-surround response. Analogies with localized receptive fields found in several biological structures, e.g., the visual cortex and ganglion cells.

The Topology of RBF : 

The Topology of RBF Feature Vectors Inputs Hidden Units Output Units Projection Interpolation As a function approximator

The Topology of RBF : 

The Topology of RBF Feature Vectors Inputs Hidden Units Output Units Subclasses Classes As a pattern classifier.

Introduction to Radial Basis Function Networks : 

Introduction to Radial Basis Function Networks RBFN’s for Function Approximation

The idea : 

The idea x y


Radial Basis Function Networks as Universal Approximators : 

Radial Basis Function Networks as Universal Approximators. Training set: {(xk, dk) : k = 1, …, N}. Goal: y(xk) = dk for all k.

Learn the Optimal Weight Vector : 

Learn the Optimal Weight Vector. Training set: {(xk, dk)}. Goal: y(xk) = dk for all k.

Regularization : 

Regularization. Training set: {(xk, dk)}. Goal: minimize E(w) = Σk (dk − y(xk))² + λ Σj wj². If regularization is not needed, set λ = 0.
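A minimal numerical sketch of the regularized least-squares weights, assuming the usual sum-of-squared-errors cost with penalty λ‖w‖²; the function name is illustrative.

```python
import numpy as np

def optimal_weights(Phi, d, lam=0.0):
    """Solve (Phi^T Phi + lam I) w = Phi^T d for the weight vector.

    Phi: (N, m) design matrix of kernel responses on the training set.
    d:   (N,) target vector.
    lam: regularization parameter; lam = 0 recovers plain least squares.
    """
    m = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ d)
```

Increasing λ shrinks the weight norm, trading training error for smoother solutions.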

Learn the Optimal Weight Vector : 

Learn the Optimal Weight Vector. Minimize E(w) = Σk (dk − w·φ(xk))² + λ‖w‖².

Learn the Optimal Weight Vector : 

Learn the Optimal Weight Vector. Define φ(x) = (φ1(x), …, φm(x))^T, the vector of kernel responses.

Learn the Optimal Weight Vector : 

Learn the Optimal Weight Vector. Define the design matrix Φ, with Φki = φi(xk), and the target vector d = (d1, …, dN)^T.

Learn the Optimal Weight Vector : 

Learn the Optimal Weight Vector. Setting the gradient of E(w) to zero yields w* = (Φ^T Φ + λI)^(-1) Φ^T d.

Learn the Optimal Weight Vector : 

Learn the Optimal Weight Vector. In w* = (Φ^T Φ + λI)^(-1) Φ^T d, the matrix (Φ^T Φ + λI)^(-1) is the variance matrix and Φ is the design matrix.
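The transcript drops the slide's matrices; for reference, the standard definitions these labels refer to, in the notation of the surrounding slides, are:

```latex
% Design matrix of kernel responses on the training set
\Phi =
\begin{bmatrix}
\varphi_1(\mathbf{x}_1) & \cdots & \varphi_m(\mathbf{x}_1)\\
\vdots & \ddots & \vdots\\
\varphi_1(\mathbf{x}_N) & \cdots & \varphi_m(\mathbf{x}_N)
\end{bmatrix},
\qquad
% Variance matrix
\mathbf{A}^{-1} = \left(\Phi^{\top}\Phi + \lambda\mathbf{I}\right)^{-1},
\qquad
% Optimal weight vector
\mathbf{w}^{*} = \mathbf{A}^{-1}\Phi^{\top}\mathbf{d}.
```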

Summary : 

Summary. Given the training set {(xk, dk)}: form the design matrix Φ, then compute w* = (Φ^T Φ + λI)^(-1) Φ^T d.

Introduction to Radial Basis Function Networks : 

Introduction to Radial Basis Function Networks Learning the Kernels

RBFN’s as Universal Approximators : 

RBFN’s as Universal Approximators. Training set: {(xk, dk)}. Kernels: φj.

What to Learn? : 

What to Learn?
- Weights wij
- Centers μj of the φj
- Widths σj of the φj
- Number of φj → Model Selection

One-Stage Learning : 

One-Stage Learning

One-Stage Learning : 

One-Stage Learning. The simultaneous update of all three sets of parameters may be suitable for non-stationary environments or an on-line setting.
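A hedged sketch of one-stage learning for a 1-D Gaussian RBFN: gradient descent simultaneously updates the weights w, the centers μj, and the widths σj. The target, learning rate, and iteration count are illustrative choices.

```python
import numpy as np

def rbfn(x, w, mu, sigma):
    """1-D Gaussian RBFN: returns output y and design matrix Phi."""
    Phi = np.exp(-(x[:, None] - mu[None, :])**2 / (2 * sigma[None, :]**2))
    return Phi @ w, Phi

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
d = np.sin(2 * np.pi * x)              # training targets

m = 8                                  # number of kernels
w = rng.normal(size=m)
mu = np.linspace(0, 1, m)
sigma = np.full(m, 0.2)

y0, _ = rbfn(x, w, mu, sigma)
mse_initial = np.mean((y0 - d)**2)

eta = 0.05
for _ in range(2000):
    y, Phi = rbfn(x, w, mu, sigma)
    e = y - d                          # error vector
    diff = x[:, None] - mu[None, :]
    # gradients of 0.5 * sum(e^2) w.r.t. each parameter set
    grad_w = Phi.T @ e
    grad_mu = np.sum(e[:, None] * Phi * w[None, :] * diff / sigma[None, :]**2, axis=0)
    grad_sigma = np.sum(e[:, None] * Phi * w[None, :] * diff**2 / sigma[None, :]**3, axis=0)
    w -= eta * grad_w / len(x)
    mu -= eta * grad_mu / len(x)
    sigma -= eta * grad_sigma / len(x)

y_final, _ = rbfn(x, w, mu, sigma)
mse_final = np.mean((y_final - d)**2)
```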

Two-Stage Training : 

Two-Stage Training
- Step 1: determine the kernels, i.e., the centers μj of the φj, the widths σj of the φj, and the number of φj.
- Step 2: determine the weights wij, e.g., using batch learning.
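A sketch of the two-stage procedure under illustrative assumptions: Step 1 fixes the kernels with a tiny 1-D k-means for the centers and a common width taken from the center spacing; Step 2 solves for the weights in one batch least-squares step.

```python
import numpy as np

def kmeans_1d(x, m, iters=20, seed=0):
    """Tiny 1-D k-means for choosing kernel centers (illustrative helper)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=m, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(m):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return np.sort(centers)

x = np.linspace(0, 1, 100)
d = np.sin(2 * np.pi * x)

# Step 1: choose the kernels without using the targets d
centers = kmeans_1d(x, m=10)
sigma = np.diff(centers).mean()          # common width from center spacing

# Step 2: batch least squares for the output weights
Phi = np.exp(-(x[:, None] - centers[None, :])**2 / (2 * sigma**2))
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)
```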

Train the Kernels : 

Train the Kernels

Unsupervised Training : 

Unsupervised Training

Unsupervised Training : 

Unsupervised Training
- Random subset selection
- Clustering algorithms
- Mixture models
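A sketch of the first option above, random subset selection: pick m training inputs at random to serve as kernel centers. The function name is illustrative.

```python
import numpy as np

def random_centers(X, m, seed=0):
    """Pick m distinct training inputs at random as kernel centers."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    return X[idx]

X = np.linspace(-1, 1, 200)
centers = random_centers(X, m=12)
```

This is the cheapest option; clustering or mixture-model fitting generally places centers more representatively at extra cost.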

The Projection Matrix : 

The Projection Matrix

The Projection Matrix : 

The Projection Matrix Error Vector
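A numerical sketch of the projection-matrix view for the unregularized (λ = 0) case: with P = I − Φ(Φ^T Φ)^(-1) Φ^T, the training error vector is e = P d, the residual of the least-squares fit. The random design matrix below is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.normal(size=(20, 5))     # design matrix (random stand-in)
d = rng.normal(size=20)            # targets

# Projection onto the orthogonal complement of the column space of Phi
P = np.eye(20) - Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)
e = P @ d                          # error (residual) vector

w, *_ = np.linalg.lstsq(Phi, d, rcond=None)   # e equals d - Phi @ w
```

Being a projection, P is idempotent: applying it twice changes nothing.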
