FUZZY LOGIC
Crisp set theory is governed by a logic that uses only two values: true or false. This logic cannot represent vague concepts, and therefore fails to resolve the classical paradoxes of vagueness.
The basic idea of fuzzy set theory is that an element belongs to a fuzzy set with a certain degree of membership. Thus, a proposition is not either true or false, but may be partly true (or partly false) to any degree. This degree is usually taken as a real number in the interval [0, 1].

Membership Functions (MFs) :

Membership Functions (MFs)
Characteristics of MFs:
Subjective measures
Not probability functions
[Figure: MFs for "tall" in Asia and "tall" in the US; a height of 5'10'' receives different degrees of membership (the slide shows the values 0.1, 0.5 and 0.8) depending on the context.]

Slide 5:

It can be seen that the crisp set asks the question 'Is the man tall?' and draws
a line at, say, 180 cm. Tall men are above this height and not-tall men below. In
contrast, the fuzzy set asks 'How tall is the man?' The answer is the partial
membership in the fuzzy set; for example, Tom is 0.82 tall.
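The crisp/fuzzy contrast above can be sketched in a few lines of Python. The 160-190 cm linear ramp for the fuzzy set is an illustrative assumption, not a membership function taken from the text:

```python
def crisp_tall(height_cm):
    # Crisp set: a hard threshold at 180 cm -- the man either is tall or is not
    return 1 if height_cm >= 180 else 0

def fuzzy_tall(height_cm, low=160.0, high=190.0):
    # Fuzzy set: partial membership; the 160-190 cm linear ramp is illustrative
    if height_cm <= low:
        return 0.0
    if height_cm >= high:
        return 1.0
    return (height_cm - low) / (high - low)
```

With this ramp, a 175 cm man is "tall" to degree 0.5 rather than simply "not tall".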

MF Formulation :

MF Formulation (see disp_mf.m)

Slide 8:

What is a fuzzy set?
A fuzzy set can be simply defined as a set with fuzzy boundaries.
Let X be the universe of discourse and its elements be denoted as x. In classical
set theory, a crisp set A of X is defined by the function fA(x), called the characteristic
function of A, which takes the value 1 if x belongs to A and 0 otherwise. In fuzzy theory, a fuzzy set A of universe X is defined by the function μA(x),
called the membership function of set A. This degree, a value between 0 and 1, represents the degree of membership,
also called the membership value, of element x in set A.

Slide 10:

Linguistic Values (Terms)

Slide 11:

A linguistic variable carries with it the concept of fuzzy set qualifiers, called hedges. Hedges are terms that modify the shape of fuzzy sets. They include adverbs such as very, somewhat, quite, more or less and slightly. Hedges can modify verbs, adjectives, adverbs or even whole sentences. They are used as
All-purpose modifiers, such as very, quite or extremely.
Truth-values, such as quite true or mostly false.
Probabilities, such as likely or not very likely.
Quantifiers, such as most, several or few.
Possibilities, such as almost impossible or quite possible

Operations on Linguistic Values :

Operations on Linguistic Values: concentration, dilation and contrast intensification (intensif.m).
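The slide names the operations but not their formulas; the definitions below are the standard Zadeh hedge operations (a sketch, since the referenced intensif.m is not shown in the text):

```python
def concentration(mu):
    # CON(A) = A^2 -- models the hedge "very"
    return mu ** 2

def dilation(mu):
    # DIL(A) = A^0.5 -- models the hedge "more or less"
    return mu ** 0.5

def intensification(mu):
    # Contrast intensification: push membership degrees away from 0.5
    return 2 * mu ** 2 if mu <= 0.5 else 1 - 2 * (1 - mu) ** 2
```

For example, if x is "tall" to degree 0.8, then x is "very tall" to degree 0.64.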

Slide 14:

What is the difference between classical and fuzzy rules?
A classical IF-THEN rule uses binary logic:
Rule 1: IF speed is > 100 THEN stopping_distance is long
Rule 2: IF speed is < 40 THEN stopping_distance is short
In fuzzy logic, the same knowledge is expressed with linguistic values:
Rule 1: IF speed is fast THEN stopping_distance is long
Rule 2: IF speed is slow THEN stopping_distance is short

Fuzzy Reasoning :

Fuzzy Reasoning
Single rule with multiple antecedents
Rule: if x is A and y is B then z is C
Fact: x is A' and y is B'
Conclusion: z is C'
[Figure: graphic representation — the degrees to which A' matches A and B' matches B are combined by a T-norm to give the firing strength w, which is applied to C to produce C'.]

Slide 17:

4.6 Fuzzy inference
Fuzzy inference can be defined as a process of mapping from a given input to an
output, using the theory of fuzzy sets.
4.6.1 Mamdani-style inference
Rule 1: IF x is A3 OR y is B1 THEN z is C1
Rule 2: IF x is A2 AND y is B2 THEN z is C2
Rule 3: IF x is A1 THEN z is C3
In linguistic form:
Rule 1: IF project_funding is adequate OR project_staffing is small THEN risk is low
Rule 2: IF project_funding is marginal AND project_staffing is large THEN risk is normal
Rule 3: IF project_funding is inadequate THEN risk is high
where x, y and z (project funding, project staffing and risk) are linguistic variables;
A1, A2 and A3 (inadequate, marginal and adequate) are linguistic values determined by fuzzy sets on universe of discourse X (project funding); B1 and B2 (small and large) are linguistic values determined by fuzzy sets on universe of discourse Y (project staffing); C1, C2 and C3 (low, normal and high) are linguistic values determined by fuzzy sets on universe of discourse Z (risk).

Slide 18:

Step 1: Fuzzification
The crisp input x1 (project funding rated by the expert as 35 per cent) corresponds to the membership
functions A1 and A2 (inadequate and marginal) to the degrees of 0.5
and 0.2, respectively, and the crisp input y1 (project staffing rated as 60
per cent) maps to the membership functions B1 and B2 (small and large) to
the degrees of 0.1 and 0.7, respectively.
Step 2: Rule evaluation
The second step is to take the fuzzified inputs, μ(x=A1) = 0.5, μ(x=A2) = 0.2, μ(y=B1) = 0.1 and μ(y=B2) = 0.7, and apply them to the antecedents of the fuzzy rules. If a given fuzzy rule has multiple antecedents, the
fuzzy operator (AND or OR) is used to obtain a single number that represents the result of the antecedent evaluation. The truth value is then applied to the consequent membership function.
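Rule evaluation with these fuzzified inputs can be sketched as follows; OR is implemented as max and AND as min, and the degree 0.0 for "adequate" is an assumption (the text only gives the degrees for inadequate and marginal):

```python
# Fuzzified inputs from the example (funding rated 35%, staffing rated 60%)
mu_funding = {"inadequate": 0.5, "marginal": 0.2, "adequate": 0.0}  # adequate assumed 0
mu_staffing = {"small": 0.1, "large": 0.7}

# Rule 1: IF funding is adequate OR staffing is small THEN risk is low
w_low = max(mu_funding["adequate"], mu_staffing["small"])     # OR -> max
# Rule 2: IF funding is marginal AND staffing is large THEN risk is normal
w_normal = min(mu_funding["marginal"], mu_staffing["large"])  # AND -> min
# Rule 3: IF funding is inadequate THEN risk is high
w_high = mu_funding["inadequate"]                             # single antecedent
```

Each firing strength then clips (or scales) the corresponding consequent membership function.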

Slide 19:

Step 3: Aggregation of the rule outputs
Aggregation is the process of unification of the outputs of all rules.
In other words, we take the membership functions of all rule consequents
previously clipped or scaled and combine them into a single fuzzy set.
Step 4: Defuzzification
The last step in the fuzzy inference process is defuzzification. Fuzziness
helps us to evaluate the rules, but the final output of a fuzzy system has
to be a crisp number. How do we defuzzify the aggregate fuzzy set? The most popular method is the centroid technique. It finds the point where a vertical line would slice the aggregate set into two equal masses. Mathematically, this centre of gravity (COG) can be expressed as
COG = ∫ μA(x) x dx / ∫ μA(x) dx
In practice, a reasonable estimate can be obtained by calculating it over a sample of points, as
COG = Σ μA(x) x / Σ μA(x)
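A minimal sketch of the sample-based centroid estimate described above:

```python
def centroid(xs, mus):
    # Discrete centre of gravity: COG = sum(x * mu(x)) / sum(mu(x))
    # xs: sample points; mus: membership degrees of the aggregate set at xs
    return sum(x * m for x, m in zip(xs, mus)) / sum(mus)
```

For a symmetric aggregate set the estimate falls at the axis of symmetry.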

Slide 21:

Mamdani or Sugeno?
The Mamdani method is widely accepted for capturing expert knowledge. It
allows us to describe the expertise in a more intuitive, more human-like manner.
However, Mamdani-type fuzzy inference entails a substantial computational
burden. On the other hand, the Sugeno method is computationally effective and
works well with optimisation and adaptive techniques, which makes it very
attractive in control problems, particularly for dynamic nonlinear systems.

NEURO FUZZY SYSTEMS :

NEURO FUZZY SYSTEMS
Neural Networks: neural networks are low-level computational structures that perform well when dealing with raw data; although neural networks can learn, they are opaque to the user.
Fuzzy Systems: fuzzy logic deals with reasoning on a higher level, using linguistic information acquired from domain experts; fuzzy systems lack the ability to learn and cannot adjust themselves to a new environment.
Integrated neuro-fuzzy systems can combine the parallel computation and learning abilities of neural networks with the human-like knowledge representation and explanation abilities of fuzzy systems. As a result, neural networks become more transparent, while fuzzy systems become capable of learning.

Slide 23:

How does a neuro-fuzzy system look? The figure shows a Mamdani fuzzy inference model and the neuro-fuzzy system that corresponds to this model.

Slide 25:

Layer 1 is the input layer
Layer 2 is the input membership or fuzzification layer.
Neurons in this layer represent fuzzy sets used in the antecedents of fuzzy rules.
The activation function of a membership neuron is set to the function that specifies the neuron's fuzzy set.
A fuzzification neuron receives a crisp input and determines the degree to which this input belongs to the neuron's fuzzy set.
In this example a triangular membership function is used. As we can see, the output of a fuzzification
neuron depends not only on its input, but also on the centre, a, and the width, b,
of the triangular activation function. Parameters a and b of the fuzzification neurons can play the same role in a neuro-fuzzy system as synaptic weights in a neural network.
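A fuzzification neuron with the centre/width parameterisation described above might look as follows (a sketch; the exact triangular form used in the original figure is not shown in the text):

```python
def triangular_neuron(x, a, b):
    # a: centre, b: width of the triangular activation function
    # Output: degree to which crisp input x belongs to the neuron's fuzzy set
    if abs(x - a) >= b / 2:
        return 0.0
    return 1.0 - 2.0 * abs(x - a) / b
```

Training a neuro-fuzzy system shifts a and stretches b, just as training a neural network adjusts its weights.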

Slide 26:

Layer 3 is the fuzzy rule layer.
Each neuron in this layer corresponds to a single fuzzy rule. A fuzzy rule neuron receives inputs from the fuzzification neurons that represent fuzzy sets in the rule antecedents. In a neuro-fuzzy system, intersection can
be implemented by the product operator. Thus, the output of neuron i in Layer 3
is obtained as the product of its inputs; for example, μR1 = μA1 × μB1. The value of μR1 represents the firing strength of fuzzy rule neuron R1. The weights between Layer 3 and Layer 4 represent the normalised degrees of confidence (known as certainty factors) of the corresponding fuzzy rules. These weights are adjusted during training of a neuro-fuzzy system.

Slide 27:

What is the normalised degree of confidence of a fuzzy rule?
Different rules represented in a neuro-fuzzy system may be associated with different degrees of confidence. An expert may attach a degree of confidence to each fuzzy IF-THEN rule by setting the corresponding weights within the range [0, 1]. During training, however, these weights can change. To keep them within the specified range, the weights are normalised by dividing their respective values by the highest weight magnitude obtained at each iteration.
Layer 4 is the output membership layer. Neurons in this layer represent fuzzy sets used in the consequents of fuzzy rules. An output membership neuron receives inputs from the corresponding fuzzy rule neurons and combines them by using the fuzzy operation union. This operation can be implemented by the probabilistic OR (also known as the algebraic sum). For example, μC1 = μR3 ⊕ μR6 = μR3 + μR6 − μR3 · μR6, where the value of μC1 represents the integrated firing strength of fuzzy rule neurons R3 and R6.
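The probabilistic OR (algebraic sum) used by the output membership neurons can be sketched as follows; the firing strengths 0.2 and 0.5 are illustrative values, not taken from the text:

```python
def prob_or(a, b):
    # Algebraic sum: a (+) b = a + b - a*b
    return a + b - a * b

# Integrated firing strength of an output membership neuron
# fed by two rule neurons (illustrative firing strengths)
mu_r3, mu_r6 = 0.2, 0.5
mu_c1 = prob_or(mu_r3, mu_r6)
```

Unlike max, the algebraic sum lets both contributing rules raise the integrated firing strength.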

Slide 28:

Layer 5 is the defuzzification layer. Each neuron in this layer represents a single output of the neuro-fuzzy system.
It takes the output fuzzy sets clipped by the respective integrated firing strengths and combines them into a single fuzzy set.
The output of the neuro-fuzzy system is crisp, and thus the combined output fuzzy set must be defuzzified.
Neuro-fuzzy systems can apply standard defuzzification methods, including the centroid technique. In this example, the sum-product composition is used: it calculates the crisp output as the weighted average of the centroids of all output membership functions, i.e. the weighted average of the centroids of the clipped fuzzy sets C1 and C2.
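The weighted average of centroids used by the sum-product composition can be sketched as below (the centroid and strength values are illustrative, not from the text):

```python
def weighted_average(centroids, strengths):
    # Crisp output = sum(strength_i * centroid_i) / sum(strength_i)
    num = sum(w * c for c, w in zip(centroids, strengths))
    return num / sum(strengths)

# e.g. clipped sets C1 and C2 with centroids 20 and 70,
# integrated firing strengths 0.6 and 0.2
y = weighted_average([20.0, 70.0], [0.6, 0.2])
```

The output is pulled toward the centroid of the more strongly fired set.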

Slide 29:

How does a neuro-fuzzy system learn?
A neuro-fuzzy system is essentially a multi-layer neural network, and thus it can
apply standard learning algorithms developed for neural networks, including
the back-propagation algorithm (Kasabov, 1996; Lin and Lee, 1996; Nauck et al., 1997; Von Altrock, 1997).
When a training input-output example is presented to the system
The back-propagation algorithm computes the system output and compares it with the desired output of the training example.
The difference (also called the error) is propagated backwards through the network from the output layer to the input layer.
The neuron activation functions are modified as the error is propagated.
To determine the necessary modifications, the backpropagation algorithm differentiates the activation functions of the neurons.

Slide 30:

Distribution of 100 training patterns in the three-dimensional input-output space X1 × X2 × Y. Each training pattern here is determined by three variables: two inputs x1 and x2, and one output y. Input and output variables are represented by two linguistic values: small (S) and large (L). The data set of Figure 8.7 is used for training the five-rule neuro-fuzzy system shown in Figure 8.8(a). Suppose that the fuzzy IF-THEN rules incorporated into the system structure are supplied by a domain expert.

Slide 31:

The initial weights between Layer 3 and Layer 4 are set to unity.
During training the neuro-fuzzy system uses the back-propagation algorithm to
adjust the weights and to modify input and output membership functions.

Slide 32:

On top of that, we cannot be sure that the 'expert' has not left out a few rules.
What can we do to reduce our dependence on the expert knowledge?
Can a neuro-fuzzy system extract rules directly from numerical data?
Given input and output linguistic values, a neuro-fuzzy system can automatically
generate a complete set of fuzzy IF-THEN rules. Because expert knowledge is not embodied in the system this time, we set all
initial weights between Layer 3 and Layer 4 to 0.5. After training we can eliminate
all rules whose certainty factors are less than some sufficiently small number, say
0.1. As a result, we obtain the same set of four fuzzy IF-THEN rules. This example demonstrates that a neuro-fuzzy system can extract fuzzy rules directly from numerical data.

Slide 34:

The combination of fuzzy logic and neural networks constitutes a powerful
means for designing intelligent systems. Domain knowledge can be put into a neuro-fuzzy system by human experts in the form of linguistic variables and fuzzy rules.
When a representative set of examples is available, a neuro-fuzzy
system can automatically transform it into a robust set of fuzzy IF-THEN rules, and thereby reduce our dependence on expert knowledge when building intelligent systems.

Slide 35:

ANFIS: Adaptive Neuro-Fuzzy Inference System
The Sugeno fuzzy model was proposed for a systematic approach to generating fuzzy rules from a given input-output data set. A typical Sugeno fuzzy rule can be
expressed in the following form:
IF x1 is A1 AND x2 is A2 ... AND xm is Am THEN y = f(x1, x2, ..., xm)
where x1, x2, ..., xm are input variables; A1, A2, ..., Am are fuzzy sets; and y is either a constant or a linear function of the input variables. When y is a constant,
we obtain a zero-order Sugeno fuzzy model in which the consequent of a rule is
specified by a singleton. When y is a first-order polynomial, we obtain a first-order Sugeno fuzzy model. Jang's ANFIS is normally represented by a six-layer feedforward neural
network. Figure 8.10 shows the ANFIS architecture that corresponds to the first-order
Sugeno fuzzy model.

Slide 36:

Each input is represented by two fuzzy sets, and the output by a first-order polynomial.
The ANFIS implements four rules:

Slide 37:

Layer 1 is the input layer. Layer 2 is the fuzzification layer. Neurons in this layer perform fuzzification.
In Jang's model, fuzzification neurons have a bell activation function.
A bell activation function, which has a regular bell shape, is specified as:
yi = 1 / (1 + ((xi − ai)/ci)^(2bi))
where xi is the input and yi is the output of neuron i in Layer 2, and ai, bi and ci are parameters that control, respectively, the centre, width and slope of the bell activation function of neuron i.
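Following the parameterisation given later in these notes ([1 + ((x − a)/c)^(2b)]^(−1)), a bell fuzzification neuron can be sketched as:

```python
def bell_neuron(x, a, b, c):
    # Generalized bell: y = 1 / (1 + ((x - a)/c)^(2b))
    # a: centre, b: width, c: slope parameters of the bell
    # Squaring first keeps the base non-negative for non-integer b.
    return 1.0 / (1.0 + (((x - a) / c) ** 2) ** b)
```

The output is 1 at the centre x = a and falls to 0.5 at x = a ± c, for any b.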

Slide 38:

Layer 3 is the rule layer.
Each neuron in this layer corresponds to a single Sugeno-type fuzzy rule.
A rule neuron receives inputs from the respective fuzzification neurons and calculates the firing strength of the rule it represents.
In an ANFIS, the conjunction of the rule antecedents is evaluated by the product operator. Thus, the output of neuron i in Layer 3 is obtained as the product of the membership grades of its antecedents.

Slide 39:

Layer 4 is the normalisation layer.
Each neuron in this layer receives inputs from all neurons in the rule layer, and calculates the normalised firing strength of a given rule.
The normalised firing strength is the ratio of the firing strength of a given rule
to the sum of the firing strengths of all rules. It represents the contribution of a given
rule to the final result.
Thus, the output of neuron i in Layer 4 is determined as the ratio wi / (w1 + w2 + ... + wn).

Slide 40:

Layer 5 is the defuzzification layer. Each neuron in this layer is connected to
the respective normalisation neuron, and also receives the initial inputs, x1 and x2.
A defuzzification neuron calculates the weighted consequent value of a given
rule. Layer 6 is represented by a single summation neuron. This neuron calculates
the sum of the outputs of all defuzzification neurons and produces the overall ANFIS
output, y. Thus, the ANFIS shown in Figure 8.10 is indeed functionally equivalent to a first-order
Sugeno fuzzy model.

Slide 41:

How does an ANFIS learn?
An ANFIS uses a hybrid learning algorithm that combines the least-squares
estimator and the gradient descent method (Jang, 1993). In the ANFIS training algorithm, each epoch is composed of a forward pass and a backward pass.
In the forward pass, a training set of input patterns (an input vector) is presented to the ANFIS, neuron outputs are calculated on a layer-by-layer basis, and rule consequent parameters are identified by the least-squares estimator. In Sugeno-style fuzzy inference, an output, y, is a linear function. Thus, given the values of the membership parameters and a training set of P input-output patterns, we can form P linear equations in terms of the consequent parameters.
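For a two-rule, two-input first-order Sugeno model, the consequent identification can be sketched as an ordinary least-squares solve; the helper name and the data layout are illustrative assumptions:

```python
import numpy as np

def consequent_lse(rows, yd):
    """Solve for consequent parameters [p1, q1, r1, p2, q2, r2].

    rows: list of (wn1, wn2, x, y) -- normalised firing strengths and inputs
          for each training pattern; yd: desired outputs.
    Each pattern yields one linear equation:
      yd = wn1*(p1*x + q1*y + r1) + wn2*(p2*x + q2*y + r2)
    """
    A = np.array([[w1 * x, w1 * y, w1, w2 * x, w2 * y, w2]
                  for w1, w2, x, y in rows])
    k, *_ = np.linalg.lstsq(A, np.array(yd), rcond=None)
    return k
```

With P patterns the system has P equations in six unknowns, solved in the least-squares sense.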

Slide 42:

Mamdani Assilian Model
R1: If x is A1 and y is B1 then z is C1
R2: If x is A2 and y is B2 then z is C2
Ai , Bi and Ci, are fuzzy sets defined on the universes of x, y, z respectively
Takagi-Sugeno Model
R1: If x1 is A1 and x2 is B1 then z = f1(x1, x2)
R2: If x1 is A2 and x2 is B2 then z = f2(x1, x2)
For example: fi(x1, x2) = ki1 x1 + ki2 x2 + ki0

ANFIS Architecture: Sugeno’s ANFIS :

ANFIS Architecture: Sugeno's ANFIS
Assume that the FIS has two inputs x, y and one output z.
Sugeno’s ANFIS:
Rule1: If x is A1 and y is B1, then f1 = p1x+q1y+r1.
Rule2: If x is A2 and y is B2, then f2 = p2x+q2y+r2.

ANFIS Architecture: Sugeno’s ANFIS :

ANFIS Architecture: Sugeno's ANFIS
Layer 1: fuzzification layer
Every node i in layer 1 is an adaptive node with a node function
O1,i = μAi(x) for i = 1, 2 (the membership grade of a fuzzy set A1, A2)
O1,i = μBi-2(y) for i = 3, 4
Parameters in this layer: premise (or antecedent) parameters.
Layer 2: rule layer
A fixed node labeled Π whose output is the product of all the incoming signals:
O2,i = wi = μAi(x) μBi(y) for i = 1, 2 (the firing strength of a rule).
Layer 3: normalization layer
A fixed node labeled N.
The i-th node calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths: O3,i = w̄i = wi / (w1 + w2) for i = 1, 2
Outputs of this layer are called normalized firing strengths.
Layer 4: defuzzification layer
An adaptive node with a node function O4,i = w̄i fi = w̄i (pi x + qi y + ri) for i = 1, 2,
where w̄i is a normalized firing strength from layer 3 and {pi, qi, ri} is the parameter set of this node (the consequent parameters).
Layer 5: summation neuron
A fixed node which computes the overall output as the summation of all incoming signals:
Overall output = O5,1 = Σ w̄i fi = Σ wi fi / Σ wi
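The layers above can be sketched as a single forward pass; the bell membership function and all parameter values below are illustrative assumptions:

```python
def bell_mf(v, a, b, c):
    # Bell membership: 1 / (1 + ((v - a)/c)^(2b))
    return 1.0 / (1.0 + (((v - a) / c) ** 2) ** b)

def anfis_forward(x, y, A, B, f):
    # A, B: bell parameters (a, b, c) for the fuzzy sets of inputs x and y
    # f: consequent parameter triples (p, q, r), one per rule
    mu_A = [bell_mf(x, *p) for p in A]            # Layer 1: fuzzification
    mu_B = [bell_mf(y, *p) for p in B]
    w = [ma * mb for ma, mb in zip(mu_A, mu_B)]   # Layer 2: rule firing strengths
    wn = [wi / sum(w) for wi in w]                # Layer 3: normalisation
    z = [wi * (p * x + q * y + r)                 # Layer 4: defuzzification
         for wi, (p, q, r) in zip(wn, f)]
    return sum(z)                                 # Layer 5: overall output
```

Because the normalised strengths sum to 1, identical constant consequents always reproduce that constant, which is a quick sanity check.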

ANFIS Architecture: Sugeno’s ANFIS :

ANFIS Architecture: Sugeno's ANFIS
How does an ANFIS learn?
A hybrid learning algorithm: the least-squares estimator + the gradient descent method.
Forward pass: adjustment of consequent parameters pi, qi, ri.
Rule consequent parameters are identified by the least-squares estimator.
Find the least-squares estimate k* of k = [r1 p1 q1, r2 p2 q2, ..., rn pn qn] that minimizes the error e = Od − O, with the instantaneous squared error
E = e² / 2 = (Od − O)² / 2
The consequent parameters are adjusted while the antecedent parameters remain fixed.
Backward pass: adjustment of antecedent parameters.
The antecedent parameters are tuned while the consequent parameters are kept fixed.
E.g. the bell activation function: [1 + ((x − a)/c)^(2b)]^(−1).
Consider a correction Δa applied to parameter a: a = a + Δa, where Δa is obtained from the gradient of E by the chain rule.

Slide 46:

As soon as the rule consequent parameters are established, we can compute an
actual network output vector, y, and determine the error vector, e = yd − y. In the backward pass, the back-propagation algorithm is applied. The error signals are propagated back, and the antecedent parameters are updated according to the chain rule.
Let us, for instance, consider a correction applied to parameter a of the bell
activation function used in neuron A1. We may express it as Δa = −α ∂E/∂a, expanded by the chain rule, where α is the learning rate, and E is the instantaneous value of the squared error
for the ANFIS output neuron, i.e. E = (yd − y)² / 2.

Slide 47:

In the ANFIS training algorithm suggested by Jang, both antecedent parameters
and consequent parameters are optimised.
In the forward pass, the consequent parameters are adjusted while the antecedent parameters remain
fixed.
In the backward pass, the antecedent parameters are tuned while the
consequent parameters are kept fixed.
However, in some cases, when the input-output data set is relatively small, membership functions can be described by a human expert. In such situations, these membership functions are kept fixed throughout the training process, and only the consequent parameters are adjusted (Jang et al., 1997).

Constraints for Training Fuzzy Sets :

Constraints for Training Fuzzy Sets Valid parameter values
Non-empty intersection of adjacent fuzzy sets
Keep relative positions
Maintain symmetry
Complete coverage (degrees of membership add up to 1 for each element)
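The "complete coverage" constraint listed above can be checked numerically; a sketch with two complementary ramp fuzzy sets (the helper name and example sets are illustrative):

```python
def coverage_ok(mfs, xs, tol=1e-6):
    # Complete coverage: degrees of membership add up to 1 at every sample point
    return all(abs(sum(mf(x) for mf in mfs) - 1.0) <= tol for x in xs)

# Two complementary ramps on [0, 1] form a complete partition
low  = lambda x: max(0.0, 1.0 - x)
high = lambda x: min(1.0, x)
```

A training step that violates such a constraint can be rejected or projected back onto valid parameter values.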

A Neuro-Fuzzy System :

A Neuro-Fuzzy System is a fuzzy system trained by heuristic learning techniques derived from neural networks
can be viewed as a 3-layer neural network with fuzzy weights and special activation functions
is always interpretable as a fuzzy system
uses constrained learning procedures
is a function approximator (classifier, controller)

Slide 54:

The NEFCLASS Model The main goal of NEFCLASS is to create a readable classifier that also provides an
acceptable accuracy. An interpretable fuzzy system should display the following features:
few meaningful rules with few variables in their antecedents,
few meaningful sets for each variable,
there are no rule weights,
identical linguistic terms are represented by identical fuzzy sets,
only normal fuzzy sets are used, or even better fuzzy numbers or fuzzy intervals.

Slide 55:

NEFCLASS provides means to ensure the readability of the solution by giving
the user complete control over the learning process. It should also be stressed that
interpretable solutions can usually not be obtained without the user's cooperation.
NEFCLASS must be seen as a tool that supports users in finding readable fuzzy classifiers. It is not an automatic classifier creator where data is fed in and a solution pops out. It is necessary that the user works with this tool.
For this reason only fast learning strategies are used to give the user the possibility to interact with the tool.

Slide 56:

2.1 The Structure of the NEFCLASS Model
It is possible to view a neuro-fuzzy system as a special three-layered feedforward neural network where:
the first layer represents the input variables, i.e. the pattern tuples,
the hidden layer represents fuzzy rules,
the third layer represents the output variables, i.e. one unit for every class,
the units use t-norms and t-conorms as activation functions,
the fuzzy sets are encoded as (fuzzy) connection weights.
The learning algorithm works by modifying the structure and/or the parameters, i.e. the inclusion or deletion of neurons or adaptation of the weights. It is an important aspect that the changes caused by the learning process can be interpreted in terms of neural networks as well as in terms of fuzzy systems. The black-box behaviour of neural networks is avoided, and a successful learning process can be seen as an increase of explicit knowledge represented in the rule base.

Slide 57:

For semantical reasons all these weights are fixed at 1. Alternatively, the output activation can be computed by a maximum operation instead of a weighted sum.

Slide 58:

2.2 Learning a Rule Base - The Algorithm
A NEFCLASS system can be
built from partial knowledge about the patterns and refined by learning, or
it can begin with an empty rule base that is filled by creating rules from the training data.
For each input variable the user must decide how many fuzzy sets are to be used to partition the domain of the respective variable.
The user must also specify a value kmax, i.e. the maximum number of rule nodes that may be created in the hidden layer.
For each class there must be at least one rule.

Slide 60:

There are three ways to create a rule base for a NEFCLASS system.
The “simple” procedure can only be successful, if the patterns are selected randomly
from the learning set, and if the cardinalities of the classes are approximately equal. It works for simple problems like the Iris data.
Usually a user will choose “best” or “best per class” rule learning.
The “best per class” should be selected, when one supposes that the patterns are distributed in an equal number of clusters per class.
"Best rule" learning is suitable when there are classes which have to be represented by a larger number of rules than other classes. Either way, rule learning is completed after three cycles through the data set.

Slide 61:

1) With the "best rule" algorithm
NEFCLASS first created 19 rules (81 would be possible) and selected the best 7 rules (see Table 1). Fuzzy set learning stopped after 126 epochs, because the error had not decreased for 50 epochs. Results after learning:
Training set: 96% correct (3 out of 75 misclassified)
Test set: 97.3% correct (2 out of 75 misclassified)
Total: 96.67% correct (5 out of 150 misclassified)
The seven rules found by NEFCLASS to classify the Iris data:
if (s, m, s, s) then Setosa
if (s, s, s, s) then Setosa
if (m, s, l, l) then Virginica
if (l, s, l, l) then Virginica
if (l, m, l, l) then Virginica
if (m, s, m, m) then Versicolour
if (m, s, m, s) then Versicolour
Deriving fuzzy rules from data, example: the Iris data problem consists of 150 patterns belonging to three different classes (Iris Setosa, Iris Versicolour and Iris Virginica) with 50 patterns each. The patterns have four input features (sepal and petal length and width of the iris flower).

Slide 62:

2) The same problem with the "best per class" algorithm using only the third and fourth inputs
Using only the third and fourth input, and allowing the system to create three rules, with "best per class" rule learning the system first finds 5 rules (9 would be possible) and finally selects the three rules shown in Table 2. Results after training for 110 epochs:
Training set: 2 errors, i.e. the same performance as seven rules using all four features.
Test set: 3 errors.
Comparison with FuNe-I:
FuNe-I reached a classification rate of 99% on the test set of the Iris data using 13 rules and four inputs, and a 96% classification rate using 7 rules and 3 inputs.
FuNe-I has a more complex structure and learning procedure.
FuNe-I uses weighted rules, which NEFCLASS avoids.

Slide 64:

2.3 Training Fuzzy Sets - The Algorithm
The supervised learning algorithm of NEFCLASS adapts its fuzzy sets by running cyclically through the learning set until a given end criterion is met, e.g. when a number of admissible misclassifications is reached, or when the error cannot be decreased further. For the same reason the learning procedure cannot reach an error value of zero, and therefore the change in error is usually used as a stopping criterion for the learning algorithm. In [Nauck et al. 97] it is reported that rule weights are not necessary to obtain good classification results. However, without rule weights a NEFCLASS system usually cannot produce exact output values of 0 or 1, due to the mathematics involved.

Slide 65:

In this case the system is allowed to create three rules, therefore there
are unclassified patterns.

Slide 66:

One can see that the classification result is not bad, but improvements are desired. Pattern 1 and pattern 2 are misclassified and three patterns are not classified. Shifting and modifying the fuzzy sets would help:
Pattern 1 is correctly classified if
fuzzy set b is a bit smaller from the top,
fuzzy set d is a bit wider to the bottom,
fuzzy set c’ is a bit wider to the left.
Pattern 2 is correctly classified if
fuzzy set c’ is a bit smaller from the right,
fuzzy set e’ is a bit wider to the left.
The unclassified patterns are correctly classified if
fuzzy set b’ is a bit wider to the right.

Slide 67:

Wisconsin Breast Cancer Data Set
This data set contains 683 cases distributed into two classes (benign and malignant) with 9 features.
1) When prior knowledge is supplied:
A fuzzy clustering method is used to obtain fuzzy rules.
It discovered three clusters that were interpreted as fuzzy rules.
It found trapezoidal membership functions that closely matched the projections.
The membership functions were interpreted as small, medium and large.
Question: how does NEFCLASS perform when prior knowledge is supplied? Result: after 80 epochs of training, we obtained a result of only 50 errors (92.7% correct).

Slide 68:

2) NEFCLASS without prior knowledge, using four rules and "best per class" rule learning. Result: it performs badly, with 135 errors (80.4% correct).
So in this case using prior knowledge is a substantial advantage. The ability of neuro-fuzzy models to perform rule extraction, as opposed to the MLP, makes them attractive for real-world pattern recognition applications. Comparison of neuro-fuzzy networks with the MLP.

Applying NEFCLASS-J :

Applying NEFCLASS-J Tool for developing Neuro-Fuzzy Classifiers
Written in JAVA
Free version for research available
Project started at the Neuro-Fuzzy Group of the University of Magdeburg, Germany.
A NEFCLASS learning result should be analyzed to obtain information that can be used to reach an even better result by running NEFCLASS again with fewer rules or fuzzy sets.
The goal of NEFCLASS is to provide an interpretable fuzzy classifier.

Conclusions :

Conclusions NEFCLASS-J offers the following new features for creating neuro-fuzzy classifiers based on the NEFCLASS model:
batch learning to remove the dependency of the learning algorithm on the sequence of data,
automatic cross-validation to determine the validity of a classifier,
automatic determination of the rule base size,
handling of missing values,
automatic pruning of a classifier to reduce its size and to increase its interpretability,
a completely new GUI with the look and feel of standard applications.

Learning Fuzzy Rules :

Learning Fuzzy Rules Cluster-oriented approaches=> find clusters in data, each cluster is a rule
Hyperbox-oriented approaches=> find clusters in the form of hyperboxes
Structure-oriented approaches => use predefined fuzzy sets to structure the data space, pick rules from grid cells
