Slide 1: An improved approach to automatically build fuzzy model rules (模糊模型規則庫自動建立之演算法)
Speaker: Nai-Jian Wang (王乃堅), Department of Electrical Engineering, National Taiwan University of Science and Technology
Date: October 20, 2001. Location: Department of Economics, National Chengchi University
Slide 2: Outline
Motivations
The concept of system identification
The improved algorithm
Simulations and Discussions
Conclusions and Future Works
Slide 3: Motivation
[Diagram: given only I/O data, construct a model of the I/O relation, then modify it.]
Slide 4: The concept of system identification
Structure identification I: (a) input candidates; (b) input variables
Structure identification II: (a) number of rules; (b) partition of the input space
Parameter identification
Slide 5: Takagi and Sugeno's model
[Figure: the input range of x is partitioned into two fuzzy sets, Small and Big; rule R1 (If x is Small, then y = ...) and rule R2 (If x is Big, then y = ...) each fit a linear function to the data in their region.]
Slide 6: Sugeno and Yasukawa's model
[Figure: the input range of x is partitioned into three fuzzy sets, Small, Medium and Big, giving rules R1, R2 and R3 of the form "If x is Ai, then ..."; each rule covers the data falling in its region.]
Slide 7: Fuzzy modeling
[Flowchart] Start → decide the number of linear systems (rules) → build the preliminary fuzzy system parameters → optimize the fuzzy system parameters → stop condition satisfied? If yes, end; if no, increase the number of linear systems by 1 and repeat.
Slide 8: To decide the number of rules
[Flowchart] Start → set the initial number of clusters to 2 → run the FCM algorithm → has S(c) reached its minimum? If yes, end; if no, increase the number of clusters by 1 and repeat.
Slide 9: Fuzzy C-means clustering

Cluster centers:
    v_i = ( Σ_{k=1}^{n} (μ_ik)^m x_k ) / ( Σ_{k=1}^{n} (μ_ik)^m ),  1 ≤ i ≤ c
Membership update:
    μ_ik = [ Σ_{j=1}^{c} ( d_ik / d_jk )^{2/(m−1)} ]^{−1},  where d_ik = ‖x_k − v_i‖
Singular case:
    I_k = { i | 1 ≤ i ≤ c, d_ik = ‖x_k − v_i‖ = 0 },  Ĩ_k = {1, 2, …, c} − I_k
    If I_k ≠ ∅, set μ_ik = 0 for i ∈ Ĩ_k and Σ_{i ∈ I_k} μ_ik = 1.
[Figure: two clusters S1 and S2 of points in the (X1, X2) plane.]
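A minimal numpy sketch of the FCM updates above, assuming a data matrix X of shape (n, d); the function name and defaults (fcm, max_iter, tol) are illustrative choices, not taken from the slides.

    import numpy as np

    def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
        """Fuzzy C-means: returns centers V (c x d) and memberships U (c x n)."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.random((c, n))
        U /= U.sum(axis=0, keepdims=True)                      # each column sums to 1
        for _ in range(max_iter):
            Um = U ** m
            V = (Um @ X) / Um.sum(axis=1, keepdims=True)       # v_i = sum_k u_ik^m x_k / sum_k u_ik^m
            d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)   # d_ik = ||x_k - v_i||
            d = np.fmax(d, 1e-12)                              # sidestep the singular case d_ik = 0
            U_new = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))).sum(axis=1)
            if np.abs(U_new - U).max() < tol:                  # stop when the memberships settle
                return V, U_new
            U = U_new
        return V, U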
Slide 10: To determine the number of rules

    S(c) = Σ_{k=1}^{n} Σ_{i=1}^{c} (μ_ik)^m ( ‖x_k − v_i‖² − ‖v_i − x̄‖² )

where x̄ is the mean of the data. The number of clusters c is chosen where S(c) reaches its minimum.

[Plot: S(c) versus c]
    c = 2: −21.84   c = 3: −34.39   c = 4: −42.38   c = 5: −43.81 (minimum)   c = 6: −43.52
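A sketch of this validity measure in numpy, reusing the fcm sketch from the previous slide; the function name validity_S is illustrative.

    import numpy as np

    def validity_S(X, V, U, m=2.0):
        """S(c) = sum_k sum_i (u_ik)^m * (||x_k - v_i||^2 - ||v_i - x_bar||^2)."""
        x_bar = X.mean(axis=0)
        d_xv = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) ** 2   # ||x_k - v_i||^2, shape (c, n)
        d_vx = (np.linalg.norm(V - x_bar, axis=1) ** 2)[:, None]            # ||v_i - x_bar||^2, shape (c, 1)
        return float(((U ** m) * (d_xv - d_vx)).sum())

    # selection loop as on the slide: run FCM for c = 2..6 and keep the c with minimum S(c)
    # for c in range(2, 7):
    #     V, U = fcm(X, c)
    #     print(c, validity_S(X, V, U))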
Slide 11: Coarse fuzzy modeling
Fuzzy C-Regression Model (FCRM)
Premise parameters generation
Consequent parameters generation
Slide 12: Fuzzy C-Regression Model (1)

Step 1: collect the P input-output pairs into the data matrices
    X = [ x_1^T 1 ; x_2^T 1 ; … ; x_P^T 1 ],  Y = [ y_1, y_2, …, y_P ]^T
Step 2: for each cluster i, build the weighting matrix
    D_i = diag( d_i11, d_i22, …, d_iPP ),  d_ikk = (U_ik)^{m_r}
and estimate the regression parameters by weighted least squares:
    θ_i^l = ( X^T D_i X )^{−1} X^T D_i Y
Slide 13: Fuzzy C-Regression Model (2)

Step 3: compute the regression errors and index sets
    E_ik(θ_i^l) = ( y_k − X_k^T θ_i^l )²
    I_k = { i | 1 ≤ i ≤ c, E_ik(θ_i^l) = 0 },  Ĩ_k = {1, 2, …, c} − I_k
then update the partition matrix: if I_k = ∅,
    μ_ik = [ Σ_{j=1}^{c} ( E_ik / E_jk )^{1/(m−1)} ]^{−1}
otherwise set μ_ik = 0 for i ∈ Ĩ_k and Σ_{i ∈ I_k} μ_ik = 1.
Step 4: if ‖U^{r+1} − U^r‖ is small enough, stop; otherwise return to step 2.
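A compact numpy sketch of the FCRM iteration on slides 12–13 (per-cluster weighted least squares followed by a membership update from the regression errors); X is assumed to already include a column of ones, and all names are illustrative.

    import numpy as np

    def fcrm(X, y, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
        """Fuzzy c-regression: returns per-cluster parameters Theta (c x p) and memberships U (c x P)."""
        rng = np.random.default_rng(seed)
        P = X.shape[0]
        U = rng.random((c, P))
        U /= U.sum(axis=0, keepdims=True)                    # columns of U sum to 1
        Theta = np.zeros((c, X.shape[1]))
        for _ in range(max_iter):
            # step 2: weighted least squares, theta_i = (X^T D_i X)^(-1) X^T D_i Y
            for i in range(c):
                D = np.diag(U[i] ** m)
                Theta[i] = np.linalg.solve(X.T @ D @ X, X.T @ D @ y)
            # step 3: regression errors E_ik = (y_k - X_k^T theta_i)^2 and membership update
            E = (y[None, :] - Theta @ X.T) ** 2
            E = np.fmax(E, 1e-12)                            # guard the singular case E_ik = 0
            U_new = 1.0 / ((E[:, None, :] / E[None, :, :]) ** (1.0 / (m - 1.0))).sum(axis=1)
            # step 4: stop once the partition matrix stops changing
            if np.abs(U_new - U).max() < tol:
                return Theta, U_new
            U = U_new
        return Theta, U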
Slide 14: Premise parameters generation (1)

Start from the partition matrix produced by clustering,
    U = [ μ_11 … μ_1n ; … ; μ_c1 … μ_cn ]  (c × n)
[Plot: membership degree of each sample plotted against the order of the data set.]
Slide 15: Premise parameters generation (2)
[Plots: (left) the input data plotted against the order of the data set; (right) the membership degrees re-plotted against the input data values.]
Slide 16: Premise parameters generation (3)
[Plots: membership degree versus input data (two panels).]
Slide 17: Premise parameters generation (4)

Each premise fuzzy set is represented by a Gaussian membership function
    A_ik( x_k ; p_ik1, p_ik2 ) = exp( −(1/2) ( ( x_k − p_ik1 ) / p_ik2 )² )
whose parameters are computed from the memberships:
    p_ik1 = ( Σ_{j=1}^{n} μ_ij x_jk ) / ( Σ_{j=1}^{n} μ_ij )
    p_ik2 = [ ( Σ_{j=1}^{n} μ_ij ( x_jk − p_ik1 )² ) / ( Σ_{j=1}^{n} μ_ij ) ]^{1/2}
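A small numpy sketch of this step: the memberships U from clustering are turned into a weighted mean p_ik1 and weighted spread p_ik2 per rule and input dimension. The product-style firing strength in membership() is an assumption consistent with the gradient formulas on slide 22, and all names are illustrative.

    import numpy as np

    def gaussian_premises(X, U):
        """X: (n, d) data, U: (c, n) memberships from clustering.
        Returns centers p_ik1 and widths p_ik2, both of shape (c, d)."""
        W = U / U.sum(axis=1, keepdims=True)                           # normalize each rule's weights
        centers = W @ X                                                # p_ik1: membership-weighted mean
        spreads = np.sqrt(np.fmax(W @ (X ** 2) - centers ** 2, 0.0))   # p_ik2: weighted standard deviation
        return centers, np.fmax(spreads, 1e-6)                         # keep the widths strictly positive

    def membership(x, centers, spreads):
        """A_ik(x_k) = exp(-0.5*((x_k - p_ik1)/p_ik2)^2) and firing strengths w_i (product over inputs)."""
        A = np.exp(-0.5 * ((x[None, :] - centers) / spreads) ** 2)
        return A, A.prod(axis=1)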
Slide 18: Consequent parameters generation

Each rule has a linear consequent,
    y_1 = a_10 + a_11 x_1 + … + a_1r x_r
    y_i = a_i0 + a_i1 x_1 + … + a_ir x_r
    y_c = a_c0 + a_c1 x_1 + … + a_cr x_r
with the coefficients taken from the weighted least-squares estimates, i.e. from minimizing
    E_ik = ( y_k − X_k^T θ_i(l) )²
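A sketch that composes the premise and consequent parameters into the Takagi-Sugeno output ŷ = Σ_i w_i·y_i / Σ_i w_i used later for fine tuning; it assumes the Gaussian premises above and Theta rows of the form [a_i0, a_i1, …, a_ir] (illustrative names).

    import numpy as np

    def ts_output(x, centers, spreads, Theta):
        """Takagi-Sugeno inference for one input vector x of length r."""
        A = np.exp(-0.5 * ((x[None, :] - centers) / spreads) ** 2)   # A_ik(x_k), shape (c, r)
        w = A.prod(axis=1)                                           # rule firing strengths w_i
        y_rules = Theta @ np.concatenate(([1.0], x))                 # y_i = a_i0 + a_i1*x_1 + ... + a_ir*x_r
        return float((w * y_rules).sum() / w.sum())                  # y_hat = sum_i w_i*y_i / sum_i w_i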
Slide 19: Fine tuning
[Flowchart] Start → take the current premise parameters, consequent parameters and step size → take the partial derivatives of the objective function with respect to the premise and the consequent parameters → build a new set of parameters, θ_next = θ_now − η·g → stop condition satisfied? If yes, end; if no, adjust the step size, update the rule base, and repeat.
Slide 20: The steepest descent method

    θ_next = θ_now − η·g,  g = ∇E(θ)|_{θ = θ_now} = [ ∂E/∂θ_1, ∂E/∂θ_2, …, ∂E/∂θ_n ]^T

[Figure: descending along the negative gradient direction toward the minimum θ*.]
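A generic steepest-descent step written out in numpy; the objective in the usage comment is a stand-in for illustration, not the model's E.

    import numpy as np

    def steepest_descent(grad, theta0, eta=0.01, max_iter=1000, tol=1e-8):
        """theta_next = theta_now - eta * grad(theta_now), iterated until the step becomes tiny."""
        theta = np.asarray(theta0, dtype=float)
        for _ in range(max_iter):
            theta_new = theta - eta * grad(theta)
            if np.linalg.norm(theta_new - theta) < tol:
                return theta_new
            theta = theta_new
        return theta

    # toy usage: E(theta) = ||theta||^2 has gradient 2*theta, so the iteration converges toward 0
    # theta_star = steepest_descent(lambda th: 2.0 * th, np.array([3.0, -1.0]))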
Slide 21: The gradient of objective function (1)

Model output and error:
    ŷ = ( Σ_{i=1}^{c} w_i y_i ) / ( Σ_{i=1}^{c} w_i ),  e = y_des − ŷ
For a premise parameter p_iks, by the chain rule,
    ∂E/∂p_iks = −2 ( y_des − ŷ ) · ( ( y_i − ŷ ) / Σ_{j=1}^{c} w_j ) · ∂w_i/∂p_iks
so the remaining term to evaluate is ∂w_i/∂p_iks.
Slide 22: The gradient of objective function (2)

    ∂w_i/∂p_ik1 = ( w_i / A_ik ) · ∂A_ik/∂p_ik1,  ∂A_ik/∂p_ik1 = exp( −(1/2)((x_k − p_ik1)/p_ik2)² ) · ( x_k − p_ik1 ) / p_ik2²
    ∂w_i/∂p_ik2 = ( w_i / A_ik ) · ∂A_ik/∂p_ik2,  ∂A_ik/∂p_ik2 = exp( −(1/2)((x_k − p_ik1)/p_ik2)² ) · ( x_k − p_ik1 )² / p_ik2³
Slide 23: The gradient of objective function (3)

For a consequent parameter a_ik,
    ∂E/∂a_ik = −2 ( y_des − ŷ ) · ∂ŷ/∂a_ik = −2 ( y_des − ŷ ) · ( w_i / Σ_{j=1}^{c} w_j ) · x_k
since, with ŷ = Σ_{i=1}^{c} w_i ( a_i0 + a_i1 x_1 + … + a_ir x_r ) / Σ_{i=1}^{c} w_i,
    ∂ŷ/∂a_i0 = w_i / Σ_{j=1}^{c} w_j  and  ∂ŷ/∂a_ik = ( w_i / Σ_{j=1}^{c} w_j ) · x_k,  k = 1, …, r.
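A numpy sketch of these gradient formulas for a single training pair (x, y_des), under the same assumptions as the earlier sketches (w_i is the product of Gaussian memberships, Theta rows are [a_i0, …, a_ir]); the names are illustrative.

    import numpy as np

    def ts_gradients(x, y_des, centers, spreads, Theta):
        """E = (y_des - y_hat)^2 and its gradients w.r.t. centers (p_ik1), spreads (p_ik2), Theta (a_ik)."""
        A = np.exp(-0.5 * ((x[None, :] - centers) / spreads) ** 2)        # A_ik
        w = A.prod(axis=1)                                                # w_i
        xe = np.concatenate(([1.0], x))                                   # [1, x_1, ..., x_r]
        y_rules = Theta @ xe                                              # y_i
        w_sum = w.sum()
        y_hat = (w @ y_rules) / w_sum
        err = y_des - y_hat
        E = err ** 2
        dE_dw = -2.0 * err * (y_rules - y_hat) / w_sum                    # slide 21: dE/dw_i
        dw_dc = w[:, None] * (x[None, :] - centers) / spreads ** 2        # slide 22: dw_i/dp_ik1
        dw_ds = w[:, None] * (x[None, :] - centers) ** 2 / spreads ** 3   # slide 22: dw_i/dp_ik2
        grad_centers = dE_dw[:, None] * dw_dc
        grad_spreads = dE_dw[:, None] * dw_ds
        grad_Theta = (-2.0 * err * (w / w_sum))[:, None] * xe[None, :]    # slide 23: dE/da_ik
        return E, grad_centers, grad_spreads, grad_Theta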
Slide 24: Stop condition

Performance index:
    PI = (1/n) Σ_{j=1}^{n} ( y(j) − ŷ(j) )²
Average percentage error:
    APE = (1/P) Σ_{i=1}^{P} | T(i) − O(i) | / | T(i) | × 100%
Training stops once PI < threshold.
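The two stopping criteria transcribed directly into numpy; the function names are illustrative.

    import numpy as np

    def performance_index(y_true, y_pred):
        """PI = (1/n) * sum_j (y(j) - y_hat(j))^2"""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return float(np.mean((y_true - y_pred) ** 2))

    def average_percentage_error(y_true, y_pred):
        """APE = (1/P) * sum_i |T(i) - O(i)| / |T(i)| * 100%"""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return float(np.mean(np.abs(y_true - y_pred) / np.abs(y_true)) * 100.0)

    # training stops once performance_index(y, y_hat) < threshold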
Slide 25: Example 1 (1)

    y = ( 1 + x_1^(−2) + x_2^(−1.5) )²,  1 ≤ x_1, x_2 ≤ 5

[Table: premise parameters A_1^i, A_2^i and consequent parameters C_0^i, C_1^i, C_2^i for rules R1–R5.]
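A small sketch generating training data for this benchmark function (the slides use 50 samples); the uniform sampling over [1, 5] is an assumption about how the data were drawn.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(1.0, 5.0, size=(50, 2))             # 50 samples of (x1, x2) in [1, 5]
    y = (1.0 + x[:, 0] ** -2 + x[:, 1] ** -1.5) ** 2    # y = (1 + x1^-2 + x2^-1.5)^2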
Slide 26: Example 1 (2)

The optimal parameters.
[Table: premise parameters A_1^i, A_2^i and consequent parameters C_0^i, C_1^i, C_2^i for rules R1–R5 after fine tuning.]
Slide 27: Example 1 (3)

[Plot: performance index versus the number of iterations.]

    Method       Kim        Our method
    Rule number  3          5
    PI           0.0197     0.006691
    Run time     23674 sec  1281 sec
    Data size    50         50
Slide 28: Example 2 (1)

    z = sinc( x, y ) = sin(x)·sin(y) / ( x·y )

[Tables: two sets of rule parameters A_1^i, A_2^i, C_0^i, C_1^i, C_2^i for rules R1–R3 (before and after fine tuning).]
Slide 29: Example 2 (2)

[Plot: performance index versus the number of iterations.]

    Method            Jang   Our method
    Rule number       16     3
    Data size         121    100
    Parameter number  72     21
    PI                -      0.019934
    APE (estimated)   0.01%  0.001%
Slide 30: Example 3 (1)

    output = ( 1 + x^(0.5) + y^(−1) + z^(−1.5) )²

[Plot: performance index (log scale) versus the number of iterations.]
Slide 31: Example 3 (2)

    Model                    Training error  Checking error  Rules  Parameters  Training data  Checking data
    ANFIS [1]                0.04%           1.07%           8      50          216            125
    GMDH model [34]          4.70%           5.70%           -      -           20             20
    Sugeno and Kang [14]-1   1.50%           2.10%           3      22          20             20
    Sugeno and Kang [14]-2   0.59%           3.40%           4      32          20             20
    Our method               0.0023%         1.51%           6      54          20             20
Slide 32: Conclusions and Future Works

The structure is compact and flexible, and easy to implement on a computer.
Good computational efficiency and better approximation results.
Better ability to describe unknown systems.
Future work: improve the shortcomings of the FCM method, and replace the steepest descent method with other optimization methods.
Slide 33: Least-squares estimator

    E(θ) = Σ_{i=1}^{n} ( y_i − a_i^T θ )² = e^T e = ( y − Aθ )^T ( y − Aθ ) = y^T y − 2 θ^T A^T y + θ^T A^T A θ
    ∂E/∂θ = −2 A^T y + 2 A^T A θ = 0
    θ̂ = ( A^T A )^{−1} A^T y
Tii