Introduction to Algorithms, Chapter 14

Introduction to Algorithms, Chapter 14: Augmenting Data Structures / tniky1 / http://www.tniky1.com/study / Friday, July 8, 2011

description

Study-group slides for Chapter 14 of Introduction to Algorithms. PPT and Keynote (KEY) versions are also available at: http://tniky1.com/study/

Transcript of Introduction to Algorithms, Chapter 14

Page 1: Introduction to Algorithms, Chapter 14

Introduction to Algorithms, Chapter 14: Augmenting Data Structures

tniky1

http://www.tniky1.com/study

Friday, July 8, 2011

Page 2: Introduction to Algorithms, Chapter 14

What does it mean to augment a data structure?

The textbook cannot cover every situation you will meet in practice. Naturally.

It is rare, however, to need a completely new data structure; augmenting an existing one is usually enough.

So let's learn how to augment existing data structures!

Once you have mastered the augmentation steps, you can apply them widely. (^^)b

Page 3: Introduction to Algorithms, Chapter 14

Chapter 14 topics

14.1 Dynamic order statistics

Augment a red-black tree so that order-statistic queries (retrieving the i-th element) run in O(lg n). (In Chapter 9 this took O(n).)

14.2 How to augment a data structure

Using 14.1 as an example, generalize the steps for augmenting a data structure. (Let's memorize these steps.)

14.3 Interval trees

Augment a red-black tree to manage a dynamic set of intervals, such as time intervals.

Page 4: Introduction to Algorithms, Chapter 14

Order-statistic trees


Figure 14.1 An order-statistic tree, which is an augmented red-black tree. Shaded nodes are red, and darkened nodes are black. In addition to its usual fields, each node x has a field size[x], which is the number of nodes in the subtree rooted at x.

A data structure that can support fast order-statistic operations is shown in Figure 14.1. An order-statistic tree T is simply a red-black tree with additional information stored in each node. Besides the usual red-black tree fields key[x], color[x], p[x], left[x], and right[x] in a node x, we have another field size[x]. This field contains the number of (internal) nodes in the subtree rooted at x (including x itself), that is, the size of the subtree. If we define the sentinel's size to be 0, that is, we set size[nil[T]] to be 0, then we have the identity

size[x] = size[left[x]] + size[right[x]] + 1.

We do not require keys to be distinct in an order-statistic tree. (For example, the tree in Figure 14.1 has two keys with value 14 and two keys with value 21.) In the presence of equal keys, the above notion of rank is not well defined. We remove this ambiguity for an order-statistic tree by defining the rank of an element as the position at which it would be printed in an inorder walk of the tree. In Figure 14.1, for example, the key 14 stored in a black node has rank 5, and the key 14 stored in a red node has rank 6.
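As a concrete illustration of the node layout just described, here is a minimal Python sketch of an order-statistic tree node with the extra size field and the identity above. The class name, the explicit nil sentinel, and the helper are assumptions for this sketch, not the textbook's code.

class OSNode:
    # A red-black tree node augmented with a size field.
    def __init__(self, key, color="RED", nil=None):
        self.key = key
        self.color = color      # "RED" or "BLACK"
        self.left = nil         # left child, or the sentinel nil
        self.right = nil        # right child, or the sentinel nil
        self.parent = nil
        self.size = 1           # number of internal nodes in the subtree rooted here

def size_of(x, nil):
    # size[nil[T]] is defined to be 0, so the identity
    # size[x] = size[left[x]] + size[right[x]] + 1 holds for every internal node.
    return 0 if x is nil else x.size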

Retrieving an element with a given rank

Before we show how to maintain this size information during insertion and deletion, let us examine the implementation of two order-statistic queries that use this additional information. We begin with an operation that retrieves an element with a given rank. The procedure OS-SELECT(x, i) returns a pointer to the node containing the i-th smallest key in the subtree rooted at x. To find the i-th smallest key in an order-statistic tree T, we call OS-SELECT(root[T], i).

key: determines the ordering (smaller vs. larger); size: the number of elements below each child plus the node itself.


Left child's subtree + right child's subtree + the node itself.

Using this data structure, we improve order-statistic queries from the previous O(n) to O(lg n)!

Page 5: Introduction to Algorithms, Chapter 14

Retrieving an element with a given rank: in other words, how do we find the i-th smallest key?

Example: find the 17th smallest key.


Page 6: Introduction to Algorithms, Chapter 14

Retrieving an element with a given rank: in other words, how do we find the i-th smallest key?

Example: find the 17th smallest key.


We have found 12 smaller elements; counting the node itself as well, we now look for the 4th element among the rest.

Page 7: Introduction to Algorithms, Chapter 14

Retrieving an element with a given rank: in other words, how do we find the i-th smallest key?

Example: find the 17th smallest key.


Find the 4th element within this subtree.


Page 9: Introduction to Algorithms, Chapter 14

Retrieving an element with a given rank: in other words, how do we find the i-th smallest key?

Example: find the 17th smallest key.


Page 10: Introduction to Algorithms, Chapter 14

Retrieving an element with a given rank: in other words, how do we find the i-th smallest key?

Example: find the 17th smallest key.



Page 12: Introduction to Algorithms, Chapter 14

Retrieving an element with a given rank

OS-SELECT(x, i)
1  r ← size[left[x]] + 1
2  if i = r
3     then return x
4  elseif i < r
5     then return OS-SELECT(left[x], i)
6     else return OS-SELECT(right[x], i − r)

The idea behind OS-SELECT is similar to that of the selection algorithms in Chapter 9. The value of size[left[x]] is the number of nodes that come before x in an inorder tree walk of the subtree rooted at x. Thus, size[left[x]] + 1 is the rank of x within the subtree rooted at x.

In line 1 of OS-SELECT, we compute r, the rank of node x within the subtree rooted at x. If i = r, then node x is the i-th smallest element, so we return x in line 3. If i < r, then the i-th smallest element is in x's left subtree, so we recurse on left[x] in line 5. If i > r, then the i-th smallest element is in x's right subtree. Since there are r elements in the subtree rooted at x that come before x's right subtree in an inorder tree walk, the i-th smallest element in the subtree rooted at x is the (i − r)-th smallest element in the subtree rooted at right[x]. This element is determined recursively in line 6.

To see how OS-SELECT operates, consider a search for the 17th smallest element in the order-statistic tree of Figure 14.1. We begin with x as the root, whose key is 26, and with i = 17. Since the size of 26's left subtree is 12, its rank is 13. Thus, we know that the node with rank 17 is the 17 − 13 = 4th smallest element in 26's right subtree. After the recursive call, x is the node with key 41, and i = 4. Since the size of 41's left subtree is 5, its rank within its subtree is 6. Thus, we know that the node with rank 4 is the 4th smallest element in 41's left subtree. After the recursive call, x is the node with key 30, and its rank within its subtree is 2. Thus, we recurse once again to find the 4 − 2 = 2nd smallest element in the subtree rooted at the node with key 38. We now find that its left subtree has size 1, which means it is the second smallest element. Thus, a pointer to the node with key 38 is returned by the procedure.

Because each recursive call goes down one level in the order-statistic tree, the total time for OS-SELECT is at worst proportional to the height of the tree. Since the tree is a red-black tree, its height is O(lg n), where n is the number of nodes. Thus, the running time of OS-SELECT is O(lg n) for a dynamic set of n elements.
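A direct Python transcription of the OS-SELECT pseudocode above, assuming the hypothetical node layout sketched earlier (an explicit nil sentinel and a size attribute); this is a sketch, not the book's code.

def os_select(x, i, nil):
    # Return the node holding the i-th smallest key in the subtree rooted at x.
    r = (x.left.size if x.left is not nil else 0) + 1   # rank of x within its own subtree
    if i == r:
        return x                                        # x itself is the i-th smallest
    elif i < r:
        return os_select(x.left, i, nil)                # the answer lies in the left subtree
    else:
        return os_select(x.right, i - r, nil)           # skip the r elements up to and including x

For the whole tree one would call os_select(root, i, nil), mirroring OS-SELECT(root[T], i).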

Determining the rank of an element

Given a pointer to a node x in an order-statistic tree T, the procedure OS-RANK returns the position of x in the linear order determined by an inorder tree walk of T.


Page 13: Introduction to Algorithms, Chapter 14

Determining the rank of an element: the reverse of the previous operation. Given an element, answer what its rank is.

Example: key 38. Add up the number of elements smaller than it.


Page 14: Introduction to Algorithms, Chapter 14

Determining the rank of an element: the reverse of the previous operation. Given an element, answer what its rank is.

Example: key 38. Add up the number of elements smaller than it.


Page 15: Introduction to Algorithms, Chapter 14

Determining the rank of an element: the reverse of the previous operation. Given an element, answer what its rank is.

Example: key 38. Add up the number of elements smaller than it.


"The rank so far" + "the number of elements in the left child's subtree" + the parent itself.

Page 16: Introduction to Algorithms, Chapter 14

Determining the rank of an element: the reverse of the previous operation. Given an element, answer what its rank is.

Example: key 38. Add up the number of elements smaller than it.


Page 17: Introduction to Algorithms, Chapter 14

Determining the rank of an element: the reverse of the previous operation. Given an element, answer what its rank is.

Example: key 38. Add up the number of elements smaller than it.


Coming up from a left child means the parent (and its right subtree) are larger than us, so nothing is added.

Page 18: Introduction to Algorithms, Chapter 14

Determining the rank of an element: the reverse of the previous operation. Given an element, answer what its rank is.

Example: key 38. Add up the number of elements smaller than it.


Page 19: Introduction to Algorithms, Chapter 14

Determining the rank of an element: the reverse of the previous operation. Given an element, answer what its rank is.

Example: key 38. Add up the number of elements smaller than it.


Page 20: Introduction to Algorithms, Chapter 14

Determining the rank of an element


OS-RANK(T, x)

1  r ← size[left[x]] + 1
2  y ← x
3  while y ≠ root[T]
4     do if y = right[p[y]]
5           then r ← r + size[left[p[y]]] + 1
6        y ← p[y]
7  return r

The procedure works as follows. The rank of x can be viewed as the number of nodes preceding x in an inorder tree walk, plus 1 for x itself. OS-RANK maintains the following loop invariant:

At the start of each iteration of the while loop of lines 3-6, r is the rank of key[x] in the subtree rooted at node y.

We use this loop invariant to show that OS-RANK works correctly as follows:

Initialization: Prior to the first iteration, line 1 sets r to be the rank of key[x] within the subtree rooted at x. Setting y ← x in line 2 makes the invariant true the first time the test in line 3 executes.

Maintenance: At the end of each iteration of the while loop, we set y ← p[y]. Thus we must show that if r is the rank of key[x] in the subtree rooted at y at the start of the loop body, then r is the rank of key[x] in the subtree rooted at p[y] at the end of the loop body. In each iteration of the while loop, we consider the subtree rooted at p[y]. We have already counted the number of nodes in the subtree rooted at node y that precede x in an inorder walk, so we must add the nodes in the subtree rooted at y's sibling that precede x in an inorder walk, plus 1 for p[y] if it, too, precedes x. If y is a left child, then neither p[y] nor any node in p[y]'s right subtree precedes x, so we leave r alone. Otherwise, y is a right child and all the nodes in p[y]'s left subtree precede x, as does p[y] itself. Thus, in line 5, we add size[left[p[y]]] + 1 to the current value of r.

Termination: The loop terminates when y = root[T], so that the subtree rooted at y is the entire tree. Thus, the value of r is the rank of key[x] in the entire tree.

As an example, when we run OS-RANK on the order-statistic tree of Figure 14.1 to find the rank of the node with key 38, we get the following sequence of values of key[y] and r at the top of the while loop:

iteration   key[y]   r
1           38       2
2           30       4
3           41       4
4           26       17
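The same procedure as a Python sketch, again assuming the hypothetical node layout with parent pointers, a size attribute, and a shared nil sentinel used in the earlier sketches.

def os_rank(root, x, nil):
    # Return the rank of node x in an inorder walk of the whole tree,
    # mirroring OS-RANK: r is the rank of x within the subtree rooted at y.
    r = (x.left.size if x.left is not nil else 0) + 1
    y = x
    while y is not root:
        if y is y.parent.right:
            # Everything in the parent's left subtree, plus the parent itself, precedes x.
            r += (y.parent.left.size if y.parent.left is not nil else 0) + 1
        y = y.parent
    return r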


Page 21: Introduction to Algorithms, Chapter 14

Maintaining subtree sizes: check whether keeping the added information (the size fields) up to date under the operations performed on a red-black tree affects their running time.

Regarding rotations:


Figure 14.2 Updating subtree sizes during rotations. The link around which the rotation is performed is incident on the two nodes whose size fields need to be updated. The updates are local, requiring only the size information stored in x, y, and the roots of the subtrees shown as triangles.

The rank 17 is returned. Since each iteration of the while loop takes O(1) time, and y goes up one level in the tree with each iteration, the running time of OS-RANK is at worst proportional to the height of the tree: O(lg n) on an n-node order-statistic tree.

Maintaining subtree sizes

Given the size field in each node, OS-SELECT and OS-RANK can quickly compute order-statistic information. But unless these fields can be efficiently maintained by the basic modifying operations on red-black trees, our work will have been for naught. We shall now show that subtree sizes can be maintained for both insertion and deletion without affecting the asymptotic running time of either operation.

We noted in Section 13.3 that insertion into a red-black tree consists of two phases. The first phase goes down the tree from the root, inserting the new node as a child of an existing node. The second phase goes up the tree, changing colors and ultimately performing rotations to maintain the red-black properties.

To maintain the subtree sizes in the first phase, we simply increment size[x] for each node x on the path traversed from the root down toward the leaves. The new node added gets a size of 1. Since there are O(lg n) nodes on the traversed path, the additional cost of maintaining the size fields is O(lg n).

In the second phase, the only structural changes to the underlying red-black tree are caused by rotations, of which there are at most two. Moreover, a rotation is a local operation: only two nodes have their size fields invalidated. The link around which the rotation is performed is incident on these two nodes. Referring to the code for LEFT-ROTATE(T, x) in Section 13.2, we add the following lines:

13  size[y] ← size[x]
14  size[x] ← size[left[x]] + size[right[x]] + 1

Figure 14.2 illustrates how the fields are updated. The change to RIGHT-ROTATE is symmetric.
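A hedged Python sketch of LEFT-ROTATE with the two size-field updates (lines 13-14) folded in; it assumes a tree object T exposing a root attribute plus the shared nil sentinel from the earlier sketches, and it omits the colour bookkeeping.

def left_rotate(T, x, nil):
    # Rotate left around the link between x and its right child y,
    # then repair the size fields of the only two nodes whose subtrees changed.
    y = x.right
    x.right = y.left                 # turn y's left subtree into x's right subtree
    if y.left is not nil:
        y.left.parent = x
    y.parent = x.parent              # link x's parent to y
    if x.parent is nil:
        T.root = y
    elif x is x.parent.left:
        x.parent.left = y
    else:
        x.parent.right = y
    y.left = x                       # put x on y's left
    x.parent = y
    # Lines 13-14: y inherits x's old subtree size; x is recomputed locally.
    y.size = x.size
    x.size = (x.left.size if x.left is not nil else 0) + \
             (x.right.size if x.right is not nil else 0) + 1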


The additional time required for this update is only O(1).

Page 22: Introduction to Algorithms, Chapter 14

Chapter 14 topics

14.1 Dynamic order statistics

Augment a red-black tree so that order-statistic queries (retrieving the i-th element) run in O(lg n). (In Chapter 9 this took O(n).)

14.2 How to augment a data structure

Using 14.1 as an example, generalize the steps for augmenting a data structure. (Let's memorize these steps.)

14.3 Interval trees

Augment a red-black tree to manage a dynamic set of intervals, such as time intervals.

Page 23: Introduction to Algorithms, Chapter 14

How to augment a data structure

① Choose an underlying data structure.

② Decide what additional information to maintain in it.

③ Verify that the additional information can be maintained under the basic modifying operations on the underlying data structure.

④ Develop new operations.

Let's memorize these steps.

Page 24: Introduction to Algorithms, Chapter 14

How to augment a data structure

① Choose an underlying data structure: we chose a red-black tree.

② Decide what additional information to maintain: we added a size field (the number of elements in the subtree rooted at each node).

③ Verify that the additional information can be maintained by the basic modifying operations: insertion and deletion still work, including maintenance of the size field.

④ Develop new operations: retrieving an element with a given rank, and determining the rank of an element.

Let's memorize these steps.

Page 25: Introduction to Algorithms, Chapter 14

Chapter 14 topics

14.1 Dynamic order statistics

Augment a red-black tree so that order-statistic queries (retrieving the i-th element) run in O(lg n). (In Chapter 9 this took O(n).)

14.2 How to augment a data structure

Using 14.1 as an example, generalize the steps for augmenting a data structure. (Let's memorize these steps.)

14.3 Interval trees

Augment a red-black tree to manage a dynamic set of intervals, such as time intervals.

Page 26: Introduction to Algorithms, Chapter 14

Interval trees


Figure 14.3 The interval trichotomy for two closed intervals i and i′. (a) If i and i′ overlap, there are four situations; in each, low[i] ≤ high[i′] and low[i′] ≤ high[i]. (b) The intervals do not overlap, and high[i] < low[i′]. (c) The intervals do not overlap, and high[i′] < low[i].
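The trichotomy above reduces the overlap test for closed intervals to two comparisons. A minimal Python sketch (the helper name and the (low, high) tuple representation are assumptions):

def overlaps(i, j):
    # Closed intervals i = (low, high) and j overlap
    # exactly when low[i] <= high[j] and low[j] <= high[i].
    return i[0] <= j[1] and j[0] <= i[1]

# e.g. (15, 23) and (22, 25) overlap, while (8, 9) and (11, 14) do not.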

An interval tree is a red-black tree that maintains a dynamic set of elements, with each element x containing an interval int[x]. Interval trees support the following operations.

INTERVAL-INSERT(T, x) adds the element x, whose int field is assumed to contain an interval, to the interval tree T.

INTERVAL-DELETE(T, x) removes the element x from the interval tree T.

INTERVAL-SEARCH(T, i) returns a pointer to an element x in the interval tree T such that int[x] overlaps interval i, or the sentinel nil[T] if no such element is in the set.

Figure 14.4 shows how an interval tree represents a set of intervals. We shall track the four-step method from Section 14.2 as we review the design of an interval tree and the operations that run on it.

Step 1: Underlying data structure

We choose a red-black tree in which each node x contains an interval int[x] and the key of x is the low endpoint, low[int[x]], of the interval. Thus, an inorder tree walk of the data structure lists the intervals in sorted order by low endpoint.

Step 2: Additional information

In addition to the intervals themselves, each node x contains a value max[x], which is the maximum value of any interval endpoint stored in the subtree rooted at x.

Before going into the details, the definition of an interval: in this chapter we deal with closed intervals.

Figure 14.3, annotated: (a) the intervals overlap; (b) i lies to the left of i′; (c) i lies to the right of i′.

Page 27: Introduction to Algorithms, Chapter 14

Interval trees: each node holds not just a single value but an interval (of integers), int[x].


Figure 14.4 An interval tree. (a) A set of 10 intervals, shown sorted bottom to top by left endpoint. (b) The interval tree that represents them. An inorder tree walk of the tree lists the nodes in sorted order by left endpoint.

Step 3: Maintaining the information

We must verify that insertion and deletion can be performed in O(lg n) time on an interval tree of n nodes. We can determine max[x] given interval int[x] and the max values of node x's children:

max[x] = max(high[int[x]], max[left[x]], max[right[x]]).

Thus, by Theorem 14.1, insertion and deletion run in O(lg n) time. In fact, updating the max fields after a rotation can be accomplished in O(1) time, as is shown in Exercises 14.2-4 and 14.3-1.
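A Python sketch of the max computation from Step 3; the attribute names (interval for int[x], max_end for max[x]) and the nil sentinel are assumptions. It would be called bottom-up whenever an insertion, deletion, or rotation changes a node's subtree.

import math

def update_max(x, nil):
    # max[x] = max(high[int[x]], max[left[x]], max[right[x]]),
    # where the sentinel contributes minus infinity.
    left_max = x.left.max_end if x.left is not nil else -math.inf
    right_max = x.right.max_end if x.right is not nil else -math.inf
    x.max_end = max(x.interval[1], left_max, right_max)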

① Underlying data structure: a red-black tree, keyed on the low endpoint of each interval.

② Additional information: each node stores the maximum endpoint value appearing in its subtree.


Page 31: Introduction to Algorithms, Chapter 14

Interval trees, ③ Maintaining the information

Verify that insertion and deletion run in O(lg n) time.

Omitted here; it can be proved using Theorem 14.1 and Exercises 14.2-4 and 14.3-1.

Page 32: Introduction to Algorithms, Chapter 14

Interval trees, ④ Developing new operations

Decide whether any stored interval overlaps a given one: example [22, 25].

Step 4: Developing new operations

The only new operation we need is INTERVAL-SEARCH(T, i), which finds a node in tree T whose interval overlaps interval i. If there is no interval that overlaps i in the tree, a pointer to the sentinel nil[T] is returned.

INTERVAL-SEARCH(T, i)
  x ← root[T]
  while x ≠ nil[T] and i does not overlap int[x]
      do if left[x] ≠ nil[T] and max[left[x]] ≥ low[i]
            then x ← left[x]
            else x ← right[x]
  return x

The search for an interval that overlaps i starts with x at the root of the tree and proceeds downward. It terminates when either an overlapping interval is found or x points to the sentinel nil[T]. Since each iteration of the basic loop takes O(1) time, and since the height of an n-node red-black tree is O(lg n), the INTERVAL-SEARCH procedure takes O(lg n) time.

Before we see why INTERVAL-SEARCH is correct, let's examine how it works on the interval tree in Figure 14.4. Suppose we wish to find an interval that overlaps the interval i = [22, 25]. We begin with x as the root, which contains [16, 21] and does not overlap i. Since max[left[x]] = 23 is greater than low[i] = 22, the loop continues with x as the left child of the root, the node containing [8, 9], which also does not overlap i. This time, max[left[x]] = 10 is less than low[i] = 22, so the loop continues with the right child of x as the new x. The interval [15, 23] stored in this node overlaps i, so the procedure returns this node.

As an example of an unsuccessful search, suppose we wish to find an interval that overlaps i = [11, 14] in the interval tree of Figure 14.4. We once again begin with x as the root. Since the root's interval [16, 21] does not overlap i, and since max[left[x]] = 23 is greater than low[i] = 11, we go left to the node containing [8, 9]. (Note that no interval in the right subtree overlaps i; we shall see why later.) Interval [8, 9] does not overlap i, and max[left[x]] = 10 is less than low[i] = 11, so we go right. (Note that no interval in the left subtree overlaps i.) Interval [15, 23] does not overlap i, and its left child is nil[T], so we go right, the loop terminates, and the sentinel nil[T] is returned.

To see why INTERVAL-SEARCH is correct, we must understand why it suffices to examine a single path from the root. The basic idea is that at any node x, if int[x] does not overlap i, the search always proceeds in a safe direction: an overlapping interval will definitely be found if there is one in the tree. The following theorem states this property more precisely.
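A minimal Python sketch of the search just described (it assumes the IntervalNode fields and a tree object exposing root and nil from the earlier sketches; names such as interval_search and overlaps are mine, not the book's):

def overlaps(a_low, a_high, b_low, b_high):
    # Closed intervals overlap iff each one starts no later than the other ends.
    return a_low <= b_high and b_low <= a_high

def interval_search(T, low, high):
    """Return a node whose stored interval overlaps [low, high], or T.nil if none exists."""
    x = T.root
    while x is not T.nil and not overlaps(x.low, x.high, low, high):
        if x.left is not T.nil and x.left.max >= low:
            x = x.left    # if any interval in x's subtree overlaps, one lies in the left subtree
        else:
            x = x.right   # the left subtree cannot contain an overlap, so going right is safe
    return x

On the tree of Figure 14.4, interval_search(T, 22, 25) would visit the root [16, 21], then [8, 9], then return the node holding [15, 23], matching the trace above.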


Page 33: アルゴリズムイントロダクション 14章

区間木④新操作の開発

重なる区間があるかどうかの判断 : 例[22 25]


Page 34: アルゴリズムイントロダクション 14章

Interval trees ④ Developing new operations

Decide whether any stored interval overlaps a given one: example [22, 25].


Page 35: アルゴリズムイントロダクション 14章

Interval trees ④ Developing new operations

Decide whether any stored interval overlaps a given one: example [22, 25].


Page 36: アルゴリズムイントロダクション 14章

Interval trees ④ Developing new operations

Decide whether any stored interval overlaps a given one: example [22, 25].


Page 37: アルゴリズムイントロダクション 14章

Summary: we have now seen the workflow for augmenting an existing data structure.


Page 38: アルゴリズムイントロダクション 14章

Thank you


Page 39: アルゴリズムイントロダクション 14章

Interval trees ③ Maintaining the information

Verify that insertion and deletion still run in O(lg n) time.

Make use of the identity below.


max[x] = max(high[int[x]], max[left[x]], max[right[x]]) .

Thus, by Theorem 14.1, insertion and deletion run in O(lg n) time. In fact, updating the max fields after a rotation can be accomplished in O(1) time, as is shown in Exercises 14.2-4 and 14.3-1.
