Google File System


Transcript of Google File System

Page 1: Google File System

GOOGLE

FILE SYSTEM

100062142 陳仕融

100062118 黃振凱

100062124 林佑恩

1

Page 2: Google File System

OUTLINE
Introduction
Design Overview
System Interactions
Master Operation
Fault Tolerance and Diagnosis
Summary

2

Page 3: Google File System

INTRODUCTION

1. A scalable distributed file system
2. For large, distributed, data-intensive applications
3. Fault tolerant, running on cheap commodity hardware
4. High aggregate performance for a large number of clients

3

Page 4: Google File System

INTRODUCTION: GFS IN USE

GFS is…
Widely deployed within Google
Provides hundreds of TBs of storage
Used for service, research, and development

4

Page 5: Google File System

INTRODUCTION
Observations of data usage in Google's applications:
Files are typically large; multi-GB files are common
Access pattern: most writes are appends, and most reads are sequential (so client caching offers little benefit)
Co-designing the applications and the file system increases flexibility
Component failures are the norm

5

Page 6: Google File System

DESIGN OVERVIEW: ASSUMPTIONS

Cheap hardware fails often; failures must be tolerated and recovered from
Large files (GB to TB) must be managed efficiently
Workload: large streaming reads and small random reads
Writes are mostly large sequential appends
High sustained bandwidth matters more than low response time

6

Page 7: Google File System

DESIGN OVERVIEW: INTERFACE
Synchronization and atomicity are encapsulated in the GFS client library
Operations: Create, Delete, Open, Close, Read, Write, Snapshot, Record Append
(a sketch of such an interface follows below)

7
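
To make the interface concrete, here is a minimal Go sketch of what a GFS-style client API could look like. The type and method names (Client, Handle, RecordAppend, …) are illustrative assumptions, not the actual GFS library signatures.

```go
// A minimal sketch of the operations listed on the slide, expressed as a
// client interface. Names and signatures are illustrative assumptions.
package gfs

// Handle identifies an open file at the client.
type Handle struct{ id uint64 }

// Client groups the file operations exposed by the GFS library; the
// library hides the synchronization and atomicity details.
type Client interface {
	Create(path string) error
	Delete(path string) error
	Open(path string) (Handle, error)
	Close(h Handle) error
	Read(h Handle, offset int64, buf []byte) (n int, err error)
	Write(h Handle, offset int64, data []byte) (n int, err error)
	// RecordAppend appends data atomically at least once and returns
	// the offset that GFS chose for the record.
	RecordAppend(h Handle, data []byte) (offset int64, err error)
	// Snapshot makes a low-cost copy of a file or directory tree.
	Snapshot(srcPath, dstPath string) error
}
```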

Page 8: Google File System

DESIGN OVERVIEW: ENVIRONMENT

Linux machines running user-level server processes and applications
One master, multiple chunkservers
Files are divided into fixed-size (64 MB) chunks
Each chunk has a globally unique 64-bit handle (identifier)
By default, each chunk is replicated on 3 chunkservers
HeartBeat messages check whether each chunkserver is alive
Neither clients nor chunkservers cache file data
(a sketch of the chunk addressing follows below)

8
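
A small sketch of the chunk addressing implied by these numbers (64 MB chunks, 64-bit handles); the ChunkHandle type and the lookup step described in the comments are illustrative assumptions.

```go
// Sketch of how a byte offset in a file maps to a fixed-size chunk.
package gfs

const ChunkSize = 64 << 20 // 64 MB fixed chunk size

// ChunkHandle is the globally unique 64-bit chunk identifier.
type ChunkHandle uint64

// chunkIndexFor converts a byte offset within a file into a chunk index;
// the client then asks the master to translate (filename, chunk index)
// into a ChunkHandle plus the addresses of its (by default 3) replicas.
func chunkIndexFor(offset int64) int64 {
	return offset / ChunkSize
}
```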

Page 9: Google File System

DESIGN OVERVIEW: DATA FLOW

9

Page 10: Google File System

DESIGN OVERVIEW: MASTER

File system metadata
Metadata is small enough to keep in memory, which simplifies the design and improves performance
File namespace
File-to-chunk mapping table
System-wide operations
A GFS cluster has only one master
(a sketch of these metadata tables follows below)

10
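
A minimal sketch of the two in-memory tables the slide names, the file namespace and the file-to-chunk mapping; the struct and field names are assumptions for illustration, not the real data structures.

```go
// Sketch of the master's in-memory metadata.
package gfs

type ChunkHandle uint64

// FileMetadata records, per file, the ordered list of chunk handles,
// so chunk index i of the file maps to Chunks[i].
type FileMetadata struct {
	Chunks []ChunkHandle
}

// ChunkInfo records, per chunk, where its replicas live.
type ChunkInfo struct {
	Version  uint64   // chunk version number, used later for staleness
	Replicas []string // addresses of chunkservers holding a replica
}

// Master keeps all metadata in memory, which keeps the design simple
// and metadata operations fast; durability comes from the operation log.
type Master struct {
	Namespace map[string]*FileMetadata // full pathname -> file metadata
	Chunks    map[ChunkHandle]*ChunkInfo
}
```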

Page 11: Google File System

SYSTEM INTERACTION

Goal: minimize the master's involvement in all operations

Leases and Mutation Order
Data Flow
Atomic Record Appends
Snapshot

11

Page 12: Google File System

SYSTEM INTERACTION

Leases and Mutation Order
Mutation: an operation (e.g., a write or a record append) that changes the contents or metadata of a chunk

Mutations are performed at all of the chunk's replicas

12

Page 13: Google File System

SYSTEM INTERACTION

Lease
The master grants a chunk lease to one of the replicas => the primary
The primary then picks a serial order for all mutations to the chunk (without the master's intervention)
(a sketch of the lease mechanism follows below)

13
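
A sketch of the lease mechanism described above, assuming the paper's 60-second initial lease term; the Master, Lease, and Primary types are illustrative, not the real implementation.

```go
// Sketch of lease-based mutation ordering: the master names one replica
// as primary, and the primary hands out serial numbers to mutations
// without further master involvement.
package gfs

import "time"

type ChunkHandle uint64

type Lease struct {
	Primary string    // address of the chunkserver holding the lease
	Expires time.Time // the lease is granted for a limited time
}

type Master struct {
	Leases map[ChunkHandle]Lease
}

// GrantLease returns the current lease, or picks a replica as the new
// primary if no valid lease exists for the chunk.
func (m *Master) GrantLease(c ChunkHandle, replicas []string) Lease {
	if l, ok := m.Leases[c]; ok && time.Now().Before(l.Expires) {
		return l
	}
	l := Lease{Primary: replicas[0], Expires: time.Now().Add(60 * time.Second)}
	m.Leases[c] = l
	return l
}

// Primary assigns consecutive serial numbers to mutations; all replicas
// apply mutations in this serial order.
type Primary struct{ nextSerial uint64 }

func (p *Primary) AssignSerial() uint64 {
	p.nextSerial++
	return p.nextSerial
}
```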

Page 14: Google File System

SYSTEM INTERACTION

14

Page 15: Google File System

SYSTEM INTERACTION

Data flow
Fully utilize each machine's network bandwidth: data is pushed linearly along a chain of chunkservers
Avoid network bottlenecks: each machine forwards the data to the closest machine that has not yet received it
Minimize latency: the data transfer is pipelined over TCP
(a sketch of the chained push follows below)

15
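
A sketch of the chained, pipelined push described above; the forward function is a stand-in that only prints, where a real system would stream bytes over TCP to the next-closest chunkserver while still receiving them.

```go
// Sketch of pushing data linearly along a chain of chunkservers.
package main

import "fmt"

// pushAlongChain hands the data to the first server in the chain along
// with the remaining chain, instead of the client sending to every
// replica itself, so each machine's outgoing bandwidth is used once.
func pushAlongChain(chain []string, data []byte) {
	if len(chain) == 0 {
		return
	}
	forward(chain[0], chain[1:], data)
}

// forward stands in for a pipelined TCP transfer: in a real system the
// bytes would be relayed to rest[0] as soon as they start arriving.
func forward(server string, rest []string, data []byte) {
	fmt.Printf("push %d bytes to %s, then along %v\n", len(data), server, rest)
	if len(rest) > 0 {
		forward(rest[0], rest[1:], data)
	}
}

func main() {
	pushAlongChain([]string{"chunkserver-1", "chunkserver-2", "chunkserver-3"}, make([]byte, 1<<20))
}
```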

Page 16: Google File System

SYSTEM INTERACTION
Atomic Record Appends

The client specifies only the data. GFS appends the data to the file atomically at least once, at an offset of GFS's choosing, and returns that offset to the client
=> guarantees the record is written at least once, but does not guarantee that all replicas are bytewise identical
(a sketch of the retry loop follows below)

16
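
A sketch of the at-least-once contract from the client's point of view: it supplies only the data, retries until an attempt succeeds, and keeps the offset GFS returns. The recordAppend function is a hypothetical stand-in for the real RPC.

```go
// Sketch of a client retrying a record append until it succeeds.
package main

import (
	"errors"
	"fmt"
)

var attempt int

// recordAppend simulates an append that fails transiently twice; on
// success GFS (not the client) chooses the offset for the record.
func recordAppend(data []byte) (int64, error) {
	attempt++
	if attempt < 3 {
		return 0, errors.New("replica error, retry")
	}
	return int64(attempt) * 4096, nil
}

func main() {
	for {
		off, err := recordAppend([]byte("record"))
		if err != nil {
			continue // failed attempts may leave duplicates or padding in some replicas
		}
		fmt.Println("record written at least once, at offset", off)
		break
	}
}
```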

Page 17: Google File System

SYSTEM INTERACTION
Snapshot

Makes a copy of a file or a directory tree
Snapshot implementation:
The master receives a snapshot request
The master revokes outstanding leases on the chunks in the files it is about to snapshot
The master logs the operation and duplicates the metadata for the source file or directory tree
Newly created snapshot files point to the same chunks as the source files
A new copy of a chunk is created only when that chunk is next written (copy-on-write)
(a sketch of the copy-on-write bookkeeping follows below)

17
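
A sketch of the copy-on-write bookkeeping behind snapshot, assuming a simple reference count per chunk; the structures and the handle allocation are illustrative, not the actual master code.

```go
// Sketch of snapshot as metadata duplication plus copy-on-write.
package gfs

type ChunkHandle uint64

type Master struct {
	Namespace  map[string][]ChunkHandle // pathname -> ordered chunk list
	RefCount   map[ChunkHandle]int
	nextHandle ChunkHandle // next fresh handle to hand out (illustrative)
}

// Snapshot copies only metadata; both names now point at the same chunks.
func (m *Master) Snapshot(src, dst string) {
	chunks := append([]ChunkHandle(nil), m.Namespace[src]...)
	m.Namespace[dst] = chunks
	for _, c := range chunks {
		m.RefCount[c]++
	}
}

// BeforeWrite is consulted when a client is about to mutate a chunk: if
// the chunk is shared, the chunkservers are first asked to clone it and
// the writer is redirected to the new handle.
func (m *Master) BeforeWrite(c ChunkHandle) ChunkHandle {
	if m.RefCount[c] <= 1 {
		return c
	}
	m.RefCount[c]--
	m.nextHandle++
	clone := m.nextHandle
	m.RefCount[clone] = 1
	return clone
}
```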

Page 18: Google File System

MASTER OPERATION
Namespace management and locking

GFS represents its namespace as a lookup table mapping full pathnames to metadata
Each node in the namespace tree has an associated read-write lock
Locking scheme:
No write lock is required on the parent directory => allows concurrent mutations in the same directory
A read lock on the directory name prevents the directory from being deleted, renamed, or snapshotted
(a sketch of the locking scheme follows below)

18
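
A sketch of the locking scheme, with sync.RWMutex standing in for the per-node read-write locks: creating a file read-locks every ancestor directory and write-locks the new pathname, so two creations in the same directory can proceed concurrently. Path handling is simplified.

```go
// Sketch of per-pathname read-write locking in the namespace table.
package gfs

import (
	"strings"
	"sync"
)

type Namespace struct {
	mu    sync.Mutex
	locks map[string]*sync.RWMutex
}

// lockOf returns (creating if needed) the lock for one pathname node.
func (ns *Namespace) lockOf(path string) *sync.RWMutex {
	ns.mu.Lock()
	defer ns.mu.Unlock()
	if ns.locks[path] == nil {
		ns.locks[path] = &sync.RWMutex{}
	}
	return ns.locks[path]
}

// LockForCreate takes read locks on every ancestor directory and a
// write lock on the leaf name itself; the returned function releases
// them in reverse order.
func (ns *Namespace) LockForCreate(path string) (unlock func()) {
	parts := strings.Split(strings.TrimPrefix(path, "/"), "/")
	var held []*sync.RWMutex
	prefix := ""
	for _, dir := range parts[:len(parts)-1] {
		prefix += "/" + dir
		l := ns.lockOf(prefix)
		// A read lock prevents the ancestor directory from being
		// deleted, renamed, or snapshotted while we operate under it.
		l.RLock()
		held = append(held, l)
	}
	leaf := ns.lockOf(path)
	leaf.Lock()
	return func() {
		leaf.Unlock()
		for i := len(held) - 1; i >= 0; i-- {
			held[i].RUnlock()
		}
	}
}
```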

Page 19: Google File System

MASTER OPERATION

Replica placement
Creation, Re-replication, Rebalancing
Chunk replicas are created for three reasons:
Chunk creation
Re-replication
Rebalancing

19

Page 20: Google File System

MASTER OPERATION
Garbage collection

After a file is deleted, GFS does not immediately reclaim the physical storage
The file is renamed to a hidden name that includes the deletion timestamp
Hidden files that are more than three days old are removed
Until then, the file can still be read (under the hidden name) and undeleted
Orphaned chunks: replicas not known to the master are garbage
Garbage collection runs as a regular background activity of the master
(a sketch of lazy reclamation follows below)

20
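
A sketch of lazy reclamation, assuming a hypothetical ".deleted.&lt;unix-timestamp&gt;." hidden-name convention; the real naming scheme is not specified here.

```go
// Sketch of deletion as a rename plus a background reclamation scan.
package gfs

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

type ChunkHandle uint64

const hiddenPrefix = ".deleted."

// DeleteFile hides the file instead of reclaiming storage; until the
// scan removes it, the file can still be read or undeleted.
func DeleteFile(namespace map[string][]ChunkHandle, path string) {
	hidden := fmt.Sprintf("%s%d.%s", hiddenPrefix, time.Now().Unix(), path)
	namespace[hidden] = namespace[path]
	delete(namespace, path)
}

// Collect runs as a regular background activity: hidden files older
// than maxAge (three days by default) are dropped, after which their
// chunks become orphaned and are reclaimed from the chunkservers.
func Collect(namespace map[string][]ChunkHandle, maxAge time.Duration) {
	cutoff := time.Now().Add(-maxAge).Unix()
	for name := range namespace {
		if !strings.HasPrefix(name, hiddenPrefix) {
			continue
		}
		stamp := strings.SplitN(strings.TrimPrefix(name, hiddenPrefix), ".", 2)[0]
		if ts, err := strconv.ParseInt(stamp, 10, 64); err == nil && ts < cutoff {
			delete(namespace, name)
		}
	}
}
```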

Page 21: Google File System

MASTER OPERATION

Stale replica detection
The master maintains a chunk version number for each chunk
The master removes stale replicas in its regular garbage collection
Clients and chunkservers therefore always access up-to-date data
(a sketch of the version check follows below)

21
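
A sketch of version-number-based staleness detection: the master advances a chunk's version when it grants a new lease, so a replica that missed the mutations reports an older number. The types are illustrative.

```go
// Sketch of stale replica detection via chunk version numbers.
package gfs

type ChunkHandle uint64

type VersionTable struct {
	Version map[ChunkHandle]uint64 // the master's up-to-date version per chunk
}

// OnLeaseGrant advances the version before a new round of mutations, so
// replicas that miss the mutations are left behind at the old version.
func (v *VersionTable) OnLeaseGrant(c ChunkHandle) uint64 {
	v.Version[c]++
	return v.Version[c]
}

// IsStale is checked when a chunkserver reports its replicas (for
// example in a HeartBeat); stale replicas are removed during the
// master's regular garbage collection.
func (v *VersionTable) IsStale(c ChunkHandle, reported uint64) bool {
	return reported < v.Version[c]
}
```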

Page 22: Google File System

FAULT TOLERANCE AND DIAGNOSIS
High availability

Fast recovery
Chunk replication
Master replication
The master state is replicated for reliability: its operation log and checkpoints are replicated on multiple machines
A shadow master provides read-only access even when the primary master is down
(a sketch of replicated logging follows below)

22
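
A sketch of replicated operation logging, assuming a simple Append interface for a log replica: a metadata mutation is acknowledged only after its log record is durable both locally and on the remote machines.

```go
// Sketch of replicating the master's operation log before acknowledging.
package gfs

import "fmt"

// LogReplica is a stand-in for a machine that stores a copy of the
// master's operation log (and periodic checkpoints).
type LogReplica interface {
	Append(record []byte) error // flush the record durably
}

// LogMutation writes the record to the local log and to every remote
// replica; if any copy fails, the mutation is not acknowledged.
func LogMutation(local LogReplica, remotes []LogReplica, record []byte) error {
	if err := local.Append(record); err != nil {
		return fmt.Errorf("local log append failed: %w", err)
	}
	for _, r := range remotes {
		if err := r.Append(record); err != nil {
			return fmt.Errorf("remote log append failed: %w", err)
		}
	}
	return nil // now safe to apply the mutation and reply to the client
}
```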

Page 23: Google File System

FAULT TOLERANCE AND DIAGNOSIS

Data integrity
Checksums

Each chunkserver independently verifies the integrity of its own copies by maintaining checksums
A chunk is broken up into 64 KB blocks; each block has a corresponding 32-bit checksum
The chunkserver verifies the checksum on every read. On a mismatch, the requestor reads from another replica, and the master clones a correct replica and instructs the chunkserver to delete the corrupted one
(a sketch of the per-block checksums follows below)

23
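
A sketch of the per-block checksums, using CRC-32 as a stand-in 32-bit checksum (the slide only says the checksums are 32 bits); the block layout and function names are illustrative.

```go
// Sketch of 64 KB block checksums verified by the chunkserver on read.
package gfs

import (
	"errors"
	"hash/crc32"
)

const blockSize = 64 << 10 // 64 KB blocks within a chunk

// checksumBlocks computes one 32-bit checksum per 64 KB block.
func checksumBlocks(chunk []byte) []uint32 {
	var sums []uint32
	for off := 0; off < len(chunk); off += blockSize {
		end := off + blockSize
		if end > len(chunk) {
			end = len(chunk)
		}
		sums = append(sums, crc32.ChecksumIEEE(chunk[off:end]))
	}
	return sums
}

// verifyBlock is run by the chunkserver before returning data; on a
// mismatch the reader falls back to another replica and the master
// re-clones the chunk and deletes the corrupted copy.
func verifyBlock(block []byte, expected uint32) error {
	if crc32.ChecksumIEEE(block) != expected {
		return errors.New("checksum mismatch: replica is corrupted")
	}
	return nil
}
```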

Page 24: Google File System

FAULT TOLERANCE AND DIAGNOSIS

Diagnostic tools
Diagnostic logs record many significant events and all RPC (Remote Procedure Call) requests and replies

24

Page 25: Google File System

SUMMARY

GFS is widely used within Google as the storage platform for research and development as well as production data processing

The Google File System is without doubt one of the key building blocks that pushed Google to the top of the search-engine world!

25

Page 26: Google File System

THANKS FOR LISTENING

26