Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Transcript of Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!

Page 1: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


FUJITSU ETERNUS CD10000 (CD10K): The safe way to make Ceph storage enterprise ready!

BYOD – Build your own disaster?

Alex Lam

Vice President, Enterprise Business

March 12, 2015

Page 2: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


3rd Platform: Intersection of Mobile, Cloud, Social and Big Data

Source: IDC 12/12

From 2013 through 2020, 90% of IT industry growth will be driven by 3rd Platform technologies, which today represent just 22% of ICT spending

Services will be built on innovative mash-ups of cloud, mobile devices/apps, social technologies, big data/analytics, and more

Data Center Transformation

Converged systems will account for over 1/3 of enterprise cloud deployments by 2016

Software-defined networks will penetrate 35% of Ethernet switching in the data center

Growing importance of mega data centers and service providers


Page 3: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


3rd Platform creates challenges for Traditional Storage

Copyright 2014 FUJITSU

3rd Platform – What is required from Storage

Manageability

Central management of huge storage amounts

Unified multi-protocol access (block, file, object)

Seamless introduction of new storage

Reliability

Full redundancy

Self-healing

Geographical dispersion

Fast rebuild

Scalability

Practically unlimited scalability in terms of performance & capacity

No bottlenecks

No hot spots

Page 4: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


The PB divide is creating the need for new storage architectures

Traditional RAID systems face their limits when crossing the petabyte divide:

High RAID rebuild times, high risk

Exponentially rising costs for HA

Over- or under-provisioning due to unexpected data growth

Extremely long data migration times

Significant issues with (planned) downtime

Costs per capacity

Performance issues

Need for new architectures

[Chart: traditional RAID systems give way to new storage architectures as capacity grows from 0.5 PB through 1, 10, 20 and 100 PB]


Page 5: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


A paradigm shift: distributed, scale-out storage

Classical scale-up RAID storage

The system is divided into RAID groups, providing efficient data protection

Issues with rebuild times of large-capacity disks

Protection against system failures requires external add-ons

Limited performance scalability inhibits full capacity utilization of systems

Issues with performance hot spots

Distributed scale-out storage

Data are broadly distributed over disks and nodes, delivering high I/O speed even with slow disks

Protection against disk and node failures is achieved by creating 2, 3, 4 … n replicas of the data

A distributed file system avoids central bottlenecks

Adding nodes provides linear performance scalability

Fault tolerance and online migration of nodes are part of the design → zero downtime
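To make the replication trade-off concrete, here is a back-of-the-envelope sketch; the 1 PB raw figure is an assumption for illustration, not from the talk. Usable capacity shrinks linearly with the replica count, which is the price paid for fault tolerance without RAID:

```python
# Hypothetical cluster with 1 PB (1000 TB) of raw disk; the figure is
# an assumption for illustration only.
RAW_TB = 1000

for replicas in (2, 3, 4):
    usable_tb = RAW_TB / replicas            # each byte is stored n times
    overhead_pct = (1 - 1 / replicas) * 100  # capacity lost to the copies
    print(f"{replicas} replicas: {usable_tb:.0f} TB usable "
          f"({overhead_pct:.0f}% overhead)")
```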


Page 6: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Ceph storage system software: a leading distributed storage technology

Ceph is an open source, software-defined storage platform

Designed to present object, block, and file storage from a distributed x86 compute cluster, scalable to the exabyte level

Data are replicated over disks and nodes, delivering fault tolerance

The system is designed to be both self-healing and self-managing

The objective is to reduce investment and operational costs

Ceph is closely tied to the OpenStack project

Ceph software is the core storage element of many OpenStack deployments

OpenStack enables IT organizations to build private and public cloud environments with open source software
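As a concrete illustration of the library-level object access Ceph offers, here is a minimal Python sketch using the python-rados bindings; the pool name "demo-pool" and the config path are assumptions for illustration, not from the talk:

```python
# Minimal librados sketch: the client talks to the storage cluster
# directly -- there is no central data server in the I/O path.
# Assumes python-rados is installed, /etc/ceph/ceph.conf exists, and a
# pool named "demo-pool" has been created (all assumptions).
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('demo-pool')   # hypothetical pool name
    try:
        ioctx.write_full('greeting', b'hello ceph')  # store an object
        print(ioctx.read('greeting'))                # b'hello ceph'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```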


Page 7: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Key Scalability Enabler for Ceph: CRUSH Data Placement

Controlled Replication Under Scalable Hashing (CRUSH)

Metadata are computed instead of stored

Almost no central lookups

No hot spots

Pseudo-random, uniform (weighted) distribution

Dynamic adaptation to infrastructure changes

Adding devices has no significant impact on data mapping

Infrastructure-aware algorithm

Placement based on physical infrastructure, e.g. devices, servers, cabinets, rows, DCs, etc.

Easy and flexible placement rules

"Three replicas, different cabinets, same row"

Quickly adjusts to failures

Automatic and fast recovery from lost disks

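The core idea — compute an object's location instead of looking it up — can be sketched in a few lines. The toy hash-modulo scheme below is far simpler than real CRUSH (no weights, no failure domains, poor rebalancing on change); the cluster size and names are assumptions for illustration only:

```python
# Toy placement-by-computation sketch, NOT real CRUSH: every client that
# runs the same function on the same object name computes the same OSDs,
# so no central lookup table is needed.
import hashlib

NUM_OSDS = 8    # hypothetical cluster size
REPLICAS = 3

def place(obj_name: str) -> list:
    """Return the OSD ids that hold the replicas of obj_name."""
    h = int.from_bytes(hashlib.sha256(obj_name.encode()).digest()[:8], 'big')
    first = h % NUM_OSDS
    # spread replicas over distinct OSDs
    return [(first + i) % NUM_OSDS for i in range(REPLICAS)]

print(place('my-object'))   # e.g. [5, 6, 7] -- deterministic everywhere
```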


Page 8: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


A model for dynamic "clouds" – how nature overcomes the central access bottleneck

The fish represent the objects

The swarm, including the fish, represents the object storage

What happens if …

The swarm of fish represents the cluster


Page 9: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


A model for dynamic "clouds" in nature

Dolphins attack the fish swarm


Page 10: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


A model for dynamic "clouds" in nature

Some of the fish are hunted and eaten by the dolphins, but …

The fish swarm resists the attack and the cluster remains

After a time, the swarm formation regenerates


Page 11: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


How Ceph Overcomes the Central Access Bottleneck

Disks assume the role of fish in the storage swarm

Every disk is represented by an OSD (Object Storage Daemon)

Clients can directly access the data without a central instance

The fish swarm represents the data; a single fish cannot influence the rest of the population


Page 12: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Sounds good, so how do I implement Ceph?


Page 14: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Building storage with Ceph looks simple – but …

Many complexities exist:

Right-sizing servers, disk types, and network bandwidth

Silos of management tools (HW, SW, …)

Keeping Ceph versions in sync with server HW, OS, connectivity, and driver versions

Management of maintenance and support contracts for components

Troubleshooting


Build Ceph open source storage yourself

Page 15: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Is managing Ceph-based storage easy?

1) Deployment of servers ….
2) Connecting servers ….
3) Operating servers ….
4) Provisioning Ceph on servers ….
5) Operating Ceph …. (a small scripting sketch follows below)
6) Testing compatibility of Ceph with servers and network ….
7) Testing performance ….
8) Updating Ceph on each server ….
9) Starting new compatibility and performance tests after each update ….
10) Getting trained on management tools for each component ….
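Most of these steps end up scripted by whoever operates the cluster. As a hedged sketch of step 5 (operating Ceph), the standard ceph CLI can be polled from Python; this assumes the CLI and client credentials are present on the admin node:

```python
# Sketch of routine operational checks via the standard ceph CLI.
# Assumes `ceph` is installed and this node has a client keyring.
import subprocess

def ceph_cmd(*args: str) -> str:
    """Run a ceph subcommand and return its stdout."""
    result = subprocess.run(['ceph', *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

print(ceph_cmd('-s'))           # overall cluster health summary
print(ceph_cmd('osd', 'tree'))  # OSD-to-host layout
```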


Page 16: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


ETERNUS CD10000 – Making Ceph enterprise ready

Build Ceph open source storage yourself vs. the out-of-the-box ETERNUS CD10000:

Robust platform

Incl. support

Incl. maintenance

Single point of contact

ETERNUS CD10000 combines open source storage with enterprise-class quality of service


Page 17: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


How do we move Ceph from Shadow IT to Mainstream in the Enterprise?

Although based on open source software, the ETERNUS CD10000 is a fully integrated and quality-assured storage system

End-to-end maintenance, including upgrades and troubleshooting, for the complete system from one source

Adding functionality where Ceph has gaps (e.g. VMware, SNMP)

Integrated management of Ceph and hardware functions increases operational efficiency and makes Ceph simpler to use

Performance-optimized sizing and architecture avoid bottlenecks during operation, maintenance and failure recovery

Adding integrated apps on top of the system


ETERNUS CD10000 makes Ceph enterprise-ready

Page 18: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Fujitsu Maintenance, Support and Professional Services

ETERNUS CD10000: A complete offer


Page 20: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Fujitsu's enhancements on top of open source Ceph


Ceph Open Source Software

Ceph Enterprise Distribution (Inktank)

Ceph Enriched Enterprise Functionality (VSM from Intel)

CD10000 Appliance (Fujitsu enhancements in SW + HW)

Specific workload apps (e.g. iRODS for archiving)

Fujitsu enhancements

Page 21: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Unlimited Scalability

Cluster of storage nodes

Capacity and performance scale by adding storage nodes

Three different node types enable differentiated service levels:

Density/capacity optimized

Performance optimized

Optimized for small-scale dev & test

The 1st version of the CD10000 can scale from 4 to 224 nodes

Scales up to 50+ petabytes!


Basic node: 12 TB · Performance node: 35 TB · Capacity node: 252 TB
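A quick sanity check of the headline figure, using the node counts and the capacity-node size from the slide:

```python
# 224 nodes of the 252 TB capacity type (figures from the slide above).
nodes, tb_per_node = 224, 252
print(nodes * tb_per_node / 1000, "PB")   # 56.448 PB -> "50+ PB" holds
```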

Page 22: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


One Storage System – seamless management

ETERNUS CD10000 delivers seamless management for the complete stack:

Central Ceph software deployment

Central storage node management

Central network management

Central log file management

Central cluster management

Central configuration, administration and maintenance

SNMP integration of all nodes and network components


Page 23: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Seamless management drives productivity

Manual Ceph installation

Setting up a 4-node Ceph cluster with 15 OSDs: 1.5 – 2 admin days

Adding an additional node: 3 to 12 admin hours

vs.

Automated installation through ETERNUS CD10000

Setting up a 4-node Ceph cluster with 15 OSDs: 1 hour

Adding an additional node: 0.5 hour
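Taking the slide's own numbers at face value, the implied speedup on initial setup is roughly an order of magnitude; the 8-hour admin day used to convert days to hours is our assumption, not the talk's:

```python
# Setup speedup implied by the slide's figures; the 8-hour admin day
# is an assumption used only to convert days to hours.
HOURS_PER_DAY = 8
manual_setup_h = (1.5 * HOURS_PER_DAY, 2 * HOURS_PER_DAY)  # 12-16 h
automated_setup_h = 1.0
print(f"setup speedup: {manual_setup_h[0] / automated_setup_h:.0f}x "
      f"to {manual_setup_h[1] / automated_setup_h:.0f}x")   # 12x to 16x
```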


Page 24: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


TCO optimized

Based on x86 industry-standard architectures

Based on open source software (Ceph)

High-availability and self-optimizing functions are part of the design at no extra cost

Highly automated and fully integrated management reduces operational effort

Online maintenance and technology refresh dramatically reduce the cost of downtime

An extremely long lifecycle delivers investment protection

End-to-end design and maintenance from Fujitsu reduce evaluation, integration and maintenance costs


Better service levels at reduced costs – business-centric storage

Page 25: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Adding and Integrating Apps

The ETERNUS CD10000 architecture enables the integration of apps

Fujitsu is working with customers and software vendors to integrate selected storage apps

E.g. archiving, sync & share, data discovery, cloud apps …


[Diagram: ETERNUS CD10000 stack – apps (cloud services, sync & share, archive/iRODS, data discovery) running over object-, block- and file-level access; central management; Ceph storage system S/W plus Fujitsu extensions; 10GbE frontend network and fast interconnect network; performance and capacity nodes]

Page 26: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


Typical usage areas

Data volumes

500 TB and beyond

Exponential and unknown data growth

Typical usage areas

(Cloud) service providers

Telecommunications providers

Financial institutions

R&D, public institutions, universities

Companies with large R&D activities

Public institutions with huge document repositories

Media, broadcasting/streaming companies

Business analytics tasks needing fast access to large volumes of (historical) data


Page 27: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


How ETERNUS CD10000 supports the cloud business

Cloud IT trading platform

A European provider operates a trading platform for cloud resources (CPU, RAM, storage)

Cloud IT resources supplier

The Darmstadt data center (DARZ) offers storage capacity via the trading platform

Using ETERNUS CD10000 to provide storage resources for unpredictable demand


Page 28: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!


The FUJITSU difference

The easiest Ceph package in the industry to deploy and operate

Bringing simplicity to Ceph/OpenStack – the appliance hides complexity

Complete enterprise-class services – the best service package in the industry

Break/fix of HW and SW, installation, even operation, from one vendor

More functionality – the most complete Ceph-based storage device in the industry

Using Inktank + Intel's VSM + Fujitsu's own IP add-ons

App concept for hyperconverged solutions

E.g. archiving, backup, sync & share


Accelerate time-to-deployment & improve TCO with the Fujitsu ETERNUS CD10000

Page 29: Ceph Day SF 2015 - Building your own disaster? The safe way to make Ceph ready!
