Framework Agnostic Discovery

Post on 12-Apr-2017


Transcript of Framework Agnostic Discovery

FRAMEWORK-AGNOSTIC DISCOVERY

Tim Gross | Product Manager, Joyent | @0x74696d

CONTAINER-NATIVE?

Containers are first-class citizens.

Each container is an equal peer on the network.

Discovery should be framework-agnostic.

REMEMBER: YOUR MISSION IS NOT "MANAGE VMs."

Your mission is what your application does for your organization.

Infrastructure (undifferentiated heavy lifting) is incidental cost and incidental complexity.

Application containers make the full promise of cloud computing possible...

but require new ways of working.

Triton Elastic Container Service

Run Linux containers securely on bare-metal in public cloud

Or run on-premise (it's open source!)

Director of DevOps

... Docker in production since Oct 2013

WHAT DOCKER SOLVED FOR US:

Human-and-machine-readable build documentation.

No more "works on my machine."

Fix dependency isolation.

Interface-based approach to application deployment.

Deployments are fast!

DevOps kool-aid for everyone!

OK, WHAT'S WRONG?

NAT

"Docker's use of bridging and NAT noticeably increases the transmit path length; vhost-net is fairly efficient at transmitting but has high overhead on the receive side... In real network-intensive workloads, we expect such CPU overhead to reduce overall performance."

IBM Research Report: An Updated Performance Comparisonof Virtual Machines and Linux Containers

CAN WE AVOID NAT?

Host networking (--net=host)

port conflicts

port mapping at LB

CAN WE AVOID NAT?

Bridge (not --bridge) networking

Can get IP per container

May need 2nd NIC

Scaling w/ subnet per host
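The subnet-per-host scaling point above can be sketched with Python's stdlib ipaddress module; the address range here is a hypothetical example, not from the talk:

```python
import ipaddress

# Hypothetical layout: carve one data-center block into a /24 per host,
# so each host can hand real (non-NAT) IPs to its containers.
dc_block = ipaddress.ip_network("10.4.0.0/16")    # example range
per_host = list(dc_block.subnets(new_prefix=24))  # 256 hosts, ~254 container IPs each

assert len(per_host) == 256
# host 3's containers would draw addresses from 10.4.3.0/24
assert str(per_host[3]) == "10.4.3.0/24"
```

The trade-off the slide names: every additional host consumes a whole subnet, so the data-center block caps your host count up front.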

DNS

Simple discovery! But...

Can't address individual hosts behind a record.*

No health checking.*

TTL caching.
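The TTL-caching problem can be shown with a toy simulation (all names hypothetical): stub resolvers keep an answer until its TTL expires, so a backend that dies keeps receiving traffic even after the DNS record is updated:

```python
import time

class CachingResolver:
    """Toy stub resolver that honors a record's TTL, like most client libraries."""
    def __init__(self, lookup, clock=time.monotonic):
        self.lookup = lookup  # real resolution: name -> (addresses, ttl_seconds)
        self.clock = clock
        self.cache = {}       # name -> (addresses, expiry)

    def resolve(self, name):
        hit = self.cache.get(name)
        if hit and self.clock() < hit[1]:
            return hit[0]     # cached answer -- possibly stale
        addresses, ttl = self.lookup(name)
        self.cache[name] = (addresses, self.clock() + ttl)
        return addresses

# Simulate: the authoritative answer changes, but the cached copy persists.
now = [0.0]
backends = [["10.0.0.1", "10.0.0.2"]]
resolver = CachingResolver(lambda name: (list(backends[0]), 30),
                           clock=lambda: now[0])

assert resolver.resolve("app.example.com") == ["10.0.0.1", "10.0.0.2"]
backends[0] = ["10.0.0.2"]  # 10.0.0.1 dies; DNS is updated immediately...
now[0] = 10.0               # ...but the client still sees the stale answer
assert "10.0.0.1" in resolver.resolve("app.example.com")
now[0] = 31.0               # only after the TTL expires does traffic shift
assert resolver.resolve("app.example.com") == ["10.0.0.2"]
```

Lowering the TTL shrinks the stale window but raises query load, which is exactly why the talk pushes discovery into the application instead.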

NETWORKING STILL SUCKS!

Containers don't have their own NIC on the data center network

Pass through proxy for all outbound requests

All packets go through NAT or port forwarding

THE CONTAINER-NATIVE ALTERNATIVE?

Cut the cruft!

Push responsibility for the application topology away from the network infrastructure and into the application itself, where it belongs.

RESPONSIBILITIES OF A CONTAINER

Registration

Self-introspection

Heartbeats

Look for change

Respond to change
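The five responsibilities above can be sketched against a Consul-style HTTP API. This is a hedged sketch, not Containerbuddy's code: the helper names are hypothetical, and the field names and endpoint paths follow Consul's agent API as commonly documented; verify them against the version you run.

```python
import json

def registration_payload(name, port, ttl_sec):
    # Field names follow Consul's /v1/agent/service/register API
    # (Name, Port, and a TTL-style health check) -- verify per version.
    return {
        "Name": name,
        "Port": port,
        "Check": {"TTL": "%ds" % ttl_sec},
    }

# Registration: the container PUTs this to the agent when it starts.
payload = registration_payload("nginx", 80, ttl_sec=25)
assert json.loads(json.dumps(payload)) == payload  # serializes cleanly

# The other responsibilities run in a loop alongside the app:
#   self-introspection: run a user-supplied health check (e.g. curl /health)
#   heartbeats:         PUT /v1/agent/check/pass/service:nginx while healthy,
#                       before the TTL window above lapses
#   look for change:    poll the catalog for the backends you depend on
#   respond to change:  exec a user-supplied hook when membership changes
assert payload["Check"]["TTL"] == "25s"
```

Note the TTL (25s) is deliberately longer than the poll interval, so one missed heartbeat doesn't immediately mark the service critical.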

NO SIDECARS

Sidecar needs to reach into application container

Unsuited for multi-tenant security

Deployment of sidecar bound to deployment of app

APPLICATION-AWARE HEALTH CHECKS

No packaging tooling into another service

App container lifecycle separate from discovery service

Respond quickly to changes

LEGACY PRE-CONTAINER APPS

Registration: wrap start of app in a shell script

Self-introspection: self-test?

Heartbeats: um...

Look for change: ???

Respond to change: profit?

http://containerbuddy.io

CONTAINERBUDDY: A shim to help make existing apps container-native

Registration: registers to Consul on startup

Self-introspection: execute user-defined health check

Heartbeats: send health status w/ TTL to Consul

Look for change: poll Consul for changes

Respond to change: execute user-defined response behavior
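The look-for-change/respond-to-change cycle boils down to comparing successive poll results and only firing the response hook when backend membership actually changes. A minimal sketch (hypothetical helper name, simulated polls):

```python
def on_change_needed(previous, current):
    """Fire the onChange hook only when backend membership changes,
    not on every poll (mirrors the 'backends' polling behavior above)."""
    return sorted(previous) != sorted(current)

# three successive polls of the 'app' backend from the discovery service
polls = [
    ["10.0.0.2:80"],
    ["10.0.0.2:80"],                 # no change: skip the hook
    ["10.0.0.2:80", "10.0.0.3:80"],  # new instance: time to reload nginx
]
fired = []
previous = polls[0]
for current in polls[1:]:
    if on_change_needed(previous, current):
        fired.append(current)        # Containerbuddy would exec onChange here
    previous = current

assert fired == [["10.0.0.2:80", "10.0.0.3:80"]]
```

Sorting before comparing makes the check order-insensitive, so a reshuffled response from the catalog doesn't trigger a spurious reload.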

NO SUPERVISION

Containerbuddy is PID 1

Returns exit code of shimmed process back to Docker Engine (or Triton) and dies

Attaches stdout/stderr from app to stdout/stderr of container

{
  "consul": "consul:8500",
  "services": [
    {
      "name": "nginx",
      "port": 80,
      "health": "/usr/bin/curl --fail -s http://localhost/health",
      "poll": 10,
      "ttl": 25
    }
  ],
  "backends": [
    {
      "name": "app",
      "poll": 7,
      "onChange": "/opt/containerbuddy/reload-nginx.sh"
    }
  ]
}

$ cat ./nginx/opt/containerbuddy/reload-nginx.sh

# fetch latest virtualhost template from Consul k/v
curl -s --fail consul:8500/v1/kv/nginx/template?raw \
    > /tmp/virtualhost.ctmpl

# render virtualhost template using values from Consul and reload Nginx
consul-template \
    -once \
    -consul consul:8500 \
    -template \
    "/tmp/virtualhost.ctmpl:/etc/nginx/conf.d/default.conf:nginx -s reload"

$ less ./nginx/default.ctmpl

# for each service, create a backend
{{range services}}
upstream {{.Name}} {
    # write the health service address:port pairs for this backend
    {{range service .Name}}
    server {{.Address}}:{{.Port}};
    {{end}}
}
{{end}}

server {
    listen 80;
    server_name _;

    # need ngx_http_stub_status_module compiled-in
    location /health {
        stub_status on;
        allow 127.0.0.1;
        deny all;
    }

    {{range services}}
    location /{{.Name}}/ {
        proxy_pass http://{{.Name}}/;
        proxy_redirect off;
    }
    {{end}}
}

nginx:
    image: 0x74696d/containerbuddy-demo-nginx
    mem_limit: 512m
    ports:
        - 80
    links:
        - consul:consul
    restart: always
    environment:
        - CONTAINERBUDDY=file:///opt/containerbuddy/nginx.json
    command: >
        /opt/containerbuddy/containerbuddy
        nginx -g "daemon off;"

echo 'Starting Consul.'
docker-compose -p example up -d consul

# get network info from consul. alternately we can push this into
# a DNS A-record to bootstrap the cluster
CONSUL_IP=$(docker inspect example_consul_1 \
    | json -a NetworkSettings.IPAddress)

echo "Writing template values to Consul at ${CONSUL_IP}"
curl --fail -s -X PUT --data-binary @./nginx/default.ctmpl \
    http://${CONSUL_IP}:8500/v1/kv/nginx/template

echo 'Opening consul console'
open http://${CONSUL_IP}:8500/ui

Starting application servers and Nginx
example_consul_1 is up-to-date
Creating example_nginx_1...
Creating example_app_1...
Waiting for Nginx at 72.2.115.34:80 to pick up initial configuration.
...................
Opening web page... the page will reload every 5 seconds with any updates.
Try scaling up the app!

docker-compose -p example scale app=3

echo 'Starting application servers and Nginx'
docker-compose -p example up -d

# get network info from Nginx and poll it for liveness
NGINX_IP=$(docker inspect example_nginx_1 \
    | json -a NetworkSettings.IPAddress)
echo "Waiting for Nginx at ${NGINX_IP} to pick up initial configuration."
while :
do
    sleep 1
    curl -s --fail -o /dev/null "http://${NGINX_IP}/app/" && break
    echo -ne .
done
echo
echo 'Opening web page... the page will reload every 5 seconds'
echo 'with any updates.'
open http://${NGINX_IP}/app/

DOES IT SCALE?

$ docker-compose -p example scale app=3
Creating and starting 2... done
Creating and starting 3... done

The Old Way                                 The Container-Native Way
Extra network hop from LB or local proxy    Direct container-to-container communication
NAT                                         Containers have their own IP
DNS TTL                                     Topology changes propagate immediately
Health checks in the LB                     Applications report their own health
Two build & orchestration pipelines         Focus on your app alone
VMs                                         Secure multi-tenant bare-metal

http://0x74696d.com/talk-kubecon-2015/