How Conduit Works

conduit install

Installs the Conduit control plane into the cluster, providing the dashboard and the services below.
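A typical install flow pipes the generated manifest into kubectl (a usage sketch; it assumes kubectl is already configured against the target cluster):

```shell
# Generate the control-plane manifest and apply it to the cluster
conduit install | kubectl apply -f -

# Check that the control-plane pods come up in the conduit namespace
kubectl -n conduit get pods

# Open the dashboard in a browser
conduit dashboard
```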

Services

Core services

  • tap: pushes rules down to the proxies
  • telemetry: fetches the pod list from the cluster, including IP, deployment, status, last-report time, etc.; exposes the state reported by the proxies as metrics, which Prometheus scrapes
  • destination: resolves endpoints from the cluster by namespace and service

Proxy services

  • public api: fronts the core services, serving the Dashboard
  • proxy api: fronts the core services, serving the proxies

Components

Dashboard

  • Displays the status of the deployments, pods, and proxies in the cluster
  • Displays per-proxy metrics

Proxy

Handles the requests redirected to it

TODO

conduit inject

Injects Conduit's proxy into the original deployment config, adding a proxy container to each Pod. An initContainer sets up iptables rules in the Pod:

  • Redirect inbound requests on all ports (except ignored ports such as 80 and 4190) to port 4143, where the proxy listens
  • Redirect outbound requests from all ports to port 4140, where the proxy listens; loopback traffic is skipped
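The redirection set up by the init container can be sketched as NAT rules like the following (an illustration based on the proxy-init flags in the injected config; the chain names are assumptions, not necessarily what proxy-init actually creates):

```shell
# Inbound: redirect traffic arriving on any port to the proxy's public
# listener on 4143, except the ignored ports (80, 4190).
iptables -t nat -N CONDUIT_INBOUND
iptables -t nat -A PREROUTING -p tcp -j CONDUIT_INBOUND
iptables -t nat -A CONDUIT_INBOUND -p tcp --dport 80   -j RETURN
iptables -t nat -A CONDUIT_INBOUND -p tcp --dport 4190 -j RETURN
iptables -t nat -A CONDUIT_INBOUND -p tcp -j REDIRECT --to-port 4143

# Outbound: redirect traffic the application sends out to the proxy's
# private listener on 4140, skipping loopback and the proxy's own
# traffic (identified by the proxy's UID, 2102).
iptables -t nat -N CONDUIT_OUTBOUND
iptables -t nat -A OUTPUT -p tcp -j CONDUIT_OUTBOUND
iptables -t nat -A CONDUIT_OUTBOUND -o lo -j RETURN
iptables -t nat -A CONDUIT_OUTBOUND -m owner --uid-owner 2102 -j RETURN
iptables -t nat -A CONDUIT_OUTBOUND -p tcp -j REDIRECT --to-port 4140
```

Skipping the proxy's own UID is what prevents the proxy's outbound connections from being looped back into itself.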

The proxy listens on three ports:

  • 4143 (public, 0.0.0.0): inbound traffic
  • 4140 (private, 127.0.0.1): outbound traffic
  • 4190 (public, 0.0.0.0): listens for rule-update requests from the tap service
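Injection is usually done by piping the existing manifest through the CLI (a usage sketch; `deployment.yml` is a placeholder name for a file containing the manifest below):

```shell
# Rewrite the manifest to add the proxy sidecar and init container,
# then apply the result to the cluster
conduit inject deployment.yml | kubectl apply -f -
```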

Original deployment config:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: emoji-svc
  namespace: emojivoto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emoji-svc
  template:
    metadata:
      labels:
        app: emoji-svc
    spec:
      containers:
      - name: emoji-svc
        image: buoyantio/emojivoto-emoji-svc:v2
        env:
        - name: GRPC_PORT
          value: "8080"
        ports:
        - name: grpc
          containerPort: 8080

Injected config:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: emoji-svc
  namespace: emojivoto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emoji-svc
  strategy: {}
  template:
    metadata:
      annotations:
        conduit.io/created-by: conduit/cli v0.1.0
        conduit.io/proxy-version: v0.1.0
      creationTimestamp: null
      labels:
        app: emoji-svc
        conduit.io/controller: conduit
        conduit.io/plane: data
    spec:
      containers:
      - env:
        - name: GRPC_PORT
          value: "8080"
        image: buoyantio/emojivoto-emoji-svc:v2
        name: emoji-svc
        ports:
        - containerPort: 8080
          name: grpc
        resources: {}
      - env:
        - name: CONDUIT_PROXY_LOG
          value: trace,h2=debug,mio=info,tokio_core=info
        - name: CONDUIT_PROXY_CONTROL_URL
          value: tcp://proxy-api.conduit.svc.cluster.local:8086
        - name: CONDUIT_PROXY_CONTROL_LISTENER
          value: tcp://0.0.0.0:4190
        - name: CONDUIT_PROXY_PRIVATE_LISTENER
          value: tcp://127.0.0.1:4140
        - name: CONDUIT_PROXY_PUBLIC_LISTENER
          value: tcp://0.0.0.0:4143
        - name: CONDUIT_PROXY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: CONDUIT_PROXY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CONDUIT_PROXY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: gcr.io/runconduit/proxy:v0.1.0
        imagePullPolicy: IfNotPresent
        name: conduit-proxy
        ports:
        - containerPort: 4143
          name: conduit-proxy
        resources: {}
        securityContext:
          runAsUser: 2102
      initContainers:
      - args:
        - -p
        - "4143"
        - -o
        - "4140"
        - -i
        - 80,4190
        - -u
        - "2102"
        image: gcr.io/runconduit/proxy-init:v0.1.0
        imagePullPolicy: IfNotPresent
        name: conduit-init
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
          privileged: false
status: {}