Workspaces and Kubernetes

This is a work in progress and may not be 100% accurate.

Kubernetes is developing many of the facilities needed for an abstraction like workspaces (a minimal sketch follows the list):

  • Logical isolation of clusters of process containers.
  • Service proxies to distinguish between interior and exterior promises.
  • Self-repairing desired-end-state semantics, with replication controllers.
  • Distributed clusters.
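
For instance, one way to read the first two points in Kubernetes' own configuration terms; all names here (workspace-1, backend, frontend) are illustrative, not taken from any particular deployment:

apiVersion: v1
kind: Namespace                # logical isolation for a cluster of process containers
metadata:
  name: workspace-1
---
apiVersion: v1
kind: Service                  # interior promise: reachable only inside the cluster
metadata:
  name: backend
  namespace: workspace-1
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - port: 80
---
apiVersion: v1
kind: Service                  # exterior promise: exposed on each node's IP via a node port
metadata:
  name: frontend
  namespace: workspace-1
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 80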

Its main drawback, presently, is that it is designed as a top-down management system rather than a bottom-up cooperative. The logical picture might not reflect the physical picture. In particular, the use of TLS certificates as the basis for identity (with CAs acting as trusted third parties) is a serious source of complexity.

Apparently missing from Kubernetes, for the unification of embedded and IoT services, are:

  • Bottom-up name management of service points (interior and exterior).
  • Dissociation from IPv4-based server semantics.
  • Policy-based management of the service nexus (the complex interconnectivity of microservices).
  • Knowledge-oriented desired end-state workflow (DSL model).
  • The ability to incorporate the modelling of device resources and network capacity into service estimates.

How real and virtual clusters map to workspaces

The ability to manage a workspace as a cluster depends on the ability to:

  1. Define new clusters/workspaces without changing the hardware. Subsets and private tenancies are managed using namespace walk-through and namespace sharing.
  2. Add or remove specialized hardware in only certain clusters/workspaces -- analogous to adding new remote cloud or bare-metal instances to a resource cluster. (Kubernetes node management: labels on nodes can be used in conjunction with node selectors on pods to control scheduling, e.g. to constrain a Pod to be eligible to run only on a subset of the nodes; this is a top-down decision. A node must also be able to say which pods it is willing to accept from which namespaces, as a matter of policy.) E.g. could a camera join a workspace cluster? See the sketch after this list.
  3. Apply to join someone else's cluster as a member.
    • As a user -- get an account, based on a client certificate (cn=username), OpenID (email=username), etc. See Authenticating Across Clusters.
    • User credentials must be mapped to namespaces and workspaces on every member node.
    • Workspace credentials work from an authorized source, such as identity-provider (OpenID) authentication. This is initiated by the first user.
    • Resource providers (apps, storage services, virtual interfaces, etc) need to accept security perimeter promise/responsibilities from workspace managers.
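
A minimal sketch of point 2, using Kubernetes' existing top-down mechanism of node labels and node selectors; the label, namespace, and image names are illustrative only:

# Label a node top-down, e.g. one with a camera attached:
#   kubectl label nodes edge-node-1 hardware=camera

apiVersion: v1
kind: Pod
metadata:
  name: camera-reader              # illustrative name
  namespace: workspace-1           # the workspace/cluster subset of point 1
spec:
  nodeSelector:
    hardware: camera               # eligible only on nodes carrying this label
  containers:
  - name: reader
    image: example/camera-reader   # illustrative image
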
Kubernetes starts with a physical set of hardware and places a controller on top of it.

Comparing Kubernetes to a pure promise-oriented system (e.g. CFEngine)

Here we compare some of the features of Kubernetes with a promise-oriented, bottom-up system like CFEngine. Kubernetes has remarkably many of the same features, but its priorities are reversed compared to CFEngine's.

  • CFEngine makes self-healing and autonomous, bottom-up behaviour the priority, and makes sharing optional.
  • Kubernetes makes clustering and top-down sharing a priority, and makes self-healing optional.

Kubernetes (as documented March 2016) versus a promise-oriented system (CFEngine is the point of reference here):

  • Role
    • Kubernetes: a cluster manager; views a distributed system as a tenancy provider, sharing out resources as containers/pods that are networked together.
    • Promise system: a resource manager; views a distributed system as a logical aggregation of resources from host-private sources; this implies no automatic connectivity, just logical association.
  • Agent model
    • Kubernetes: push-agent, service oriented; API / kubectl command line.
    • Promise system: pull-agent, service oriented; Domain Specific Language.
  • Desired end state
    • Kubernetes: replication controller wrappers for state maintenance.
    • Promise system: built-in self-repairing semantics (desired end state).
  • Cooperation architecture
    • Kubernetes: assumed top-down; centralized master architecture; makes some attempt to allow integration with IaaS providers, to act as a top-layer controller.
    • Promise system: must be engineered through policy and client-server exchanges; de-centralized peer architecture.
  • Timescale
    • Kubernetes: milliseconds to seconds.
    • Promise system: seconds to minutes.
  • Resources
    • Kubernetes: containers (pods), storage mounts.
    • Promise system: files, processes, containers, operations, storage mounts.
  • Workspace abstraction
    • Kubernetes: top-down "communities", or contexts, provide a scope for
      • a namespace (for names!),
      • delegation of access rights to trusted users from top-defined users,
      • limits on resource consumption from top-delegated resources.
      Each community manages resources, policies, and constraints. Kubernetes adds a single set of credentials, or sews together a master community of identities with access rights to shared infrastructure.
    • Promise system: host/container-oriented autonomy, with voluntary cooperation:
      • resources automatically scoped per OS instance (HM/VM/container),
      • each node manages its own resources, and may voluntarily subordinate itself by policy,
      • resource sharing managed by each individual service (no common user abstraction).
      There is no a priori shared community abstraction: every service for itself.
  • Context
    • Kubernetes: the triplet (cluster, user, namespace); selectors based on labels set manually. (See the kubeconfig sketch below.)
    • Promise system: logical expressions based on labels; selection based on labels set by probes or manual aliasing.
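
For reference, the (cluster, user, namespace) context triplet corresponds to a kubeconfig entry along the following lines; the cluster, user, and namespace names and the server address are illustrative only:

apiVersion: v1
kind: Config
clusters:
- name: workspace-cluster            # illustrative cluster name
  cluster:
    server: https://10.0.0.1:6443    # illustrative API server address
    certificate-authority: ca.pem    # CA as trusted third party (see the criticism above)
users:
- name: alice                        # illustrative user, identified by client certificate (cn=alice)
  user:
    client-certificate: alice.pem
    client-key: alice-key.pem
contexts:
- name: alice@workspace              # context = (cluster, user, namespace)
  context:
    cluster: workspace-cluster
    user: alice
    namespace: workspace-1           # scopes the user's work to one namespace
current-context: alice@workspace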
  • Configuration
    • Kubernetes: sample config/API interactions (below).
    • Promise system: a speculative promise model, like a "DSL" (below).
Examples taken from kubernetes.io, each followed by a speculative promise-model rendering of the same intent:
apiVersion: v1
kind: ReplicationController

metadata:
  name: my-nginx

spec:

  replicas: 2

  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

bundle ReplicationController my_nginx
{
meta:
   "app" string => "nginx";

replica_sets:
   "nginx"
      pod_template => nginx_template,
          replicas => "2";
}

bundle pod_template nginx_template
{
meta:
   "name" string => "nginx";

containers:
   "nginx"
               image => "nginx",
     container_ports => { "80" };
}

apiVersion: v1
kind: Service

metadata:
  name: nginxsvc
  labels:
    app: nginx

spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: nginx

---

apiVersion: v1
kind: ReplicationController

metadata:
  name: my-nginx
spec:
  replicas: 1

  template:
    metadata:
      labels:
        app: nginx

    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret

      containers:
      - name: nginxhttps
        image: bprashanth/nginxhttps:1.0
        ports:
        - containerPort: 443
        - containerPort: 80

        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume

bundle ServiceProxy nginxsvc # is a transducer port in/out
{
meta:

   "app" string => "nginx";

labels:
   "app_nginx" expression => "true";

exterior_service_port:

 app_nginx::

  "8080"
    map_to_port => "80",
       protocol => "tcp",
           name => "http";

  "443"
    protocol => "tcp",
    name => "https";
}



bundle ReplicationController my_nginx
{
replica_sets:
   "nginx"
      pod_template => nginx_template,
          replicas => "1";
}

bundle pod_template nginx_template
{
meta:
   "app" string => "nginx";

containers:
    "nginxhttps"
                  image => "bprashanth/nginxhttps:1.0",
        container_ports => { "443", "80" };

volumes:  # storage:

    "/etc/nginx/ssl"
          volume => "secret-volume";
}

{
"kind": "Service",
"apiVersion": "v1",
"metadata": 
        {
        "name": "my-service"
        },

"spec": 
   {
   "selector": 
        {
        "app": "MyApp"
        },

   "ports": 
        [
          {
          "protocol": "TCP",
          "port": 80,
          "targetPort": 9376,
          "nodePort": 30061
          }
        ],

   "clusterIP": "10.0.171.239",     # conflicts with LoadBalancer???
                                    # mistake?
   "loadBalancerIP": "78.11.24.19", # attempt to acquire this IP
   "type": "LoadBalancer"
   },

"status": 
   {
   "loadBalancer": 
       {
       "ingress": 
          [
             {
             "ip": "146.148.47.155"
             }
          ]
       }
   }
}


bundle ServiceProxy my_service
{
exterior_balancer_port:

 MyApp::

 "78.11.24.19/80"
     map_to_port => "9376",
        protocol => "tcp",
        nodeport => "30061";  #???

 # "clusterIP": "10.0.171.239",   

status: 

  "loadBalancer"                         ## ???? what is this?
    ingress => { "146.148.47.155" };
}