diff --git a/README.md b/README.md
index b13cc01..97edac7 100644
--- a/README.md
+++ b/README.md
@@ -1,22 +1,24 @@
-[](https://goreportcard.com/report/nginxinc/nginx-k8s-edge-controller)
-
# nginx-k8s-edge-controller
## Welcome to the Nginx Kubernetes Load Balancer project !
-This repo contains source code and documents for a new Kubernetes Controller, that provides TCP load balancing external to a k8s cluster. It is a replacement for a Cloud Providers "Service Type Loadbalancer", that is missing from On Premises Kubernetes Clusters.
+This repo contains source code and documents for a new Kubernetes Controller that provides TCP load balancing external to a Kubernetes cluster running on premises.
+
+
+
+>**This is a replacement for a Cloud Provider's "Service Type LoadBalancer", which is missing from on-premises Kubernetes clusters.**
## Overview
-- Create a new K8s Controller, that will monitor specified k8s Service Endpoints, and then send API calls to an external NginxPlus server to manage Nginx Upstream server blocks.
-- This is will synchronize the K8s Service Endpoint list, with the Nginx LB server's Upstream block server list.
+- Create a new K8s Controller that will monitor specified K8s Services, and then send API calls to an external Nginx Plus server to manage Nginx Upstream servers automatically.
+- This will synchronize the K8s Service Endpoint list with the Nginx LB server's Upstream server list.
- The primary use case is for tracking the NodePort IP:Port definitions for the Nginx Ingress Controller's `nginx-ingress Service`.
-- With the NginxPlus Server located external to the K8s cluster, this new controller LB function would provide an alternative TCP "Load Balancer Service" for On Premises k8s clusters, which do not have access to a Cloud providers "Service Type LoadBalancer".
-- Make the solution a native Kubernetes Component, configured and managed with standard K8s tools.
+- With the Nginx Plus Server located external to the K8s cluster, this new controller LB function would provide an alternative TCP "Load Balancer Service" for on-premises K8s clusters, which do not have access to a Cloud provider's "Service Type LoadBalancer".
+- Make the solution a native Kubernetes Component, running, configured, and managed with standard K8s commands.
@@ -24,12 +26,14 @@ This repo contains source code and documents for a new Kubernetes Controller, th
-
+
## Sample Screenshots of Runtime
+
+
### Configuration with 2 Nginx LB Servers defined (HA):

@@ -46,7 +50,7 @@ Legend:
- Indigo - nodeport and upstreams for https traffic
- Green - logs for api calls to LB Server #1
- Orange - Nginx LB Server upstream dashboard details
-- Kubernetes nodes are 10.1.1.8 and 10.1.1.10
+- Kubernetes Worker Nodes are 10.1.1.8 and 10.1.1.10
@@ -54,15 +58,17 @@ Legend:
Please see the /docs folder for detailed documentation.
+
+
## Installation
-Please see the /docs folder for detailed documentation.
+Please see the /docs folder for the Installation Guide.
## Development
-No contributions are being accepted at this time.
+Contributions are being accepted at this time.
Read the [`CONTRIBUTING.md`](https://github.com/nginxinc/nginx-k8s-edge-controller/blob/main/CONTRIBUTING.md) file.
diff --git a/docs/InstallationGuide.md b/docs/InstallationGuide.md
new file mode 100644
index 0000000..7618b53
--- /dev/null
+++ b/docs/InstallationGuide.md
@@ -0,0 +1,392 @@
+# Nginx Kubernetes Loadbalancer Solution
+
+
+
+## This is the `Installation Guide` for the Nginx Kubernetes Loadbalancer Controller Solution. It contains detailed instructions for implementing the different components of the Solution.
+
+
+
+## Prerequisites
+
+- A working Kubernetes cluster, with admin privileges
+- A running nginx-ingress controller, either OSS or Plus. This install guide follows the instructions for deploying an Nginx Ingress Controller here: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
+- A demo application; this install guide uses the Nginx Cafe example, found here: https://github.com/nginxinc/kubernetes-ingress/tree/main/examples/ingress-resources/complete-example
+- A bare metal Linux server or VM for the external LB Server, connected to a network external to the cluster. Two of these will be required if High Availability is needed, as shown here.
+- Nginx Plus software loaded on the LB Server(s). This install guide follows the instructions for installing Nginx Plus on CentOS 7, located here: https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-plus/
+- The Nginx Kubernetes Loadbalancer (NKL) Controller, new software for this Solution.
+
+
+
+## Kubernetes Cluster
+
+A standard K8s cluster is all that is required. There must be enough resources available to run the Nginx Ingress Controller and the Nginx Kubernetes Loadbalancer Controller. You must have administrative access to be able to create the namespace, services, and deployments for this Solution. This Solution was tested on Kubernetes version 1.23. Most recent versions >= v1.21 should work just fine.
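+
+As a quick sanity check, you can confirm the cluster version and your admin access with standard kubectl commands (a minimal example; output will vary by cluster):
+
+```bash
+# Confirm the server version is v1.21 or newer
+kubectl version --short
+
+# Confirm you are allowed to create the objects this Solution needs
+kubectl auth can-i create namespace
+kubectl auth can-i create deployment --all-namespaces
+```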
+
+
+
+## Nginx Ingress Controller
+
+The Nginx Ingress Controller in this Solution is the destination target for traffic (north-south) that is being sent to the cluster. The installation of the actual Ingress Controller is outside the scope of this guide, but we include the links to the docs for your reference. `The NIC installation must follow the documents exactly as written`, as this Solution refers to the `nginx-ingress` namespace and Service objects. **Only the very last step is changed.**
+
+NOTE: This Solution only works with nginx-ingress from Nginx. It will `not` work with the Community version of Ingress, called ingress-nginx. If you are unsure which Ingress Controller you are running, check out the blog on Nginx.com:
+https://www.nginx.com/blog/guide-to-choosing-ingress-controller-part-4-nginx-ingress-controller-options
+
+
+>Important! The very last step in the NIC deployment with Manifests is to deploy the nodeport.yaml Service file. `This file must be changed! It is not the default nodeport file.` Instead, use the `nodeport-nkl.yaml` manifest file that is provided here with this Solution. The port names in the NodePort manifest `MUST` be in the correct format for this Solution to work correctly. The port name is the mapping from NodePorts to the LB Server's upstream blocks. The port names are intentionally changed to avoid conflicts with other NodePort definitions.
+
+Review the new `nodeport-nkl.yaml` Service definition file:
+
+```yaml
+# NKL Nodeport Service file
+# NodePort port name must be in the format of
+# nkl-
+# Chris Akker, Jan 2023
+#
+apiVersion: v1
+kind: Service
+metadata:
+ name: nginx-ingress
+ namespace: nginx-ingress
+spec:
+ type: NodePort
+ ports:
+ - port: 80
+ targetPort: 80
+ protocol: TCP
+ name: nkl-nginx-lb-http # Must be changed to this
+ - port: 443
+ targetPort: 443
+ protocol: TCP
+ name: nkl-nginx-lb-https # Must be changed to this
+ selector:
+ app: nginx-ingress
+
+```
+
+
+Apply the updated NodePort manifest:
+
+```bash
+kubectl apply -f nodeport-nkl.yaml
+```
+
+
+
+
+
+## Demo Application
+
+This is not part of the actual Solution, but it is useful to have a well-known application running in the cluster, as a target for test commands. The example provided here is used by the Solution to demonstrate proper traffic flows, and application health check monitoring, to determine if the application is running in the cluster. If you choose a different Application to test with, the health checks provided here will NOT work, and will need to be modified to match your application.
+
+- Deploy the Nginx Cafe Demo application, found here:
+
+https://github.com/nginxinc/kubernetes-ingress/tree/main/examples/ingress-resources/complete-example
+
+- Do not use the `cafe-ingress.yaml` file. Rather, use the `cafe-virtualserver.yaml` file that is provided here. It uses the Nginx CRDs to define a VirtualServer, and the related Routes and Redirects needed. The `redirects are required` for the LB Server's health checks to work correctly!
+
+```yaml
+#Example virtual server with routes for Cafe Demo
+#For NKL Solution, redirects required for LB Server health checks
+#Chris Akker, Jan 2023
+#
+apiVersion: k8s.nginx.org/v1
+kind: VirtualServer
+metadata:
+ name: cafe-vs
+spec:
+ host: cafe.example.com
+ tls:
+ secret: cafe-secret
+ redirect:
+ enable: true #Redirect from http > https
+ code: 301
+ upstreams:
+ - name: tea
+ service: tea-svc
+ port: 80
+ lb-method: round_robin
+ slow-start: 20s
+ healthCheck:
+ enable: true
+ path: /tea
+ interval: 20s
+ jitter: 3s
+ fails: 5
+ passes: 2
+ connect-timeout: 30s
+ read-timeout: 20s
+ - name: coffee
+ service: coffee-svc
+ port: 80
+ lb-method: round_robin
+ healthCheck:
+ enable: true
+ path: /coffee
+ interval: 10s
+ jitter: 3s
+ fails: 3
+ passes: 2
+ connect-timeout: 30s
+ read-timeout: 20s
+ routes:
+ - path: /
+ action:
+ redirect:
+ url: https://cafe.example.com/coffee
+ code: 302 #Redirect from / > /coffee
+ - path: /tea
+ action:
+ pass: tea
+ - path: /coffee
+ action:
+ pass: coffee
+```
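+
+A hedged deployment sketch, assuming you saved the Cafe Demo manifests (`cafe.yaml`, `cafe-secret.yaml`) from the repo above, plus the `cafe-virtualserver.yaml` shown here:
+
+```bash
+# Deploy the coffee and tea pods and services
+kubectl apply -f cafe.yaml
+# Deploy the TLS secret, then the VirtualServer with the required redirects
+kubectl apply -f cafe-secret.yaml
+kubectl apply -f cafe-virtualserver.yaml
+
+# The VirtualServer should report its STATE as Valid
+kubectl get virtualserver cafe-vs
+```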
+
+
+
+## Linux VM or bare-metal LB Server
+
+This is a standard Linux OS system, based on the Linux Distro and Technical Specs required for Nginx Plus, which can be found here: https://docs.nginx.com/nginx/technical-specs/
+
+This installation guide followed the "Installation of Nginx Plus on CentOS/RedHat/Oracle" steps for installing Nginx Plus.
+
+>NOTE: This solution will not work with Nginx Open Source, as Open Source does not have the API that is used in this Solution. Installation on unsupported Distros is not recommended or supported.
+
+
+
+## Nginx Plus LB Server
+
+This is the configuration required for the LB Server, external to the cluster. It must be configured as follows:
+
+- Move the Nginx default Welcome page from port 80 to port 8080. Port 80 will be used by the stream context, instead of the http context.
+- API write access enabled on port 9000.
+- Plus Dashboard enabled, used for testing, monitoring, and visualization of the solution working.
+- The `Stream` context is enabled, for TCP load balancing.
+- Stream context is configured.
+
+After the new installation of Nginx Plus, make the following configuration changes:
+
+- Change Nginx's http default server to port 8080. See the included `default.conf` file. After reloading nginx, the default `Welcome to Nginx` page will be located at http://localhost:8080.
+
+- Use the dashboard.conf file provided. It will enable the /api endpoint, change the port to 9000, and provide access to the Plus dashboard. Place this file in the /etc/nginx/conf.d folder, and reload nginx. The Plus dashboard is now accessible at `http://<lb-server-ip>:9000/dashboard.html`. It should look similar to this:
+
+![Nginx Plus Dashboard](media/nginxlb-dashboard.png)
+
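+For reference, the dashboard.conf contents are expected to be similar to this minimal sketch (assumed content, based on the standard Nginx Plus api/dashboard configuration; use the file provided with this Solution):
+
+```bash
+# dashboard.conf - assumed sketch
+server {
+    listen 9000;
+    location /api {
+        api write=on;                  # read-write access to the Plus API
+    }
+    location = /dashboard.html {
+        root /usr/share/nginx/html;    # dashboard page ships with Nginx Plus
+    }
+}
+```
+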
+- Create a new folder for the stream config .conf files. /etc/nginx/stream was used in this Solution.
+
+```bash
+mkdir /etc/nginx/stream
+```
+
+- Create 2 new `STATE` files for Nginx. These are used to back up the configuration, in case Nginx restarts/reloads.
+
+ Nginx State Files Required for Upstreams
+ - state file /var/lib/nginx/state/nginx-lb-http.state
+ - state file /var/lib/nginx/state/nginx-lb-https.state
+
+```bash
+touch /var/lib/nginx/state/nginx-lb-http.state
+touch /var/lib/nginx/state/nginx-lb-https.state
+```
+
+- Enable the `stream` context for Nginx, which provides TCP load balancing. See the included nginx.conf file. Notice that the stream context is no longer commented out, the new folder is included, and a new stream.log logfile is used to track requests/responses.
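+
+For reference, the relevant stanza in nginx.conf would look similar to this sketch (a hedged example; the provided nginx.conf is authoritative):
+
+```bash
+# nginx.conf - stream stanza sketch
+stream {
+    include /etc/nginx/stream/*.conf;    # pick up the Solution's stream configs
+    log_format stream '$remote_addr [$time_local] $protocol '
+                      '$status $bytes_sent $bytes_received $session_time';
+    access_log /var/log/nginx/stream.log stream;
+}
+```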
+
+- Configure Nginx Stream for TCP load balancing for this Solution, using the `nginxk8slb.conf` file shown below. Place this file in the /etc/nginx/stream folder.
+
+```bash
+# NginxK8sLB Stream configuration, for L4 load balancing
+# Chris Akker, Jan 2023
+# TCP Proxy and load balancing block
+# Nginx Kubernetes Loadbalancer
+# State File for persistent reloads/restarts
+# Health Check Match example for cafe.example.com
+#
+#### nginxk8slb.conf
+
+ upstream nginx-lb-http {
+ zone nginx-lb-http 256k;
+ state /var/lib/nginx/state/nginx-lb-http.state;
+ }
+
+ upstream nginx-lb-https {
+ zone nginx-lb-https 256k;
+ state /var/lib/nginx/state/nginx-lb-https.state;
+ }
+
+ server {
+ listen 80;
+ status_zone nginx-lb-http;
+ proxy_pass nginx-lb-http;
+ health_check match=cafe;
+ }
+
+ server {
+ listen 443;
+ status_zone nginx-lb-https;
+ proxy_pass nginx-lb-https;
+ health_check match=cafe;
+ }
+
+ match cafe {
+ send "GET cafe.example.com/ HTTP/1.0\r\n";
+ expect ~ "30*";
+ }
+
+```
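+
+Validate and reload Nginx after placing the file:
+
+```bash
+nginx -t && nginx -s reload
+```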
+
+
+
+## Nginx Kubernetes Loadbalancer Controller
+
+
+
+This is the new Controller, which watches the k8s environment for changes to the nginx-ingress Service object, and sends API updates to the Nginx LB Server when they occur. It requires only three things:
+
+- New kubernetes namespace and RBAC
+- NKL ConfigMap, to configure the Controller
+- NKL Deployment, to deploy and run the Controller
+
+Create the new K8s namespace:
+
+```bash
+kubectl create namespace nkl
+```
+
+Apply the manifests for the Secret, ServiceAccount, ClusterRole, and ClusterRoleBinding:
+
+```bash
+kubectl apply -f secret.yaml -f serviceaccount.yaml -f clusterrole.yaml -f clusterrolebinding.yaml
+```
+
+Modify the ConfigMap manifest to match your Network environment. Change the `nginx-hosts` IP address to match your Nginx LB Server IP. If you have 2 or more LB Servers, separate them with a comma. Keep the port number for the Plus API endpoint, and the `/api` URL as shown.
+
+```yaml
+
+apiVersion: v1
+kind: ConfigMap
+data:
+ nginx-hosts:
+ "http://10.1.1.4:9000/api,http://10.1.1.5:9000/api" # change IP(s) to match Nginx LB Server(s)
+metadata:
+ name: nkl-config
+ namespace: nkl
+
+```
+
+Apply the updated ConfigMap:
+
+```bash
+kubectl apply -f nkl-configmap.yaml
+```
+
+Deploy the NKL Controller:
+
+```bash
+kubectl apply -f nkl-deployment.yaml
+```
+
+Check to see if the NKL Controller is running with the updated ConfigMap:
+
+```bash
+kubectl get pods -n nkl
+```
+```bash
+kubectl describe cm nkl-config -n nkl
+```
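+
+The pod listing should look similar to this example (the pod name hash and age shown are hypothetical):
+
+```bash
+# Hypothetical output - your pod hash and age will differ
+NAME                              READY   STATUS    RESTARTS   AGE
+nkl-deployment-5f4c899fd6-tqmlt   1/1     Running   0          30s
+```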
+
+The status should show "Running", and the nginx-hosts entry should show your LB Server IP:Port/api.
+
+
+
+To make it easy to watch the NKL controller log messages, add the following bash alias:
+
+```bash
+alias nkl-follow-logs='kubectl -n nkl get pods | grep nkl-deployment | cut -f1 -d" " | xargs kubectl logs --follow -n nkl'
+```
+
+Using a Terminal, watch the NKL Controller logs:
+
+```bash
+nkl-follow-logs
+```
+
+Leave this Terminal window open, so you can watch the log messages!
+
+Create the NKL-compatible NodePort Service, using the `nodeport-nkl.yaml` manifest provided:
+
+```bash
+kubectl apply -f nodeport-nkl.yaml
+```
+
+Verify that the `nginx-ingress` NodePort Service is properly defined:
+
+```bash
+kubectl get svc nginx-ingress -n nginx-ingress
+```
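+
+The two `nkl-` port names should appear exactly as defined in the manifest. One way to list them with their assigned NodePorts (a hedged jsonpath example):
+
+```bash
+kubectl get svc nginx-ingress -n nginx-ingress \
+  -o jsonpath='{range .spec.ports[*]}{.name}{" -> nodePort "}{.nodePort}{"\n"}{end}'
+```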
+
+![NKL NodePort Service](media/nkl-nodeport.png)
+
+
+
+
+## Testing the Solution
+
+When you are finished, the Nginx Plus Dashboard on the LB Server should look similar to the following image:
+
+![Nginx Plus Dashboard - Upstreams](media/nginxlb-upstreams.png)
+
+Important items for reference:
+- Orange are the upstream server blocks, from the `/etc/nginx/stream/nginxk8slb.conf` file.
+- Blue is the IP:Port of the NodePort Service for http.
+- Indigo is the IP:Port of the NodePort Service for https.
+
+>Note: In this example, there is a 3-Node K8s cluster, with one Control Node, and 2 Worker Nodes. The NKL Controller only configures `Worker Node` IP addresses, which are:
+- 10.1.1.8
+- 10.1.1.10
+
+
+Configure DNS, or the local hosts file, to point cafe.example.com at the Nginx LB Server IP Address. In this example:
+
+```bash
+
+cat /etc/hosts
+
+10.1.1.4 cafe.example.com
+
+```
+
+Open a browser tab to cafe.example.com. It should redirect to https://cafe.example.com/coffee.
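+
+You can run the same checks from a terminal with curl (assuming the hosts entry above; `-k` skips validation of the demo's self-signed certificate):
+
+```bash
+# Expect a 302 redirect from / to /coffee
+curl -k -I https://cafe.example.com/
+
+# Expect a 200 response code from the coffee service
+curl -k -s -o /dev/null -w "%{http_code}\n" https://cafe.example.com/coffee
+```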
+
+The Dashboard's `TCP/UDP Upstreams Connection counters` will increase as you refresh the browser page.
+
+Using a Terminal, delete the `nginx-ingress` NodePort Service definition:
+
+```bash
+kubectl delete -f nodeport-nkl.yaml
+```
+
+Now the `nginx-ingress` Service is gone, and the upstream list will be empty in the Dashboard.
+
+![Dashboard - empty upstreams](media/nkl-no-nodeport.png)
+
+The NKL log messages confirm the deletion of the NodePorts:
+
+![NKL logs - NodePorts deleted](media/nkl-logs-deleted.png)
+
+If you refresh the cafe.example.com browser page, it will time out. There are NO upstreams for Nginx to send the request to!
+
+Add the `nginx-ingress` Service back to the cluster:
+
+```bash
+kubectl apply -f nodeport-nkl.yaml
+```
+
+Verify the nginx-ingress Service is re-created. Notice that the Port Numbers have changed!
+
+The NKL Controller detects this change, and modifies the upstreams. The Dashboard will show you the new Port numbers, matching the new NodePort definitions. The NKL logs show these messages, confirming the changes:
+
+![NKL logs - NodePorts created](media/nkl-logs-created.png)
+
+
+
+This completes the Testing section.
+
+
+
diff --git a/docs/NginxK8sLBcontroller-Overview-V1.pptx b/docs/NginxK8sLBcontroller-Overview-V1.pptx
deleted file mode 100644
index e9bc209..0000000
Binary files a/docs/NginxK8sLBcontroller-Overview-V1.pptx and /dev/null differ
diff --git a/docs/NginxKubernetesLoadbalancer.md b/docs/NginxKubernetesLoadbalancer.md
index a23063a..9d57295 100644
--- a/docs/NginxKubernetesLoadbalancer.md
+++ b/docs/NginxKubernetesLoadbalancer.md
@@ -15,7 +15,7 @@
- This is will synchronize the K8s Service Endpoint list, with the Nginx LB server's Upstream block server list.
- The primary use case is for tracking the NodePort IP:Port definitions for the Nginx Ingress Controller's `nginx-ingress Service`.
- With the NginxPlus Server located external to the K8s cluster, this new controller LB function would provide an alternative TCP "Load Balancer Service" for On Premises k8s clusters, which do not have access to a Cloud providers "Service Type LoadBalancer".
-- Make the solution a native Kubernetes Component, configured and managed with standard K8s tools.
+- The solution works as a native Kubernetes Controller object, configured and managed with standard K8s tools.
@@ -27,7 +27,7 @@ When using a Cloud Provider's Loadbalancer Service Type, it provides 3 basic fun
1. Public IP address allocation, visible from the Internet
2. DNS record management for this Public IP (usually A records for FQDNs)
-3. TCP loadbalancing, from the PublicIP:wellknownports, to the NodePort:highnumberports of the cluster nodes.
+3. TCP loadbalancing, from the PublicIP:well-known-ports, to the NodePort:high-number-ports of the cluster nodes.
This is often called "NLB", a term used in AWS for Network Load Balancer, but functions nearly identical in all Public Cloud Provider networks. It is not actually a component of K8s, rather, it is a service provided by the Cloud Providers SDN (Software Defined Network), but is managed by the user with K8s Service Type LoadBalancer definitions/declarations.
@@ -43,7 +43,7 @@ Note: This solution is not for Cloud-based K8s clusters, it is only for On Premi
-
+
@@ -146,7 +146,7 @@ Preface - Define access parameters for NKL Controller to communicate with Nginx
-Here are some examples of using cURL to the NginxPlus API to control Upstream server blocks:
+Here are some examples of using cURL to the Nginx Plus API to control Upstream server blocks:
@@ -189,7 +189,7 @@ curl -X PATCH -d '{ "drain": true }' -s 'http://172.16.1.15:9000/api/4/stream/up
Response is:
{"id":2,"server":"127.0.0.1:8083","weight":1,"max_conns":0,"max_fails":1,"fail_timeout":"10s","slow_start":"0s","route":"","backup":false,"down":false,"drain":true}
-Note: During recent testing with R28 and API version 8, the Drain command was 404 - not found.
+Note: During recent testing with R28 and API version 8, the Drain command returned 404 Not Found for Stream Upstreams. According to the docs, drain is only supported on HTTP Upstreams (to be verified).
To `CHANGE the LB WEIGHT` of an Upstream Server with ID = 2:
curl -X PATCH -d '{ "weight": 3 }' -s 'http://172.16.1.15:9000/api/4/stream/upstreams/nginx-lb-http/servers/2'
@@ -236,26 +236,22 @@ Nginx Upstream API examples: http://nginx.org/en/docs/http/ngx_http_api_module.
## Sample NginxPlus LB Server configuration ( server and upstream blocks )
```bash
-# NginxLB Stream configuration, for TCP load balancing
+# NginxK8sLB Stream configuration, for L4 load balancing
# Chris Akker, Jan 2023
# TCP Proxy and load balancing block
# Nginx Kubernetes Loadbalancer
-# backup servers allow Nginx to start
-# State file used to preserve config across restarts
+# State File for persistent reloads/restarts
+# Health Check Match example for cafe.example.com
#
-#### nginxlb.conf
+#### nginxk8slb.conf
upstream nginx-lb-http {
zone nginx-lb-http 256k;
- #placeholder
- #server 1.1.1.1:32080 backup;
state /var/lib/nginx/state/nginx-lb-http.state;
}
upstream nginx-lb-https {
zone nginx-lb-https 256k;
- #placeholder
- #server 1.1.1.1:32443 backup;
state /var/lib/nginx/state/nginx-lb-https.state;
}
@@ -263,18 +259,24 @@ Nginx Upstream API examples: http://nginx.org/en/docs/http/ngx_http_api_module.
listen 80;
status_zone nginx-lb-http;
proxy_pass nginx-lb-http;
+ health_check match=cafe;
}
-
+
server {
listen 443;
status_zone nginx-lb-https;
proxy_pass nginx-lb-https;
+ health_check match=cafe;
+ }
+
+ match cafe {
+ send "GET cafe.example.com/ HTTP/1.0\r\n";
+ expect ~ "30*";
}
-#Sample Nginx State for Upstreams
-# configuration file /var/lib/nginx/state/nginx-lb-http.state:
-server 1.1.1.1:32080 backup down;
+# Nginx State Files Required for Upstreams
+# state file /var/lib/nginx/state/nginx-lb-http.state
-# configuration file /var/lib/nginx/state/nginx-lb-https.state:
-server 1.1.1.1:30443 backup down;
+# state file /var/lib/nginx/state/nginx-lb-https.state
+```
diff --git a/docs/cafe-virtualserver.yaml b/docs/cafe-virtualserver.yaml
new file mode 100644
index 0000000..bacd37b
--- /dev/null
+++ b/docs/cafe-virtualserver.yaml
@@ -0,0 +1,55 @@
+#Example virtual server with routes for Cafe Demo
+#For NKL Solution, redirects required for LB Server health checks
+#Chris Akker, Jan 2023
+#
+apiVersion: k8s.nginx.org/v1
+kind: VirtualServer
+metadata:
+ name: cafe-vs
+spec:
+ host: cafe.example.com
+ tls:
+ secret: cafe-secret
+ redirect:
+ enable: true #Redirect from http > https
+ code: 301
+ upstreams:
+ - name: tea
+ service: tea-svc
+ port: 80
+ lb-method: round_robin
+ slow-start: 20s
+ healthCheck:
+ enable: true
+ path: /tea
+ interval: 20s
+ jitter: 3s
+ fails: 5
+ passes: 2
+ connect-timeout: 30s
+ read-timeout: 20s
+ - name: coffee
+ service: coffee-svc
+ port: 80
+ lb-method: round_robin
+ healthCheck:
+ enable: true
+ path: /coffee
+ interval: 10s
+ jitter: 3s
+ fails: 3
+ passes: 2
+ connect-timeout: 30s
+ read-timeout: 20s
+ routes:
+ - path: /
+ action:
+ redirect:
+ url: https://cafe.example.com/coffee
+ code: 302 #Redirect from / > /coffee
+ - path: /tea
+ action:
+ pass: tea
+ - path: /coffee
+ action:
+ pass: coffee
diff --git a/docs/media/nginxlb-dashboard.png b/docs/media/nginxlb-dashboard.png
new file mode 100644
index 0000000..bbdbc71
Binary files /dev/null and b/docs/media/nginxlb-dashboard.png differ
diff --git a/docs/media/nginxlb-nklv1.png b/docs/media/nginxlb-nklv1.png
deleted file mode 100644
index 1539c6c..0000000
Binary files a/docs/media/nginxlb-nklv1.png and /dev/null differ
diff --git a/docs/media/nginxlb-nklv2.png b/docs/media/nginxlb-nklv2.png
new file mode 100644
index 0000000..b9830d2
Binary files /dev/null and b/docs/media/nginxlb-nklv2.png differ
diff --git a/docs/media/nginxlb-upstreams.png b/docs/media/nginxlb-upstreams.png
new file mode 100644
index 0000000..bd9d14c
Binary files /dev/null and b/docs/media/nginxlb-upstreams.png differ
diff --git a/docs/media/nkl-background.png b/docs/media/nkl-background.png
new file mode 100644
index 0000000..741415b
Binary files /dev/null and b/docs/media/nkl-background.png differ
diff --git a/docs/media/nkl-logs-created.png b/docs/media/nkl-logs-created.png
new file mode 100644
index 0000000..a803746
Binary files /dev/null and b/docs/media/nkl-logs-created.png differ
diff --git a/docs/media/nkl-logs-deleted.png b/docs/media/nkl-logs-deleted.png
new file mode 100644
index 0000000..3a7074e
Binary files /dev/null and b/docs/media/nkl-logs-deleted.png differ
diff --git a/docs/media/nkl-no-nodeport.png b/docs/media/nkl-no-nodeport.png
new file mode 100644
index 0000000..5287748
Binary files /dev/null and b/docs/media/nkl-no-nodeport.png differ
diff --git a/docs/media/nkl-nodeport.png b/docs/media/nkl-nodeport.png
new file mode 100644
index 0000000..b0a4586
Binary files /dev/null and b/docs/media/nkl-nodeport.png differ
diff --git a/docs/nginxlb.conf b/docs/nginxlb.conf
deleted file mode 100644
index c59b6bf..0000000
--- a/docs/nginxlb.conf
+++ /dev/null
@@ -1,44 +0,0 @@
-# NginxK8sLB Stream configuration, for L4 load balancing
-# Chris Akker, Jan 2023
-# TCP Proxy and load balancing block
-# Nginx Kubernetes Loadbalancer
-# Example health check match for cafe.example.com
-#
-#### nginxk8slb.conf
-
- upstream nginx-lb-http {
- zone nginx-lb-http 256k;
- state /var/lib/nginx/state/nginx-lb-http.state;
- }
-
- upstream nginx-lb-https {
- zone nginx-lb-https 256k;
- state /var/lib/nginx/state/nginx-lb-https.state;
- }
-
- server {
- listen 80;
- status_zone nginx-lb-http;
- proxy_pass nginx-lb-http;
- health_check match=cafe;
- }
-
- server {
- listen 443;
- status_zone nginx-lb-https;
- proxy_pass nginx-lb-https;
- health_check match=cafe;
- }
-
- match cafe {
- send "GET cafe.example.com/ HTTP/1.0\r\n";
- expect ~ "30*";
- }
-
-
-#Sample Nginx State for Upstreams
-# configuration file /var/lib/nginx/state/nginx-lb-http.state:
-server 1.1.1.1:32080 backup down;
-
-# configuration file /var/lib/nginx/state/nginx-lb-https.state:
-server 1.1.1.1:30443 backup down;
diff --git a/docs/nodeport-nkl.yaml b/docs/nodeport-nkl.yaml
index 9ebcd92..01179f0 100644
--- a/docs/nodeport-nkl.yaml
+++ b/docs/nodeport-nkl.yaml
@@ -1,5 +1,5 @@
# NKL Nodeport Service file
-# NodePort name must be in the format of
+# NodePort port name must be in the format of
# nkl-
# Chris Akker, Jan 2023
#
diff --git a/docs/nodeport.yaml b/docs/nodeport.yaml
index 5042188..610f0a8 100644
--- a/docs/nodeport.yaml
+++ b/docs/nodeport.yaml
@@ -1,5 +1,5 @@
# This the default nodeport.yaml manifest for nginx-ingress.
-#The port name MUST be changed to match the new LB Controller.
+# The port name MUST be changed to match the new LB Controller.
# See the new nodeport-nkl.yaml file example.
#
apiVersion: v1
diff --git a/docs/udf-loadtests.md b/docs/udf-loadtests.md
deleted file mode 100644
index 60eb22f..0000000
--- a/docs/udf-loadtests.md
+++ /dev/null
@@ -1,50 +0,0 @@
-## Quick WRK load tests from Ubuntu Jumphost
-## to Nginx LB server
-## and direct to each k8s node
-## using WRK in a container
-
-### 10.1.1.4 is the Nginx LB Server's IP addr
-
-
-
-docker run --rm williamyeh/wrk -t4 -c50 -d2m -H 'Host: cafe.example.com' --timeout 2s https://10.1.1.4/coffee
-Running 2m test @ https://10.1.1.4/coffee
- 4 threads and 50 connections
- Thread Stats Avg Stdev Max +/- Stdev
- Latency 19.73ms 11.26ms 172.76ms 81.04%
- Req/Sec 626.50 103.68 1.03k 75.60%
- 299460 requests in 2.00m, 481.54MB read
-`Requests/sec: 2493.52`
-Transfer/sec: 4.01MB
-
-
-
-## Direct to knode1
-
-ubuntu@k8-jumphost:~$ docker run --rm williamyeh/wrk -t4 -c50 -d2m -H 'Host: cafe.example.com' --timeout 2s https://10.1.1.8:31269/coffee
-Running 2m test @ https://10.1.1.8:31269/coffee
- 4 threads and 50 connections
- Thread Stats Avg Stdev Max +/- Stdev
- Latency 17.87ms 10.63ms 151.45ms 80.16%
- Req/Sec 698.98 113.22 1.05k 75.67%
- 334080 requests in 2.00m, 537.22MB read
-`Requests/sec: 2782.35`
-Transfer/sec: 4.47MB
-
-
-
-## Direct to knode2
-
-ubuntu@k8-jumphost:~$ docker run --rm williamyeh/wrk -t4 -c50 -d2m -H 'Host: cafe.example.com' --timeout 2s https://10.1.1.10:31269/coffee
-Running 2m test @ https://10.1.1.10:31269/coffee
- 4 threads and 50 connections
- Thread Stats Avg Stdev Max +/- Stdev
- Latency 17.62ms 10.01ms 170.99ms 80.32%
- Req/Sec 703.96 115.07 1.09k 74.17%
- 336484 requests in 2.00m, 541.41MB read
-`Requests/sec: 2801.89`
-Transfer/sec: 4.51MB
-
-
-
-Note: Slight decrease in Proxy vs Direct.