GCP: How is Ingress useful for routing external HTTP(S) traffic to internal services in GKE?

Akarsh Seggemu, M.Sc.
3 min read · May 27, 2024


Ingress is a built-in feature of GKE (Google Kubernetes Engine). You can benefit from it once you understand how to use Ingress with internal and external Application Load Balancers.

External client traffic is routed via Ingress to a load balancer in GKE

I was confused when trying to understand how to route external HTTP(S) traffic to GKE nodes. After reading the GKE documentation guide — Set up an external Application Load Balancer with Ingress — it became clear that this is possible with Ingress. However, the guide did not explain how to secure Ingress with TLS; it only linked to Kubernetes TLS in its “Remarks” section. I wish there had been an example instead of just a link. It would reduce the confusion for users of GCP GKE.

Through trial and error, I found a configuration that secures GKE Ingress.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: akarsh-example-service-ingress
  namespace: akarsh-example
spec:
  defaultBackend:
    service:
      name: akarsh-example-service-loadbalancer
      port:
        number: 80
  tls:
  - secretName: akarsh-example-tls

In the above configuration, we created a resource of kind Ingress. It directs HTTP and HTTPS traffic to port 80 of the Service `akarsh-example-service-loadbalancer`. The TLS certificate and private key are stored in a Kubernetes Secret named `akarsh-example-tls` in the same namespace.
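The `secretName` referenced by the Ingress must point to a Kubernetes Secret of type `kubernetes.io/tls` in the same namespace. A minimal sketch of such a Secret is shown below; the `data` values are placeholders for your actual base64-encoded PEM certificate and key:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: akarsh-example-tls
  namespace: akarsh-example
type: kubernetes.io/tls
data:
  # Base64-encoded PEM certificate and private key (placeholders shown).
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```

In practice, it is easier to create this Secret directly from the certificate and key files with `kubectl create secret tls akarsh-example-tls --cert=tls.crt --key=tls.key -n akarsh-example`.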

For the above to work, we also need a LoadBalancer Service, as follows:

apiVersion: v1
kind: Service
metadata:
  name: akarsh-example-service-loadbalancer
  namespace: akarsh-example
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: akarsh-example-app
  ports:
  - name: tcp-port
    protocol: TCP
    port: 80
    targetPort: 80

In the above configuration, we created a LoadBalancer Service that balances traffic arriving on internal port 80 to target port 80 on the nodes running container images that accept requests on port 80. For more details, refer to the documentation guide — Ingress for internal Application Load Balancers.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: akarsh-example-deployment
  namespace: akarsh-example
  labels:
    app: akarsh-example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: akarsh-example-app
  template:
    metadata:
      labels:
        app: akarsh-example-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

In the above configuration, we created a Deployment with a container image that accepts requests on port 80. The Service finds the Pods using the selector label `app`. For more details, refer to Creating a Deployment.

You should see the nginx welcome page once the above configuration is deployed to GKE. The Ingress will be assigned an external IP. Map that external IP to a DNS entry for your preferred domain, and change the TLS certificate to match the domain. In my example above, I mapped akarsh-example.com to the generated external IP in the IP table in GCP.
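One caveat when mapping DNS to the Ingress IP: the default ephemeral address can change if the Ingress is re-created. GKE supports reserving a global static IP (e.g. with `gcloud compute addresses create`) and referencing it by name on the Ingress via an annotation. A sketch of the relevant metadata is below; the static IP name `akarsh-example-ip` is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: akarsh-example-service-ingress
  namespace: akarsh-example
  annotations:
    # Name of a reserved global static IP (illustrative name; reserve it with
    # `gcloud compute addresses create akarsh-example-ip --global`).
    kubernetes.io/ingress.global-static-ip-name: "akarsh-example-ip"
spec:
  # ... same spec as the Ingress shown earlier ...
```

With this in place, the DNS entry only needs to be created once, since the Ingress keeps the same address across re-deployments.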

If you like my articles, please follow me on Medium, you can also watch my videos on YouTube and you can also support me by buying me a coffee.



Akarsh Seggemu, M.Sc.

IT Team Lead | Post graduate Computer Science from TU Berlin | Telugu writings are found in this account https://medium.com/@akarshseggemu_telugu