
Create an ingress controller for an internal virtual network in Azure Kubernetes Service (AKS)


An ingress controller is a piece of software that provides reverse proxying, configurable traffic routing, and TLS termination for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services. By using an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster.

This article shows you how to deploy the NGINX ingress controller in an Azure Kubernetes Service (AKS) cluster. The ingress controller is configured on an internal, private virtual network and IP address; no external access is allowed. Two applications are then run in the AKS cluster, each of which is accessible over the single IP address.

This article uses Helm 3 to install the NGINX ingress controller. Make sure you're using the latest release of Helm and have access to the ingress-nginx Helm repository. For more information on configuring and using Helm, see Install applications with Helm in Azure Kubernetes Service (AKS).
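As a sketch, adding the upstream ingress-nginx repository to Helm might look like the following (the repository URL is the one published by the Kubernetes project):

```shell
# Add the ingress-nginx repository and refresh the local chart index
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```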

This article also requires that you're running Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see Install Azure CLI.

Create an ingress controller

By default, an NGINX ingress controller is created with a dynamic public IP address assigned. A common configuration requirement is to use an internal private network and an IP address. This approach allows you to restrict access to your services to internal users so that external access is not possible.

Create a file named internal-ingress.yaml using the following example manifest. In this example, an internal IP address is assigned to loadBalancerIP. Provide your own internal IP address for use by the ingress controller. Make sure this IP address isn't already in use within your virtual network. Additionally, if you're using an existing virtual network and subnet, you must configure your AKS cluster with the correct permissions to manage the virtual network and subnet. For more information, see Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS) or Configure Azure CNI networking in Azure Kubernetes Service (AKS).
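As an illustration, internal-ingress.yaml could look like the following. These values follow the ingress-nginx Helm chart's schema, and 10.240.0.42 is only a placeholder address:

```yaml
controller:
  service:
    # Placeholder internal IP; replace with an unused address in your subnet
    loadBalancerIP: 10.240.0.42
    annotations:
      # Tells Azure to create an internal (private) load balancer
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
```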

Now deploy the nginx-ingress chart with Helm. To use the manifest file created in the previous step, add the -f internal-ingress.yaml parameter. For added redundancy, two replicas of the NGINX ingress controller are deployed with the --set controller.replicaCount parameter. To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.

The ingress controller must also be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the --set nodeSelector parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node.


The following example uses a Kubernetes namespace named ingress-basic for the ingress resources. Specify a namespace for your own environment as needed. If your AKS cluster isn't Kubernetes role-based access control (RBAC) enabled, add --set rbac.create=false to the Helm commands.
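Putting these options together, the installation could be sketched as follows. The release name, repository alias, and exact --set keys are assumptions and can vary between chart versions:

```shell
# Create a namespace for the ingress resources
kubectl create namespace ingress-basic

# Deploy the ingress-nginx chart with the internal load balancer settings
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace ingress-basic \
    -f internal-ingress.yaml \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux
```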


To enable client source IP preservation for requests to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install command. The client source IP is stored in the request header under X-Forwarded-For. When you're using an ingress controller with client source IP preservation enabled, TLS pass-through won't work.

When the Kubernetes load balancer service is created for the NGINX ingress controller, your internal IP address is assigned to it. To get the IP address, use the kubectl get service command.
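Assuming the ingress-basic namespace used earlier, the service and its IP address can be listed like this (the internal address appears in the EXTERNAL-IP column once assigned):

```shell
kubectl get service --namespace ingress-basic
```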

It takes a few minutes for the IP address to be assigned to the service.

Ingress rules have not yet been created, so the NGINX ingress controller's default 404 page will appear when you navigate to the internal IP address. Ingress rules are configured in the following steps.

Run demo applications

To see the ingress controller in action, run two demo applications in your AKS cluster. In this example, kubectl apply is used to deploy multiple instances of a simple Hello world application.

Create the file aks-helloworld.yaml and copy and paste the following example YAML into it:
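As a sketch, aks-helloworld.yaml could contain a Deployment and ClusterIP Service built on the public mcr.microsoft.com/azuredocs/aks-helloworld sample image; the names and TITLE value here are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld
  template:
    metadata:
      labels:
        app: aks-helloworld
    spec:
      containers:
      - name: aks-helloworld
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld
```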

Create the file ingress-demo.yaml, and copy and paste the following sample YAML code into it:
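A second, near-identical manifest can serve as ingress-demo.yaml; only the names and the TITLE environment variable differ (both are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-demo
  template:
    metadata:
      labels:
        app: ingress-demo
    spec:
      containers:
      - name: ingress-demo
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "AKS Ingress Demo"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-demo
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: ingress-demo
```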

Run the two demo applications with kubectl apply:
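Assuming the two manifest files created above and the ingress-basic namespace:

```shell
kubectl apply -f aks-helloworld.yaml --namespace ingress-basic
kubectl apply -f ingress-demo.yaml --namespace ingress-basic
```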

Create an ingress route

Both applications are now running in your Kubernetes cluster. To route traffic to each application, create a Kubernetes ingress resource. The ingress resource configures the rules that route traffic to one of the two applications.

In the following example, traffic to the ingress controller's internal IP address is routed to the service named aks-helloworld. Traffic to the /hello-world-two path is routed to the ingress-demo service.

Create a file (for example, hello-world-ingress.yaml) and paste in the following example YAML.
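A sketch of such an ingress resource using the networking.k8s.io/v1 API is shown below; the resource name, regex paths, and rewrite-target annotation follow common NGINX ingress conventions and are assumptions here:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # Rewrite the matched sub-path ($2) to the backend's root
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-world-two(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: ingress-demo
            port:
              number: 80
      - path: /(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: aks-helloworld
            port:
              number: 80
```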

Create the ingress resource with the kubectl apply command.
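For example, assuming the illustrative file name hello-world-ingress.yaml and the ingress-basic namespace:

```shell
kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
```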


Test the ingress controller

To test the routes for the ingress controller, browse to the two applications with a web client. If needed, you can quickly test this internal-only functionality from a pod in the AKS cluster. Create a test pod and attach a terminal session to it:
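One way to do this is a disposable Debian pod with an interactive shell (the pod name here is illustrative):

```shell
# --rm deletes the pod when the session ends
kubectl run -it --rm aks-ingress-test --image=debian --namespace ingress-basic
```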

Install curl in the pod with apt-get:
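Inside the pod's shell:

```shell
apt-get update && apt-get install -y curl
```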

Now access the address of your Kubernetes ingress controller with curl, for example http://10.240.0.42. Provide your own internal IP address that you specified when you deployed the ingress controller in the first step of this article.
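For instance, with 10.240.0.42 standing in for your controller's internal IP address:

```shell
# Default route (/) serves the first demo application
curl -L http://10.240.0.42

# The /hello-world-two path serves the second demo application
curl -L http://10.240.0.42/hello-world-two
```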

No additional path was provided with the address, so the ingress controller defaults to the / route, and the first demo application is returned.

Now add the /hello-world-two path to the address, such as http://10.240.0.42/hello-world-two. The second demo application, with its custom title, is returned.

Clean up resources

This article used Helm to install the ingress components. When you deploy a Helm chart, a number of Kubernetes resources are created. These resources include pods, deployments, and services. To clean up these resources, you can either delete the entire sample namespace or the individual resources.

Delete the sample namespace and all resources

Use the kubectl delete namespace command with the namespace name to delete the entire sample namespace. All the resources in the namespace are deleted.
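Assuming the ingress-basic namespace used throughout this article:

```shell
kubectl delete namespace ingress-basic
```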

Delete individual resources

Alternatively, a more granular approach is to delete the individual resources, which offers more control. List the Helm releases with the helm list command.
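For example, scoped to the ingress-basic namespace used above:

```shell
helm list --namespace ingress-basic
```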

Look for releases named nginx-ingress and aks-helloworld, as shown in the following example:

Uninstall the releases with the helm uninstall command.

The following example uninstalls the NGINX ingress deployment.
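Assuming the release name nginx-ingress and the ingress-basic namespace:

```shell
helm uninstall nginx-ingress --namespace ingress-basic
```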

Next, remove the two sample applications:
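Since the demo applications were created with kubectl apply from the manifest files, they can be deleted the same way:

```shell
kubectl delete -f aks-helloworld.yaml --namespace ingress-basic
kubectl delete -f ingress-demo.yaml --namespace ingress-basic
```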

Remove the ingress route that forwarded traffic to the sample apps:
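For example, if the ingress resource was given the illustrative name hello-world-ingress:

```shell
kubectl delete ingress hello-world-ingress --namespace ingress-basic
```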

Finally, you can delete the namespace itself. To do so, use the kubectl delete namespace command with the namespace name:
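Assuming the ingress-basic namespace:

```shell
kubectl delete namespace ingress-basic
```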

Next Steps

This article used several external components in AKS. You can find more information about these components on the following project pages:
