Azure subscription
If you have an Azure subscription, you can use it for this challenge.
Please also authenticate your Azure CLI by running the command below on your machine and following the instructions.
az login
Kubernetes basics
There is an assumption of some prior knowledge of Kubernetes and its concepts. If you are new to Kubernetes, the following documentation takes you quickly through the basic concepts required to understand how it works: https://aka.ms/LearnAKS
If you are a more experienced Kubernetes developer or administrator, have a look at the best practices guide: https://aka.ms/aks/bestpractices
Tasks
Deploy Kubernetes to Azure, using the CLI or the Azure portal, with the latest Kubernetes version available in AKS
Get the latest available Kubernetes version
```shell
region=<targeted AKS region>

az aks get-versions -l $region -o table

kubernetesVersionLatest=$(az aks get-versions -l ${region} --query 'orchestrators[-1].orchestratorVersion' -o tsv)
```
Create a Resource Group
```shell
az group create --name akschallenge --location $region
```
Create AKS using the latest version and enable the monitoring addon
```shell
az aks create --resource-group akschallenge --name <unique-aks-cluster-name> \
  --enable-addons monitoring --kubernetes-version $kubernetesVersionLatest \
  --generate-ssh-keys --location eastus
```
Important: If you are using Service Principal authentication, for example in a lab environment, you’ll need to use an alternate command to create the cluster with your existing Service Principal, passing in the Application Id and the Application Secret Key.

```shell
az aks create --resource-group akschallenge --name <unique-aks-cluster-name> \
  --enable-addons monitoring --kubernetes-version $kubernetesVersionLatest \
  --generate-ssh-keys --location eastus \
  --service-principal APP_ID --client-secret "APP_SECRET"
```
Install the Kubernetes CLI
az aks install-cli
Note: If you’re running this in the Azure Cloud Shell, you may receive a “Permission Denied” error because the Kubernetes CLI (kubectl) is already installed. If that is the case, just go to the next step.
Ensure you and your colleagues can connect to the cluster using kubectl
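One way to verify connectivity, assuming the resource group name akschallenge and the cluster name placeholder used in the commands above, is to merge the cluster credentials into your kubeconfig and list the nodes:

```shell
# Merge the AKS cluster credentials into ~/.kube/config
az aks get-credentials --resource-group akschallenge --name <unique-aks-cluster-name>

# Confirm that kubectl can reach the cluster
kubectl get nodes
```

Colleagues can run the same az aks get-credentials command from their own machines, as long as they have access to the resource group.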
Resources
- https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
- https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create
- https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal
Deploy MongoDB
You need to deploy MongoDB in a way that is scalable and production ready. There are a couple of ways to do so.
Hints
- Be careful with the authentication settings when creating MongoDB. It is recommended that you create a standalone username/password and database.
- Important: If you install MongoDB using Helm and then delete the release, the MongoDB data and configuration persist in a Persistent Volume Claim. If you redeploy using the same release name, authentication will fail because the new release’s credentials will not match the stale configuration. If you need to delete the Helm release and start over, make sure you also delete the Persistent Volume Claims it created. Find those claims using kubectl get pvc.
Tasks
Deploy an instance of MongoDB to your cluster. The application expects a database called akschallenge
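One possible sketch using the stable Helm chart linked in the resources below (Helm v2 syntax; the release name orders-mongo and the username/password shown here are assumptions, and must match whatever you later configure in the Order Capture API):

```shell
# Install MongoDB with a dedicated user and the akschallenge database
helm install stable/mongodb --name orders-mongo \
  --set mongodbUsername=orders-user,mongodbPassword=orders-password,mongodbDatabase=akschallenge
```

With this release name, the in-cluster hostname of the service would be orders-mongo-mongodb.default.svc.cluster.local.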
Resources
- https://docs.microsoft.com/en-us/azure/aks/kubernetes-helm
- https://github.com/helm/charts/tree/master/stable/mongodb#replication
Deploy the Order Capture API
You need to deploy the Order Capture API (azch/captureorder). This requires an external endpoint, exposing the API on port 80 and needs to write to MongoDB.
Container images and source code
In the table below, you will find the Docker container images provided by the development team on Docker Hub as well as their corresponding source code on GitHub.
| Component | Docker Image | Source Code | Build Status |
| --- | --- | --- | --- |
| Order Capture API | azch/captureorder | | |
Environment variables
The Order Capture API requires certain environment variables to properly run and track your progress. Make sure you set those environment variables.
- TEAMNAME="[YourTeamName]" - Track your team’s progress. Use your assigned team name.
- CHALLENGEAPPINSIGHTS_KEY="[AsSpecifiedAtTheEvent]" - Application Insights key, if provided by proctors. This is used to track your team’s progress. If not provided, just delete it.
- MONGOHOST="<hostname of mongodb>" - MongoDB hostname.
- MONGOUSER="<mongodb username>" - MongoDB username.
- MONGOPASSWORD="<mongodb password>" - MongoDB password.
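As a sketch of how an application might consume these variables (a hypothetical helper, not the actual captureorder source code), reading them with the optional key handled separately:

```python
import os

def read_capture_order_config():
    """Read the Order Capture API settings from environment variables.

    CHALLENGEAPPINSIGHTS_KEY is optional, mirroring the note above;
    the team name and MongoDB settings are required.
    """
    return {
        "teamname": os.environ["TEAMNAME"],
        "appinsights_key": os.environ.get("CHALLENGEAPPINSIGHTS_KEY"),  # optional, may be None
        "mongohost": os.environ["MONGOHOST"],
        "mongouser": os.environ["MONGOUSER"],
        "mongopassword": os.environ["MONGOPASSWORD"],
    }
```

Using os.environ[...] for the required variables means a missing setting fails fast with a KeyError instead of silently starting with a broken configuration.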
Hint: The Order Capture API exposes the following endpoint for health-checks:
http://[PublicEndpoint]:[port]/healthz
Tasks
Provision the captureorder deployment and expose a public endpoint
Deployment
Save the YAML below as captureorder-deployment.yaml or download it from captureorder-deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: captureorder
spec:
  selector:
    matchLabels:
      app: captureorder
  replicas: 2
  template:
    metadata:
      labels:
        app: captureorder
    spec:
      containers:
      - name: captureorder
        image: azch/captureorder
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            port: 8080
            path: /healthz
        livenessProbe:
          httpGet:
            port: 8080
            path: /healthz
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        env:
        - name: TEAMNAME
          value: "team-azch"
        #- name: CHALLENGEAPPINSIGHTS_KEY # uncomment and set value only if you've been provided a key
        #  value: ""
        - name: MONGOHOST
          value: "orders-mongo-mongodb.default.svc.cluster.local"
        - name: MONGOUSER
          value: "orders-user"
        - name: MONGOPASSWORD
          value: "orders-password"
        ports:
        - containerPort: 8080
```

And deploy it using:

```shell
kubectl apply -f captureorder-deployment.yaml
```
Verify that the pods are up and running:

```shell
kubectl get pods -l app=captureorder
```
Hint: If the pods are not starting, are not ready, or are crashing, you can view their logs and events using kubectl logs <pod name> and kubectl describe pod <pod name>.
Service
Save the YAML below as captureorder-service.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: captureorder
spec:
  selector:
    app: captureorder
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer
```

And deploy it using:

```shell
kubectl apply -f captureorder-service.yaml
```
Retrieve the External-IP of the Service
Use the command below. Make sure to allow a couple of minutes for the Azure Load Balancer to assign a public IP.
```shell
kubectl get service captureorder -o jsonpath="{.status.loadBalancer.ingress[*].ip}"
```
Ensure orders are successfully written to MongoDB
Send a POST request using Postman or curl to the IP of the service you got from the previous command:

```shell
curl -d '{"EmailAddress": "email@domain.com", "Product": "prod-1", "Total": 100}' \
  -H "Content-Type: application/json" \
  -X POST http://[Your Service Public LoadBalancer IP]/v1/order
```
You should get back the created order ID:

```json
{
  "orderId": "5beaa09a055ed200016e582f"
}
```
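If you prefer to script this check, here is a small sketch of building the request body and parsing the response. The field names are taken from the curl example and response above; actually sending the request over the network is left to you:

```python
import json

def build_order_payload(email, product, total):
    """Build the JSON body the captureorder API expects for POST /v1/order."""
    return json.dumps({"EmailAddress": email, "Product": product, "Total": total})

def parse_order_id(response_body):
    """Extract the created order ID from the API's JSON response."""
    return json.loads(response_body)["orderId"]
```

You can pass the payload to any HTTP client, then feed the raw response body to parse_order_id to confirm an order was created.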
Resources
- https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
- https://kubernetes.io/docs/concepts/services-networking/service/
Deploy the frontend using Ingress
You need to deploy the Frontend (azch/frontend). This requires an external endpoint exposing the website on port 80, and it needs to connect to the Order Capture API’s public IP.
Container images and source code
In the table below, you will find the Docker container images provided by the development team on Docker Hub as well as their corresponding source code on GitHub.
| Component | Docker Image | Source Code | Build Status |
| --- | --- | --- | --- |
| Frontend | azch/frontend | | |
Environment variables
The frontend requires certain environment variables to properly run and track your progress. Make sure you set those environment variables.
CAPTUREORDERSERVICEIP="<public IP of order capture service>"
Tasks
Provision the frontend deployment
Deployment
Save the YAML below as frontend-deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: azch/frontend
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            port: 8080
            path: /
        livenessProbe:
          httpGet:
            port: 8080
            path: /
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        env:
        - name: CAPTUREORDERSERVICEIP
          value: "<public IP of order capture service>"
        ports:
        - containerPort: 8080
```

And deploy it using:

```shell
kubectl apply -f frontend-deployment.yaml
```
Verify that the pods are up and running:

```shell
kubectl get pods -l app=frontend
```
Hint: If the pods are not starting, are not ready, or are crashing, you can view their logs and events using kubectl logs <pod name> and kubectl describe pod <pod name>.
Expose the frontend on a hostname
Instead of accessing the frontend through an IP address, you would like to expose the frontend over a hostname. Explore using Kubernetes Ingress with AKS HTTP Application Routing add-on to achieve this purpose.
When you enable the add-on, it deploys two components: a Kubernetes Ingress controller and an External-DNS controller.
- Ingress controller: The Ingress controller is exposed to the internet by using a Kubernetes service of type LoadBalancer. The Ingress controller watches and implements Kubernetes Ingress resources, which creates routes to application endpoints.
- External-DNS controller: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone using Azure DNS.
Enable the HTTP routing add-on on your cluster
```shell
az aks enable-addons --resource-group akschallenge --name <unique-aks-cluster-name> \
  --addons http_application_routing
```
This will take a few minutes.
Service
Note: Since you’re going to expose the deployment using an Ingress, there is no need for a public IP on the Service, so you can set the Service type to ClusterIP instead of LoadBalancer.

Save the YAML below as frontend-service.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
```

And deploy it using:

```shell
kubectl apply -f frontend-service.yaml
```
Ingress
The HTTP application routing add-on only acts on Ingress resources that are annotated as follows:

```yaml
annotations:
  kubernetes.io/ingress.class: addon-http-application-routing
```
Retrieve your cluster-specific DNS zone name by running the command below:

```shell
az aks show --resource-group akschallenge --name <unique-aks-cluster-name> \
  --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName -o table
```

You should get back something like 9f9c1fe7-21a1-416d-99cd-3543bb92e4c3.eastus.aksapp.io.
Create an Ingress resource that carries the required annotation, and make sure to replace <CLUSTER_SPECIFIC_DNS_ZONE> with the DNS zone name you retrieved from the previous command. Additionally, make sure that the serviceName and servicePort point to the correct values for the Service you deployed previously. Save the YAML below as frontend-ingress.yaml:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: frontend.<CLUSTER_SPECIFIC_DNS_ZONE>
    http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 80
        path: /
```

And create it using:

```shell
kubectl apply -f frontend-ingress.yaml
```
Verify that the DNS records are created
View the logs of the External-DNS pod:

```shell
kubectl logs -f deploy/addon-http-application-routing-external-dns -n kube-system
```
It should say something about updating the A record. It may take a few minutes.
```
time="2019-02-13T01:58:25Z" level=info msg="Updating A record named 'frontend' to '13.90.199.8' for Azure DNS zone 'b3ec7d3966874de389ba.eastus.aksapp.io'."
time="2019-02-13T01:58:26Z" level=info msg="Updating TXT record named 'frontend' to '"heritage=external-dns,external-dns/owner=default"' for Azure DNS zone 'b3ec7d3966874de389ba.eastus.aksapp.io'."
```
You should also be able to find the new records created in the Azure DNS zone for your cluster.
Browse to the public hostname of the frontend and watch as the number of orders change
Once the Ingress is deployed and the DNS records have propagated, you should be able to access the frontend at http://frontend.[cluster_specific_dns_zone], for example http://frontend.9f9c1fe7-21a1-416d-99cd-3543bb92e4c3.eastus.aksapp.io
If it doesn’t work on the first try, give it a few more minutes or try a different browser.