Make sure you have installed kOps and kubectl.
## Setup your environment

### AWS
In order to correctly prepare your AWS account for kops, we require that you install the AWS CLI tools and have API credentials for an account that has the permissions to create a new IAM user for kops later in the guide.
Once you've installed the AWS CLI tools and have correctly set up your system to use the official AWS methods of registering security credentials as defined here, we'll be ready to run kops, as it uses the Go AWS SDK.
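For example, a quick sanity check that your credentials are registered (a minimal sketch; your credential setup may differ):

```bash
# Configure default credentials and region interactively
aws configure

# Verify the credentials work (requires IAM read access)
aws iam list-users
```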
### Setup IAM user
In order to build clusters within AWS we'll create a dedicated IAM user for kops. This user requires API credentials in order to use kops. Create the user, and credentials, using the AWS console.
The kops user will require the following IAM permissions to function properly:
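In recent kOps releases these correspond to the following AWS managed policies (verify against the kOps documentation for your version; newer releases also require AmazonSQSFullAccess and AmazonEventBridgeFullAccess):

- AmazonEC2FullAccess
- AmazonRoute53FullAccess
- AmazonS3FullAccess
- IAMFullAccess
- AmazonVPCFullAccess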
You can create the kOps IAM user from the command line using the following:
You should record the SecretAccessKey and AccessKeyID in the returned JSON output, and then use them below:
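A sketch of that flow with the AWS CLI, using a dedicated kops group (policy names as listed above):

```bash
# Create a dedicated group and attach the required managed policies
aws iam create-group --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

# Create the user, add it to the group, and generate API credentials
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops

# Export the recorded credentials so subsequent commands run as the kops user
export AWS_ACCESS_KEY_ID=<AccessKeyId from the JSON output>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the JSON output>
```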
## Configure DNS
In order to build a Kubernetes cluster with kops, we need to prepare somewhere to build the required DNS records. There are three scenarios below; choose the one that most closely matches your AWS situation.
Note: if you want to use gossip-based DNS, you can skip this section.
### Scenario 1a: A Domain purchased/hosted via AWS
If you bought your domain with AWS, then you should already have a hosted zone in Route53. If you plan to use this domain then no more work is needed.
In this example you own example.com and your records for Kubernetes would look like etcd-us-east-1c.internal.clustername.example.com
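If you want to double-check, a sketch (assuming the example.com zone):

```bash
# Confirm the hosted zone for your domain already exists in Route53
aws route53 list-hosted-zones --query "HostedZones[?Name=='example.com.']"
```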
### Scenario 1b: A subdomain under a domain purchased/hosted via AWS
In this scenario you want to contain all Kubernetes records under a subdomain of a domain you host in Route53. This requires creating a second hosted zone in Route53, and then setting up route delegation to the new zone.
In this example you own example.com and your records for Kubernetes would look like etcd-us-east-1c.internal.clustername.subdomain.example.com
This involves copying the NS servers of your SUBDOMAIN up to the PARENT domain in Route53. To do this you should:
- Create the subdomain, and note your SUBDOMAIN name servers (If you have already done this you can also get the values)
- Note your PARENT hosted zone id
- Create a new JSON file with your values (subdomain.json)
Note: The NS values here are for the SUBDOMAIN
- Apply the SUBDOMAIN NS records to the PARENT hosted zone (a consolidated sketch of these steps follows below).
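A minimal sketch of the delegation flow, assuming you own example.com and are delegating subdomain.example.com (zone ids and NS values are placeholders; the caller reference just needs to be unique):

```bash
# 1. Create the subdomain hosted zone and note its name servers
aws route53 create-hosted-zone \
  --name subdomain.example.com \
  --caller-reference "$(uuidgen)" | jq .DelegationSet.NameServers

# 2. Note the hosted zone id of the PARENT domain
aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="example.com.") | .Id'

# 3. Write subdomain.json using the SUBDOMAIN NS values from step 1
cat > subdomain.json <<'EOF'
{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "subdomain.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-1.awsdns-1.example" },
          { "Value": "ns-2.awsdns-2.example" },
          { "Value": "ns-3.awsdns-3.example" },
          { "Value": "ns-4.awsdns-4.example" }
        ]
      }
    }
  ]
}
EOF

# 4. Apply the SUBDOMAIN NS records to the PARENT hosted zone
aws route53 change-resource-record-sets \
  --hosted-zone-id <parent-zone-id> \
  --change-batch file://subdomain.json
```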
Now traffic to *.subdomain.example.com will be routed to the correct subdomain hosted zone in Route53.
### Scenario 2: Setting up Route53 for a domain purchased with another registrar
If you bought your domain elsewhere, and would like to dedicate the entire domain to AWS, you should follow the guide here.
### Scenario 3: Subdomain for clusters in Route53, leaving the domain at another registrar
If you bought your domain elsewhere, but only want to use a subdomain in AWS Route53, you must modify your registrar's NS (NameServer) records. We'll create a hosted zone in Route53, and then migrate the subdomain's NS records to your other registrar.
You might need to grab jq for some of these instructions.
- Create the subdomain, and note your name servers (if you have already done this you can also get the values); a sketch follows below.
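A sketch, assuming the subdomain is subdomain.example.com (the caller reference just needs to be unique):

```bash
# Create the hosted zone for the subdomain and print its name servers
ID=$(uuidgen)
aws route53 create-hosted-zone \
  --name subdomain.example.com \
  --caller-reference "$ID" | jq .DelegationSet.NameServers
```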
You will now go to your registrar's page and log in. You will need to create a new SUBDOMAIN, and use the 4 NS records received from the above command for the new SUBDOMAIN. This MUST be done in order to use your cluster. Do NOT change your top level NS record, or you might take your site offline.
- Information on adding NS records with Godaddy.com
- Information on adding NS records with Google Cloud Platform
### Using Public/Private DNS (kOps 1.5+)
By default the assumption is that NS records are publicly available. If you require private DNS records, you should modify the commands we run later in this guide to include --dns private. If you have a mix of public and private zones, you will also need to include the --dns-zone argument with the hosted zone id you wish to deploy in, as sketched below:
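A sketch of how those flags fit into a later command (the cluster name and zone id are placeholders):

```bash
kops create cluster \
  --name useast1.dev.example.com \
  --zones us-east-1a \
  --dns private \
  --dns-zone ZABCDEFG1234567 \
  --state s3://example-com-state-store
```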
### Testing your DNS setup
This section is not required if a gossip-based cluster is created.
You should now be able to dig your domain (or subdomain) and see the AWS name servers on the other end.
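For example, assuming the subdomain.example.com zone from the scenarios above:

```bash
dig ns subdomain.example.com
```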
Should return something similar to:
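For instance (the awsdns host names shown here are placeholders; yours will differ):

```
;; ANSWER SECTION:
subdomain.example.com.  172800  IN  NS  ns-1.awsdns-1.net.
subdomain.example.com.  172800  IN  NS  ns-2.awsdns-2.org.
subdomain.example.com.  172800  IN  NS  ns-3.awsdns-3.com.
subdomain.example.com.  172800  IN  NS  ns-4.awsdns-4.co.uk.
```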
This is a critical component when setting up clusters. If you are experiencing problems with the Kubernetes API not coming up, chances are something is wrong with the cluster's DNS.
Please DO NOT MOVE ON until you have validated your NS records! This is not required if a gossip-based cluster is created.
## Cluster State storage
In order to store the state of your cluster, and the representation of your cluster, we need to create a dedicated S3 bucket for kops to use. This bucket will become the source of truth for our cluster configuration. In this guide we'll call this bucket example-com-state-store, but you should add a custom prefix as bucket names need to be unique.
We recommend keeping the creation of this bucket confined to us-east-1; otherwise more work will be required.
Note: S3 requires --create-bucket-configuration LocationConstraint=<region> for regions other than us-east-1.
Note: We STRONGLY recommend versioning your S3 bucket in case you ever need to revert or recover a previous state store.
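A sketch of both steps, using the example bucket name (remember to add your own unique prefix):

```bash
# Create the state-store bucket in us-east-1 (no LocationConstraint needed there)
aws s3api create-bucket \
  --bucket example-com-state-store \
  --region us-east-1

# Enable versioning so earlier state can be recovered
aws s3api put-bucket-versioning \
  --bucket example-com-state-store \
  --versioning-configuration Status=Enabled
```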
Information regarding the cluster state store location must be set when using the kops CLI. See state store for further information.
### Using S3 default bucket encryption
kops supports default bucket encryption to encrypt its state in an S3 bucket. This way, the default server-side encryption set for your bucket will be used for the kOps state too. You may want to use this AWS feature, e.g., for easily encrypting every written object by default or when you need to use specific encryption keys (KMS, CMK) for compliance reasons.
If your S3 bucket has default encryption set up, kOps will use it. For example, you might enable SSE-S3 default encryption on the state bucket like this (a sketch using the example bucket name):
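```bash
# Enable default server-side encryption (SSE-S3) for the state bucket
aws s3api put-bucket-encryption \
  --bucket example-com-state-store \
  --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```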
If the default encryption is not set or it cannot be checked, kOps will resort to using server-side AES256 bucket encryption with Amazon S3-Managed Encryption Keys (SSE-S3).
### Sharing an S3 bucket across multiple accounts
It is possible to use a single S3 bucket for storing kOps state for clusters located in different accounts by using cross-account bucket policies.
kOps will be able to use buckets configured with cross-account policies by default.
In this case you may want to override the object ACLs which kOps places on the state files, as default AWS ACLs will make it possible for an account that has delegated access to write files that the bucket owner cannot read.
To do this you should set the environment variable KOPS_STATE_S3_ACL to the preferred object ACL, for example: bucket-owner-full-control.
For available canned ACLs please consult Amazon's S3 documentation.
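For example:

```bash
# Let the bucket owner read state files written from delegated accounts
export KOPS_STATE_S3_ACL=bucket-owner-full-control
```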
## Creating your first cluster

### Prepare local environment
We're ready to start creating our first cluster! Let's first set up a few environment variables to make the process easier.
For a gossip-based cluster, make sure the name ends with k8s.local. For example:
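A sketch using the example names from this guide:

```bash
# Cluster name; for a gossip-based cluster it must end in .k8s.local,
# e.g. export NAME=myfirstcluster.k8s.local
export NAME=myfirstcluster.example.com

# The S3 state-store bucket created earlier
export KOPS_STATE_STORE=s3://example-com-state-store
```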
Note: You don't have to use environment variables here. You can always define the values using the --name and --state flags later.
### Create cluster configuration
We will need to note which availability zones are available to us. In this example we will be deploying our cluster to the us-west-2 region.
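For example:

```bash
# List the availability zones available to your account in us-west-2
aws ec2 describe-availability-zones --region us-west-2
```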
Below is a create cluster command. We'll use the most basic example possible, with more verbose examples in high availability. The command below will generate a cluster configuration, but will not start building it. Make sure you have generated an SSH key pair before creating your cluster.
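A sketch (the zone is an example; kops reads ~/.ssh/id_rsa.pub by default, or you can pass --ssh-public-key):

```bash
# Generate an SSH key pair if you don't already have one
ssh-keygen -t rsa -f ~/.ssh/id_rsa

# Generate the cluster configuration (nothing is built yet)
kops create cluster \
  --zones=us-west-2a \
  ${NAME}
```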
All instances created by kops will be built within Auto Scaling Groups (ASGs), which means each instance will be automatically monitored and rebuilt by AWS if it suffers any failure.
### Customize Cluster Configuration
Now that we have a cluster configuration, we can look at every aspect that defines our cluster by editing the description.
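To open the cluster spec in your editor:

```bash
kops edit cluster ${NAME}
```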
This opens your editor (as defined by $EDITOR) and allows you to edit the configuration. The configuration is loaded from the S3 bucket we created earlier, and automatically updated when we save and exit the editor.
We'll leave everything set to the defaults for now, but the rest of the kops documentation covers additional settings and configuration you can enable.
### Build the Cluster
Now we take the final step of actually building the cluster. This'll take a while. Once it finishes you'll have to wait longer while the booted instances finish downloading Kubernetes components and reach a 'ready' state.
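A minimal sketch:

```bash
# Apply the configuration and start building the AWS resources
kops update cluster ${NAME} --yes
```

(On kOps 1.19 and newer you may also need the --admin flag to export an admin user into your kubeconfig.)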
### Use the Cluster
Remember when you installed kubectl earlier? The configuration for your cluster was automatically generated and written to ~/.kube/config for you!
A simple Kubernetes API call can be used to check if the API is online and listening. Let's use kubectl to check the nodes.
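For example:

```bash
kubectl get nodes
```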
You will see a list of nodes that should match the --zones flag defined earlier. This is a great sign that your Kubernetes cluster is online and working.
kops also ships with a handy validation tool that can be run to ensure your cluster is working as expected.
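To validate:

```bash
kops validate cluster
```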
You can look at all system components with the following command.
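For example:

```bash
kubectl -n kube-system get pods
```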
### Delete the Cluster
Running a Kubernetes cluster within AWS obviously costs money, and so you may want to delete your cluster if you are finished running experiments.
You can preview all of the AWS resources that will be destroyed when the cluster is deleted by issuing the following command.
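A preview sketch (without --yes nothing is deleted):

```bash
kops delete cluster --name ${NAME}
```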
When you are sure you want to delete your cluster, issue the delete command with the --yes flag. Note that this command is very destructive, and will delete your cluster and everything contained within it!
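And to actually delete it:

```bash
kops delete cluster --name ${NAME} --yes
```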
## Next steps
Now that you have a working kOps cluster, read through the recommendations for production setups guide.
## Feedback
There's an incredible team behind kOps and we encourage you to reach out to the community on the Kubernetes Slack (http://slack.k8s.io/). Bring your questions, comments, and requests and meet the people behind the project!
## Legal
AWS Trademark used with limited permission under the AWS Trademark Guidelines.
Kubernetes Logo used with permission under the Kubernetes Branding Guidelines.
## Install BotKube to the Slack workspace
Follow the steps below to install the BotKube Slack app to your Slack workspace.
### Install BotKube Slack app to your Slack workspace
Click the Add to Slack button provided to install the BotKube Slack application to your workspace. Once you have authorized the application, you will be provided a BOT Access token. Note down that token, as it will be required while deploying the BotKube backend to your Kubernetes cluster.
Alternatively, you can install BotKube Slack app from Slack app directory.
### Add BotKube user to a Slack channel
After installing the BotKube app to your Slack workspace, you will see a new bot user named "BotKube" in your workspace. Add that bot to the Slack channel in which you want to receive notifications.
(You can add it by inviting @BotKube in a channel)
## Install BotKube Backend in Kubernetes cluster
### Using helm
- We will be using helm to install BotKube in Kubernetes. Follow this guide to install helm if you don't have it installed already.
- Add infracloudio chart repository
- Deploy the BotKube backend using helm install in your cluster, as sketched below.
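A sketch for BotKube v0.12.1 (the chart repo URL and value paths follow the BotKube docs of that era; verify them against the chart version you deploy):

```bash
# Add the infracloudio chart repository
helm repo add infracloudio https://infracloudio.github.io/charts
helm repo update

# Deploy the BotKube backend (Helm 3 syntax)
helm install botkube infracloudio/botkube \
  --version v0.12.1 \
  --namespace botkube --create-namespace \
  --set communications.slack.enabled=true \
  --set communications.slack.channel=SLACK_CHANNEL_NAME \
  --set communications.slack.token=SLACK_API_TOKEN_FOR_THE_BOT \
  --set config.settings.clustername=CLUSTER_NAME \
  --set config.settings.kubectl.enabled=ALLOW_KUBECTL
```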
where,
- SLACK_CHANNEL_NAME is the channel name where @BotKube is added
- SLACK_API_TOKEN_FOR_THE_BOT is the Token you received after installing BotKube app to your Slack workspace
- CLUSTER_NAME is the cluster name set in the incoming messages
- ALLOW_KUBECTL set to true to allow kubectl command execution by BotKube on the cluster
Configuration syntax is explained here. The complete list of helm options is documented here.
Send @BotKube ping in the channel to see if BotKube is running and responding.
With the default configuration, BotKube will watch all the resources in all the namespaces for create, delete and error events.
If you wish to monitor only specific resources, follow the steps given below:
- Create a new file config.yaml and add resource configuration as described on the configuration page. (You can refer to the sample config at https://raw.githubusercontent.com/infracloudio/botkube/v0.12.1/helm/botkube/sample-res-config.yaml)
- Open the downloaded deploy-all-in-one.yaml and update the configuration. Set SLACK_ENABLED, SLACK_CHANNEL, SLACK_API_TOKEN, clustername, kubectl.enabled, and update the resource events configuration you want to receive notifications for in the configmap.
where,
- SLACK_ENABLED set to true to enable Slack support for BotKube
- SLACK_CHANNEL is the channel name where @BotKube is added
- SLACK_API_TOKEN is the token you received after installing the BotKube app to your Slack workspace
- clustername is the cluster name set in the incoming messages
- kubectl.enabled set to true to allow kubectl command execution by BotKube on the cluster
Configuration syntax is explained here.
- Deploy the resources, as sketched below.
- Check pod status in the botkube namespace. Once running, send @BotKube ping in the Slack channel to confirm that BotKube is responding correctly.
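A sketch, assuming the deploy-all-in-one.yaml edited above:

```bash
# Create the BotKube resources from the all-in-one manifest
kubectl create -f deploy-all-in-one.yaml

# Watch the BotKube pod come up
kubectl get pods -n botkube
```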
## Remove BotKube from Slack workspace
- Go to the Slack manage apps page
- Click on "BotKube", then click the "Remove App" button
## Remove BotKube from Kubernetes cluster
### Using helm
If you have installed the BotKube backend using helm, execute the following command to completely remove BotKube and related resources from your cluster.
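A sketch (Helm 3 syntax; on Helm 2 the equivalent was helm delete --purge botkube):

```bash
helm uninstall botkube --namespace botkube
```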