Erik's Thoughts and Musings

Apple, DevOps, Technology, and Reviews

Installing Jenkins in Minikube (M1 Mac)

Introduction

I have always wanted to try to set up Jenkins in a Minikube instance. While not strictly necessary, it is a good exercise to get DevOps-type tools running in a cluster.

Prerequisites

These tools are recommended to get Minikube running:

  • Homebrew - the self-described "Missing Package Manager for macOS"
  • Minikube - tool to easily create a single node Kubernetes cluster
  • Docker Desktop for M1 - container runtime
  • kubectl - Kubernetes command-line tool (CLI)
  • kubectx/kubens - Convenience tool for changing contexts and namespaces
  • helm - de facto Kubernetes package manager

Homebrew

Homebrew is arguably the best package manager for the Mac for installing terminal applications. It has a very simple install command:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Minikube

Minikube is an easy-to-deploy single-node Kubernetes cluster. Use Homebrew to install it:

brew install minikube

Docker Desktop

Docker Desktop is required to run Minikube on M1 Macs. Here is the direct link to the installer disk image for M1 Macs:

https://desktop.docker.com/mac/main/arm64/Docker.dmg

Note that, as of the time of this writing, you can't run Minikube in VirtualBox on M1 Macs. You will get this error:

$ minikube start --vm-driver=virtualbox
😄  minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Using the virtualbox driver based on user configuration

❌  Exiting due to DRV_UNSUPPORTED_OS: The driver 'virtualbox' is not supported on darwin/arm64

You must use Docker.

kubectl

kubectl is the command-line tool for interacting with your Kubernetes API. It is easily installable via Homebrew:

brew install kubectl

kubectx / kubens

The next two tools are not required per se. In my mind they are the easy shortcuts that should have been included with any install of kubectl:

  • kubectx - Change the kubernetes context from one cluster to another
  • kubens - Easily change the default kubernetes namespace

You can do both using kubectl config ... commands, but these tools reduce each to a single short command.

They are both installable via Homebrew with one command:

brew install kubectx

More details about the kubectx/kubens tools are available at the Github repository.

Helm

In the same way that Homebrew is the de facto package manager for macOS, Helm is the de facto package manager for Kubernetes.

brew install helm

Helm will be used to install Jenkins onto the Kubernetes cluster.

Docker Preferences

Kubernetes needs a fair amount of resources to run in Docker. For example, if you run this command against Docker Desktop's default resource limits, you will get the following error:

$ minikube start --memory 8192 --cpus 4 --vm-driver=docker
😄  minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Using the docker driver based on user configuration

❌  Exiting due to MK_USAGE: Docker Desktop has only 1988MB memory but you specified 8192MB

To fix:

  • Launch Docker Desktop
  • Under the Preferences cog in the upper right choose "Resources"
  • Set CPUs and Memory to something suitable for Kubernetes. I chose 4 CPUs and 10 GB of RAM.
  • "Apply and Restart"

Minikube Start

It should now be possible to start minikube:

$ minikube start --memory 8192 --cpus 4 --vm-driver=docker
😄  minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
    > gcr.io/k8s-minikube/kicbase: 321.58 MiB / 321.58 MiB  100.00% 2.38 MiB p/
🔥  Creating docker container (CPUs=4, Memory=8192MB) ...
    > kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 41.44 MiB / 41.44 MiB [---------------] 100.00% 3.80 MiB p/s 11s
    > kubeadm: 40.50 MiB / 40.50 MiB [---------------] 100.00% 3.14 MiB p/s 13s
    > kubelet: 107.26 MiB / 107.26 MiB [-------------] 100.00% 5.41 MiB p/s 20s

    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Minikube Dashboard / k9s

At this point you probably want to decide which tool you will use to help you troubleshoot issues with the cluster you created. Most minikube documentation points to the Minikube Dashboard, which is easily invoked:

minikube dashboard

The UI will launch in your default web browser and is very similar to the Kubernetes Dashboard in design.

However, being a Terminal person, I tend to use k9s. It is easy to install, lightweight, and reminds me most of using something like top to get a handle on what is happening in your cluster. In fact, I often run it in another pane of my Terminal. Here is how you install it:

brew install k9s

Launching k9s from the Terminal will give you a curses view where you can bounce around. Here is a more complete Medium post on how to use it:

K9s — the powerful terminal UI for Kubernetes
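
A few keystrokes I find myself reaching for inside k9s (worth double checking against the built-in help, since bindings can change between versions):

:pods     # jump to the pods view
/jenkins  # filter the current view
?         # show all keybindings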

Installing Jenkins into Minikube

Now that all of the prerequisites are out of the way, it is time to install Jenkins using Helm. The first step is to add the Jenkins chart repository to your helm install:

helm repo add jenkins https://charts.jenkins.io
helm repo update

You can then search for the latest helm chart by doing the following:

helm search repo jenkins
NAME            CHART VERSION   APP VERSION DESCRIPTION
jenkins/jenkins 3.9.0           2.303.3     Jenkins - Build great things at any scale! The ..

Pull the chart from the repo:

helm pull jenkins/jenkins

This should create a helm chart archive in the local folder. In this case it is named jenkins-3.9.0.tgz.

Helm works by overriding the values from the chart to set your own values. Simply create the values file for the chart by doing the following:

helm show values jenkins/jenkins > jenkins-values.yaml

It should have a bunch of stuff that is disabled by default. My values.yaml file was almost 900 lines long with a lot of comments, and it is fine to remove most of it. For Minikube I mainly want to override the namespace and the persistent volume (PV) that we will use in the cluster, but before we do that we have to create them. I want Jenkins to be installed in the jenkins namespace and I want the PV to live locally in my home folder.

First create the namespace and a suitable definition for the PV:

$ kubectl apply -f jenkins-namespace.yaml
namespace/jenkins created
$ kubectl apply -f jenkins-volume.yaml
persistentvolume/jenkins-volume created
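
Neither manifest is anything fancy. Here is a minimal sketch of what the two files might contain; the storage class name, capacity, and hostPath are assumptions, so adjust them to your setup:

# jenkins-namespace.yaml - a minimal sketch
apiVersion: v1
kind: Namespace
metadata:
  name: jenkins

# jenkins-volume.yaml - storage class, size, and path are assumptions
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-volume
spec:
  storageClassName: jenkins-pv
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  hostPath:
    path: /data/jenkins-volume/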

Minikube does not come with a LoadBalancer by default, so you also have to change the service type to NodePort in jenkins-values.yaml.
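
Here is a sketch of the relevant overrides, assuming the key names from chart version 3.9.0 (older 2.x charts used master instead of controller):

# jenkins-values.yaml (excerpt) - key names assumed from chart 3.9.0
controller:
  serviceType: NodePort

persistence:
  storageClass: jenkins-pv   # matches the PV sketched above

With those values in place, install the chart: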

$ helm install jenkins ./jenkins-3.9.0.tgz -n jenkins -f jenkins-values.yaml
NAME: jenkins
LAST DEPLOYED: Sat Nov 27 17:29:40 2021
NAMESPACE: jenkins
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
  kubectl exec --namespace jenkins -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/chart-admin-password && echo
...

3. Login with the password from step 1 and the username: admin
4. Configure security realm and authorization strategy
5. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file, see documentation: http:///configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine

For more information about Jenkins Configuration as Code, visit:
https://jenkins.io/projects/jcasc/


NOTE: Consider using a custom image with pre-installed plugins

As the notes above mention, you have to get the auto-generated default admin password. In my case:

$ kubectl exec --namespace jenkins -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/chart-admin-password && echo
5HrilnfS7eHAVfwDfyKv9B

Because Kubernetes is running inside Docker, you need to use the minikube service command to tunnel in and launch the Jenkins UI in your default browser:

$ minikube service jenkins -n jenkins
|-----------|---------|-------------|---------------------------|
| NAMESPACE |  NAME   | TARGET PORT |            URL            |
|-----------|---------|-------------|---------------------------|
| jenkins   | jenkins | http/8080   | http://192.168.49.2:30897 |
|-----------|---------|-------------|---------------------------|
πŸƒ  Starting tunnel for service jenkins.
|-----------|---------|-------------|------------------------|
| NAMESPACE |  NAME   | TARGET PORT |          URL           |
|-----------|---------|-------------|------------------------|
| jenkins   | jenkins |             | http://127.0.0.1:52330 |
|-----------|---------|-------------|------------------------|
🎉  Opening service jenkins/jenkins in default browser...
❗  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

Use the admin user and the password above to login to the UI.

You can now create a Freestyle or Pipeline job that will launch agent images within the cluster to execute the build.

You can see the Jenkins Kubernetes cluster configuration under Manage Jenkins > Manage Nodes and Clouds > Configure Clouds > Kubernetes Cloud Details.

Uninstall Jenkins and Minikube

If at any point you need to tear it all down, kill the minikube service command with a Ctrl-C and then uninstall Jenkins:

helm uninstall jenkins -n jenkins

And then clean up the Minikube detritus:

minikube stop
minikube delete

Create an AWS VPC From Scratch

Today I went through the process of doing something I have never done before. Using some videos I found on Udemy, I created an AWS VPC from scratch. It is not that I am new to AWS networking; it is just that I have always based my instances on existing VPCs, subnets, and network security. Being able to do it from scratch feels like a minor accomplishment. Here is the rough workflow:

  1. Create a VPC

  2. Create the subnets:

    • 3 public subnets in Availability Zones 1a, 1b, and 1c
    • 3 private subnets in Availability Zones 1d, 1e, and 1f
  3. Don't forget that the public subnets have to autoassign IPs (Actions > Modify Auto-assign IPs > Enable auto-assign public IPv4 address)

  4. Create Internet Gateway and attach to VPC (Actions > Attach to VPC)

  5. Edit the default routing table for the public subnets and make sure it can route out the Internet Gateway

  6. Create a routing table for the private subnets that can't go out the Internet Gateway. Associate the private subnets with it.

  7. Create a public security group that allows inbound rules for SSH from my personal IP.

  8. Create a private security group that allows inbound rules for SSH from my personal IP.

After all of that I was able to spin up a quick and dirty Terraform file that builds a t2.micro instance in the VPC, and surprisingly it worked the first time.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

data "aws_subnet" "public_subnet_1" {
  id = "subnet-XYZ-public"
}

# Declared here so the file is self-contained; values come from
# a tfvars file or interactive prompts
variable "ami_id" {}
variable "instance_type" { default = "t2.micro" }
variable "key_name" {}

resource "aws_instance" "webserver" {
  ami           = var.ami_id
  instance_type = var.instance_type
  subnet_id     = data.aws_subnet.public_subnet_1.id
  key_name      = var.key_name

  # Inside a VPC, reference security groups by ID via
  # vpc_security_group_ids; the plain security_groups argument
  # takes names and only applies to the default VPC
  vpc_security_group_ids = ["sg-public"]

  tags = {
    Name        = "webserver"
    Environment = "prod"
  }
}
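
Assuming values are supplied for the variables above, the usual two commands bring the instance up:

terraform init
terraform apply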

xargs

I always wanted to learn how to pass a value to xargs somewhere in the middle of the command. The -I option does exactly that: the {} acts as a token that is replaced with each input value. For example, here is a way to search for an IP prefix in a bunch of text files:

echo 192.168.1. | xargs -I{} grep {} *.txt

Here is how to pass -l to the middle of an ls {} -F command:

$ echo "-l" | xargs -I{} ls {} -F
total 0
drwxr-xr-x@ 7 emartin  staff  224 Nov  6 22:58 Folder1/
drwxr-xr-x@ 7 emartin  staff  224 Nov  6 23:58 Folder2/

I am really going to find this handy for things like running a find and then copying each result to a base folder.
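
For instance, something like this (the folder names are made up) copies every Markdown file that find locates into one base folder:

find . -name "*.md" | xargs -I{} cp {} ~/blog-drafts/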

You can actually use almost any token for the -I option.
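
For example, the ls command above works just as well with a made-up token like FLAGS:

echo "-l" | xargs -I FLAGS ls FLAGS -F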

(HT: Stack Exchange)

New Python Blogger Tool

Earlier this week, I created a new Blogger tool to support this new Pelican-based blog. It is a simple Python 3 tool that accepts a variable number of arguments and creates a Pelican-friendly Markdown file. It then launches that new file in the Markdown editor of your choice; in my case it launches Visual Studio Code. Here is an example:

$ blogger.py New Python Blogger Tool
Creating: 2021-04-17-new-python-blogger-tool.md

And since I hate typing repetitive words, my .zshrc has the following line to shorten launching the blog editor to simply b:

alias b=blogger.py

Sample call to the tool:

$ b New Python Blogger Tool
Creating: 2021-04-17-new-python-blogger-tool.md

The code is not too advanced so I will just put it below. The script requires:

  • Python 3.6+ for f-strings
  • pyyaml from pip

You will probably notice custom util and path modules. Those are just simple helper modules. The main functions used:

  • path.canonize() - Converts paths like ~/folder/../folder2 into /Users/emartin/folder2. See os.path.normpath() and os.path.expanduser() in the built-in os module.
  • util.log() - My console logger with time stamps. Easily replaceable with a print() statement.

I would post the link to my Github repo, but it is private and it would take me a while to extract secrets. Maybe one day soon. Here is the code:

#!/usr/bin/env python3
import argparse
import os

import path
import util
import run

# pip install pyyaml
import yaml

# Backup configuration needs to live in the same location as script
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
config_yaml = "blogger_config.yaml"
config_yaml_path = os.path.join(__location__, config_yaml)

def blogger(config, title):
  source_folder = path.canonize(config['source_folder'])

  # Validate source folder exists
  if not os.path.exists(source_folder):
    util.log(f'ERROR: Source folder does not exist: {source_folder}')
    return -1

  content_folder = os.path.join(source_folder, 'content')

  # Validate content folder exists
  if not os.path.exists(content_folder):
    util.log(f'ERROR: Content folder does not exist: {content_folder}')
    return -1

  # Generate the file name from the date and passed in title
  today = util.todaysDate()
  year = '{:%Y}'.format(today)
  month = '{:%m}'.format(today)
  day = '{:%d}'.format(today)
  name = title.lower().replace(" ","-")
  filename = f'{year}-{month}-{day}-{name}.md'

  new_blog_file = os.path.join(content_folder, filename)

  # Only create the file if the file doesn't exist otherwise just open it
  if not os.path.exists(new_blog_file):
    # Get the metadata defaults from the config file
    category = config['category']
    tag = config['tag']
    author = config['author']

    # Write metadata to top of the file in yaml format
    print(f'Creating: {filename}')
    with open(new_blog_file, 'a') as f:
      f.write(f'Title: {title}\n')
      f.write(f'Date: {today}\n')
      f.write(f'Category: {category}\n')
      f.write(f'Tags: {tag}\n')
      f.write(f'Authors: {author}\n')

  # Launch Markdown Tool
  markdown_tool = path.canonize(config['markdown_tool'])
  run.launch_app(markdown_tool, new_blog_file)

# Main function
if __name__ == '__main__':
  # Parse arguments
  parser = argparse.ArgumentParser(description='Script to automate blogging.')
  parser.add_argument('title', help='The title of the blog.', nargs='*', default='')
  args = parser.parse_args()

  # Dump Help if no parameters were passed
  if (len(args.title) == 0):
    parser.print_help()
    exit(0)

  # Load configuration file
  config = {}
  with open(config_yaml_path) as file:
    config = yaml.full_load(file)

  # Run Blogger
  title = " ".join(args.title)
  return_code = blogger(config, title)
  exit(return_code)

And the blogger_config.yaml looks like this:

source_folder: ~/Source/blog.emrt.in
category: "Misc|Media|Technology"
tag: "Blog"
author: Erik Martin
markdown_tool: /Applications/Visual Studio Code.app/Contents/Resources/app/bin/code

Descriptions of the configuration:

  • source_folder - Where the Pelican blog source is located. Files are created in the content folder.
  • category - category of the post (blog entry metadata)
  • tag - comma separated tags of the post (blog entry metadata)
  • author - default author of the post (blog entry metadata)
  • markdown_tool - The path to the markdown tool that launches the created .md file

The Expanse Revisited

A number of years ago I started The Expanse series of books. I read Leviathan Wakes the year it was published on a recommendation from the virtual book club. The series is space opera sci-fi that I felt started out with a bang, but it lost me around book 3 (Abaddon's Gate). Looking back, I was probably simply overloaded with genre fiction and I never re-engaged, until this year.

Now that there are signs that we are slowly getting out of the pandemic, I was looking for a book series that was a bit more noir-ish, so I restarted the series. I have now passed where I stopped. I am about 10% into Cibola Burn and I can honestly say that it was a mistake not to continue with the series: the nuanced characterizations, the good people put in bad situations (Holden), the restraint not to tell everyone's back story in book 1. If I have one complaint, it is that the authors, James S.A. Corey, pull in new major characters every book and you have to learn the series dynamics anew. That may have been why I backed away from the series.

The cool bit is that there is now a 5-season TV series on Amazon Prime to complement the books. I find myself getting ahead in the book series and then playing catchup with the TV series. As usual, some of the differences are a bit jarring or disappointing, but I guess I am aged enough not to let that turn me off the TV series completely. The logistics of keeping a cast busy throughout a season necessitate some fluff or some consolidation of storylines. I do like how the TV series introduced Avasarala early in season 1. It also helps that Shohreh Aghdashloo is a fantastic actress with such screen presence. The addition of the Drummer character was probably the most jarring of the changes for me because I wasn't sure at first who she was supposed to be. At first I thought she was Sam, then Bull in season 3. Oh well, the actress, Cara Gee, is so wonderful that I learned to sit back and enjoy the ride with her storyline.

I am also listening to the series on audiobook. The voice actor is really good at capturing the Belter language: when I read the words in the ebook and then listen to the same dialog in the audiobook, it is never how I would have pictured it being said.

New Tool For Blogging

Today I finished off the first draft of my new blogger tool that I wrote in python to simplify the workflow of creating new Markdown files for the blog. Features:

  • Loads a yaml configuration file
  • Parses the command-line for the new blog title
  • Creates the new blog file in the Pelican content folder
  • Adds the Pelican metadata at the top of the Markdown file with the Title, Date, Category, and Author.
  • Launches the configured Markdown tool. In this case Visual Studio Code.

It took me about 20 minutes in total to build and test because it is very similar to my customized Journal scripts. I still want to make some changes and add some polish to make it easier to handle titles. I'll do another post soon with more details.

Update: Done. Added polish.

Pelican

After 7 long years of using Octopress on this blog, today I moved it to Pelican.

I love static site generators. No real hacking threat like WordPress. No need to have a database. All you need is a little Markdown, a little metadata, and an engine to generate the site.

I like Pelican because it is based on Python instead of Ruby. Also, I could never really get used to Jekyll. The dependency handling was just way too complicated for my needs.

The command line to publish a new blog post is right up my alley. Here is all you have to do to create a new post:

cd blog.emrt.in
# create/edit a blog post
vi content/2021-04-11.md
pelican content
make rsync_upload

In typical Python fashion, the code makes it really easy to decipher what is going on. I found a nice Octopress theme for Pelican that made it super simple to move my old blog to the new theme. The most difficult part of the whole process was converting the Markdown headers from a yaml-based format to Pelican's simple text format.
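
For example, a post header that used to look like a yaml front matter block (a simplified sketch):

---
title: Pelican
date: 2021-04-11
---

becomes the plain Pelican metadata header:

Title: Pelican
Date: 2021-04-11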

I do need to create some quick and dirty Python scripts to help with creating the skeleton for a new blog post. I also need to get all of this moved to Github.

Git-Flow

My cousin, Chris, and I were talking last night about how I need to ramp up on Git. I use Git for this blog and I know how to do basic stuff, but I also need to come up to speed on some of the technologies and methodologies surrounding the software. Chris suggested that I check out git-flow, outlined in this blog post:

http://jeffkreeftmeijer.com/2010/why-arent-you-using-git-flow/

The main idea of git-flow is that you have a main develop branch, and master is only for production releases. Features go on a feature branch and are merged back to the develop branch. There is also a release branch for stabilization and bug fixes, and a hotfix branch, which is created from master and then committed back to both master and develop on release.

There are Git extensions you can install with Homebrew:

$ brew install git-flow

Or you can integrate them manually from Github:

https://github.com/nvie/gitflow

With the extension you can initialize a repository that follows the git-flow methodology:

$ git flow init

It will then ask you to choose the branch prefix for each of the flow branches. Then you can continue to use the extensions to do things like start a feature branch:

$ git flow feature start hot-new-feature
Switched to a new branch 'feature/hot-new-feature'

And then finish it, merging it back to develop and deleting the feature branch:

$ git flow feature finish hot-new-feature
Switched to branch 'develop'

To create a release branch that tags it with a version:

$ git flow release start 1.0
Switched to a new branch 'release/1.0'

You can then manually merge to master, or you can finish the release:

$ git flow release finish 1.0
Switched to branch 'master'

And from the blog post, here is all of what happens:

Boom. git-flow pulls from origin, merges the release branch into master, tags the release and back-merges everything back into develop before removing the release branch.

Hotfixes have a similar syntax; the difference is that they are based off of master, not develop.
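
For example (the version number here is made up):

$ git flow hotfix start 1.0.1
Switched to a new branch 'hotfix/1.0.1'

Finishing it with git flow hotfix finish 1.0.1 merges the fix into both master and develop and tags the release.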

Considering how powerful Git is when it comes to branching and merging compared to Subversion, it seems like a great idea to have these transient branches.

Maven Research

Maven is a project manager, not just a build tool like Ant, hence the pom.xml file (Project Object Model). It uses a standard convention for where files live on the system to make it simpler to manage dependencies.
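
That standard layout looks roughly like this (the project name is made up):

my-app/
  pom.xml
  src/main/java/        # application sources
  src/main/resources/   # application resources
  src/test/java/        # test sources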

$ mvn archetype:generate

This generates a simple template. archetype is the plugin, generate is the goal.

$ mvn help:effective-pom

This prints the effective POM: your minimal pom.xml merged with any parent POMs, user settings, and active profiles.

Maven doesn't know how to compile your code or make a JAR file, but the plugins it uses do. Basic Maven is a basic shell that knows how to:

  • parse the command-line
  • manage a classpath
  • parse a POM file
  • download Maven plugins

Maven Lifecycle

Plugin goals can be attached to lifecycle phases (e.g. install, test). Each phase may have 0 or more goals attached to it. Example phases:

  • process-resources
  • compile
  • process-classes
  • process-test-resources
  • test-compile
  • test
  • prepare-package
  • package

When a phase fires, mvn launches all phases before it in the lifecycle.
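
For example, this single command fires every phase up through package (process-resources, compile, test, and so on):

$ mvn package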

You can also just specify the plugin goals directly, but that is a lot more tedious:

$ mvn resources:resources compiler:compile resources:testResources surefire:test jar:jar install:install

4 (really 5) definitions create the coordinate system for a project (see the pom.xml sketch after this list):

  1. groupId - reverse domain name
  2. artifactId - unique identifier under the group representing a single project
  3. packaging - package type (jar, war, ear)
  4. version - a specific release of a project
  5. classifier (rarely used)
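
Here is a minimal sketch of how those coordinates appear in a pom.xml (the group and artifact names are made up):

<!-- coordinates below are made-up examples -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>sample-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0.0</version>
</project>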

Maven downloads artifacts and plugins from a remote repository to your local machine and stores them in your local Maven repository (~/.m2/repository). The install phase installs a .jar into the local repository. A repository entry contains the .jar and the .pom describing what it depends on.
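
For example, after an install the made-up coordinates above would land the jar here:

~/.m2/repository/com/example/sample-app/1.0.0/sample-app-1.0.0.jar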

You can generate a site and documentation by doing the following:

$ mvn site

This generates javadoc and other custom reports.

This can tell you what all of the dependencies are:

$ mvn dependency:resolve

A prettier way to see the dependency tree:

$ mvn dependency:tree

I got a lot of useful information for this post from Maven By Example.