Erik's Thoughts and Musings

Apple, DevOps, Technology, and Reviews

Windows Container Image - RabbitMQ

After years of using Docker, today was my first day debugging a Docker container build for Windows. I was so surprised that it wasn't a Linux-based container that I felt like a deer in headlights when it started causing problems. It was a RabbitMQ image. Fortunately, the container has both cmd.exe and powershell.exe. It took a little web searching to figure out how to do things like cat and tail -f in PowerShell, but before long I was looking at the logs:

docker exec -it rabbitmq powershell.exe
cd \Users\ContainerAdministrator\AppData\Roaming\RabbitMQ\log
Get-Content .\rabbit@localhost.log -tail 100 -Wait
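
For reference, here are the PowerShell stand-ins for the Unix commands I kept reaching for. These are standard cmdlets, nothing specific to this image:

Get-ChildItem                                        # ls - list the log directory
Get-Content .\rabbit@localhost.log                   # cat - dump the whole log
Get-Content .\rabbit@localhost.log -Tail 100 -Wait   # tail -f - follow the log
Select-String WARNING .\rabbit@localhost.log         # grep - search the log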

The log:

=WARNING REPORT==== 28-Dec-2023::06:00:39 ===
closing AMQP connection <0.25045.3> (10.0.83.5:60070 -> 172.25.205.23:5672, vhost: '/', user: 'admin'):
client unexpectedly closed TCP connection
=WARNING REPORT==== 28-Dec-2023::06:00:40 ===
closing AMQP connection <0.15059.3> (10.0.83.5:63367 -> 172.25.205.23:5672, vhost: '/', user: 'admin'):
client unexpectedly closed TCP connection
=INFO REPORT==== 28-Dec-2023::06:01:51 ===
accepting AMQP connection <0.25498.3> (10.0.83.5:60211 -> 172.25.205.23:5672)
=INFO REPORT==== 28-Dec-2023::06:01:51 ===
connection <0.25498.3> (10.0.83.5:60211 -> 172.25.205.23:5672): user 'admin' authenticated and granted access to vhost '/'

And then a little more searching for how to get the command-line history when I wanted to re-run commands:

doskey /history
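
Note that doskey history comes from the cmd.exe world; in a PowerShell session, the native equivalents are Get-History and Invoke-History:

Get-History        # list the session's command history (alias: h)
Invoke-History 42  # re-run history entry 42 (alias: r)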

Azure Workload Identity Federation

I started working on switching our Azure DevOps service connections over to federated workload identities. There is a good page and video from Microsoft about how Workload Identity Federation works:

https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation

Basically, the way it works is that a trust is set up between Azure DevOps and our Azure subscriptions using these parameters (example values shown):

  • Issuer URL: https://vstoken.dev.azure.com/abcdefc4-ffff-fff-...
  • Subject: sc://org/product/test-emartin-federated
  • Audience: api://AzureADTokenExchange (always)

You then tie that to a service principal in Azure AD, which is used as the identity for actions in the subscription. Here is another resource about setting it up with Terraform:

https://techcommunity.microsoft.com/t5/azure-devops-blog/introduction-to-azure-devops-workload-identity-federation-oidc/ba-p/3908687

Why use Workload identity federation? Until now, the only way to avoid storing service principal secrets for Azure DevOps pipelines was to use self-hosted Azure DevOps agents with managed identities. Workload identity federation removes that limitation and lets you use short-lived tokens for authenticating to Azure. This significantly improves your security posture and removes the need to figure out how to share and rotate secrets. Workload identity federation works with many Azure DevOps tasks, not just the Terraform ones we are focusing on here, so you can use it for deploying code and other configuration tasks; the supported tasks are listed in the Microsoft documentation.

What is Workload identity federation and how does it work? Workload identity federation is an OpenID Connect implementation for Azure DevOps that allows you to use short-lived, credential-free authentication to Azure without the need to provision self-hosted agents with a managed identity. You configure a trust between your Azure DevOps organization and an Azure service principal. Azure DevOps then provides a token that can be used to authenticate to the Azure API.

Here is the Terraform:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.0.0"
    }
    azuredevops = {
      source = "microsoft/azuredevops"
      version = ">= 0.9.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azuredevops_project" "example" {
  name               = "Example Project"
  visibility         = "private"
  version_control    = "Git"
  work_item_template = "Agile"
  description        = "Managed by Terraform"
}

resource "azurerm_resource_group" "identity" {
  name     = "identity"
  location = "UK South"
}

resource "azurerm_user_assigned_identity" "example" {
  location            = azurerm_resource_group.identity.location
  name                = "example-identity"
  resource_group_name = azurerm_resource_group.identity.name
}

resource "azuredevops_serviceendpoint_azurerm" "example" {
  project_id                             = azuredevops_project.example.id
  service_endpoint_name                  = "example-federated-sc"
  description                            = "Managed by Terraform"
  service_endpoint_authentication_scheme = "WorkloadIdentityFederation"
  credentials {
    serviceprincipalid = azurerm_user_assigned_identity.example.client_id
  }
  azurerm_spn_tenantid      = "00000000-0000-0000-0000-000000000000"
  azurerm_subscription_id   = "00000000-0000-0000-0000-000000000000"
  azurerm_subscription_name = "Example Subscription Name"
}

resource "azurerm_federated_identity_credential" "example" {
  name                = "example-federated-credential"
  resource_group_name = azurerm_resource_group.identity.name
  parent_id           = azurerm_user_assigned_identity.example.id
  audience            = ["api://AzureADTokenExchange"]
  issuer              = azuredevops_serviceendpoint_azurerm.example.workload_identity_federation_issuer
  subject             = azuredevops_serviceendpoint_azurerm.example.workload_identity_federation_subject
}
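
Once applied, pipelines can use the connection by name. Here is a minimal sketch of a pipeline step that authenticates through it; the AzureCLI@2 task is a standard Azure DevOps task, and the connection name comes from the Terraform above:

# azure-pipelines.yml (sketch)
trigger: none

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'example-federated-sc'  # the federated service connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az account show   # proves we authenticated without a stored secret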

AWS - Reserved Instances

I have been using AWS at work for over 3 years to varying degrees. While I feel comfortable using and administering most things, I realized it is time for me to get serious and fill the gaps in my knowledge. AWS has so many bells and whistles that it is daunting to think you can learn everything and keep that knowledge relevant when day-to-day you probably only use 5% of the features.

To fix these gaps, last month I started taking an AWS course on Udemy that will prepare me for one of the lower-level AWS DevOps certifications. Due to distractions with kids and life happening, I am still in the first 10% of the course, going over the basics. I am using my main AWS account as a sandbox for trying things out. Today the class got to the EC2 section, and I realized that I have not been smart about saving money on my own AWS workloads. For the last 9 months, this blog has been running in AWS on On-Demand pricing, not a Reserved Instance. I can save about 30% of the cost of the server with a one-year Reserved Instance and roughly 60% with a three-year term. I plunked down the money to pay for one year.
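
I made the purchase in the console, but the same thing can be done from the CLI. A sketch matching my setup (the offering ID below is a placeholder):

# List one-year, all-upfront standard RI offerings for this instance type
aws ec2 describe-reserved-instances-offerings \
  --instance-type t2.micro \
  --product-description "Linux/UNIX" \
  --offering-class standard \
  --offering-type "All Upfront" \
  --max-duration 31536000

# Purchase a specific offering by its ID
aws ec2 purchase-reserved-instances-offering \
  --reserved-instances-offering-id <offering-id> \
  --instance-count 1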

AWS - Modifying EC2 DeleteOnTermination

Delete on Termination

I created my web server late last year on an EC2 instance in AWS. While I built the instance with Terraform, I never set the EC2 instance's EBS "Delete on Termination" flag to false. That means that if I ever terminated the instance instead of stopping it, my main EBS volume would just disappear. That's not a huge deal, because I built the web server with automation and could easily regenerate it quickly, but I didn't want to lose things like server logs.
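
For the record, this is roughly the Terraform I should have written in the first place (a sketch showing only the relevant block):

resource "aws_instance" "webserver" {
  # ... ami, instance_type, and so on ...

  root_block_device {
    delete_on_termination = false  # keep the root EBS volume when the instance is terminated
  }
}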

I started poking around the console looking for how to switch the flag after the fact and was perplexed. Some searching on the web confirmed there is no way to do it in the console! You have to use the aws ec2 modify-instance-attribute CLI command to change it.

Parameters for the CLI

You need two things to use the AWS CLI command:

  • EC2 instance ID
  • Storage device name

The instance ID was easy to get either by using the console or in a roughshod way using the AWS CLI:

$ aws ec2 describe-instances --output yaml | grep Instance
  Instances:
...
    InstanceId: i-04753
    InstanceType: t2.micro
...

The device name is also easy to find in the console by going to the Storage tab, but can also be found via the CLI:

$ aws ec2 describe-instances --output yaml | grep -A 6 BlockDeviceMappings
    BlockDeviceMappings:
    - DeviceName: /dev/xvda
      Ebs:
        AttachTime: '2021-11-28T03:03:28+00:00'
        DeleteOnTermination: true
        Status: attached
        VolumeId: vol-0e40

That would mean our two parameters would be:

  • EC2 instance ID: i-04753
  • Storage device name: /dev/xvda

Running the CLI

First, you need to create a JSON file (storage.json below) that specifies the device name and the DeleteOnTermination flag:

[
  {
    "DeviceName": "/dev/xvda",
    "Ebs": {
      "DeleteOnTermination": false
    }
  }
]

And then you invoke the command:

aws ec2 modify-instance-attribute --instance-id i-04753 --block-device-mappings file://storage.json

There is no output on a successful change, but you can confirm that the change was made with the same command as above:

$ aws ec2 describe-instances --output yaml | grep -A 6 BlockDeviceMappings
    BlockDeviceMappings:
    - DeviceName: /dev/xvda
      Ebs:
        AttachTime: '2021-11-28T03:03:28+00:00'
        DeleteOnTermination: false
        Status: attached
        VolumeId: vol-0e40

Notice DeleteOnTermination is now set to false.

(HT to Pete Wilcock)

Installing Jenkins in Minikube (M1 Mac)

Introduction

I have always wanted to try setting up Jenkins in a Minikube instance. While Jenkins doesn't require Kubernetes, it is a good exercise to get DevOps-type tools running in a cluster.

Prerequisites

These tools are recommended to get Minikube running:

  • Homebrew - self described "Missing Package Manager for macOS"
  • Minikube - tool to easily create a single node Kubernetes cluster
  • Docker Desktop for M1 - container runtime
  • kubectl - Kubernetes command-line tool (CLI)
  • kubectx/kubens - Convenience tool for changing contexts and namespaces
  • helm - de facto Kubernetes package manager

Homebrew

Homebrew is arguably the best package manager on the Mac for installing terminal applications. It has a very simple one-line install:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Minikube

Minikube is an easy-to-deploy single-node Kubernetes cluster. It installs via Homebrew:

brew install minikube

Docker Desktop

Docker Desktop is required to run Minikube on M1 Macs. Here is the direct link to the installer disk image for M1 Macs:

https://desktop.docker.com/mac/main/arm64/Docker.dmg

Note that, as of this writing, you can't run Minikube in VirtualBox on M1 Macs. You will get this error:

$ minikube start --vm-driver=virtualbox
😄  minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Using the virtualbox driver based on user configuration

❌  Exiting due to DRV_UNSUPPORTED_OS: The driver 'virtualbox' is not supported on darwin/arm64

You must use Docker.

kubectl

kubectl is the command-line tool for interacting with the Kubernetes API. It is easily installable via Homebrew:

brew install kubectl

kubectx / kubens

The next two tools are not required per se. In my mind they are the easy shortcuts that should have been included with any install of kubectl:

  • kubectx - Change the kubernetes context from one cluster to another
  • kubens - Easily change the default kubernetes namespace

You can do both with kubectl config ... commands, but these tools cut each one down to a single short command.
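
Both of the following pairs do the same thing (the minikube context and the jenkins namespace are the ones used later in this post):

# The long way, with kubectl
kubectl config use-context minikube
kubectl config set-context --current --namespace=jenkins

# The short way, with kubectx/kubens
kubectx minikube
kubens jenkins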

They are both installable via Homebrew with a single command:

brew install kubectx

More details about the kubectx/kubens tools are at the GitHub repository.

Helm

In the same way that Homebrew is the de facto package manager for macOS, Helm is the de facto package manager for Kubernetes.

brew install helm

Helm will be used to install Jenkins onto the Kubernetes cluster.

Docker Preferences

Kubernetes needs a decent amount of resources to run in Docker. If Docker Desktop's defaults are too small, you will get an error like the following:

$ minikube start --memory 8192 --cpus 4 --vm-driver=docker
😄  minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Using the docker driver based on user configuration

❌  Exiting due to MK_USAGE: Docker Desktop has only 1988MB memory but you specified 8192MB

To fix:

  • Launch Docker Desktop
  • Under the Preferences cog in the upper right choose "Resources"
  • Set CPUs to something suitable for Kubernetes. I chose 4 CPUs and 10 GB of RAM.
  • "Apply and Restart"

Minikube Start

It should now be possible to start minikube:

$ minikube start --memory 8192 --cpus 4 --vm-driver=docker
😄  minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
    > gcr.io/k8s-minikube/kicbase: 321.58 MiB / 321.58 MiB  100.00% 2.38 MiB p/
🔥  Creating docker container (CPUs=4, Memory=8192MB) ...
    > kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 41.44 MiB / 41.44 MiB [---------------] 100.00% 3.80 MiB p/s 11s
    > kubeadm: 40.50 MiB / 40.50 MiB [---------------] 100.00% 3.14 MiB p/s 13s
    > kubelet: 107.26 MiB / 107.26 MiB [-------------] 100.00% 5.41 MiB p/s 20s

     Generating certificates and keys ...
     Booting up control plane ...
     Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
     Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Minikube Dashboard / k9s

At this point you probably want to decide which tool you will use to troubleshoot issues with the cluster you created. Most Minikube documentation points to the Minikube Dashboard, which is easily invoked:

minikube dashboard

The UI will launch in your default web browser and is very similar to the Kubernetes Dashboard in design.

However, being a Terminal person, I tend to use k9s. It is easy to install, lightweight, and reminds me most of using something like top to get a handle on what is happening in your cluster. In fact, I often run it in another pane of my Terminal. Here is how you install it:

brew install k9s

Launching k9s from the Terminal will give you a curses view where you can bounce around. Here is a more complete Medium post on how to use it:

K9s — the powerful terminal UI for Kubernetes

Installing Jenkins into Minikube

Now that all of the prerequisites are out of the way, it is time to install Jenkins using Helm. The first step is to add the Jenkins chart repository to your Helm install:

helm repo add jenkins https://charts.jenkins.io
helm repo update

You can then search the repo for the latest chart:

helm search repo jenkins
NAME            CHART VERSION   APP VERSION DESCRIPTION
jenkins/jenkins 3.9.0           2.303.3     Jenkins - Build great things at any scale! The ..

Pull the chart from the repo:

helm pull jenkins/jenkins

This should download the chart archive into the local folder. In this case it is named jenkins-3.9.0.tgz.

Helm works by overriding the chart's default values with your own. Create a local copy of the values for the chart by doing the following:

helm show values jenkins/jenkins > jenkins-values.yaml

It will have a bunch of stuff that is disabled by default; my values file was almost 900 lines long, with a lot of comments, and it is fine to remove most of it. For Minikube I mainly want to override the namespace and the persistent volume (PV) that the cluster will use, but before doing that I have to create them. I want Jenkins installed in the jenkins namespace, and I want the PV to live locally in my home folder.

First create the namespace and a suitable definition for the PV:

$ kubectl apply -f jenkins-namespace.yaml
namespace/jenkins created
$ kubectl apply -f jenkins-volume.yaml
persistentvolume/jenkins-volume created
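
The two manifests aren't shown above, so here is roughly what they contain. The storage class name, size, and hostPath are assumptions; they just need to line up with the values override in the next step:

# jenkins-namespace.yaml (sketch)
apiVersion: v1
kind: Namespace
metadata:
  name: jenkins

# jenkins-volume.yaml (sketch)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-volume
spec:
  storageClassName: jenkins-pv
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  hostPath:
    path: /data/jenkins-volume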

Minikube does not come with a LoadBalancer by default, so you also have to change the service to a NodePort.
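
In jenkins-values.yaml, the relevant overrides end up being only a few lines. This is a sketch against chart version 3.9.0; key names can move around between chart versions:

controller:
  # Minikube has no LoadBalancer, so expose Jenkins on a node port
  serviceType: NodePort

persistence:
  # Match the storageClassName of the PV created above
  storageClass: jenkins-pv
  size: 10Gi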

$ helm install jenkins ./jenkins-3.9.0.tgz -n jenkins -f jenkins-values.yaml
NAME: jenkins
LAST DEPLOYED: Sat Nov 27 17:29:40 2021
NAMESPACE: jenkins
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
  kubectl exec --namespace jenkins -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/chart-admin-password && echo
...

3. Login with the password from step 1 and the username: admin
4. Configure security realm and authorization strategy
5. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file, see documentation: http:///configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine

For more information about Jenkins Configuration as Code, visit:
https://jenkins.io/projects/jcasc/


NOTE: Consider using a custom image with pre-installed plugins

As it mentions in the info above, you have to get the default admin password that is auto-generated. In my case:

$ kubectl exec --namespace jenkins -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/chart-admin-password && echo
5HrilnfS7eHAVfwDfyKv9B

Due to Kubernetes being launched in Docker, you need to use the minikube service command to tunnel in to launch the Jenkins UI in your default browser:

$ minikube service jenkins -n jenkins
|-----------|---------|-------------|---------------------------|
| NAMESPACE |  NAME   | TARGET PORT |            URL            |
|-----------|---------|-------------|---------------------------|
| jenkins   | jenkins | http/8080   | http://192.168.49.2:30897 |
|-----------|---------|-------------|---------------------------|
🏃  Starting tunnel for service jenkins.
|-----------|---------|-------------|------------------------|
| NAMESPACE |  NAME   | TARGET PORT |          URL           |
|-----------|---------|-------------|------------------------|
| jenkins   | jenkins |             | http://127.0.0.1:52330 |
|-----------|---------|-------------|------------------------|
🎉  Opening service jenkins/jenkins in default browser...
❗  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

Use the admin user and the password above to login to the UI.

You can now create a Freestyle or Pipeline Job that will launch agent images within the cluster that execute the build.
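
As a quick smoke test, a minimal declarative pipeline like this sketch should cause the chart's Kubernetes cloud to spin up an agent pod for the build (assuming the chart's default of zero executors on the controller):

// Jenkinsfile (sketch)
pipeline {
  agent any   // with no controller executors, this provisions a pod in the cluster
  stages {
    stage('Smoke test') {
      steps {
        sh 'echo Hello from a pod in Minikube'
      }
    }
  }
}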

You can see the Jenkins Kubernetes cluster configuration under Manage Jenkins > Manage Nodes and Clouds > Configure Clouds > Kubernetes Cloud Details.

Uninstall Jenkins and Minikube

If at any point you need to tear it all down, kill the minikube service command with a Ctrl-C and then uninstall Jenkins:

helm uninstall jenkins -n jenkins

And then kill minikube detritus:

minikube stop
minikube delete

Create an AWS VPC From Scratch

Today I went through the process of doing something I had never done before: using some videos I found on Udemy, I created an AWS VPC from scratch. It is not that I am new to AWS networking; it is just that I have always based my instances on existing VPCs, subnets, and network security. Being able to do it from scratch feels like a minor accomplishment. Here is the rough workflow:

  1. Create a VPC

  2. Create the subnets:

    • 3 public subnets in Availability Zones 1a, 1b, and 1c.
    • 3 private subnets in Availability Zones 1d, 1e, and 1f.
  3. Don't forget that the public subnets have to auto-assign IPs (Actions > Modify Auto-assign IPs > Enable auto-assign public IPv4 address)

  4. Create Internet Gateway and attach to VPC (Actions > Attach to VPC)

  5. Edit the default routing table for the public subnets and make sure it can route out the Internet Gateway

  6. Create a routing table for the private subnets that can't go out the Internet Gateway. Associate the private subnets with it.

  7. Create a public security group that allows inbound rules for SSH from my personal IP.

  8. Create a private security group that allows inbound rules for SSH from my personal IP.
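
For reference, the same workflow can be captured in Terraform. A rough sketch of the main pieces (names and CIDRs are made up, and only one public subnet is shown):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public_1a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true  # step 3: auto-assign public IPs
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id  # step 4: attach the IGW to the VPC
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"  # step 5: route out the Internet Gateway
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public_1a" {
  subnet_id      = aws_subnet.public_1a.id
  route_table_id = aws_route_table.public.id
}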

After all of that, I was able to spin up a quick-and-dirty Terraform file that builds a t2.micro instance in the VPC, and surprisingly it worked the first time.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

data "aws_subnet" "public_subnet_1" {
  id = "subnet-XYZ-public"
}

resource "aws_instance" "webserver" {
  ami             = var.ami_id
  instance_type   = var.instance_type
  subnet_id       = data.aws_subnet.public_subnet_1.id
  vpc_security_group_ids = ["sg-public"] # SG IDs, not names, are required when subnet_id is set
  key_name        = var.key_name

  tags = {
    Name        = "webserver"
    Environment = "prod"
  }
}

xargs

I always wanted to learn how to pass a value to xargs somewhere in the middle of the command. The -I option can do it, where {} is just a replacement token. For example, here is a way to search for an IP prefix in a bunch of text files:

echo 192.168.1. | xargs -I{} grep {} *.txt

Here is how to pass -l to the middle of an ls {} -F command:

$ echo "-l" | xargs -I{} ls {} -F
total 0
drwxr-xr-x@ 7 emartin  staff  224 Nov  6 22:58 Folder1/
drwxr-xr-x@ 7 emartin  staff  224 Nov  6 23:58 Folder2/

I am really going to find this handy for things like running a find and then copying each result to a base folder.
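
Something like this sketch (the folder names are hypothetical):

# Find all .log files and copy each one into ~/logs-archive
find . -name "*.log" | xargs -I{} cp {} ~/logs-archive/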

You can actually use almost any token for the -I option.

(HT: Stack Exchange)

New Python Blogger Tool

Earlier this week, I created a new blogger tool to support this new Pelican-based blog. It is a simple Python 3 tool that accepts a variable number of arguments and creates a Pelican-friendly Markdown file. It then launches that new file in the Markdown editor of your choice; in my case, Visual Studio Code. Here is an example:

$ blogger.py New Python Blogger Tool
Creating: 2021-04-17-new-python-blogger-tool.md

And since I hate typing repetitive words, my .zshrc has the following line to shorten launching the blog editor to just b:

alias b=blogger.py

Sample call to the tool:

$ b New Python Blogger Tool
Creating: 2021-04-17-new-python-blogger-tool.md

The code is not too advanced so I will just put it below. The script requires:

  • Python 3.6+ for f-strings
  • pyyaml from pip

You will probably notice custom util and path modules. Those are just simple helper modules. The main functions used:

  • path.canonize() - Converts paths like ~/folder/../folder2 into /Users/emartin/folder2. See os.path.normpath() and os.path.expanduser() in the built-in os module.
  • util.log() - My console logger with time stamps. Easily replaceable with a print() statement.
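
For completeness, here is a hypothetical reconstruction of those helpers. These are sketches of what the modules do, not the originals:

# path.py (sketch)
import os

def canonize(p):
    # Expand ~ and normalize ../ segments
    return os.path.normpath(os.path.expanduser(p))

# util.py (sketch)
import datetime

def log(message):
    # Console logger with a timestamp prefix
    print(f'{datetime.datetime.now():%Y-%m-%d %H:%M:%S} {message}')

def todaysDate():
    return datetime.date.today()

# run.py (sketch)
import subprocess

def launch_app(app, *args):
    # Launch the given application with the file to open
    subprocess.run([app, *args])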

I would post the link to my GitHub repo, but it is private and it would take me a while to extract the secrets. Maybe one day soon. Here is the code:

#!/usr/bin/env python3
import argparse
import os

import path
import util
import run

# pip install pyyaml
import yaml

# Configuration needs to live in the same location as the script
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
config_yaml = "blogger_config.yaml"
config_yaml_path = os.path.join(__location__, config_yaml)

def blogger(config, title):
  source_folder = path.canonize(config['source_folder'])

  # Validate source folder exists
  if not os.path.exists(source_folder):
    util.log(f'ERROR: Source folder does not exist: {source_folder}')
    return -1

  content_folder =  os.path.join(source_folder, 'content')

  # Validate content folder exists
  if not os.path.exists(content_folder):
    util.log(f'ERROR: Content folder does not exist: {content_folder}')
    return -1

  # Generate the file name from the date and passed in title
  today = util.todaysDate()
  year = '{:%Y}'.format(today)
  month = '{:%m}'.format(today)
  day = '{:%d}'.format(today)
  name = title.lower().replace(" ","-")
  filename = f'{year}-{month}-{day}-{name}.md'

  new_blog_file = os.path.join(content_folder, filename)

  # Only create the file if the file doesn't exist otherwise just open it
  if not os.path.exists(new_blog_file):
    # Get the metadata defaults from the config file
    category = config['category']
    tag = config['tag']
    author = config['author']

    # Write metadata to top of the file in yaml format
    print(f'Creating: {filename}')
    with open(new_blog_file, 'a') as f:
      f.write(f'Title: {title}\n')
      f.write(f'Date: {today}\n')
      f.write(f'Category: {category}\n')
      f.write(f'Tags: {tag}\n')
      f.write(f'Authors: {author}\n')

  # Launch Markdown Tool
  markdown_tool = path.canonize(config['markdown_tool'])
  run.launch_app(markdown_tool, new_blog_file)

# Main function
if __name__ == '__main__':
  # Parse arguments
  parser = argparse.ArgumentParser(description='Script to automate blogging.')
  parser.add_argument('title', help='The title of the blog.', nargs='*', default='')
  args = parser.parse_args()

  # Dump Help if no parameters were passed
  if (len(args.title) == 0):
    parser.print_help()
    exit(0)

  # Load configuration file
  config = {}
  with open(config_yaml_path) as file:
    config = yaml.full_load(file)

  # Run Blogger
  title = " ".join(args.title)
  return_code = blogger(config, title)
  exit(return_code)

And the blogger_config.yaml looks like this:

source_folder: ~/Source/blog.emrt.in
category: "Misc|Media|Technology"
tag: "Blog"
author: Erik Martin
markdown_tool: /Applications/Visual Studio Code.app/Contents/Resources/app/bin/code

Descriptions of the configuration:

  • source_folder - Where the Pelican blog source is located. Files are created in the content folder.
  • category - category of the post (blog entry metadata)
  • tag - comma separated tags of the post (blog entry metadata)
  • author - default author of the post (blog entry metadata)
  • markdown_tool - The path to the markdown tool that launches the created .md file

The Expanse Revisited

A number of years ago I started The Expanse series of books. I read Leviathan Wakes the year it was published on a recommendation from the virtual book club. The series is space-opera sci-fi that I felt started out with a bang, but it lost me around book 3 (Abaddon's Gate). Looking back, I was probably just overloaded with genre fiction, and I never re-engaged until this year.

Now that there are signs we are slowly getting out of the pandemic, I was looking for a book series that was a bit more noir-ish, so I restarted the series. I have now passed where I stopped; I am about 10% into Cibola Burn, and I can honestly say it was a mistake not to continue with the series: the nuanced characterizations, the good people put in bad situations (Holden), and no need to retell everyone's back story in book 1. If I have one complaint, it is that the authors, writing as James S.A. Corey, pull in major new characters every book and you have to learn the series dynamics anew. That may be why I backed away from the series.

The cool bit is that there is now a five-season TV series on Amazon Prime to complement the books. I find myself getting ahead in the book series and then playing catch-up with the TV series. As usual, some of the differences are a bit jarring or disappointing, but I guess I am old enough not to let that turn me off the TV series completely. The logistics of keeping a cast busy throughout a season necessitate some fluff or some consolidation of storylines. I do like how the TV series introduced Avasarala early in season 1; it helps that Shohreh Aghdashloo is a fantastic actress with such screen presence. The addition of the Drummer character was probably the most jarring change for me because I wasn't sure at first who she was supposed to be: first I thought she was Sam, then Bull in season 3. Oh well, the actor, Cara Gee, is so wonderful that I learned to sit back and enjoy the ride with her storyline.

I am also listening to the series on audiobook. The voice actor is really good at capturing the Belter language; when I read the words in the ebook and then listen to the same dialog in the audiobook, it is not how I would have pictured it being said.

New Tool For Blogging

Today I finished off the first draft of my new blogger tool, written in Python to simplify the workflow of creating new Markdown files for the blog. Features:

  • Loads a yaml configuration file
  • Parses the command-line for the new blog title
  • Creates the new blog file in the Pelican content folder
  • Adds the Pelican metadata at the top of the Markdown file with the Title, Date, Category, and Author.
  • Launches the configured Markdown tool. In this case Visual Studio Code.

It took me about 20 minutes in total to build and test because it is very similar to my customized journal scripts. I still want to make some changes and add some polish to make titles easier to handle. I'll do another post soon with more details.

Update: Done. Added polish.