Erik's Thoughts and Musings

Apple, DevOps, Technology, and Reviews

Pelican

After 7 long years of using Octopress on this blog, today I moved it to Pelican.

I love static site generators. No real hacking threat like Wordpress. No need to have a database. All you need is a little markdown, a little metadata, and an engine to generate the site.

I like Pelican because it is based on Python instead of Ruby. Also, I could never really get used to Jekyll. The dependency handling was just way too complicated for my needs.

The command-line workflow to publish a new blog post is right up my alley. Here is all you have to do to create a new post:

cd blog.emrt.in
# create/edit a blog post
vi content/2021-04-11.md
pelican content
make rsync_upload

In typical Python fashion, the code is really easy to read, so it is simple to decipher what is going on. I found a nice Octopress theme for Pelican that made it super simple to move my old blog over to the new theme. The most difficult part of the whole process was converting the Markdown posts from a YAML-based header to Pelican's simple text header.
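
For reference, a Pelican post header is just a few plain key/value lines at the top of the Markdown file, something like this (the values here are made up):

Title: Moving the Blog to Pelican
Date: 2021-04-11 10:00
Category: Technology
Tags: pelican, blogging
Slug: moving-the-blog-to-pelican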

I do need to create some quick and dirty Python scripts to help with creating the skeleton for a new blog post. I also need to get all of this moved to GitHub.
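
Something like this quick shell sketch captures the idea for the skeleton helper (the real version will probably end up in Python, and the file naming and fields here are just placeholders):

#!/bin/sh
# new_post.sh "Post Title" -- create a skeleton Pelican post and open it in vi
slug=$(echo "$1" | tr 'A-Z ' 'a-z-')
file="content/$(date +%Y-%m-%d)-$slug.md"
cat > "$file" <<EOF
Title: $1
Date: $(date "+%Y-%m-%d %H:%M")
Category: Technology
Tags:
Slug: $slug
EOF
vi "$file"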

Git-Flow

My cousin, Chris, and I were talking last night about how I need to ramp up on Git. I use Git for this blog and I know how to do basic stuff, but I also need to come up to speed on some of the technologies and methodologies surrounding the software. Chris suggested that I check out git-flow, described in this blog post:

http://jeffkreeftmeijer.com/2010/why-arent-you-using-git-flow/

The main idea of git-flow is that you have a main develop branch, and master is only for production releases. Features go on feature branches and are merged back to develop. There are also release branches, where you stabilize and apply bug fixes before a release, and hotfix branches, which are created from master and then merged back to both master and develop when the fix ships.

There are Git extensions you can install with Brew:

$ brew install git-flow

Or manually integrate from github:

https://github.com/nvie/gitflow

With the extension you can initialize a repository (new or existing) to follow the git-flow methodology:

$ git flow init

It will then ask you to confirm the branch names and prefixes for each of the git-flow branch types. Then you can continue to use the extension to do things like start a feature branch:

$ git flow feature start hot-new-feature
Switched to a new branch 'feature/hot-new-feature'

And then finish it, merging it back to develop and deleting the feature branch:

$ git flow feature finish hot-new-feature
Switched to branch 'develop'

To start a release branch for a given version:

$ git flow release start 0.1.0
Switched to a new branch 'release/0.1.0'

You can then manually merge to master, or you can finish the release:

$ git flow release finish 0.1.0
Switched to branch 'master'

And from the blog post, here is all of what happens:

Boom. git-flow pulls from origin, merges the release branch into master, tags the release and back-merges everything back into develop before removing the release branch.

Hotfixes have a similar syntax; the difference is that they are based off of master instead of develop.
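
For example (the version number here is hypothetical), a hotfix is started from master, fixed and committed on its own branch, and then finished back into both master and develop:

$ git flow hotfix start 0.1.1
# fix the bug and commit it on hotfix/0.1.1
$ git flow hotfix finish 0.1.1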

Considering how powerful Git is when it comes to branching and merging compared to Subversion, it seems like a great idea to have these transient branches.

Maven Research

Maven is a project manager, not just a build tool like Ant, hence the pom.xml file (Project Object Model). It relies on a standard directory layout, so it already knows where files live on the system, which makes it simpler to build projects and include dependencies.
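
The standard layout, trimmed to the common pieces, looks like this:

pom.xml
src/main/java/        (application sources)
src/main/resources/   (application resources)
src/test/java/        (test sources)
src/test/resources/   (test resources)
target/               (build output)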

$ mvn archetype:generate

This generates a simple template. archetype is the plugin, generate is the goal.

$ mvn help:effective-pom

This prints the entire effective POM: your minimal pom.xml combined with any parent POMs, user settings, and active profiles.

Maven doesn't know how to compile your code or make a JAR file, but the plugins it uses do. Maven itself is basically a shell that knows how to:

  • parse the command line
  • manage a classpath
  • parse a POM file
  • download Maven plugins

Maven Lifecycle

Plugin goals can be attached to lifecycle phases (e.g. install, test). Each phase may have 0 or more goals attached to it. Example phases:

  • process-resources
  • compile
  • process-classes
  • process-test-resources
  • test-compile
  • test
  • prepare-package
  • package

When you run a phase, Maven launches all phases before it in the lifecycle, plus the phase itself.
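
For example, a plain install run executes every phase up to and including install:

$ mvn install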

You can also just specify the plugin goals, but that is a lot more tedious:

$ mvn resources:resources compiler:compile resources:testResources surefire:test jar:jar install:install

Four (really five) pieces of information create the coordinate system for a project (a pom.xml sketch follows the list):

  1. groupId - reverse domain name
  2. artifactId - unique identifier under the group representing a single project
  3. packaging - package type (jar, war, ear)
  4. version - a specific release of a project
  5. classifier (rarely used)
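
As a sketch, these coordinates show up near the top of a pom.xml something like this (the values are made up):

<groupId>com.example.myproduct</groupId>
<artifactId>myproduct-core</artifactId>
<packaging>jar</packaging>
<version>1.0.0</version>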

Maven downloads artifacts and plugins from a remote repository to your local machine and stores them in your local Maven repository (~/.m2/repository). The install phase installs a .jar into that local repository. Each entry in the repository contains the .jar along with a .pom describing what it depends on.

You can generate a site and documentation by doing the following:

$ mvn site

This generates Javadoc and other custom reports.

This can tell you what all of the dependencies are:

$ mvn dependency:resolve

A prettier way to see the dependency tree:

$ mvn dependency:tree

I got a lot of useful information for this post from Maven By Example.

BitTorrent Sync

Last night I installed and have been using BitTorrent Sync, a syncing utility to keep files and folders up to date on all of your machines.

In the past I was using DropBox, but for a number of reasons I dropped them. One of the big ones was that I found myself not storing as many files in their service. Even though they claimed that files were encrypted on their server and could not be read, I never felt comfortable keeping my secure files there.

Last night I saw a tweet by Wil Wheaton. He was talking about how he had a seamless switch to BitTorrent Sync, a product I had heard of but never researched. The interesting thing about BitTorrent Sync is that it doesn't use a central server to store your files. It uses a peer-to-peer mechanism to sync files from one computer to another, using the BitTorrent protocol.

There is also a pretty handy way to set up an iOS device with the product. All you have to do is download the app from the App Store and then scan a QR code. This connects up your device. The mobile app also has a password you can set to prevent others from getting your files.

Is BitTorrent Sync perfect? No. The iOS app does not automatically sync files. There isn't even a setting for that. You have to manually select a file, and it will start a transfer from one of your other machines. Also, to get around firewall and NAT issues, BitTorrent Sync makes use of relay servers to move your files from one machine behind a firewall to another. Luckily you can disable the use of relay servers in the settings, but that means that if BitTorrent Sync can't reach inside your firewall at home, you can't sync.

For my casual usage of syncing, the pros outweigh the cons. I have been very happy with it so far. I have been able to unify my old DropBox sync and manual sync folders. I also like the name of the default sync folder. It is ~/Sync on Mac. In my opinion that is nicer than the branded ~/DropBox folder.

Using Git

I have been using Git a lot in the last few weeks, for setting up this blog and for some research at work. Here is a brain dump for mainly my own personal reference.

Initial Setup

One thing you'll probably want to set up on first run is your name and email address in the Git configuration, so that your commits are attributed to you. UI clients will probably ask the first time you launch or try to download a repository. At the Terminal, you will want to do this:

$ git config --global user.name "Erik Martin"
$ git config --global user.email emartin@myemailservice.com

Getting a repository

Downloading an existing repository:

$ git clone http://git.server.com/git/myproduct.git

It will create a local folder named myproduct and start copying the files. This will retrieve the master branch. The above URL will be configured as the origin.

If you want to grab a specific branch called feature_branch and place it in the local product_feature_branch folder, you do the following:

$ git clone http://git.server.com/git/myproduct.git -b feature_branch product_feature_branch

Committing and Pushing

After you clone the tree, you can do commits just like Subversion:

$ git commit -m "This is a commit message" file.cpp

Assuming you are still on the master branch, push back to the origin remote like this:

$ git push origin master

Or push all local branches back to the origin:

$ git push origin --all

Remotes

You can easily set up another “backup” remote pointing to a folder on the same machine by doing something like this in the local repository folder:

$ git remote add backup /Users/emartin/Source/backup/myproduct.git

A push of the master branch to the backup remote would look like this:

$ git push backup master

You can list all of a repository’s remotes by going to a local repository folder and typing:

$ git remote -v
backup /Users/emartin/Source/git/backup/myproduct.git (fetch)
backup /Users/emartin/Source/git/backup/myproduct.git (push)
origin http://git.server.com/git/myproduct.git (fetch)
origin http://git.server.com/git/myproduct.git (push)

New Repository Setup

If you want to create a new repository from an existing set of files, in the top level folder do this:

$ git init
$ git add .
$ git commit -m "Initial checkin" .

Then, on the server or for a local remote, you set up a bare repository, say in a folder named myproduct.git:

$ mkdir myproduct.git
$ cd myproduct.git
$ git init --bare

Server or local remotes should be bare, or you will get a warning during your push. There is more info below in the research section about what a bare repo is and why a push target must be bare.

On your local machine, you set up the remote in the new repository's top-level folder:

$ git remote add origin http://git.server.com/git/myproduct.git

And then, assuming authentication is set up correctly, push:

$ git push origin master

Branching

If you want to create a new branch and make it the current branch, you just do the following while in the local sandbox:

$ git branch new_feature_branch
$ git checkout new_feature_branch

A quicker way to create and switch to the branch in one step:

$ git checkout -b new_feature_branch

To switch back to the master branch:

$ git checkout master

Git Rebase

Sometimes it makes sense to take commits from a feature branch and 'rebase' them onto another branch, like the main development branch. That makes the history look more linear when you look back through the log. Here is an example of a rebase:

http://git-scm.com/book/en/Git-Branching-Rebasing
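
As a minimal sketch (the branch names are hypothetical), replaying a feature branch on top of the latest develop looks like this:

$ git checkout feature/hot-new-feature
$ git rebase develop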

Submodules

Submodules are analogous to Subversion externals, a way to "attach" external repositories to another repository. Submodules work differently and are not as easy to use as svn externals. More info below in the research section.

To add a new submodule to an existing git repository:

$ git submodule add http://git.server.com/git/third_party_library.git third_party_library

This creates a folder called third_party_library and updates a .gitmodules file. .gitmodules is version controlled in the parent repository.

After adding, you have to commit the submodule:

$ git commit -m "Committing the submodule third_party_library" .

This commit locks the submodule to that revision of third_party_library. So if someone clones your parent repository, they get the committed revision of the submodule.
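
To move the parent repository to a newer revision of the submodule later, you update the submodule and commit the new pointer, something like this:

$ cd third_party_library
$ git pull origin master
$ cd ..
$ git add third_party_library
$ git commit -m "Point third_party_library at the latest revision"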

If a repository has submodules, there are two ways to check out. The legacy way:

$ git clone http://git.server.com/git/myproduct.git
$ cd myproduct
$ git submodule init
$ git submodule update

And the easy way:

$ git clone --recursive http://git.server.com/git/myproduct.git

Migrating to Octopress

I am hooked. Octopress is a lot nicer than Wordpress for a guy like me who likes to tinker with files, directories, the Terminal, and administration of websites. Instead of HTML, Octopress relies on Markdown, a way to “mark up” your text without using HTML syntax. It is a more natural way to write a blog post.

Another big benefit to Octopress is that it creates a website with static pages and doesn't depend on a database. That is so attractive to me. Let me explain why.

When I originally took this site down 4.5 years ago, I wasn't smart and left all my posts in a MySQL DB just sitting on my server. Wordpress has had numerous versions since then, including numerous DB updates. I would have had to:

  1. Re-enable my blog.
  2. Upgrade it to the latest Wordpress
  3. Export to get all of the posts out of the site.

Of course, I couldn't even get past step 1. I tried every trick I knew and I still couldn't get Wordpress re-initialized using my old files and DB. I ended up going for a different approach. My MySQL experience got me to the point of logging into the PHP UI, and I could export all of my posts to XML. In 2 nights, I was able to write a simple command-line application that took the XML and converted it to Octopress' .markdown format. Amazingly, my third pass at exporting the MySQL XML to .markdown got me all the way there. (Yes, I can do this programming thing sometimes, even for my personal usage.)

I was left with 1200 .markdown files. Luckily I was able to whittle that down to about 250 posts by getting rid of the intermediate “revision” posts that Wordpress saves to the database every time you hit Update on the Web UI.

The next step was to filter the .markdown files. I had some posts that were banal. Some posts were more suited for my family blog. And some posts just didn't really belong in a public blog. It is not that they were bad; they just expressed polarized viewpoints. February 2009-October 2009 (the months I kept up with the blog) were not the best times to talk about politics in this country. (Is there ever a good time?) I am at the point in my life where putting stuff like that out there doesn't do any good. You end up looking like an ass to a good percentage of people, because people almost never share your viewpoint, and a blog is not a good place to convert people to your way of thinking. It is better to leave that stuff to private conversation.

I jumped the gun a little bit. I got so excited talking about exporting that I forgot to explain how to get started with Octopress. For a Terminal hacker like me, the easiest way to get Octopress is via git. You just do this from the command line:

$ git clone git://github.com/imathis/octopress.git octopress
$ cd octopress

I saw online that a good thing to do is to name your local repository after the website you are creating, like so:

$ git clone git://github.com/imathis/octopress.git mywebsite.com
$ cd mywebsite.com

To make use of Octopress, you just need to have a recent Ruby installed. I had a 2.0 version, but decided to use rbenv and installed the latest Ruby.

After you install Ruby, you call the following commands:

$ gem install bundle
$ rbenv rehash
$ bundle install

The tree layout for the Octopress repository is quite simple:

CHANGELOG.markdown
Gemfile
Gemfile.lock
README.markdown
Rakefile
_config.yml
config.rb
config.ru
plugins/
public/
sass/
source/

The 2 .markdown files are (GitHub) documentation. The Gemfile is what keeps Ruby dependencies in order. The Rakefile is sort of like a Unix Makefile, just with a Ruby syntax. Later on in this post when you call 'rake', you are actually executing commands in the Rakefile. The two big folders that you will interact with are:

source/
public/

The source folder is where you put posts, pages, and assets (images). The public folder is what is generated by Octopress (Jekyll) and is what gets uploaded to your website.

OK. Now that we have covered getting Octopress installed, back to Markdown and blog posts. After making modifications to the .markdown files, I simply put them all in the source/_posts folder. To simplify the process of making a new page or post .markdown file, you run one of these two commands from the top level of the octopress folder:

$ rake new_post["Post Title"]
$ rake new_page["Page Title"]

A post is a blog post. This is what you will do most of the time to create content. A page is a static page, like an “About” page. It gets put in a special folder so that you can connect it up to the navigation.

After figuring out the markdown files, the next thing you have to modify is the _config.yml file at the root of the git repository. You configure things like:

  • Blog name
  • Blog tagline
  • URL for blog
  • SSH destination

There is a whole slew of things you can modify here. Luckily it is all rather straightforward.
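
As a rough sketch, the relevant entries in _config.yml look something like this (the URL here is made up):

title: Erik's Thoughts and Musings
subtitle: Apple, DevOps, Technology, and Reviews
url: http://blog.mywebsite.com
author: Erik Martin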

Now that all of the files and configuration are in order, it is just a matter of launching this from the command line:

$ rake generate

For a blog like mine that has over 100 posts, it takes about 5 seconds to generate my entire site.

Anytime you create a new post or change your blog, this is the command you launch. It takes your markdown files and other assets from the source folder and compiles them into a site in the public folder. With all of this wrapped up in Git, it is super simple to make the source folder its own repository (or submodule). Another attractive feature for a backup-obsessed person like me.

After rake generate does its thing with Ruby and Jekyll, you can preview your site locally by issuing the following command:

$ rake preview

This opens up a web server on port 4000. You check out your site by pointing your browser to http://localhost:4000. It is that easy. (Well, it should be that easy. I had to add thin to the Gemfile so that the pages displayed in Safari 7.)

(BTW, I just discovered that if you keep rake preview running while you are making edits, it will continually scan your source folder looking for changes and regenerate. No need to kill rake preview while editing a post. Just refresh the page after saving the .markdown file.)

Once you are happy with how the blog looks, you just launch this:

$ rake deploy

If your SSH and authorized_keys are configured correctly for copying files to a web server, your static files should be deployed correctly to your website. The process uses rsync, so essentially only changes are uploaded to the server. This makes deployment quite fast.

I still need to come up with a good list of Terminal aliases to make running rake generate, rake preview, and rake deploy easier. I should also look to see if there is a suitable front end for doing all of this. If I ever decided to move my other Wordpress blog to Octopress, my wife wouldn't like the Terminal.

I also recommend the use of this QuickLook plugin on Mac. It is a great way to view .markdown files in a QuickLook window.

Next step for me is to get proficient in Markdown. I know how to do links, bold, italic, and bullet points, but I don't know how to reference assets like pictures and movies. I also need to check out how plugins work. Presumably they go in the plugins folder above.

Thanks to the Octopress site, mainly this page, for all the valuable information.

(Second) First Post

This is my (second) first post. I am trying out Octopress. I am in the process of migrating my old Wordpress Blog over to this one. I am going to filter out all of the family stuff and make this more about interests specific to me. Technology, hobbies, etc.

Blog And Server Maintenance

Tonight I had to do some maintenance on my web host and blogging software. My brain was kind of mush anyway after work today so I was ready to do something rather menial.

I haven't done a full backup of my web stuff since August so it is way past time. Fortunately the cPanel software that my web hosting service uses is awesome. It is one click to start the full backup and one click to download the .tar.gz compressed file once you get an email notification that the backup completed. The cPanel backup component also does daily backups automatically which is really nice.

In August the .tar.gz was 900 MB. Today the download was 1.3 GB. Most of that new data is video and pictures on another blog I run. There is no way I am going to host any of that stuff on Facebook or YouTube.

I also installed some new Wordpress plugins based on suggestions I saw from a friend's inquiry on Facebook. My new favorite plugin is Broken Link Checker. It found 8 dead links (1 false positive) on my two blogs. It was so easy to double-check that each link was bad and then click the "unlink" button. A majority of the dead links were Yahoo links. I'll have to remember that for later.

I was half tempted to reinstate my automatic Twitter posting software, WP to Twitter, but then I remembered that I have only 4 people following me on Twitter. :)

Juliet, Naked Book Review

Last night I finished Juliet, Naked, the latest book from Nick Hornby.

Hornby is the author of High Fidelity and About A Boy, both books that were adapted for movies. I read both books before their counterpart movies and while there were significant changes from novel to screen, the movies didn't diminish the novels. Hornby also wrote Fever Pitch, which while enjoyable as a book didn't translate well to screen.

The last book I tried to read from Hornby was A Long Way Down, a wandering book about committing suicide by jumping off roofs. I didn't make it that far before I gave up. There was a missing spark to the story. None of the characters were very likable or relatable. Because of this I was a little apprehensive about Juliet, Naked.

Some spoilers follow...

Juliet, Naked started off interesting, but it lost me toward the halfway point. In the book, "Juliet" was the last album of a 1980s solo artist named Tucker Crowe. Crowe mysteriously disappeared after the album, never to record another. Duncan, a music aficionado and big fan of the album, creates a website to discuss Crowe and interpret the lyrics from his albums. Duncan has spent the last 20 years obsessed with Crowe. His girlfriend of 15 years, Annie, goes along with the obsession, but at the same time is starting to wonder if she is the third person in the relationship. They are both in their 40s, unmarried and childless.

All of the conflict in the story starts when Duncan gets an advance copy of Crowe's acoustic demo versions of Juliet, which is to be released as the album "Juliet, Naked". Duncan publishes a review of the new album on his website, and Annie writes a review with the opposite view, which also ends up on the site. Because of the reviews, Duncan and Annie both start to analyze their long relationship with each other.

About halfway through the book, you start to discover what happened to Tucker, and as expected the reality of his disappearance is nowhere near the theories cooked up on Duncan's website. Tucker's thread starts to weave together with Annie's, but by the end there is no clear resolution to almost anything.

Overall for a Nick Hornby book I am a little disappointed. Fever Pitch, About A Boy, and High Fidelity were interesting, relatable, funny, and timely. I can't say the same for this book. It was funny in parts and interesting in the beginning, but it really went south after the midpoint.

I give it 2 out of 4 stars.