Erik's Thoughts and Musings

Apple, DevOps, Technology, and Reviews

BitTorrent Sync

Last night I installed and have been using BitTorrent Sync, a syncing utility to keep files and folders up to date on all of your machines.

In the past I was using Dropbox, but for a number of reasons I dropped them. One of the big ones was that I found myself not storing as many files in their service. Even though they claimed that files were encrypted on their servers and could not be read, I never felt comfortable keeping my sensitive files there.

Last night I saw a tweet by Wil Wheaton. He was talking about how he had a seamless switch to BitTorrent Sync, a product I had heard of but never researched. The interesting thing about BitTorrent Sync is that it doesn't use a central server to store your files. It uses a peer-to-peer mechanism, built on the BitTorrent protocol, to sync files from one computer to another.

There is also a pretty handy way to set up an iOS device with the product. All you have to do is download the app from the App Store and then scan a QR code, and that connects up your device. The mobile app also has a password you can enter to prevent others from getting at your files.

Is BitTorrent Sync perfect? No. The iOS app does not automatically sync files; there isn't even a setting for that. You have to manually select a file and it will start a transfer from one of your other machines. Also, to get around firewall and NAT issues, BitTorrent Sync makes use of relay servers to get your files from one machine behind a firewall to another. Luckily you can disable the use of relay servers in the settings, but that means that if BitTorrent Sync can't reach inside your firewall at home, you can't sync.

For my casual syncing usage, the pros outweigh the cons. I have been very happy with it so far. I have been able to unify my old Dropbox sync and manual sync folders. I also like the name of the default sync folder: it is ~/Sync on the Mac. In my opinion that is nicer than the branded ~/Dropbox folder.

Using Git

I have been using Git a lot in the last few weeks, for setting up this blog and for some research at work. Here is a brain dump for mainly my own personal reference.

Initial Setup

One thing you'll probably want to set up on first run: fill in your name and email address in the Git configuration so that commits are attributed to you. UI clients will probably ask you the first time you launch or try to download a repository. At the Terminal, you will want to do this:

$ git config --global user.name "Erik Martin"
$ git config --global user.email emartin@myemailservice.com

Getting a repository

Downloading an existing repository:

$ git clone http://git.server.com/git/myproduct.git

This creates a local folder named myproduct and starts copying the files. It retrieves the master branch, and the above URL is configured as the origin remote.

If you want to grab a specific branch called feature_branch and place it in the local product_feature_branch folder, you do the following:

$ git clone http://git.server.com/git/myproduct.git -b feature_branch product_feature_branch

Committing and Pushing

After you clone the repository, you can do commits much like in Subversion:

$ git commit -m "This is a commit message" file.cpp

Assuming you are still on the master branch, push back to the origin remote like this:

$ git push origin master

Or push all local branches back to the origin:

$ git push origin --all

Remotes

You can easily set up another “backup” remote that points to a folder on the same machine by doing something like this in the local repository folder:

$ git remote add backup /Users/emartin/Source/backup/myproduct.git

A push of the master branch to the backup remote would look like this:

$ git push backup master

You can list all of a repository’s remotes by going to a local repository folder and typing:

$ git remote -v
backup /Users/emartin/Source/backup/myproduct.git (fetch)
backup /Users/emartin/Source/backup/myproduct.git (push)
origin http://git.server.com/git/myproduct.git (fetch)
origin http://git.server.com/git/myproduct.git (push)

New Repository Setup

If you want to create a new repository from an existing set of files, in the top level folder do this:

$ git init
$ git add .
$ git commit -m "Initial checkin" .

Then, on the server or in a local remote location, you set up a bare repository, say in a folder named myproduct.git:

$ mkdir myproduct.git
$ cd myproduct.git
$ git init --bare

Server or local remotes should be bare or you will get a warning during your push. See the research section below for more info on what a bare repo is and why the push target must be bare.
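As an aside, if you already have a local working copy, you can also get a bare repository by making a bare clone of it. This is just a sketch; the source path is a placeholder, and the destination reuses the backup folder from the remotes example above:

$ git clone --bare /Users/emartin/Source/myproduct /Users/emartin/Source/backup/myproduct.git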

On your local machine, you set up the remote from the top-level folder of the new repository:

$ git remote add origin http://git.server.com/git/myproduct.git

And then, assuming authentication is set up correctly, push:

$ git push origin master

Branching

If you want to create a new branch and make it the current branch, you just do the following while in the local sandbox:

$ git branch new_feature_branch
$ git checkout new_feature_branch

The quick way to create and switch to the branch:

$ git checkout -b new_feature_branch

To switch back to the master branch:

$ git checkout master
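Once the new branch has commits on it, you will probably want to push it up to origin as well. From what I have read, the -u flag sets the upstream so that a plain git push and git pull work later (the branch name here is just the example from above):

$ git push -u origin new_feature_branch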

Git Rebase

Sometimes it makes sense to take commits from a feature branch and 'rebase' them onto another branch, like the main development branch. That makes the log look more linear when looking back through the history. Here is a good explanation of rebasing:

http://git-scm.com/book/en/Git-Branching-Rebasing
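For my own quick reference, a bare-bones rebase flow looks something like this (the branch names are just placeholders):

$ git checkout feature_branch
$ git rebase master            # replay the feature branch commits on top of master
$ git checkout master
$ git merge feature_branch     # fast-forward master to include them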

Submodules

Submodules are analogous to Subversion externals: a way to “attach” external repositories to another repository. Submodules work differently, though, and are not as easy to use as svn externals. More info is below in the research section.

To add a new submodule to an existing git repository:

$ git submodule add http://git.server.com/git/third_party_library.git third_party_library

This creates a folder called third_party_library and updates a .gitmodules file. .gitmodules is version controlled in the parent repository.
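For reference, the entry that ends up in .gitmodules looks roughly like this:

[submodule "third_party_library"]
    path = third_party_library
    url = http://git.server.com/git/third_party_library.git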

After adding, you have to commit the submodule:

$ git commit -m "Committing the submodule third_party_library" .

This commit locks the submodule to that revision of third_party_library. So if someone clones your parent repository, they get the committed revision of the submodule.
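If you later want to move the submodule forward to a newer revision, my understanding is that it goes roughly like this (assuming the submodule tracks its master branch):

$ cd third_party_library
$ git pull origin master
$ cd ..
$ git commit -m "Update the third_party_library submodule" third_party_library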

If a repository has submodules, there are two ways to check out. The legacy way:

$ git clone http://git.server.com/git/myproduct.git
$ cd myproduct
$ git submodule init
$ git submodule update

And the easy way:

$ git clone --recursive http://git.server.com/git/myproduct.git

Migrating to Octopress

I am hooked. Octopress is a lot nicer than Wordpress for a guy like me who likes to tinker with files, directories, the Terminal, and website administration. Instead of HTML, Octopress relies on Markdown, a way to “mark up” your text without using HTML syntax. It is a more natural way to write a blog post.

Another big benefit of Octopress is that it creates a website with static pages and doesn't depend on a database. That is so attractive to me. Let me explain why.

When I originally took this site down 4.5 years ago, I wasn't smart and left all my posts in a MySQL DB just sitting on my server. Wordpress has had numerous versions since then, including numerous DB updates. To migrate, I would have had to:

  1. Re-enable my blog.
  2. Upgrade it to the latest Wordpress.
  3. Export to get all of the posts out of the site.

Of course, I couldn't even get past step 1. I tried every trick I knew and still couldn't get Wordpress re-initialized using my old files and DB, so I ended up going with a different approach. My MySQL experience got me to the point of logging into the PHP admin UI, where I could export all of my posts to XML. In two nights, I was able to write a simple command-line application that took the XML and converted it to Octopress’ .markdown format. Amazingly, my third successful export of the MySQL XML to .markdown got me all the way there. (Yes, I can do this programming thing sometimes, even for my own personal use.)

I was left with 1200 .markdown files. Luckily I was able to whittle that down to about 250 posts by getting rid of the intermediate “revision” posts that Wordpress saves to the database every time you hit Update in the Web UI.

The next step was to filter the .markdown files. I had some posts that were banal, some that were more suited for my family blog, and some that just didn't really belong on a public blog. It is not that they were bad; they were just polarized viewpoints. February 2009 to October 2009 (the months I kept up with the blog) were not the best times to talk about politics in this country. (Is there ever a good time?) I am at the point in my life where putting stuff like that out there doesn't do any good. You end up looking like an ass to a good percentage of people, because people almost never share your viewpoint, and it is not like a blog is a good place to convert people to your way of thinking. It is better to leave that stuff to private conversation.

I jumped the gun a little bit. I got so excited talking about exporting that I skipped how to get started with Octopress in the first place. For a Terminal hacker like me, the easiest way to get Octopress is via git. You just do this from the command line:

$ git clone git://github.com/imathis/octopress.git octopress
$ cd octopress

I saw online that a good practice is to name your local repository after the website you are creating, like so:

$ git clone git://github.com/imathis/octopress.git mywebsite.com
$ cd mywebsite.com

To make use of Octopress, you just need to have a recent Ruby installed. I had a 2.0 version, but decided to use rbenv and installed the latest Ruby.
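The rbenv part goes roughly like this. The version number is just an example, and this assumes the ruby-build plugin is installed so that rbenv install works:

$ rbenv install 2.0.0-p353
$ rbenv global 2.0.0-p353
$ ruby -v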

After you install Ruby, you call the following commands:

$ gem install bundle
$ rbenv rehash
$ bundle install

The tree layout for the Octopress repository is quite simple:

CHANGELOG.markdown
Gemfile
Gemfile.lock
README.markdown
Rakefile
_config.yml
config.rb
config.ru
plugins/
public/
sass/
source/

The two .markdown files are (GitHub) documentation. The Gemfile is what keeps Ruby dependencies in order. The Rakefile is sort of like a Unix Makefile, just with Ruby syntax; later on in this post when you call ‘rake’, you are actually executing commands in the Rakefile. The two big folders that you will interact with are these:

source/
public/

The source folder is where you put posts, pages, and assets (images). The public folder is what is generated by Octopress (Jekyll) and is what gets uploaded to your website.

OK. Now that we have covered getting Octopress installed, back to Markdown and blog posts. After making modifications to the converted .markdown files, I simply put them all in the source/_posts folder. To simplify making a new post or page .markdown file, you run one of these two commands from the top level of the octopress folder:

$ rake new_post["Post Title"]
$ rake new_page["Page Title"]

A post is a blog post. This is what you will do most of the time to create content. A page is a static page, like an “About” page. It gets put in a special folder so that you can connect it up to the navigation.

After figuring out the markdown files, the next thing you have to modify is the _config.yml file at the root of the git repository. You configure things like:

  • Blog name
  • Blog tagline
  • URL for blog
  • SSH destination

There is a whole slew of things you can modify here. Luckily it is rather straightforward.
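For what it is worth, the basic chunk of _config.yml looks something like this. The values are just pulled from this blog as placeholders, and depending on your Octopress version, the rsync/SSH deployment settings may live in the Rakefile instead of _config.yml:

url: http://mywebsite.com
title: Erik's Thoughts and Musings
subtitle: Apple, DevOps, Technology, and Reviews
author: Erik Martin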

Now that all of the files and configuration are in order, it is just a matter of launching this from the command line:

$ rake generate

For a blog like mine that has over 100 posts, it takes about 5 seconds to generate my entire site.

Anytime you create a new post or change your blog, this is the command you launch. It takes your markdown files and other assets from the source folder and compiles them into a site in the public folder. With this all being wrapped up in git, it is super simple to take the source folder and turn it into its own repository (or submodule). That is another attractive feature for a backup-obsessed person like me.

After rake generate does its thing with Ruby and Jekyll, you can preview your site locally by issuing the following command:

$ rake preview

This opens up a web server on port 4000. You check out your site by pointing your browser at http://localhost:4000. It is that easy. (Well, it should be that easy. I had to add thin to the Gemfile so that the pages displayed in Safari 7.)
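In case anyone else hits the same Safari issue, one way to add thin and pull it in is below. This is just a sketch; you can also edit the Gemfile by hand:

$ echo 'gem "thin"' >> Gemfile
$ bundle install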

(BTW, I just discovered that if you keep rake preview running while you are making edits, it will continually scan your source folder looking for changes and regenerate. No need to kill rake preview while editing a post. Just refresh the page after saving the .markdown file.)

Once you are happy with how the blog looks, you just launch this:

$ rake deploy

If your SSH keys and authorized_keys are configured correctly for copying files to your web server, your static files will be deployed to your website. The process uses rsync, so essentially only changes are uploaded to the server. This makes deployment quite fast.
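If you have not set up key-based SSH to your web host yet, a minimal sketch looks like this. The user and host names are placeholders:

$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub | ssh emartin@mywebsite.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'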

I still need to come up with a good list of Terminal aliases to make running rake generate, rake preview, and rake deploy easier. I should also look to see whether there is a suitable front end for doing all of this; if I ever decided to move my other Wordpress blog to Octopress, my wife wouldn't want to live in the Terminal.
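Something like this in ~/.bash_profile would probably cover the aliases. The blog path and alias names are just placeholders:

alias bgen='cd ~/Source/mywebsite.com && rake generate'
alias bprev='cd ~/Source/mywebsite.com && rake preview'
alias bdep='cd ~/Source/mywebsite.com && rake deploy'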

I also recommend the use of this QuickLook plugin on Mac. It is a great way to view .markdown files in a QuickLook window.

The next step for me is to get proficient in Markdown. I know how to do links, bold, italics, and bullet points, but I don't know how to reference assets like pictures and movies. I also need to check out how plugins work. Presumably they go in the plugins folder above.

Thanks to the Octopress site, mainly this page, for all the valuable information.

(Second) First Post

This is my (second) first post. I am trying out Octopress. I am in the process of migrating my old Wordpress Blog over to this one. I am going to filter out all of the family stuff and make this more about interests specific to me. Technology, hobbies, etc.

Blog And Server Maintenance

Tonight I had to do some maintenance on my web host and blogging software. My brain was kind of mush anyway after work today so I was ready to do something rather menial.

I haven't done a full backup of my web stuff since August so it is way past time. Fortunately the cPanel software that my web hosting service uses is awesome. It is one click to start the full backup and one click to download the .tar.gz compressed file once you get an email notification that the backup completed. The cPanel backup component also does daily backups automatically which is really nice.

In August the .tar.gz was 900 MB. Today the download was 1.3 GB. Most of that new data is video and pictures on another blog I run. There is no way I am going to host any of that stuff on Facebook or YouTube.

I also installed some new Wordpress plugins based on suggestions I saw from a friend's inquiry on Facebook. My new favorite plugin is Broken Link Checker. It found 8 dead links (1 false positive) across my two blogs. It was so easy to double-check that each link was bad and then click the "unlink" button. A majority of the dead links were Yahoo links. I'll have to remember that for later.

I was half tempted to reinstate my automatic Twitter posting software, WP to Twitter, but then I remembered that I have only 4 people following me on Twitter. :)

Juliet, Naked Book Review

Last night I finished Juliet, Naked, the latest book from Nick Hornby.

Hornby is the author of High Fidelity and About A Boy, both books that were adapted for movies. I read both books before their counterpart movies and while there were significant changes from novel to screen, the movies didn't diminish the novels. Hornby also wrote Fever Pitch, which while enjoyable as a book didn't translate well to screen.

The last book I tried to read from Hornby was A Long Way Down, a wandering book about committing suicide by jumping off roofs. I didn't make it very far before I gave up. There was a missing spark to the story; none of the characters were very likable or relatable. Because of this I was a little apprehensive about Juliet, Naked.

Some spoilers follow...

Juliet, Naked started off interesting, but it lost me toward the halfway point. In the book, "Juliet" was the last album of a 1980s solo musical artist named Tucker Crowe. Crowe mysteriously disappeared after the album, never to record another. Duncan, a music aficionado and big fan of the album, creates a website to discuss Crowe and interpret the lyrics from his albums. Duncan has spent the last 20 years obsessed with Crowe. His girlfriend of 15 years, Annie, goes along with the obsession, but at the same time is starting to wonder if she is the third person in the relationship. They are both in their 40s, unmarried and childless.

All of the conflict in the story starts when Duncan gets an advance copy of Crowe's acoustic demos of Juliet, which are to be released as the album "Juliet, Naked". Duncan writes a review of the new album, and Annie has the opposite view; both reviews get published on Duncan's website. Because of the reviews, Duncan and Annie both start to analyze their long relationship with each other.

About halfway through the book, you start to discover what happened to Tucker, and as expected the reality of his disappearance is nowhere near the theories that came about on Duncan's website. Tucker's thread starts to weave together with Annie's, but by the end there is no clear resolution to almost anything.

Overall for a Nick Hornby book I am a little disappointed. Fever Pitch, About A Boy, and High Fidelity were interesting, relatable, funny, and timely. I can't say the same for this book. It was funny in parts and interesting in the beginning, but it really went south after the midpoint.

I give it 2 out of 4 stars.

Crossword Puzzles

For the past several weeks I have been opening the Sunday paper and trying my hand at the two crossword puzzles in my local newspaper. One is a relatively easy puzzle; the other is the re-published NY Times crossword. Needless to say, I have been failing miserably at the NY Times one.

Over the weeks, I have noticed that the crossword designers have been getting a lot more creative in their puzzle design. One of the puzzles last weekend actually had a bingo card in the middle of the puzzle, and for 9 or 10 of the clues throughout the puzzle, the clue just said "Mark your bingo card." For those clues you had to go to the bingo card and spell out in letters what was on the card. For example, the card had B-15, so one of the answers was BFIFTEEN; O-55 fit another clue as OFIFTYFIVE. I thought it was a pretty interesting twist. You had to figure out just enough letters in the surrounding clues to find out which bingo combination worked for that clue.

On Tuesday when I was out, I went to the local book store and picked up a crossword puzzle book. Prior to that I had done some free crossword puzzles online and even downloaded an iPhone app, but for some reason it is not the same thrill as doing them on a piece of paper.

No Posting This Past Weekend

I took advantage of my company's product shipping to keep all of the computers off this past weekend. About the only surfing that I did was on the iPhone.

I basically just did a bunch of family stuff this weekend. We finished Dirty Dancing. We spent some time getting outside walking around the neighborhood. And we actually made it out to a sit down restaurant. On Sunday, I took my daughter out to the book store so I could give my wife at least a little bit of a break.

I found out late last week that I will be traveling to what I consider the home office next month. That should be fun. It has been a long while since I have been out there. I am thinking it has been almost a year and a half. I am a little anxious about having to travel around the start of flu season, but I guess there is not much I can do about that except get a flu shot.

Even though the product I was working on shipped, I am actually looking forward to what I am going to do next. On my next project I get to double-team it with a co-worker. Usually I can work independently on most of my work, so I like the fact that I can work on this one with someone else, especially someone who has been a Mac developer way longer than I have. In our first design meetings last week, he already brought up a number of ways to implement the product that I would have never thought of. That's the fun of programming a project together.

Dirty Dancing And Patrick Swayze

In memory of Patrick Swayze, my wife and I started watching Dirty Dancing tonight from the FauxVo. She's seen it a bunch of times and considers it one of her favorite movies. Until today I probably would have told you that I had at least seen it once, but that would have been a total confabulation. I guess I had seen the clips so many times that I filled in the rest.

In the same way that my wife got to see me watching Fight Club, I am getting the same kind of enjoyment from how she reacts to seeing Dirty Dancing again. As much as I am not that big of a fan of dancing flicks, it is great to watch a movie through the lens of your spouse. It was even a sharing moment when I filled her in on all the ruckus that happened when Jennifer Grey got her nose job in the late 90s. In retrospect it is pretty laughable considering how many women have fake noses and boobs on TV nowadays.

The movie I will most remember Patrick Swayze for is Point Break. Swayze's role as Bodhi was incandescent at a time in my life when the zen culture of surfing appealed to how I wanted to see the world and live my life. That scene where he is standing at the doorway of the plane and falls away is pretty much burned into my brain, and it was probably the reason that, about a year later, I did my first skydive.

Like all movies and TV shows recently, it will probably take us a couple days to make it through the movie.