
I was writing some Cucumber features for reru_scrum when I ran into an issue with destroying user records and Mysql2 throwing a lock wait timeout error.

The full error:

Mysql2::Error: Lock wait timeout exceeded; try restarting transaction: UPDATE `users` SET `last_sign_in_at` = '2011-11-22 00:06:32', `current_sign_in_at` = '2011-11-22 00:11:28', `sign_in_count` = 3, `updated_at` = '2011-11-22 00:11:28' WHERE `users`.`id` = 1

A simple solution is to use the database_cleaner gem.

Inside your features/support/env.rb file:

begin
  require 'database_cleaner'
  require 'database_cleaner/cucumber'
  DatabaseCleaner.strategy = :truncation
rescue NameError
  raise "You need to add database_cleaner to your Gemfile (in the :test group) if you wish to use it."
end

It's also a good idea to create Before and After hooks that call the DatabaseCleaner.start and DatabaseCleaner.clean methods.

Inside features/support/hooks.rb:

Before do
  DatabaseCleaner.start
end

After do |scenario|
  DatabaseCleaner.clean
end

You should then be able to run your features and have your database cleaned between scenarios.

When defining an event listener for objects in CoffeeScript, you need to make sure you use -> (a thin arrow); using => (a fat arrow) will result in any references to attr() being undefined, because => binds this to the enclosing scope instead of the clicked element.

Here is an example of some correct on-click bindings that use the attr() method:

jQuery ($) ->
  $('[id^=story_]').click ->
    $("#" + $(this).attr("id") + "_loader")
      .load('/projects/' + $(this).attr('project_id') +
        '/story_types/' + $(this).attr('story_type_id') +
        '/stories/' + $(this).attr('story_id') + '/tasks/new')

If you run into an issue with bundler always installing into a directory, then you may have accidentally run:

bundle install foobar

and now it's installing into foobar.

You can run:

bundle install --system

to go back to installing gems into the system/RVM path.

The following helper class renders a Rails view outside of a normal controller request, which is handy for things like PDF generation:

class RenderHelper
  class << self
    def render(assigns, options, request = {})
      # Fake out the bare minimum of a request environment
      request = {
        "SERVER_PROTOCOL" => "http",
        "REQUEST_URI" => "/",
        "SERVER_NAME" => "localhost",
        "SERVER_PORT" => 80
      }.merge(request)

      # Build an ActionView instance with the app's view paths and the
      # instance variables (assigns) the template expects
      av = ActionView::Base.new(ActionController::Base.view_paths, assigns)

      av.config = Rails.application.config.action_controller
      av.extend ApplicationController._helpers
      av.controller = ActionController::Base.new
      av.controller.request = ActionController::Request.new(request)
      av.controller.response = ActionController::Response.new
      av.controller.headers = Rack::Utils::HeaderHash.new

      # Make the app's route helpers available inside the view
      av.class_eval do
        include Rails.application.routes.url_helpers
      end

      av.render options
    end
  end
end

Usage

html_output = RenderHelper.render({ :instance_variable1 => "foo",
                                    :instance_variable2 => "bar" },
                                  :template => 'view_to/render')

You can then use your favorite PDF generator (I use PDFKit) to take the HTML output and convert it to a PDF.
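
For example, here's a minimal sketch with PDFKit (this assumes the pdfkit gem and wkhtmltopdf are installed; the assigns and template names are placeholders):

require 'pdfkit'

# Render the view to an HTML string, then hand it to PDFKit
html_output = RenderHelper.render({ :instance_variable1 => "foo" },
                                  :template => 'view_to/render')

kit = PDFKit.new(html_output, :page_size => 'Letter')
kit.to_file('/tmp/output.pdf') # writes the PDF to disk; use kit.to_pdf for the raw bytes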

MySQL Slave not syncing after reboot

Aug 25, 2011 - 1 minute

Earlier today I had a MySQL slave go down for a few hours, which wasn’t a big deal. When it was brought back up it wasn’t syncing properly:

Seconds_Behind_Master: NULL

Apparently there was an issue with a query that was showing up under LAST_ERROR; running

STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;

fixed the issue. I then issued another SHOW SLAVE STATUS\G and got the correct output:

Seconds_Behind_Master: 27269

About half an hour later the slave was all caught up and replication was working again.

One of the interesting things about switching to a static site (even when I had caching and everything tuned in WP) is the difference in load times.

Requests per second

  • Wordpress: 9.5
  • Jekyll: 182.96

Time per request

  • Wordpress: 2631.395 ms
  • Jekyll: 245.954 ms

Time per request (across all concurrent)

  • Wordpress: 105.256 ms
  • Jekyll: 5.466 ms

Basically, Jekyll is roughly 19x faster (about 1800%) at serving pages.

That's about all I really cared about… the website now loads blazingly fast, and Jekyll is awesome to write in with Markdown.

Migration from WordPress to Jekyll

Aug 18, 2011 - 2 minutes

While I love WordPress - I think it was a bit of overkill for what I was doing on this blog so I converted everything to Jekyll, and threw all my images up on Amazon’s S3. I’ve also migrated all the comments over to Disqus.

One of the problems I ran into was getting the URLs to map the same; the _config.yml that worked for me was:

pygments: true
markdown: rdiscount
permalink: /:year/:month/:title
paginate: 10

And then to get my /apps/ working again I made a directory structure like this:

apps/bluebug:
BlueBug.zip	index.markdown

apps/greenmail:
index.markdown

apps/light_logify:
index.markdown

apps/pyultradns:
index.markdown

apps/quote-of-the-day-tweeter:
index.markdown

apps/rails_rrdtool:
index.markdown

apps/server-setup-fu:
index.markdown

apps/servly:
index.markdown

apps/ventrilo-ping-analyzer:
index.markdown

Some nice helper scripts I’ve found:

Creating a new post

#!/usr/bin/env ruby

# Script to create a jekyll blog post using a template. It takes one input parameter
# which is the title of the blog post
# e.g. command:
# $ ./new.rb "helper script to create new posts using jekyll"
#
# Author: Khaja Minhajuddin (http://minhajuddin.com)

# Some constants
TEMPLATE = "template.markdown"
TARGET_DIR = "_posts"

# Get the title which was passed as an argument
title = ARGV[0]
# Get the filename
filename = title.gsub(' ', '-')
filename = "#{Time.now.strftime('%Y-%m-%d')}-#{filename.downcase}.markdown"
filepath = File.join(TARGET_DIR, filename)

# Create a copy of the template with the title replaced
new_post = File.read(TEMPLATE)
new_post.gsub!('TITLE', title)

# Write out the file to the target directory
new_post_file = File.open(filepath, 'w')
new_post_file.puts new_post
new_post_file.close

puts "created => #{filepath}"

Publishing a new post

jekyll && rsync -avz -e 'ssh -p SSHPORT' --delete . USERNAME@DOMAIN.com:/home/YOURPATH/

Ruby on Rails: Delayed Job issues

Aug 10, 2011 - 3 minutes

I recently ran into two issues I'd never had the pleasure of encountering before while using DJ; thankfully, the issue tracker on GitHub was a great help:

“nil is not a symbol” as a failure message. When running the background jobs for Servly, this was occurring on the status checks (it's a pretty long list of checks on a server to reduce false positives, including distributed pings). The problem with having the longer definition was the way DJ serializes the Ruby code to store in the database. It was originally using a TEXT column (65,535 bytes max) in MySQL; changing this to LONGTEXT fixed the issue. The mysterious part about this whole issue was that it would run fine from the console, but not from the workers (which makes sense in hindsight because of the limited space available for the serialized object in the delayed_jobs table).

Here is a snippet of the stack trace (for anyone who might Google it), and the link to the Github Issue:

nil is not a symbol
/usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.4/lib/delayed/performable_method.rb:20:in `perform'
/usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.4/lib/delayed/backend/base.rb:87:in `invoke_job'
/usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.4/lib/delayed/worker.rb:120:in `block (2 levels) in run'
/usr/local/lib/ruby/1.9.1/timeout.rb:57:in `timeout'
/usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.4/lib/delayed/worker.rb:120:in `block in run'
/usr/local/lib/ruby/1.9.1/benchmark.rb:309:in `realtime'
/usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.4/lib/delayed/worker.rb:119:in `run'
/usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.4/lib/delayed/worker.rb:177:in `reserve_and_run_one_job'
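
For reference, the column change itself is just a migration. Here's a sketch, assuming the standard delayed_jobs schema where the serialized job is stored in the handler column (in the MySQL adapter, a :limit of 4294967295 on a :text column produces LONGTEXT):

class ChangeDelayedJobsHandlerToLongtext < ActiveRecord::Migration
  def self.up
    # a 4GB limit maps to LONGTEXT in MySQL
    change_column :delayed_jobs, :handler, :text, :limit => 4294967295
  end

  def self.down
    # back to a plain TEXT column (65,535 bytes)
    change_column :delayed_jobs, :handler, :text
  end
end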

The second issue was a combination of things:

The worker resides on a separate node from the main application stack and connects to the database remotely (still over a private LAN). The first thing I noticed was workers dying silently; a lot of hits on Google pointed to MySQL losing its connection (the default in database.yml/ActiveRecord is reconnect: false). Changing this to reconnect: true fixed the silent deaths.

Another problem with workers dying off silently was a lack of information; adding these two lines to the delayed_job_config initializer produced much more meaningful errors:

Delayed::Worker.logger = Rails.logger
Delayed::Worker.logger.auto_flushing = 1

And finally, a version-specific bug: delayed_job was running into race conditions on locking the jobs it was working on:

Mysql2::Error: Deadlock found when trying to get lock; try restarting transaction: UPDATE `delayed_jobs` SET locked_at = '2011-08-08 21:30:10', locked_by = 'delayed_job.10 host:226237 pid:27929' WHERE ((run_at <= '2011-08-08 21:30:10' AND (locked_at IS NULL OR locked_at < '2011-08-08 21:15:10') OR locked_by = 'delayed_job.10 host:226237 pid:27929') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1

Mysql2::Error: Deadlock found when trying to get lock; try restarting transaction: UPDATE `delayed_jobs` SET locked_at = '2011-08-08 21:30:10', locked_by = 'delayed_job.0 host:226237 pid:27869' WHERE ((run_at <= '2011-08-08 21:30:10' AND (locked_at IS NULL OR locked_at < '2011-08-08 21:15:10') OR locked_by = 'delayed_job.0 host:226237 pid:27869') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1

Upgrading from delayed_job 2.1.2 to 2.1.4 fixed the issue; apparently 2.1.3 may have also been affected.
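
If you want to be explicit about staying on a fixed release, a simple Gemfile pin works; this is just a sketch, so adjust the constraint to whatever your app needs:

# Pin delayed_job past the releases affected by the locking race condition
gem 'delayed_job', '~> 2.1.4'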

Aside from these few issues, Delayed Job has been wonderful to use in production and will continue to handle all of Servly's background tasks and processes.

Pandora's horrible interface

Jun 17, 2011 - 1 minute

It's 2011 and Pandora hasn't heard about expandable layouts… I love Pandora, but their interface just sucks. It especially sucks when you have a huge list of stations.

Do something with all my screen real estate, please!

Graphing data client side is a great way to avoid generating charts server side (the rendering work is spread across your visitors' browsers instead of your servers). You do, however, run into issues when you get into thousands, or hundreds of thousands, of points (for example, displaying 5-minute intervals over a month: 8,928 points). When graphing this many points, JavaScript can hang or make the browser seem like it's not responding.

This is a simple solution that I've been using for a while to average data points from an ActiveRecord model:

def generic_graph(column, hours, multiplier = 1)
  beginning = Time.now.advance(:hours => -hours)
  x = YourModel.where("created_at > ?", beginning)
  arr = []

  # Convert the UTC offset to whole hours (plus one during DST)
  timeoffset = Time.zone.utc_offset / (60 * 60)
  timeoffset += 1 if Time.now.dst?

  if hours >= 48 # or whatever number works for you
    x.each_with_index do |s, y|
      # Collect 24 (or however many you want) records, then average them
      samples = x[y..y + 24].map { |ss| ss[column] }
      average = samples.sum / samples.size.to_f
      # [timestamp in milliseconds, value] pairs for the flot.js graphing library
      arr << [s.created_at.advance(:hours => timeoffset).to_i * 1000, average]
    end
  else
    x.each { |s| arr << [s.created_at.advance(:hours => timeoffset).to_i * 1000, s[column]] }
  end

  arr.to_s # output for flot
end
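
A hypothetical call for a flot chart might look like this (YourModel, the :cpu_usage column, and the instance variable are placeholders for whatever your schema actually uses):

# Average the last week of readings down to chart-friendly points,
# then hand the resulting string to the view for flot to plot
@cpu_points = generic_graph(:cpu_usage, 24 * 7)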