Moved domains to my name
Aug 19, 2012 - 1 minute

Moved everything over to my other domain, joshrendek.com, in case you're wondering why you got redirected.
Tired of doing this on every method in ruby?

{% codeblock lang:ruby %}
class Person
  def initialize(name)
    @name = name
  end
end
{% endcodeblock %}
Use the awesome power of ruby and metaprogramming to automatically set method parameters as instance variables:
{% codeblock lang:ruby %}
class Person
  def initialize(name)
    method(__method__).parameters.each do |x|
      instance_variable_set("@#{x[1]}", eval(x[1].to_s))
    end
  end
end
{% endcodeblock %}
Now you can access the parameters being passed in as instance variables on the object. You can extract this out into a method that applies to all objects, or write a small extension and include it only in the files where you want it. While this is a trivial example, for methods with longer signatures this becomes a more appealing approach. I'll probably extract this out into a gem and post it here later.
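One way to extract the pattern (a sketch only; `AutoAssign` is an illustrative name, not a published gem) is a small mixin that assigns a hash of arguments to instance variables, trading the `eval` reflection above for something more explicit:

```ruby
# Hypothetical mixin -- illustrative names, not an actual gem.
module AutoAssign
  # Set an instance variable for every key/value pair passed in.
  def auto_assign(attrs)
    attrs.each { |name, value| instance_variable_set("@#{name}", value) }
  end
end

class Person
  include AutoAssign
  attr_reader :name, :age

  def initialize(name, age)
    auto_assign(name: name, age: age)
  end
end

person = Person.new("Josh", 30)
person.name # => "Josh"
person.age  # => 30
```

The hash version is a little more typing per call, but it avoids `eval` and works the same on every Ruby version.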
Upgrading to OSX Mountain Lion:
Run:
{% codeblock lang:bash %}
brew update
brew link autoconf # optional, not always needed
brew install automake # optional, not always needed
brew upgrade
rvm reinstall 1.9.3 --patch falcon
{% endcodeblock %}
Let's start out by logging into our machine and installing some prerequisites (these can also be found by running `rvm requirements`):
{% codeblock lang:bash %}
sudo apt-get -y install build-essential openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison subversion mysql-client libmysqlclient-dev libsasl2-dev mysql-server
{% endcodeblock %}
Let's also install nodejs:
{% codeblock lang:bash %}
curl -O http://nodejs.org/dist/v0.8.4/node-v0.8.4.tar.gz
tar xzvf node-v0.8.4.tar.gz
cd node-v0.8.4
./configure && make && sudo make install
{% endcodeblock %}
Now we can install ruby and RVM:
{% codeblock lang:bash %}
curl -L https://get.rvm.io | bash -s stable --ruby
source /home/ubuntu/.rvm/scripts/rvm
rvm use 1.9.3 --default
echo 'rvm_trust_rvmrcs_flag=1' > ~/.rvmrc
# sudo su before this
echo 'RAILS_ENV=production' >> /etc/environment
rvm gemset create tester
{% endcodeblock %}
And lastly nginx:
{% codeblock lang:bash %}
sudo apt-get install nginx
{% endcodeblock %}
Now let's make a simple rails application back on our development machine with one simple root action:
{% codeblock lang:bash %}
rails new tester -d=mysql
echo 'rvm use 1.9.3@tester --create' > tester/.rvmrc
cd tester
bundle install
rails g controller homepage index
rm -rf public/index.html
# Open up config/routes.rb and modify the root to point to homepage#index
rake db:create
git init .
git remote add origin https://github.com/bluescripts/tester.git # replace this with your git repo
git add .; git ci -a -m 'first'; git push -u origin master
rails s
{% endcodeblock %}
Open your browser and go to http://localhost:3000 – all good! Now let's make some modifications to our Gemfile:
{% codeblock lang:ruby %}
source 'https://rubygems.org'

gem 'rails', '3.2.6'
gem 'mysql2'

group :assets do
  gem 'sass-rails',   '~> 3.2.3'
  gem 'coffee-rails', '~> 3.2.1'
  gem 'uglifier', '>= 1.0.3'
end

gem 'jquery-rails'
gem 'capistrano', :group => :development
gem 'unicorn'
{% endcodeblock %}
and re-bundle:
{% codeblock lang:bash %}
bundle
{% endcodeblock %}
Now let's start prepping for deployment and compile our assets:
{% codeblock lang:bash %}
capify .
rake assets:precompile # don't forget to add the compiled assets to git!
{% endcodeblock %}
Make a file called config/unicorn.rb:
{% codeblock lang:ruby %}
# config/unicorn.rb
# Set environment to development unless something else is specified
env = ENV["RAILS_ENV"] || "development"

site = 'tester'
deploy_user = 'ubuntu'

# See http://unicorn.bogomips.org/Unicorn/Configurator.html for complete
# documentation.
worker_processes 4

# listen on both a Unix domain socket and a TCP port,
# we use a shorter backlog for quicker failover when busy
listen "/tmp/#{site}.socket", :backlog => 64

# Preload our app for more speed
preload_app true

# nuke workers after 30 seconds instead of 60 seconds (the default)
timeout 30

pid "/tmp/unicorn.#{site}.pid"

# Production specific settings
if env == "production"
  # Help ensure your application will always spawn in the symlinked
  # "current" directory that Capistrano sets up.
  working_directory "/home/#{deploy_user}/apps/#{site}/current"

  # feel free to point this anywhere accessible on the filesystem
  shared_path = "/home/#{deploy_user}/apps/#{site}/shared"

  stderr_path "#{shared_path}/log/unicorn.stderr.log"
  stdout_path "#{shared_path}/log/unicorn.stdout.log"
end

before_fork do |server, worker|
  # the following is highly recommended for Rails + "preload_app true"
  # as there's no need for the master process to hold a connection
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end

  # Before forking, kill the master process that belongs to the .oldbin PID.
  # This enables 0 downtime deploys.
  old_pid = "/tmp/unicorn.#{site}.pid.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  # the following is *required* for Rails + "preload_app true"
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end

  # if preload_app is true, then you may also want to check and
  # restart any other shared sockets/descriptors such as Memcached
  # and Redis. TokyoCabinet file handles are safe to reuse
  # between any number of forked children (assuming your kernel
  # correctly implements pread()/pwrite() system calls)
end
{% endcodeblock %}
Now let's set up config/deploy.rb to be more Unicorn- and git-friendly (a modified version of ariejan.net's). Take note of the default environment settings, which are taken from the server by running `rvm info`:
{% codeblock lang:ruby %}
require "bundler/capistrano"

set :scm, :git
set :repository, "git@github.com:bluescripts/tester.git"
set :branch, "origin/master"
set :migrate_target, :current
set :ssh_options, { :forward_agent => true }
set :rails_env, "production"
set :deploy_to, "/home/ubuntu/apps/tester"
set :normalize_asset_timestamps, false

set :user, "ubuntu"
set :group, "ubuntu"
set :use_sudo, false

role :web, "192.168.5.113"
role :db, "192.168.5.113", :primary => true

set(:latest_release) { fetch(:current_path) }
set(:release_path) { fetch(:current_path) }
set(:current_release) { fetch(:current_path) }

set(:current_revision) { capture("cd #{current_path}; git rev-parse --short HEAD").strip }
set(:latest_revision) { capture("cd #{current_path}; git rev-parse --short HEAD").strip }
set(:previous_revision) { capture("cd #{current_path}; git rev-parse --short HEAD@{1}").strip }

default_environment["RAILS_ENV"] = 'production'

default_environment["PATH"] = "/home/ubuntu/.rvm/gems/ruby-1.9.3-p194/bin:/home/ubuntu/.rvm/gems/ruby-1.9.3-p194@global/bin:/home/ubuntu/.rvm/rubies/ruby-1.9.3-p194/bin:/home/ubuntu/.rvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
default_environment["GEM_HOME"] = "/home/ubuntu/.rvm/gems/ruby-1.9.3-p194"
default_environment["GEM_PATH"] = "/home/ubuntu/.rvm/gems/ruby-1.9.3-p194:/home/ubuntu/.rvm/gems/ruby-1.9.3-p194@global"
default_environment["RUBY_VERSION"] = "ruby-1.9.3-p194"

default_run_options[:shell] = 'bash'

namespace :deploy do
  desc "Deploy your application"
  task :default do
    update
    restart
  end

  desc "Setup your git-based deployment app"
  task :setup, :except => { :no_release => true } do
    dirs = [deploy_to, shared_path]
    dirs += shared_children.map { |d| File.join(shared_path, d) }
    run "#{try_sudo} mkdir -p #{dirs.join(' ')} && #{try_sudo} chmod g+w #{dirs.join(' ')}"
    run "git clone #{repository} #{current_path}"
  end

  task :cold do
    update
    migrate
  end

  task :update do
    transaction do
      update_code
    end
  end

  desc "Update the deployed code."
  task :update_code, :except => { :no_release => true } do
    run "cd #{current_path}; git fetch origin; git reset --hard #{branch}"
    finalize_update
  end

  desc "Update the database (overwritten to avoid symlink)"
  task :migrations do
    transaction do
      update_code
    end
    migrate
    restart
  end

  task :finalize_update, :except => { :no_release => true } do
    run "chmod -R g+w #{latest_release}" if fetch(:group_writable, true)

    # mkdir -p is making sure that the directories are there for some SCMs
    # that don't save empty folders
    run <<-CMD
      rm -rf #{latest_release}/log #{latest_release}/public/system #{latest_release}/tmp/pids &&
      mkdir -p #{latest_release}/public &&
      mkdir -p #{latest_release}/tmp &&
      ln -s #{shared_path}/log #{latest_release}/log &&
      ln -s #{shared_path}/system #{latest_release}/public/system &&
      ln -s #{shared_path}/pids #{latest_release}/tmp/pids &&
      ln -sf #{shared_path}/database.yml #{latest_release}/config/database.yml
    CMD

    if fetch(:normalize_asset_timestamps, true)
      stamp = Time.now.utc.strftime("%Y%m%d%H%M.%S")
      asset_paths = fetch(:public_children, %w(images stylesheets javascripts)).map { |p| "#{latest_release}/public/#{p}" }.join(" ")
      run "find #{asset_paths} -exec touch -t #{stamp} {} ';'; true", :env => { "TZ" => "UTC" }
    end
  end

  desc "Zero-downtime restart of Unicorn"
  task :restart, :except => { :no_release => true } do
    run "kill -s USR2 `cat /tmp/unicorn.tester.pid`"
  end

  desc "Start unicorn"
  task :start, :except => { :no_release => true } do
    run "cd #{current_path} ; bundle exec unicorn_rails -c config/unicorn.rb -D"
  end

  desc "Stop unicorn"
  task :stop, :except => { :no_release => true } do
    run "kill -s QUIT `cat /tmp/unicorn.tester.pid`"
  end

  namespace :rollback do
    desc "Moves the repo back to the previous version of HEAD"
    task :repo, :except => { :no_release => true } do
      set :branch, "HEAD@{1}"
      deploy.default
    end

    desc "Rewrite reflog so HEAD@{1} will continue to point at the next previous release."
    task :cleanup, :except => { :no_release => true } do
      run "cd #{current_path}; git reflog delete --rewrite HEAD@{1}; git reflog delete --rewrite HEAD@{1}"
    end

    desc "Rolls back to the previously deployed version."
    task :default do
      rollback.repo
      rollback.cleanup
    end
  end
end

def run_rake(cmd)
  run "cd #{current_path}; #{rake} #{cmd}"
end
{% endcodeblock %}
Now let's try deploying (you may need to log in to the server first to accept the SSH handshake if this is the first time you've cloned from git):
{% codeblock lang:bash %}
cap deploy:setup
{% endcodeblock %}
Create your database config file in shared/database.yml:
{% codeblock lang:yaml %}
production:
  adapter: mysql2
  encoding: utf8
  reconnect: false
  database: tester_production
  pool: 5
  username: root
  password:
{% endcodeblock %}
Go into current and create the database if you haven’t already:
{% codeblock lang:bash %}
rake db:create
# cd back up a level
cd ../
mkdir -p shared/pids
{% endcodeblock %}
Now we can run the cold deploy:
{% codeblock lang:bash %}
cap deploy:cold
cap deploy:start
{% endcodeblock %}
Now we can configure nginx. Open up /etc/nginx/sites-enabled/default:
{% codeblock lang:nginx %}
upstream tester {
  server unix:/tmp/tester.socket fail_timeout=0;
}

server {
  listen 80 default;
  root /home/ubuntu/apps/tester/current/public;

  location / {
    proxy_pass http://tester;
    proxy_redirect off;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    client_max_body_size 10m;
    client_body_buffer_size 128k;

    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;

    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
  }

  location ~ ^/(images|javascripts|stylesheets|system|assets)/ {
    root /home/ubuntu/apps/tester/current/public;
    expires max;
    break;
  }
}
{% endcodeblock %}
Now restart nginx and visit http://192.168.5.113/ (replace with your server hostname/IP). You should be all set!
I had a query that, after adding indexes, was taking anywhere from 1.5 to 5ms to return on my local machine. In production and staging environments it was taking 500+ms to return.
The query was producing different optimizer paths:
{% codeblock %}
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: activities
         type: ref
possible_keys: index_activities_on_is_archived,index_activities_on_equipment_id,index_activities_on_date_completed,index_activities_on_shop_id
          key: index_activities_on_shop_id
      key_len: 5
          ref: const
         rows: 1127
     filtered: 100.00
        Extra: Using where
{% endcodeblock %}

{% codeblock %}
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: activities
         type: index_merge
possible_keys: index_activities_on_is_archived,index_activities_on_equipment_id,index_activities_on_date_completed,index_activities_on_shop_id
          key: index_activities_on_shop_id,index_activities_on_is_archived
      key_len: 5,2
          ref: NULL
         rows: 1060
        Extra: Using intersect(index_activities_on_shop_id,index_activities_on_is_archived); Using where
{% endcodeblock %}
My first thought was it might have been the MySQL versions since I was running 5.5 locally and 5.0 in production, but that turned out not to be the case.
Next was to make sure my database was an exact replica of the one in production. After ensuring this I still ended up with the same results from the optimizer.
My last guess was server configuration. The issue ended up being the query cache being turned off in production and staging but not on my local machine. Turning it on, restarting mysqld, and re-running the query produced the good optimizer results on both my local machine and production.
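If you hit the same discrepancy, the relevant settings live in my.cnf; a sketch (the cache size here is illustrative, and note the query cache was removed entirely in MySQL 8.0):

```ini
[mysqld]
query_cache_type = 1     # ON
query_cache_size = 64M   # illustrative size
```

You can confirm what a server is actually doing with `SHOW VARIABLES LIKE 'query_cache%';` before assuming your environments match.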
You're a new startup, you're tight on funds, and you don't have the server knowledge to run your own servers, but you plan on growing exponentially very quickly. You have three choices: a PaaS like Heroku, cloud instances like EC2, or dedicated servers.
But how do you know which path to take?
I'll be using my experience running Servly for most of this article. I've been using dedicated servers, virtual machines, and the cloud for over 6 years with Servly and other business ventures.
When using services like Amazon EC2 or Heroku, you need to be aware that the support levels are different from what a regular dedicated hosting provider offers. The last time I had a ticket in with Heroku it took well over 4 hours to even get a response.
Support with a dedicated server is different. Of the two hosting providers that I’ve been using (WooServers and Voxel), I have been given top notch support. Tickets are answered in minutes, and 911’s are answered in seconds. Large conglomerate cloud providers just can’t beat that service.
There are other considerations to make as well. With traditional cloud offerings (EC2, Rackspace, EngineYard) and dedicated servers you are given root access to the servers, but with Heroku you’re locked in to their read-only file system and configuration. You miss out on the ability to tweak your configuration for maximum performance.
With dedicated hardware you can control your infrastructure in a much more fine-grained way than with a PaaS offering. All of this ties back into support: support that is familiar with the hardware, and support that isn't just working for the lowest common denominator of performance across a massive cloud. With dedicated hardware you get the control and support that one would expect from a paid service, while still being able to customize your system to YOUR needs, and not the needs of the baseline.
Besides knowing how your application's innards look, you also need to know how it performs: find bottlenecks, memory leaks, optimizations you can make, database indexes you might be missing, etc.
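Ruby's stdlib gives you a cheap first pass at this. A minimal sketch using Benchmark (the two string-building approaches are just placeholders for whatever hot path you suspect):

```ruby
require 'benchmark'

# Compare two ways of building a 100-character string, 10,000 times each.
n = 10_000
Benchmark.bm(10) do |x|
  # Appending in a loop mutates one string repeatedly.
  x.report("shovel:")   { n.times { s = ""; 100.times { s << "x" } } }
  # String#* builds the same result in a single call.
  x.report("multiply:") { n.times { "x" * 100 } }
end
```

For memory leaks or query problems you'd reach for a real profiler or the database's EXPLAIN output, but Benchmark is often enough to locate a hot spot.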
Servly's main focus is the dashboard and the API the servers use to communicate their status updates. Every ~5 minutes the server gets bombarded with hundreds of concurrent requests, all vying for database access.
One of the issues I noticed was the occasional 502 Gateway error. There were two problems:
There is no magic formula to find the right balance without running your own tests. When I started testing, a simple

{% codeblock lang:bash %}
ab -c 100 -n 1000 http://foobar.servly.com/
{% endcodeblock %}

was returning about 78 failed requests out of 1000. Good, but not good enough.
Editing the nginx configuration several more times, I got rid of the writev() failed (107: Transport endpoint is not connected) while sending request to upstream error. The nginx worker count is now at 16.
The next error was an upstream timeout that would occasionally happen during that 5-minute burst period. Modifying the number of unicorn workers to 24, upping the backlog, and tweaking the timeout got rid of the remaining gateway errors.
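In config/unicorn.rb terms, those tweaks look something like the following sketch (only the worker count of 24 comes from the text; the backlog and timeout values are illustrative, so tune them against your own ab runs):

```ruby
# config/unicorn.rb -- tuned for bursty load (illustrative values)
site = 'tester'

worker_processes 24                             # up from 4
listen "/tmp/#{site}.socket", :backlog => 1024  # up from 64
timeout 60                                      # give slow bursts more room
```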
I was now able to scale up from 100-1000 concurrent requests without any failures being reported from ab.
For very small projects or throwaway prototypes, Heroku and other free services are great. However, once your project starts growing, the costs can grow exponentially.
Currently Servly runs on:
I also have a MySQL slave for backups, in addition to S3 and several spare VMs running as standby.
Software wise there are:
Total: $1,587/mo
Total: $959
I could've gone with a smaller medium instance, however I need the IO to be as high as possible. Even then it's still going to be pretty terrible; this also applies to Heroku's offerings, since the Fugu database plan is on the very conservative side and both are layers on top of Amazon's EC2. [1]
Total: $145/mo
For the price of running Heroku relatively maxed out on dynos, I could get 11 dedicated servers. That's roughly:
Learn. You may stumble at first, but there are plenty of outlets for help: Freenode's IRC, mailing lists, and Stack Exchange. You could even hire a part-time sysadmin. Once you really start scaling, the cost of using Heroku versus what you could get with bare metal becomes so great that you could eventually just hire a full-time sysadmin to manage your dedicated servers.
GitHub was a large player that moved from the cloud at EngineYard to a more mixed infrastructure of bare metal and spot instances at Rackspace. [2] They were able to get nearly 6x the RAM and nearly 4x the CPU cores; one of their main focal points was cost, in addition to control, flexibility, and capacity.
There are also plenty of success stories over at Heroku’s success page: http://success.heroku.com/
Temporary data crunching. Have a sudden spike in your job queue? Crank up some more virtual machines to plow through them, then turn them off to save money. All of this can be automated with tools like Blueprint, Puppet, and Chef.
Backups. With the price of S3 being around 8 cents per GB, it's entirely feasible to back everything up off-site to the cloud for disaster recovery.
You and your team need to evaluate your business needs and decide on what option is best for your company. If you already have a competent sysadmin or a developer who has played both roles before, it would make much more sense to use bare metal. If no one in your team has the experience, or time, to learn devops, then a PaaS solution like Heroku would be a more logical choice.
[1] http://www.krenger.ch/blog/amazon-ec2-io-performance/
[2] https://github.com/blog/493-github-is-moving-to-rackspace
I was working on a C assignment for school and wanted an easy way to test the output of a program running against multiple test cases.
Using RSpec I came up with the following spec:
{% codeblock lang:ruby %}
describe "Calculator" do
  before(:all) do
    `make clean; make;`
  end

  it "should accept p1" do
    `./calc < testing/p1.cal`.should include "accept"
  end

  it "should reject p2" do
    `./calc < testing/p2_err.cal`.should include "reject"
  end

  it "should reject p3" do
    `./calc < testing/p3_err.cal`.should include "Variable a duplicate declaration"
  end

  it "should reject p4" do
    `./calc < testing/p4_err.cal`.should include "Variable b uninitiated at line 5"
  end

  it "should accept p5" do
    `./calc < testing/p5.cal`.should include "accept"
  end

  it "should accept p6" do
    `./calc < testing/p6.cal`.should include "accept"
  end

  it "should reject p7" do
    `./calc < testing/p7_err.cal`.should include "syntax error at line 9"
  end

  it "should reject p8" do
    `./calc < testing/p8_err.cal`.should include "Variable d undeclared"
  end

  it "should reject p9" do
    `./calc < testing/p9_err.cal`.should include "divide by zero at line 7"
  end
end
{% endcodeblock %}
I was then able to run all my tests with a single command and get informative output.
{% codeblock %}
.........

Finished in 0.49705 seconds
9 examples, 0 failures
{% endcodeblock %}
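One nice property of the backtick approach: besides capturing stdout, it sets `$?`, so specs can assert on the program's exit status too. A standalone sketch (using `echo` in place of the compiled ./calc binary):

```ruby
# `echo` stands in for ./calc here; any shell command works the same way.
output = `echo accept`

raise "command failed"    unless $?.success?
raise "unexpected output" unless output.include?("accept")

puts "exit status: #{$?.exitstatus}" # => exit status: 0
```

This is handy for compilers and interpreters that signal errors via their return code rather than (or in addition to) their output.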
Let's first install ctags-exuberant using Homebrew:
{% codeblock lang:bash %}
brew install ctags-exuberant
{% endcodeblock %}
Remember the path that ctags got installed to; with version 5.8 on my machine it was:
{% codeblock %}
/usr/local/Cellar/ctags/5.8/bin/ctags
{% endcodeblock %}
Download the TagList plugin from VimOnline.
In your .vimrc file add the following:
{% codeblock lang:vim %}
let Tlist_Ctags_Cmd='/usr/local/Cellar/ctags/5.8/bin/ctags'

let g:Tlist_Ctags_Cmd='/usr/local/Cellar/ctags/5.8/bin/ctags'

fu! CTagGen()
  :execute "!" . g:Tlist_Ctags_Cmd . " -R ."
endfunction

nmap <silent> :ctg :call CTagGen()
{% endcodeblock %}
Open up vim/MacVim, and type:

{% codeblock %}
:ctg
{% endcodeblock %}
You can then go to a controller for example:
Type in:
{% codeblock %}
:Tlist
{% endcodeblock %}
And the following should appear.
Let's say I've got my cursor on StoryType and I want to go to the model; I can just hit Ctrl+] to get there. You can now do this for any method (helpers, methods, anything that's in your ctags file!).
I was writing some cucumber features for reru_scrum when I ran into an issue with destroying user records and Mysql2 throwing a Lock error.
The full error:
{% codeblock %}
Mysql2::Error: Lock wait timeout exceeded; try restarting transaction: UPDATE `users` SET `last_sign_in_at` = '2011-11-22 00:06:32', `current_sign_in_at` = '2011-11-22 00:11:28', `sign_in_count` = 3, `updated_at` = '2011-11-22 00:11:28' WHERE `users`.`id` = 1
{% endcodeblock %}
A simple solution is to use the database_cleaner gem.
Inside your features/support/env.rb file:
{% codeblock lang:ruby %}
begin
  require 'database_cleaner'
  require 'database_cleaner/cucumber'
  DatabaseCleaner.strategy = :truncation
rescue NameError
  raise "You need to add database_cleaner to your Gemfile (in the :test group) if you wish to use it."
end
{% endcodeblock %}
A good idea is to create Before and After hooks that use the DatabaseCleaner.start and DatabaseCleaner.clean methods.
Inside features/support/hooks.rb:
{% codeblock lang:ruby %}
Before do
  DatabaseCleaner.start
end

After do |scenario|
  DatabaseCleaner.clean
end
{% endcodeblock %}
You should then be able to run your features and have your database cleaned between steps.
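Truncation is the safest strategy here, but it is also the slowest. If your suite drags, DatabaseCleaner also supports a faster :transaction strategy; a common pattern (assuming you tag browser-driven scenarios @javascript, which is a convention and not something from this post) is to pick the strategy per scenario in features/support/hooks.rb:

```ruby
# Transactions roll back quickly, but a separate server process (e.g. one
# driven by a real browser) can't see uncommitted data, so browser-driven
# scenarios fall back to truncation.
Before('~@javascript') { DatabaseCleaner.strategy = :transaction }
Before('@javascript')  { DatabaseCleaner.strategy = :truncation }
```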