{ Josh Rendek }

<3 Ruby & Go

Motivation to work on new projects

Apr 26, 2014 - 6 minutes

Whenever I have spare time ( often around Christmas or when I’m on vacation/traveling ), I tend to fill it with working on projects I’ve built up in my backlog. I’m also really trying to keep a continuous streak of OSS commits going on Github (something about filling that chart up makes me want to work harder). Here’s my process and how I go about working on personal projects and try to stay motivated - if you have any ideas I’d love to hear them in the comments!

Have a backlog

I use Evernote for all my ideas and project notes:

Evernote

I have two columns - one for things in progress or to do and one for projects that are done ( with a link to any github repos I published ). When I have some downtime but don’t feel like actually writing any code - I’ll write out plans for what the project needs (use cases, backend needs, software I plan on using, etc) and do research and store all that as a sub-note in Evernote (you can see that with the light green link to the HAProxy Frontend ) under the main page. Plus I can easily share these with friends for feedback by just copying the share URL.

Use small milestones to build up bigger ones

For instance, when I was working on the code for http://ifcfg.net/ I decided there were two major components I would need to create: the web API to access the data, and a backing library to do some web scraping to gather BGP data. I started out writing a small scraper in Scala for scraping BGP and looking glass info (which involved learning some more SBT, and Selenium APIs for Scala) and then moved on to learning a small amount of the Play! framework and exposing my library via that API. This let me focus on one small component at a time and finish it ( I have a habit of leaving personal projects unfinished or taking a long time to finish them if I let the scope creep beyond what I deemed as minimum requirements ).

Pick an interesting project

There are some areas I just don’t have an interest in - like writing an application to track golf scores.

So pick something you like - I love doing backend systems and APIs - pick something you’re passionate about already or a topic you want to learn more about.

Learn

If I’m working on a personal project and not learning anything new (even if it’s just a new way to test, for instance) - I get bored, really quickly. I’ve been stemming this by trying to pick up new languages as I work on projects and working on projects with broader goals.

For instance, the latest project I’m working on is Patchasaurus ( yes, there isn’t a readme yet ). I know there’s a gap in the systems world for (open source) patch management, especially focused on Ubuntu and Debian - so I decided to write a small initial version of one. I had been playing around with Go at work (and boy is it nice to get an HTTP API running in a few MB of RAM) and decided to write the agent for patchasaurus in that ( nicknamed trex ). I’ve been learning how to cross compile programs in Go, what libraries don’t work with cross-compilation (looking at you os/user) and a nice workflow for testing these while developing them ( sshfs is great for this with VirtualBox or Vagrant ). I also chose to use Rails 4.1 as the management interface since I wanted to stay up to date with the new Rails features - turns out spring is very nice and a great improvement over the guard workflow I’ve used before.

Don’t focus on processes versus getting things done

I’m a big fan of testing, and TDD, however I’m not always in the mood to do it. Sometimes I just want to see results and I’ll go back and refactor and test later. Picking what works for you on a specific project or component and getting it done is, I think, much more important than rigidly following a specific set of guidelines on every project you do ( aka: test first, setup CI before any code, etc ).

Don’t get in a rut

Staring at HackerNews or Reddit all day can be disheartening - try not to focus on what everyone else is doing and instead focus on what you’re getting done and how you’re improving yourself.

Also don’t let this influence your technology choices. Sometimes there are articles trending for AngularJS or Ruby on Rails - stick with what you picked ( unless you really want to learn that new tech ) - or figure out ways to incorporate that tech into smaller components of your project. Don’t throw away all that progress just because you saw a few posts reach the front page!

Take breaks

Don’t spend all day coding - take breaks, go for a walk, a run, play with your dog, play a video game - something that can give you a moment to breathe and think about something else or give you time to re-focus on the grand vision you’ve been laboring over. Figure out what works for you to relax and do it to break up that screen glow tan you’re getting.

Talk about what you’re working on

Talk with friends to brainstorm ideas, pair up on some problems, see if there’s a more idiomatic way to do a function in the language you’re using ( for example, I spent some time trying to see if there were any map() equivalents on #go-nuts), and blog about what you’re doing if that’s your style.

Knowing people are using code and software I’ve written is a huge motivating factor to working on future projects ( star/watch counts on Github, downloads on RubyGems, traffic to my blog, etc).

Finish!!

Yes it can be hard, but figure out what finished means to you, and do it. Publish it on Github, submit it to HackerNews, post it to reddit, get it hooked into TravisCI - make sure you come to the finish line of each component or project you’re working on. Building up these small accomplishments can help set a streak for the future so you have the motivation to power through and get items done.

Sometimes you’re more interested in getting an application finished than on the deployment process - throw it on Heroku, a shared hosting provider, etc. There’s nothing wrong with some shared hosting for a small project. Don’t let things like deployment stop you from finishing!

When working on a Rails application you can sometimes find duplicated or very similar code between two different controllers (for instance a UI element and an API endpoint). Once you realize you have this duplication, there are several things you can do. I’m going to go over how to extract this code into the query object pattern 1 and clean up our constructor using the builder pattern 2 adapted to Ruby.

I’m going to make a few assumptions here, but this should be applicable to any data access layer of your application. I’m also assuming you’re using something like Kaminari for pagination and have a model for People.

def index
  page = params[:page] || 1
  per_page = params[:per_page] || 50
  name = params[:name]
  sort = params[:sort_by] || 'last_name'
  direction = params[:sort_direction] || 'asc'

  query = People
  query = query.where(name: name) if name.present?
  @results = query.order("#{sort} #{direction}").page(page).per_page(per_page)
end

So we see this duplicated elsewhere in the code base and we want to clean it up. Let’s start by extracting this out into a new class called PeopleQuery.

I usually put these under app/queries in my rails application.

class PeopleQuery
  attr_accessor :page, :per_page, :name, :sort, :direction, :query
  def initialize(page, per_page, name, sort, direction)
    self.page = page || 1
    self.per_page = per_page || 50
    self.name = name
    self.sort = sort || 'last_name'
    self.direction = direction || 'asc'
    self.query = People
  end

  def build
    self.query = self.query.where(name: self.name) if self.name.present?
    self.query.order("#{self.sort} #{self.direction}").page(self.page).per_page(self.per_page)
  end
end

Now our controller looks like this:

def index
  query = PeopleQuery.new(params[:page], params[:per_page], params[:name], params[:sort], params[:direction])
  @results = query.build
end

Much better! We’ve decoupled our controller from our data access object (People/ActiveRecord), moved some of the query logic outside of the controller and into a specific class meant to deal with building it. But that constructor doesn’t look very nice. We can do better since we’re using Ruby.

Our new PeopleQuery class will look like this and will use a block to initialize itself instead of a long list of constructor arguments.

class PeopleQuery
  attr_accessor :page, :per_page, :name, :sort, :direction, :query
  def initialize(&block)
    yield self
    self.page ||= 1
    self.per_page ||= 50
    self.sort ||= 'last_name'
    self.direction ||= 'asc'
    self.query = People
  end

  def build
    self.query = self.query.where(name: self.name) if self.name.present?
    self.query.order("#{self.sort} #{self.direction}").page(self.page).per_page(self.per_page)
  end
end

We yield first to let the caller set the values and then after yielding we set our default values if they weren’t passed in. There is another method of doing this with instance_eval but you end up losing variable scope and the constructor looks worse since you have to start passing around the params variable to get access to it, so we’re going to stick with yield.
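To make the scope problem concrete, here’s a minimal runnable sketch - FakeController and its hard-coded params hash are invented for illustration, standing in for a real Rails controller:

```ruby
class PeopleQuery
  attr_accessor :page, :per_page

  # Hypothetical instance_eval-based constructor (NOT the yield version above)
  def initialize(&block)
    instance_eval(&block) if block_given?
    self.page     ||= 1
    self.per_page ||= 50
  end
end

class FakeController
  # Stands in for the controller's params method
  def params
    { page: 3 }
  end

  def index
    # Inside instance_eval, `self` is the PeopleQuery instance,
    # so the controller's `params` method is no longer reachable:
    PeopleQuery.new { self.page = params[:page] }
  end
end

begin
  FakeController.new.index
rescue NameError => e
  puts "lost scope: #{e.class}"
end
```

With the yield version, the block runs in the controller’s own scope, so params stays available without any extra plumbing.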

def index
  query = PeopleQuery.new do |query|
    query.page = params[:page]
    query.per_page = params[:per_page]
    query.name = params[:name]
    query.sort = params[:sort]
    query.direction = params[:direction]
  end
  @results = query.build
end

And that’s it! We’ve de-duplicated some code (remember we assumed the dummy controller’s index method was duplicated elsewhere in an API call in a separate namespaced controller), extracted out a common query object, decoupled our controller from ActiveRecord, and built up a nice way to construct the query object using the builder pattern.

Parsing HTML in Scala

Oct 31, 2013 - 2 minutes

There is a confusing amount of information out there on parsing HTML in Scala. Here is the list of possible ways I ran across:

  • Hope the document is valid XHTML and use scala.xml.XML to parse it
  • If the document isn’t valid XHTML use something like TagSoup and hope it parses again
  • Still think it’s valid XHTML? Try using scalaz’s XML parser

All of the answers I found on Google pointed to some type of XML parsing, which won’t always work. Coming from Ruby I know there are tools out there like Selenium that can simulate a web browser for you and give you a rich interface to interact with the returned HTML.

So I went on Maven and found the two Selenium web drivers I wanted for my project and added them to my libraryDependencies:

"org.seleniumhq.webdriver" % "webdriver-selenium" % "0.9.7376",
"org.seleniumhq.webdriver" % "webdriver-htmlunit" % "0.9.7376"

The project I’m working on is to parse Looking Glass websites for BGP information and AS peering, so I wanted to scrape the data. I also didn’t want to have to use a full blown web browser (ala Selenium + Firefox for instance) - so I stuck with the HtmlUnit driver for the implementation.

Here is a quick code snippet that lets me grab AS #’s and Peer names from an AS:

import scala.collection.JavaConversions._ // lets us treat the returned Java list as a Scala collection

val url = "http://example.com/AS" + as.toString

val driver = new HtmlUnitDriver
// Proxy for BetaMax when writing tests
if (_port != null) {
  driver.setProxy("localhost", _port)
}
driver.get(url)

val peers = driver.findElementsByXPath("//*[@id=\"table_peers4\"]/tbody/tr/td[position() = 1 or position() = 2]")

// group the list in pairs so List(a,b,c,d) becomes List(List(a,b), List(c,d))
for (peer <- peers.grouped(2)) {
  println(peer)
}

No XML to muck with and I get some nice selectors to query the document for. Remember if the source you want data from doesn’t have an API, HTML is an API! Just be respectful of how you query and interact with them (ie: Don’t do 100 requests/second, cache/record responses while writing tests, etc).

Getting started with Scala

Oct 28, 2013 - 3 minutes

Recently I’ve been getting into more Java and (attempting to) Scala development. I always got annoyed with the Scala ecosystem for development and would get fed up and just go back to writing straight Java ( *cough* sbt *cough* ). Today I decided to write down everything I did and get a sane process going for Scala development with SBT.

I decided to write a small Scala client for OpenWeatherMap - here is what I went through.

A brief guide on naming conventions is here. I found this useful just to reference conventions since not everything is the same as Ruby (camelCase vs snake_case for instance).

Setting up and starting a project

First make sure you have a JVM, Scala, and SBT installed. I’ll be using Scala 2.10.2 and SBT 0.12.1 since that is what I have installed.

One of the nice things I like about Ruby on Rails is the project generation ( aka: rails new project [opts] ) so I was looking for something similar with Scala.

Enter giter8: https://github.com/n8han/giter8

giter8 runs through SBT and has templates available for quickstart.

Follow the install instructions and install giter8 into SBT globally and load SBT to make sure it downloads and installs.

Once you do that you can pick a template from the list, or go with the one I chose: fayimora/basic-scala-project which sets up the directories properly and also sets up ScalaTest, a testing framework with a DSL similar to RSpec.

To setup your project you need to run:

g8 fayimora/basic-scala-project

You’ll be prompted with several questions and then your project will be made. Switch into that directory and run sbt test to make sure the simple HelloWorld passes and everything with SBT is working.

Setting up IntelliJ

For Java and Scala projects I stick with IntelliJ over my usual vim. When using Java, IntelliJ is good about picking up library and class paths and resolving dependencies (especially if you are using Maven). However there isn’t a good SBT plugin (as of writing this) that manages to do all this inside IntelliJ.

The best plugin for SBT I’ve found that does this is sbt-idea. You’re going to need to make a project/plugins.sbt file:

addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.5.2")

and now you can generate your .idea files by running: sbt gen-idea

IntelliJ should now resolve your project dependencies and you can start coding your project.

Final Result

scala-weather - A simple-to-use OpenWeatherMap client in Scala, set up with Travis-CI and CodeClimate. This is just the first of several projects I plan on working on / open sourcing to get my feet wet with Scala more.

Useful libraries

Notes

By default Bee Client will log everything to STDOUT - you’ll need to configure logback with an XML file located in src/main/resources/logback.xml:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="ERROR">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

Testing is one of the most important parts of software development and helps to ensure bugs don’t get into production and that code can be refactored safely. If you’re working on a team of multiple people with different skill sets, you might have people doing testing who only know Windows while development is only using OS X or Linux. We want everyone to be able to test - someone in QA who is familiar with Windows shouldn’t have to throw away all that knowledge, install Linux, and start from scratch. Enter JRuby and John.

John is our tester and he is running Windows. He wants to help make sure that when a user goes to http://google.com/ a button appears with the text “Google Search”. The quick way to do this is to open his browser, navigate to http://google.com/, glance through the page for the button, and confirm that it’s there. John has a problem though: he has 30 other test cases to run and the developers are pushing code to the front page several times a day; John now has to continuously do this manually every time code is touched and his test load is piling up.

So let’s help John out and install Sublime Text 2 and JRuby.

Start by downloading the 64-bit version of Sublime Text. Make sure to add the context menu when going through the install process.

Now we’ll visit the JRuby homepage and download the 64 bit installer.

Go through the installer and let JRuby set your path so you can access Ruby from cmd.exe

Now when we open cmd.exe and type jruby -v we’ll be able to see that it was installed.

Now that we have our tools installed let’s set up our test directory on the Desktop. Inside our testing folder we’ll create a folder called TestDemo for our tests for the Demo project.

Next we’ll open Sublime Text and go to File > Open Folder and navigate to our TestDemo folder and hit open.

Now we can continue making our directory structure inside Sublime Text. Since we’re going to use rspec we need to create a folder called spec to contain all of our tests. Right click on the TestDemo in the tree navigation and click New Folder.

Call the folder spec in the bottom title bar when it prompts you for the folder name.

Next we’ll create our Gemfile which will declare all of our dependencies - so make a file in the project root called Gemfile and put our dependencies in it:

source "https://rubygems.org"

gem "rspec"
gem "selenium"
gem "selenium-webdriver"
gem "capybara"

Once we have that file created, open cmd.exe and switch to your project’s root directory.

Type jgem install bundler to install bundler which manages ruby dependencies.

While still at the command prompt we’re going to run bundle install to install our dependencies:

After that finishes we need to run one last command for selenium to work properly: selenium install

We also need a spec_helper.rb file inside our spec directory.

require "rspec"
require "selenium"
require "capybara/rspec"

Capybara.default_driver = :selenium

We’ve now setup our rspec folders, our Gemfile with dependencies, and installed them. Now we can write the test that will save John a ton of time.

Chrome comes with a simple tool to get XPath paths so we’re going to use that to get the XPath for the search button. Right click on the “Google Search” button and click Inspect element

Right click on the highlighted element and hit Copy XPath.

Now we’re going to make our spec file and call it homepage_spec.rb and locate it under spec\integration.

Here is a picture showing the directory structure and files:

Here is the spec file with comments explaining each part:

# This loads the spec helper file that we required everything in
require "spec_helper"

# This is the outer level description of the test
# For this example it describes going to the homepage of Google.com
# Setting the feature type is necessary if you have
# Capybara specs outside of the spec\features folder
describe "Going to google.com", :type => :feature do

  # Context is like testing a specific component of the homepage, in this case
  # it's the search button
  context "The search button" do
    # This is our actual test where we give it a meaningful test description
    it "should contain the text 'Google Search'" do
      visit "http://google.com/" # Opens Firefox and visits google
      button = find(:xpath, '//*[@id="gbqfba"]') # find an object on the page by its XPath path
      # This uses an rspec assertion saying that the string returned
      # by button.text is equal to "Google Search"
      button.text.should eq("Google Search")
    end
  end

end

Now we can tab back to our cmd.exe prompt and run our tests! rspec spec will run all your tests under the spec folder.

Things to take note of

This example scenario is showing how to automate browser testing to do end-to-end tests on a product using rspec. This is by no means everything you can do with rspec and ruby - you can SSH, hit APIs and parse JSON, and do anything you want with the ability to make assertions.
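As a quick illustration of the JSON side, here’s a minimal sketch using only Ruby’s standard library - the response body is hard-coded for illustration, where a real spec would fetch it from an API with Net::HTTP or a client gem:

```ruby
require "json"

# Pretend this came back from an API call
body = '{"status": "ok", "results": [{"name": "John"}]}'

data = JSON.parse(body)

# The same style of assertion we made against the page, but on parsed JSON
raise "unexpected status" unless data["status"] == "ok"
puts data["results"].first["name"]
```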

A lot is going on in these examples - there are plenty of resources out there on google and other websites that provide more rspec examples and ruby examples.

We also showed how to add dependencies and install them using bundler. Two of the best resources for finding libraries and other gems are RubyGems and Ruby-Toolbox - the only thing to take note of is anything marked as a native C extension (they won’t work with JRuby out of the box).

My last note is that you also need to have Firefox installed - Selenium will work with Chrome but I’ve found it to be a hassle to set up (and unless you really need Chrome, the default of Firefox will work great).

A simple ruby plugin system

Jul 4, 2013 - 2 minutes

Let’s start out with a simple directory structure:

.
├── plugin.rb
├── main.rb
└── plugins
    ├── cat.rb
    └── dog.rb

1 directory, 4 files

All the plugins we will use for our library will be loaded from plugins. Now let’s make a simple Plugin class and register our plugins.

require 'set'

class Plugin
  # Keep the plugin list inside a set so we don't double-load plugins
  @plugins = Set.new

  def self.plugins
    @plugins
  end

  def self.register_plugins
    # Iterate over each symbol in the object space
    Object.constants.each do |klass|
      # Get the constant from the Kernel using the symbol
      const = Kernel.const_get(klass)
      # Check if the plugin has a super class and if the type is Plugin
      if const.respond_to?(:superclass) and const.superclass == Plugin
        @plugins << const
      end
    end
  end
end

We’ve now made a simple class that will contain all of our plugin data when we call register_plugins.

Now for our Dog and Cat classes:

class DogPlugin < Plugin

  def handle_command(cmd)
    p "Command received #{cmd}"
  end

end

class CatPlugin < Plugin

  def handle_command(cmd)
    p "Command received #{cmd}"
  end

end

Now combine this all together in one main entry point and we have a simple plugin system that lets us send messages to each plugin through a shared method ( handle_command ).

require './plugin'
Dir["./plugins/*.rb"].each { |f| require f }
Plugin.register_plugins

# Test that we can send a message to each plugin
# (register_plugins collects classes, so instantiate each one first)
Plugin.plugins.each do |plugin|
  plugin.new.handle_command('test')
end

This is a very simple but useful way to make a plugin system to componentize projects like a chat bot for IRC.
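As a sketch of that chat bot idea, here’s one hedged way to route incoming commands through plugins - WeatherPlugin and the !weather command are invented for illustration, and registration here uses Ruby’s inherited hook as an alternative to the register_plugins scan above:

```ruby
require "set"

class Plugin
  @plugins = Set.new

  class << self
    attr_reader :plugins
  end

  # Self-register every subclass as it is defined
  def self.inherited(subclass)
    plugins << subclass
  end

  # Plugins override this to claim the commands they understand
  def handles?(cmd)
    false
  end
end

class WeatherPlugin < Plugin
  def handles?(cmd)
    cmd.start_with?("!weather")
  end

  def handle_command(cmd)
    "It is sunny" # a real bot would call out to a weather API here
  end
end

# Hand an incoming IRC line to the first plugin that claims it
def dispatch(line)
  plugin = Plugin.plugins.map(&:new).find { |p| p.handles?(line) }
  plugin ? plugin.handle_command(line) : "unknown command"
end

puts dispatch("!weather Boston")
```

The inherited hook trades the explicit constant scan for automatic registration at class-definition time; either approach gives you the same dispatch loop.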

Why setuid Is Bad and What You Can Do

Feb 26, 2013 - 11 minutes

Why setuid is Bad

setuid allows a binary to be run as a different user than the one invoking it. For example, ping needs to use low level system interfaces (socket, PF_INET, SOCK_RAW, etc) in order to function properly. We can watch this in action by starting ping in another terminal window ( ping google.com ) and then using strace to see the syscalls being made:

sudo strace -p PID and we get the following:

munmap(0x7f329e7ea000, 4096)            = 0
stat("/etc/resolv.conf", {st_mode=S_IFREG|0644, st_size=185, ...}) = 0
socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 4
connect(4, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("8.8.8.8")}, 16) = 0

We can find all setuid programs installed by issuing the command:

sudo find / -xdev \( -perm -4000 \) -type f -print0 -exec ls -l {} \;

This will find all commands that have the root setuid bit set in their permission bits.

Of particular interest is OpenBSD, where a lot of work was done to remove and switch programs away from needing setuid/setgid permissions. OpenIndiana is the worst offender and has the widest vector for attack.

setuid escalation is a common attack vector that can allow unprivileged code executed by a regular user to escalate itself to root and drop you into a root shell.

Here are a few examples:

CVE-2012-0056: Exploiting /proc/pid/mem

http://blog.zx2c4.com/749 - C code that uses a bug in the way the Linux kernel checked permissions on /proc/pid/mem and then uses that to exploit the su binary to give a root shell.

CVE-2010-3847: Exploiting via $ORIGIN and file descriptors

http://www.exploit-db.com/exploits/15274/ - By exploiting a hole in the way $ORIGIN is checked, a symlink can be made to a program that uses setuid and exec’d to obtain its file descriptors, which then allows arbitrary code injection (in this case a call to system("/bin/bash")).

More of these can be found at http://www.exploit-db.com/shellcode/ and just searching google for setuid exploits.

So you may not want to completely disable the setuid flag on all the binaries for your distribution, but we can turn on some logging to watch when they’re getting called and install a kernel patch that will secure the OS and help prevent 0-days that may prey on setuid vulnerabilities.

How to log setuid calls

I will detail the steps to do this on Ubuntu, but they should also apply to auditd on CentOS.

Let’s first install auditd: sudo apt-get install auditd

Let’s open up /etc/audit/audit.rules, and with a few tweaks with vim, we can insert the list we generated with find into the audit rule set (explanation of each flag after the jump):

# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.

# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 320

# Feel free to add below this line. See auditctl man page

-a always,exit -F path=/usr/lib/pt_chown -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/lib/eject/dmcrypt-get-device -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/lib/dbus-1.0/dbus-daemon-launch-helper -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/lib/openssh/ssh-keysign -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/sbin/uuidd -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/sbin/pppd -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/bin/at -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/bin/passwd -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/bin/mtr -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/bin/sudoedit -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/bin/traceroute6.iputils -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/bin/chsh -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/bin/chfn -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/usr/bin/newgrp -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/bin/fusermount -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/bin/umount -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/bin/ping -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/bin/ping6 -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/bin/su -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a always,exit -F path=/bin/mount -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged
-a: appends the always and exit rules. This says to always make a log at syscall entry and syscall exit.
-F
     path= says filter to the executable being called
     perm=x says filter on the program being executable
     auid>= says log all calls for users who have a UID above 500 (regular user accounts generally start at 1000)
     auid!=4294967295 sometimes a process may start before auditd, in which case it will get an auid of 4294967295
-k passes a filter key that will be put into the record log, in this case "privileged"

So now when we run ping google.com we can see a full audit trail in /var/log/audit/audit.log:

type=SYSCALL msg=audit(1361852594.621:48): arch=c000003e syscall=59 success=yes exit=0 a0=f43de8 a1=d40488 a2=ed8008 a3=7fffc9c9a150 items=2 ppid=1464 pid=1631 auid=1000 uid=1000 gid=1000 euid=0 suid=0 fsuid=0 egid=1000 sgid=1000 fsgid=1000 tty=pts1 ses=6 comm="ping" exe="/bin/ping" key="privileged"
type=EXECVE msg=audit(1361852594.621:48): argc=2 a0="ping" a1="google.com"
type=BPRM_FCAPS msg=audit(1361852594.621:48): fver=0 fp=0000000000000000 fi=0000000000000000 fe=0 old_pp=0000000000000000 old_pi=0000000000000000 old_pe=0000000000000000 new_pp=ffffffffffffffff new_pi=0000000000000000 new_pe=ffffffffffffffff
type=CWD msg=audit(1361852594.621:48):  cwd="/home/ubuntu"
type=PATH msg=audit(1361852594.621:48): item=0 name="/bin/ping" inode=131711 dev=08:01 mode=0104755 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1361852594.621:48): item=1 name=(null) inode=934 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00

Next steps: Patching and upgrading the kernel with GRSecurity

GRSecurity is an awesome tool in the security-minded system administrator’s toolbag. It will help prevent zero-days (like the proc mem exploit explained above 1 ) by restricting which areas a user can access. A full list can be seen at http://en.wikibooks.org/wiki/Grsecurity/Appendix/Grsecurity_and_PaX_Configuration_Options and http://en.wikipedia.org/wiki/Grsecurity#Miscellaneous_features, I suggest going through these and seeing if you want to continue with this.

The following below is for advanced users. Not responsible for any issues you may run into, please make sure to test this in a staging/test environment.

Here are the steps I followed to install the patch:

# Start by downloading the latest kernel
wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.2.39.tar.bz2

# Next extract it
tar xjvf linux-3.2.39.tar.bz2
cd linux-3.2.39

# Copy over your current kernel configuration:
cp -vi /boot/config-`uname -r` .config

# Updates the config file to match the old config and prompts for any new kernel options.
make oldconfig

# This will make sure modules get compiled only if they are in your kernel.
make localmodconfig

# Bring up the configuration menu
make menuconfig

Once you’re in the menu config you can browse to the Security section and go to Grsecurity and enable it. I set the configuration method to automatic and then went to Customize. For example, you can now go to Kernel Auditing -> Exec logging to turn on some additional logging of shell activities (WARNING: this will generate a lot of log activity, so decide if you want to use this or not). I suggest going through all of these and reading their menu help descriptions (when selecting one, press the ? key to bring up the help).

Now we’ll finish making the kernel and compiling it:

# Now we can compile the kernel
make -j2 # where 2 is the number of CPUs + 1

# Install and load the dynamic kernel modules
sudo make modules_install

# Finally install the kernel
sudo make install

We can now reboot and boot into our GRsecurity patched kernel!

Hopefully this article has provided some insight into what the setuid flag does, how it has and can be exploited, and what we can do to prevent this in the future.

Here are a few links to useful books on the subject of shellcode and exploits that I recommend:

Below is the list of setuid binaries on each OS

Ubuntu 12.04 LTS (22)

back to top

-rwsr-xr-x 1 root    root        31304 Mar  2  2012 /bin/fusermount
-rwsr-xr-x 1 root    root        94792 Mar 30  2012 /bin/mount
-rwsr-xr-x 1 root    root        35712 Nov  8  2011 /bin/ping
-rwsr-xr-x 1 root    root        40256 Nov  8  2011 /bin/ping6
-rwsr-xr-x 1 root    root        36832 Sep 12 18:29 /bin/su
-rwsr-xr-x 1 root    root        69096 Mar 30  2012 /bin/umount
-rwsr-sr-x 1 daemon  daemon      47928 Oct 25  2011 /usr/bin/at
-rwsr-xr-x 1 root    root        41832 Sep 12 18:29 /usr/bin/chfn
-rwsr-xr-x 1 root    root        37096 Sep 12 18:29 /usr/bin/chsh
-rwsr-xr-x 1 root    root        63848 Sep 12 18:29 /usr/bin/gpasswd
-rwsr-xr-x 1 root    root        62400 Jul 28  2011 /usr/bin/mtr
-rwsr-xr-x 1 root    root        32352 Sep 12 18:29 /usr/bin/newgrp
-rwsr-xr-x 1 root    root        42824 Sep 12 18:29 /usr/bin/passwd
-rwsr-xr-x 2 root    root        71288 May 31  2012 /usr/bin/sudo
-rwsr-xr-x 2 root    root        71288 May 31  2012 /usr/bin/sudoedit
-rwsr-xr-x 1 root    root        18912 Nov  8  2011 /usr/bin/traceroute6.iputils
-rwsr-xr-- 1 root    messagebus 292944 Oct  3 13:03 /usr/lib/dbus-1.0/dbus-daemon-launch-helper
-rwsr-xr-x 1 root    root        10408 Dec 13  2011 /usr/lib/eject/dmcrypt-get-device
-rwsr-xr-x 1 root    root       240984 Apr  2  2012 /usr/lib/openssh/ssh-keysign
-rwsr-xr-x 1 root    root        10592 Oct  5 16:08 /usr/lib/pt_chown
-rwsr-xr-- 1 root    dip        325744 Feb  4  2011 /usr/sbin/pppd
-rwsr-sr-x 1 libuuid libuuid     18856 Mar 30  2012 /usr/sbin/uuidd

CentOS 6.3 (21)

back to top

-rwsr-xr-x. 1 root root  76056 Nov  5 05:21 /bin/mount
-rwsr-xr-x. 1 root root  40760 Jul 19  2011 /bin/ping
-rwsr-xr-x. 1 root root  36488 Jul 19  2011 /bin/ping6
-rwsr-xr-x. 1 root root  34904 Jun 22  2012 /bin/su
-rwsr-xr-x. 1 root root  50496 Nov  5 05:21 /bin/umount
-rwsr-x---. 1 root dbus  46232 Sep 13 13:04 /lib64/dbus-1/dbus-daemon-launch-helper
-rwsr-xr-x. 1 root root  10272 Apr 16  2012 /sbin/pam_timestamp_check
-rwsr-xr-x. 1 root root  34840 Apr 16  2012 /sbin/unix_chkpwd
-rwsr-xr-x. 1 root root  54240 Jan 30  2012 /usr/bin/at
-rwsr-xr-x. 1 root root  66352 Dec  7  2011 /usr/bin/chage
-rws--x--x. 1 root root  20184 Nov  5 05:21 /usr/bin/chfn
-rws--x--x. 1 root root  20056 Nov  5 05:21 /usr/bin/chsh
-rwsr-xr-x. 1 root root  47520 Jul 19  2011 /usr/bin/crontab
-rwsr-xr-x. 1 root root  71480 Dec  7  2011 /usr/bin/gpasswd
-rwsr-xr-x. 1 root root  36144 Dec  7  2011 /usr/bin/newgrp
-rwsr-xr-x. 1 root root  30768 Feb 22  2012 /usr/bin/passwd
---s--x--x. 2 root root 219272 Aug  6  2012 /usr/bin/sudo
---s--x--x. 2 root root 219272 Aug  6  2012 /usr/bin/sudoedit
-rwsr-xr-x. 1 root root 224912 Nov  9 07:49 /usr/libexec/openssh/ssh-keysign
-rws--x--x. 1 root root  14280 Jan 31 06:30 /usr/libexec/pt_chown
-rwsr-xr-x. 1 root root   9000 Sep 17 05:55 /usr/sbin/usernetctl

OpenBSD 5.2 (3)

back to top

-r-sr-xr-x  1 root  bin       242808 Aug  1  2012 /sbin/ping
-r-sr-xr-x  1 root  bin       263288 Aug  1  2012 /sbin/ping6
-r-sr-x---  1 root  operator  222328 Aug  1  2012 /sbin/shutdown

OpenIndiana 11 (53)

back to top

-rwsr-xr-x   1 root     bin        64232 Jun 30  2012 /sbin/wificonfig
--wS--lr-x   1 root     root           0 Dec 11 15:20 /media/.hal-mtab-lock
-r-sr-xr-x   1 root     bin       206316 Dec 11 21:00 /usr/lib/ssh/ssh-keysign
-rwsr-xr-x   1 root     adm        12140 Jun 30  2012 /usr/lib/acct/accton
-r-sr-xr-x   1 root     bin        23200 Jun 30  2012 /usr/lib/fs/ufs/quota
-r-sr-xr-x   1 root     bin       111468 Jun 30  2012 /usr/lib/fs/ufs/ufsrestore
-r-sr-xr-x   1 root     bin       106964 Jun 30  2012 /usr/lib/fs/ufs/ufsdump
-r-sr-xr-x   1 root     bin        18032 Jun 30  2012 /usr/lib/fs/smbfs/umount
-r-sr-xr-x   1 root     bin        18956 Jun 30  2012 /usr/lib/fs/smbfs/mount
-r-sr-xr-x   1 root     bin        12896 Jun 30  2012 /usr/lib/utmp_update
-r-sr-xr-x   1 root     bin        35212 Jun 30  2012 /usr/bin/fdformat
-r-s--x--x   2 root     bin       188080 Jun 30  2012 /usr/bin/sudoedit
-r-sr-xr-x   1 root     sys        34876 Jun 30  2012 /usr/bin/su
-r-sr-xr-x   1 root     bin        42504 Jun 30  2012 /usr/bin/login
-r-sr-xr-x   1 root     bin       257288 Jun 30  2012 /usr/bin/pppd
-r-sr-xr-x   1 root     sys        46208 Jun 30  2012 /usr/bin/chkey
-r-sr-xr-x   1 root     sys        29528 Jun 30  2012 /usr/bin/amd64/newtask
-r-sr-xr-x   2 root     bin        24432 Jun 30  2012 /usr/bin/amd64/w
-r-sr-xr-x   1 root     bin      3224200 Jun 30  2012 /usr/bin/amd64/Xorg
-r-sr-xr-x   2 root     bin        24432 Jun 30  2012 /usr/bin/amd64/uptime
-rwsr-xr-x   1 root     sys        47804 Jun 30  2012 /usr/bin/at
-r-sr-xr-x   1 root     bin         8028 Jun 30  2012 /usr/bin/mailq
-r-sr-xr-x   1 root     bin        33496 Jun 30  2012 /usr/bin/rsh
-r-sr-xr-x   1 root     bin        68704 Jun 30  2012 /usr/bin/rmformat
-r-sr-sr-x   1 root     sys        31292 Jun 30  2012 /usr/bin/passwd
-rwsr-xr-x   1 root     sys        23328 Jun 30  2012 /usr/bin/atrm
-r-sr-xr-x   1 root     bin        97072 Jun 30  2012 /usr/bin/xlock
-r-sr-xr-x   1 root     bin        78672 Jun 30  2012 /usr/bin/rdist
-r-sr-xr-x   1 root     bin        27072 Jun 30  2012 /usr/bin/sys-suspend
-r-sr-xr-x   1 root     bin        29304 Jun 30  2012 /usr/bin/crontab
-r-sr-xr-x   1 root     bin        53080 Jun 30  2012 /usr/bin/rcp
-r-s--x--x   2 root     bin       188080 Jun 30  2012 /usr/bin/sudo
-r-s--x--x   1 uucp     bin        70624 Jun 30  2012 /usr/bin/tip
-rwsr-xr-x   1 root     sys        18824 Jun 30  2012 /usr/bin/atq
-r-sr-xr-x   1 root     bin       281732 Jun 30  2012 /usr/bin/xscreensaver
-r-sr-xr-x   1 root     bin      2767780 Jun 30  2012 /usr/bin/i86/Xorg
-r-sr-xr-x   1 root     sys        22716 Jun 30  2012 /usr/bin/i86/newtask
-r-sr-xr-x   2 root     bin        22020 Jun 30  2012 /usr/bin/i86/w
-r-sr-xr-x   2 root     bin        22020 Jun 30  2012 /usr/bin/i86/uptime
-rwsr-xr-x   1 root     sys        13636 Jun 30  2012 /usr/bin/newgrp
-r-sr-xr-x   1 root     bin        39224 Jun 30  2012 /usr/bin/rlogin
-rwsr-xr-x   1 svctag   daemon    108964 Jun 30  2012 /usr/bin/stclient
-r-sr-xr-x   1 root     bin        29324 Jun 30  2012 /usr/xpg4/bin/crontab
-rwsr-xr-x   1 root     sys        47912 Jun 30  2012 /usr/xpg4/bin/at
-r-sr-xr-x   3 root     bin        41276 Jun 30  2012 /usr/sbin/deallocate
-rwsr-xr-x   1 root     sys        32828 Jun 30  2012 /usr/sbin/sacadm
-r-sr-xr-x   1 root     bin        46512 Jun 30  2012 /usr/sbin/traceroute
-r-sr-xr-x   1 root     bin        18016 Jun 30  2012 /usr/sbin/i86/whodo
-r-sr-xr-x   1 root     bin        55584 Jun 30  2012 /usr/sbin/ping
-r-sr-xr-x   3 root     bin        41276 Jun 30  2012 /usr/sbin/allocate
-r-sr-xr-x   1 root     bin        37320 Jun 30  2012 /usr/sbin/pmconfig
-r-sr-xr-x   3 root     bin        41276 Jun 30  2012 /usr/sbin/list_devices
-r-sr-xr-x   1 root     bin        24520 Jun 30  2012 /usr/sbin/amd64/whodo

Securing Ubuntu

Jan 17, 2013 - 10 minutes

Table of Contents

Initial Setup

Setting up iptables and Fail2Ban

Fail2Ban
iptables rules

Make shared memory read-only

Setting up Bastille Linux

Configuring Bastille

sysctl hardening

Setting up a chroot environment

Securing nginx inside the chroot

Extras

Initial Setup

Let’s login to our new machine and take some initial steps to secure our system. For this article I’m going to assume your username is ubuntu.

If you need to, setup your sudoers file by adding the following lines to /etc/sudoers:

ubuntu ALL=(ALL:ALL) ALL # put this in the "User privilege specification" section

Edit your ~/.ssh/authorized_keys and put your public key inside it. Once your key is in place, make sure you can log in without a password.
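If your key isn't on the server yet, `ssh-copy-id` handles this for you; the manual equivalent below makes the permission requirements explicit (the hostname is a placeholder, and sshd will silently ignore an authorized_keys file with loose permissions):

```shell
# Easiest: from your local machine ("ubuntu@your-server" is a placeholder)
ssh-copy-id ubuntu@your-server

# Manual equivalent, showing the permissions sshd expects
ssh ubuntu@your-server 'mkdir -p ~/.ssh && chmod 700 ~/.ssh \
  && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys' \
  < ~/.ssh/id_rsa.pub
```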

Open up /etc/ssh/sshd_config and make sure these lines exist to secure SSH:

# Only allow version 2 communications; version 1 has known vulnerabilities
Protocol 2
# Disable root login over ssh
PermitRootLogin no
# Load authorized keys files from a user's home directory
AuthorizedKeysFile  %h/.ssh/authorized_keys
# Don't allow empty passwords to be used to authenticate
PermitEmptyPasswords no
# Disable password auth; you must use ssh keys
PasswordAuthentication no

Keep your current session open and restart sshd:

sudo service ssh restart

Make sure you can login from another terminal. If you can, move on.

Now we need to update and upgrade to make sure all of our packages are current, and install two prerequisites for later in the article: build-essential and ntp.

sudo apt-get update
sudo apt-get install build-essential ntp
sudo apt-get upgrade
sudo reboot

Setting up iptables and Fail2Ban

Fail2Ban

sudo apt-get install fail2ban

Open up the fail2ban config at /etc/fail2ban/jail.conf and change the ban time, destemail, and maxretry:

[DEFAULT]
ignoreip = 127.0.0.1/8
bantime  = 3600
maxretry = 2
destemail = [email protected]
action = %(action_mw)s

[ssh]

enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 2

Now restart fail2ban.

sudo service fail2ban restart

If you try to log in from another machine and fail, you should see the offending IP in iptables.

# sudo iptables -L
Chain fail2ban-ssh (1 references)
target     prot opt source               destination
DROP       all  --  li203-XX.members.linode.com  anywhere
RETURN     all  --  anywhere             anywhere

iptables Rules

Here are my default iptables rules. They open ports 80 and 443 for HTTP/HTTPS, allow port 22 and ping, log all denied calls, and reject everything else. If you have other services you need to run, such as a game server, you'll have to add rules to open those ports in the iptables config.

/etc/iptables.up.rules

*filter

# Accept all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow all outbound traffic
# You could modify this to only allow certain traffic
-A OUTPUT -j ACCEPT

# Allow HTTP and HTTPS connections from anywhere (the normal ports for websites)
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
# Allow SSH connections for script kiddies
# THE --dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
-A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

# Now you should read up on iptables rules and consider whether ssh access
# for everyone is really desired. Most likely you will only allow access from certain IPs.

# Allow ping
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

# Log iptables denied calls (view via the 'dmesg' command)
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

# Reject all other inbound - default deny unless explicitly allowed policy:
-A INPUT -j REJECT
-A FORWARD -j REJECT

COMMIT

We can load that up into iptables:

sudo iptables-restore < /etc/iptables.up.rules

Make sure it loads on boot by putting it into the if-up scripts: /etc/network/if-up.d/iptables

#!/bin/sh
iptables-restore < /etc/iptables.up.rules

Now make it executable:

sudo chmod +x /etc/network/if-up.d/iptables

Rebooting here is optional; I usually reboot after major changes to make sure everything comes back up properly.

If you’re getting hit by scanners or brute-force attacks, you’ll see a line similar to this in your /var/log/syslog:

Jan 18 03:30:37 localhost kernel: [   79.631680] iptables denied: IN=eth0 OUT= MAC=04:01:01:40:70:01:00:12:f2:c6:e8:00:08:00 SRC=87.13.110.30 DST=192.34.XX.XX LEN=64 TOS=0x00 PREC=0x00 TTL=34 ID=57021 DF PROTO=TCP SPT=1253 DPT=135 WINDOW=53760 RES=0x00 SYN URGP=0

Make shared memory read-only

A common exploit vector is shared memory (which can be used to change the UID of running programs and perform other malicious actions). It can also be used as a place to drop files once an initial break-in has been made. An example of one such exploit is available here.

Open /etc/fstab and add this line:

tmpfs     /dev/shm     tmpfs     defaults,ro     0     0

Once you do this you need to reboot.
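After the reboot you can verify the mount really is read-only. A quick check (the exact `mount` output varies by system, but the flags should include `ro`):

```shell
# Show the mount flags for /dev/shm -- should include "ro"
mount | grep /dev/shm

# A write attempt should now fail
touch /dev/shm/test 2>/dev/null && echo "still writable!" || echo "read-only, as expected"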

Setting up Bastille Linux

The Bastille Hardening program “locks down” an operating system, proactively configuring the system for increased security and decreasing its susceptibility to compromise. Bastille can also assess a system’s current state of hardening, granularly reporting on each of the security settings with which it works.

Bastille: Installation and Setup

sudo apt-get install bastille # choose Internet site for postfix
# configure bastille
sudo bastille

After you run that command you’ll be prompted to configure your system, here are the options I chose:

Configuring Bastille

  • File permissions module: Yes (suid)
  • Disable SUID for mount/umount: Yes
  • Disable SUID on ping: Yes
  • Disable clear-text r-protocols that use IP-based authentication? Yes
  • Enforce password aging? No (situation dependent, I have no users accessing my machines except me, and I only allow ssh keys)
  • Default umask: Yes
  • Umask: 077
  • Disable root login on tty’s 1-6: No
  • Password protect GRUB prompt: No (situation dependent, I’m on a VPS and would like to get support in case I need it)
  • Password protect su mode: Yes
  • default-deny on tcp-wrappers and xinetd? No
  • Ensure telnet doesn’t run? Yes
  • Ensure FTP does not run? Yes
  • display authorized use message? No (situation dependent, if you had other users, Yes)
  • Put limits on system resource usage? Yes
  • Restrict console access to group of users? Yes (then choose root)
  • Add additional logging? Yes
  • Setup remote logging if you have a remote log host, I don’t so I answered No
  • Setup process accounting? Yes
  • Disable acpid? Yes
  • Deactivate nfs + samba? Yes (situation dependent)
  • Stop sendmail from running in daemon mode? No (I have this firewalled off, so I’m not concerned)
  • Deactivate apache? Yes
  • Disable printing? Yes
  • TMPDIR/TMP scripts? No (if a multi-user system, yes)
  • Packet filtering script? No (we configured the firewall previously)
  • Finished? YES! & reboot

You can verify some of these changes by testing them out, for instance, the SUID change on ping:

Bastille: Verifying changes

[email protected]:~$ ping google.com
ping: icmp open socket: Operation not permitted
[email protected]:~$ sudo ping google.com
PING google.com (74.125.228.72) 56(84) bytes of data.
64 bytes from iad23s07-in-f8.1e100.net (74.125.228.72): icmp_req=1 ttl=55 time=9.06 ms
^C
--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 9.067/9.067/9.067/0.000 ms

Sysctl hardening

Since our machine isn’t running as a router and is going to be running as an application/web server, there are additional steps we can take to secure the machine. Many of these are from the NSA’s security guide, which you can read in its entirety here.

/etc/sysctl.conf (source: http://www.nsa.gov/ia/_files/os/redhat/rhel5-guide-i731.pdf)

# Protect against ICMP attacks
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Turn on syncookies for SYN flood attack protection
net.ipv4.tcp_syncookies = 1

# Log suspicious packets, such as spoofed, source-routed, and redirect
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1

# Disable these ipv4 features; they have few legitimate uses
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0

# Enable RFC-recommended source validation (don't use on a router)
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Make sure no one can alter the routing tables
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0

# Host only (we're not a router)
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Turn on Exec Shield (a Red Hat feature; Ubuntu kernels will report this key
# as unknown) and address space randomization
kernel.exec-shield = 1
kernel.randomize_va_space = 1

# Tune IPv6
net.ipv6.conf.default.router_solicitations = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 0
net.ipv6.conf.default.accept_ra_pinfo = 0
net.ipv6.conf.default.accept_ra_defrtr = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.default.max_addresses = 1

# Optimization for port use for load balancers
# Increase system file descriptor limit
fs.file-max = 65535

# Allow for more PIDs (to reduce rollover problems); may break programs that assume 32768
kernel.pid_max = 65536

# Increase system IP port limits
net.ipv4.ip_local_port_range = 2000 65000

# Increase TCP max buffer size settable using setsockopt()
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608

# Increase Linux auto-tuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1

After making these changes you should reboot.
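A reboot isn't strictly required for these settings; sysctl can apply them immediately (keys your kernel doesn't have, like kernel.exec-shield on Ubuntu, just log an error and are skipped):

```shell
# Apply everything in /etc/sysctl.conf right now
sudo sysctl -p

# Spot-check a single value afterwards
sysctl net.ipv4.tcp_syncookies
```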

Setting up a chroot environment

We’ll be setting up a chroot environment to run our web server and applications in. Chroots provide isolation from the rest of the operating system, so even in the event of an application compromise, damage can be mitigated.

chroot: Installation and Setup

sudo apt-get install debootstrap dchroot

Now add this to your /etc/schroot/schroot.conf file, precise is the release of Ubuntu I’m using, so change it if you need to:

/etc/schroot/schroot.conf

[precise]
description=Ubuntu Precise LTS
location=/var/chroot
priority=3
users=ubuntu
groups=sbuild
root-groups=root

Now bootstrap the chroot with a minimal Ubuntu installation:

sudo debootstrap --variant=buildd --arch amd64 precise /var/chroot/ http://mirror.anl.gov/pub/ubuntu/
sudo cp /etc/resolv.conf /var/chroot/etc/resolv.conf
sudo mount -o bind /proc /var/chroot/proc
sudo chroot /var/chroot/
apt-get update
apt-get install ubuntu-minimal

Add the following to /etc/apt/sources.list inside the chroot:

deb http://archive.ubuntu.com/ubuntu precise main
deb http://archive.ubuntu.com/ubuntu precise-updates main
deb http://security.ubuntu.com/ubuntu precise-security main
deb http://archive.ubuntu.com/ubuntu precise universe
deb http://archive.ubuntu.com/ubuntu precise-updates universe

Let’s test out our chroot and install nginx inside of it:

apt-get update
apt-get install nginx

Securing nginx inside the chroot

The first thing we will do is add a www user for nginx to run under:

Adding an application user

sudo chroot /var/chroot
useradd www -d /home/www
mkdir /home/www
chown -R www:www /home/www

Open up /etc/nginx/nginx.conf and make sure you change user to www inside the chroot:

user www;

We can now start nginx inside the chroot:

sudo chroot /var/chroot
service nginx start

Now if you go to http://your_vm_ip/ you should see “Welcome to nginx!” running inside your fancy new chroot.

We also need to setup ssh to run inside the chroot so we can deploy our applications more easily.

Chroot: sshd

sudo chroot /var/chroot
apt-get install openssh-server udev

Since we already have SSH for the main host running on 22, we’re going to run SSH for the chroot on port 2222. We’ll copy over our config from outside the chroot to the chroot.

sshd config

sudo cp /etc/ssh/sshd_config /var/chroot/etc/ssh/sshd_config

Now open the chroot's copy at /var/chroot/etc/ssh/sshd_config and change the bind port to 2222.
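The relevant line in the chroot's sshd_config ends up looking like this (everything else can stay as copied from the host):

```
# /var/chroot/etc/ssh/sshd_config -- the chroot sshd listens on its own port
Port 2222
```

Once the chroot sshd is running you'd connect with `ssh -p 2222 www@your-server` (hostname is a placeholder).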

We also need to add the rules to our firewall script: /etc/iptables.up.rules

# Chroot ssh
-A INPUT -p tcp -m state --state NEW --dport 2222 -j ACCEPT

Now make a startup script for the chroot in /etc/init.d/chroot-precise:

#!/bin/sh
mount -o bind /proc /var/chroot/proc
mount -o bind /dev /var/chroot/dev
mount -o bind /sys /var/chroot/sys
mount -o bind /dev/pts /var/chroot/dev/pts
chroot /var/chroot service nginx start
chroot /var/chroot service ssh start

Set it to executable and to start at boot:

sudo chmod +x /etc/init.d/chroot-precise
sudo update-rc.d chroot-precise defaults

Next is to put your public key inside the .ssh/authorized_keys file for the www user inside the chroot so you can ssh and deploy your applications.

If you want, you can reboot your server now to ensure nginx and ssh come up properly. If the chroot isn't running right now, you can start it with sudo /etc/init.d/chroot-precise.

You should now be able to ssh into your chroot and main server without a password.

Extras

I would also like to mention the GRSecurity kernel patch. I tried several times to install it (two different versions were released while I was writing this), and both made the kernel unable to compile. Hopefully these bugs will be fixed and I'll be able to update this article with notes on setting up GRSecurity as well.

I hope this article proved useful to anyone trying to secure an Ubuntu system, and if you liked it please share it!

Rb RFO Status is a simple system for posting status updates to your team or customers in an easy-to-understand format, so there is no delay in reporting a reason for outage (RFO). It is modeled loosely after the Heroku Status Page.

Source: https://github.com/bluescripts/rb_rfo_status

Download: https://s3.amazonaws.com/josh-opensource/rb_rfo_status-0.1.war

It is licensed under the MIT License so do whatever you want with it!

I’ve already opened up a few issues on Github that are enhancements, but this serves as a super simple application to deploy to keep your customers and team informed of system states.

Installation

Download the .war file and deploy it in your favorite container (Tomcat, etc). Once the war file is extracted you can modify the config settings and start it.

To run migrations on an extracted WAR file:

cd rb_rfo_status/WEB-INF
sudo RAILS_ENV=production BUNDLE_WITHOUT=development:test BUNDLE_GEMFILE=Gemfile GEM_HOME=gems java -cp lib/jruby-core-1.7.1.jar:lib/jruby-stdlib-1.7.1.jar:lib/gems-gems-activerecord-jdbc-adapter-1.2.2.1-lib-arjdbc-jdbc-adapter_java.jar:lib/gems-gems-jdbc-mysql-5.1.13-lib-mysql-connector-java-5.1.13.jar org.jruby.Main -S rake db:migrate

Screenshots

Homepage

Creating an Incident

Updating an incident

A resolved incident

Chef is awesome. Being able to recreate your entire environment from a recipe is incredibly powerful, and I started using Chef a few months ago. When I initially configured the Chef server I hadn't paid much attention to the CouchDB portion of it until I had a chef-server hiccup. Here are a few things to watch out for when running chef-server:

  • Set up CouchDB compaction - my Chef install had a CouchDB size of 30+GB (after compaction it was only a few megabytes).
  • When resizing instances, make sure you set up RabbitMQ to use a NODENAME. If you don't, you'll run into an issue with RabbitMQ losing the databases that were set up (by default they're based on hostname, so if you resize an EC2 instance the hostname may change, and you'll either have to do some moving around or manually set the NODENAME to the previous hostname).
  • Clients may fail to validate after this, requiring a regeneration of validation.pem, which is fine since this file is only used for the initial bootstrap of a server.
  • Make sure you run the chef recipes you set up (for instance, monitoring) on your chef-server too.
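On the CouchDB point: compaction can be triggered by hand through CouchDB's HTTP API. The database name and port below are the stock chef-server defaults; adjust them if your setup differs:

```shell
# Trigger compaction of the "chef" database on the local CouchDB
curl -H "Content-Type: application/json" -X POST http://localhost:5984/chef/_compact

# Poll the database info; compact_running flips back to false when it's done
curl http://localhost:5984/chef
```

Putting the first command in cron is an easy way to keep the database from ballooning again.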

I hope these tips will be helpful to other people when they run into a Chef/CouchDB/RabbitMQ issue after a server resize or hostname change. Another really helpful place is #chef on freenode’s IRC servers.