Chef Re-convergence within a cookbook

I've been using Chef for many years now, and I just figured out, through some searching and troubleshooting, how to reconverge a node during a Chef run. What I mean by this is letting the Chef run converge as usual, then asking it later in the recipe to converge again before finishing out the rest of the cookbook.

The example where I used this method: I needed to download a remote file that contained my application's version. I couldn't just assign that version to an attribute or a Ruby variable, because both would have been evaluated before the file was downloaded. So I needed a way to reconverge mid-run: download the file, set the attribute, and carry on. Below is the code to do just that:

#Initialize a new chef client object
client = Chef::Client.new
client.run_ohai #you probably only need this if you need to get new data
client.load_node
client.build_node

#Initialize a new run context to evaluate later
run_context = if client.events.nil?
  Chef::RunContext.new(client.node, {})
else
  Chef::RunContext.new(client.node, {}, client.events)
end

#Initialize a chef resource that downloads the remote file
r = Chef::Resource::RemoteFile.new("myapplicationsversion.json", run_context)
r.source "https://example.com/cdn/myapplicationsversion.json"
r.run_action(:create)

#Converge and run the new resources
runner = Chef::Runner.new(run_context)
runner.converge

#Anything below this line only runs after the convergence above has succeeded.  Once complete, you can use the data from the downloaded json file.
version = ::JSON.parse(::File.read("myapplicationsversion.json"))

node.set[:myapplication][:version] = version #node.set auto-vivifies the nested attribute

This is a very simple example, and admittedly a hack of a Chef run, but it may be helpful in other situations. You may need it when adding a new interface, a storage mount, or anything else Chef configures mid-run but doesn't yet capture in ohai or custom attributes.
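For instance, here is a rough sketch of the storage-mount case using the same pattern. The device, mount point, and attribute names are made up for illustration; the idea is to mount the filesystem at compile time with run_action, then re-run ohai so the new filesystem shows up in node data:

#Mount a new filesystem using the same compile-time run_action trick
m = Chef::Resource::Mount.new("/data", run_context)
m.device "/dev/xvdf"
m.fstype "ext4"
m.run_action(:mount)

#Re-run ohai so the freshly mounted filesystem is visible to the rest of the run
ohai = Ohai::System.new
ohai.all_plugins
node.set[:myapplication][:data_mount] = ohai.data[:filesystem]["/dev/xvdf"]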

However, I should say that if you are reaching for this hack, you may want to look over your design and think about whether it could be done a better way. Right after completing this code, I realized I could do it a better way and threw the code out. That said, you may be in a bind with requirements that force you to do things a certain way, so hopefully this helps someone.

Linux - when you don't need a proxy

I've seen many examples of people using apache/nginx/haproxy to proxy requests from one port to another just because the application isn't listening on a privileged port such as 80 or 443, both of which are the defaults for HTTP and HTTPS in most web browsers. The reasoning could be lack of knowledge, planned future use of the proxy server, or plain stubbornness.

Below is an example of how to avoid a proxy server entirely in case you are just moving requests from privileged port 80 to 8080. When that is all you need, without any extra redirecting or anything else, iptables is your friend.

/sbin/iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
/sbin/iptables -t nat -I PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
/sbin/iptables-save > /etc/sysconfig/iptables
chkconfig --level 35 iptables on
service iptables restart

To explain: all iptables is doing is adding two rules to the PREROUTING chain of the nat table, one that seamlessly redirects TCP requests on port 80 to 8080 and another that does the same for 443 to 8443. Then we save the rules, add iptables to startup, and restart the service to make sure everything takes effect and survives a system restart.

Did you know? Sysctl.d

Maybe I'll start making small posts like this, where I do a quick write-up about a particular technology... we'll see how it works out...

Did you know that by default in RHEL, CentOS, Fedora, and Amazon Linux, files in /etc/sysctl.d are evaluated automatically? I only found this out after reading a bug report.

Apparently the code that evaluates sysctl.d files lives in /etc/rc.sysinit, though I have never seen an /etc/sysctl.d directory on a default install. So if you have an application that needs changes made via sysctl, create that directory and put the application's configuration in there. Like so:

ls -la /etc/sysctl.d

elasticsearch
logstash
tomcat
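
Each file is just plain key = value sysctl settings. As an illustration (the values here are only an example, not a recommendation), the elasticsearch file might look like this:

#/etc/sysctl.d/elasticsearch
vm.max_map_count = 262144
vm.swappiness = 1

You can also apply a single file immediately with sysctl -p /etc/sysctl.d/elasticsearch rather than waiting for the next boot.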

AWS EC2 - Create an ELB with Fog

I like to post things that are helpful and not highly documented. Here is an example of building out ELBs via the Ruby gem fog:

require 'fog'
connection = Fog::AWS::ELB.new(:aws_access_key_id => access_key, :aws_secret_access_key => secret_key, :region => "us-east-1")

#Build the Load Balancer
availability_zones = ["us-east-1d", "us-east-1b", "us-east-1c"]
listeners = [ { "Protocol" => "HTTP", "LoadBalancerPort" => 80, "InstancePort" => 8080, "InstanceProtocol" => "HTTP" } ]
result = connection.create_load_balancer(availability_zones, "mynewlb", listeners)

if result.status != 200
  puts "ELB creation failed!"
end

#Let's get the new load balancer's object
elb = connection.load_balancers.get("mynewlb")

#Let's configure a faster health check
health_check_config = { "HealthyThreshold" => 2, "Interval" => 30, "Target" => "TCP:8080", "Timeout" => 5, "UnhealthyThreshold" => 3 } #Target is checked against the instance port (8080 here), not the load balancer port
health_check_result = connection.configure_health_check("mynewlb", health_check_config)

if health_check_result.status != 200
  puts "Failed health check configuration request"
end

Now you should have a new ELB in Amazon EC2 with some basic health checks and a listener on 80 pointing to 8080.
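
From there you would normally attach instances to the load balancer. A minimal sketch using fog's load balancer model, with a made-up instance id:

#Register an instance with the new load balancer (the instance id here is made up)
elb = connection.load_balancers.get("mynewlb")
elb.register_instances(["i-12345678"])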

AWS S3 - Fog Stream Download and Upload

I couldn't find an exact example of streaming a file download from S3 to a local file with the Ruby gem fog, so I'm posting one up here. Below is code to connect to S3, download a file, and check that the MD5 matches.

require 'fog'
require 'digest/md5'
connection = Fog::Storage.new({ :provider => "AWS", :aws_access_key_id => access_key, :aws_secret_access_key => secret_key })
bucket = connection.directories.new(:key => "myS3bucket")

open("mydownloadedfile.txt", 'w') do |f|
  bucket.files.get("mydownloadedfile.txt") do |chunk,remaining_bytes,total_bytes|
    f.write chunk
  end
end

downloaded_file_md5 = Digest::MD5.file("mydownloadedfile.txt").hexdigest #Digest::MD5.file streams the file, so it won't take up much memory
remote_file_md5 = connection.head_object("myS3bucket", "mydownloadedfile.txt").data[:headers]["ETag"].gsub('"', '')

if remote_file_md5 == downloaded_file_md5
  puts "MD5 matched!"
else
  puts "MD5 match failed!"
end

Be careful about using the ETag! If you upload a file larger than roughly 30-50 MB via the S3 console it becomes a multipart upload, so you won't get a plain MD5 back; you'll get an MD5 of all of the chunked MD5s, with the part count appended. See the Amazon docs for more information. And any file larger than 5 GB uploaded from the console, fog, etc. will not have a simple MD5 either, since it has to be uploaded in parts.
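
If you ever need to verify one of those multipart ETags, the usual trick is to MD5 each part, MD5 the concatenation of those binary digests, and append the part count. A rough sketch, assuming you know the part size the uploader used (the 8 MB here is just a guess):

require 'digest/md5'

#Compute an S3-style multipart ETag: md5 of the concatenated part md5s plus "-<part count>"
part_size = 8 * 1024 * 1024 #assumed part size; it must match whatever the uploader actually used
part_digests = []

File.open("mydownloadedfile.txt", 'rb') do |f|
  while part = f.read(part_size)
    part_digests << Digest::MD5.digest(part)
  end
end

multipart_etag = "#{Digest::MD5.hexdigest(part_digests.join)}-#{part_digests.count}"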

And this is how you stream an upload to S3 using fog:

require 'fog'
require 'digest/md5'
connection = Fog::Storage.new({ :provider => "AWS", :aws_access_key_id => access_key, :aws_secret_access_key => secret_key })
bucket = connection.directories.new(:key => "myS3bucket")

local_file_md5 = Digest::MD5.file("myfiletoupload.txt").hexdigest
s3_file_object = bucket.files.create(:key => "myfiletoupload.txt", :body => File.open("myfiletoupload.txt"), :content_type => "text/plain", :acl => "private")

if s3_file_object.etag != local_file_md5
  puts "MD5 match failed!"
else
  puts "MD5 matched!"
end

Much easier than the download, huh? Passing an open File as the body lets fog do all the chunking work for you.
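
For really large files you probably don't want a single PUT at all. Fog can also do the multipart upload for you if you give the file a chunk size; a minimal sketch, assuming the :multipart_chunk_size option (note the resulting ETag is then a multipart ETag, so the simple MD5 comparison above no longer applies):

#Stream a large file up in 10MB parts; fog handles the multipart upload calls (file name is made up)
big_file = bucket.files.create(:key => "mybigfile.bin", :body => File.open("mybigfile.bin"), :multipart_chunk_size => 10 * 1024 * 1024, :acl => "private")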