Changing Kerberos Expiration To Test Ticket Renewal

If you’re testing a Kerberos-enabled Hadoop cluster and want to make sure that ticket renewal is working as expected, you’ll probably want to shorten the ticket lifetime so that you don’t have to wait 24 hours for each test.

If you’re using a krb5-server (MIT KDC) as the authentication source for your Hadoop cluster, you can run the following commands to change the default ticket lifetime. First, look at the current settings on the krbtgt principal:

kadmin.local: getprinc krbtgt/EXAMPLE.COM@EXAMPLE.COM

Principal: krbtgt/EXAMPLE.COM@EXAMPLE.COM
Expiration date: [never]
Last password change: [never]
Password expiration date: [none]
Maximum ticket life: 1 days 00:00:00
Maximum renewable life: 365 days 00:00:00
Last modified: Wed Nov 19 00:09:37 UTC 2014 (quick/admin@EXAMPLE.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 7
Key: vno 1, aes256-cts-hmac-sha1-96, no salt
Key: vno 1, aes128-cts-hmac-sha1-96, no salt
Key: vno 1, des3-cbc-sha1, no salt
Key: vno 1, arcfour-hmac, no salt
Key: vno 1, des-hmac-sha1, no salt
Key: vno 1, des-cbc-md5, no salt
Key: vno 1, des-cbc-crc, no salt
MKey: vno 1
Attributes:
Policy: [none]

To change the maximum ticket life from 1 day to 30 minutes, issue the following command:

kadmin.local:  modprinc -maxlife "30 minutes" krbtgt/EXAMPLE.COM@EXAMPLE.COM
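If you also want a short renewable lifetime for testing, modprinc takes a -maxrenewlife option as well, and you can re-run getprinc to confirm both changes took effect (the "7 days" value here is just an example):

kadmin.local:  modprinc -maxrenewlife "7 days" krbtgt/EXAMPLE.COM@EXAMPLE.COM
kadmin.local:  getprinc krbtgt/EXAMPLE.COM@EXAMPLE.COM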

You should then be able to verify that the settings have taken effect:

[root@host ~]# kinit -k -t hdfs.keytab hdfs
[root@host ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs@EXAMPLE.COM
Valid starting     Expires            Service principal
08/22/16 20:24:18  08/22/16 20:54:18  krbtgt/EXAMPLE.COM@EXAMPLE.COM
renew until 08/29/16 20:24:18

You should see that the "Expires" time is 30 minutes in the future. The ticket (not the keytab) is what gets renewed: it can be renewed every 30 minutes until the renewable lifetime, seven days in this example, runs out.

The interesting point that isn’t well documented is that there is a hierarchy to the settings in Kerberos. You can modify each individual principal’s maxlife and maxrenewlife, but if a higher-level principal has stricter settings, those will be used. The krbtgt principal is the top-level principal; changes made there apply to all other principals.
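For example (hdfs@EXAMPLE.COM is just an illustrative principal here):

kadmin.local:  modprinc -maxlife "30 minutes" krbtgt/EXAMPLE.COM@EXAMPLE.COM
kadmin.local:  modprinc -maxlife "2 hours" hdfs@EXAMPLE.COM

Even though hdfs@EXAMPLE.COM asks for two hours, any ticket it gets will still be capped at the 30 minutes set on krbtgt.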


Debugging Kerberos Issues

Kerberos is one of those items that I can’t imagine anyone ever actually enjoys. It’s a necessary evil if you want to have a secure Hadoop cluster setup. Without it, the permissions checks are trivially easy to sidestep.

When you’re running into an issue with an application running on this type of setup, the first thing you’ll want to do is turn on debug logging. If you start your application with the parameter:

-Dsun.security.krb5.debug=true

Then you’ll get a whole lot of debug output on stdout. Issues often come up around ticket renewal, and in most setups ticket renewal happens every 24 hours, so it helps to pipe the console log to a file that you can go over later. An example of how you can start an application to accomplish this is:

nohup ./bin/app-start >> logs/app.log 2>&1 &

This command starts your application and redirects both stdout and stderr to the app.log file. It also appends to the log instead of overwriting it, so you won’t lose the log on restarts.
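Putting the two pieces together, a launch command might look roughly like this (my-app.jar and logs/app.log are placeholders for whatever your app-start script actually runs and writes):

nohup java -Dsun.security.krb5.debug=true -jar my-app.jar >> logs/app.log 2>&1 &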

When developing applications that communicate with a secure Hadoop cluster, I’ve found it helpful to change the default ticket lifetime to 10 minutes. Go much lower than that and you’ll start to see unrelated errors, but a short lifetime makes it much easier to verify that ticket renewal is happening correctly.

Don’t forget to change your hdfs / mapred config when a drive fails with Hadoop

I just ran into a little gotcha while running a huge job against my CDH4 cluster. One of the servers lost a drive at the 50% mark. Each server has four 1 TB drives mounted, so losing one isn’t a huge deal: with the newer "dfs.datanode.failed.volumes.tolerated" setting at 2, the datanode was able to keep right on going without impacting the larger job.
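For reference, that setting lives in hdfs-site.xml and looks something like this (a value of 2 out of my four data directories; adjust it to your own drive count):

<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>2</value>
</property>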

To get ready to replace the drive later, I unmounted it, leaving only the mount point directory. Then I made the mistake of bouncing the datanode so that I could start collecting Ganglia stats (which are great, by the way, and really easy to set up).

When it came back up, the datanode decided that the mount point was available again, never mind that it now sat on the root device. So a day later, when the root device filled up and the tasks on that server started failing, I realized what I had done wrong.

If you’re going to take a drive out temporarily, take it out of the config as well, or else you’re going to forget about it and get yourself into trouble.
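In other words, if the failed drive was mounted at, say, /data/3 (a made-up path for illustration), drop it from the datanode’s data directory list in hdfs-site.xml until the drive is actually replaced:

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/4/dfs/dn</value>
</property>

Restart the datanode after the edit; it’s restarting with the stale directory still in the config that leads to the problem above.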

CDH3 HBase

I’ve spent the last several days playing with and configuring CDH3B2 with HBase. My test cluster is an ESXi server with 20 GB of RAM booting a bunch of CentOS 5 VMs. It’s definitely not something you’d want to run in production, but it works great for testing, and it actually helps expose some bugs thanks to the slow I/O of running a bunch of servers on the same disk.

My production cluster is still running HBase 0.20.3 and has performed flawlessly. It has a table holding half a billion content items, taking up several terabytes of space, and it has made it through several disk failures without a hitch. However, I’m looking at the Cloudera distro because I’m not happy with having to repackage everything, test it out, push it to the live cluster, and then retest to make sure everything made it through properly every time a new release comes out. I’m hoping that using the Cloudera distro will simplify a lot of this. I’m also hoping that, with the patches they include and the testing they do, I’ll end up with a more stable cluster. I had a real bad experience with the 0.20.4 update, which is why production is still on 0.20.3.

One major problem that I still have, even with the Cloudera distro, is that the LZO code isn’t included due to licensing problems. I’m really hoping that the new compression options can be packaged up soon so that these libraries don’t need to be maintained separately any more.

A couple of quick notes from my testing:

  • The way that HBase interacts with and manages ZooKeeper has changed; it’s now more like running an unmanaged ZooKeeper setup. Not only did the ZooKeeper settings in hbase-site.xml need to be correct on all of the servers, but when I ran map-reduce jobs against HBase it appeared to be reading /etc/zookeeper/zoo.cfg, and that file needed to be correct on all of the regionservers as well. I had initially only edited it on the server running my ZooKeeper instance. I also added the file to the HADOOP_CLASSPATH in hadoop-env.sh, though I’m not sure that’s required.
  • I wish there were a better way to manage the HADOOP_CLASSPATH and its inclusion of the HBase and ZooKeeper jar files. I’m trying to find a way so that it doesn’t need to be edited every time I update the software to a new version (see the sketch after this list).
  • I had to change the value of dfs.datanode.socket.write.timeout. On the live cluster I have it set to 0, which is supposed to mean no timeout, but it appears there’s a bug in this latest version where that value isn’t respected properly, so I just set it to a high number instead.
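As a rough sketch of what I mean in the classpath bullet above, this is the kind of hadoop-env.sh line I’d like to settle on: point at unversioned symlinks (if your packages install them) and at the ZooKeeper config directory, rather than at version-specific jar names. The exact paths here are assumptions based on my layout:

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/lib/hbase/hbase.jar:/usr/lib/zookeeper/zookeeper.jar:/etc/zookeeper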

Hadoop Lives

I was a bit concerned after the announcement that Microsoft would be taking over search development for Yahoo that Hadoop would suffer. But according to this, development will continue, if not be enhanced.

I’ve been using Hadoop and HBase with ReadPath and have been thoroughly impressed with the results. Prior to testing it out, I had read many comments about HBase being usable for batch processing but not really up to the task of replacing a database for live requests.

As an initial test, I’ve replaced ReadPath’s dictionary system, which had been running on MySQL, with HBase. The system responds to hundreds of requests a second and returns in ~10 ms, which is on par with what MySQL was doing. The big wins are with scalability, though: I currently have a five-server cluster on which I’m seeing a 100x increase in inserts/updates over MySQL with a much larger data set. As I convert other databases to use HBase and add those servers into the Hadoop cluster, performance should only increase.