Autoforwarding Security Credentials In Storm

(HDP2.2, Storm, HBase, HDFS, but not hive unfortunately, security, kerberos, authentication, hadoop, java)

Using Kerberos with Storm is, like most things involving Kerberos, an experience akin to pulling teeth with a pair of tweezers: it hurts and it goes on for a long time. Can you get the keytabs generated and into the right places? And what does that mean for your Storm supervisor nodes? Wouldn’t it be lovely if Storm could simply hand out Hadoop Kerberos credentials to a topology when it is submitted and Everything Just Works™?

Well, if you’re attempting to use HBase or HDFS in your Bolts, then things are looking up for you. You can use the AutoHBase and AutoHDFS classes to do exactly that, and then the only keytab you need to worry about is the one on your Nimbus server.

Except

It’s never quite that easy. Mainly, the thing you have to be aware of is this: the class hierarchy of AutoHDFS and AutoHBase has changed in the last few months, so if you’re using a platform like Cloudera, MapR, or Hortonworks, you may find yourself staring at a terminal wondering why on Earth Kerberos isn’t working…and like all things Kerberos, the errors are obtuse and unhelpful.

Anyway, the old hierarchy is:

backtype.storm.security.auth.hadoop.AutoHDFS
backtype.storm.security.auth.hadoop.AutoHBase

and the new locations are:

org.apache.storm.hdfs.common.security.AutoHDFS
org.apache.storm.hdfs.common.security.AutoHBase
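If you’re not sure which hierarchy your distribution ships, one cheap sanity check is to probe the classpath before you build the config. Here’s a minimal sketch (the findAutoHdfsClass helper is my own invention, not part of Storm, and it assumes the relevant Storm jars are on your topology’s classpath):

import java.util.Arrays;
import java.util.List;

// Return the first AutoHDFS variant actually present on the classpath,
// so the topology config matches whatever plugin Nimbus can load.
private static String findAutoHdfsClass() {
    List<String> candidates = Arrays.asList(
            "org.apache.storm.hdfs.common.security.AutoHDFS",  // current hierarchy
            "backtype.storm.security.auth.hadoop.AutoHDFS");   // older, e.g. HDP 2.2
    for (String name : candidates) {
        try {
            Class.forName(name);
            return name;
        } catch (ClassNotFoundException e) {
            // Not this one; fall through to the next candidate.
        }
    }
    throw new IllegalStateException("No AutoHDFS class found on the classpath");
}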

Then, in your topology, set Config.TOPOLOGY_AUTO_CREDENTIALS to a list of all the credential plugins it needs access to (in this example, just HDFS, but you could simply add the HBase class to the autoCreds list and it’ll have access to HBase too):

import java.util.ArrayList;
import java.util.List;

import backtype.storm.Config;
import backtype.storm.StormSubmitter;

public static void main(String[] args) throws Exception {
    //...
    Config cfg = new Config();
    List<String> autoCreds = new ArrayList<String>();

    // Use this hierarchy for an older distribution, e.g. HDP 2.2
    autoCreds.add("backtype.storm.security.auth.hadoop.AutoHDFS");
    // This is the current hierarchy
    //autoCreds.add("org.apache.storm.hdfs.common.security.AutoHDFS");
    // (add the AutoHBase class here as well if you want HBase credentials)
    cfg.put(Config.TOPOLOGY_AUTO_CREDENTIALS, autoCreds);

    // [...other topology and config setup...]

    StormSubmitter.submitTopology(TOPOLOGY_NAME, cfg, builder.createTopology());
}
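The pleasant side effect is that the bolts themselves need no Kerberos configuration at all: once Nimbus pushes the delegation tokens, a perfectly ordinary HdfsBolt just works. Here’s a sketch of what that might look like (the filesystem URL, output path, delimiter, and spout name are placeholders, not anything your cluster will recognise):

import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;

// No keytab and no JAAS section for the bolt: the tokens forwarded by
// AutoHDFS are all it needs to write to a kerberized cluster.
HdfsBolt hdfsBolt = new HdfsBolt()
        .withFsUrl("hdfs://namenode.example.com:8020")          // placeholder NameNode
        .withFileNameFormat(new DefaultFileNameFormat().withPath("/storm/output/"))
        .withRecordFormat(new DelimitedRecordFormat().withFieldDelimiter("|"))
        .withRotationPolicy(new FileSizeRotationPolicy(5.0f, Units.MB))  // rotate every 5 MB
        .withSyncPolicy(new CountSyncPolicy(1000));              // sync every 1000 tuples

// Wire it into the same builder you submit in main():
builder.setBolt("hdfs-bolt", hdfsBolt, 2).shuffleGrouping("some-spout");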

Then, on your Nimbus server, you need to update your storm.yaml (this example uses the current hierarchy; swap in the old class names if you’re on an older version of Storm):

nimbus.autocredential.plugins.classes: ["org.apache.storm.hdfs.common.security.AutoHDFS", "org.apache.storm.hbase.security.AutoHBase"]
nimbus.credential.renewers.classes: ["org.apache.storm.hdfs.common.security.AutoHDFS", "org.apache.storm.hbase.security.AutoHBase"]

hdfs.keytab.file: "/path/to/keytab/on/nimbus"
hdfs.kerberos.principal: "superuser@EXAMPLE.COM"
# If you're using AutoHBase too, it wants its own equivalents:
# hbase.keytab.file: "/path/to/keytab/on/nimbus"
# hbase.kerberos.principal: "superuser@EXAMPLE.COM"
nimbus.credential.renewers.freq.secs: 82800
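
(That renewer frequency isn’t arbitrary, by the way: 82800 seconds is 23 hours, which keeps Nimbus refreshing the delegation tokens comfortably inside HDFS’s default 24-hour renew interval.)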

Restart your Nimbus server, submit your topology, and watch it authenticate against secure HDFS without any further Kerberos nightmares! This time, at least. Kerberos is always out there, waiting. Waiting.
