Logstash and Playframework

I’m not sure why Google let me down when it came to hooking up Logstash and Play, but it sent me down some pretty weird paths. So I’m going to share what we did to get it working, which is pretty simple in the end.

Play uses Logback. The first rule is: don’t try to include a new version of Logback in your build, as that will cause conflicts; the out-of-the-box Play dependencies are all you need. At the time I did this, we were using Play 2.3.8.

In your Logback config (logger.xml), just wire up the appender you want, either the TCP or the UDP one, like this:


<!-- for udp -->
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashSocketAppender">
  <host>logstash_server</host>
  <port>logstash_port</port>
  <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
</appender>

<!-- for tcp -->
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
  <host>logstash_server</host>
  <port>logstash_port</port>
  <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
</appender>

<!-- include the appender -->
<root level="INFO">
  <appender-ref ref="LOGSTASH" />
</root>

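Once the appender is wired in, anything that goes through Play’s normal logger ships to Logstash as JSON. A quick sketch, with a purely illustrative controller, just to show that nothing special is needed at the call site:

import play.api.Logger
import play.api.mvc._

// Illustrative only: ordinary Play logging now flows through the LOGSTASH appender.
object PingController extends Controller {
  def ping = Action {
    Logger.info("ping received") // shipped to Logstash as a JSON event
    Ok("pong")
  }
}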

Then, on the Logstash config side, create a new input:


input {
  udp {
    codec => json
    port => XXXX
    type => logback
  }
}

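If you went with the TCP appender instead, the input is the same shape. A sketch (I believe json_lines is the codec you want for the newline-delimited events the TCP appender sends):

input {
  tcp {
    codec => json_lines
    port => XXXX
    type => logback
  }
}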

This puts your data into its own type (called “logback”) in the ES indices so that the JSON mappings don’t conflict with anything else. That’s it.

Retry

One of the things I love about functional programming, and Scala in particular, is that it feels like you can make new language constructs. One common pattern I run into is having to do something a few times until, hopefully, it works. This is especially relevant when you are calling some remote endpoint that might be busy, slow, or just plain down.

I found this little gem on Stack Overflow:


@annotation.tailrec
final def retry[T](n: Int)(fn: => T): T = {
  // @tailrec needs a method that can't be overridden, hence final (or private, or a local def)
  util.Try { fn } match {
    case util.Success(x) => x
    case _ if n > 1 => retry(n - 1)(fn)
    case util.Failure(e) => throw e
  }
}


What’s going on here? We take an Int and a function. Then we try to call the function. If it succeeds, we return its value; otherwise we recursively call ourselves until we run out of tries. If the last try still fails, we throw the exception the function gave us.

If you don’t fully understand what @annotation.tailrec does, do yourself a favor and read this article; it does a great job of explaining it. But in a nutshell, the Scala compiler can optimize a tail-recursive function into a loop, which means no more stack overflows. Surprisingly, this feature is rare in a lot of modern-day languages. I think in Ruby you can compile it in; Python is philosophically opposed to it. The more purely functional languages like Clojure, Erlang, and Haskell of course do better. Sorry JavaScript, you have to trampoline too, how sad for you… But I’m sure there are a bunch of transpilers that can fix that up; I wonder what Scala.js does there?
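To make the distinction concrete, here is a minimal sketch (names are just for illustration) of a function the annotation accepts next to one it rejects:

object TailrecDemo {
  // Tail-recursive: the recursive call is the very last thing the function does,
  // so the compiler rewrites it into a loop and the stack never grows.
  @annotation.tailrec
  final def sumTo(n: Long, acc: Long = 0L): Long =
    if (n == 0) acc else sumTo(n - 1, acc + n)

  // Not tail-recursive: the `+` still has work to do after the recursive call
  // returns, so adding @annotation.tailrec here would be a compile error.
  final def sumToNaive(n: Long): Long =
    if (n == 0) 0L else n + sumToNaive(n - 1)
}

TailrecDemo.sumTo(1000000L) comes back fine; the naive version will most likely blow the stack long before that.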

Ok, so enough of that; the real story is how easy it is to use this guy.


retry(10) {
  // do some stuff that might need a few kicks in the butt before it works
}


It’s just that easy: retry however many times you like, or the exception gets thrown.
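For instance, wrapping the kind of flaky remote call mentioned earlier might look like this (the URL is purely hypothetical):

import scala.io.Source

// Sketch only: retry a flaky HTTP GET up to 5 times before giving up.
val body: String = retry(5) {
  Source.fromURL("http://example.com/health").mkString
}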

Hot Threads

If you have ever used Elasticsearch and you haven’t discovered the hot_threads endpoint, take a look at it. It basically figures out which threads on each node are consuming the most CPU and then gives you a stack trace. It’s like a REST endpoint for a thread dump across the cluster. It looks something like this:

[Screenshot: hot_threads output]
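For reference, you can hit it directly on any node with something like this (assuming the default HTTP port):

curl 'http://localhost:9200/_nodes/hot_threads'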

In a previous project we were running Elasticsearch embedded in our monolith, so this was pretty handy, because it gave you a view into not only what ES was doing but what the whole server was doing.

I had an issue the other day where my monitoring was not helping me out and I needed a stack dump, but I was having some trouble getting one. I decided I didn’t want to be caught in that position again, so I wondered if I could just lift this thing out of Elasticsearch, or at least borrow some code.

Well, it turns out it was easier than I thought. I poked around on GitHub and found the HotThreads class. Since I already had the Elasticsearch API included in my project, all I had to do in Play was wire up a quick controller like this and then map it in the routes file.


import org.elasticsearch.monitor.jvm.HotThreads
import play.api.mvc._

object HotThreadsController extends BaseController {
  def getHotThreads = Action {
    // detect() returns the hot-threads report for this JVM as a plain String
    Ok(new HotThreads().detect())
  }
}

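The routes entry is just a one-liner, something like this, assuming the controller lives in the controllers package (the path itself is only an example):

GET     /admin/hotThreads     controllers.HotThreadsController.getHotThreads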

That’s it; now you have a nice admin endpoint to see running threads. It’s not across the cluster, it’s per node (that’s a job for another time), but still pretty cool!

[Screenshot: the endpoint's thread dump output]