
How to Collapse the Splunk Search Bar

This post will be short and to the point. I use Splunk almost daily, and one of my main pain points is that the search bar cannot be collapsed. I regularly have searches that are 50+ lines, and the UI currently doesn’t handle this very well: you have to do a lot of scrolling to get down to the results and then more scrolling to get back to the top. It’s kind of a pain. I looked around and couldn’t find anyone who had addressed this, so I decided to. The bookmarklet below creates collapse buttons above and below the search bar that can be used to get back valuable screen real estate.

The buttons are pretty obvious: they expand and collapse the search bar. Also, if you click inside the search bar while it is collapsed, it will automatically expand again. There are a couple of limitations. If you refresh the page you will need to click the bookmarklet again … there is nothing I can do about that. Also, as of now, when you initiate a search the buttons will go away as Splunk redraws that section of the page. I may or may not be able to do anything about that; I’ll keep looking into it. They do survive paging through the results of a search, though. If you find this helpful, let me know.

Bookmarklet:
Splunk Search Collapser <- drag this to your bookmarks toolbar

Search Bar Expanded

[Screenshot: search bar expanded]

Search Bar Collapsed

[Screenshot: search bar collapsed]

The code

Here is the code for the bookmarklet in case you are curious.

var collapser = {
	state: 'expanded',
	origHeight: 0,
	init: function(){
		var t = this;
		// Only add the buttons if they aren't already on the page
		if($('.ace-collapser').length === 0) {
			var btn = '<div class="ace-collapser pull-left"><a class="btn-pill" href="#">Collapse</a></div>';
			// One button above the search bar and one below it
			$('div.pull-right.jobstatus-control-grouping').prepend(btn);
			$('div.document-controls.search-actions').prepend(btn);
			$('div.ace-collapser a.btn-pill').click(function(e){
				e.preventDefault();
				t.collapseExpand();
			});
			// Clicking inside a collapsed search bar expands it again
			$('pre.ace_editor').click(function(){
				t.expand();
			});
		}
	},
	collapseExpand: function(){
		var t = this;
		if(t.state == 'expanded') {
			t.collapse();
		} else {
			t.expand();
		}
	},
	collapse: function(){
		var t = this;
		// Remember the current height so we can restore it later
		t.origHeight = $('pre.ace_editor').css('height');
		$('div.ace-collapser a.btn-pill').text('Expand');
		$('pre.ace_editor').css('height','20px');
		t.state = 'collapsed';
	},
	expand: function(){
		var t = this;
		$('div.ace-collapser a.btn-pill').text('Collapse');
		$('pre.ace_editor').css('height', t.origHeight);
		t.state = 'expanded';
	}
};

collapser.init();

How to Track and Report on Splunk .conf File Changes

Have you ever wanted to know what changes have been made to your Splunk .conf files? This was somewhat painful in the past. However, with version 9 Splunk itself will monitor your conf files and track changes that are made. The changes are stored in the _configtracker index in JSON format. This is what it looks like.

Source JSON
{
  "datetime": "08-01-2022 14:06:00.516 -0700",
  "log_level": "INFO ",
  "component": "ConfigChange",
  "data": {
    "path": "/opt/splunk/etc/users/splunkadmin/user-prefs/local/user-prefs.conf",
    "action": "update",
    "modtime": "Mon Aug  1 14:06:00 2022",
    "epoch_time": "1659387960",
    "new_checksum": "0x3ea7786da36c0d80",
    "old_checksum": "0xa01003ea5c398010",
    "changes": [
      {
        "stanza": "general",
        "properties": [
          {
            "name": "tz",
            "new_value": "",
            "old_value": "America/Denver"
          }
        ]
      }
    ]
  }
}
Flattening the Events

Most of the elements of the above JSON are pretty easy to comprehend. The catch comes with the changes and properties elements, which can each hold multiple values. So, the first thing we need to do is flatten the JSON so that we create one event for each property modification under each change.

index="_configtracker" sourcetype="splunk_configuration_change"
| spath output=changes path=data.changes{}
| mvexpand changes
| spath input=changes output=properties path=properties{}
| mvexpand properties
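If you want a mental model of what the spath/mvexpand pipeline above is doing, here is a rough sketch in plain JavaScript (the sample event mirrors the _configtracker JSON shown earlier; this is just an illustration, not how Splunk does it internally):

```javascript
// Sample event in the shape logged to the _configtracker index
const event = {
  data: {
    path: "/opt/splunk/etc/users/splunkadmin/user-prefs/local/user-prefs.conf",
    action: "update",
    changes: [
      {
        stanza: "general",
        properties: [
          { name: "tz", new_value: "", old_value: "America/Denver" }
        ]
      }
    ]
  }
};

// One output record per property under each change -- the same shape
// the two mvexpand commands produce
const flattened = event.data.changes.flatMap(change =>
  change.properties.map(prop => ({
    stanza: change.stanza,
    property: prop.name,
    old: prop.old_value,
    new: prop.new_value
  }))
);
```

A single change to one stanza with one property comes out as exactly one flat record, which is what makes the later field extractions straightforward.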
App and Change Details

Once we’ve done that we can start pulling out fields. The first ones we will pull out are the app, the conf file type (default or local), the conf file, the modification date, and the action taken (update or add). For app, conf_type, and conf_file we can get those from the data.path field. We split it by “/” for *nix systems or “\” for Windows systems and then work from the right to the left: the ultimate index is the conf file, the penultimate is the conf file type, and the antepenultimate is the app. Yes, I wrote it that way simply so I could use the words ultimate, penultimate, and antepenultimate. For mod_time and action we will simply rename their data fields. While we are at it we will also reformat the mod_time field into YYYY-MM-DD HH:MM:SS format for easier legibility. If that’s not more legible to you then you can leave the last line out … but then we can’t be friends.

| eval path_type = if(match('data.path', "^.+/(local|default)/.+$"), "nix", "windows")
| eval app=if(path_type=="nix", mvindex(split('data.path', "/"), -3), mvindex(split('data.path', "\\"), -3))
| eval conf_type=if(path_type=="nix", mvindex(split('data.path', "/"), -2), mvindex(split('data.path', "\\"), -2))
| eval conf_file=if(path_type=="nix", mvindex(split('data.path', "/"), -1), mvindex(split('data.path', "\\"), -1))
| rename "data.modtime" as mod_time, "data.action" as action
| eval mod_time=strftime(strptime(mod_time, "%a %b %d %H:%M:%S %Y"), "%Y-%m-%d %H:%M:%S")
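The split-and-index-from-the-right trick is easier to see outside of SPL. Here is a small JavaScript sketch of the same logic (the function name parseConfPath is mine, not anything Splunk provides; Array.prototype.at(-n) plays the role of mvindex with a negative index):

```javascript
// Pull app, conf type, and conf file out of a Splunk conf path by
// splitting on the separator and counting from the right
function parseConfPath(path) {
  // Same heuristic as the SPL: a /local/ or /default/ component
  // with forward slashes means a *nix-style path
  const isNix = /^.+\/(local|default)\/.+$/.test(path);
  const parts = path.split(isNix ? "/" : "\\");
  return {
    app: parts.at(-3),       // antepenultimate segment
    conf_type: parts.at(-2), // penultimate segment: local or default
    conf_file: parts.at(-1)  // ultimate segment: the conf file itself
  };
}

const result = parseConfPath(
  "/opt/splunk/etc/users/splunkadmin/user-prefs/local/user-prefs.conf"
);
// → { app: "user-prefs", conf_type: "local", conf_file: "user-prefs.conf" }
```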
The Changed Values

This next part will pull out the stanza, property name, old value, and new value from the events based on how we expanded the event to flatten out the JSON.

| spath input=changes output=stanza path=stanza
| spath input=properties output=property path=name
| spath input=properties output=old path=old_value
| spath input=properties output=new path=new_value
Filling in the Blanks

Just to make the old and new values a bit more legible, if either value is blank let’s put {BLANK} in … this makes me happy.

| eval old=if((old=="" OR isnull(old)), "{BLANK}", old)
| eval new=if((new=="" OR isnull(new)), "{BLANK}", new)
Formatting the Results

Finally, let’s display the fields we’ve extracted as a table and sort it by the modification time with the newest changes being shown first.

| table mod_time, app, conf_type, conf_file, stanza, action, property, old, new
| sort -mod_time, app, conf_type, conf_file, stanza
Full SPL Search

So, that’s it! You now have an SPL search that you can use to see how your conf files have changed over time. You can save this as a report, create an alert that fires whenever a default conf file is modified, or use everything up through the field extractions as a base search in an accelerated data model. There are so many options! Hope this is helpful to you.

index="_configtracker" sourcetype="splunk_configuration_change"
| spath output=changes path=data.changes{}
| mvexpand changes
| spath input=changes output=properties path=properties{}
| mvexpand properties
| eval path_type = if(match('data.path', "^.+/(local|default)/.+$"), "nix", "windows")
| eval app=if(path_type=="nix", mvindex(split('data.path', "/"), -3), mvindex(split('data.path', "\\"), -3))
| eval conf_type=if(path_type=="nix", mvindex(split('data.path', "/"), -2), mvindex(split('data.path', "\\"), -2))
| eval conf_file=if(path_type=="nix", mvindex(split('data.path', "/"), -1), mvindex(split('data.path', "\\"), -1))
| rename "data.modtime" as mod_time, "data.action" as action
| eval mod_time=strftime(strptime(mod_time, "%a %b %d %H:%M:%S %Y"), "%Y-%m-%d %H:%M:%S")
| spath input=changes output=stanza path=stanza
| spath input=properties output=property path=name
| spath input=properties output=old path=old_value
| spath input=properties output=new path=new_value
| eval old=if((old=="" OR isnull(old)), "{BLANK}", old)
| eval new=if((new=="" OR isnull(new)), "{BLANK}", new)
| table mod_time, app, conf_type, conf_file, stanza, action, property, old, new
| sort -mod_time, app, conf_type, conf_file, stanza

Parsing HL7 with Splunk

At my job I do a fair amount of work with HL7. If you work in the medical field you probably know that HL7 is the language that medical systems use to talk with each other. It’s a fairly simple format that uses carriage returns and pipes to delimit fields … ok there are a few other delimiters as well, but the carriage returns and pipes are the big ones. Below is an example HL7 message.

MSH|^~\&|MegaReg|XYZHospC|SuperOE|XYZImgCtr|20060529090131-0500||ADT^A08|01052901|P|2.3
EVN||200605290901||||200605290900
PID|||56782445^^^UAReg^PI||KLEINSAMPLE^BARRY^Q^JR||19620910|M||2028-9^^HL70005^RA99113^^XYZ|260 GOODWIN CREST DRIVE^^BIRMINGHAM^AL^35209^^M~NICKELL’S PICKLES^10000 W 100TH AVE^BIRMINGHAM^AL^35200^^O|||||||0105I30001^^^99DEF^AN
PV1||I|W^389^1^UABH^^^^3||||12345^MORGAN^REX^J^^^MD^0010^UAMC^L||67890^GRAINGER^LUCY^X^^^MD^0010^UAMC^L|MED|||||A0||13579^POTTER^SHERMAN^T^^^MD^0010^UAMC^L|||||||||||||||||||||||||||200605290900
OBX|1|NM|^Body Height||1.80|m^Meter^ISO+|||||F
OBX|2|NM|^Body Weight||79|kg^Kilogram^ISO+|||||F
AL1|1||^ASPIRIN
DG1|1||786.50^CHEST PAIN, UNSPECIFIED^I9|||A

Each line in the message is called a segment, and each segment can be divided into fields based on the pipes. For instance, the third line is the PID segment, which has patient information such as the MRN (PID 3), patient name (PID 5), birth date (PID 7), etc. The PV1 segment has information that relates to the patient visit. It is a fairly concise format without much overhead, and as such is perfect for medical institutions where these kinds of messages are flowing constantly throughout the day.
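To make the segment/field structure concrete, here is a minimal JavaScript sketch of splitting a message into segments and fields. This is only an illustration of the delimiter structure described above (the parseHL7 name is mine, and a real parser, like the add-on discussed below, also has to handle component, repetition, and escape delimiters):

```javascript
// Split an HL7 message into segments (on carriage returns) and
// fields (on pipes), indexed by segment name
function parseHL7(message) {
  const segments = {};
  for (const line of message.split(/\r\n?|\n/)) {
    if (!line) continue;
    const fields = line.split("|");
    segments[fields[0]] = fields; // fields[0] is the segment name
  }
  return segments;
}

// Abbreviated version of the sample message above
const msg = [
  "MSH|^~\\&|MegaReg|XYZHospC|SuperOE|XYZImgCtr|20060529090131-0500||ADT^A08|01052901|P|2.3",
  "PID|||56782445^^^UAReg^PI||KLEINSAMPLE^BARRY^Q^JR||19620910|M"
].join("\r");

const parsed = parseHL7(msg);
// PID 5 is the patient name; since fields[0] holds the segment name,
// it conveniently lands at array index 5
const patientName = parsed.PID[5]; // "KLEINSAMPLE^BARRY^Q^JR"
```

The name field still contains component delimiters (^), which is exactly the kind of deeper parsing a proper add-on handles for you.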

The Problem

In a typical medical environment there will be a system called the HL7 routing engine that serves as an intermediary between all the various medical systems in the clinic or hospital. The HL7 engine can route messages to one or various systems and transforms them en-route based on rules. Most HL7 engines have the ability to log the messages sent through them in some format.

Oftentimes there is a need to look up what messages were sent to various systems in order to troubleshoot problems. In many cases there is no great means of searching through the thousands or even hundreds of thousands of messages sent each day.

The Solution

About a year ago I was approached by some folks from Splunk about creating a Technical Add-on (TA) for Splunk for parsing HL7. After many months of working with one of their engineers, Joe Welsh, we were able to release the free “HL7 Add-On for Splunk”. We tested the add-on by throwing millions of our HL7 messages at it to make sure it parsed the messages correctly.

With this TA we can have Splunk monitor our HL7 logs and in real-time are able to quickly search those logs to troubleshoot issues, report on failed messages, and view dashboards to monitor the health of our HL7 environment. I have been super pleased with the results.

If you are a medical institution that uses Splunk, check out the add-on … it’s free. See what awesome things you can do with Splunk and HL7, and let me know in the comments if you have found it useful.

Splunkbase Link
https://splunkbase.splunk.com/app/3283/