NEMS – Nagios for your Pi

NEMS, or Nagios Enterprise Monitoring Server, developed by Robbie Ferguson, is a modernized version of NagiosPi.

NEMS is a modern, pre-configured, customized, and ready-to-deploy Nagios Core image designed to run on the Raspberry Pi 3 microcomputer. At its core it is a lightweight Debian Stretch deployment optimized for performance, reliability, and ease of use.

I had used FAN (Fully Automated Nagios) for my home instance until its development stopped around 2013. NagiosPi was a good alternative, and I liked the idea of Nagios living on a Pi rather than as another VM on a server; it seemed counterintuitive to have monitoring live on a virtual host, and a Pi is an inexpensive platform for a standalone service.

NEMS is built for the Raspberry Pi 3 and requires one. The beauty of NEMS (as with FAN or NagiosPi) is that it's all prebuilt: download the .img, flash it to a microSD card, and you're off.
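As a hedged sketch, on macOS the flashing can be done with dd (the disk identifier and image filename below are examples; verify yours with diskutil list first):

# find the SD card's disk identifier (the example below assumes /dev/disk2)
diskutil list
diskutil unmountDisk /dev/disk2
# write the image to the raw device (image filename is a placeholder)
sudo dd if=nems.img of=/dev/rdisk2 bs=1m

Etcher is the friendlier option if dd makes you nervous.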

NEMS bundles a lot of great features on top of Nagios Core, and it can be a simple box performing check_ping checks or as robust as Nagios NRPE can get; that's up to you.

Within an hour I had it up and running, performing basic checks on my home environment, and at no more than $65 USD for all the parts (Pi, case, power supply, microSD) it's an easy and robust solution. I recommend checking out baldnerd.com/nems/, as there is an excellent write-up and a direct download of the image.

If you have a small environment or home infrastructure, I highly recommend NEMS (Nagios Enterprise Monitoring Server) by Robbie Ferguson.

LinuxFest Northwest

I am super excited to announce I will be presenting at LinuxFest Northwest on May 6th on “Managing macOS, without macOS (almost)”; you can read more about the session here. LinuxFest Northwest is an annual open source event held at Bellingham Technical College.

What is LinuxFest Northwest? LFNW features presentations and exhibits on various F/OSS topics, as well as Linux distributions and applications. LinuxFest Northwest has something for everyone, from the novice to the professional. The hours are 9:00 a.m. to 5:00 p.m. both days.


LinuxFest Northwest is a great conference, and you cannot argue with the price. I hope to see you there!

MacAdmins Meetup

An “unofficial” Apple Admins of Seattle and the Great Northwest social will follow Saturday’s sessions at Elizabeth Station at around 5 pm. They should have a food truck outside and an overabundance of beer and cider selections. There is also the incredible Primer Coffee right next door if that’s more your speed. As always, find us on Slack; I hope to meet you soon.

MunkiAdmin sync on “Save”

The idea was to use MunkiAdmin’s script features to automatically rsync changes from a management machine to the machine hosting the repo for client access. My test case was syncing from a macOS machine to Ubuntu 16.04. It relies on rsync with pre-shared keys; for great documentation on that setup specifically, check out Digital Ocean’s article.

The Script

The main bread and butter is a simple rsync script:

/usr/local/bin/rsync -vrlt -e "ssh -i /Users/$macUSER/.ssh/id_rsa" --chmod=$symbolic --chown=$nixUSER:$nixGROUP /macOS/munki_repo/* $nixUSER@$nixHOST:/nix/munki_repo/

So to break it down…

-vrlt
  • v
    • verbose
  • r
    • recursive
  • l
    • copy symlinks as symlinks (optional? probably not needed in a munki_repo specifically)
  • t
    • preserve times
-e
  • specify the remote shell
    • ssh
    • -i
      • identity file
    • /Users/$macUSER/.ssh/id_rsa
      • The private key you would like to use (whose public half also exists under authorized_keys on the receiving server)
--chmod=$symbolic
  • specify the destination permissions in symbolic notation
    • 4744 = go+r,u+rwxs
    • I just cheated here.
--chown=$nixUSER:$nixGROUP
  • change the ownership
    • user:group
/macOS/munki_repo/*
  • local repo
$nixUSER@$nixHOST:/nix/munki_repo/
  • destination admin@host
  • :path/to/repo_destination

Tip: That should do it. You can always use -n or --dry-run to check the sync without actually transferring any data.

  • -n, --dry-run: “perform a trial run with no changes made”
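For example, prepending -n gives a trial run of the exact same command:

/usr/local/bin/rsync -vrltn -e "ssh -i /Users/$macUSER/.ssh/id_rsa" --chmod=$symbolic --chown=$nixUSER:$nixGROUP /macOS/munki_repo/* $nixUSER@$nixHOST:/nix/munki_repo/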

MunkiAdmin Integration

I added the command as well as some logging items to a bash script, and saved it as repository-postsave.
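A trimmed-down sketch of what it looks like (the variable values and log path here are placeholders, not my real ones):

#!/bin/bash
# repository-postsave - rsync the repo to the serving host after each MunkiAdmin "Save"
# (values below are placeholders; adjust for your environment)
macUSER="youradmin"           # local macOS account
nixUSER="munkiadmin"          # remote account
nixGROUP="www-data"           # remote group that should own the repo
nixHOST="repo.example.com"    # host serving the repo
symbolic="go+r,u+rwxs"
LOG="/Users/$macUSER/Library/Logs/munki_repo_sync.log"

echo "$(date) - sync started" >> "$LOG"
/usr/local/bin/rsync -vrlt -e "ssh -i /Users/$macUSER/.ssh/id_rsa" \
  --chmod=$symbolic --chown=$nixUSER:$nixGROUP \
  /macOS/munki_repo/* "$nixUSER@$nixHOST:/nix/munki_repo/" >> "$LOG" 2>&1
status=$?
echo "$(date) - sync finished with status $status" >> "$LOG"
exit $status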

MunkiAdmin’s full documentation on custom scripts is available here, though it’s pretty cut and dried:

  • Scripts should be saved in <repository>/MunkiAdmin/scripts/ or ~/Library/Application Support/MunkiAdmin/scripts/.
  • The presave scripts can abort the save by exiting with anything other than 0 (see the sketch after this list).
  • All of the scripts are called with the working directory set to the current repository root.
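For instance, a hypothetical repository-presave like this would block the save whenever a (made-up) lock marker sits in the repo root:

#!/bin/bash
# repository-presave - refuse the save if a lock file exists
# (.repo_locked is a hypothetical marker, not a MunkiAdmin convention)
if [ -e ".repo_locked" ]; then
    echo "Repo is locked, aborting save." >&2
    exit 1   # any non-zero exit cancels the save
fi
exit 0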

Furthermore, according to the MunkiAdmin documentation, MunkiAdmin looks for executable files (with any extension) with the following names:

  • pkginfo-presave
  • pkginfo-postsave
  • manifest-presave
  • manifest-postsave
  • repository-presave
  • repository-postsave

I chose repository-postsave because the sync should be the last thing to run. I moved my script to <repository>/MunkiAdmin/scripts/, reloaded MunkiAdmin, and then added a pkg to test.

Quick Test

I figured, why not test it with the worst possible case? How about a 10.11.6 upgrade pkg at 6.24 GB? Yeehaw.

So I imported via munkiimport and then reloaded MunkiAdmin. As the script is tied to “Save” in MunkiAdmin, no sync occurs until then…

I hit “Save” and everything died:

[screenshot]

But not really. I had a hunch that it was just working hard and that MunkiAdmin was waiting until the script exited, and those suspicions were confirmed:

[screenshot]

Once the transfer processes completed, MunkiAdmin was back to normal.

Much success! As a note, smaller, more regular pkgs/pkginfos and catalog files sync really quickly (your mileage may vary depending on your speeds).

Caveats

rsync 3.1: your keen eye may have picked up on /usr/local/bin/rsync vs. /usr/bin/rsync, as one might expect on macOS. Unfortunately, macOS ships with rsync v2.6.9, which does not support the --chown functionality, so I had to brew, er, pursue other avenues to get rsync working completely in this capacity…
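If you take the Homebrew route, it is a one-liner (assuming Homebrew is already installed):

brew install rsync
/usr/local/bin/rsync --version   # should now report a 3.x release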

Implications

No manual rsync of your repo anymore! Well… actually, it’s still manual on “Save,” but it’s automatic!

If you use MunkiAdmin, this scripting has a lot of potential for different automation tasks: git integration or whatever else you may do to your repo after “saving,” whatever your use case may call for. I really like this integration and just thought I’d share a bit I found useful.

Packaging Filebeat on macOS

In my previous post I explained why I set out to dig more into logging, and how I got a proof of concept for deploying a system that forwards particular log files to a syslog server.

This post is more about bundling it all up in a way I could easily deploy (.pkg).

Edit: I didn’t explicitly state this was for testing. I do plan on moving/bundling things into a place better suited for an environment that would, say, interact with an end user; that’s not this! This is just what I need to get it onto some machines for testing.

I am not going to get into the ins and outs of creating packages. There are many other people who’ve written about it far more elegantly.

Getting the pieces

In my last post, and per the Beats documentation, the extent of launching Filebeat is

sudo ./filebeat -e -c filebeat.yml

but I left the -e off when I transitioned it into a .plist, since

-e     Log to stderr and disable syslog/file output

so the .plist, which we will call co.elastic.filebeat.plist, ends up looking like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>co.elastic.filebeat</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Applications/Filebeat/filebeat</string>
        <string>-c</string>
        <string>/Applications/Filebeat/filebeat.yml</string>
    </array>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
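It is worth lint-checking the plist before shipping it anywhere:

plutil -lint co.elastic.filebeat.plist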

Before I continue, I want to note that I did some more research into the filebeat.yml configuration and found a couple of neat items: you can list multiple log files, and you can specify multiple prospectors. But wait, aren’t those the same thing? Look at this example filebeat.yml:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/install.log
    - /var/log/accountpolicy.log
- input_type: log
  paths:
    - /var/log/system.log
  include_lines: ["sshd","screensharingd"]

output.logstash:
  hosts: ["hostip:port"]

What this does is send all entries from install.log and accountpolicy.log to the syslog server, and then watch system.log for any messages containing sshd or screensharingd.

Pretty nifty. The documentation on configuring prospectors covers a lot of neat features, even regex options that I may explore later on…
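For example (my own hedged sketch, not taken from the docs verbatim), since include_lines entries are treated as regular expressions, a prospector could be narrowed to just failed SSH attempts:

- input_type: log
  paths:
    - /var/log/system.log
  include_lines: ["sshd.*(Failed|Invalid)"]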

Assembly

So I started with the contents of “filebeat-5.1.1-darwin-x86_64” which I downloaded from Elastic’s site.

Get the pieces

  • filebeat-5.1.1-darwin-x86_64/
  • (custom) filebeat.yml
    • Which I placed into the filebeat-5.1.1-darwin-x86_64 directory
    • (the directory I then renamed to ‘Filebeat’)
  • (custom) .plist (I called mine co.elastic.filebeat.plist)
  • I also downloaded a ‘B’ icon that I found by scanning through the Elastic site, just to put a little polish on the folder.

Put the pieces into place

filebeat-5.1.1-darwin-x86_64 directory

  1. I renamed “filebeat-5.1.1-darwin-x86_64” to “Filebeat”
  2. I placed the folder into the /Applications/ directory and made sure it had the proper permissions
  3. I then took the aforementioned B.png
    1. Opened it in Preview
      • [screenshot]
    2. cmd+a (select all), cmd+c (copy)
    3. Then did a Get Info on the Filebeat folder, cmd+i
      • [screenshot]
    4. Then cmd+v (paste)
      • [screenshot]
    5. Which gives us a slightly more polished folder icon.

So at this point we have an “app,” well, a folder that hosts the executable we need. The next thing to do is place our config, filebeat.yml, into the /Applications/Filebeat/ directory (or modify the existing one).

Next we will place the launchd .plist we created earlier, co.elastic.filebeat.plist, in /Library/LaunchDaemons/. But wait, there’s more: if you’ve never done much with launchd, I encourage you to read the manual. To actually get this to load without a restart, one would need to:

launchctl load /Library/LaunchDaemons/co.elastic.filebeat.plist

Also make sure this has the proper permissions:

-rw-r--r--   root:wheel
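Which you can set with:

sudo chown root:wheel /Library/LaunchDaemons/co.elastic.filebeat.plist
sudo chmod 644 /Library/LaunchDaemons/co.elastic.filebeat.plist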


Feel free to load it; this is a great point to test the setup. I am not going to touch on Graylog or the Beats input, as I looked at that in my previous post. I will say that in Graylog 2.1.2 (the .ova you can download for testing), the Beats input is included, so no additional loading of a .jar file is needed.

Packaging

Brief review

  • The Filebeat folder, with the custom filebeat.yml config, is in place
  • The co.elastic.filebeat.plist launchd job is in place.

Packages (how I did it): start a new “Raw Distribution”

  1. Project
    • Name, path and exclusions
    • [screenshot]
  2. Settings
    • ID and version for your development reference
    • [screenshot]
  3. Payload
    • The aforementioned items in their locations
    • [screenshot]
  4. Scripts
    • This is a point where schools of thought diverge; you have two options.
      1. Include a script here to load the launchd (see the postinstall sketch after this list)
      2. Don’t include said script, and have it run by whatever pkg management client you use.
    • [screenshot]
  5. Comments
    • I leave myself reminders in the comments during development
    • [screenshot]
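If you go with option 1, a minimal postinstall sketch could look like this (using the installer convention that $3 is the target volume):

#!/bin/bash
# postinstall - load the Filebeat LaunchDaemon once the payload is in place
# only attempt the load when installing to the currently booted volume
if [ "$3" == "/" ]; then
    /bin/launchctl load /Library/LaunchDaemons/co.elastic.filebeat.plist
fi
exit 0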

Build! Build! Build!

[screenshot]

I used Suspicious Package here to show you what it looks like after the build…

So there it is… plenty more to test around with as time permits… but a good start.

What’s next?

Testing.

  • What’s the impact/implications on…
    • the machine
    • the network
  • Do I need everything in that /Filebeat/ directory?

More Testing.

  • Update just the yaml for future versions?

Even more testing.

  • What logs do I really want?
    • Of those logs do I want to exclude or include any more items!?
    • Do YOU know? I haven’t a clue.

Uninstall

I also made this handy uninstall script for testing:

#!/bin/bash
# filebeat testing quick cleanup

# unload the launchd job if it's running
/bin/launchctl unload /Library/LaunchDaemons/co.elastic.filebeat.plist

# remove the app folder
/bin/rm -rf /Applications/Filebeat
# remove the launchd plist
/bin/rm /Library/LaunchDaemons/co.elastic.filebeat.plist
# remove receipts; I don't do this in production if I can avoid it
/bin/rm /private/var/db/receipts/com.yourinstitution.pkg.Filebeat.bom
/bin/rm /private/var/db/receipts/com.yourinstitution.pkg.Filebeat.plist
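As an alternative to deleting the receipt files by hand, pkgutil can forget them (same package ID as above):

sudo pkgutil --forget com.yourinstitution.pkg.Filebeat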

Updates

2017-03-17

A few edits I’ve made since working on this a few months ago:

  1. Install location
    1. I ended up putting the application into /Library/Filebeat for a cleaner, unobtrusive install
  2. Folder GFX
    1. Point 1 means I no longer need to make it pretty, so I dropped the folder graphic
  3. Launchd auto-load
    1. I deployed this to a small number of machines, manually installed the pkg, and then loaded the launch daemon manually as well.
    2. This also allowed me to test the config locally before setting it to load at launch. I had some firewall rules and other items I needed to ensure weren’t conflicting, so it ended up not being quite as “set it and forget it” as I once set out for it to be.
  4. 2 months later
    1. Works great. Planning a followup, specifically about the Graylog input, notification, and extractor side of things.

Referenced materials

Installing Filebeat | Elastic 

Deploying Filebeat on MacOS X | Elastic Forums 

Creating Launchd Jobs | Apple Developer

Packages | Whitebox

Suspicious Package | Mothers Ruin

MacOS, Beats and Graylog. Learning for better logging.

Background

Until recently I’ve had to dump the entire syslog to the syslog server; now I am trying to begin using the Filebeat collector for macOS and the Graylog Elastic Beats Input Plugin, with which one can send a specific log or set of logs to a syslog server.

How I was doing it:

Edit the syslog conf at /etc/syslog.conf

*.*                                       @serverip:port
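On the OS X releases where this applied, syslogd needed a kick to pick up the change, e.g.:

sudo launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist
sudo launchctl load /System/Library/LaunchDaemons/com.apple.syslogd.plist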

Redirect Logs To A Syslog Server In OS X | Krypted.com

The caveat of this method is that it dumps the entirety of the syslog to the syslog server. I dislike the chattiness of syslog and would prefer to send only a particular log or set of logs that I am interested in, hence this post.

The server I was particularly interested in was averaging about 250 or so various entries an hour. A bit too much for my liking.

Sometimes it felt like the logs could easily get out of hand…

[image]

Just found this log on my server. Should I be worried?


The pieces 

I was lucky enough to inherit a preconfigured Graylog infrastructure, but assuming nothing, I set up my own and tested this from scratch… if you have a log server already set up, you can skip the server configuration…

I am sure this is well documented somewhere else too; this process was mostly for me to better understand 1) logging services in general, 2) macOS logging practices, and 3) the plausibility of using Beats or similar as a backend to forward logs in a packageable, deployable fashion.

Before you start:

A macOS VM for testing; I use VMware, either locally or on a remote server (ESXi), for my macOS testing.

Graylog preconfigured OVA (Download)

Graylog Elastic Beats Input Plugin (included in Graylog v2, so you may not need this)

Filebeat collector for macOS

Generally the flow of information will look something like this:


  1. A log is written by an .app or service
  2. The file collector then forwards entries to the Beats input on the Graylog server
  3. The Beats input plugin allows any Beats file collector source to be treated like any TCP/UDP log dump.

[diagram: Filebeat overview]

Testing: 

Graylog Server

This is very well documented in Graylog’s docs:

  1. Setting up from an OVA
    1. Download and run OVA in whatever virtual appliance host you’d like
    2. Make changes to defaults as needed.
  2. Install the Beats plugin (if needed; see the sketch after this list)
    1. Get the Beats plugin for Graylog
    2. mv it to /opt/graylog/plugin/
    3. Restart Graylog
      • graylog-ctl restart
  3. Setup input
    1. See Graylog documentation here
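Step 2 boils down to something like this on the appliance (the .jar filename is a placeholder for whatever the release is actually called):

# after downloading the Beats plugin .jar to the Graylog appliance
sudo mv graylog-plugin-beats.jar /opt/graylog/plugin/
sudo graylog-ctl restart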

Client (macOS)

  1. Downloaded filebeat-5.0.2-darwin-x86_64.tar (or current)
    1. Unpack it
    2. Modify these lines in the yml file:


- input_type: log  #uncomment
  paths:
    - /var/log/install.log  #uncomment; I changed it to install.log for specific log testing, but you could set it to whatever you'd like
output.elasticsearch:  #uncomment
  hosts: ["URL:PORT"]  #change to your server IP and port; make sure it aligns with your input configuration

Once the changes are made, you can start the forwarder by running:

  • sudo ./filebeat -e -c filebeat.yml

More considerations & To Dos

  • Automated start/stop of the forwarder
    • I’d like to figure out (or find someone who has figured out) how to auto-start the filebeat service
    • As well as bundling it into a deployable pkg to distribute to a large number of clients
  • Further granularity/filtering at the Graylog level

Reference:

Filebeat Reference

Graylog Documentation “Sending in log data”

Graylog Elastic Beats Input Plugin

Elastic Filebeat Collector  (Mac | Win | Linux)

https://www.reddit.com/r/funny/comments/5ft0hi/just_found_this_log_on_my_server_should_i_be/

Munkireport-PHP on Ubuntu 16.04 w/ SQL

Overview

After deciding Docker wasn’t a direction I wanted to head infrastructure-wise, I decided to pursue an Ubuntu host… but I also wanted to update the infrastructure, so I landed on an option like this:

  • Ubuntu 16.04
  • PHP 7
  • Non-local SQL

I’ve cited him once and I’ll do it again: Clayton Burlison has a great blog post on such a thing for Ubuntu 14.04 (minus the non-local DB), which was the basic outline I used to move forward with this project.


Munki, Docker and why you’d want to even try. The video!

Here is a video from PSU MacAdmins where I take a high-level look at Munki, Docker, and why you’d want to even try to get them to play nice with one another, or what better options for hosting your repo may be…


I love what the folks on the PSU MacAdmins team have been doing for the community; you can read more about them and the PSU MacAdmins conference here.