Exploring Osquery, Kolide’s Fleet and Graylog for Endpoint Visibility

Why?

  • Desire for fleet visibility
  • Visibility on both clients and hosts
  • Ability to be alerted if something changes, but not necessarily enforce
  • There’s one of me
  • Easy to stand up
  • Easy to maintain
  • Automation
  • As much as I wish I had time to “dev” it- I just need it to work.

This stack

There’s a lot of bias here, but again I wanted it to be similar to what I am already doing and, as much as possible, fit into the infrastructure I already have running.

The pieces…
osquery
Fleet
Graylog

osquery

What is osquery? (abridged)

osquery exposes an operating system as a high-performance relational database. This allows you to write SQL-based queries to explore operating system data. With osquery, SQL tables represent abstract concepts such as running processes, loaded kernel modules, open network connections, browser plugins, hardware events or file hashes.

SQL tables are implemented via a simple plugin and extensions API. A variety of tables already exist and more are being written: https://osquery.io/schema. To best understand the expressiveness that is afforded to you by osquery, consider the following SQL queries:

List the users:

SELECT * FROM users;

Check the processes that have a deleted executable:

SELECT * FROM processes WHERE on_disk = 0;

Get the process name, port, and PID, for processes listening on all interfaces:

SELECT DISTINCT processes.name, listening_ports.port, processes.pid
FROM listening_ports JOIN processes USING (pid)
WHERE listening_ports.address = '0.0.0.0';

Find every OS X LaunchDaemon that launches an executable and keeps it running:

SELECT name, program || program_arguments AS executable
FROM launchd
WHERE (run_at_load = 1 AND keep_alive = 1)
AND (program != '' OR program_arguments != '');

A note on osqueryi vs osqueryd

osqueryi is the osquery interactive query console/shell. It is completely standalone and does not communicate with a daemon and does not need to run as an administrator. Use the shell to prototype queries and explore the current state of your operating system.

and

osqueryd is the host monitoring daemon that allows you to schedule queries and record OS state changes. The daemon aggregates query results over time and generates logs, which indicate state change according to each query. The daemon also uses OS eventing APIs to record monitored file and directory changes, hardware events, network events, and more.
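
To make that concrete: you can prototype a query in the shell, then hand it to the daemon via osquery’s config. A minimal sketch, reusing the “deleted executable” query from above (the schedule entry name and interval are arbitrary):

$ osqueryi "SELECT * FROM processes WHERE on_disk = 0;"

and, scheduled for osqueryd in its config:

{
  "schedule": {
    "deleted_binaries": {
      "query": "SELECT * FROM processes WHERE on_disk = 0;",
      "interval": 3600
    }
  }
}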

Fleet by Kolide

So now we know just a little bit of what osquery can do, so how can we automate that? Make it work for us en masse?

That’s where Fleet by Kolide comes in… Kolide also offers another product as a SaaS option, Kolide Cloud:

Kolide Cloud is the fastest way to get started with Osquery in your organization. Following our setup guide, you can have Kolide on your machine and reporting insights in less than two minutes flat. Our pre-built packages make organization-wide deployment a piece of cake with the tools you already use today.

Understanding Fleet

Fleet will:
– hold our client-reported information
– queue and send queries to clients/servers
– log any findings

Installing Fleet

NOTE: You can also just use the Quickstart method, and skip to Exploring Fleet

Quickstart
Github Docs

Docker

If you want to use Docker, you can pull the latest Fleet docker image:
docker pull kolide/fleet

Binary

If you want to use a Linux VM or otherwise, here is an example of an Ubuntu 16.04 setup process, after your initial VM setup.

Get the latest binary and move it to /usr/bin as fleet:

$ sudo apt install unzip
$ wget https://dl.kolide.co/bin/fleet_latest.zip
$ unzip fleet_latest.zip 'linux/*' -d fleet
$ sudo cp fleet/linux/fleet_linux_amd64 /usr/bin/fleet

Installing the MySQL client (Fleet also expects a Redis instance; if it lives on the same box, sudo apt-get install redis-server will cover the redis_address used below):

$ sudo apt-get install mysql-client

You can run this database locally or on a remote DB server; that’s up to you. For the sake of illustration, let’s say you want to run it on a remote DB server…

On your remote db server:

$ mysql -u root -p
mysql> create database fleet;
mysql> create user 'fleet'@'$FLEET_IP' identified by '$PSW';
mysql> create user 'fleet'@'$FLEET_FQDN' identified by '$PSW';
mysql> grant all on fleet.* to 'fleet'@'$FLEET_IP';
mysql> grant all on fleet.* to 'fleet'@'$FLEET_FQDN';
mysql> flush privileges;
mysql> select user,host from mysql.db where db='fleet';
+-------+-------------+
| user  | host        |
+-------+-------------+
| fleet | $FLEET_IP   |
| fleet | $FLEET_FQDN |
+-------+-------------+
2 rows in set (0.00 sec)
mysql> exit;

So now we can prepare the fleet database via the binary we installed on the fleet host:

/usr/bin/fleet prepare db \
  --mysql_address=your.database.com:port \
  --mysql_database=fleet \
  --mysql_username=fleet \
  --mysql_password='$PSW'
# username/database are whatever was created above; the password can also be prompted for

This will prepare the DB as needed. Once this is done, you could serve the Fleet service via the CLI:

/usr/bin/fleet serve \
  --mysql_address=your.database.com:port \
  --mysql_database=fleet \
  --mysql_username=fleet \
  --mysql_password='$PSW' \
  --redis_address=127.0.0.1:6379 \
  --server_cert=/your/host/Certs/fleet.pem \
  --server_key=/your/host/Certs/fleet.key \
  --logging_json \
  --auth_jwt_key $auth_jwt_key

But Fleet can also take a kolide.yaml config file (see the docs), so the above becomes:

mysql:
  address: your.database.com:port
  database: fleet
  username: fleet
  password: $PSW
redis:
  address: 127.0.0.1:6379
server:
  address: 0.0.0.0:443
  cert: /your/host/Certs/fleet2.pem
  key: /your/host/Certs/fleet2.key
auth:
  jwt_key: $auth_jwt_key
osquery:
  status_log_file: /var/log/osquery/status.log
  result_log_file: /var/log/osquery/result.log
  label_query_update_interval: 1h
  enable_log_rotation: true
logging:
  json: false
  debug: false

I won’t get into all of Fleet’s options, but most of the above are self-explanatory.

Now we could launch fleet as such:

/usr/bin/fleet serve --config /your/config/kolide.yaml

But I’d rather have it controlled through systemd, launched automatically, etc., so before we launch we can configure systemd.

Let’s create /etc/systemd/system/fleet.service:

[Unit]
Description=Fleet web service

[Service]
ExecStart=/usr/bin/fleet serve --config /your/config/kolide.yaml
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=Fleet

[Install]
WantedBy=multi-user.target

I also edited rsyslog to send the service logs to a standalone file, /var/log/fleet/fleet-out.log, and to drop them without also writing to syslog. Appending to or creating an rsyslog.d conf with the following lines should ensure that:

if $programname == 'Fleet' then /var/log/fleet/fleet-out.log
if $programname == 'Fleet' then ~
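
Note that the target directory has to exist and rsyslog needs a restart before this takes effect:

$ sudo mkdir -p /var/log/fleet
$ sudo systemctl restart rsyslog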

We can enable this service to be started at boot, if desired:

systemctl enable fleet.service

And now we can start the service with:

systemctl start fleet.service

As well as get systemctl information on the service:

$ systemctl status fleet.service
● fleet.service - Fleet web service
   Loaded: loaded (/etc/systemd/system/fleet.service; enabled; vendor preset: enabled)
   Active: active (running) since ... ; x days ago
 Main PID: 8722 (fleet)
    Tasks: 11
   Memory: 546.1M
      CPU: 17min 39.910s
   CGroup: /system.slice/fleet.service
           └─8722 /usr/bin/fleet serve --config /your/config/kolide.yaml

The Fleet service handles rotation of these logs as well (enable_log_rotation above), so there’s no need to implement additional logrotate rules.

Exploring Fleet

For testing/demo, I recommend checking out Fleet via Docker which is found on Kolide’s Fleet “Quickstart” Guide.
It can be as simple as

git clone https://github.com/kolide/kolide-quickstart.git
./demo.sh up

and is well documented.

Enrolling Clients with Fleet

macOS

If you chose to do the Quickstart, you can run this from the cloned directory:
./demo.sh enroll mac which will generate a macOS installer package into ../out.

If you are running another iteration, perhaps something closer to production, I encourage you to take a look at Kolide/Launcher as an osquery management tool…

Keep in mind the Docker instance is bound to localhost, so demo clients need to be configured as such…

What this package does is install a few things:

/Library/LaunchDaemons/
  co.kolide.osquery.enroll.plist
    – the LaunchDaemon
    – starts osqueryd on your system
/etc/osquery/
  kolide.crt
    – allows for TLS communication
    – will be self-signed in demo mode
  kolide_secret
    – file containing the enroll secret
    – allows for client enrollment
  kolide.flags
    – list of preferences / parameters

NOTE: Fleet just configures kolide.flags based on osqueryd‘s built-in CLI flags to communicate with the server (the spec is outlined here).
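
For reference, the generated kolide.flags generally looks something like this (the hostname here is a placeholder; the TLS endpoints are the ones listed in the Fleet docs):

--enroll_secret_path=/etc/osquery/kolide_secret
--tls_server_certs=/etc/osquery/kolide.crt
--tls_hostname=your.fleet.com:443
--host_identifier=uuid
--enroll_tls_endpoint=/api/v1/osquery/enroll
--config_plugin=tls
--config_tls_endpoint=/api/v1/osquery/config
--logger_plugin=tls
--logger_tls_endpoint=/api/v1/osquery/log
--distributed_plugin=tls
--disable_distributed=false
--distributed_tls_read_endpoint=/api/v1/osquery/distributed/read
--distributed_tls_write_endpoint=/api/v1/osquery/distributed/write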

16.04

If you decide to just bootstrap for funsies, or demo via the Quickstart, we can get this bootstrapped on 16.04 as follows.

You can install osquery with apt; it is required to be installed before it can be configured with kolide.flags:

export OSQUERY_KEY=1484120AC4E9F8A1A577AEEE97A80C63C9D8B80B
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys $OSQUERY_KEY
sudo add-apt-repository 'deb [arch=amd64] https://pkg.osquery.io/deb deb main'
sudo apt-get update
sudo apt-get install osquery -y

As with the macOS installer we need all the same parts,
/etc/osquery/
kolide.crt
kolide_secret
kolide.flags

You can manually put these into /etc/osquery/

And then manually kick off osqueryd with this command:
/usr/bin/osqueryd --flagfile=/etc/osquery/kolide.flags

You should now see your clients in the server if the config was successful.

Fleet Documentation

#kolide in osquery Slack
API Documentation
Github kolide/fleet
Documentation

Graylog

A little out of scope here, but there are loads of options and plenty of good docs out there:
Graylog Downloads (OVA, OpenStack, EC2, Docker, RPM, DEB, scripts, orchestration, etc.)
Graylog Documentation
– Of which I highly recommend reviewing Architectural considerations before moving towards a production environment
– That being said the OVA is a great quick start option!

Tips on ingesting osquery logs into Graylog

In our Fleet server’s kolide.yaml we designated the following logs:
status_log_file: /var/log/osquery/status.log
result_log_file: /var/log/osquery/result.log

Now we can set up sending those to Graylog.

Aside:
I much prefer to send logs to individual files and set rules surrounding them individually, keeping them separate from the syslog.

If you chose to send those logs to syslog, this portion may vary for you…

You can do this a couple of different ways:

A) rsyslog

You could set up an rsyslog entry for that log and forward it that way.
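
A rough sketch of that, using rsyslog’s legacy imfile directives (the state file name and Graylog port are placeholders):

$ModLoad imfile
$InputFileName /var/log/osquery/result.log
$InputFileTag osquery-result:
$InputFileStateFile stat-osquery-result
$InputRunFileMonitor
if $programname == 'osquery-result' then @@your.graylog.com:5140
& ~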

B) agent, i.e. Filebeat by Elastic

Filebeat is a tool from Elastic that allows for managing log forwarding for multiple files, even parsing log files, see the Filebeat Documentation for more…

Let me explain why I like B: primarily because I can deploy one config that manages all the needed settings for multiple log files, and it’s as simple as:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/osquery/status.log
    - /var/log/osquery/result.log

# output
output.logstash:
  hosts: ["your.graylog.com:PORT"]

# logging info
logging.level: info
logging.to_files: true
logging.to_syslog: false
logging.files:
  path: /var/log/Filebeat/
  name: filebeat.log
  keepfiles: 7

This config will watch just the listed log files, and will log its own doings to /var/log/Filebeat/filebeat.log. One BIG bonus I like about Filebeat is that it’s aware of what it has and hasn’t sent…

In any environment, application downtime is always lurking on the edges. Filebeat reads and forwards log lines and — if interrupted — remembers the location of where it left off when everything is back online.

So if your syslog server is down or overloaded and a log forward doesn’t succeed, Filebeat knows and “catches up” later.

Once Filebeat is set up, you can configure an input for it on Graylog. Detailed info over at Graylog: Inputs.

Once you are on your Inputs page, select the Beats option:

This will bring up a config window, update as needed:

Additionally, once the input is configured, select “Manage extractors”:

Find an example log message (you may have to come back to this part) and load it.
Select the message portion of the log, then set the extractor type to JSON, and that should be it!

The message block received will look like this:

{"name":"pack\/Example Check\/etc_hosts","hostIdentifier":"macOS","calendarTime":"Fri Apr 27 20:38:36 2018 UTC","unixTime":"1524861516","epoch":"0","counter":"0","decorations":{"host_uuid":"564D4617-52F7-6924-9698-05E433322827","hostname":"macOS"},"columns":{"address":"8.8.8.8","hostnames":"whatever"},"action":"added"}

Now it will be converted to:

action: added
calendarTime: Fri Apr 27 20:38:36 2018 UTC
columns_address: 8.8.8.8
columns_hostnames: whatever
counter: 0
decorations_host_uuid: 564D4617-52F7-6924-9698-05E433322827
decorations_hostname: macOS
epoch: 0
facility: filebeat
file: /var/log/osquery/result.log
hostIdentifier: macOS
message: (the full JSON message shown above)
name: pack/Example Check/etc_hosts
offset: 10448
source: fleet
timestamp: 2018-04-27T20:44:03.545Z
type: null
unixTime: 1524861516

And once an item is extracted like this, each of these keys becomes a standalone field inside Graylog, allowing you to alert, sort, and graph based on its contents.

Monitoring and Alerting with Graylog

Again, we are quickly approaching scope creep for this session; Graylog has great docs, and here are some good places to start:

Graylog Streams
Graylog Alerts

Resources

osquery

osquery
osquery Docs
osquery tables
Github facebook/osquery
osquery Slack

Fleet

#kolide in osquery Slack
API Documentation
Github kolide/fleet
Documentation

Graylog

Downloads
Documentation
Inputs
Filebeat

Munki, Gitlab & Git-LFS

NOTE: This is not a HOW TO, more of a rant; the how-to may or may not come later. This is written from the perspective of someone who manages (for the most part) their infrastructure from end to end: VM config, service management, monitoring and patching. When it comes to scale and complexity, your mileage may vary.

tl;dr

  • there are some caveats to this route
  • it’s worth it
  • git isn’t a one-way street; you can use it as little or as much as you want
  • Gitlab as a tool sets you up for more in the CI/CD realm

The idea

This post is just some general process / lessons learned from migrating a Munki repo to a git-tracked repository.

fwiw, “git” and “git-lfs” don’t need Gitlab; it’s just my life, and these are my thoughts.

Rationale:

  • vcs
    • a version control system was huge; the ability to see what I changed and when I changed it was really rad
  • pipelines
    • though I didn’t have that word for it at the time, some flow of dev/test/deploy of a repo
  • multiple users
    • though this still isn’t prod, I wanted the ability for multiple users to make changes, and for said changes to be tracked
  • no more smb/sftp
    • all the transport security can be done with pub/priv keys, etc

Why Gitlab?

  • I could keep it locally
    • OR use the cloud, privately too
  • The CE edition is rad (and I will do my best to link exclusively to the Community Edition docs)
    • and free
  • It has CI/CD built in
    • (though I didn’t realize how great this was ’till later)

The Reality

There were some kinks that had to be worked out, the first and maybe most obvious was managing large files with git. Gitlab has LFS support built in, which is rad, but still comes with its own nuances.

Git-LFS

Git Large File Storage

Git Large File Storage (LFS) replaces large files such as audio samples, videos, datasets, and graphics with text pointers inside Git, while storing the file contents on a remote server like GitHub.com or GitHub Enterprise.

Here’s a handy tutorial: Getting started with Git LFS

Git LFS on Gitlab

Sounds great, and can be enabled in Gitlab easily.

LFS on Gitlab also requires you use HTTPS for auth and transport, rather than ssh. Digging into the Gitlab administration docs, we can see the docs straight list some pretty big limitations:

Support for removing unreferenced LFS objects was added in 8.14 onwards.
LFS authentications via SSH was added with GitLab 8.12
Only compatible with the GitLFS client versions 1.1.0 and up, or 1.0.2.
The storage statistics currently count each LFS object multiple times for every project linking to it

But there’s some stuff they don’t tell you that I found out the hard way:

LFS caches (this, srsly)

What I found was that when Gitlab was receiving LFS files, it could cache them in a /cache location, then move them to the configured storage location.

This was noticed when the OS disk on my VM filled. womp. By modifying gitlab.rb you can change the LFS storage location.

Looking at gitlab.rb.template, we have these LFS options:

### Git LFS
# gitlab_rails['lfs_enabled'] = true
# gitlab_rails['lfs_storage_path'] = "/var/opt/gitlab/gitlab-rails/shared/lfs-objects"
# gitlab_rails['lfs_object_store_enabled'] = false # EE only
# gitlab_rails['lfs_object_store_direct_upload'] = false
# gitlab_rails['lfs_object_store_background_upload'] = true
# gitlab_rails['lfs_object_store_proxy_download'] = false
# gitlab_rails['lfs_object_store_remote_directory'] = "lfs-objects"
...

Oh nice! A gitlab_rails['lfs_storage_path'] option, sweet. So you can store your LFS objects on /dev/sd(whatever); this is good to know, so that, say, your OS disk doesn’t fill…
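
A minimal sketch of that change (the mount point is just an example; apply it with gitlab-ctl reconfigure):

# /etc/gitlab/gitlab.rb
gitlab_rails['lfs_enabled'] = true
gitlab_rails['lfs_storage_path'] = "/mnt/bigdisk/lfs-objects"

$ sudo gitlab-ctl reconfigure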

And what about your client? Most people have Munki running with autopkg or MunkiAdmin on a macOS box. So you have to have Git-LFS on your Mac, which is a little less supported.

LFS on macOS

You can do this via the installer git-lfs provides (macOS), which is basically the command line extension and a shell script. Or you could use brew; loads of people have loads of opinions about brew, so use it or don’t, and keep that to yourself.

Regardless, you will need to initialize it:

# Update global git config
$ git lfs install
# Update system git config
$ git lfs install --system

Which is like HEY GIT, we’re going to use LFS now. But for what? And when? Touché, git.

Looking into Configuring Git Large File Storage, once you’re in a tracked dir, you can:

$  git lfs track "*.psd"
Adding path *.psd

Which is cool, and it gets added to your .gitattributes file, but most admins know what the big files in their munki repo are… so something like this in your .gitattributes file may be more applicable:

*.pkg filter=lfs diff=lfs merge=lfs -text
*.mpkg filter=lfs diff=lfs merge=lfs -text
*.dmg filter=lfs diff=lfs merge=lfs -text
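
Once that’s in place, commit the .gitattributes file itself so every clone of the repo tracks the same patterns:

$ git add .gitattributes
$ git commit -m "Track installers with Git LFS"
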
It still CACHES

Let’s take a look at our macOS git env…

$ git lfs env
Endpoint=https://your.gitlab.com/group/munki_repo.git/info/lfs
LocalGitDir=/your/munki_repo/.git
LocalGitStorageDir=/your/munki_repo/.git
LocalMediaDir=/your/munki_repo/.git/lfs/objects
LocalReferenceDir=
TempDir=/your/munki_repo/.git/lfs/tmp
...
LfsStorageDir=/your/munki_repo/.git/lfs
...

That effectively means your local repo folder can be double its actual size. So plan accordingly.
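
If the local side gets tight on space, git-lfs can drop local copies of old objects that are already safely on the server; check first with a dry run:

$ git lfs prune --dry-run
$ git lfs prune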

Once it’s tracked, it is as simple as a git push to get those files and changes to the Gitlab server.

Git

I know it sounds pretty negative up until this point; the benefit, though, on the other side of the setup hump is git. And you can “git” as much or as little as you want.

Munki & Git

There’s some great documentation on that here. In particular, check out Munki’s Repo Plugins, specifically the GitFileRepo:

GitFileRepo – a proof-of-concept/demonstration plugin. It inherits the behavior of FileRepo and does git commits for file changes in the repo (which must be already configured/initialized as a git repo)

This is rad because once your repo is tracked it does git commits for file changes in the repo. Rad.
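
If memory serves, you point the Munki admin tools at a repo plugin via their preferences; something along these lines (treat the domain/key as an assumption and verify against the Munki wiki):

# hypothetical: tell munkiimport to use the GitFileRepo plugin
$ defaults write com.googlecode.munki.munkiimport plugin GitFileRepo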

Git Theories

master Branch All Day

Once you have all your stuff in git, you can choose how you’re going to use it. I have seen a lot of benefit from a tracked repo that simply commits changes to master. Since dev, test, and prod are all contained in “munki logic,” committing any changes to master allows you to track any changes made to .pkginfo or manifest files.

This works, is totally legit, and will get you a load of good info from tracked files.
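
In that model, the day-to-day loop is about as plain as git gets (the commit message is just an example):

$ git add -A
$ git commit -m "Promote Firefox to testing catalog"
$ git push origin master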

Let’s Face(book) It, we need more

Or maybe you don’t, but Facebook’s CPE team has a really rad option…

Check out their CPE resources, specifically the autopkg_tools

This is an AutoPkg wrapper script that creates a separate git feature branch and puts up a commit for each item that is imported into a git-managed Munki repo.

They have a Getting Started Guide if this sounds like more of what you’re looking for-

Stuff I didn’t touch on, that I could

  • catalogs, not tracking them, making them, making them with a runner
  • git-fat as an option?

In summary

I personally won’t go back. There was a little tweaking to get it all sorted, but the information and tracking git provides to the munki repo is well worth it.

Coming Soon

  • I am going to try and sanitize some of the helpful CI/CD stuff I got rolling in Gitlab and talk about it here.
  • Munki in the cloud stuff
  • macOS monitoring?
  • maybe a similar rant on osquery and fleet.io

Resources


Macadmin Resources

🎥 Mac Justice – Intro to Gitlab (MacDevOPs, shorter)
🎥 Mac Justice – Intro to Gitlab (PSU Macadmins, longer)

🔗 Advanced Munki Infrastructure: Moving to Cloud Services by Rick Heil
🎥 Advanced Munki Infrastructure: Moving to Cloud Services

General Resources

🔗 Git LFS
🔗 Gitlab CE
🔗 Gitlab and LFS

🎥 Git Large File Storage – How to Work with Big Files
🎥 Git LFS Training – GitHub Universe 2015
🎥 Tracking huge files with Git LFS, GlueCon 2016

NEMS – Nagios for your Pi

NEMS, or Nagios Enterprise Monitoring Server, developed by Robbie Ferguson, is a modernized version of NagiosPi.

NEMS is a modern pre-configured, customized and ready-to-deploy Nagios Core image designed to run on the Raspberry Pi 3 micro computer. At its core it is a lightweight Debian Stretch deployment optimized for performance, reliability and ease of use.

I had used FAN (Fully Automated Nagios) for my home instance until development stopped around 2013. NagiosPi was a good alternative, and I liked the idea of Nagios living on a Pi rather than as another VM on a server; it seemed counterintuitive to have it live on a virtual host, and a Pi allows for an inexpensive platform to run a standalone service.

NEMS is built for the RPi 3 and requires one. The beauty of NEMS (as with FAN or NagiosPi) is that it’s all prebuilt: download the .img, flash it to a Micro SD and you’re off.

NEMS bundles a lot of great features on top of Nagios Core, and it can be a simple box performing check_pings or it can be as robust as Nagios NRPE can get; that’s up to you.

Within an hour I had it up and running, performing basic checks on my home environment, and at no more than $65 USD for all the parts (Pi, case, power supply, Micro SD) it’s an easy and robust solution. I recommend checking out baldnerd.com/nems/ as there is an excellent write-up and a direct download of the image.

If you have a small environment or home infrastructure, I highly recommend NEMS (Nagios Enterprise Monitoring Server) by Robbie Ferguson.

LinuxFest Northwest

I am super excited to announce I will be presenting at LinuxFest Northwest May 6th on “Managing macOS, without macOS (almost)”; you can read more about the session here. LinuxFest Northwest is an annual open source event held at Bellingham Technical College.

What is LinuxFest Northwest? LFNW features presentations and Exhibits on various F/OSS topics, as well as Linux distributions and applications. LinuxFest Northwest has something for everyone from the novice to the professional. The hours are 9:00 a.m. to 5:00 p.m. both days.


LinuxFest Northwest is a great conference and you cannot argue with the price. I hope to see you there!

Macadmins Meetup

The “unofficial” Apple Admins of Seattle and the Great Northwest social following Saturday’s sessions will be held at Elizabeth Station at around 5pm. They should have a food truck outside and an overabundance of beer and cider selections. There is also the incredible Primer Coffee right next door if that’s more your speed. As always, find us on Slack; hope to meet you soon.

MunkiAdmin sync on “Save”

The idea was to use MunkiAdmin‘s script features to automatically rsync changes from a management machine to the machine hosting the repo for client access. My testing case was syncing from a macOS machine to Ubuntu 16.04. This utilizes rsync over ssh with pre-shared keys; for great documentation on that setup specifically, check out DigitalOcean‘s article.

The Script

The main bread and butter is a simple rsync script:

/usr/local/bin/rsync -vrlt -e "ssh -i /Users/$macUSER/.ssh/id_rsa" --chmod=$symbolic --chown=$nixUSER:$nixGROUP /macOS/munki_repo/* $nixUSER@$nixHOST:/nix/munki_repo/

So to break it down…

-vrlt
  • v
    • verbose
  • r
    • recursive
  • l
    • symlinks (optional? probably not needed in a munki_repo specifically)
  • t
    • preserve times
-e
  • specify the remote shell
    • ssh
    • -i
      • identity file
    • /Users/$macUSER/.ssh/id_rsa
      • the private key you would like to use (whose public half also exists in authorized_keys on the receiving server)
--chmod=$symbolic
  • specify the modification privileges via symbolic
    • 4744=go+r,u+rwxs
    • I just cheated, here.
--chown=$nixUSER:$nixGROUP
  • change the ownership
    • user:group
/macOS/munki_repo/*
  • local repo
$nixUSER@$nixHOST:/nix/munki_repo/
  • destination admin@host
  • :path/to/repo_destination

Tip: That should do it; you can always use -n or --dry-run to check this sync without actually syncing any data.

  • -n, --dry-run: “perform a trial run with no changes made”

MunkiAdmin Integration

I added the command, as well as some logging items, to a bash script and saved it as repository-postsave.
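
A minimal sketch of what that script can look like (the log path is arbitrary, and the $mac/$nix variables are the placeholders from the breakdown above):

#!/bin/bash
# repository-postsave - rsync the repo after MunkiAdmin saves
# placeholders: $macUSER, $nixUSER, $nixGROUP, $nixHOST
LOG="/Users/$macUSER/Library/Logs/munki_repo_sync.log"

echo "$(date) starting post-save sync" >> "$LOG"
/usr/local/bin/rsync -vrlt -e "ssh -i /Users/$macUSER/.ssh/id_rsa" \
  --chmod=go+r,u+rwxs --chown=$nixUSER:$nixGROUP \
  /macOS/munki_repo/* $nixUSER@$nixHOST:/nix/munki_repo/ >> "$LOG" 2>&1
echo "$(date) sync exited with status $?" >> "$LOG"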

MunkiAdmin’s full documentation on custom scripts is available here, though it’s pretty cut and dried:

  • scripts should be saved in <repository>/MunkiAdmin/scripts/ or ~/Library/Application Support/MunkiAdmin/scripts/.
  • The presave scripts can abort the save by exiting with anything other than 0.
  • All of the scripts are called with the working directory set to the current repository root.

Furthermore, according to the MunkiAdmin documentation, MunkiAdmin looks for executable files (with any extension) with the following names:

  • pkginfo-presave
  • pkginfo-postsave
  • manifest-presave
  • manifest-postsave
  • repository-presave
  • repository-postsave

I chose repository-postsave because a sync would be the last thing we would want to do. I moved my script to <repository>/MunkiAdmin/scripts/, reloaded MunkiAdmin, and then added a pkg to test.

Quick Test

I figured, why not test it with the worst possible case? How about a 10.11.6 upgrade pkg, 6.24 GB? Yeehaw.

So I imported via munkiimport and then reloaded MunkiAdmin. As the script is tied to “Save” in MunkiAdmin, no sync occurs until then…

I hit “Save” and everything died:

[screenshot: Screen Shot 2017-05-02 at 7.45.54 AM.png]

But not really, I had a hunch that it was just working hard, and MunkiAdmin was waiting until the script exited, and those suspicions were confirmed:

[screenshot: Screen Shot 2017-05-02 at 7.45.59 AM.png]

Once the transfer processes completed, MunkiAdmin was back to normal.

Much success! As a note, smaller, more regular pkgs/pkginfos and catalog files sync really quickly (your mileage may vary depending on your speeds).

Caveats

rsync 3.1: your keen eye may have picked up on /usr/local/bin/rsync vs /usr/bin/rsync, as one may expect on macOS. Unfortunately macOS ships with rsync v2.6.9, which does not support the --chown functionality, so I had to brew, err, pursue other avenues for rsync to work completely in this capacity…

Implications

No manual rsync of your repo anymore! Well… actually it’s still manual on “Save,” but it’s automatic!

If you use MunkiAdmin, this scripting has a lot of potential for different automation tasks: git integrations, or whatever you may do to your repos or pkgs after “saving,” whatever your use case may call for. I really like this integration and just thought I’d share this bit I found useful.

Packaging Filebeat on macOS

In my previous post I explained why I set out to dig more into logging, and how I got a proof of concept of a system that forwards particular log files to a syslog server.

This post is more about bundling it all up in a way I could easily deploy (.pkg).

Edit: I didn’t explicitly state this was for testing. I do plan on moving/bundling things and placing them somewhere better suited for an environment that would interact with an end user; that’s not this! This is just what I need to get it onto some machines for testing.

I am not going to get into the ins and outs of creating packages. There are many other people who’ve written about that far more elegantly.

Getting the pieces

In my last post, and via the Beats documentation, the extent of launching Beats is

sudo ./filebeat -e -c filebeat.yml

but I left the “-e” off when I transitioned it into a .plist, since

-e     Log to stderr and disable syslog/file output

so the .plist, which we will call co.elastic.filebeat.plist, ends up looking like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>co.elastic.filebeat</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Applications/Filebeat/filebeat</string>
        <string>-c</string>
        <string>/Applications/Filebeat/filebeat.yml</string>
    </array>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>

Before I continue, I wanted to note that I did some more research into the filebeat.yml configuration and found a couple of neat items: you can list multiple log files, and you can specify multiple prospectors. But wait, aren’t those the same things? Look at this example filebeat.yml:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/install.log
    - /var/log/accountpolicy.log
- input_type: log
  paths:
    - /var/log/system.log
  include_lines: ["sshd","screensharingd"]

output.logstash:
  hosts: ["hostip:port"]

What this does is send all entries from install.log and accountpolicy.log to the syslog server.

AND then watch system.log, forwarding only the messages containing sshd and screensharingd.

Pretty nifty. The documentation on configuring prospectors has a lot of neat features, even regex options, that I may explore later on…
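
For instance, a hypothetical filter; per the prospector docs, include_lines and exclude_lines both take lists of regexes:

- input_type: log
  paths:
    - /var/log/system.log
  include_lines: ["sshd"]
  exclude_lines: ["Disconnected from"]  # drop noise we don't care about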

Assembly

So I started with the contents of “filebeat-5.1.1-darwin-x86_64” which I downloaded from Elastic’s site.

Get the pieces

  • filebeat-5.1.1-darwin-x86_64/
  • (custom) filebeat.yml
    • which I placed into the filebeat-5.1.1-darwin-x86_64 directory
    • (the directory then gets renamed ‘Filebeat’, below)
  • (custom) .plist (I called mine co.elastic.filebeat.plist)
  • I also downloaded a ‘B’ icon that I found by scanning through the Elastic site, just to put a little polish on the folder.

Put the pieces into Place

filebeat-5.1.1-darwin-x86_64 directory

  1. I renamed “filebeat-5.1.1-darwin-x86_64” to “Filebeat”
  2. I placed the folder into the /Applications/ directory and made sure it had the proper permissions
  3. I then found the aforementioned B.png
    1. Opened it in Preview
    2. cmd+a (select all), cmd+c (copy)
    3. Then did a Get Info on the Filebeat folder, cmd+i
    4. Then cmd+v (paste)
    5. Gives us a little more polished folder icon.

So at this point we have an “app,” well, a folder that hosts the exec needed. The next thing to do is to place our config, filebeat.yml, into the /Applications/Filebeat/ directory. (Or modify the existing one.)

Next we will place the Launchd .plist we created earlier, co.elastic.filebeat.plist, in /Library/LaunchDaemons/. But wait, there’s more: if you’ve never done much with Launchd, I encourage you to rtm. To actually get this to load without a restart, one would need to:

launchctl load /Library/LaunchDaemons/co.elastic.filebeat.plist

Also make sure this has the proper permissions:

-rw-r--r--   root:wheel
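
Something like this will set that (paths per the layout above):

$ sudo chown root:wheel /Library/LaunchDaemons/co.elastic.filebeat.plist
$ sudo chmod 644 /Library/LaunchDaemons/co.elastic.filebeat.plist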


And feel free to load it; this is a great point to test the setup. I am not going to touch on Graylog or the Beats input, as I looked at those in my previous post. I will say that in Graylog 2.1.2 (the .ova you can download to test), the Beats input is included, so no additional loading of a .jar file is needed.

Packaging

Brief review

  • Filebeat folder, with custom filebeat.yml config is in place
  • co.elastic.filebeat.plist Launchd is in place.

Packages (How I did it), start a new “Raw Distribution”

  1. Project
    • Name, path and exclusions
  2. Settings
    • ID and version for your development reference
  3. Payload
    • The aforementioned items in their locations
  4. Scripts
    • This is a point where schools of thought can differ; you have two options.
      1. Include a script to load the launchd here
      2. Don’t include said script, and have it run by a pkg management client you may use.
  5. Comments
    • I leave myself reminders in the comments during development

Build! Build! Build!


I used Suspicious Package here to inspect what the package looks like after the build…

So there it is… plenty more to test around with as time permits… but a good start.

What’s next?

Testing.

  • What’s the impact/implications on…
    • Machine
    • Network
  • Do I need everything in that /Filebeat/ directory?

More Testing.

  • Update just the yaml for future versions?

Even more testing.

  • What logs do I really want?
    • Of those logs do I want to exclude or include any more items!?
    • Do YOU know? I haven’t a clue.

Uninstall

I also made this handy uninstall script for testing as well:

#!/bin/bash
#filebeat testing quick cleanup

#unload launchd if its running
/bin/launchctl unload /Library/LaunchDaemons/co.elastic.filebeat.plist

#remove app folder
/bin/rm -rf /Applications/Filebeat
#remove the launchd
/bin/rm /Library/LaunchDaemons/co.elastic.filebeat.plist
#remove receipts, I don't use in production if I can avoid it 
/bin/rm /private/var/db/receipts/com.yourinstitution.pkg.Filebeat.bom
/bin/rm /private/var/db/receipts/com.yourinstitution.pkg.Filebeat.plist
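
Alternatively, pkgutil can clean up the receipt for you (the package ID being whatever you set in Packages):

$ sudo pkgutil --forget com.yourinstitution.pkg.Filebeat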

UPDATEs

2017-03-17

So a few edits I’ve made since I was working on this a few months ago.

  1. Install location
    1. I ended up putting the application into /Library/Filebeat for a cleaner, unobtrusive install
  2. Folder GFX
    1. Point 1 means I no longer need to make it pretty, so I dropped the folder graphic
  3. Launchd Auto Load
    1. I deployed this to a small # of machines and manually installed the pkg, and then loaded the launch daemon manually as well.
    2. This also allowed me to test the config locally before adding it to load at launch. I had some firewall rules and other items I needed to ensure weren’t conflicting, so it ended up not being quite as “set it and forget it” as I once set out for it to be.
  4. 2 months later
    1. Works great. Planning on a followup, specifically about the graylog input, notification and extractors side of things.

Referenced materials

Installing Filebeat | Elastic 

Deploying Filebeat on MacOS X | Elastic Forums 

Creating Launchd Jobs | Apple Developer

Packages | Whitebox

Suspicious Package | Mothers Ruin