What the FLIRC

So, I purchased a FLIRC to replace an older MCE USB IR receiver.  I wanted to do this as I am going to upgrade the OS, and didn’t want to muck around with LIRC settings.  Having something that just presents as a keyboard is a great concept.  Unfortunately, it took longer to get going than I expected.

I use a Harmony 650 remote, and while it is an OK remote, it does not seem to play nicely with FLIRC.  OK – let me qualify that: I have it trained to the old Microsoft remote that I had, and it is giving me 0.5 second delays between keypresses, no matter what inter-key delay setting I choose.  But that is for another post.

The issue I had with FLIRC (possibly related to the Microsoft MCE profile) is that when I pressed a button, I was getting two distinct codes.  It turns out that the remote is basically alternating what it sends on each press.  So, to fix this, either use a different profile, or do what I did – record each button twice.  This also means that if you want to remove a button, you need to erase it twice in the FLIRC GUI as well.  This took way longer to Google than I would have liked.  Anyway – a successful conclusion.

Next was the hardest bit: replacing the restart-myth-frontend capability.  In LIRC, I configured in the config file which script to run when a button on the remote is pressed.  You cannot do that with FLIRC, as it mimics a keyboard, so the best option is to get something to listen for a keyboard shortcut.  Since I am running the XFCE desktop, I used the Keyboard settings panel to associate the CTRL-~ key with a command: /home/myth/restartFrontend.sh.  Turns out to work like a charm!
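The same binding can also be set from the command line instead of the settings panel, since XFCE stores custom shortcuts as xfconf properties.  This is a sketch only – the channel and property layout are standard xfconf, but the exact key name for CTRL-~ (shown here as `<Primary>grave`) may differ on your system:

```shell
# Custom shortcuts live in the xfce4-keyboard-shortcuts channel,
# under /commands/custom/<key-combination>
xfconf-query -c xfce4-keyboard-shortcuts \
    -p "/commands/custom/<Primary>grave" \
    -n -t string -s "/home/myth/restartFrontend.sh"
```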

 

<span class="entry-utility-prep entry-utility-prep-cat-links">Posted in</span> Uncategorized | Leave a comment

nxlog and Logstash to parse Exchange SMTP Receive logs

Needed to parse the receive logs of the incoming SMTP traffic (Exchange 2010) to find out which devices are relaying mail through the system.  The receive connector was very loose about which devices could relay traffic through it, so I needed to work out what was actually happening.  Here it is.  The basic flow is this:

RECVxxxxxx.log -> nxlog -> logstash -> Elasticsearch

Enable SMTP Receive logs to be generated.  Do this in the Exchange Management Console; I have not detailed the steps here, as a quick google will demonstrate how to do this.  Ultimately, for Exchange 2010, the logs will be written to C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\ProtocolLog\SmtpReceive.  The format of the file is detailed at the top of each file:

#Software: Microsoft Exchange Server
#Version: 14.0.0.0
#Log-type: SMTP Receive Protocol Log
#Date: 2019-12-19T00:00:02.159Z
#Fields: date-time,connector-id,session-id,sequence-number,local-endpoint,remote-endpoint,event,data,context
2019-12-19T00:00:02.159Z,VALI\Servers and Applications,08D773C7FE2A833E,0,10.2.0.19:25,10.2.0.29:54567,+,,
2019-12-19T00:00:02.174Z,VALI\Servers and Applications,08D773C7FE2A833E,1,10.2.0.19:25,10.2.0.29:54567,*,SMTPSubmit SMTPAcceptAnyRecipient SMTPAcceptAnySender SMTPAcceptAuthoritativeDomainSender AcceptRoutingHeaders,Set Session Permissions
2019-12-19T00:00:02.174Z,VALI\Servers and Applications,08D773C7FE2A833E,2,10.2.0.19:25,10.2.0.29:54567,>,"

To send this through to Elasticsearch (ES), we need to parse the files.  I have chosen nxlog here, as it already existed in the environment.  To reduce the amount of data entering ES, I decided to only send in lines with the SMTP command mail-from in them.  An example of such a line follows:

2019-12-19T00:00:02.190Z,SERVER\Servers and Applications,08D773C7FE2A833E,19,10.2.0.19:25,10.2.0.29:54567,<,mail from: <payroll@example.com
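As a quick sanity check of that filter-and-split logic outside of nxlog (the field positions come from the #Fields header above; the sample line is the one from this post):

```shell
# A sample RECV line (same shape as above)
line='2019-12-19T00:00:02.190Z,SERVER\Servers and Applications,08D773C7FE2A833E,19,10.2.0.19:25,10.2.0.29:54567,<,mail from: <payroll@example.com'

# Keep only the "mail from" lines (case-insensitive), then pull out
# field 6 (remote-endpoint) and field 8 (data)
echo "$line" | grep -i 'mail from' | cut -d, -f6,8
```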

NXLOG

Once we have the logs being written to the disk, we now need something to parse them.  This is what nxlog will do for us.  Below is an extract of the relevant parts of the configuration file.

define EXBASEDIR C:\Program Files\Microsoft\Exchange Server\V14

<Extension csv_parser>
    Module      xm_csv
    Fields      datetime, connectorid, sessionid, sequencenumber, \
                localendpoint, remoteendpoint, event, data, context
</Extension>

<Input smtp_receive>
    Module  im_file
    File    '%EXBASEDIR%\TransportRoles\Logs\ProtocolLog\SmtpReceive\RECV*.LOG'
    <Exec>
        if $raw_event =~ /FROM/
        {
            csv_parser->parse_csv();
            $EventTime = parsedate($datetime);
        }
        else
        {
            drop();
        }
    </Exec>
</Input>


<Output out_exchangercv>  
    Module    om_tcp
    Host      IP ADDRESS OF LOGSTASH 
    Port      5142        # Replace with your desired port
    Exec      $SyslogFacilityValue = 2;
    Exec      $SourceName = 'exchange_smtpreceive_log';
    Exec      to_syslog_bsd();
</Output>

<Route exchange_smtp>
	Path      smtp_receive => out_exchangercv
</Route>

Let’s unpack this configuration.

  • The base directory of the Exchange logs is defined.  Not strictly needed in this case, but if we wanted to load in other logs, defining it once keeps the configuration DRY (Don’t Repeat Yourself).
  • We load the CSV parsing module and define the fields that we want to parse.  NOTE: I think the number of fields needs to match exactly what is in the log lines.
  • The magic happens in the Input statement block.
    • We tell nxlog where the files are to read
    • if the log file line contains FROM
      • parse the line
    • … otherwise we drop the line and don’t send it to logstash
  • The <Output> section defines where the output will go.  In this case will be sent to our logstash server on port 5142
  • The <Route> section just ties inputs to outputs

Once we (re)start the nxlog service, it should send relevant lines towards our logstash server.

HINTS FOR DEBUGGING

There are some hints to assist with debugging this side of the connection.

  1. Use
    log_info("raw_event is: " + $raw_event);

    to log the relevant incoming information to the log file.  I used it in the if section of the input block to write out when it got a valid line.  This way I knew that information was being sent to the logstash server.

  2. Use the following flags in the INPUT block so that nxlog does not save where it was in reading the files while troubleshooting.  Set the FILE name to be one file instead of a wildcard name.
    SavePos FALSE
    ReadFromLast  FALSE

LOGSTASH

Now that nxlog is sending lines to our LS server, we need it to listen on port 5142.  Here is the logstash configuration

input {
  tcp {
    type => "ExchangeSMTPRcv"
    port => 5142
  }
}
filter {
  if [type] == "ExchangeSMTPRcv" {
    csv {
      separator => ","
      columns => ["date-time","connector-id","session-id","sequence-number","local-endpoint","remote-endpoint","event","data","context"]
    }
    mutate {
      gsub => [ "remote-endpoint", ":.*",""  ]
      gsub => [ "local-endpoint", ":.*",""  ]
      remove_field => ["message", "date-time","port","event"]
    }
  }
}

output {
   if [type] == "ExchangeSMTPRcv" {
      elasticsearch {
        hosts =>  ["localhost"]
        index => "logstash_exchsmtpreceive-%{+YYYY.MM.dd}"
     }
#       stdout{
#        codec => rubydebug
#       }
   }
}

Let’s unpack.

  • Input section: defines LS to listen for TCP connections on port 5142.  It sets the “Type” of the message to be “ExchangeSMTPRcv”
  • Filter section:  If the message is of type “ExchangeSMTPRcv” then
    • Split the line by the , character
    • The field names are defined in columns array
    • Remove the port number from the remote-endpoint and local-endpoint columns.  For example, the remote-endpoint might be decoded as 10.2.0.29:54567 by logstash.  The gsub will change the field to be 10.2.0.29
    • remove_field removes the original message, date-time, port and event fields from the block that goes to ES
  • Output section: Writes the entry to ES into the logstash_exchsmtpreceive-%{+YYYY.MM.dd} index
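The two gsub rules amount to “strip everything from the first colon onward”.  The same transform expressed in shell terms, using the example value from above:

```shell
# Equivalent of: gsub => [ "remote-endpoint", ":.*", "" ]
echo '10.2.0.29:54567' | sed 's/:.*//'   # -> 10.2.0.29
```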

For debugging, uncomment the stdout section of the output section.  This will write the decoded information to stdout if the logstash program is run standalone and not as a service.

Kibana

Once the entries are being written to Elasticsearch, use Kibana to graph the output.

 

<span class="entry-utility-prep entry-utility-prep-cat-links">Posted in</span> Uncategorized | Leave a comment

Automatic Screen Layout Change on Linux

Added another monitor onto my Linux desktop, and soon got sick of clicking “Extend to Right” every time I turned the monitor on. So I started to investigate how to enable this automatically – I mean, Windows does this well – and off googling I went. I found a lot of posts talking about using a udev rule calling a script. I tried this route for a while, and when my head really hurt, came across a post at https://bbs.archlinux.org/viewtopic.php?id=170294. His symptoms mimicked mine: it wouldn’t work when called through udev, but would from the command line; and if I put a sleep loop in the script, then after 5 seconds or so, it would work. So, he touted another method using systemd instead.

Basically:

Create a udev rule (/etc/udev/rules.d/99-monitor-hotplug.rules):

ACTION=="change", KERNEL=="card0", SUBSYSTEM=="drm", RUN+="/bin/systemctl start monitor_plug.service"

Then restart udev with service udev restart

Next, create a service in systemd:
create a /etc/systemd/system/monitor_plug.service with the following code

[Unit]
Description=Monitor hotplug

[Service]
Type=simple
RemainAfterExit=no
ExecStart=/usr/local/bin/monitor-hotplug.sh

[Install]
WantedBy=multi-user.target

Load that into the system with systemctl enable monitor_plug.service

The last step is to create the script itself (/usr/local/bin/monitor-hotplug.sh) – this has changed on the upgrade to Ubuntu 18.04.  Previously (on 16.04), the monitor was called DP1; now it is called DP-1.  Same for HDMI: it was HDMI1, but is now HDMI-1.  You cannot have a hyphen (-) in a bash variable name, so this broke the script.  The workaround was to strip the hyphen when declaring the variable, and to change the value of the variable from yes to the device name (with the hyphen).  Then the device names in the xrandr commands are replaced with the variables.
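The renaming trick in isolation – the hyphen is stripped for the variable name, but kept in the value so xrandr still gets the real connector name:

```shell
dev=HDMI-1                                 # connector name as reported on 18.04
declare $(echo $dev | tr -d '-')=$dev      # declares HDMI1=HDMI-1
echo $HDMI1                                # -> HDMI-1
```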

#!/bin/bash

# Adapt this script to your needs.

DEVICES=$(find /sys/class/drm/*/status)

# inspired by /etc/acpid/lid.sh and the function it sources
echo "##### Monitor start script #####" >> /tmp/monitor

displaynum=`ls /tmp/.X11-unix/* | sed s#/tmp/.X11-unix/X##`
display=":$displaynum.0"
#export DISPLAY=":$displaynum.0"
export DISPLAY=":0"

# from https://wiki.archlinux.org/index.php/Acpid#Laptop_Monitor_Power_Off
#export XAUTHORITY=$(ps -C Xorg -f --no-header | sed -n 's/.*-auth //; s/ -[^ ].*//; p')
#export DISPLAY=$(w -h -s | grep ":[0-9]\W" | head -1 | awk '{print $2}')
X_USER=$(w -h -s | grep ":[0-9]\W" | head -1 | awk '{print $1}')
export XAUTHORITY=/home/$X_USER/.Xauthority
echo $XAUTHORITY >> /tmp/monitor
echo $X_USER >> /tmp/monitor
echo $DISPLAY >> /tmp/monitor

# this while loop declares $HDMI1, $VGA1, $LVDS1 and others if they are plugged in
while read l
do
    dir=$(dirname $l)
    status=$(cat $l)
    dev=$(echo $dir | cut -d\- -f 2-)

    echo "Dev is $dev" >> /tmp/monitor
    if [ $(expr match $dev "HDMI") != "0" ]
    then
        # remove the -X- part from HDMI-X-n
        # 4/10/2020 - Ubuntu 18.04 names differ from 16.04
        dev=HDMI${dev#HDMI-?}
    fi

    if [ "connected" == "$status" ]
    then
        echo $dev "connected" >> /tmp/monitor
        # declare e.g. HDMI1=HDMI-1 (a variable name cannot contain a hyphen,
        # but the value keeps the real connector name for xrandr)
        declare $(echo $dev | tr -d '-')=$dev
    fi
done <<< "$DEVICES"

echo `date` >> /tmp/monitor

if [ ! -z "$HDMI1" -a ! -z "$VGA1" ]
then
    echo "HDMI1 and VGA1 are plugged in" >> /tmp/monitor
    xrandr --output LVDS1 --off
    xrandr --output VGA1 --mode 1920x1080 --noprimary
    xrandr --output HDMI1 --mode 1920x1080 --right-of VGA1 --primary
elif [ ! -z "$DP1" -a ! -z "$HDMI1" ]
then
    echo "HDMI1 and DP1 are plugged in" >> /tmp/monitor
    /usr/bin/xrandr --output $DP1 --mode 1920x1080 --primary
    echo $? >> /tmp/monitor
    /usr/bin/xrandr --output $HDMI1 --noprimary --right-of $DP1
    echo $? >> /tmp/monitor
elif [ -z "$HDMI1" -a ! -z "$VGA1" ]
then
    echo "VGA1 is plugged in, but not HDMI1" >> /tmp/monitor
    xrandr --output LVDS1 --off
    xrandr --output VGA1 --mode 1920x1080 --primary
elif [ ! -z "$HDMI1" -a -z "$DP1" ]
then
    echo "HDMI1 is plugged in, DP1 is not" >> /tmp/monitor
    # $DP1 is empty in this branch, so the connector name is hardcoded here
    /usr/bin/xrandr --output DP-1 --off
    /usr/bin/xrandr --output $HDMI1 --primary
else
    echo "No external monitors are plugged in" >> /tmp/monitor
    xrandr --output LVDS1 --off
    xrandr --output LVDS1 --mode 1366x768 --primary
fi

Change the script to be executable with chmod +x /usr/local/bin/monitor-hotplug.sh

 

<span class="entry-utility-prep entry-utility-prep-cat-links">Posted in</span> Uncategorized | Leave a comment

Teamviewer quick link

Here is a quick link to download the TeamViewer QuickSupport client: https://download.teamviewer.com/download/TeamViewerQS.exe

<span class="entry-utility-prep entry-utility-prep-cat-links">Posted in</span> Uncategorized | Leave a comment

Event Log XML Filtering

Been doing a lot with the Elastic Stack and log ingestion. I had a very basic configuration file for nxlog for grabbing security events; however, I was getting lots of event logs into the Elastic Stack that I was not filtering on. As always, it is better to filter at the source rather than at the destination. This blog helped me understand the filtering a lot better, which in turn reduced the number of events going into the Elastic Stack:

https://blogs.technet.microsoft.com/askds/2011/09/26/advanced-xml-filtering-in-the-windows-event-viewer/
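For reference, the kind of filter that post covers is an Event Viewer XML query.  A minimal sketch of the select/suppress pattern (the event ID here is purely illustrative):

```xml
<QueryList>
  <Query Id="0" Path="Security">
    <!-- take everything from the Security log ... -->
    <Select Path="Security">*</Select>
    <!-- ... except a noisy event ID we don't care about -->
    <Suppress Path="Security">*[System[(EventID=4662)]]</Suppress>
  </Query>
</QueryList>
```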

<span class="entry-utility-prep entry-utility-prep-cat-links">Posted in</span> Uncategorized | Leave a comment

Remove security headers from HTTP responses

Been doing some security work around securing HTTPS websites.  Here is a good resource on how to remove application-identifying headers: https://veggiespam.com/headers/#x-powered-by-aspnet
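As one example of the sort of change that page walks through – on IIS, the X-Powered-By header can be removed in web.config.  A sketch only; other headers such as Server and X-AspNet-Version need their own settings:

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- stop advertising the application framework -->
        <remove name="X-Powered-By" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```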

<span class="entry-utility-prep entry-utility-prep-cat-links">Posted in</span> Uncategorized | Leave a comment

Get WkHtmlToPdf to work in CakePHP 2

Doing a website, and using the code from here: https://github.com/Milanzor/cakephp-wkhtmltopdf

The problem was that I was getting blank PDFs back.  It turns out it does not do error handling very well.

After debugging the generated command and finding it was bombing out with “host not found” type errors, I copied the code from the “Tips and URLs” section of this post into AppHelper, and that resolved the issue:

https://www.dereuromark.de/2014/04/08/generating-pdfs-with-cakephp/

Basically, it just ensures that when printing to PDF, absolute URLs are used instead of relative ones (which don’t resolve when the page is rendered from a local file).

<span class="entry-utility-prep entry-utility-prep-cat-links">Posted in</span> CakePHP | Leave a comment

Ultrastar on Ubuntu 14.04

Needed to get UltraStar up and running on a 14.04 Ubuntu media centre.  There were some install issues that are not present on >= 16.04.  These steps were done on a fresh install of 14.04.

Quick steps

The install needs Free Pascal 3 for compiling.  To install this, do:

sudo add-apt-repository ppa:ok2cqr/lazarus
sudo apt-get update
sudo apt-get install fpc

To install the correct version of FFmpeg (v3 is required), follow the steps here:
http://ubuntuhandbook.org/index.php/2017/05/install-ffmpeg-3-3-in-ubuntu-16-04-14-04/

Install UltrastarDX using the instructions on their GitHub page
https://github.com/UltraStar-Deluxe/USDX#compiling-on-linuxbsd-using-make

 

<span class="entry-utility-prep entry-utility-prep-cat-links">Posted in</span> Uncategorized | Leave a comment

Permissions for joining computers in a domain for OSD

OK – I always need to go and find this.  It annoys me every time.  So I thought I would put a link to the doco here so I know where to find it:

Minimum Permissions Required for Account Used to Join Computers to a Domain During OS Deployment

 

<span class="entry-utility-prep entry-utility-prep-cat-links">Posted in</span> SCCM | Leave a comment

Using Client Pre-Production for Updates

Upgraded an SCCM server the other day, and found that you can now deploy client updates in SCCM during upgrades.  Awesome.

Here is a blog about it: https://www.systemcenterdudes.com/sccm-pre-production-client-deployment/

<span class="entry-utility-prep entry-utility-prep-cat-links">Posted in</span> SCCM | Leave a comment