Thursday, May 22, 2025

Reading ACARS Messages



Continuing on from last month's post: I've now set up my Linux computer and my Software Defined Radio to monitor ACARS frequencies and write each day's received messages to a log file, so it's time to actually read that data and maybe do something a little more interesting with it.


When reviewing the daily log files, I can see literally hundreds of messages being received during a typical 24 hour period, which is hardly surprising considering I am fairly close to a large international airport. 

One immediate drawback is the sheer volume of rather cryptic messages that I needed to wade through. 
  

A lot of it is pretty mundane stuff like position reports, clearance approvals and system status updates, with the odd free-text message mixed in.

What I would like to do is come up with some way to go through a day's worth of messages, filter it down to only the ones I care about, and post them up to a place where I can check in to see what's been happening during the day, regardless of where I may be in the world - meaning I wanted the messages posted someplace where I can look at them online.  

To do this I initially looked at posting things up to a social media site like X (i.e. Twitter). The issue with that idea is that I would need to hook into the service with an API connection, which can be a bit of a hassle to set up and, depending on the service involved, may also require a paid subscription. After some playing around I determined that the best option was to set up a pretty simple blog site, which would allow me to use a non-API based interface, with the added bonus of being able to fully customize the look and feel of the messages.

I also didn't want to post up in "real time", just in case there were security concerns that I might not be aware of. Based on that concern I wanted to delay any posts until at least 2 days had passed since the transmission. 

So, with those ideas in mind, I needed to come up with an automated process that would:
  1. Look for the log file that my SDR process created 2 days ago
  2. Look at the selected file and extract only the messages I care about
  3. Create HTML code to present those messages in an easy-to-read format
  4. Post the HTML code up to a blog page.  
Since there is a fair bit of logic needed to make this all happen, I needed some sort of programmed solution. In short, I needed to write some code. 

Recently, I've started using Python as part of my day-to-day work. While I am nowhere near what I would consider proficient in it, I know enough to fully appreciate what it is able to do, and it seemed to be the perfect tool for the job. 

To make it happen, I needed to break the process into steps.



Step 1 - Identifying and Selecting The Correct Message File


The first step in the process is to find the daily log file that ACARSDEC created two days ago and copy it to a temporary file that can be used for further processing. 

The basic logic for this is to look at the date tag in the daily log file's name. ACARSDEC creates the daily log files using a "Daily_YYYYMMDD.log" naming convention. To pull the needed file, I needed my program to do the following:

  • Scan the directory that contains my log files and look for a file whose date tag in its name equals the date 2 days before today's date. 
  • Once the file is found, create a copy of that file called "Daily.log" that we will use for further processing. 
In Python code, it looks like this:

import os
import shutil
from datetime import datetime, timedelta

def copy_log_file_if_two_days_old():
    # Get the date two days ago
    two_days_ago = datetime.now() - timedelta(days=2)
    date_str = two_days_ago.strftime('%Y%m%d')
    
    # Construct the expected filename
    source_filename = f"Daily_{date_str}.log"
    
    # Check if the file exists in the current directory
    if os.path.exists(source_filename):
        # Copy the file to 'Daily.log'
        shutil.copyfile(source_filename, "Daily.log")
        print(f"Copied '{source_filename}' to 'Daily.log'.")
    else:
        print(f"File '{source_filename}' does not exist.")

if __name__ == "__main__":
    copy_log_file_if_two_days_old()
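
As written, the sketch above assumes the script runs from the same directory that holds the logs. If the logs live somewhere else, a small variation along these lines (the path shown is just a placeholder, not my actual setup) takes the log directory as a parameter instead:

import os
import shutil
from datetime import datetime, timedelta

def copy_log_file(log_dir):
    # Build the expected filename for the log from two days ago
    date_str = (datetime.now() - timedelta(days=2)).strftime('%Y%m%d')
    source = os.path.join(log_dir, f"Daily_{date_str}.log")
    if os.path.exists(source):
        shutil.copyfile(source, os.path.join(log_dir, "Daily.log"))
        return True
    return False

if __name__ == "__main__":
    # Placeholder path - substitute wherever ACARSDEC writes its logs
    copy_log_file("/home/user/acars/logs")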


With the execution of this program, I now had a working copy of the raw ACARS data from 2 days ago. What I needed to do next was filter out the data I didn't want to look at and create an HTML file that presents the data I was interested in, in a fairly easy-to-read format.



Step 2 - Extracting The Information and Building the Webpage


Now comes the heavy lifting.

As I mentioned at the start of this post, there is a really large volume of data being transmitted on a daily basis. While it's all interesting stuff in and of itself, I was really only interested in messages that were likely to be human generated. 

To figure out what sorts of messages I wanted to look at, I first captured several days' worth of transmissions and tried to find the common message labels that were most likely to have been manually created. 

From my analysis I determined that the following message labels were my best candidates:
  • 84 - labelled as "S.A.S. Free Text Message"
  • 87 - labelled as "Airline Defined" - Likely Air Canada based on the aircraft tail numbers
  • 85 - labelled as "Airline Defined" - Likely Air Canada based on the aircraft tail numbers
  • 5Z (with the words "FRM ENTRY" or "DISP MSG" in the message text) - labelled as "Airline designated downlink" - Primarily United Airlines based on the aircraft tail numbers
Once I had defined the messages I wanted to look at, it would be fairly straightforward to parse the file and pull out the required messages. 
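
To make the parsing concrete, here is an illustrative example of what a single log line looks like - each line in the daily log is a standalone JSON object, which is why it can be parsed line by line. The field values below are invented for illustration, and real records carry additional fields, but these four are the ones I care about:

import json

# An invented but representative log line - one JSON object per line
sample_line = '{"label": "87", "msgno": "M45A", "tail": "C-FABC", "text": "REQ GATE INFO"}'

record = json.loads(sample_line)
print(record["label"], record["tail"], record["text"])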

In order to increase the readability of the message, I decided that I really only wanted to see:
  • Message Label
  • Message Number
  • Tail Number
  • Message Text 
Once I had figured out what I wanted to see and how I wanted to see it, the final step was to put the information into a format that would be readable as a webpage, in preparation for eventually posting it to a website - which meant converting the data to HTML code.  

While it looks like there's a lot going on, it actually translated into a fairly compact Python program: it reads in the Daily.log file that I created in the last program, pulls out the records I mentioned above, and wraps some HTML code around them before spitting it all out as an HTML file. 

As a result I ended up with a Python program that looked like this:

import json
from datetime import datetime, timedelta

phrase = "FRM ENTRY"
phrase2 = "DISP MSG"

# Get the date two days ago
two_days_ago = datetime.now() - timedelta(days=2)
date_str = two_days_ago.strftime("%B %d, %Y")

def read_json_records(filename, fields):
    records = []
    with open(filename, 'r', encoding='utf-8') as file:
        for line in file:
            try:
                record = json.loads(line.strip())
                # Keep the free-text labels, plus 5Z messages containing
                # either dispatcher phrase; use .get() so records without
                # a "text" field don't raise a KeyError
                label = record.get("label")
                text = record.get("text", "")
                if label in ["84", "87", "85"] or (
                    label == "5Z" and (phrase in text or phrase2 in text)
                ):
                    records.append({field: record.get(field, "") for field in fields})
            except json.JSONDecodeError:
                continue
    return records

def records_to_html(records, fields):
    html = "<html><head><meta charset='UTF-8'><title>Filtered Records</title></head><body>"
    html += f"<h1>ACARS Messages {date_str}</h1>"
    for record in records:
        html += "<div style='margin-bottom: 20px; padding: 10px; border-bottom: 1px solid #ccc;'>"
        for field in fields:
            html += f"<p><strong>{field}:</strong> {record.get(field, '')}</p>"
        html += "</div>"
    html += "</body></html>"
    return html

# Example usage
if __name__ == "__main__":
    filename = 'Daily.log'
    fields = ["label", "msgno", "tail", "text"]
    json_records = read_json_records(filename, fields)

    # Save to HTML
    html_output = records_to_html(json_records, fields)
    with open("blog_ready.html", "w", encoding="utf-8") as f:
        f.write(html_output)

    print("HTML file created: blog_ready.html")

Executing the program gives me something that looks like this:


Now I am able to convert the huge mass of cryptic messages from an airplane's ACARS terminal into something that is pretty easy to understand. 
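
One caveat worth flagging: the HTML generator above drops the message text straight into the page, so a message containing characters like < or & could break the markup. A minimal guard using Python's standard html.escape (the sample text here is invented) would look like this:

from html import escape

# Escape field values before embedding them in HTML so characters
# like '<' and '&' in free-text messages can't break the markup
text = "WX RPT: TEMP <10C & WINDS CALM"  # invented example message
print(f"<p><strong>text:</strong> {escape(text)}</p>")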

The final step in the process is to now post the day's activity up to a website that I can view anytime I want.

As I mentioned at the start of this post, I wanted to post this up to a blog page without the need to use any API connections. That proved to be a more complex process than I had expected, so I think this is a good time to wrap up for this month. I'll describe how I sorted that out in my next post.

Stay tuned next month for the finale of this project. 
 
