December Adventure

01-12-2024

Intro

The December Adventure is an opportunity to write a little bit of code every day for the whole of December.

I'm thinking of starting an (almost) no-dependencies local-server portal website that has been on my mind for some time now, but I might jump on other projects too!

I'm already running a Debian server on a Pine64 single-board computer (SBC), so, just to kick-start things, I will create a basic server in Node.js (with the hope of evolving it into something in LISP, but that might be a task for another December).

The initial goal is to serve a few metrics from the SBC, and it might evolve into something that is actually useful for my family (weather report? bus schedules? shared groceries list?).

First thoughts

02-12-2024

Day 2 - and I got some code running!

Outputting some system info is quite straightforward using the Node.js OS API.

Some basic parts that came easy:

Some parts that required some digging:

I ended up using bash. With Node you can do that with child_process.exec.

Check the Node.js docs for more info.

CPU Temperature

      const child_process = require("node:child_process");

      // The thermal sysfs API reports the temperature in millidegrees Celsius.
      function getTemperature() {
        const temperature = child_process
          .execSync("cat /sys/class/thermal/thermal_zone0/temp")
          .toString()
          .trim();
        return `${(parseInt(temperature, 10) / 1000).toFixed(1)}°C`;
      }
      

This basically returns the temperature in a format that looks like: 48.8°C

For more info, check the Linux kernel docs on the thermal sysfs API.

Disc Space

I used df -h / to get the used and available space and also the mount point.

The command is in bash, but I did the text transformation in JavaScript (which turned out uglier than expected).
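For reference, a minimal sketch of that transformation (the column indices assume the standard df -h layout, and the function name is mine):

      const child_process = require("node:child_process");

      // df -h / prints a header row followed by one data row, e.g.:
      // Filesystem      Size  Used Avail Use% Mounted on
      // /dev/mmcblk0p1   29G  9.1G   19G  33% /
      function getDiskSpace() {
        const row = child_process
          .execSync("df -h /")
          .toString()
          .trim()
          .split("\n")[1]
          .split(/\s+/);
        return { used: row[2], available: row[3], mountPoint: row[5] };
      }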

Check the docs of the df command.

Network Traffic

I used ip -s -h link to display the received (RX) and transmitted (TX) bytes.

Check the docs of the ip command.

[Image: the above stats rendered on a white-background HTML page]

03-12-2024

Day 3 - the router

I got busy with work and family stuff, so I didn't progress the project as much as I expected, but here is a fun router in vanilla JS:

      // Map "METHOD + pathname" to a handler function.
      const router = {
        "POST/somePostEndpoint": somePostEndpointHandler,
        "GET/someGetEndpoint": someGetEndpointHandler,
        default: notFoundHandler,
      };

      const handler =
        router[request.method + requestUrl.pathname] || router["default"];
      handler(request, response);
      

A request comes in and, based on the method used and the endpoint provided, it gets mapped to the relevant handler function.

Wrap the above in a function, call it from Node's http.createServer, and you've got a server running!

Check the docs of the Node.js http server.
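Roughly like this, wiring the router from above into a server (a minimal sketch; the port is arbitrary):

      const http = require("node:http");

      const server = http.createServer((request, response) => {
        // Strip the query string so "/someGetEndpoint?x=1" still matches.
        const requestUrl = new URL(request.url, `http://${request.headers.host}`);
        const handler =
          router[request.method + requestUrl.pathname] || router["default"];
        handler(request, response);
      });

      server.listen(8080);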

04-12-2024

Day 4 - exploring some graphs

CPU graphs

Following the same pattern as the previous day, I'm going to run a bash command from Node.js.

Luckily, Debian comes with the sar command as part of the sysstat package, and we can use that to report CPU usage statistics.

Check the docs for the sar command.

Debian also comes with the sadf command, which displays the data collected by sar in different formats, in our case SVG.

Check the docs for the sadf command.

So here is the final command that reports CPU statistics in SVG: sadf -g -O customcol -- -P ALL.

Now I can run this on the server and return the SVG back to the frontend.
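On the backend that boils down to something like this (a sketch; the handler name is made up):

      const child_process = require("node:child_process");

      function getCpuGraphHandler(request, response) {
        // sadf writes the SVG to stdout; pass it straight through.
        const svg = child_process.execSync("sadf -g -O customcol -- -P ALL");
        response.writeHead(200, { "Content-Type": "image/svg+xml" });
        response.end(svg);
      }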

[Image: graph of CPU usage for a single core]

05-12-2024

Day 5 - more graphs

Network graphs

Similar to yesterday, I'm using sadf to generate network statistics: sadf -g -O customcol,skipempty -- -n DEV --iface=wlx3

I'm also generating another SVG to capture network error statistics: sadf -g -O customcol,skipempty -- -n EDEV --iface=wlx3

Memory graph

Memory utilization statistics: sadf -g -O customcol -- -r

Putting it all together

I created some basic endpoints and handlers on the backend and used HTMX to call the endpoints on the frontend. Everything works nicely, so I'm calling it a day!

06-12-2024

Day 6 - Logs?

I started to run out of ideas on what to build next. So, since things often go wrong, it will be good to have some logs readily available without having to SSH to the server.

For that, I'm using journalctl to output the daily logs, paginated, and I'm going to extend this in the future as I see fit.
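Something along these lines (a sketch; the flags and the line-based pagination are my choice):

      const child_process = require("node:child_process");

      // Return one "page" of today's logs: `limit` lines starting at `offset`.
      function getDailyLogs(offset = 0, limit = 50) {
        const lines = child_process
          .execSync("journalctl --since today --no-pager")
          .toString()
          .split("\n");
        return lines.slice(offset, offset + limit).join("\n");
      }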

07-12-2024

Day 7 - IoT

I have the TFA Dostmann AIRCO2NTROL for monitoring temperature and CO2, but it is completely disconnected from any network.

[Image: a CO2/temperature/humidity monitor that is small enough to fit in your palm]

It does not support WiFi, Bluetooth, or any other wireless protocol, but it can transfer data through USB.

As far as I know there isn't any documentation for that, but a few people have managed to read data from the USB in various programming languages.

For the sake of the December Adventure I will try to debug this myself instead of copying and pasting stuff.

08-12-2024

Day 8 - usb serial data

I'm reading USB serial data.

Next is making sense of what that output means (any specs?) and converting it to something readable.

09-12-2024

Day 9 - bash again and data persistence

I made some progress and I have a bash script running.

I'm still not able to decode the data into something meaningful and there aren't any specs, so I paused that, and now I'm also working on buffering the data and storing it permanently.

10-12-2024

Day 10 - decoded

        Raw data: 410000410d000000
        Bytes in decimal: 65 0 0 65 13 0 0 0
        Operation: 65, Value: 0
         56: 2D7F 11647
         50: 03F8  1016
         6E: 522C 21036
         71: 040C  1036
         4F: 1C1C  7196
         6D: 0D5E  3422
         43: 0B71  2929
         42: 126A  4714
        *41: 0000     0
        Latest Readings:
        CO2: 1016 ppm
        Temperature: 21.47 °C
        Relative Humidity: 0 %
      
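Piecing together the dump above: each report carries an operation code and a 16-bit big-endian value, and the interesting operations appear to be 0x50 (CO2 in ppm), 0x42 (temperature in 1/16 Kelvin) and 0x41 (relative humidity in 0.01 %). A sketch of the decoding, inferred from that output rather than from any spec:

      // A report is [op, hi, lo, checksum, 0x0D, 0, 0, 0]; the value is the
      // big-endian 16-bit number in bytes 1-2 (e.g. op 0x50, value 0x03F8 = 1016).
      function decodeReport(bytes) {
        const op = bytes[0];
        const value = (bytes[1] << 8) | bytes[2];
        switch (op) {
          case 0x50: return { co2: value };                       // ppm
          case 0x42: return { temperature: value / 16 - 273.15 }; // °C
          case 0x41: return { humidity: value / 100 };            // %
          default:   return null; // other operations: still unknown
        }
      }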

11-12-2024

Day 11 - node streams <3

I decided to switch from bash back to Node.js (initially I thought I wouldn't be able to read USB serial data, but it turns out it is just as simple as reading a file).

Enter Node streams, a no-brainer for this use case (and they also happen to be the most fun thing about Node.js).

So, I created a pipeline that reads from a file, transforms its data, and then writes it to another file for permanent storage.
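The shape of it, with Node's stream.pipeline (a sketch; the device path, the output file, and the pass-through transform are placeholders):

      const fs = require("node:fs");
      const { pipeline, Transform } = require("node:stream");

      // Placeholder transform: this is where the raw device data gets decoded.
      const decode = new Transform({
        transform(chunk, encoding, callback) {
          callback(null, chunk); // pass-through for now
        },
      });

      pipeline(
        fs.createReadStream("/dev/hidraw0"),                 // read from the device
        decode,                                              // transform its data
        fs.createWriteStream("metrics.log", { flags: "a" }), // append to storage
        (error) => { if (error) console.error(error); }
      );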

I also went down a rabbit hole searching for the data format that would be most suitable for storing a large volume of information on limited disc space, without using a dedicated database for time-series data.

(If you want to see a cool conference talk about Node streams, check James Halliday's (aka Substack) "Harnessing The Awesome Power Of Streams" from LXJS 2012.)

12-12-2024

Day 12 - binary

I decided to store the data in binary and not use any open-source databases (e.g. a time-series DB).

In code, all of that comes nicely with Node.js Buffers.

I'm using 7 bytes for each log: 4 bytes for the timestamp (epoch in seconds), 1 byte for the indicator (whether the log is for the temperature or the CO2 metric), followed by 2 bytes for the temperature or CO2 value.

[Timestamp (4B)][Metric Indicator (1B)][Metric (2B for temperature or co2)]
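A sketch of writing and reading one of these 7-byte logs with Buffers (the big-endian byte order and the indicator values are my assumptions):

      const TEMPERATURE = 0; // indicator values
      const CO2 = 1;

      function encodeLog(timestampSeconds, indicator, value) {
        const log = Buffer.alloc(7);
        log.writeUInt32BE(timestampSeconds, 0); // epoch in seconds
        log.writeUInt8(indicator, 4);           // temperature or CO2?
        log.writeUInt16BE(value, 5);            // the metric itself
        return log;
      }

      function decodeLog(log) {
        return {
          timestamp: log.readUInt32BE(0),
          indicator: log.readUInt8(4),
          value: log.readUInt16BE(5),
        };
      }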

That basically means that I can store lots of metrics in a limited storage space.

More specifically, since I'm getting 1 log per second, it will take 347 days to fill 200 MB of storage space with 29,959,314 logs (and at that point I might as well start purging the older ones).

13-12-2024

Day 13 - visuals

I created the new endpoints and plugged the functions that write and read the air quality metrics.

Next is to visualize them in a way that makes sense. For that I would normally use canvas, as I feel very comfortable working with it, but I decided to make something with SVG since the frontend is using HTMX.
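A minimal sketch of the idea, turning [timestamp, value] pairs into an SVG polyline (the dimensions and scaling are illustrative):

      function toSvgGraph(points, width = 600, height = 200) {
        const xs = points.map(([t]) => t);
        const ys = points.map(([, v]) => v);
        const [xMin, xMax] = [Math.min(...xs), Math.max(...xs)];
        const [yMin, yMax] = [Math.min(...ys), Math.max(...ys)];
        const coords = points
          .map(([t, v]) => {
            const x = ((t - xMin) / (xMax - xMin)) * width;
            const y = height - ((v - yMin) / (yMax - yMin)) * height; // y grows downwards in SVG
            return `${x.toFixed(1)},${y.toFixed(1)}`;
          })
          .join(" ");
        return `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${width} ${height}">` +
          `<polyline fill="none" stroke="currentColor" points="${coords}"/></svg>`;
      }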

[Image: graph of CO2 metrics]

I also got a Meshtastic device (decentralized wireless off-grid mesh networking over the LoRa protocol), which I'm planning to plug into the server, probably in the next few days.

14-12-2024

Day 14 - temp visuals and historic data

Similar to Day 13, but this time for temperature data.

I'm also working on having some sort of paginated view for historic data in both graphs (Node's streams come in handy again).

I'm missing Grafana, but I have no appetite for creating an interactive time-series visualization.

15-12-2024

Day 15 - init the services and housekeeping

I created systemd services, managed them with systemctl, wrote bash scripts and a Makefile for init purposes and created a status page to monitor their health.

Then I spent a good amount of time restructuring the project, since it had started to become too big to manage in a single dir, and I did a rewrite of the static file handler.

16-12-2024

Day 16 - meshtastic

Flashing the RAK4630 with firmware was a hoot!

I learned that the Web Serial API is a thing and that I can flash the device directly from my browser (but sadly there's no Firefox support, so I had to download a Chromium-based browser).

The Android Meshtastic app was easy to use and paired with the device without any issues. Unfortunately, I wasn't able to find any other nodes nearby, which will make testing and developing dashboards more difficult.

The next thing is to explore how to disable the WiFi and Bluetooth, then get serial data out of the device, and finally make sense of that data in order to create dashboards.

17-12-2024

Day 17 - serial data vol2

I'm trying to get serial data out of the RAK4630, with no luck so far.

That is the whole update 😅.

18-12-2024

Day 18 - serial data / the fun part

I was able to get a connection to the /dev/ttyACM0 serial port after adding my user to the dialout group by running this command: sudo usermod -a -G dialout $USER.

I'm using Node.js streams again, and there is a weird behavior where the connection is terminated and fs.createReadStream reaches the end of the input, which is not something I expect when reading from a serial port. I'm still able to read a couple of lines of data, and I could probably get the full output if I started polling for results, but there is no fun in that... so more about this tomorrow!
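The reading part really is just a read stream on the device file (a minimal sketch):

      const fs = require("node:fs");

      const serial = fs.createReadStream("/dev/ttyACM0");
      serial.setEncoding("utf8");
      serial.on("data", (chunk) => process.stdout.write(chunk));
      serial.on("end", () => console.log("unexpected end of input")); // the weird behavior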

19-12-2024

Day 19 - log structure

I'm not able to replicate the issue from yesterday, and my pipeline of streams works as expected.

The data is not in a structure I would expect. In the small sample I got, there are log severity indicators (so far DEBUG and INFO), timestamps, log categories (although these are not present for many logs) and free-text messages.

A single log line looks like this: DEBUG|INFO hh:mm:ss SOME_CODE_3_DIGIT_NUMBER [Power | DeviceTelemetry | Router | GPS] MSG
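A sketch of picking that structure apart (the regex mirrors the line format above and is my guess, not a spec):

      // severity hh:mm:ss 3-digit-code [optional category] message
      const LOG_LINE =
        /^(DEBUG|INFO)\s+(\d{2}:\d{2}:\d{2})\s+(\d{3})\s+(?:(Power|DeviceTelemetry|Router|GPS)\s+)?(.*)$/;

      function parseLogLine(line) {
        const match = LOG_LINE.exec(line);
        if (!match) return null;
        const [, severity, time, code, category, message] = match;
        return { severity, time, code, category: category ?? null, message };
      }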

I can't find any specs in the docs that would let me translate the logs into something more meaningful on the frontend, so I reached out to the community for support. If that fails I'll probably have a look at the firmware for any clues (although I'm avoiding that deliberately, as I don't like C++).

Other than that, I created a service, handled its instrumentation, and output the raw logs in a separate dashboard on the frontend.

20-12-2024

Day 20 - memex

I've been writing a knowledge database for many years, inspired by Cory Doctorow's Memex method. It's a mix of technical and non-technical docs in Markdown format.

It lives on a git hosting website that also gives me good search functionality when I'm not on my laptop.

I won't expose the whole memex on the local portal; the plan is to only serve the docs that will be useful to my family. For now, that would be recipes!

In the future, I might convert it to something similar to yosh's Obsidian template, which I saw on Mastodon today and thought looked cool.

21-12-2024

Day 21 - schema

I'll avoid using Markdown since I want to stick to the no-dependencies rule, and I don't feel like creating an MD-to-HTML converter from scratch, even though it might be an easy task if I only covered basic elements like headings and links.

For the data format I'm using JSON.

For the schema I'm using the "Recipe" schema type from schema.org, encoded as JSON-LD.
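A trimmed-down example of what one entry might look like (the values are made up):

      {
        "@context": "https://schema.org",
        "@type": "Recipe",
        "name": "Guacamole",
        "recipeIngredient": ["2 ripe avocados", "1 lime", "salt"],
        "recipeInstructions": [
          { "@type": "HowToStep", "text": "Mash the avocados." },
          { "@type": "HowToStep", "text": "Mix in lime juice and salt." }
        ]
      }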

It all works quite nicely: I have a pipeline that stream-reads the JSON file, I use offsets for pagination, and in the transform I buffer the data until I can fulfil the request, then map it to HTML that gets returned as the response.

The backend only loads into memory what is needed, the frontend only gets what it is requesting instead of the whole JSON file (which is a few MBs), and all of that while having nice linked data.

22-12-2024

Day 22 - indexing

I'm trying to create an index for efficiently retrieving a recipe from the JSON file.

I want to achieve O(1) retrieval, and that requires knowing the exact byte offset of the desired object within the file (which is not straightforward with JSON due to its variable-length nature).

The idea is to update the index at the point of adding a new recipe, and each update should benefit from previous runs.

The index will have the recipe's identifier and the start and end bytes of the object.

The frontend will request the recipe using the identifier, and the backend will read the index file, retrieve the start and end bytes, and use them as start and end positions in a read stream.

I have not done this successfully yet, but I'm in a good place with automated tests.

Update: The issue I was having was due to multi-byte characters, but now all my tests are passing.
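The gotcha: stream start/end positions are byte offsets, while string lengths count characters, so the index has to be built from byte lengths (Buffer.byteLength). A sketch, with the index shape and file name as assumptions:

      const fs = require("node:fs");

      // index: { [identifier]: { start, end } }, byte offsets into recipes.json.
      function getRecipe(index, identifier, callback) {
        const { start, end } = index[identifier]; // inclusive byte positions
        const stream = fs.createReadStream("recipes.json", { start, end });
        let json = "";
        stream.on("data", (chunk) => (json += chunk));
        stream.on("end", () => callback(JSON.parse(json)));
      }

      // When appending a recipe, advance the offsets by the object's *byte*
      // length, not its string length (multi-byte characters!):
      // const nextEnd = nextStart + Buffer.byteLength(recipeJson) - 1;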

23-12-2024

Day 23 - recipes!

Since the data and handlers are ready, I'm now doing some basic HTML and CSS stuff to render the recipes.

I need to decide which properties I'm going to keep from the "Recipe" schema and account for them in code.

Since "Less is More", I'm thinking of stripping the images support completely, stick to the three categories as seen in the screenshot and just add more metadata under 'Properties'.

[Image: Guacamole recipe rendered in an HTML page, with three h2-headed categories: 'Properties', 'Ingredients' and 'Instructions']

24-12-2024

Day 24 - web components (ugh!)

I hate using web components (for reasons), but like them or not they are part of the web standards now, which means they are widely supported by all major browsers without any dependencies. Also, since I rarely use them, that makes them a good December Adventure topic.

So, I created a couple of them, mainly to save time updating common components in different HTML files every time I want to introduce a change, and that also gave me sandboxed styles for free.
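One of them, roughly (a sketch; the element name and markup are illustrative):

      class SiteHeader extends HTMLElement {
        connectedCallback() {
          // Shadow DOM is what gives us the sandboxed styles for free.
          const shadow = this.attachShadow({ mode: "open" });
          shadow.innerHTML = `
            <style>nav { display: flex; gap: 1rem; }</style>
            <nav><a href="/">home</a><a href="/recipes">recipes</a></nav>
          `;
        }
      }

      customElements.define("site-header", SiteHeader);

Every page can then just drop in a <site-header></site-header> tag instead of repeating the markup.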

25-12-2024

Day 25 - p2p

Exploring designing a toy p2p protocol.

No code written, but lots of reading done.

I found lots of interesting papers on the peer-to-peer Wikipedia page.

I also found a lot of old books on archive.org, but none of them are free.

If you have any reads to recommend send me a message on mastodon at @evangelos@libretooth.gr.

26-12-2024

Day 26 - RTL-SDR

I got a radio scanner that I'm planning to use to get NOAA weather satellite images.

This won't require any external internet connection, so it is perfect for my server, as it still ticks the "offline" checkbox.

I'm exploring what type of antenna I will need in order to receive frequencies at 137 MHz, and how to DIY that antenna with hardware store materials.

27-12-2024

Day 27 - radio software

I'm still doing some reading on antennas, but I will likely go with the Quadrifilar Helix antenna because it is fairly compact and looks cool.

For the software, I'm exploring what it will take to not use any dependencies.

The process of getting and saving the data from the SDR is probably going to be similar to the other serial data stuff I've done this month.

The complex part will be the decoder, but as it stands I'm blocked by not having an antenna, so I will move to another adventure until this is sorted.

28-12-2024

Day 28 - 38c3

Since yesterday I've been full-on watching 38c3 talks, so all other activities are paused.

I'm logging the talks I'm watching here, and that will continue until the 30th of December.

29-12-2024

Day 29 - caching

I extended my static handler to cache the static assets.

For that I'm using the ETag and Cache-Control HTTP headers.

Cache-Control determines caching behavior (e.g. caching duration) and ETag enables revalidation (e.g. when the cache expires or the resource has changed).
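In the static handler that boils down to something like this (a sketch; the hashing choice and max-age are mine):

      const crypto = require("node:crypto");
      const fs = require("node:fs");

      function serveStatic(request, response, filePath) {
        const file = fs.readFileSync(filePath);
        const etag = `"${crypto.createHash("md5").update(file).digest("hex")}"`;

        // Revalidation: the browser sends back the ETag it already has.
        if (request.headers["if-none-match"] === etag) {
          response.writeHead(304);
          return response.end();
        }

        response.writeHead(200, {
          ETag: etag,
          "Cache-Control": "max-age=3600", // cache for an hour
        });
        response.end(file);
      }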

The first load returns a 200 OK status for the static files, with the ETag header present in the response headers.

Reloading the page without clearing the cache returns a 304 Not Modified status for the cached files, and the request headers include an If-None-Match header with the ETag value.

Modifying the static file on the server and reloading the page results in a 200 OK status and a new ETag value.

All of that optimizes performance by minimizing unnecessary data transfers, which in my case is not really needed, but it is fun to do.

30-12-2024

Day 30 - error handling

A few things that I did as part of this:

31-12-2024

Day 31 - structured semantic logger

To wrap up the December Adventure, I created a logger and replaced all the console-related logs in the codebase.

There are now three levels of logs: INFO for routine information, WARN for events that might cause problems, and ERROR for events that are likely to cause problems.

The logs' payload is in JSON and it looks like this:

  {
    "level": "info | warn | error",
    "timestamp": "2024-12-31T08:00:00.000Z",
    "title": "Failed to fetch user data",
    "message": "12345",
    ...additionalMetadata
  }
        

That makes it dead easy to filter relevant information based on their severity and properties.
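The logger itself can be tiny, roughly along these lines (a sketch):

      function log(level, title, message, additionalMetadata = {}) {
        console.log(
          JSON.stringify({
            level,
            timestamp: new Date().toISOString(),
            title,
            message,
            ...additionalMetadata,
          })
        );
      }

      const logger = {
        info: (...args) => log("info", ...args),
        warn: (...args) => log("warn", ...args),
        error: (...args) => log("error", ...args),
      };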

The next thing I might do is add a DEBUG log level that will be visible only when I run the server locally or in tests.