Man-in-the-Middle in a SCADA network

The SCADA acronym stands for supervisory control and data acquisition. A SCADA system is a collection of different software and hardware components connected through a network. The system includes inputs and sensors, PLCs, remote terminal units (RTUs) and different human-machine interfaces.

A SCADA system can communicate over TCP, for example with the IEC 104 protocol. That protocol, like DNP3, does not come with authentication and packet verification batteries included. This means it’s probably vulnerable to man-in-the-middle attacks (hint: it is).

What would a man-in-the-middle attack look like in such a network? We are mostly familiar with such attacks in web communications, for example when a user logs into their bank’s web interface over an untrusted network. A man-in-the-middle attack works similarly in SCADA systems.

Let’s suppose we have an electrical grid which uses some remote terminal units (RTUs). Now let’s suppose one of these RTUs detects a faulty condition and wants to communicate it back to the main SCADA servers. This communication is vulnerable when it’s done over Modbus, DNP3 or IEC 104. An attacker with enough domain knowledge can intercept the communication and modify crucial data.

For example, if the communication is over TCP with IEC 104, an attacker can intercept 104 packets, modify the SPI (single-point information status) field of one packet and then route it on to the SCADA servers. This can hide the problem from the engineers and eventually lead to economic loss or damage to the company’s image.
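
The sketch below is a simplified Ruby illustration of that modification, once an attacker is already positioned between the RTU and the server. It assumes a single M_SP_NA_1 (type 1, single-point information) object with the common 2-byte cause of transmission and 3-byte information object address layout; it is not a working attack tool.

# Hypothetical sketch: flip the SPI bit of a captured IEC 104 APDU that
# carries one M_SP_NA_1 information object.
# Layout assumed:
# 0x68 | length | 4 control octets | type | VSQ | COT(2) | CA(2) | IOA(3) | SIQ
def flip_spi(apdu)
  bytes = apdu.bytes
  # Only touch frames that start with 0x68 and carry a single-point object.
  return apdu unless bytes[0] == 0x68 && bytes[6] == 0x01

  bytes[15] ^= 0x01 # SPI is bit 0 of the SIQ octet
  bytes.pack("C*")  # re-serialize before forwarding to the SCADA server
end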

Halting the callback chains

In this article I will share a technique for finding out more about callbacks in the Ruby on Rails framework.

Callbacks are a common way of implicitly invoking actions when an event fires inside a system. Many web frameworks, like Rails or Laravel, depend on this mechanism to handle lifecycle events. A callback chain is created when multiple callbacks fire one after another, and we speak of halting such a chain when we stop the event propagation.

Halting a callback in Rails is simple if we use the abstraction provided by the framework. Specifically, calling

throw :abort

will halt the ActiveRecord callback chain.
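
As a minimal sketch, assuming a hypothetical User model with an admin? predicate, a before_destroy callback can halt the whole destroy chain like this:

class User < ApplicationRecord
  before_destroy :ensure_not_admin

  private

  # Throwing :abort stops the remaining callbacks and rolls back the destroy.
  def ensure_not_admin
    throw :abort if admin?
  end
end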

In larger Rails systems, where legacy code or questionable techniques (like depending on implicit ActiveRecord callbacks for domain-specific logic) may have accumulated, halting and debugging callbacks is not a trivial task. One way to approach such troubleshooting is to learn more about the callbacks of our system. To do that we will use the pry debugger.

The class method we are looking for is:

_#{action_name}_callbacks # e.g., _destroy_callbacks

Then, using select and map (each callback object exposes the name of the method it runs through filter), I can list and iterate over the callbacks of any ActiveModel class:

User._destroy_callbacks.select {|cb| cb.kind == :after }.map(&:filter)

=>
[
  :destroy_user_friends,
  :build_ast,
  :disconnect_from_bank_system,
  :check_soft_delete,
  ...
]
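
From there, a handy follow-up inside the same pry session is to jump from a callback name to its definition; the file and line in the output below are, of course, just an illustration:

# Each symbol returned above is an instance method, so we can locate it:
User.instance_method(:destroy_user_friends).source_location
# => ["app/models/user.rb", 42]   (illustrative output)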

Caching in HTTP/1.1

Throughout my work experience with web applications, HTTP caching has been a must-have element of the request-response systems I was building. HTTP caching is a huge topic (e.g., this is the caching section of the HTTP/1.1 RFC – https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html), and in this post I want to discuss my experience with the cache directives and, specifically, the If-Match and If-None-Match conditional headers.

The directives are unidirectional in HTTP/1.1. Let’s suppose we have a request-response cycle: a simple web frontend is rendered in a browser and the user has clicked a button. The first ever request will go to some server, which will serve a response. Unidirectional means that even if the request contains some directive, there is no guarantee that the same directive will appear in the response; it may or may not be there. Some common request directives include: no-cache, no-store, max-stale, min-fresh, no-transform, only-if-cached. Some response ones are: public, must-revalidate, proxy-revalidate and others.

If we zoom into the If-Match header, we notice its close relation to entity tags (ETags). An ETag is a digest of information that identifies a specific version of a resource. Using such a digest helps the client decide whether its cached copy of the resource is stale or fresh.

HTTP is a stateless protocol, yet ETags represent state. If a response tells us a story where a resource is stale or fresh, there is a background story: the state of the resource, which we chose to include in the request-response cycle by using entity tags.

One way to use ETags is to include one in the response headers. Subsequent HTTP requests then send this value in the If-None-Match header so the server can determine whether the client’s data is stale. The server will compute an information digest (an ETag) for the requested resource and respond with 304 Not Modified if it matches, meaning the client’s copy is still fresh. Otherwise, it will respond with the full resource and a new tag.

Some web frameworks, like Ruby on Rails, abstract ETags away from the programmer. The essence is whether the resource is fresh or not. So Rails has a method called

stale?

which uses ETags behind the scenes. That method is usually used alongside the

fresh_when

method, as described in the docs: https://guides.rubyonrails.org/caching_with_rails.html.
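
As a minimal controller sketch (assuming the Post model from the example below), stale? takes care of both computing the ETag and sending the 304:

class PostsController < ApplicationController
  def show
    @post = Post.find(params[:id])

    # Renders the view only when the client's ETag/Last-Modified pair no
    # longer matches; otherwise Rails responds with 304 Not Modified.
    if stale?(etag: @post, last_modified: @post.updated_at)
      render :show
    end
  end
end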

Example usage:

curl -i http://localhost:3000/posts/1 

Content-Length: 667 
Etag: "123123122132" 
Last-Modified: Wed, 12 Nov 2014 15:44:46 GMT 

And then:

curl -i -H 'If-None-Match: "123123122132"' http://localhost:3000/posts/1 

HTTP/1.1 304 Not Modified 
Etag: "123123122132" 
Last-Modified: Wed, 12 Nov 2014 15:44:46 GMT 

The advantage of the above is that we save the time that would normally be spent rendering the response, and since the 304 response body is empty we also save bandwidth.

Finally, there are some interesting challenges that come with such a cache mechanism. First, the content is not cached between users; building a caching mechanism that handles that case would need a more custom solution. Second, user-specific content (so, most of the content in big applications) is not handled well. Every user sees a different timeline, has different followers and a different connection graph, so the simple cache mechanism above will rarely produce a cache hit in those cases and we will avoid it. To solve the problem of caching a timeline we could work outside the application layer: each user can have the first x items of their timeline cached in an edge Redis server, for instance.
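
A rough sketch of that idea, using the redis gem and a hypothetical Timeline.first_items_for query, could look like this:

require "json"
require "redis"

TIMELINE_SIZE = 50

# Returns the first TIMELINE_SIZE timeline items for a user, caching them in
# Redis for a short period so repeated requests skip the expensive query.
def cached_timeline(redis, user_id)
  key = "timeline:#{user_id}"
  cached = redis.get(key)
  return JSON.parse(cached) if cached

  items = Timeline.first_items_for(user_id, TIMELINE_SIZE) # hypothetical query
  redis.set(key, items.to_json, ex: 60)                    # expire after a minute
  items
end

# Usage (the EDGE_REDIS_URL variable is an assumption):
#   redis = Redis.new(url: ENV["EDGE_REDIS_URL"])
#   cached_timeline(redis, current_user.id)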

Malware System Calls

My motivation for writing this essay was a graduate project I built while studying security at Georgia Tech. An important part of the system is a program that relies on finite-state automata to train deep learning models.

Each state is a system call. The following list shows the system calls I used while developing the Linux kernel modules.

Each entry gives the eax register value (the system call number), followed by the system call name and the ebx, ecx, edx register values. The register values are the arguments we need to pass to the system calls; when hooking a system call, they must be read and passed along to the hook (see the sketch after the list).

  • eax: 4, name: sys_write, ebx: unsigned int, ecx: const char *, edx: size_t
  • eax: 5, name: sys_open, ebx: const char *, ecx: int, edx: int
  • eax: 11, name: sys_execve, ebx: struct pt_regs
  • eax: 15, name: sys_chmod, ebx: const char *, ecx: mode_t
  • eax: 23, name: sys_setuid, ebx: uid_t
  • eax: 24, name: sys_getuid
  • eax: 33, name: sys_access, ebx: const char *, ecx: int
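
As a small illustration of how the list maps to actual calls, and assuming a 32-bit x86 Linux box where these numbers apply, Ruby's Kernel#syscall takes the eax value as the call number and the ebx/ecx/edx values as its arguments:

msg = "hello from sys_write\n"
# eax = 4 (sys_write), ebx = fd 1 (stdout), ecx = buffer, edx = byte count.
# On x86_64 the numbers differ, so this is illustrative only.
syscall(4, 1, msg, msg.size)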