The purpose of the binary is to create a backdoor in the system on which it is run.
This means that once the program is running it may go unnoticed by the normal users of the system, which will keep performing without problems, while allowing a remote user to use the server on which it runs at their pleasure. To avoid being noticed, it listens for commands on a "raw" socket and disguises itself as a process named "[mingetty]", independently of how it was initially invoked.
A hacker with the appropriate knowledge of its functionality can take full control over the server that runs it, as the binary is able to execute arbitrary commands through a telnet connection (case 6) or by different means (cases 3 and 7).
It also has some extra "traffic generation" capabilities (cases 4, 5, 9, 10, 11 and 12).
For a detailed explanation of the encoding/decoding process check our previous section on the "decryption" routine.
For a short description of the basics of the decoding process, check the basic "decoding" routine.
A very basic decoder is provided here.
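As an illustration of what such a decoder can look like, here is a minimal byte-wise sketch. The single-byte XOR transform and the key value (0x23) are placeholders of our own, for illustration only; the real transform is the one described in the "decryption" routine section.

```python
# Minimal decoder sketch. The real transform is described in the
# "decryption" routine section; here we assume, purely for illustration,
# a single-byte XOR. KEY is a placeholder, not the binary's real key.
KEY = 0x23

def decode(data: bytes, key: int = KEY) -> bytes:
    """Apply the byte-wise XOR to every byte of the payload."""
    return bytes(b ^ key for b in data)

# XOR is its own inverse, so the same routine also encodes.
encode = decode

if __name__ == "__main__":
    cmd = b"run some command"
    wire = encode(cmd)           # what would travel on the wire
    assert decode(wire) == cmd   # round-trip recovers the original bytes
    print(wire.hex())
```

Because XOR is an involution, a single routine serves for both directions, which matches the symmetric encode/decode behaviour discussed earlier.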
All the "commands" sent to the binary travel over the "nvp" protocol. NVP stands for "Network Voice Protocol", and nowadays it is really uncommon to see traffic using this protocol (or perhaps it is not seen at all except in attacks?). So one way to detect this traffic would be simply to identify the protocol itself. Blocking this protocol would remove any remote user's ability to control and use the binary.
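As a sketch of that detection idea: NVP-II is IANA protocol number 11, and the protocol number sits at byte 9 of every IPv4 header, so a monitor fed raw packets can flag it directly. The helper names below are our own:

```python
import struct

NVP = 11  # IANA protocol number for NVP-II ("nvp-ii" in /etc/protocols)

def ip_protocol(packet: bytes) -> int:
    """Return the protocol number from a raw IPv4 packet (header byte 9)."""
    return packet[9]

def looks_like_nvp(packet: bytes) -> bool:
    """True for an IPv4 packet whose protocol field is NVP."""
    return (len(packet) >= 20
            and packet[0] >> 4 == 4        # IP version nibble must be 4
            and ip_protocol(packet) == NVP)

if __name__ == "__main__":
    # Forge a minimal 20-byte IPv4 header with protocol = 11 as a quick check.
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0, 20, 0, 0, 64, NVP, 0, bytes(4), bytes(4))
    print(looks_like_nvp(hdr))
```

The same one-byte check is what a tcpdump/IDS filter on "ip proto 11" performs, so the snippet is only meant to show where the signal lives in the packet.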
To extend the detection to similar network traffic, it would be a good idea to look for any "uncommon" protocol. The list of all the "named" protocols can be obtained from the "/etc/protocols" file on any *nix server.
The most common ones are probably "ICMP", "UDP" and "TCP", but on any given network some others can be seen, like "IGMP" or "IGP" for example. If we're using any kind of IDS (Intrusion Detection System), it could be a good idea to flag any traffic that uses a protocol uncommon on our network, and to remove that alert from our IDS rules only once the traffic has been verified as valid.
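The approach above can be sketched as follows: parse the name/number pairs from "/etc/protocols" and flag any observed protocol number whose name is not in an allowlist of protocols known to be legitimate on our network. The allowlist contents and helper names are our own example:

```python
def parse_protocols(text: str) -> dict:
    """Map protocol numbers to names from /etc/protocols-style text."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        fields = line.split()
        if len(fields) >= 2 and fields[1].isdigit():
            table[int(fields[1])] = fields[0]
    return table

# Protocols we expect to see on this example network (an assumption).
ALLOWED = {"icmp", "tcp", "udp", "igmp"}

def uncommon(proto_num: int, table: dict) -> bool:
    """True if the protocol number should be flagged for review."""
    return table.get(proto_num, "unknown") not in ALLOWED

if __name__ == "__main__":
    # In real use, read the system file: open("/etc/protocols").read()
    sample = ("icmp 1 ICMP\n"
              "tcp 6 TCP\n"
              "udp 17 UDP\n"
              "nvp-ii 11 NVP-II\n")
    table = parse_protocols(sample)
    print(uncommon(11, table), uncommon(6, table))  # NVP flagged, TCP not
```

Anything flagged this way is not necessarily hostile; the point, as noted above, is to review it once and whitelist it only after it is verified as valid.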
As a curiosity, at http://www.incidents.org/archives/intrusions/msg00774.html we can see a message from someone who was probably dealing with a trojan/backdoor that used a communication channel similar to the one we're discussing here.
This being the first reverse engineering of a binary I've done in my life, I have to admit that I may list as "protections" things that were perhaps not meant as such, but they have given me some difficulty or misled me in some way. Here comes the list of "difficulties" encountered:
This one is perhaps too obvious, but stripping a binary is a common practice to hinder reverse engineering. Without stripping, a lot of useful information about the binary shows up under "gdb" or "objdump". Stripping would be the first thing to do for anyone who wants a minimum of reverse engineering protection.
Static compilation means that all the library functions the program uses are linked into the binary. This has several effects (all of them good for protecting against reverse engineering).
Mainly, without static compilation we wouldn't have had to go through the function identification process we performed before, which, we have to remember, even though it has proved really useful, is not guaranteed to be 100% accurate.
Compiling code with high optimization always generates a binary that is more difficult to debug (and reverse engineer). In our case, for example, fenris cannot correctly tell the parameters passed to a function, because the stack pointer is not updated after every function call but only after a while. This misleads fenris into thinking there are more parameters on the stack than there really are. Another example of how optimization makes assembly code harder to reverse engineer is the use of "register variables": a local variable that was defined in the original source ("C") code doesn't show up as such; instead a register is used to store it.
Mainly, optimization is not intended to hinder reverse engineering, but that is one of its side effects.
This is a problem for debuggers in general, and fenris is no exception. We have seen how fenris disliked the situation.
Here we can only guess that this was done to avoid reverse engineering. It is also possible that the programmer was just following "common practice" when setting up a daemon.
Here comes another one that is subject to interpretation. It's possible that what we see as a complex and unintuitive way of implementing a routine is not that complex or unintuitive if we could see the original source code or hear the programmer's explanation.
But seeing how the "decrypt_function" was written, compared to how it could have been written, clearly makes us think that the unnecessary complexity was added on purpose.
This one probably wasn't used to prevent reverse engineering, but it really has that effect. Running an unknown binary that obviously has some obscure intentions is something that should never be done with a root account. The only possible exception could be a test machine that is disconnected, even indirectly, from any network with production or non-test systems. And even then it is advisable not to run such a binary as root until a minimum understanding of its capabilities has been achieved.
So, even though this probably was not intended to prevent reverse engineering, having to "patch" all these privileged calls adds difficulty to it.
Well, I'm not sure this one was meant to make reverse engineering harder, but I definitely spent more time than I should have trying to find out what these routines were doing. I ended up calling them "randomize??", "rand??" and "rand??caller", and they seem to perform these actions, but I don't have any "convincing" demonstration that this is all they do!
There have been other backdoors around. Among them we could mention BackOrifice and BackOrifice2K, which have become quite well known as they targeted the most popular OS: Windows.