Detecting a TCP reset with Linux sockets

When one end of a TCP connection aborts it, the stack sends a reset (RST) segment to the other end (an orderly close sends a FIN instead). I want to be able to detect the reset at the application layer.
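For context, here is one way I know of to deliberately provoke an RST from the closing side on Linux, by enabling SO_LINGER with a zero timeout before close(). This is a minimal sketch, not part of my actual setup:

```c
/* Minimal sketch: force an abortive close (RST) instead of a
 * normal FIN. Assumes `sock` is a connected TCP socket; error
 * handling is abbreviated. */
#include <sys/socket.h>
#include <unistd.h>

static void abortive_close(int sock)
{
    struct linger lg = { .l_onoff = 1, .l_linger = 0 };
    /* With l_onoff set and l_linger = 0, close() discards any
     * unsent data and sends RST rather than FIN. */
    setsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
    close(sock);
}
```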

In my code, I use a select() call to wait for input from potentially multiple sources, one of which is a TCP connection. I have observed that when select() reports data ready to read on the connection and a subsequent read() call returns 0 bytes, an RST had been sent over the connection. (I understand recv() behaves like read() in this respect.)
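For reference, the pattern looks roughly like this (a simplified sketch, not my actual code; `conn_fd` and the single-descriptor set stand in for my real sources):

```c
/* Sketch of the select()/read() pattern described above.
 * `conn_fd` is assumed to be a connected TCP socket; other
 * descriptors would normally be in the set alongside it. */
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

static void poll_once(int conn_fd)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(conn_fd, &rfds);

    if (select(conn_fd + 1, &rfds, NULL, NULL, NULL) > 0 &&
        FD_ISSET(conn_fd, &rfds)) {
        char buf[4096];
        ssize_t n = read(conn_fd, buf, sizeof(buf));
        if (n == 0) {
            /* select() reported the socket readable, yet read()
             * returned 0 bytes -- the case this question is about. */
            printf("peer closed?\n");
        }
    }
}
```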

Does read() return 0 bytes (after select() reports the socket readable) on a TCP connection only when the connection has been reset, or can it return 0 bytes in other cases as well?
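In other words, I want to know whether a distinction like the following is reliable (a sketch of what I think the semantics might be; the ECONNRESET check in particular is my assumption, and is exactly what I am asking about):

```c
/* Sketch of the distinction I am trying to confirm: my assumption
 * is that an orderly close (FIN) yields n == 0, while a reset
 * surfaces as a read() error such as ECONNRESET. */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

static void classify_read(int conn_fd)
{
    char buf[4096];
    ssize_t n = read(conn_fd, buf, sizeof(buf));
    if (n > 0) {
        /* normal data; process buf[0..n) */
    } else if (n == 0) {
        printf("0 bytes: orderly shutdown (FIN)? or also RST?\n");
    } else if (errno == ECONNRESET) {
        printf("read() failed with ECONNRESET: connection reset\n");
    } else {
        perror("read");
    }
}
```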

I remember, a while ago, using a particular Ethernet device at the other end of the connection: the Linux end was receiving 0 bytes after a select(), not because of a connection reset, but midway through some streamed data. I confirmed in Wireshark that the packet received carried 0 data bytes. Is this a bug, or, as with the question above, is this valid behaviour? I can't remember which device it was, as this was a few years ago, but it was using a Windows driver.

