- Introduction (handout and overhead presentation).
- The client-server model (handout and overhead presentation). We learn about the client-server model,
which underlies all network applications. This matter is also covered in Chapter 2 of the textbook.
- Concurrency (handout and overhead presentation). Concurrency is a must for network applications,
and here is a first look at how it can be accomplished. This first look follows Chapter 3 of the
textbook. We will revise and expand the discussion later.
- Application program interfaces (handout and overhead presentation). This section is concerned with
the principles and examples of application programming interfaces. In the textbook APIs are covered
by Chapter 4, Chapter 5, and part of Chapter 6.
Sample code: You may find the following helper functions for both servers and clients useful
(take a look at the header for details). You are welcome to use them in your assignments, but first make
sure you understand what happens in there. In other words, keep the archive handy and refer to it as
you are presented with new TCP-related concepts; the archive may contain examples of use for these
concepts. You are not expected to understand all the code at once; concepts will be introduced as the
course progresses.
- Client design (handout and overhead presentation). Clients may be complex pieces of software (think
modern Web browser), but the communication part is fairly straightforward. This section is covered
in the textbook in Chapter 6. Note that at this time we are only concerned with TCP clients; the UDP
part will be discussed later.
Sample code: Here is a simple client. The archive contains the various functions for TCP clients you
have seen already (tcp-utils.h and tcp-utils.cc), and code for a simple client that does roughly
the same thing as the parameterized telnet client (client.cc).
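For orientation, the communication part of such a client boils down to resolving the server's address and connecting to it. The sketch below is illustrative only (the function name tcp_connect is made up, and client.cc may be organized differently):

```cpp
// Hypothetical core of a TCP client; the archive's client.cc may differ.
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

// Resolves host:port and returns a connected socket, or -1 on failure.
int tcp_connect(const char* host, const char* port) {
    addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;        // IPv4 or IPv6, whatever resolves
    hints.ai_socktype = SOCK_STREAM;    // TCP
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    int sd = -1;
    // Try the returned addresses in order until one accepts the connection.
    for (addrinfo* p = res; p != nullptr; p = p->ai_next) {
        sd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (sd < 0) continue;
        if (connect(sd, p->ai_addr, p->ai_addrlen) == 0) break;
        close(sd);
        sd = -1;
    }
    freeaddrinfo(res);
    return sd;
}
```

Once connected, the descriptor is read from and written to like any other; everything protocol-specific happens above this level.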
- Server design (handout and overhead presentation). Here is what a server looks like. Most of this
discussion is also covered in Chapters 8 and 13 of the textbook, though critical regions are extra.
Sample code: You have seen a client, so we now have a server for a change. Besides the known tcp-utils module
(tcp-utils.h and tcp-utils.cc), the archive contains a simple server, which receives lines of text from
clients and sends them back prefixed by a string. There are in fact three implementations of this server,
produced by the following targets in the associated makefile (the default target makes them all):
- iserv is the iterative variant (listens by default on port 9000)
- ciserv is still iterative in nature, but simulates concurrency in its sole thread of execution (listens
by default on port 9002)
- cserv is the fully concurrent version (listens by default on port 9001)
- Multithreaded servers (handout and overhead presentation). Concurrent programs (obviously including
servers) can also be implemented in a single process using multiple threads of execution. This matter is also
covered in Chapter 12 of the textbook.
Sample code: Here are two more servers: a multithreaded one (tserv.cc) and another which features a monitor
thread (mtserv.cc). The implementation of the monitor is for all practical purposes identical to the one
presented in Section 12.8 of the textbook. Note in passing that this archive also contains a makefile that,
through judicious macro definitions, does not need any rule actions (except of course for the “clean” target).
- Managing concurrency (handout and overhead presentation). Concurrency is managed in most real-life
servers, and for a good reason. This is also covered in Chapter 16.
Sample code: Here you have yet another addition to our family of servers. Specifically, you now have a
multithreaded server with a control socket open to the local machine only (mctserv, which is also
a multiservice server if you think about it; see the next section), a multithreaded server that
uses preallocation (mtpserv), and a multiprocess server that also uses preallocation (cpserv).
- Multiservice servers (handout and overhead presentation). There is nothing preventing servers from
handling clients of different types using different application protocols. Also see Chapter 15.
Sample code: Here you have the whole collection of servers. In addition to what you have seen already, a sample
super server has been added (see the included README for details).
- Practical issues (handout and overhead presentation). Servers are daemons, that is, programs that sit quietly
in the background and only wake up to respond to specific requests. This section explains how to create Unix
daemons in general (and well-behaved servers in particular). This is also covered in the textbook Sections 30.1
to 30.23.
- Logging and debugging (handout and overhead presentation). Daemons do not have a terminal to print to, so
they have to provide information through different mechanisms, that is, using logging facilities. Debugging
also has its own challenges in production environments. This matter is partially covered in Chapter 30 of the
textbook.
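The standard such mechanism on Unix is syslog; a minimal usage sketch follows (the program name myserv is made up, and where the messages actually end up depends on the system's syslog configuration):

```cpp
// Minimal syslog usage; the identifier "myserv" is illustrative.
#include <syslog.h>

void log_startup(int port) {
    // Identify ourselves once: LOG_PID adds the process id to every entry,
    // and LOG_DAEMON is the customary facility for servers.
    openlog("myserv", LOG_PID, LOG_DAEMON);
    syslog(LOG_INFO, "listening on port %d", port);
}
```

Unlike printf, these messages survive the loss of the terminal and are timestamped and filterable by priority by the logging facility itself.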
- Deadlock and starvation (handout and overhead presentation). We discussed deadlock issues throughout
the course. Now we summarize all of these discussions. Also see Chapter 31 of the textbook.
- Secure programming (handout and overhead presentation). Secure programming is a course by
itself, but the highlights are definitely worth presenting, especially since servers interact with
unknown clients and so are particularly sensitive pieces of software. This presentation is based on
the Secure programs howto. Exploiting buffer overflows is explained in detail in the classic
Smashing the Stack for Fun and Profit. A couple of examples are from this paper on race conditions.
- The User Datagram Protocol (UDP) (handout and overhead presentation). We now discuss the main
connectionless TCP/IP transport-level protocol. This is covered throughout the textbook, namely in
Sections 6.18 to 6.24, 8.19, 8.22, 8.27, and 8.28.
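To contrast with the TCP calls seen so far: UDP needs no connection at all, just individual datagrams exchanged with sendto and recvfrom. A self-contained loopback sketch (all names illustrative):

```cpp
// Illustrative UDP sketch: no connections, just datagrams over loopback.
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

// Creates a UDP socket bound to loopback; *port receives the chosen port.
int udp_bound(unsigned short* port) {
    int sd = socket(AF_INET, SOCK_DGRAM, 0);   // SOCK_DGRAM means UDP
    sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    a.sin_port = 0;                            // let the system pick a port
    bind(sd, (sockaddr*)&a, sizeof a);
    socklen_t len = sizeof a;
    getsockname(sd, (sockaddr*)&a, &len);
    *port = ntohs(a.sin_port);
    return sd;
}

// Sends one datagram to 127.0.0.1:port -- no prior connection needed.
ssize_t udp_send(int sd, unsigned short port, const char* msg, size_t len) {
    sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    dst.sin_port = htons(port);
    return sendto(sd, msg, len, 0, (sockaddr*)&dst, sizeof dst);
}
```

Note that the receiver learns the sender's address with each datagram (via recvfrom), which is how a UDP server knows where to send its reply.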
- The Internet Protocol (IP) (handout and overhead presentation). Here is a brief presentation of the network
level of the TCP/IP stack. IP is described in RFC 791. Those interested in firewalls can check out the nftables
wiki, though this is obviously not necessary for this course. I do not have a good reference for routing
algorithms at the level of abstraction used in this course, but they are described in detail in all the textbooks
on computer networks.
Lecture recording
Lectures are recorded.
Reusing a socket
When you use the close system call on a socket, your program tells the system that it is done using the
respective socket. The socket is however not deallocated, and even the port binding created by your program is still
in place. Since nobody is using it, the port binding will eventually time out (according to the TCP timeout
value, typically on the order of minutes) and die, but in the meantime you get slapped with a “port
already in use” error if you want to bind another socket to the same port. This can become a serious
nuisance when one uses thread preallocation, and especially when this is combined with dynamic
reconfiguration.
This kind of behaviour occurs when the client does not shut down the socket. Indeed, the TCP stack on the
server side assumes that communication will come from the client in the future unless the client has sent an end of
file. But then the end of file is sent only upon shutdown. One solution is thus to convince one’s clients to be
well-behaved, but this is no real solution from the server programmer’s point of view (typically the server’s
programmer has no control over the clients).
I could find no server-side solution to convince a socket to just die for good (should anybody know one, I would
be very interested to hear about it). So I will present here the next best thing, a workaround. The
workaround consists of convincing your socket to tell bind to reuse the initially provided address. You can
do this by modifying the properties of the socket as follows (with sd the descriptor of your master
socket):
int reuse = 1;
setsockopt(sd, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));
This must be done after creating the socket and before binding it. See the manual page of socket in Section 7 for
details on this and other options, and of course the manual page for setsockopt.
Finally, it is worth pointing out that when a client dies in an uncivilized manner the server receives a SIGPIPE
signal from its TCP stack (more precisely, upon subsequently writing to the dead connection). This may also be
used in the context of this problem (though I don’t really know how).
You may also be interested in this explanation of what happens when a socket closes.
Recommended reading
To better understand the concepts presented during the lectures, it may be a good idea to read the examples
presented in the textbook for a supplementary (and sometimes alternative) view of the domain, as
follows:
- Chapter 7 deals with client design (both TCP and UDP). You will not necessarily be able to use those clients
(most ECHO ports are actually closed), but they do make good simple examples.
- Chapters 10 and 11 deal with TCP server design (iterative and concurrent, respectively).
- Section 13.5 presents an implementation that simulates concurrency in a single thread of execution. It is
similar in spirit to the implementation you have seen in the course, but does not allocate memory dynamically,
and uses select instead of poll. You may find it more palatable.
- Section 15.9 presents a sample super server, worth a look.
- Chapters 7, 9, 14, and 15 present UDP clients and servers. I strongly advise at least a quick look at them.