A discovery service has one key task: to map peer identifiers to peer connectivity information (see Figure 8-1). The peer identifier might be a unique user name or a dynamically generated identifier such as a globally unique identifier (GUID). The connectivity information includes all the details needed for another peer to create a direct connection. Typically, this includes an IP address and port number, although this information could be wrapped up in a higher-level construct. For example, the coordination server that we used in the Remoting chat application stores a proxy (technically, an ObjRef) that encapsulates the IP address and port number as well as other details such as the remote class type and version.
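The essence of this mapping can be sketched in a few lines of C#. The PeerInfo and PeerRegistry types below are illustrative assumptions, not classes from the applications in this book:

```csharp
using System;
using System.Collections.Generic;

// Illustrative connectivity record. A real system might wrap more
// detail (remote class type, version), as the Remoting ObjRef does.
public class PeerInfo
{
    public string Address;   // IP address of the peer
    public int Port;         // port the peer listens on

    public PeerInfo(string address, int port)
    {
        Address = address;
        Port = port;
    }
}

public class PeerRegistry
{
    // Maps a peer identifier (user name or GUID) to its connectivity info.
    private Dictionary<string, PeerInfo> peers =
        new Dictionary<string, PeerInfo>();

    public void Register(string peerId, PeerInfo info)
    {
        peers[peerId] = info;
    }

    public PeerInfo Lookup(string peerId)
    {
        PeerInfo info;
        return peers.TryGetValue(peerId, out info) ? info : null;
    }
}
```

Whether the identifier is a friendly user name or a generated GUID, the lookup works the same way; only the registration step differs.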
In addition, a discovery service might provide information about the resources a peer provides. For example, in the file-sharing application demonstrated in the next chapter, a peer creates a query based on a file name or keyword. The server then responds with a list of peers that can satisfy that request. In order to provide this higher-level service, the discovery service needs to store a catalog of peer information, as shown in Figure 8-2. This makes the system more dependent on its central component, and it limits the ways that you can search, because the central component must anticipate the types of searches and maintain all the required catalogs. However, if your searches are easy to categorize, this approach greatly improves performance and reduces network bandwidth.
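A catalog-based service adds a second mapping, from a keyword to the peers that hold matching resources. The ResourceCatalog type and its method names below are assumptions for illustration, not the design used by the file-sharing application in the next chapter:

```csharp
using System;
using System.Collections.Generic;

// Illustrative catalog: maps a keyword to the identifiers of the
// peers that can satisfy a query for it.
public class ResourceCatalog
{
    private Dictionary<string, List<string>> index =
        new Dictionary<string, List<string>>();

    // Record that a peer shares a resource under the given keyword.
    public void Publish(string keyword, string peerId)
    {
        List<string> holders;
        if (!index.TryGetValue(keyword, out holders))
        {
            holders = new List<string>();
            index[keyword] = holders;
        }
        if (!holders.Contains(peerId))
            holders.Add(peerId);
    }

    // Return the peers that can satisfy a query for the keyword.
    public List<string> Search(string keyword)
    {
        List<string> holders;
        return index.TryGetValue(keyword, out holders)
            ? holders : new List<string>();
    }
}
```

Notice that the catalog only returns peer identifiers; the caller still consults the basic identifier-to-connectivity mapping to open a direct connection.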
Discovery services can be divided into two categories: stateful and stateless. The coordination component in Part Two was a stateful server; in other words, it ran continuously and stored all information directly in memory. This approach is fast for an off-the-cuff solution, but it presents a few shortcomings, including the following:
Long-running applications sometimes fail.
If the server needs to be restarted, all the information about active peers will be lost. This may be a minor issue if the peers are able to dynamically log back in, or it may be a more severe disruption.
It's not efficient to store a large amount of information in memory. As the amount of information increases (for large systems, or for systems in which other resources need to be centrally indexed), the performance of the central server worsens.
A stateful server can be called simultaneously by multiple clients. As you saw in Chapter 5, you can deal with this by using threading code, but the issues are sometimes subtle, and mistakes can lead to errors that are difficult to diagnose.
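The last shortcoming can be made concrete with a short sketch. The standard defense, as Chapter 5 showed, is to guard the shared table with a lock; without it, two threads registering peers at the same moment can corrupt the collection's internal state. The SafePeerTable type here is a hypothetical name:

```csharp
using System.Collections.Generic;

// A stateful server's shared table must be guarded against
// simultaneous callers; this sketch uses C#'s lock statement so
// that only one thread touches the dictionary at a time.
public class SafePeerTable
{
    private Dictionary<string, string> peers =
        new Dictionary<string, string>();
    private readonly object tableLock = new object();

    public void Register(string peerId, string endpoint)
    {
        lock (tableLock)
        {
            peers[peerId] = endpoint;
        }
    }

    public string Lookup(string peerId)
    {
        lock (tableLock)
        {
            string endpoint;
            return peers.TryGetValue(peerId, out endpoint) ? endpoint : null;
        }
    }
}
```

Every access path to the shared data must take the same lock; forgetting even one is the kind of subtle mistake that surfaces only under load.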
In this chapter we'll use a stateless server, which retains no in-memory information. Instead, information is serialized into a back-end database. This has two advantages: it allows more complex searches, and it reduces concurrency problems, because databases are extremely efficient at handling large volumes of data and large numbers of simultaneous users. The discovery logic is coded using a .NET web service, which springs to life when called and is destroyed immediately after it returns the requested information.
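A stateless lookup can be sketched as an ASMX web method that holds no state between calls: every request opens the database, runs a query, and releases everything before returning. The Peers table, its columns, and the connection string below are assumptions for illustration, not the schema developed later in this chapter:

```csharp
using System.Collections;
using System.Data.SqlClient;
using System.Web.Services;

// Stateless discovery sketch: no member fields hold peer state,
// so the service can be created and destroyed on every request.
public class DiscoveryService : WebService
{
    // Hypothetical connection string; real code would read this
    // from a configuration file.
    private const string ConnectionString =
        "Data Source=localhost;Initial Catalog=Discovery;Integrated Security=SSPI";

    [WebMethod]
    public string[] LookupPeer(string peerId)
    {
        using (SqlConnection con = new SqlConnection(ConnectionString))
        {
            // Parameterized query against a hypothetical Peers table.
            SqlCommand cmd = new SqlCommand(
                "SELECT Address, Port FROM Peers WHERE PeerID = @PeerID", con);
            cmd.Parameters.AddWithValue("@PeerID", peerId);

            con.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                ArrayList results = new ArrayList();
                while (reader.Read())
                    results.Add(reader["Address"] + ":" + reader["Port"]);
                return (string[])results.ToArray(typeof(string));
            }
        }
    }
}
```

Because the database handles locking and concurrent access, the web method itself needs none of the threading code that a stateful server requires.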
Overall, you'll find that the discovery service is more efficient for large systems. However, it does impose some additional requirements. The central server will need to run a reliable database engine (in our example, SQL Server) and Internet Information Server (IIS), which hosts all web services. Fortunately, IIS is built into Windows 2000, Windows XP, and Windows Server 2003.
If you don't have an instance of SQL Server, you can use a scaled-down version for free. It's called Microsoft Data Engine (MSDE), and it's included with all versions of Visual Studio .NET. The key limitations are that its performance is throttled once more than five operations run simultaneously, and it doesn't include graphical tools for designing a database. For more information, refer to the Visual Studio .NET Help files.
In the next few sections, we'll present a whirlwind review of web services and then dive directly into a full-scale example by developing the discovery service we'll need to use with the file-sharing application described in the next chapter.