
Security Basics

With so many potential threats against Internet applications, it’s often difficult to know just where to start in designing a secure application. This section will discuss some of the strategies you can use to get started.

Server Setup and Application Design

One of the first areas to concentrate on is how you set up your server. Designing an application that implements authentication and authorization properly is futile if someone can simply bypass all that security through some vulnerability for which you haven’t applied the patch. This section will discuss a number of good practices for server setup and application design, but it’s no substitute for a solid understanding of Windows and IIS security. You should consult other sources for server security, such as the Microsoft TechNet site and the security guides for Windows 2000 and Windows XP offered by the United States’ National Security Agency. The books of Michael Howard are an especially good source of information on IIS (and Web application) security. Howard has been a security program manager for Microsoft IIS 4.0 and 5.0, was on the Windows XP team, and is currently a Senior Program Manager for Microsoft’s Secure Windows Initiative. His books include Designing Secure Web-Based Applications for Microsoft Windows 2000 (Microsoft Press, 2000) and Writing Secure Code, 2d ed. (Microsoft Press, 2002). Writing Secure Code, in particular, contains information on threat modeling and other important techniques for designing and developing secure applications that are beyond the scope of this chapter.

Choosing an Operating System

When you’re choosing an operating system (OS), the first thing to ask yourself is just how much security you need. As with many issues related to the design of a Web application, there are trade-offs to be made between security and cost. Although client operating systems such as the Windows 9x series might be able to act as Web servers on a limited basis through Personal Web Server (PWS), they are not acceptable when security is an important factor.

Workstation or server operating systems, such as Windows Server 2003, Windows 2000, and Windows XP Professional, offer more robust scalability and better security features, including better access control (see “Access Control” on page 158), logging, and encryption.


One reason to choose Windows 2000, Windows XP, or Windows Server 2003 to develop and/or host a Web application is that these operating systems can use the NTFS file system; the Windows 9x series cannot. NTFS allows you to provide robust access control at both the file and folder level, and it also provides built-in support for file encryption. The FAT and FAT32 file systems (available in the 9x series) are poor choices for Web applications requiring robust security.

Server operating systems offer the most robust security features, as well as better scalability, greater ease of configuration, and better features for developers.

Your evaluation should include an analysis of features (in this case, security features), cost, and the existing environment, with the goal of determining which OS meets your security needs (features), while allowing you to work within the constraints of cost and existing environment (if any).


Windows 2000 and Windows XP provide a number of major security improvements, including built-in support for file system encryption, security policies and templates (discussed later in this section), and a Security Configuration and Analysis tool. This tool can be very useful in determining whether your system will meet your security needs as configured, and can help you easily configure it if it doesn’t. For these reasons, Windows 2000 (or later) should be the default choice for secure Web applications on the Microsoft platform. Additionally, the Web Server edition of Windows Server 2003 contains many security improvements that should make it the first choice for building new ASP.NET Web applications, whenever possible.

Choosing a Purpose

Another important point to consider when choosing your operating system is the purpose of the server. For smaller applications with low scalability requirements, it can be acceptable to run your Web server, database server, and components all on the same machine. Because IIS and most databases make significant demands on both RAM and processor power, this model does not scale particularly well for larger applications. More important, however, placing a database on a Web server that is exposed to the Internet greatly increases the security risks to the data stored in that database. This can also be true for other server and application software, from mail-server software to productivity applications such as Microsoft Office. The important point is that as you add more functionality to a server, you are also adding more security exposure. Keep all of this in mind as you configure your servers and decide the purpose for each one.

Too Much Service

The next important set of decisions comes when you install your operating system of choice (Windows 2000, for example), or when you add services to be used by your application. You should be very conscious of exactly which services are installed by default with your chosen operating system, and understand the vulnerabilities that can result.

Unused services running on your server can be a significant security risk. For example, if you install the FTP or SMTP services and have not protected the ports those services use, an attacker can detect the services and attempt to use them to compromise your server through various known vulnerabilities. (You can reduce the risk of compromise through diligent application of patches. See “Patching” on page 155 for more information.)

If you are not using a service, you should avoid installing it, or use the Add Or Remove Programs Control Panel applet to remove it. The following steps, which are specific to Windows Server 2003, show how to do so. (The procedure for Windows 2000 and Windows XP is similar.)


The Microsoft Baseline Security Analyzer (MBSA) tool, which is discussed later in this chapter (see “Patching” on page 155), can assist you in identifying some of the more common unnecessary services that you might consider removing.

Remove unused services

  1. Log in using an administrative account.

  2. Click Start, click Control Panel, and then click Add Or Remove Programs. (In Windows 2000 and Windows XP, click Control Panel, and then click the Add Or Remove Programs icon.)

  3. On the left side of the window, click the Add/Remove Windows Components button.

  4. Review the list of installed components (shown in the following illustration).

    Of particular interest is the Application Server node. Select that node and click the Details button. Next, select the Internet Information Services (IIS) node, and click the Details button. (Note that in Windows 2000 and Windows XP, the IIS node is in the first dialog box, so you won’t see an Application Server node.)

  5. To remove a service or component, uncheck (deselect) the check box next to it, as done for the SMTP service shown in the illustration on the following page. Click OK to close the Details dialog box, and again to close the Application Server dialog box (this is not necessary in Windows 2000 and Windows XP).

  6. Click Next to apply your changes.

  7. Click Finish to complete the process, as shown in the following illustration.


If you need to install services that are not used all the time, you should set them to be started manually rather than automatically. This way, you have control over when these services are running. Be aware, however, that in some cases even services configured for manual startup can be started, because other services are dependent on them. If there is a service you never want to run, set its startup type to Disabled; the service will not start and cannot be started as a dependency of another service.


Removing unused services is one way of reducing the so-called surface area that a potential attacker has to work with. This technique applies not just to services, but to any software running on a given machine. You should not install software such as file sharing software, productivity software, or third-party utilities on a server unless you absolutely must, and only when you understand the security impact that installing it entails.

Be a Policy Maker

One unheralded feature of Windows 2000 and later is a robust set of tools for setting up a machine’s security settings quickly and relatively painlessly. A full discussion of these tools is beyond the scope of this book (and could take up a book of its own), but let’s look at a couple of them.

The Security Templates tool and the Security Configuration and Analysis tool, when used together, let you create, edit, and apply templates for defining security policies, from minimum password length to file system auditing policy. Both tools, shown in the following illustration, are implemented as Microsoft Management Console (MMC) snap-ins.


You can also define security policies manually by using the Local Security Policy editor, which you can find by opening the Control Panel (and switching to Classic View, if required) and then double-clicking Administrative Tools and then the Local Security Policy icon. This tool allows you to adjust individual local security policy settings as well as apply security templates to the local machine.

Access the Security Templates tool

  1. Either open an existing MMC console or create a new one by clicking Start, and then selecting Run. Type mmc and click OK.

  2. From the File menu (Console menu in Windows 2000), select Add/Remove Snap-In (Ctrl+M), and then click the Add button.

  3. From the list of available snap-ins, select the Security Templates snap-in, and then click the Add button.

  4. Click the Close button to close the Add Standalone Snap-in dialog box, and then click OK to close the Add/Remove Snap-In dialog box.

  5. If desired, from the File (or Console) menu click Save (Ctrl+S) or Save As to save your new or modified MMC console.


    An infrequently used but helpful feature of the MMC is creating and saving custom consoles that contain the MMC snap-ins you use most frequently. You can use the procedure outlined in the preceding list to add the Security Configuration and Analysis snap-in (or any other snap-in) to a console.

The advantage of using the Security Templates tool to create and edit your security templates is that it allows you to create security policy templates separately from applying the template to the local machine.

Create a new security template

  1. Expand the Security Templates node by clicking the + symbol to its left, as shown in the illustration on the next page.

    This will display the template path folder(s) available on the machine.

  2. Right-click the template path folder where you want to store your new template, and then select New Template.

    The following dialog box will be displayed.

  3. Type TestTemplate for the name and This is a test template for the description of the new template, and then click OK. The new template will be displayed under the path folder you selected in Step 2, as shown in the illustration on the following page. (You might need to expand the node for the template folder to view the new template.)

  4. Expand the node for your new template by double-clicking it, or click the + sign next to it to display the options available for configuration with the template.

  5. Expand the policy area you want to customize, such as the Password Policy (which can be found under the Account Policies node). Select the policy to view its available Policy attributes in the right pane of the console window.

  6. Double-click Minimum Password Length policy.

  7. Check Define This Policy Setting In The Template, as shown in the following illustration. Edit the value to your desired setting.

  8. Click OK to close the Template Security Policy Setting dialog box and apply your change to the template.

  9. After making any additional changes to the template, save it by right-clicking the TestTemplate node and selecting Save.

Of course, once you’ve created your custom template, you might want to use it to configure security for one or more machines. You can do this by using the Security Configuration and Analysis tool, which you can also use to determine which settings on the local machine are not in compliance with the template you’ve defined (or one of the predefined templates).


As with many security-related tasks, you must be logged in as an administrator to successfully analyze the current security settings and apply security templates. Non-administrators can start up the Security Templates and Security Configuration and Analysis tools, but will receive errors if they attempt to perform actions for which they do not have the necessary permissions.

You can also use templates to define security settings for development, staging, and production security requirements. A development or staging server’s security requirements are usually less restrictive than those for a production server environment, where the completed application ultimately will be deployed. Unfortunately, when an application is moved to the more restrictive environment of the production server, security restrictions can prevent the application from working. You can define security templates for development, staging, and production environments and then apply those templates to your development and/or staging servers to test whether the application will work with the greater restrictions of those environments. Once testing is complete, you can restore the previous template to continue development work.


Because the Security Templates tool can affect a large number of security settings on a machine, it is very important that you configure and test your templates on development or staging systems before applying them to production systems. Certain security restrictions can prevent Web applications from functioning properly (for example, by restricting access on accounts used by the application), so you should always make sure your application works properly under the security policy defined by the template before applying the template to a production system.

Apply a security template

  1. Open a console containing the Security Configuration and Analysis MMC snap-in. If prompted, create a test database for this exercise. (For the purposes of this exercise, it doesn’t matter which template you create the test database from.)

  2. Before selecting a new template, you might want to save your current settings as a template so you can restore them after applying a different template. To save a template containing the current settings, right-click the Security Configuration and Analysis node, select Export Template, provide a name for the exported template, and then click Save.


If the Export Template menu option is unavailable on the context menu of the Security Configuration and Analysis node in the MMC, you need to create or open a security database, import a template, and then run an analysis. Once you do this, you will be able to run the export. Follow these steps to enable the Export Template menu option:

  1. Right-click Security Configuration and Analysis in the MMC, and select Open Database.

  2. Type the name of a new database, and click Open.

  3. In the Import Template dialog box that appears, select one of the templates and click Open.

  4. Right-click Security Configuration and Analysis in the MMC and select Analyze Computer Now from the context menu.

Once the analysis is complete, Export Template and most other options should be available on the context menu of the Security Configuration and Analysis node in the MMC.

  3. Right-click the Security Configuration and Analysis node and select Import Template.

  4. Select the desired template from the dialog box and click OK. For this example, select the built-in hisecws.inf template.


Microsoft has made available a template called hisecweb.inf for configuring a high-security Web server. You can use this template as a starting point for creating your own security templates. You can download it from the Microsoft Web site.

  5. You can compare the settings in the template to those currently configured on the local machine by right-clicking the Security Configuration and Analysis node, selecting Analyze Computer Now, and then clicking OK in the Perform Analysis dialog box.

  6. Expand the Account Policies node to display the Password Policy node. Select the Password Policy node to view its settings, as shown in the following illustration. A green check mark indicates where local settings match the template; a red X indicates where the settings do not match.

  7. To configure the local machine with the settings specified in the template, right-click the Security Configuration and Analysis node, select Configure Computer Now, and then click OK in the Configure System dialog box.

    If you want to view the settings modified by the template, you will need to reanalyze the computer as described in Step 5.

Keep in mind that only settings for which a value has been defined in the template will be applied. All other settings will remain as they were previously configured. Also note that when a server that you’re configuring is part of a Windows 2000 or Windows Server 2003 domain, any settings configured in the domain-wide security policy will override settings in the local security policy.

Passwords, Please

One of the most commonly (and dangerously) overlooked areas in Web server security is password protection. Problems in this area include weak or nonexistent passwords for sensitive information or services, and passwords placed in plain-text files such as ASP and ASP.NET pages, Global.asa or Global.asax files, or configuration files in the Web space.

Weak or Blank Passwords

A simple rule of thumb for sensitive information on your Web server is that it should be protected by a password, and a strong one. Too often, developers wrongly assume that it’s sufficient to put Web pages that are to be accessed only by certain authorized users (or other sensitive content) in a separate, unlinked directory that can be reached only by entering its URL directly. Worse, Web developers who are not sufficiently familiar with Microsoft SQL Server might install SQL Server databases on a Web server or another server on the network without understanding the security ramifications. The same ramifications apply to MSDE, which can be installed with the Microsoft .NET Framework QuickStart samples or Microsoft Visual Studio .NET.

SQL Server and MSDE both contain an extended stored procedure called xp_cmdshell that allows command-line commands to be run on the server. Why is this important? Because in many cases, people still install SQL Server or MSDE with a blank password for the sa (system administrator) account. For a server connected to the Internet, this is like begging to be hacked. In fact, one Web-hosting company that hosts a site devoted to ASP and ASP.NET articles fell prey to precisely this problem: the default configuration of MSDE in the Beta 2 .NET Framework QuickStart samples included a blank sa password. (This has been corrected in the release version.) This highlights the inherent risk of both beta software and samples, which are rarely designed for high security. The result was that some malcontents used xp_cmdshell to delete much of the content on the affected server. The Web-hosting company was fortunate enough to have backups of the content and was able to restore the server, but the damage could have been much worse.


I cannot emphasize this strongly enough: You must have a strong password on the sa account of any SQL Server or MSDE databases on your network. A malicious user with access through the sa account can do anything to your server that could be accomplished from a command line, including adding or deleting user accounts, installing and executing malicious code, and deleting content or system files.

Change the MSDE sa password

  1. On the machine where MSDE is installed, open a command prompt. (See Appendix C for instructions on installing MSDE.)

  2. Type the following command, which uses the osql command-line utility to connect to the \VSdotNET named instance of MSDE that can be installed by the QuickStart samples, and then press the Enter key:

    osql -S(local)\VSdotNET -E
  3. If the connection is successful, you’ll see the following prompt. (If the login fails, you’ll get a failure message.)

  4. If the login is successful, enter the following commands to modify the password for the sa account. Make sure you memorize the new password. (Avoid writing it down if possible. But if you must, put it somewhere secure so prying eyes won’t find it.) Replace <new password> with the new password you’ve chosen, and follow the go command by pressing the Enter key:

    1> sp_password NULL, '<new password>', 'sa'
    2> go
  5. If the result is successful, you should see the message shown in the following illustration. Also note that, in this example, a password containing letters, numbers, and symbols has been chosen.


Although it might seem obvious, you should not use the password shown in this example as the sa password for your SQL Server or MSDE instance. Instead, choose a unique value that will be difficult to guess.

  6. Type exit, and then press the Enter key to exit the osql utility.

You can also use the SQL Server Enterprise Manager utility, if it is available, to modify the login accounts and passwords for an MSDE database.

Almost as bad as blank passwords are passwords that are weak (easily guessable), such as

  • Names or places

  • Dates, such as birthdays or anniversaries

  • Words found in a dictionary

  • Short passwords (fewer than 8 characters)

  • Passwords that are all letters, all lowercase, all uppercase, or all numeric

Weak passwords make it much easier for someone trying to hack a server to guess the password for an account. So-called dictionary attacks use a dictionary of common terms or words to rapidly attempt to log in to an account. Brute force attacks attempt every possible value until they find the correct one. Although a strong password won’t prevent a brute force attack, it can greatly increase the time needed for such an attack to succeed. In combination with appropriate auditing and logging, this can give you time to detect and deal with the attack.
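To put numbers on this, the search space a brute force attack must cover grows exponentially with password length and character variety. The sketch below illustrates the arithmetic; the guesses-per-second figure is an assumed rate chosen for illustration, not a measurement.

```python
# Brute force search space = charset_size ** length.
# GUESSES_PER_SECOND is an assumed attacker speed, for illustration only.

def search_space(charset_size: int, length: int) -> int:
    """Number of candidate passwords for a given alphabet size and length."""
    return charset_size ** length

weak = search_space(26, 6)        # 6 lowercase letters: 308,915,776 candidates
strong = search_space(94, 8)      # 8 chars drawn from ~94 printable characters

GUESSES_PER_SECOND = 1_000_000

print(f"weak:   {weak / GUESSES_PER_SECOND:,.0f} seconds to exhaust")
print(f"strong: {strong / GUESSES_PER_SECOND / (86400 * 365):,.0f} years to exhaust")
```

Even at the assumed rate, the weak space is exhausted in minutes, while the strong one takes on the order of 10^5 years, which is why complexity requirements buy the time that auditing and logging need to catch an attack in progress.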

Strong passwords meet minimum requirements for length and complexity. You can require strong passwords by applying a security template, such as the hisecweb.inf template, using the Security Configuration and Analysis tool as described earlier in this chapter. The hisecweb.inf template sets minimum and maximum password age, enforces password history (preventing users from reusing old passwords), sets the minimum password length to eight characters, and requires passwords to meet complexity requirements, including characters from at least three of the following four categories:

  • Uppercase characters

  • Lowercase characters

  • Numeric characters

  • Nonalphanumeric symbols (such as punctuation and special characters such as *, #, and $)

Note that these settings control passwords only for NT security accounts. If you use your own authentication credentials in your application, you will need to implement your own solution for enforcing strong passwords, such as using the RegularExpressionValidator Server Control to perform pattern matching. (See Chapter 8 for more information on validation controls.)
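A single validation expression can be awkward to write for a "three of four categories" rule, so the sketch below shows the equivalent check procedurally, in Python purely for illustration. The function name is my own, and the rule (minimum of eight characters, at least three of the four categories listed above) is modeled on the hisecweb.inf policy, not taken from the .NET Framework.

```python
import re

# Sketch of a strong-password check mirroring the policy described above:
# at least 8 characters, drawing on at least three of the four categories.
CATEGORIES = [
    re.compile(r"[A-Z]"),         # uppercase characters
    re.compile(r"[a-z]"),         # lowercase characters
    re.compile(r"[0-9]"),         # numeric characters
    re.compile(r"[^A-Za-z0-9]"),  # nonalphanumeric symbols
]

def is_strong(password: str) -> bool:
    if len(password) < 8:
        return False
    matched = sum(1 for pattern in CATEGORIES if pattern.search(password))
    return matched >= 3

print(is_strong("password"))    # all lowercase: False
print(is_strong("G0od#Pwd"))    # upper, lower, digit, and symbol: True
```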


One of the best things an application developer can do to encourage users to use strong passwords is to help reduce the number of passwords that users must remember. Too many times, we design applications without any consideration for the host of passwords users must remember for other applications or Web sites. The more passwords users have to remember, the more likely they are to choose simple (and easily guessed) passwords or to write passwords down on sticky notes attached to their monitors.

So what should you, as an application developer, do? Find ways to reduce the number of passwords you require your users to remember. A couple of ideas for this include:

  • Consolidate authentication for multiple applications under your control, whenever possible. Allowing a single sign-on to multiple applications can reduce the overhead of managing logins (authentication), while application rights can still be assigned on a per-user basis (authorization).

  • Use Windows authentication whenever possible. Using Windows authentication means that users do not need to remember a separate username and password for your application. Of course, if you’re going to use Windows authentication, you should encourage the use of strong password policies in your organization so your application will be less vulnerable to easily guessed passwords.

Other practices, such as limiting the length of passwords or making certain characters invalid in a password, can discourage users from reusing passwords that they have memorized for other applications. Here, there’s a balancing act between the desirable practice of having unique passwords and the reality that more passwords to remember often means shortcuts like writing down passwords. In cases like these, ask yourself which is worse: users reusing existing passwords (assuming these meet your requirements for strong passwords) or users writing down new passwords because they aren’t going to be able to remember them. Just remember that while security is a very important goal, if you make security too hard for your users to deal with, you might end up encouraging practices that make your application less secure in the end.

Unsafe Storing of Passwords

Another common problem in Web applications is the storing of passwords or other sensitive data in unsafe locations: plain-text files such as ASP and ASP.NET pages, Global.asa or Global.asax files, and configuration files in the Web space (files that are accessible by HTTP requests to the Web server). The problem usually occurs when developers place database login information, including the password, in a plain-text file on the Web server. These developers assume that since users are prevented from viewing the source of .asp, .aspx, .asa, and .asax files by default, this information will be safe. Unfortunately, this is true only if the Web server is not compromised by a security vulnerability. If a server is compromised, a malicious user might be able to read any file in the Web space for an application and gain the password(s) stored there.

You can protect sensitive information such as database passwords by using one of the following options.

  • If you’re using SQL Server or MSDE, use a trusted connection to connect to your database. This method uses the Trusted_Connection attribute of a connection string to tell SQL Server to use the current user’s NT login information to log in. This is most useful in intranet scenarios where users log in to your application via NTLM with an NT username and password. This method gives you the advantage of not needing to store a password at all.

  • Store the connection string information in the machine.config file, which is not directly in the Web space of the application, using the appSettings configuration section (described in Appendix B). Although this method is not ideal, because the password is still stored in plain text, the fact that machine.config resides outside the Web space makes it that much harder for a malicious user to get to. For better security with this method, the directory containing machine.config and the directory containing your Web application should reside on different drives.

  • For passwords used with Forms Authentication, you can use the aptly (if awkwardly) named HashPasswordForStoringInConfigFile helper method of the FormsAuthentication class to hash a password for storing in a configuration file or in a database or XML file. You can then use the same method to hash the password entered by the user at run time and compare the two hashes to determine whether to allow the login to succeed. You’ll learn how to store hashed passwords later in this chapter.
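As a sketch of the second option above, a connection string stored in the appSettings section might look like this; the key name and connection details are illustrative placeholders, not values from the text:

```xml
<configuration>
  <appSettings>
    <!-- Key name and connection details are illustrative placeholders -->
    <add key="ConnectionString"
         value="server=(local)\VSdotNET;database=pubs;Trusted_Connection=yes" />
  </appSettings>
</configuration>
```

At run time, the application reads the value through the configuration API rather than hard-coding it in a page.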

In addition, you can use such techniques as encryption (using the classes in the System.Security.Cryptography namespace) to make a would-be hacker’s job more difficult. The bottom line is that there is really no 100-percent-secure place that you can store passwords, but some methods are more secure than others. Balance your need for security against other factors when choosing how and where to store sensitive information.
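The hash-and-compare approach described in the last option can be sketched as follows. Python’s hashlib stands in for the .NET helper purely for illustration; the helper supports SHA1 and MD5 formats and returns an uppercase hexadecimal digest, which this sketch mimics for SHA1. The password literal is made up for the example.

```python
import hashlib

# Sketch of hash-and-compare password checking. Only the digest is ever
# written to disk; the plain-text password is never stored.

def hash_password(password: str) -> str:
    """Return an uppercase hex SHA1 digest of the password."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()

# At setup time, store the digest instead of the password.
stored_hash = hash_password("S3cret!Pwd")   # example password, made up

# At login time, hash the submitted password and compare digests.
def login_succeeds(submitted: str) -> bool:
    return hash_password(submitted) == stored_hash

print(login_succeeds("S3cret!Pwd"))   # True
print(login_succeeds("wrong"))        # False
```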


Using a trusted connection with SQL Server requires either using Windows authentication and impersonation, as described in “Using Impersonation” on page 181, or setting up the default ASPNET account (the account used to run the ASP.NET worker processes) as a login account in the SQL Server database being accessed. This process is described in Chapter 9.

Limit Those Accounts

Account limitations are an important security strategy from the standpoint of Windows accounts, database accounts, and any custom accounts you might create for your application. You should configure each account to have only the capabilities necessary for the type of user it represents. For example, it is usually a good idea to set up a database account with read-only access for pages (or components) in your application that only need to read and display data.

This concept is sometimes referred to as the principle of least privilege. Central to this concept is the notion of performing a given task with only the security privileges required for that task, and no more. Using an account with more privileges than necessary can result in unfortunate consequences if you have a security bug in your code.

A good example of this practice is Microsoft’s decision to change the default account for running ASP.NET worker processes. In the early betas of ASP.NET, the default account was the SYSTEM account, which has numerous privileges and can perform almost any action on a machine. Had this setting remained the default, any vulnerability found in ASP.NET or in your code could have provided an attacker with SYSTEM-level access to the vulnerable machine.

In ASP.NET 1.0 under IIS 5.0, the default is the ASPNET account (specified by the MACHINE value for the username attribute of the <processModel> configuration element in machine.config), which has very few privileges on the system. This change makes certain techniques more difficult to use, but it also reduces the likelihood that a single compromised application can compromise an entire system or an entire network.


When running IIS 6.0 on Windows Server 2003 in native mode (the default), the IIS 6.0 process model settings (accessed through the Properties dialog box of the application pool containing your application) are used and the settings contained in the <processModel> element in machine.config are ignored.

Since the default identity of an application pool process in IIS 6.0 is the new Network Service built-in account rather than the ASPNET account, any applications that use impersonation and grant rights to the ASPNET account would need to be modified to grant those same rights to the Network Service account in order to run on IIS 6.0 in native mode.
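Pulling the two paragraphs above together, the IIS 5.0-era fragment of machine.config looks something like this; only the attributes discussed in the text are shown, and the password attribute’s usual companion value is AutoGenerate:

```xml
<!-- Fragment of machine.config (other attributes omitted). Under IIS 5.0,
     userName="MACHINE" maps to the low-privilege ASPNET account; under
     IIS 6.0 in native mode this element is ignored. -->
<processModel enable="true"
              userName="MACHINE"
              password="AutoGenerate" />
```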

Another example of a major violation of the principle of least privilege is running database code using the SQL Server sa account. Never do this, ever! If you run your database code (whether raw SQL or stored procs) using the sa account, it takes only one coding error on your part for your entire database server to be compromised. And once an attacker has control over one machine in your network, it is far easier for the same outlaw to break into others. Don’t let this happen to you. Always use accounts with the minimum necessary privileges to access and update database data.

No Samples, Thank You

Another area of danger for the unwary IIS administrator or Web developer is sample applications. By default, some versions of IIS are installed with a set of sample applications designed to help developers learn to develop applications on an IIS server. The .NET Framework SDK also has samples that can be installed to help developers learn to develop .NET applications. These and other sample applications have their place, but not on a production Web server.

Sample applications are not designed to run on production servers, so they typically do not follow the practices necessary to prevent servers or applications from being compromised. For example, the samples installed with one version of IIS included a utility that allowed users to view the source of ASP pages. If this utility was installed on a server containing production applications, it could be used to view the source code of those applications, making them vulnerable to attack.

As mentioned earlier, the .NET QuickStart samples install an instance of the MSDE database software. In the Beta 2 release, this instance was installed with a blank password for the sa account. Installing these samples on a server that is exposed to the Internet would result in a major vulnerability.

Additionally, some sample applications demonstrate extremely poor practices when it comes to security. Early versions of the ASP.NET QuickStart samples used the sa account with a blank password for database connections. This is an extremely bad practice. Not only does it reinforce the bad habit of leaving the sa account with no password, but it also uses an account for data access that has much wider permissions than are necessary for data access alone. The good news is that the current versions of the QuickStart samples follow the best practice of using an MSDE account created especially for the sample applications. This account has permission only to access the databases used by the samples, and only the permissions on those databases necessary for the samples.

Although the changes in the .NET Framework QuickStart samples indicate that Microsoft is committed to demonstrating better security practices in sample applications, you should never install them on a production server (or any other server that is exposed to the Internet, for that matter) without a very clear understanding of the risks entailed, and without undertaking efforts necessary to mitigate those risks.


The MBSA tool (mentioned earlier in “Too Much Service” and discussed later in “Patching”) can assist you in identifying samples commonly installed by default in some versions of IIS. It does not, however, identify the .NET Framework QuickStart samples as a security risk. Nonetheless, to reduce the available attack surface you present to potential attackers, you generally should avoid installing the QuickStart samples on a production server exposed to the Internet.

You Need Validation

Validation is another area that is often overlooked in Web application security. For example, developers might include validation code to ensure that users enter an e-mail address or phone number in the correct format. But they might not consider that other text fields, particularly large text fields or those that might be used for display elsewhere in the application, can expose the application to unacceptable risks.

The problem with not validating all text input by the user is that a malicious user can enter text, such as script commands, ASP.NET <% %> render blocks, or other text that, in the wrong context, could be allowed to execute rather than being treated as plain text to be displayed. Additionally, input fields used to construct SQL statements for data access can be vulnerable to unexpected commands. Validation can help you prevent this problem, as well as prevent nuisance problems such as users attempting to post profanity in guest book or discussion list applications built with ASP.NET. In fact, ASP.NET makes it very easy to implement robust validation using a set of server controls designed specifically for this purpose.
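For the SQL side of this risk, the standard mitigation is to pass user input to the database through query parameters rather than concatenating it into SQL text (in ADO.NET, via command parameters). The following sketch, using Python's standard sqlite3 module purely for illustration, shows why the parameterized form is safe: the parameter is treated strictly as data, never as executable SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# An attacker-supplied value crafted to subvert a concatenated query:
malicious = "alice' OR '1'='1"

# BAD: string concatenation lets the input rewrite the query itself.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()
print(len(rows))  # 1 -- the injected OR clause matched every row

# GOOD: a parameter placeholder treats the input strictly as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(len(rows))  # 0 -- no user is literally named "alice' OR '1'='1"
```

Parameterization complements input validation; it does not replace it, since validated input can still be wrong for your application's rules.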

The most important thing to take from this discussion is that you should always treat any input from a user as suspect until it has been proven otherwise. Validate that the input from the user is what you’re expecting before you use, store, or display it. You can do this in a variety of ways, but the most effective is by using regular expressions to test for valid input. In ASP.NET, you can use the RegularExpressionValidator control to validate input using a regular expression of your choice. The following code snippet shows the declaration for a multi-line TextBox control, and an associated RegularExpressionValidator control that limits input to a small subset of HTML tags and punctuation characters. The regular expression used in the code example is taken from Chapter 13 of Michael Howard and David LeBlanc’s Writing Secure Code, 2d ed. (Microsoft Press, 2002).


You should always check for valid input, rather than checking for invalid input. The inherent problem with checking for invalid input or data is that it is far too easy to miss a particular type of invalid input. Checking for valid input according to rules you define and rejecting any other input is more likely to ensure that you don’t accept malicious input by accident.

<form id="Form1" method="post" runat="server">
   <!-- &lt; and &gt; represent < and > -->
   <asp:Label id="Label1" runat="server">Enter text to display
      (&lt;b&gt;, &lt;i&gt;, &lt;hr&gt; acceptable):</asp:Label>
   <asp:Button id="Button1" runat="server" Text="Display"/>
   <asp:TextBox id="TextBox1" runat="server" TextMode="MultiLine"
      Height="112px" Width="432px"></asp:TextBox>
   <!-- This validation expression will allow only <i>, <b>, and <hr> tags,
        spaces, any text A-Za-z0-9, and the following punctuation: ?!,.'".
        All other input will cause the validation to fail.
        Note that this expression does not validate for well-formed HTML. -->
   <asp:RegularExpressionValidator id="RegularExpressionValidator1"
      runat="server"
      ErrorMessage="Invalid Input Found!"
      ValidationExpression="^([\s\w\?\!\,\.\'\&quot;]*|(</?(i|I|b|B|hr|HR)>))*$"
      ControlToValidate="TextBox1"></asp:RegularExpressionValidator>
   <asp:Label id="Label2" runat="server"></asp:Label>
</form>
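Outside ASP.NET, the same whitelist approach can be sketched directly with any regular-expression engine. The following Python fragment is illustrative only; the pattern shown is a simplified assumption, not the exact expression used in this chapter. It accepts input matching an explicit rule and rejects everything else.

```python
import re

# Whitelist: letters, digits, spaces, and a few punctuation marks only.
VALID_INPUT = re.compile(r"[A-Za-z0-9 ?!,.'\"]*")

def is_valid(text: str) -> bool:
    """Accept only input matching the whitelist; reject everything else."""
    # fullmatch anchors the test to the entire string, so an embedded
    # tag cannot slip through on a partial match.
    return VALID_INPUT.fullmatch(text) is not None

print(is_valid("Hello, world!"))                 # True
print(is_valid("<script>alert('x')</script>"))   # False -- < and > rejected
```

Because fullmatch tests the entire string, any character outside the declared class causes rejection; there is no blacklist to keep up to date.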

You’ll learn more about using the ASP.NET validation controls in Chapter 8. You can learn more about using regular expressions for validating user input in Chapter 11 and Chapter 13 of Writing Secure Code, 2d ed., by Michael Howard and David LeBlanc (Microsoft Press, 2002). A good source for regular expressions for a variety of purposes is, a site run by Steven A. Smith, a noted leader in the ASP.NET community.

ASP.NET Request Validation

Cross-site scripting (XSS) attacks are a type of attack in which a variety of techniques are used to attempt to execute malicious script code by injecting it into form input, querystrings, or cookies. If an attacker can successfully inject script into one of these areas, and your code processes it without validating or filtering the data, the script code can be executed, exposing your application data and more. To protect against XSS attacks, the ASP.NET team added a new feature to ASP.NET 1.1 called Request Validation. Request Validation checks the query string, form input, and other input data for indications of HTML elements, script blocks, or other potentially dangerous data. If such data is found, an exception of type HttpRequestValidationException is thrown.

Request Validation is enabled by default, and can be disabled either at the application level using the validateRequest attribute of the <pages> configuration element:

<pages validateRequest="false"/>

or at the page level using the validateRequest attribute of the @ Page directive:

<% @ Page ValidateRequest="false" %>

It is highly recommended that you do not disable Request Validation unless you have first ensured that all input to the page or application is being appropriately validated and/or filtered for potentially dangerous data. Failure to heed this recommendation can result in data loss or other serious security problems.
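Request Validation rejects suspicious input outright; the complementary defense, when an application must accept and later redisplay user text, is to HTML-encode it on output (in ASP.NET, typically via Server.HtmlEncode). The idea can be sketched with Python's standard html module, purely as an illustration:

```python
import html

# Untrusted input that would execute if written into a page verbatim.
user_input = "<script>alert('gotcha')</script>"

# Encoding turns markup-significant characters into harmless entities,
# so the browser displays the text instead of executing it.
safe = html.escape(user_input)
print(safe)
# &lt;script&gt;alert(&#x27;gotcha&#x27;)&lt;/script&gt;
```

Encoding on output means even data that slipped past input validation is rendered as inert text rather than interpreted as markup or script.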

Mind Those Ports!

Internet applications communicate via the TCP/IP protocol, which is the basis for all communication between computers on the Internet. TCP/IP uses two pieces of information to find the endpoints for a given communication: the IP address, which is a unique number assigned to a given machine, and a TCP (or UDP) port number, which for most applications is a well-known number used consistently by all applications of that type. For example, all Web servers use TCP port 80 as the default port for HTTP (Web) communications. The well-known nature of these port numbers and the services found on them makes it much easier for Web servers and clients to find one another.

Other well-known ports include File Transfer Protocol (FTP, port 21), Simple Mail Transport Protocol (SMTP, port 25), and POP3 (port 110). The full list of well-known ports and the services assigned to them can be found at

Why is this important to the discussion of security and Web server setup? Because the very ease of discovery that well-known port numbers make possible also presents a security risk for the unwary. For example, on Windows 2000, the ports for services such as FTP, SMTP, and POP3 (among others) are left open by default. This gives hackers an engraved invitation to probe these ports and see if the software behind them is vulnerable to attack. Given the many vulnerabilities discovered in FTP, SMTP, and other Internet-based services, it is essential to close all ports that are not in use by your application.
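The probing described above is trivial to automate, which is exactly why unused open ports are so dangerous. As a rough sketch (Python standard library only; the helper name is invented for illustration), the following checks whether anything is listening on a given TCP port. It opens a local listener on an ephemeral port purely so the example is self-contained.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate by listening on an ephemeral port ourselves.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(is_port_open("127.0.0.1", port))  # True: the port is open
listener.close()
```

An attacker's port scanner is essentially this loop run against every well-known port, which is why every service you leave listening widens your attack surface.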

Closing ports can be accomplished in a number of ways, the most common being through the use of firewall software or a hardware router with firewall functionality, or using the IP Security Policy Management MMC snap-in in Windows 2000 or Windows XP. For many applications, the preferred solution is to set up a hardware firewall between the Web server and the Internet that allows traffic only on port 80 (HTTP), and optionally port 443 (HTTPS) if secure sockets Web traffic is required. (See “Using SSL to Protect Communications” on page 161 for more information on Secure Sockets Layer communication.) Then a second firewall is added between the Web server and the internal network that allows traffic only on ports necessary for the Web server to reach other servers (such as the database server), and also blocks ports 80 and 443 (and any other ports open in the other firewall). This method places the Web server in what is referred to as a DMZ (demilitarized zone), which is designed to prevent direct communication between the Internet and an internal network. This protects servers on the internal network from attack.


Whichever method you use to close unused ports on your Web server, it is imperative that you block traffic to any ports that your applications do not use. Remember, however, that the ports that remain open are still a security risk. Effective logging of server activity, frequent monitoring of logs, and prompt patching of vulnerabilities in software operating on the open ports are all important means of defending your server(s) from attacks.

A full discussion of packet filtering, routing, and IPSec management is beyond the scope of this chapter. Consult the manual for your firewall or router, or the Windows 2000/XP Help files, for more information on implementing these solutions.


Patching

Once your server is set up correctly and securely, and you’ve considered how the design of your application affects security, you might think you’re home free. Not so! Even the most securely configured server and securely designed application can be compromised by a lax attitude toward ongoing maintenance. One of the most important aspects of ongoing maintenance of servers and applications is staying on top of patches released by the vendors of any software you’re using.

In an ideal world, the software we use would be perfect from the start. However, the reality is that there are few, if any, programs that do not contain vulnerabilities. When these vulnerabilities are discovered, typically the vendor of the program will issue a patch designed to correct the problem. Unfortunately, many server administrators do not apply these patches consistently, leaving their servers vulnerable to attack.

This is inexcusable. Most patches can be applied easily, and there are many ways that you can be notified automatically about new ones for Microsoft software, including the following sources of information about patches:

  • The Windows Update site ( lets you analyze the updates available for a given system and determine which of them are currently installed. Windows 98 and later, as well as Windows 2000 and later, install a link to Windows Update on the Start menu by default.

  • The Microsoft Product Security Notification service lets you sign up for e-mail notification of vulnerabilities and available patches. This method is useful for administrators who need to keep track of patches without visiting multiple machines. You can sign up for this service at

  • The Microsoft Baseline Security Analyzer (MBSA) is a utility that will scan the local machine, or machines on the local network, for uninstalled patches for Microsoft Windows NT 4.0, Windows 2000, IIS 4.0 and IIS 5.0, Microsoft SQL Server 7.0 and SQL Server 2000 (and MSDE), and Microsoft Internet Explorer 5.01 and later. It also checks for numerous other potential security problems, such as weak passwords, installed IIS samples, and security issues related to Internet Explorer and Microsoft Office products. MBSA 1.1 is downloadable from

Scanning for Missing Patches with MBSA 1.1

Once installed, MBSA is quite easy to use. Simply follow these steps to start MBSA 1.1 and scan your local machine.

  1. Click the Start button, choose All Programs (Programs in Windows 2000), and then select Microsoft Baseline Security Analyzer. The MBSA welcome screen will be displayed, as shown in the following illustration.

  2. Click Scan A Computer. The resulting interface, shown in the following illustration, allows you to select a computer to scan, as well as to specify the types of vulnerabilities to scan for.

  3. Leave the settings as they are, and click Start Scan. When the scan is complete, MBSA will display a report of the vulnerabilities it has found, as shown in the illustration on the next page.

    A green check mark next to an entry indicates that no problems were found for that issue. A yellow X indicates a non-critical security warning. These are issues that warrant further investigation, but do not necessarily indicate a vulnerability. A red X indicates a critical security issue requiring action to correct an identified vulnerability. The other two icons you might see are a blue asterisk, indicating recommended best practices, and an i icon, indicating that additional information is available for a given issue. Also, each issue includes links indicating what was scanned, any additional details on the results, and most important, instructions for how to correct a vulnerability found by the tool.

  4. Locate an entry with a yellow or red X icon.

  5. Click the How To Correct This link and follow the instructions provided to correct the issue, including links to required patches, and so on.

  6. Repeat Step 5 for all warnings or critical issues.

  7. Once you have finished correcting issues (rebooting when necessary), start MBSA again, and rescan the machine to confirm that the issues have been corrected.


    Starting with MBSA 1.1, there is an issue with nonsecurity-related patches installed by the Windows Update functionality built into Windows XP, or from the Windows Update Web site. Because the XML file used by MBSA to identify the file versions to scan for does not include nonsecurity-related patches, you might run into a situation in which MBSA highlights a file whose version is higher than expected as a security warning because that file has been updated by Windows Update.

    For this reason, keep track of the patches that are installed from Windows Update so that you know when the warning given by MBSA is indicative of a potential problem rather than of a nonsecurity-related patch.

Access Control

Access control is the process of determining who can access resources on your server. This includes both authentication (determining the identity of a user making a request) and authorization (determining whether that user has permission to take the action requested). For an ASP.NET application, there are several different authentication and authorization methods. You’ll learn how to implement authentication and authorization in ASP.NET later in this chapter.

It is important not to forget that access control also includes physical access to the machine being secured. The best authentication, authorization, and password practices won’t help you a bit if someone can gain physical access to a machine and circumvent your security barriers, or simply damage the machine beyond repair. Any machine that has value to you should be secured physically, as well as via software, from unauthorized use.

Auditing and Logging

As mentioned earlier in this chapter, even the best security practices and strongest passwords cannot provide 100-percent protection against attacks. For this reason, it is imperative to enable auditing and logging on exposed servers.

Auditing is the process of monitoring certain activities, such as login attempts, for success or failure, and then logging the results of this monitoring to the Windows event log. Auditing (and proper monitoring of the logs) allows you to determine when someone is attempting to attack your machine, such as by trying numerous incorrect passwords. In this case, by auditing login failures, you would be able to see the failed login attempts in the Security event log and take appropriate action to foil the would-be intruder (such as disabling the account being attacked). As with many of the other settings discussed in this chapter, the auditing policy for a Windows 2000, Windows XP, or Windows Server 2003 machine can be set using a security template via the Security Configuration and Analysis MMC snap-in.

Logging is the process of writing information about the activities being performed on a machine to a known location for later review. For our purposes, the most important logging (after the logging of audit information) is done by IIS for the Web, FTP, and SMTP services. Logging is performed at the site level. The following steps are for enabling logging for a Web site, but the steps for FTP, SMTP, and other IIS services are similar.

Enable logging in IIS

  1. Open the Internet Information Services MMC snap-in (known as the Internet Services Manager MMC snap-in in Windows 2000) by clicking Start, and then clicking Control Panel (in Windows 2000, the Control Panel is under the Settings menu selection). In Windows XP, switch to classic view, if that is not already selected. In the Control Panel folder, double-click the Administrative Tools icon. In the Administrative Tools folder, double-click the Internet Information Services icon (the Internet Services Manager icon in Windows 2000). In Windows Server 2003, click Start, choose Administrative Tools, and then select Internet Information Services (IIS) Manager. If you are not logged in as an administrator, you will need to right-click the icon and select Run As, and then specify an administrative account.

  2. Select the server you want to manage, and expand the tree to find the site you want to manage. Right-click the desired site and select Properties. The following dialog box will be displayed (the illustration is from IIS 6.0 on Windows Server 2003).

  3. In the Web Site tab of the <sitename> Properties dialog box, ensure that the Enable Logging check box is checked.

  4. Modify the format of the logs by using the Active log format drop-down list. You can click the Properties button to modify where logs are kept, how frequently a new log file is created, and the specific information that is logged.


    It is considered a good practice to modify the location of the Web server logs from their default of %WinDir%\System32\LogFiles because that makes it more difficult for hackers who gain access to your system to cover their tracks. If the log files are in their default location, hackers can more easily alter them or delete them to hide their activity. Make sure to set appropriate ACLs on the log file location, since it might still be possible for an attacker to locate these files.

Using SSL to Protect Communications

By default, information sent via HTTP requests and responses is sent as clear text. This means that someone could capture the packets that make up these requests and responses and then recreate them, including any data passed from form fields on a submitted page. If such a form contained sensitive data, such as passwords or credit card numbers, this data could be stolen.

Encryption is an important tool for keeping this kind of information secure. The most commonly used form of encryption in Web applications is the Secure Sockets Layer (SSL) protocol. Sites requiring secure communications between the Web server and the browser use SSL to create and exchange a key used to encrypt communications between the client and server, thereby helping to protect this information from prying eyes. This is typically done by e-commerce sites to protect credit card information, for example. SSL can also be useful for protecting other information, including SessionID cookies and login information when using basic authentication, or ASP.NET Forms-based authentication. (See “Enabling Authentication” on page 165 for more information on basic authentication.)

By default, SSL communications occur on port 443 (as opposed to non-SSL Web communications, which are on port 80 by default), using the https:// prefix for URLs that use SSL. Enabling SSL for your Web server requires obtaining a server certificate and binding that certificate to the Web sites on which you want to use SSL. Certificates are issued by several companies, including VeriSign (, that are presumed to be trustworthy.
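On the client side of this exchange, modern TLS libraries verify the server’s certificate chain and host name by default and refuse the connection when either check fails, which is why the common-name requirement discussed below matters. As a minimal sketch of those verification defaults (Python’s standard ssl module, shown purely for illustration):

```python
import ssl

# A default client context verifies the server's certificate against
# trusted certificate authorities and checks that the certificate's
# name matches the host being contacted.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: CA validation enforced
print(ctx.check_hostname)                    # True: host name must match
```

A certificate whose common name does not match the requested URL fails the second check, producing exactly the browser warning described in the note below the certificate-request steps.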

Request an SSL certificate

  1. Open Internet Information Services (Internet Services Manager in Windows 2000). (If you are not logged in as an administrator, you will need to use the Run As feature of Windows 2000 or Windows XP to run the Internet Services Manager using an administrative account.)

  2. Right-click the site you want to protect via SSL and then select Properties.

  3. In the Directory Security tab, shown in the illustration on the next page, click the Server Certificate button. This will start the Web Server Certificate Wizard.

  4. Use the Web Server Certificate Wizard to create a new certificate request. This method is useful if you want to create a certificate request to send to a third-party certificate authority, such as VeriSign or Thawte, who will verify the information in the request and send you a server certificate. Alternatively, if you have Certificate Services installed on a machine on your network, you can use that service to create a certificate. (Note that your clients must have your Certificate Services CA listed in their browser as a trusted certificate authority in order to use this method.) See the Certificate Services documentation for more information on creating and installing your own certificates.


    No matter which method you use to generate a certificate request, the common name of the certificate (identified by the Certificate Services Web request pages as Name) must match the fully qualified domain name of the site the certificate will be installed on. Otherwise, users will get a warning that the server’s certificate is valid but the name on the certificate doesn’t match the requested URL.

Once you’ve received the response from the certificate authority containing your certificate, follow these steps to install the certificate for use in IIS.

Install an SSL certificate

  1. Locate the certificate file you received from the certificate authority. Right-click the file and select Install Certificate.

  2. Open Internet Information Services (or Internet Services Manager on Windows 2000). (If you are not logged in as an administrator, you will need to use the Run As feature of Windows 2000 or Windows XP to run the Internet Services Manager using an administrative account.)

  3. Right-click the site you want to protect via SSL and select Properties.

  4. On the Directory Security tab, click the Server Certificate button. This will again start the Web Server Certificate Wizard.

  5. Click Next to advance to the second page of the wizard.

  6. On the Server Certificate page of the wizard, shown in the following illustration (IIS 6.0 version shown), select Assign An Existing Certificate and then click Next.

  7. The Available Certificates page should list all of the certificates on the current machine, including the certificate you just installed. Select the desired certificate and then click Next.

  8. Review the Certificate Summary information to ensure you are assigning the correct certificate. Then click Next.

  9. Click Finish to complete the wizard.

Once you’ve installed the certificate for a given site, you can use SSL to encrypt communications on any of the virtual directories under that site by having users request pages with https:// rather than http://. To ensure that pages cannot be viewed without SSL, however, you must require SSL for the page or directory you want to protect.

Require SSL

  1. Open Internet Information Services (or Internet Services Manager if you are using Windows 2000). (If you are not logged in as an administrator, you will need to use the Run As feature of Windows 2000 or Windows XP to run the Internet Services Manager using an administrative account.)

  2. Right-click the site, virtual directory, or file you want to protect and then select Properties.

  3. On the Directory Security or File Security tab, in the Secure Communications section, click the Edit button, which should now be available. This will open the Secure Communications dialog box, shown in the following illustration.

  4. In the Secure Communications dialog box, check the Require Secure Channel (SSL) check box. You could check the Require 128-bit Encryption check box for greater security, but this requires that the client browser support 128-bit encryption.

  5. Click OK to close the Secure Communications dialog box, and then click OK again to close the Properties dialog box for the site, directory, or file you are protecting. The resource should now be accessible only by using https://.


Using SSL encryption is an important tool for protecting information sent from the browser to the server, but it is only one of many tools you need to use to secure your application. SSL alone is not enough. If the only security tool you use is SSL, your site is almost guaranteed to be vulnerable in some other area.
