
Design

As explained in the "Problem" section, there are many issues to solve and a few critical design decisions that must be made before proceeding with the rest of the site, because they create the underpinnings used for all further development. It would be too late to design an architecture to support multiple data stores once you've already developed half the site. Similarly, you couldn't decide on a strategy for handling configuration settings, such as database connection strings, after you've written an entire module that uses the database. If you ignored these issues and started coding the main modules first, you would end up with a poor design that would be difficult to correct later. As in the rest of this book, many of the new ASP.NET 2.0 classes and controls are used as they apply to each section of code we develop, and sometimes these new features provide a complete, ready-to-go solution to a particular problem.

Designing a Layered Infrastructure

If you've done any software development in recent years, you should be familiar with multi-tier software design (also called n-tier design). To recap it in a few words: it divides the functionality, components, and code for a project (not just a web-based project, but also WinForms and other types of projects) into separate tiers. There are generally four of them:

  • Data Store: Where the data resides. This can be a relational database, an XML file, a text file, or some other proprietary storage system.

  • Data Access Layer: The code that takes care of retrieving and manipulating the raw data saved in the data store.

  • Business Logic Layer: The code that takes the data retrieved by the Data Access tier and exposes it to the client in a more abstracted and intuitive way, hiding low-level details such as the data store's schema, and adding all the validation logic that ensures that the input is safe and consistent.

  • Presentation Layer (User Interface): The code that defines what a user should see on the screen, including formatted data and system navigation menus. This layer will be designed to operate inside a web browser in the case of ASP.NET, but some applications might use Windows Forms at this layer.

Depending on the size of the projects, you might have additional tiers, or some tiers may be merged together. For example, in the case of very small projects, the Data Access and Business tiers may be merged together so that a single component takes care of retrieving the data and exposing it in the UI in a more accessible way.

When discussing multi-tier architecture and design, the terms tier and layer are frequently used interchangeably, but there's actually a subtle difference between them: Tiers indicate a physical separation of components, which may mean different assemblies (DLL, EXE, or other file types if the project is not all based on .NET) on the same computer or on multiple computers, while layers refer to a logical separation of components, such as having distinct classes and namespaces for the DAL, BLL, and UI code. Therefore, tiers are about physical separation and units of deployment, while layers are about logical separation and units of design.

In the first edition of this book we created many separate assemblies for the site, and each assembly was compiled by a separate Visual Studio project added to the main solution. While that's fine for large enterprise sites, it is often overkill for small and mid-size sites, because it's more difficult to handle many Visual Studio projects (we had dozens!) and the relationships and dependencies among them. In many small to mid-size sites, a good multi-layered design is usually all you need. ASP.NET 2.0 adds another incentive to use a simplified deployment model: Any code file you put into a special App_Code folder (and its subfolders) under your main application folder will be automatically compiled at runtime and referenced by the rest of the application. This automatic and on-demand compilation makes it easier and faster to test and debug your pages, because you can change something in a source code file while the site is running (even if the debugger is attached) and the changes will be automatically compiled on the next request, much like the debugger's edit-and-continue feature.

While all the older options for compiling assemblies are still supported, this new option makes it easier to work with ASP.NET 2.0 sites at development time, and makes deployment easier as well. The web site we're developing in this book will have the DAL, BLL, and UI code together in a single project. If you prefer to have greater separation (for example, because you want to put the DAL code into a dedicated server separate from the web server, for security reasons), you can still create separate projects, move the DAL files into those projects, and compile them to produce separate deployable assemblies, but I won't cover that option in this book.

Although we'll use only one project, we'll use different folders to organize the different layers' files on the file system (App_Code/DAL, App_Code/BLL, and the root folder with its subfolders), and we'll use different namespaces to help organize the classes logically in the site's object model (MB.TheBeerHouse.DAL, MB.TheBeerHouse.BLL, and MB.TheBeerHouse.UI). In other words, we'll have various kinds of separation despite the fact that it will all be part of the same project.

Choosing a Data Store

In this book you'll learn how to develop a flexible data access layer that enables you to support different data stores and quickly switch between them without any changes to the other layers. However, it will only be easy to switch between data stores after you've developed the low-level data storage structures and a corresponding DAL for each data store you want to support. Therefore, supporting more than one data store will still involve considerable development effort, but the multi-tier design we're using should ease the pain if that's the route you take.

What kinds of data stores would be possible for this kind of application? We could use plain XML files if we expected a fairly static site that doesn't change often, or we might use an Access database. Access is better than XML files in many ways, but it's mostly a desktop database and would not be appropriate for situations where we might have even a few concurrent users, or a large database. It doesn't scale at all, and because of this its use for web sites is strongly discouraged. Moving up from there, we could consider any of the modern RDBMSs (Relational Database Management Systems), such as SQL Server, Oracle, DB2, MySQL, PostgreSQL, and so on. Any of these would be a great choice for our needs, but for the purposes of this book we'll choose only one data store as a target for our sample site.

In the real world your customer may want you to use a specific RDBMS, or may want you to change to a different RDBMS after the site has been deployed. A customer may have reasons to insist on a specific RDBMS: they may already have corporate knowledge of a given platform/engine that will be needed to maintain the site after you've sold it to them, they may already have an installed and licensed copy of a particular RDBMS and not want to purchase a license for different software, or they may simply have a preference. In such cases you might try to explain your reasons for choosing something else, if you think that would be best for them (this is what a consultant is supposed to do, after all), but the customer may insist that their choice be honored.

Most .NET software developers choose Microsoft's SQL Server as their RDBMS, for many reasons. These might include the excellent integration of SQL Server tools in Visual Studio, or the fact that it's easier to buy your IDE and RDBMS from the same company (you may have an MSDN software subscription that includes everything), but the reasons I personally favor are the relatively low cost and high performance of SQL Server running on Windows, coupled with easy administration. Having decided to use SQL Server as our data store, we now have to select one particular version. This edition of the book revolves around Visual Studio 2005, so it makes sense to couple it with the newest version of SQL Server: SQL Server 2005. Microsoft has released a free variation of SQL Server 2005 that can be used on developers' computers and on production servers, called Microsoft SQL Server 2005 Express Edition. It is the 2005 replacement for the older free Microsoft Data Engine (MSDE), but with fewer constraints (no worker-thread limit), a new GUI administration tool, and many new features inside the engine.

I will assume that you will use either SQL Server 2005 Express Edition or one of the full editions of SQL Server 2005 (Workgroup, Standard, or Enterprise) for this sample web site. For our purposes, these editions are functionally equivalent, with the main differences relating to the GUI administration tools rather than the underlying RDBMS engine. All editions of SQL Server 2005 integrate well with Visual Studio 2005, whose Server Explorer tool enables developers to browse registered servers and databases; retrieve all the details of a database's schema; and add, delete, and modify tables, records, stored procedures, functions, triggers, views, types, and relationships. In fact, you can develop the entire database for this web site from inside Visual Studio 2005, without using any of the SQL Server administration tools! The new diagramming tool in Visual Studio 2005 even enables us to visually relate tables and set constraints and foreign key relationships without leaving the diagram.

Some of the new features of the 2005 version include tight integration with the .NET runtime (SQL Server acts as a host for the CLR), which enables you to write UDFs (user-defined functions) and UDTs (user-defined types) in C# or VB.NET code; the introduction of XML as a native data type (which means you can choose XML as the type for a column, and use special functions and operators to run very fast queries and filters against this column and its index); new functions (such as ROW_NUMBER, which makes it very easy to implement custom pagination, as you will see in Chapter 5); the Service Broker technology, which enables building message-based asynchronous database applications; much finer-grained security permissions; and much more. The limitations of the Express edition are that it supports only one CPU and 1GB of RAM, and the maximum database size is 4GB. Advanced features such as database partitioning, database mirroring, notification services, and full-text search are also not supported. However, for most small to mid-size sites, SQL Server 2005 Express is an acceptable choice. You can start with it, and should you need more performance on high-end servers, you can upgrade to one of the other editions (Workgroup, Standard, or Enterprise) with no code changes required.

A final advantage of the Express edition that I want to highlight (and one that I'm sure will make many developers happy) is the new XCopy deployment: You can just put the .mdf database file in a local subfolder (there's a special /App_Data folder for web sites) and attach it dynamically by specifying the file path in the connection string, using a new attribute named AttachDBFilename. This makes it possible to XCopy the site's whole folder structure to the remote server, and you're ready to run without having to do any configuration work on the database server!
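For example, a connection string using this attribute might look like the following (you'll see it again in the site's web.config later in this chapter):

Data Source=.\SQLExpress;Integrated Security=True;User Instance=True;AttachDBFilename=|DataDirectory|AspNetDB.mdf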

Designing the Data Access Layer

The Data Access Layer (DAL) is the code that executes queries against the database to retrieve data, and to update, insert, and delete data. It is the code that's closest to the database, and it must know all the database details: the schema of the tables, the names of the fields, stored procedures, views, and so on. You should keep database-specific code separated from your site's pages for a number of reasons:

  • The developer who builds the user interface (i.e., the pages and user controls) may not be the same developer who writes the data access code. In fact, for mid- to large-size sites, they are usually different people. The UI developer may know little about the database and still be able to build the user interface for it, because all the details are wrapped in separate objects that provide a high-level abstraction of the table, stored procedure, and field names, and of the SQL used to work with them.

  • Some queries that retrieve data will typically be used by several different pages. If you put them directly into the pages themselves, and later have to change a query to add some fields or change the sorting, you'd have to review all your code and find every place where the query is used. If, instead, the data access code is contained in common DAL classes, you'd only need to modify those, and the pages calling them would remain untouched.

  • Hard-coded queries inside web pages would make it extremely difficult to migrate to a new RDBMS, or to support more than one RDBMS.

Using the Provider Model Design Pattern to Support Multiple Data Stores

One important thing to consider for your site is that you may need to support different data stores. Our sample site is pretty generic, and as such could easily be adapted for different pubs, bars, and other places. However, as I said before, different clients may have different constraints on which data store to use, and may force you to install and configure the site to work with Oracle or MySQL instead of the SQL Server 2005 database you initially chose. If you don't plan for this possibility up front, you'll have a great deal of trouble trying to retrofit your code later. Different RDBMSs support different functionality and SQL dialects; stored procedures and the parameters of SQL statements are passed in with different syntax; the data types are different; and so on. For a real-world site of medium complexity, it is impossible to have common data access code that works the same for all possible RDBMSs.

If you tried to write a common code base using the OleDb provider, you would soon start filling your code with an endless number of "if I'm using this database do this, else if I'm targeting this other database do something else"-style blocks. Even if you could put up with the mess in your code, that approach may not be workable in some cases. Say, for example, that you're developing a web site that will be sold by other software companies, which would then integrate your code into a larger solution. You don't know what they'll be using as a data store, because you're developing a commercial product that could be sold to thousands of clients. As such, you can't implement support for all possible data stores, and you can't give them your source code so that they can customize it to add support for their own database, because you want to protect your investment. This is exactly the situation with many of the new ASP.NET 2.0 modules, such as membership, profile, personalization, and session storage (you'll be introduced to some of these in the next chapter). Microsoft provided a built-in DAL for the SQL Server RDBMS and a few other data stores, but not for every possible storage medium; instead, they made it possible for you to add that support yourself through the provider model design pattern.

Instead of writing the DAL class directly, you first write an abstract base class that defines the public interface of the class (the signatures of the data access CRUD methods), and the implementation of some helper methods if necessary. The real data access code goes into secondary classes that inherit from the base class and provide the concrete implementation of its abstract methods. These classes are called providers, and each is usually specific to one type of data store; when you implement one of them, you don't have to worry about compatibility with any other data store.

Different providers are placed in different folders (e.g., App_Code/DAL/SqlClient) and namespaces (e.g., MB.TheBeerHouse.DAL.SqlClient) for organizational purposes. The base class has an Instance static property (or a GetInstance method, if you prefer) that creates and returns an instance of the provider class according to what's specified in the configuration file. This property/method is used by the business classes to get a reference to the concrete DAL object to use to retrieve/modify data. The business classes, in turn, are used by the user interface layer, which never accesses the lower-level DAL classes directly. This pattern enables the developer to write superfast code, because you can take advantage of all optimizations available for the specific data store. The code is also much cleaner, because you don't have to write confusing IF...ELSE blocks to manage all the differences of various RDBMSs. It also allows you to compile the whole site and give your customer the compiled assemblies: If they need to target a different data store from those you have implemented, then they can create their own providers that inherit from your base abstract DAL classes, and they can point to them in the configuration file.
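To make the pattern concrete, here is a minimal sketch (the class name, method signatures, and configuration key are illustrative, not the site's actual code):

// the abstract base class that defines the DAL's public interface
public abstract class ArticlesProvider
{
   private static ArticlesProvider _instance = null;

   // returns the concrete provider whose type name is read from configuration
   public static ArticlesProvider Instance
   {
      get
      {
         if (_instance == null)
         {
            string typeName =
               ConfigurationManager.AppSettings["articlesProviderType"];
            _instance = (ArticlesProvider)Activator.CreateInstance(
               Type.GetType(typeName));
         }
         return _instance;
      }
   }

   // abstract CRUD methods that every concrete provider must implement
   public abstract int GetArticleCount();
   public abstract bool DeleteArticle(int articleId);
}

// a SQL Server-specific provider, living in MB.TheBeerHouse.DAL.SqlClient
public class SqlArticlesProvider : ArticlesProvider
{
   public override int GetArticleCount()
   {
      // SQL Server-specific data access code goes here
      return 0;
   }

   public override bool DeleteArticle(int articleId)
   {
      // SQL Server-specific data access code goes here
      return true;
   }
}

Switching to another data store then means writing one new provider class and changing a configuration setting, with no changes to the BLL or UI.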

You may argue that if you implement this pattern, you'll have to re-implement the whole DAL for each data store. That's mostly true, of course. The question becomes whether you want to re-implement one DAL module or all the modules of your site. Also consider that you may only need this pattern for those methods that really must be implemented differently for different data stores. Simpler methods that would work fine on all data stores without any modification could instead use the OleDb provider, with common code in the DAL base class; because all the providers inherit from that class, they don't need to re-implement those simple methods. You can make such methods virtual to allow new providers to override the functionality if they need to. Figure 3-1 provides a graphical representation of the relationship between the user interface, BLL, and DAL layers and the data stores.

Figure 3-1

We'll talk about this pattern further in subsequent chapters. In Chapter 4 you'll see how the ASP.NET 2.0 team implemented this pattern for the new authentication, membership, and profiling modules. In Chapter 5 you'll design and implement a DAL based on this pattern from scratch, to support the article management module.

The Eternal Question: DataSet or Custom Entities?

When the BLL classes call methods in the DAL to retrieve some data, how do you think they should receive it? As a DataSet/DataTable, or as a collection of custom entity class objects that wrap the fields read from the data store? If there were an award for the lengthiest and most hotly contested debate among .NET architects and developers, this question would undoubtedly win first prize. Just search Google for something like "DataSet vs. custom entities", "DataSet vs. custom collections", or "DataSet vs. domain objects", and you will find a lot of opinions on the subject! Both approaches have their pros and cons, and both have their place in different architectures.

If you choose DataSets/DataTables for passing data between the DAL and BLL layers, you will need to access the data in the BLL using ADO.NET methods. But if you choose custom entities, all the data will be wrapped in custom classes and collections of classes, so you would access data in the BLL in a more natural manner that has been customized for the particular data in question.

Most people agree that DataSets/DataTables are clearly the best choice for desktop-based smart-client applications, but the case is not as strong when it comes to scalable, high-performance web sites. When I say DataSet, in reality I mean a typed DataSet, because the plain untyped DataSet has many disadvantages: It's just too easy to mistype a table, field, or relationship name, or the type of a field when you set it, and thus spend a lot of time debugging and fixing these insidious errors. Typed DataSets are easier to use because you get IntelliSense on field names, they provide built-in sorting and filtering facilities, they fully support data binding for both WinForms and ASP.NET applications, and they have great integration with Visual Studio's IDE. DataSets and DataTables are serializable (DataTables did not support serialization in .NET 1.x; you had to add them to a DataSet in order to serialize them), and .NET 2.0 now supports true binary serialization, unlike .NET 1.x, in which they were always serialized in XML format even when you used a binary formatter (it only gave you a binary header; the real serialized content was XML). Finally, DataSets can easily support different strategies for handling concurrency, because a DataTable's rows store both the original value read from the data store and the current value (which the user may have modified).

The disadvantages of DataSets/DataTables fall into three categories: performance and scalability limitations, representation of data, and business rule validation. If you just need to pass a single row of data, you still have to create and pass an entire DataTable/DataSet, and because a DataSet is an in-memory mini-database, that's a fair amount of overhead. It is tightly bound to the relational, tabular model of the RDBMS, without a clear and customizable object-oriented representation of the data. Despite the nice integration with the IDE, every time the database schema changes a little (a field is renamed, added, or deleted), you have to rebuild the typed DataSet, which is harder than modifying a custom collection class. But the thing .NET developers lament the most is that you can't easily add your own business and validation logic to a DataSet, so you have a lot of ugly code to write to enforce all those business rules on both new and current records' values before they are persisted to the database, or before some other statement is run. I've seen some libraries that let you add information to the DataSet's ExtendedProperties collection, or to the DataSet's XSD schema, to associate simple validation rules with tables and fields: things like ensuring that a string is at least 10 characters long or matches a given pattern, that an integer or date value is inside a given range, and the like. This is all useful, but in the real world there is much more to do than just input validation. You also have to write code for constraint checks and validation that ensure a value, or a record, is valid and meaningful within its context. For example, you can't approve an order if the ordered item is out of stock, and you can't complete the transaction if the credit card information wasn't validated by an external payment gateway or if the customer is marked in your database as a bad risk. These domain validations require custom business logic, which can't be automated by a generic framework or library.

Having made the case against using DataSets to pass data from the BLL to the UI, this doesn't mean the DataSet will be totally useless in our design. In fact, we'll occasionally pass data from the DAL to the BLL using a DataSet, but not from the BLL to the UI. We need to use ADO.NET's classes within the DAL because that's where we interact with the database, so sometimes it makes sense to utilize a DataSet/DataTable within the DAL layer, and once we have a populated DataSet/DataTable there would be no additional overhead to pass it up to the BLL. However, it's the BLL that will add all the validation logic, and the BLL can translate data from the general-purpose DataSet/DataTable into a customized collection of business objects as it does the validation; and these custom collections can be passed up to the UI, where they will present a powerful set of data-specific OOP classes that will be data-bindable and simple for the UI developer to use.

The approach of using custom collections of business entity objects is not new to .NET; it has been around since long before .NET was developed. A custom entity object is a class that wraps the data retrieved from the database in an object-oriented way, abstracting the data store's schema and other details. If the DAL and BLL are two separate layers (which is usually the case for mid- to large-size projects), there are two levels at which these custom classes can be used: between the DAL and BLL, and between the BLL and UI. In the first case the entity classes are very simple: they typically just wrap the data retrieved from the DB, with a one-to-one relationship with the DB tables (or the DB views, which may join multiple tables), and without methods for inserting, updating, deleting, or retrieving the data. In this context they are used merely as containers for data transported between layers. In the second case the classes are more complex: They wrap the data, but also have properties that reference parent or child objects, and instance methods to manipulate the data. These classes are often called domain objects rather than mere entity classes, and one instance should completely represent an element of the domain (an order, an employee, a product, a news article, etc.). Creating entity classes and domain objects is definitely more complex and requires more work than using a typed DataSet automatically created by the designer, which can sort and filter right out of the box, but they better abstract the database's schema, and they're more elegant, easier to use, and more intuitive for the UI developer. Domain objects are also easier for their developer to maintain, faster to load from the database, and lighter in memory usage, because you can load only the records you really need by utilizing patterns such as lazy load, which loads data only when it's actually needed by the user, instead of loading everything up front with a single query. A custom serialization mechanism can also be implemented, so that only the data you want to save is saved, with the exact schema you want it to use. It's also feasible to add custom validation logic to domain objects, because the data is wrapped in properties that you can extend with as much custom logic as you need.
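To make the distinction concrete, here is a minimal sketch of both kinds of classes, using the Customer example that appears later in this chapter (the validation and persistence details are simplified):

// DAL entity class: a plain, strongly typed container with a
// one-to-one mapping to the Customers table
public class CustomerDetails
{
   public int ID;
   public string FirstName;
   public string LastName;
}

// BLL domain object: wraps the same data but adds validation,
// calculated properties, and methods to manipulate the data
public class Customer
{
   private int _id;
   private string _firstName;
   private string _lastName;

   public int ID
   {
      get { return _id; }
   }

   public string FirstName
   {
      get { return _firstName; }
      set
      {
         // custom validation logic lives in the property setters
         if (string.IsNullOrEmpty(value))
            throw new ArgumentException("FirstName cannot be empty.");
         _firstName = value;
      }
   }

   // LastName is implemented the same way as FirstName

   // calculated read-only property
   public string CompleteName
   {
      get { return _firstName + " " + _lastName; }
   }

   public bool Update()
   {
      // validate, then call the DAL provider to persist the changes
      return true;
   }
}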

My personal conclusion is that, even though I've seen DataSets used in some scalable enterprise applications (even ones with tens of thousands of users), custom objects are more flexible and elegant, and I generally prefer them over DataSets in web applications, especially between the BLL and UI layers. The use of custom objects will require you to spend a little extra time and effort designing and coding the BLL in order to get these advantages. One of the main goals of this book is to teach you how to do these kinds of things, and to show you many best practices in the process. Therefore, this book presents a complete layered project with a separate DAL and BLL, using custom entity classes and collections to pass data from the DAL to the BLL, and custom domain objects to return data from the BLL and support data retrieval and modification that can easily be used in the UI layer.

Note

Some people have developed ORM (Object-Relational Mapper) utilities, which can make it easier to map database tables, views, and other relational objects to OOP classes. They can dynamically explore the database's schema and create the wrapper custom entity and domain object classes for you. Some of them work at design time to automatically produce C# classes you can compile into your application, and others work at runtime to perform the mapping on the fly. But sometimes they don't work as well as we'd like them to, and there can be a considerable learning curve in setting up these tools to work with your particular database and usage requirements. Some of the most popular tools for .NET developers are NHibernate (nhibernate.sourceforge.net [open source]), EntityBroker (www.thona-consulting.com/content/products/entitybroker.aspx [free community version and commercial versions available]), and LLBLGen Pro (www.llblgen.com [commercial]). A simpler tool for small to mid-size systems is Paul Wilson's ORMapper (www.ormapper.net). I decided not to use these tools in this book because I generally prefer a custom-designed mapping solution specific to each application, but these tools may be your best alternative in some cases. If you have been contracted to develop a new system that will later be maintained by corporate developers, you'll have to get their buy-in before selecting any third-party tools, because some companies have policies that prohibit the use of "unapproved" tools they haven't officially sanctioned.

Stored Procedures or SQL Text Queries?

There are two different design models for database access. One is to store all your SQL statements inside stored procedures and use the SqlCommand object only to execute those stored procedures by name; the other is to store your SQL statements as text to be executed directly through SqlCommand objects. Which approach is best has been strongly debated for years. We already looked at the question of DataSets vs. entity objects, which I suggested was perhaps the most widely debated question; the second most widely debated is likely "Should I use stored procedures or SQL text queries inside my DAL?" As with the first question, different people will give you different answers and opinions, and both options have their pros and cons.

If you read a lot, you'll find many sources saying that stored procedures provide better performance than SQL text queries because they are "pre-compiled" (not compiled to a binary program, but parsed and used to generate an execution plan) and cached in memory by SQL Server. However, many articles and books miss the fact that this happens for SQL text queries as well (one of the enhancements added to SQL Server some years ago), as long as you use parameterized SQL statements (which you should definitely do in all cases, to avoid SQL-injection attacks). Therefore, the performance of stored procedures and SQL text queries is similar in most cases. The name of a stored procedure is shorter and invariant compared to a SQL statement, so it may be easier for the SQL Server engine to find the cached execution plan, but this shouldn't make a significant difference in most cases.
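For example, the execution plan of a parameterized query like the following is cached and reused by SQL Server much as a stored procedure's plan would be (the table name is illustrative, and connString and lastName are assumed to be defined elsewhere):

using (SqlConnection cn = new SqlConnection(connString))
{
   SqlCommand cmd = new SqlCommand(
      "SELECT * FROM Customers WHERE LastName = @LastName", cn);
   // the value travels as a parameter and is never concatenated into the
   // SQL text, which also protects against SQL-injection attacks
   cmd.Parameters.AddWithValue("@LastName", lastName);
   cn.Open();
   SqlDataReader reader = cmd.ExecuteReader();
   // ...read the results here...
}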

Stored procedures allow finer-grained access control over the data, because you can grant your web SQL user account permission to execute only certain procedures, without giving it full control over the underlying tables. Even without stored procedures, you can give a user the INSERT permission (for example) on one table but not on another, and you can limit access to some table fields by creating a view that doesn't include them. Still, stored procedures go further because they enable you to add row-level security: If the user account has no access to the underlying table, but only to a stored procedure that retrieves rows matching a specific filter, then there's no way the user can retrieve or work with the other rows. However, row-level security is admittedly a pretty rare requirement.

Another argument in favor of stored procedures is that they usually include a batch of statements, not just one. If you had to execute them with SQL text queries, you would have to execute multiple separate commands, which would mean much more network traffic for sending data back and forth. When you call a stored procedure, the network traffic is minimized because you call a single procedure with a short name, not dozens of SQL statements, each of which may be hundreds of characters long.

Still another advantage of stored procedures is that they provide a further layering of code. With them, the DAL code merely connects to the database, creates a command that references a stored procedure, sets its parameters, and executes the procedure; there's no SQL code in the DAL itself. This is often desirable because it allows us to deploy the compiled DAL and later adjust something in the stored procedures' SQL code without any impact on the DAL: you don't have to recompile or redistribute it. Because I'm not a DBA, I'm not familiar with all the rules, tricks, and best practices of writing optimal queries against specific tables, so sometimes I like to have a DBA review my SQL code before deploying a final release. If I use stored procedures, it's easy to have a DBA check, review, and fine-tune them without requiring me to change my DAL code. She may not even know what C# is (many DBAs are not programmers), and if that's the case it's better if she focuses only on the T-SQL code and doesn't try to edit the DAL's C# code.

However, stored procedures are not always the best solution. The biggest advantage of using SQL text queries with SqlCommands, instead of stored procedures, is that they are more flexible. Sometimes you want to implement advanced search and filtering forms in the UI, whereby the user can partially fill in some data in text boxes to filter the content against some database fields and order the results by other fields. Often, the results must also support pagination: if your results contain many rows, you don't want to kill performance and scalability, or bore the user by making her wait a long time to download hundreds of records. The query supporting all these features would differ according to the fields the user filled in and the sorting option she chose in the UI. If you had to build the query from inside a stored procedure, you would need to write a lot of IF...ELSE statements, which could become unmanageable with many optional fields. By generating the SQL text dynamically at runtime based on the UI selections, and then executing it through a SqlCommand, the overall code is far more elegant and maintainable than any attempt to do the same with stored procedures.
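Here's a sketch of this dynamic approach (the field names and filter variables are illustrative). Note that the filter values still travel as parameters, and the sort expression should come from a fixed list of allowed fields rather than from raw user input:

SqlCommand cmd = new SqlCommand();
StringBuilder sql = new StringBuilder("SELECT * FROM Customers WHERE 1=1");
if (!string.IsNullOrEmpty(lastNameFilter))
{
   sql.Append(" AND LastName LIKE @LastName");
   cmd.Parameters.AddWithValue("@LastName", lastNameFilter + "%");
}
if (!string.IsNullOrEmpty(cityFilter))
{
   sql.Append(" AND City = @City");
   cmd.Parameters.AddWithValue("@City", cityFilter);
}
sql.Append(" ORDER BY " + sortField);   // sortField comes from a fixed list
cmd.CommandText = sql.ToString();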

One concern about using stored procedures instead of SQL text is that they bind your code tightly to the SQL Server RDBMS, so if you ever need to support another data store you would have to rewrite most of the stored procedure code. That's because the various RDBMSs have very different stored procedure languages, even though their SQL dialects are usually pretty close to the standard ANSI SQL specifications (there are no widely accepted standards for stored procedure languages; they are always highly proprietary). However, this concern only affects the developer who is trying to build a single DAL using only standard SQL and other portable features that work with all data stores. I've already explained in the previous section why I don't like this approach: The code ends up being very complex anyway, with a lot of IF...ELSE blocks at the C# level to handle all the differences in data types and other SQL constructs; and even with this huge effort you won't achieve the best performance possible, because portable code can't take advantage of advanced RDBMS-specific features and functions. Instead of this "one size fits all" approach to portability, I prefer the provider model design pattern and the creation of different RDBMS-specific DAL classes, which will make your application run faster and ease maintenance. The provider model design is not necessarily the easiest option to develop if you have many RDBMSs to support, but its advantages outweigh its disadvantages, even in that case.

The result of all these arguments is that I generally use stored procedures for retrieving and working with database data, except in those cases where the query would be too dynamic and complex for a stored procedure; in those situations I use dynamically built SQL text queries (with SqlCommand and SqlParameters, of course).

A Base Class for All Data Access Classes

For those of you not familiar with the concept, an abstract base class is an OOP class that provides common functionality needed by other classes. Because it's abstract, you don't need to provide an implementation for each method, but you can provide some method implementations; if you make them virtual, they can be overridden in your subclasses (the child classes that descend from this class). An alternative to an abstract base class is a C# interface that specifies the common signature required of a provider class, but an interface doesn't allow you to provide any method implementations at this level. C# interfaces are the best choice if you want the DAL to be coded against the interface, using only the properties and methods it exposes, which allows various DAL implementations to be swapped in at runtime (you may have heard the term "plug-in" for this kind of functionality).
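For comparison, an interface-based version of the same contract would look like the following (names are illustrative); note that no implementation can be shared at this level:

public interface ICustomerDataAccess
{
   // only signatures here: every implementer must provide all the code itself
   List<CustomerDetails> GetCustomers();
   bool DeleteCustomer(int customerId);
}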

In our case we'll use the abstract class, which gives us vertical reusability in the inheritance tree, whereas interfaces would have given us laterally replaceable functionality across the tree (polymorphism). If you're not familiar with these OOP terms, I urge you to consult a good book on C#, because these are powerful concepts you need to master in order to create the best architectures for particular systems.

Each site module will have its own abstract DAL base class, and one or more provider classes that provide the concrete RDBMS-specific implementation. However, all abstract base providers will inherit from a base class themselves. This class, generally called DataAccess, provides a few properties that wrap settings read from the web.config's custom settings, and wrappers for the basic methods of a DbCommand object (SqlCommand, OleDbCommand, OracleCommand, etc.): ExecuteNonQuery, ExecuteReader, and ExecuteScalar. You may prefer to add helper methods that simplify the creation of parameters and the construction of command objects (the Enterprise Library mentioned before does such things), but I find it simple enough to do it the standard way, and I believe that having the code in the methods that use the command makes it more readable. Figure 3-2 represents the inheritance relationship between this DataAccess base class, the abstract provider class and its concrete implementation, and how these wrapper commands work.

Figure 3-2

The first abstract class, DataAccess, is the overall base of all DAL classes, and it contains helper methods that apply to various DAL implementations. The main DAL interface is defined by another abstract class that descends from that one: MyDalObj, which defines the functionality (properties and methods) required by all specific DAL implementations. As mentioned before, for this book we'll only have one DAL implementation for SQL Server, so our SQL Server-specific DAL is shown above as MySqlDalObj.
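A minimal sketch of what the DataAccess base class might look like (simplified; the actual implementation appears later in the chapter):

// DbCommand and DbDataReader are the new .NET 2.0 base classes
// in System.Data.Common
public abstract class DataAccess
{
   private string _connectionString = "";

   // wraps a setting read from the web.config file
   public string ConnectionString
   {
      get { return _connectionString; }
      set { _connectionString = value; }
   }

   // wrappers around the basic DbCommand methods: a single place to add
   // cross-cutting behavior such as logging or name substitution
   protected int ExecuteNonQuery(DbCommand cmd)
   {
      return cmd.ExecuteNonQuery();
   }

   protected DbDataReader ExecuteReader(DbCommand cmd)
   {
      return cmd.ExecuteReader();
   }

   protected object ExecuteScalar(DbCommand cmd)
   {
      return cmd.ExecuteScalar();
   }
}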

The ExecuteNonQuery function in the DataAccess base class is a special helper method that can translate the name of a stored procedure, or the table names in a SQL text query, into the proper naming convention for the particular database used by one site (in our case we're not going to take it this far, in order to keep things simple). To explain the reason for this kind of helper, I'd like to relate a recent experience of mine. I had developed a web-based Content Management System, or CMS (for the record, it's the one I use behind www.dotnet2themax.com, a web site for .NET developers), whose DAL code used hard-coded references at the C# level to the database's table names and stored procedure names. One day I needed to expand the architecture to support other similar sites on the same shared hosting provider, using the same shared SQL Server database (a separate hosting account and database for each site would have made my job easy, but would have cost much more on a monthly basis). The DAL and BLL of the new sites would be the same as those used for my first site; the only differences would be in the UI layer (the pages) and, of course, in the content stored in the tables. My idea was basically to clone the existing site to create a couple of new copies, and then modify them as needed. The problem I encountered was that the database structure was not designed to support multiple sites: There wasn't a SiteID column in each table that would let me keep the data separate for each site, and there was no easy way to leverage the existing DAL code with only minimal changes.

I debated between adding a SiteID column to every table, to allow the existing tables to hold data for different sites, and developing a new naming convention that would use separate tables for each site. In the end I decided that separate tables gave me a cleaner solution: they would easily allow data for different sites to be backed up separately, and possibly migrated to a different database or web hosting provider in the future. I decided to use a small prefix on all the table and stored procedure names to identify the site to which they pertained. Therefore, instead of usp_DoSomething for stored procedures and Table1 for tables, I should have had something like usp_Site1_DoSomething and usp_Site2_DoSomething for stored procedures, and Site1_Table1 and Site2_Table1 for tables (usp is the system prefix that identifies the overall CMS that uses the tables, so it's the same for all the sites). To accomplish this change simply at the DAL level, I used a replaceable parameter placeholder in the SQL code and stored procedure names (such as usp_{InstanceName}_DoSomething) that a helper function could swap out for the site name. However, this meant reviewing and modifying the whole DAL (several thousand lines of code) to find all occurrences of table and stored procedure names, add the placeholder prefix to their names, and then modify the code that executes SQL text and stored procedures to call a helper method that replaces all occurrences of the placeholder with the value for a specific site, as identified by a configuration setting. I wished I had started out using helper methods such as ExecuteNonQuery, ExecuteReader, and ExecuteScalar in a base class from the beginning, so now I create this type of architecture at the outset!
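Had the DAL used wrapper methods like these from the start, the placeholder substitution would have been a change in one place (the configuration key name below is hypothetical):

protected int ExecuteNonQuery(DbCommand cmd)
{
   // replace the {InstanceName} placeholder with the site prefix
   // read from configuration (hypothetical setting name)
   cmd.CommandText = cmd.CommandText.Replace("{InstanceName}",
      ConfigurationManager.AppSettings["instanceName"]);
   return cmd.ExecuteNonQuery();
}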

As mentioned earlier, we won't fully implement the replaceable table name and stored procedure name substitution in this book in order to keep the SQL statements and stored procedures easier to understand and work with. If you need to deploy many similar sites and you want to use the same database (which may be the case if you're an application service provider, and you will install sites for different clients on the same web server and database), then you would be better off designing the data store to support this architecture from the beginning. However, the site we're developing in this book will not encounter that situation, and there's no reason to make the design overly complex.

There's another real-world example of why this base class architecture is a good idea. As I was testing an early version of the sample site for this book, it occurred to me that it might be a good idea to put the site online on the Internet so that people could see how it works. This might help someone decide whether they want to buy the book, and it might help readers follow along if they can view a real site as they read. A large part of this site is the administration console, which enables users to add, delete, and edit any dynamic content on the site: articles, pictures, forum posts, newsletters, polls, etc. Therefore, it would be important for a reader to explore that part as well, and not only the end-user interface. However, I didn't want people to modify the sample content I prepared. In my situation, the administration user interface should be completely browsable, but clicking the buttons to edit/delete/insert content should have no effect, and I certainly didn't want to create a separate version of the UI to disable those buttons. Because I had already created the DataAccess base class with those wrapper methods, and all provider classes use it, it sufficed to add some code to the ExecuteNonQuery method (which executes all the insert, update, and delete statements) to skip executing the command if the site's current user is named "SampleEditor" (an account created for these demonstration purposes). It took only one minute, and the result is exactly what I wanted! Later in the chapter you'll see how this is actually implemented, and in the next chapter I'll cover the membership system, which is used to handle user accounts and member registration.
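Conceptually, the check added to ExecuteNonQuery amounts to just a few lines (a simplified sketch; the actual implementation is shown later in the chapter):

protected int ExecuteNonQuery(DbCommand cmd)
{
   // skip all insert, update, and delete commands for the demo account
   if (HttpContext.Current.User.Identity.Name.ToLower() == "sampleeditor")
      return 1;   // pretend that one record was affected

   return cmd.ExecuteNonQuery();
}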

Designing the Business Logic Layer

The DAL discussed in the previous section is made up of a number of classes that retrieve data from the database by running stored procedures and return it as collections of custom entity objects that wrap the fields of the retrieved data. The data returned by the DAL is still raw data, even though it's wrapped in classes, because these entity classes don't add anything; they are just strongly typed containers used to move data around. The BLL consumes this data and exposes it to the UI layer, but it also adds validation logic and calculated properties, makes some properties private or read-only (whereas they are all public and writable in the custom entity classes used between the DAL and the BLL), and adds instance and static methods to delete, edit, insert, and retrieve data. For a domain object named Employee that represents an employee, there may be a property named Boss that returns a reference to another Employee object representing the first employee's boss. In mid- to large-size projects there are usually dozens, hundreds, or maybe even thousands of such objects, with relationships between them. This object-oriented and strongly typed representation of the data provides an extremely strong abstraction from the database, which merely stores the data, and it gives the UI developer a simple and powerful set of classes to work with, without needing to know any details about how and where the raw data is stored, how many tables are in the database, or which relationships exist between them. This makes the UI developer's job easier, and makes it possible to change low-level database structures without breaking any of the UI code (which is one of the primary reasons for using a multi-tier design). This approach definitely requires more development time initially, and more talented and experienced developers (which is why you're reading this book), than just passing a DataSet around, but in the long run it pays off in the form of easier maintainability and increased reliability.

Once you have a well-designed, complete BLL using domain objects, developing the user interface will be very easy, and could be done by less experienced developers in less time, so some of the time you spend up front can be recovered later when you develop the UI. The diagram in Figure 3-3 represents the relationship between the DAL's providers and entity class, and the BLL's domain objects. As you can see, the Customer business class has the same instance properties as the CustomerDetails DAL entity class, plus a CompleteName calculated read-only property that returns the concatenation of FirstName and LastName. Additionally, it has a couple of instance methods to delete and update the data represented by a specific object, and a number of static methods to retrieve a list of Customer objects or a single Customer with a specific ID, and to create, update, or delete a customer. The following snippet of code shows how a UI developer could retrieve a Customer object filled with data from the database, update some fields, and then save the changes back:

Customer cust = Customer.GetCustomerByID(3);
cust.FirstName = "Marco";
cust.LastName = "Bellinaso";
cust.Update();
Figure 3-3

Alternately, in this case it would be simpler to use the UpdateCustomer static method because you wouldn't need to create a class instance:

Customer.UpdateCustomer(3, "Marco", "Bellinaso");

Which method you use depends on the situation, of course. If you just need to update an existing record but don't need to fill a Customer object with the current data from the database, use the static method. If, instead, you need to read and display the current data and then update it, the first example, using the instance method, is best.

When you design the BLL of your application, the layering and class design are not the only things you need to think about. There are a number of other questions you need to consider:

  • Can I, and how would I, avoid querying for the same, unchanged data every few seconds in order to increase performance and boost the user's experience?

  • How can I handle transactions for methods made up of multiple sub-operations that must be executed atomically, yielding consistent results, so that either all sub-operations complete successfully or they are all rolled back?

  • How can I log any unhandled exception thrown by the application, so that an administrator can later review it and correct the problem, and also log other events of particular interest, whether standard ASP.NET events (application started or shut down) or custom events (record deleted)?

The following sections consider these questions and suggest a possible solution for them.

Caching Data for Better Performance

In every site or web-based application, there is some data that doesn't change very often but is requested very frequently by a lot of end users: the list of article categories, the e-store's product categories and product items, the list of countries and states, and so on. The most common way to increase the performance of your site is to implement a caching system for that kind of data: after the data is retrieved from the data store the first time, it is kept in memory for some interval, and subsequent requests for the same data retrieve it from the memory cache, avoiding a round-trip to the database server and another query. This saves processing time and network traffic, and thus produces faster output for the user. In ASP.NET 1.x, the System.Web.Caching.Cache class was commonly used to cache data. The cache works as an extended dictionary collection, whereby each entry has a key and a related value. You store an item in the cache by writing Cache.Insert("key", data), and you retrieve it by writing data = Cache["key"]. The Insert method of the Cache class has a number of other overloads through which you can specify the cached data's expiration time or how long the data will be kept in the cache, whether the interval is sliding (a sliding interval is reset every time the cached data is accessed), and a dependency on a file or on other cached items. When the dependent file changes, the expiration time is reached, or the interval passes, the data is purged from the cache, and on the next request you need to query the data directly from the database and store it in the cache again.
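For example (the interval is illustrative, and customers is assumed to hold data just retrieved from the database):

// store the data with a 10-minute absolute expiration and no sliding interval
Cache.Insert("Customers", customers, null,
   DateTime.Now.AddMinutes(10), System.Web.Caching.Cache.NoSlidingExpiration);

// on later requests, check the cache before querying the database
if (Cache["Customers"] != null)
   customers = (List<CustomerDetails>)Cache["Customers"];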

The New Caching with SQL Dependency Support

One limitation of the ASP.NET 1.x cache is that when the expiration time or caching interval is reached, the data is removed from the cache and you have to read it again from the DB, even if it hasn't actually changed in the database. Conversely, if you cache the data for 30 minutes and the data changes a second after you cache it, you'll be displaying stale, out-of-sync data for almost 30 minutes. This could be unacceptable for some types of information, such as the price of a product or the number of items in stock. The Cache class has been enhanced in ASP.NET 2.0: It now supports dependencies on database tables, in addition to files and other cached items. In practice, you can cache the data for an indeterminate period, until the data in the source database table actually changes. This cache invalidation mechanism works for all versions of SQL Server (version 7 and later), where it is based on polling and triggers. SQL Server 2005 adds another type of cache invalidation, based on receiving events from the database, which is more efficient if you know you'll be deploying to SQL Server 2005. In addition, the polling method only watches for table-level changes, whereas the SQL Server 2005 event method enables you to watch individual rows for changes.

SQL Server 7+ Support for Table-Level SQL Dependencies

When using the polling style of cache invalidation, ASP.NET 2.0 checks a counter in a support table every so often (the interval is configurable), and if the retrieved value is greater than the value read on the last check, the data has changed and is removed from the cache. There is one counter (and therefore one record in the AspNet_CacheTablesForChangeNotification support table) for each table for which you want to add SQL-dependency support. The counter is incremented by a table-specific trigger. You create the required table, triggers, and stored procedures by running the aspnet_regsql.exe command-line tool from Visual Studio's command prompt. Run it once to add support at the database level, creating the AspNet_CacheTablesForChangeNotification table and the supporting stored procedures, as follows (this assumes your database is on a local SQL Server Express instance named SqlExpress):

aspnet_regsql.exe -E -S .\SqlExpress -d aspnetdb -ed

The -E option specifies that you're using Windows integrated security, and thus don't need to pass username and password credentials (otherwise you would use the -U and -P parameters, respectively). The -S option specifies the SQL Server instance name (specifying localhost\SqlExpress is the same); SqlExpress is the default instance name you get when you install SQL Server 2005 Express. The -d option specifies the database name (aspnetdb), and -ed stands for "enable database."

The next step is to add support for a specific table, which means you must create a record in the AspNet_CacheTablesForChangeNotification table, and a trigger for the table to which you're adding support:

aspnet_regsql.exe -E -S .\SqlExpress -d aspnetdb -t Customers -et

In addition to the options described for the first command, the -t parameter specifies the table name, and -et stands for "enable table." For the preceding commands to work, the aspnetdb database must already be attached to a SQL Server instance. This is already the case for SQL Server 7/2000 and for the fully featured editions of SQL Server 2005; however, with SQL Server 2005 Express you typically have the database attach dynamically at runtime, so that you can use XCopy deployment for the database as well as for the rest of the site's files. If that's your situation, you must first temporarily attach the database file, run aspnet_regsql.exe, and then detach the database. The attachment and detachment can be done by running the sp_attach_db and sp_detach_db system stored procedures. You can execute them from SQL Server Management Studio Express (downloadable from Microsoft if it didn't come with your SQL Express installation), or from the sqlcmd.exe command-line program run from the VS 2005 command prompt. Many of the SQL commands used as examples in this book use the sqlcmd program, because everyone has it available. It is started from a Visual Studio command prompt as follows (the command-line options are similar to those of aspnet_regsql, as explained above):

sqlcmd -E -S .\SqlExpress

Once you are in the sqlcmd program, you run the following command to attach the database:

sp_attach_db "aspnetdb", "c:\Websites\TheBeerHouse\App_Data\AspNetDB.mdf"
go

Then, from a regular command prompt, run the two aspnet_regsql commands listed above, and finally detach the database from sqlcmd as follows (each sqlcmd batch is terminated by "go" on a separate line):

sp_detach_db "aspnetdb"
go

Note

To close the sqlcmd shell, just type quit or exit and press Enter. Note that if you were running the stored procedures from SQL Server Management Studio, you would need to replace the double quotes with single quotes, and the GO command would not be needed.

The last thing to do to complete the SQL dependency configuration is to write the polling settings in the web.config file. You can configure different polling profiles for the same database, or different settings for different databases. This is done by adding entries under the system.web/caching/sqlCacheDependency/databases section, as shown below:

<configuration>
   <connectionStrings>

      <add name="SiteDB" connectionString="Data Source=.\SQLExpress;Integrated   Image from book
         Security=True;User Instance=True;AttachDBFilename=|DataDirectory|       Image from book
         AspNetDB.mdf" />
   </connectionStrings>

   <system.web>
      <caching>
         <sqlCacheDependency enabled="true" pollTime="10000">
            <databases>
               <add name="SiteDB-Cache" connectionStringName="SiteDB"
                  pollTime="2000" />
            </databases>
         </sqlCacheDependency>
      </caching>

      <!-- other settings here... -->
   </system.web>
</configuration>

As you see, there is an entry named SiteDB-Cache that refers to the database pointed to by the connection string called SiteDB (more about this later), and that defines a polling interval of 2 seconds (2,000 milliseconds). If the pollTime attribute were not specified on the entry, the default set on the parent sqlCacheDependency element (10 seconds in the sample above) would be used.

Now that everything is configured, you can finally write the code to actually cache the data. To create a dependency on the Customers table, you create an instance of the System.Web.Caching.SqlCacheDependency class, whose constructor takes the caching profile name defined above and the table name. Then, when you insert the data into the Cache class, you pass the SqlCacheDependency object as the third parameter of the Cache.Insert method, as shown below:

SqlCacheDependency dep = new SqlCacheDependency("SiteDB-Cache", "Customers");
Cache.Insert("Customers", customers, dep);

Let's assume that you have a GetCustomers method in your DAL that returns a list of CustomerDetails objects filled with data from the Customers table. You could implement caching as follows:

public List<CustomerDetails> GetCustomers()
{
   List<CustomerDetails> customers = null;

   if (Cache["Customers"] != null)
   {
      customers = (List<CustomerDetails>)Cache["Customers"];
   }
   else
   {
      using (SqlConnection cn = new SqlConnection(_connString))
      {
         cn.Open();
         SqlCommand cmd = new SqlCommand("SELECT * FROM Customers", cn);
         customers = FillCustomerListFromReader(cmd.ExecuteReader());

         SqlCacheDependency dep = new SqlCacheDependency(
            "SiteDB-Cache", "Customers");

         Cache.Insert("Customers", customers, dep);
      }
   }

   return customers;
}

The method first checks whether the data is already in cache: If it is, then it retrieves the data from there; otherwise, it first retrieves it from the database, and then caches it for later use.

You can use this cache invalidation mechanism not only for data you access from code, but also for ASP.NET's Output Caching feature, i.e., caching the HTML produced by page rendering, so that pages whose output wouldn't change don't have to be re-rendered on every request. To add output caching to a page, add the @OutputCache directive at the top of the .aspx file (or the .ascx file if you want to use fragment caching in user controls):

<%@ OutputCache Duration="3600" VaryByParam="None"
   SqlDependency="SiteDB-Cache:Customers" %>

With this directive, the page's output will be cached for a maximum of one hour, or less if the data in the Customers table is modified.

The problem with this implementation of SQL dependency caching is that the dependency is on the entire table; it invalidates the cache regardless of which data in the table is changed. If you retrieved and cached just a few records from a table of thousands, why should you purge them when some other records are modified? With SQL Server 7 and 2000, whole-table monitoring for cache dependencies is your only choice, but SQL Server 2005 adds query-specific cache dependency tracking.

SQL Server 2005-Specific SQL Dependency Support

The counter- and polling-based SQL dependency implementation just described is fully supported by SQL Server 2005, but this latest version of SQL Server also has some new features and technology built into it that further extend the capabilities of the Cache class. The new engine is able to create an indexed view (a view with a physical copy of the rows) when a query for which the client wants to create a dependency is executed. If after an insert, delete or update statement the results returned by a query would change, SQL Server 2005 can detect this situation and send a message to the client that registered for the dependency, by means of the Service Broker. These Query Notification events sent from SQL Server back to an application program enable a client to be notified when some data it previously retrieved was changed in the DB since it was originally retrieved, so that the client can re-request that data for the latest changes. A new class, System.Data.SqlClient.SqlDependency, can create a dependency tied to a specific SqlCommand, and thus create a logical subscription for change notifications that are received by its OnChange event handler. The following snippet shows how to create such a dependency:

using (SqlConnection cn = new SqlConnection(_connString))
{
   cn.Open();
   SqlCommand cmd = new SqlCommand(
      "SELECT ID, CustomerName FROM dbo.Customers", cn);
   SqlDependency dep = new SqlDependency(cmd);
   dep.OnChange += new OnChangeEventHandler(CustomersData_OnChange);

   SqlDataReader reader = cmd.ExecuteReader();

   while (reader.Read())
   {
      Trace.Write(reader["CustomerName"].ToString());
   }

}

Below is the specified event handler for OnChange, raised when the underlying data returned by the preceding query changes in the database:

void CustomersData_OnChange(object sender, SqlNotificationEventArgs e)
{
   Trace.Warn("Customers data has changed. Reload it from the DB.");
}

Important

Note that in order for this code to work, you must first enable the Query Notifications support in your client by calling the SqlDependency.Start method once, somewhere in the application. If you're using it from within a web-based application, the right place to put this call would be in the Application_Start global event handler in global.asax. For a WinForms application, it may be the Main entry-point method, or the main form's Form_Load event.
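
For example, in a web application you could enable it from global.asax as follows (a minimal sketch; SiteDB is the sample connection string name defined earlier in this chapter):

void Application_Start(object sender, EventArgs e)
{
   // Start the listener that receives Query Notification messages
   // from SQL Server 2005's Service Broker. Call this only once.
   string connString = System.Web.Configuration.WebConfigurationManager
      .ConnectionStrings["SiteDB"].ConnectionString;
   System.Data.SqlClient.SqlDependency.Start(connString);
}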

The preceding OnChange example just shows that we're notified when the underlying data in the database has changed, but we normally want to go one step further and purge data from the cache when changes are detected. ASP.NET's SqlCacheDependency has other overloaded constructors, and one of them takes a SqlCommand instance. It creates a SqlDependency object behind the scenes, and handles its OnChange event to automatically remove the data from the cache when the data for the specific SELECT query changes. Here's all you have to do to cache some data with a dependency on a SqlCommand:

SqlCommand cmd = new SqlCommand("SELECT ID, CustomerName FROM dbo.Customers", cn);
SqlCacheDependency dep = new SqlCacheDependency(cmd);
Cache.Insert("keyname", data, dep);

The sample GetCustomers method shown above would then become the following:

public List<CustomerDetails> GetCustomers()
{
   List<CustomerDetails> customers = null;

   if (Cache["Customers"] != null)
   {
      customers = (List<CustomerDetails>)Cache["Customers"];
   }
   else
   {
      using (SqlConnection cn = new SqlConnection(_connString))
      {
         cn.Open();
         SqlCommand cmd = new SqlCommand(
            "SELECT ID, CustomerName FROM dbo.Customers", cn);
         SqlCacheDependency dep = new SqlCacheDependency(cmd);

         customers = FillCustomerListFromReader(cmd.ExecuteReader());
         Cache.Insert("Customers", customers, dep);
      }
   }

   return customers;
}
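
Query notifications can also drive the Output Caching feature shown earlier: when targeting SQL Server 2005, you can set the SqlDependency attribute of the @OutputCache directive to the special CommandNotification value, instead of a database:table pair, so that the page's cached output is invalidated when the results of the commands it executed change:

<%@ OutputCache Duration="3600" VaryByParam="None"
   SqlDependency="CommandNotification" %>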

This technology has the obvious advantage that the dependency is at the query level, and not at the entire table level like the implementation for SQL Server 7/2000, and the event method is much more efficient than using a polling mechanism. However, it has a number of serious limitations that drastically reduce the number of occasions when it can be used, so sometimes the whole-table polling method is your only choice. Here are the most important constraints:

  • You can't use the * wildcard in the SELECT query; instead, you must explicitly list all the fields. This is a good practice anyway, because you should only request fields that you actually need and not necessarily all of them. Listing them explicitly also puts you in control of their order in the returned DataReader or DataTable, something that can be important if you access fields by index and not by name (although access by index is not itself a good practice).

  • You must reference any table with the full name, e.g., dbo.Customers. Just Customers wouldn't be enough. This is a significant issue because most of us aren't used to fully qualifying table names, but it's a simple matter to handle if you remember that you need to do it.

  • The query can't use aggregation functions such as COUNT, SUM, AVG, MIN, MAX, etc.

  • The query can't use ranking and windowing functions, such as the new ROW_NUMBER() function, which is tremendously useful for implementing high-performance results pagination to be used, for example, in the DataGrid, GridView, or other ASP.NET server-side controls. (This function will be explained in Chapter 5.)

  • You can't reference views or temporary tables in the query.

  • The query can't return fields of type text, ntext, or image (blob types). Keep in mind that many tables will have such columns, containing, for example, the description of a product, or the content of an article or newsletter.

  • You can't use DISTINCT, HAVING, CONTAINS and FREETEXT.

  • The query can't include subqueries, outer-joins or self-joins. This is one of the biggest limitations, as subqueries are commonly used.

  • All of the preceding limitations exist regardless of whether the query is run directly from the client as a SQL text command, or from inside a stored procedure. For stored procedures, however, there's a further limitation: You can't use the SET NOCOUNT ON statement, which is often used (and suggested) to reduce the quantity of information sent across the network, for cases where you don't need counts.

If you consider that most of the modules developed in the following chapters will need to implement custom pagination to be fast (and thus use windowing functions or temporary tables, COUNT, subqueries, and other prohibited features), and that many columns will be of type ntext, you can easily understand why you may not be able to use this form of SQL dependency often.

Note

If you want to know more about the Service Broker and Query Notifications, the technology behind SQL dependencies, I recommend a whitepaper written by Bob Beauchemin, titled "Query Notifications in ADO.NET 2.0" and available on MSDN Online at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnvs05/html/querynotification.asp.

Choosing a Caching Strategy That Fits Your Needs

The Cache class has been greatly improved in its latest implementation. Sometimes, however, less is more, and using your own code to handle the expiration of data and manual purging may be better than using some form of automated SQL dependency. The polling-based approach works at the table level, so it will often invalidate your data when unrelated data in the same table has changed. SQL Server 2005's Service Broker/Query Notification technology is very intriguing, and will be very handy in some situations, but as I said earlier, it suffers from too many limitations to be used often. Additionally, both approaches are bound to SQL Server, and should only be used in the DAL provider specific to SQL Server. Therefore, if we used SQL dependencies, different providers (for different RDBMSs) would have to implement a different caching strategy, rewritten from scratch. This is something I don't like, because I want my caching code in the BLL (not in the DAL), so that it's executed the same way regardless of which DAL provider is being used.

For all these reasons, I won't be using any form of built-in SQL dependency for the modules developed in the rest of the book. Instead, I'll use the good old ASP.NET 1.x caching features based on time/interval expiration. To avoid displaying stale data, we'll implement some simple methods that purge data from the cache when it actually changes in the database. To call these methods you won't need triggers at the database level, or some other connection and notification service between the data store and the application's code; you'll just call them from the BLL methods that add, edit, and delete data. Because the site's contents will be managed by a web-based custom administration console and your own BLL classes, there's no need to intercept changes at the database level. Instead, you just add some code to the BLL methods themselves. This gives you complete control over what data you need to purge, and when you must actually purge it (e.g., you can remove data if a specific row field changes, but not if another field changes).

Transaction Management

A very important issue you must tackle when designing a business layer is how you plan to manage transactions. Many business methods call multiple DAL methods internally to update, insert, or delete multiple records, potentially in multiple tables. You must ensure that these calls run within a transaction, so that if one fails, all actions performed by the previous ones are rolled back. If you don't do this, you'll end up with inconsistent, and wrong, data. Managing transactions would be complicated if you had to do everything yourself, but fortunately there are several technologies and frameworks that can do the plumbing for you.

ADO.NET Transactions

In the simplest case you can use explicit ADO.NET client-side transactions. You should already be familiar with them, but here's some sample code that runs a couple of commands inside a transaction to refresh your memory:

using (SqlConnection cn = new SqlConnection(connString))
{
   cn.Open();
   SqlTransaction tran = cn.BeginTransaction();

   try
   {

      SqlCommand cmd1 = new SqlCommand(cmdText1, cn, tran);
      cmd1.ExecuteNonQuery();

      SqlCommand cmd2 = new SqlCommand(cmdText2, cn, tran);
      cmd2.ExecuteNonQuery();

      tran.Commit();
   }
   catch
   {
      // roll back the transaction and let the caller handle the error
      tran.Rollback();
      throw;
   }
}

The preceding code is simple and works fine in many situations. However, the transactions managed by ADO.NET are connection-bound, which implies the following limitations:

  • You have to use them from the DAL, where you have the connection objects, and not from the BLL where you'll typically (but not necessarily) want to manage transactions. This is not a problem if you're employing a lighter architecture with the DAL and BLL mixed together in a single layer, which may actually be a valid choice for small sites, but it does pose a problem for multi-tier systems.

  • The transaction is bound to a single connection, which means it can't span multiple databases. This may be required if, for example, you store all data for the forums module in one database and the articles data on some other database, and you need some business method that updates some records in both systems within one logical transaction.

  • All commands you execute must use the same connection. If these commands are executed from different methods, wrapping them into a single transaction means that you must find some way to pass the connection object through all the methods. This could be done with an additional method parameter, but this leads to an ugly and inflexible design.

COM+ and SWC Transactions

All of the problems just presented can be solved by using COM+ as the application server for your components. COM+ can handle transactions that include multiple commands and multiple connections (even to different data stores), and generally any action that the DTC (Distributed Transaction Coordinator) can manage, such as sending an MSMQ message. By using COM+ you can easily write atomic and isolated procedures; once you start the transaction in a method, all submethods enlist in the transaction automatically, and the transaction is committed if no exception is thrown, or rolled back otherwise. If your class inherits from System.EnterpriseServices.ServicedComponent, the transaction handling is configurable by means of .NET attributes (e.g., AutoCompleteAttribute, for the automatic transaction completion just outlined) that you apply at the assembly, class, and method level. Here's a sample transactional class that uses attributes to configure automatic transactions:

[Transaction(TransactionOption.RequiresNew)]
public class SampleBO : ServicedComponent
{
   [AutoComplete]
   public void UpdateDate()
   {
      MyBizObject1 obj1 = new MyBizObject1();

      obj1.DoSomething();

      MyBizObject2 obj2 = new MyBizObject2();
      obj2.DoSomethingElse();
   }
}

Note

If you don't like declarative transactions, you can still handle them imperatively in code through the ContextUtil helper class and its EnableCommit, DisableCommit, SetAbort, and SetComplete methods.

COM+ transactions add a lot of overhead compared to simple ADO.NET transactions, and as a result they are much slower (by as much as 50%). However, COM+ doesn't just mean transactions; it also offers object pooling, just-in-time activation, queued components, and much more, all features that can make your application more reliable and scalable, which is often more important than raw performance numbers. After all, if your application shuts down under a high load, how useful is it to know that it was extremely fast with a few users when it first started? The problem with using COM+ in .NET is that your business objects must inherit from System.EnterpriseServices.ServicedComponent and must respect a number of rules (for example, you can't define static methods), and you must make this decision early on; otherwise, adding transaction support later will require a lot of additional work. Another problem is that deploying .NET Enterprise Services is not as easy as deploying a normal assembly: you must generate a COM type library from the assembly, and register it in the Windows registry and the COM+ catalog. These are operations that only a system administrator for the remote server can do, so you won't be able to do this if you're planning to deploy your site with an inexpensive shared hosting provider.

Fortunately, though, if you're hosting the site on Windows Server 2003, you can take advantage of a new feature of COM+ 1.5 called Services Without Components (SWC). This should also work on Windows XP, but you should never deploy a production web application to a client version of Windows for performance and reliability reasons. This feature allows you to configure, start, and manage a distributed transaction without actually writing a typical COM+ component that must be registered in the COM+ catalog. With the .NET Framework 1.1, you can do everything with the ServiceConfig, ServiceDomain, and ContextUtil classes that you find in the System.EnterpriseServices.dll assembly, under the System.EnterpriseServices namespace. You can configure the transaction on-the-fly by creating an instance of ServiceConfig and setting the transaction type (Transaction and IsolationLevel properties), specifying whether tracking is enabled (TrackingEnabled property), the application and component name (TrackingAppName and TrackingComponentName properties), and other options. Finally, you start the transaction by calling the Enter static method of the ServiceDomain class, which takes the ServiceConfig object that specifies the configuration. You use the SetComplete and SetAbort static methods of the ContextUtil class to commit or roll back the transaction, respectively. Here's an example:

// configure and start the transaction
ServiceConfig svcConfig = new ServiceConfig();
svcConfig.TrackingEnabled = true;
svcConfig.TrackingAppName = "TheBeerHouse";
svcConfig.TrackingComponentName = "MB.TheBeerHouse.BLL.SampleBO";
svcConfig.Transaction = TransactionOption.RequiresNew;
svcConfig.IsolationLevel = TransactionIsolationLevel.ReadCommitted;
ServiceDomain.Enter(svcConfig);

try
{

   MyBizObject1 obj1 = new MyBizObject1();
   obj1.DoSomething();

   MyBizObject2 obj2 = new MyBizObject2();
   obj2.DoSomethingElse();

   ContextUtil.SetComplete();
}
catch (Exception e)
{
   // rollback the transaction
   ContextUtil.SetAbort();
}
finally
{
   ServiceDomain.Leave();
}

This code wraps the calls to two different business objects into a single distributed transaction controlled by the DTC. You only have to start the transaction, catch any exceptions that may be thrown, and commit or roll back the transaction. You don't have any special deployment needs: a simple XCopy is enough. SWC is also good because you can easily add transactions to business objects that weren't originally designed to handle distributed transactions, namely objects that don't inherit from ServicedComponent and that call DAL methods that are not ADO.NET transaction-aware (that is, they don't pass Transaction objects as parameters). SWC, however, doesn't completely replace traditional Enterprise Services components, as it doesn't give you access to other features such as object pooling, just-in-time activation, queued components, and the rest of the COM+ functionality; you can use it just to add transaction support with the least development and deployment effort. All in all, this is a very welcome facility that should be used thoughtfully.

The New System.Transactions Namespace

SWC transactions are definitely good, but version 2.0 of the .NET Framework introduces something even better: a new System.Transactions namespace that provides a modern, managed interface for handling transactions that cannot be handled by ADO.NET's SqlTransaction class. Two new transaction managers are introduced: the Lightweight Transaction Manager and the OleTx Transaction Manager. The former manages transactions bound to a single durable resource (i.e., a single connection to a single data store), while the latter can manage distributed transactions, whereby multiple connections to different data stores come into play. You don't have to choose between the two yourself; the proper transaction manager is chosen automatically according to the type and number of resources you use in a transaction scope.

The basic class that you'll be using is System.Transactions.TransactionScope. When an object of this type is created, it starts a lightweight transaction. Then you start creating your connections and other transaction-aware resources (such as MSMQ queues and messages). As long as you use a single resource that supports lightweight transactions, the transaction will be handled by the resource manager itself. SQL Server 2005 (including the Express Edition) has this capability, so if you create a single connection to one of its databases, it will take care of everything internally, consuming as few resources as possible and with very good performance. As soon as you introduce a second connection/resource into the transaction scope, however, the transaction will automatically be promoted to the OleTx Transaction Manager, which can handle distributed transactions by means of the COM+ DTC technology under the covers (it dynamically configures a temporary Enterprise Service through SWC). This also happens when you have a single connection to a resource manager that doesn't support lightweight transactions, such as SQL Server 7/2000, Oracle, and other RDBMSs.

Here's an example that starts a transaction and runs a couple of different commands within it:

using(TransactionScope scope = new TransactionScope())
{
   using (SqlConnection cn = new SqlConnection(connString))
   {
      cn.Open();
      SqlCommand cmd1 = new SqlCommand(cmdText1, cn);
      cmd1.ExecuteNonQuery();

      SqlCommand cmd2 = new SqlCommand(cmdText2, cn);
      cmd2.ExecuteNonQuery();
   }
   scope.Complete();
}

Because the two commands share the same connection, a lightweight transaction will be created if connString points to a SQL Server 2005 database. As I mentioned before, though, transactions are often run from the BLL, and must wrap calls to several other methods, which internally may create separate connections and target different databases. The code is still as simple as the preceding code, though:

using(TransactionScope scope = new TransactionScope())
{
   MyBizObject1 obj1 = new MyBizObject1();
   obj1.DoSomething();

   MyBizObject2 obj2 = new MyBizObject2();
   obj2.DoSomethingElse();

   scope.Complete();
}

When this code is run, a distributed COM+ transaction will probably be created under the covers (it depends on whether the two methods use two connections or share a single one), but the developer doesn't need to know this, and doesn't have to do anything special at design time or deployment time. Another advantage of using System.Transactions is that you can create transactions only in the methods where you really need them; you don't have to make a whole class transactional. Given how simple it is to work with TransactionScope and related classes, you don't even need to build a framework or some sort of base service to simplify things; it's all already there! However, I would not recommend COM+, SWC, or System.Transactions for use with shared web hosting, because the servers are out of your control and it's not clear whether these could be used reliably in an environment where server reconfiguration is commonplace. For these reasons, the sample web site for this book won't make use of these advanced technologies.
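
If you need more control, for example over the isolation level or the timeout, you can pass a TransactionOptions structure to the TransactionScope constructor. Here's a minimal sketch (the values shown are arbitrary):

TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
options.Timeout = TimeSpan.FromSeconds(30);

using (TransactionScope scope = new TransactionScope(
   TransactionScopeOption.Required, options))
{
   // do the transactional work here, then vote to commit
   scope.Complete();
}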

Note

If you want to know more about System.Transactions, refer to Juval Lowy's whitepaper entitled "Introducing System.Transactions in the Microsoft .NET Framework version 2.0," downloadable from www.microsoft.com/downloads/details.aspx?FamilyId=&displaylang=en.

Health Monitoring and Exception Handling

Runtime exceptions and errors happen sometimes, but your end user doesn't need to know about them. And if an error is serious enough to prevent the site from working, a simplified error message should be shown to the user; for security reasons, you should never display technical information in an error message. Every site should catch exceptions, log them for later review by an administrator, and output a friendly error page to the user. In the "old" days of ASP.NET 1.1, you had to take care of just about everything: implementing your own logging system to save exception information to a database, a text file, the Event Viewer, or some other data store, writing and managing some sort of custom configuration to allow administrators to enable or disable logging, and so on.

Note

Exception handling, logging, and configuration management can be implemented quite easily using the framework provided in the Enterprise Library, which includes some of the building blocks developed in recent years by the Microsoft Patterns & Practices Team. However, it's an external library and there's a fair amount of work involved to set it up and integrate it into a site. My experience is that it seems like overkill in many situations, and the heavy architecture may lead to new opportunities for a site to fail, but it's quite helpful to study their code, and you can learn a lot from their efforts. You can read more about Enterprise Library and the building blocks that compose it from http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag2/html/entlib.asp.

ASP.NET has always had events and configuration settings that allow you to easily implement custom error pages, so that the user doesn't see those yellow pages with raw details about the exception (something that not only was ugly to see, but if you had left debug information in the assemblies uploaded to the server, could even be dangerous, as it could expose small but potentially critical pieces of your source code, SQL table names, etc.). Instead of going to the IIS administration console to specify the path for the custom error pages for different error codes (something you often can't do if you're on a shared hosting site), just use the customErrors section in the web.config file, as follows:

<customErrors mode="RemoteOnly" defaultRedirect="~/FriendlyError.aspx">
   <error statusCode="404" redirect="~/PageNotFound.aspx" />
</customErrors>

As you see, you can have error pages for a specific error code, and a default error page that covers all other cases. mode="RemoteOnly" means that the custom error page will only be shown to remote clients; if you browse the site from the local machine (i.e., if you have an in-house server and work directly from there, or if you can do it through a Remote Desktop connection), you will still see the raw error information and stack trace.

Besides simply showing the user a friendlier message for errors, you could also log all details about an exception by writing some custom code in the page's Error event handler. Even better, you could just override the OnError method of a custom page base class (if you had it) so that you write the code only once and the logging behavior would be inherited by all other pages:

public class BasePage : System.Web.UI.Page
{
   protected override void OnError(EventArgs e)
   {
      // log last exception, retrieved by Server.GetLastError()
   }
   ...
}

Alternatively, if you didn't have a custom base page class, you could do the same from the Application_Error event handler, in the Global.asax.cs code-behind file:

void Application_Error(Object sender, EventArgs e)
{
   // log last exception, retrieved by Server.GetLastError()
}

These logging techniques still work, of course, but ASP.NET 2.0 now offers something much better that saves us a lot of time and effort: the instrumentation and health monitoring system. This system takes care of logging a number of things: unhandled application exceptions, lifetime events (application started, application stopped, etc.), audit events (failed or successful logins, failed request validation, etc.), and more. In some ways this is similar to the Enterprise Library, but it's built in to ASP.NET. You can also create your own custom events, which you can then raise to log particular actions, such as the deletion of a record.

This framework is extensively customizable through the new <healthMonitoring> section of web.config, where you can define rules for logging events (such as the maximum number of times the same event can be logged in a specified interval), register custom events, and register and select the provider class you want to use to log events. In fact, the health monitoring system, like many other parts of ASP.NET 2.0 (such as the Membership system, described in the next chapter), is based on the provider model design pattern: several classes inherit from a common abstract base class and provide different concrete implementations. The various providers allow us to save the logged data on different media: the built-in providers can log to a SQL Server database, the Windows Event Log, the WMI Log, or to e-mails. You can also create your own logging providers by inheriting from the System.Web.Management.WebEventProvider base class, but the built-in providers cover most of the logging needs I can think of (unless you want to log to Oracle, or something like that).

I'm not fond of logging to a database as the only logging mechanism, because you might be trying to log an error regarding a database problem, and you may not be able to log the message because the database is down. However, if you're in a shared-hosting scenario, you probably won't have access to the Event Viewer, and using text files is not good because of potential locking and concurrency issues, so using a database is still the best option.
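
To give you an idea of how little a custom provider requires, here's a minimal sketch of one that forwards events to the debugger output; the class name and the target medium are illustrative only:

using System.Web.Management;

public class DebugWebEventProvider : WebEventProvider
{
   public override void ProcessEvent(WebBaseEvent raisedEvent)
   {
      // WebBaseEvent.ToString() returns the formatted event details,
      // similar to the sample log shown below.
      System.Diagnostics.Debug.WriteLine(raisedEvent.ToString());
   }

   // This sketch doesn't buffer events, so there's nothing to flush
   // or clean up.
   public override void Flush() { }
   public override void Shutdown() { }
}

Like the built-in providers, a custom provider is registered in the <providers> subsection described later in this section.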

As an example of what information is logged, here's a case where a DivideByZeroException is thrown and not caught; the following text is logged by the health monitoring framework, if it is properly configured:

Event code: 3005
Event message: An unhandled exception has occurred.
Event time: 10/10/2005 3:52:47 PM
Event time (UTC): 10/10/2005 10:52:47 PM
Event ID: 82cfbe7544f54a10abfb31fcabc1d466
Event sequence: 221
Event occurrence: 1
Event detail code: 0

Application information:
    Application domain: 71f34b0a-1-127734569896956416
    Trust level: Full
    Application Virtual Path: /TheBeerHouse
    Application Path: C:\Websites\TheBeerHouse\
    Machine name: VSNETBETA2

Process information:
    Process ID: 5064

    Process name: WebDev.WebServer.EXE
    Account name: VSNETBETA2\Marco

Exception information:
    Exception type: DivideByZeroException
    Exception message: Attempted to divide by zero.

Request information:
    Request URL: http://localhost:14376/TheBeerHouse/Default.aspx
    Request path: /TheBeerHouse/Default.aspx
    User host address: 127.0.0.1
    User: SampleEditor
    Is authenticated: True
    Authentication Type: Forms
    Thread account name: VSNETBETA2\Marco

Thread information:
    Thread ID: 4
    Thread account name: VSNETBETA2\Marco
    Is impersonating: False
    Stack trace:    at _Default.Page_Load(Object sender, EventArgs e) in
c:\Websites\TheBeerHouse\Default.aspx.cs:line 17
   at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o,
Object t, EventArgs e)
   at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender,
EventArgs e)
   at System.Web.UI.Control.OnLoad(EventArgs e)
   at MB.TheBeerHouse.UI.BasePage.OnLoad(EventArgs e) in
c:\Websites\TheBeerHouse\App_Code\BasePage.cs:line 46
   at System.Web.UI.Control.LoadRecursive()
   at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint,
Boolean includeStagesAfterAsyncPoint)


Custom event details:

For more information, see Help and Support Center at
http://go.microsoft.com/fwlink/events.asp.

As you see, in addition to the exception error message, a lot of other information is included in the logged message, such as the URL and physical path of the page that threw the exception, the server name and IP address, the application's trust level, the current date and time, the current thread's and current process' username, the name of the site's user if they are currently logged in, the exception's stack trace, and more.

Configuring the Health Monitoring System

Looking at the framework in more detail, the events raised and logged are instances of classes found under the System.Web.Management namespace, or of your own custom classes:

  • WebBaseEvent: A generic base event class from which all other events inherit

  • WebManagementEvent: A base class for all the application events, including request errors, life-time events (application start and end) and audit events

  • WebHeartbeatEvent: An event raised periodically by the ASP.NET engine, containing information representing the health of the application at that time: number of active threads, queued and executing requests, and memory usage

  • WebRequestEvent: The base class for all events related to web requests, such as failed cross-site scripting validation, exceeded maximum post length, or anything else that stops the current request from completing

  • WebAuditEvent: This event is raised on failed or successful authentications, and failed decryption and validation of the page's ViewState, cookies, etc.

  • WebBaseErrorEvent: The base class for all error events, including exceptions and request/infrastructure errors

  • WebErrorEvent: The event for infrastructure errors, such as compilation, parsing and configuration errors

These are the built-in provider classes (all located under the System.Web.Management namespace):

  • EventLogWebEventProvider: Logs events to the Windows Event Log

  • SqlWebEventProvider: Logs events to a SQL Server (7, 2000, or 2005) database, as long as the target database has the expected table and stored procedure (explained below)

  • TraceWebEventProvider: Sends events to the current trace listener collection (typically the tracing is saved in memory to be displayed by trace.axd, or on the page's output itself, but additional listeners can be added to save information on a file or other media)

  • WmiWebEventProvider: Forwards the web events to the Windows Management Instrumentation (WMI) subsystems

  • SimpleMailWebEventProvider: Sends the web event's information by e-mail to the specified administrator

  • TemplatedMailWebEventProvider: Like SimpleMailWebEventProvider, but with an additional property that points to a text file to be used as template for the mail, so that it can be completely customized and translated.

The configuration settings stored in the web.config file have the following format:

<healthMonitoring>
   <eventMappings>...</eventMappings>
   <providers>...</providers>
   <rules>...</rules>
   <profiles>...</profiles>
   <bufferModes>...</bufferModes>
</healthMonitoring>

The eventMappings subsection registers the event classes by specifying their full type. You must manually register both the built-in and custom classes that you want to be logged. You assign a name to each event class you register, and you'll use that name in other sections to identify the event type you want to configure. Following is an excerpt from the default web.config that registers all built-in events:

<eventMappings>
   <add name="All Events"
      type="System.Web.Management.WebBaseEvent,System.Web,Version=2.0.0.0,
         Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      startEventCode="0" endEventCode="2147483647" />
   <add name="Heartbeats"
      type="System.Web.Management.WebHeartbeatEvent,System.Web,Version=2.0.0.0,
         Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      startEventCode="0" endEventCode="2147483647" />
   <add name="Application Lifetime Events"
      type="System.Web.Management.WebApplicationLifetimeEvent,System.Web,
         Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      startEventCode="0" endEventCode="2147483647" />
   <add name="Request Processing Events"
      type="System.Web.Management.WebRequestEvent,System.Web,Version=2.0.0.0,
         Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      startEventCode="0" endEventCode="2147483647" />
   <add name="All Errors"
      type="System.Web.Management.WebBaseErrorEvent,System.Web,Version=2.0.0.0,
         Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      startEventCode="0" endEventCode="2147483647" />
   <add name="Infrastructure Errors"
      type="System.Web.Management.WebErrorEvent,System.Web,Version=2.0.0.0,
         Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      startEventCode="0" endEventCode="2147483647" />
   <add name="Request Processing Errors"
      type="System.Web.Management.WebRequestErrorEvent,System.Web,
         Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      startEventCode="0" endEventCode="2147483647" />
   <add name="All Audits"
      type="System.Web.Management.WebAuditEvent,System.Web,Version=2.0.0.0,
         Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      startEventCode="0" endEventCode="2147483647" />
   <add name="Failure Audits"
      type="System.Web.Management.WebFailureAuditEvent,System.Web,
         Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      startEventCode="0" endEventCode="2147483647" />
   <add name="Success Audits"
      type="System.Web.Management.WebSuccessAuditEvent,System.Web,
         Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      startEventCode="0" endEventCode="2147483647" />
</eventMappings>

The providers subsection registers the providers. If you register multiple providers, you'll be able to use different providers for different types of events. Here's an example of how to register the providers for SQL Server, the Event Log, and the WMI subsystem:

<providers>
   <add name="EventLogProvider"
      type="System.Web.Management.EventLogWebEventProvider,System.Web,
         Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a" />
   <add name="SqlWebEventProvider"
      type="System.Web.Management.SqlWebEventProvider,System.Web,
         Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a"
      connectionStringName="LocalSqlServer"
      maxEventDetailsLength="1073741823"
      buffer="false" bufferMode="Notification" />
   <add name="WmiWebEventProvider"
      type="System.Web.Management.WmiWebEventProvider,System.Web,
         Version=2.0.0.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a" />
</providers>

Note that the element registering the SQL Server provider has an additional attribute, connectionStringName, that refers to an element of the <connectionStrings> section with the connection string for the database that will store the logging information (more about this shortly). The rules section is where you actually define which events you want to log, with which registered provider, and how often (if a program gets stuck in a loop that generates thousands of exceptions, you don't want to log them all). Here's an example that configures all errors to be logged to the Windows Event Log, while failed audit events are logged to a SQL Server database:

<rules>
   <add name="All Errors Default" eventName="All Errors"
      provider="EventLogProvider" minInstances="1" maxLimit="Infinite"
      minInterval="00:01:00" />
   <add name="Failure Audits Default" eventName="Failure Audits"
         provider="SqlWebEventProvider" profile="Critical" />
</rules>

Note that in the first element, the minInstances, maxLimit, and minInterval attributes specify, respectively, how many times the event must occur before it is logged, how many times it can be logged, and the minimum interval of time that must elapse between two logged events. In the second rule element I'm not setting these values directly, but showing you how to reference a registered profile to get these settings. An entry in the profiles subsection, defined below, contains this configuration data and allows you to apply it to multiple rules, instead of repeating these items for each rule individually:

<profiles>
   <add name="Default" minInstances="1" maxLimit="Infinite"
      minInterval="00:01:00" />
   <add name="Critical" minInstances="1" maxLimit="Infinite"
      minInterval="00:00:00" />
</profiles>

The last section, bufferModes, contains a list of possible buffering profiles that can be applied to providers:

<bufferModes>
   <add name="Critical Notification" maxBufferSize="100" maxFlushSize="20"
      urgentFlushThreshold="1" regularFlushInterval="Infinite"
      urgentFlushInterval="00:01:00" maxBufferThreads="1" />
   <add name="Notification" maxBufferSize="300" maxFlushSize="20"
      urgentFlushThreshold="1" regularFlushInterval="Infinite"
      urgentFlushInterval="00:01:00" maxBufferThreads="1" />
   <add name="Logging" maxBufferSize="1000" maxFlushSize="200"
      urgentFlushThreshold="800" regularFlushInterval="00:30:00"
      urgentFlushInterval="00:05:00" maxBufferThreads="1" />
</bufferModes>

The only attributes (shown at the end of this chapter) for the parent <healthMonitoring> section are enabled (true by default, so it only has to be specified if you want it to be false) and heartbeatInterval, the interval in seconds that specifies how often the heartbeat event is raised (zero by default; set it to a positive number to activate this useful feature).
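
For example, to explicitly enable the system and raise the heartbeat event every 60 seconds, you would write something like this:

<healthMonitoring enabled="true" heartbeatInterval="60">
   <!-- eventMappings, providers, rules, profiles, bufferModes -->
</healthMonitoring>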

Our site will use a configuration section specifying that generic errors and failed audit events are logged to the SQL Server database used for the rest of the site's data. The Windows Event Log would be a good storage medium for this information, but it cannot be used if you deploy your site to a shared hosting environment. Logging audit failures (invalid logon attempts and the like) to SQL Server is a viable plan because we can assume the database will be up.

In the "Solution" section we'll also create a custom event that will be raised every time a record is deleted by someone. The ID of the record and the name of the editor who deleted the record will be stored, so that if some important record is deleted by mistake the administrator can retrieve its ID and restore it from a backup copy, and know who to talk to concerning this error.

Setting Up the Database for the SQL Server Provider

The database used by the SQL Server web event provider must have a particular schema (i.e., a certain table and stored procedure). The data is stored in a table named aspnet_WebEvent_Events by the aspnet_WebEvent_LogEvent stored procedure. To create these objects, you execute the aspnet_regsql.exe command-line tool from the VS2005 command prompt. If executed with no parameters, the tool opens a Windows wizard that guides you through the configuration of an attached SQL Server database for the health monitoring system, and for other systems such as membership, role management, profiles, and personalization. However, if you just want to create the objects needed for event logging, and not all the other tables and procedures, you can run the tool with the following parameters (this assumes your database is a local SQL Server Express instance named SqlExpress):

aspnet_regsql.exe -E -S .\SqlExpress -d aspnetdb -A w

If you're using a detached SQL Server 2005 Express database, refer to the section above about caching to see how to temporarily attach to the database to run aspnet_regsql.

A Base Class for All Business Classes

Many business objects need access to a lot of shared information, such as the name of the current user, their IP address, and a reference to the current context's Cache object. This information can be placed into a BizObject base class that all other domain objects will inherit from. This base class also has helper methods to do things like encoding HTML input by escaping special characters (such as < and >), and purging items from the cache. The cache purging method will be very useful for implementing our custom caching system: it is shared among all business classes, and it removes from the shared Cache store all items whose key begins with a specific prefix. By choosing the proper key when entering data into the cache, you can later purge exactly those items that you want to refresh. For example, in Chapter 5 we will be building an Article Management module that handles categories of articles and article comments. You can cache the article with an ID of 4 under a key of Articles_Article_4, and its comments under Articles_Comments_4. When a new comment is inserted for that article, or the article itself is deleted, you can purge just the items whose key begins with Articles_Comments_4, leaving unrelated cached data alone.
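
To make the idea concrete, here's a minimal sketch of what such a purge-by-prefix helper might look like (the method name and details are illustrative; the real class is developed in the "Solution" section):

using System.Collections;
using System.Collections.Generic;
using System.Web;

public abstract class BizObject
{
   // Removes from the cache all items whose key starts with prefix.
   protected static void PurgeCacheItems(string prefix)
   {
      prefix = prefix.ToLower();
      List<string> itemsToRemove = new List<string>();

      // First collect the matching keys: items can't be removed
      // while the cache is being enumerated.
      IDictionaryEnumerator enumerator = HttpRuntime.Cache.GetEnumerator();
      while (enumerator.MoveNext())
      {
         if (enumerator.Key.ToString().ToLower().StartsWith(prefix))
            itemsToRemove.Add(enumerator.Key.ToString());
      }

      foreach (string key in itemsToRemove)
         HttpRuntime.Cache.Remove(key);
   }
}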

The base class written in this chapter is not big and complex; quite the opposite, actually. It's good design practice to keep base classes small by including only the functionality that will be common to all the child classes that descend from them. Even if you don't initially have any particular code you want to put in a base class, it's still a good idea to create one anyway. Later you might decide that you need a common property or method, and you'll be glad that you already have a base class in your architecture.

Storing Connection Strings and Other Settings

So far, while discussing the DAL and the BLL, I've mentioned that the site will have a number of configurable settings, such as the name of the DAL provider to be used, whether caching is active, the cache duration, and the connection string to the database. Managing the configuration is something that doesn't exactly fit into any of the other three layers; rather, it's something that all layers will use. For example, the DAL will need the name of the concrete database provider to instantiate, and the connection string; the BLL will need the caching settings; and the user interface will need other settings, such as the recipient e-mail address for e-mail sent from the site's Contact Us form, and the prefix for their subject lines (so that it's easy to spot them among all the other e-mails, and to set up a rule to move them to a dedicated folder in your favorite e-mail client program). All these settings are saved in the web.config file, so it's easy for an administrator to change them using only a text editor. Anytime the web.config file is changed, ASP.NET will automatically reload it, and its modified settings will be used for new requests, without requiring you to restart the IIS application yourself (which would have terminated any users already online).

In the days of ASP.NET 1.x, the connection string was typically stored in the <appSettings> section. Now this setting has its own dedicated section: <connectionStrings>. Here's an example that shows how to store a connection string and give it a shorter, friendlier name that can be used to retrieve it later from code, or from other configuration elements of web.config:

<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
   <connectionStrings>
      <add name="TheBeerHouse" providerName="System.Data.SqlClient"
         connectionString="Data Source=.\SQLExpress;
         Integrated Security=True;User Instance=True;
         AttachDBFilename=|DataDirectory|TheBeerHouse.mdf" />
   </connectionStrings>

   <system.web>
      <!-- some settings here... -->
   </system.web>
</configuration>

Notice that the connection string does not specify the typical Initial Catalog attribute, which would have been set to the database name. Instead, there is an AttachDBFilename parameter, which points to the path of the SQL Server 2005 MDF file that contains the database. This database will be dynamically attached to a SQL Server instance at runtime. This is what permits the database XCopy deployment mentioned earlier. To further simplify deployment, we're using a |DataDirectory| placeholder in front of the filename that will be replaced at runtime with the path of the site's App_Data folder in which the database files reside.
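
Incidentally, should you ever need the resolved path at runtime (to build a diagnostic message, for example), the placeholder's value can be read from the current AppDomain; this is just an illustrative snippet:

// |DataDirectory| maps to a value stored in the AppDomain; in an
// ASP.NET application it defaults to the site's App_Data folder.
string dataDir = (string)AppDomain.CurrentDomain.GetData("DataDirectory");
string mdfPath = System.IO.Path.Combine(dataDir, "TheBeerHouse.mdf");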

These connection strings settings are referenced by many other configuration elements — for example, the element that configures the SqlWebEventProvider of the health monitoring system, or the SQL-dependency caching settings. By default, all these elements have a connectionStringName attribute set to LocalSqlServer, which refers to a connection string pointing to a local SQL Server 2005 database called ASPNETDB.MDF — for convenience we'll use that same filename for our database. If you choose to rename the file, you can create a new connection string element under <connectionStrings>, and change all elements' connectionStringName attribute to your new connection string name. A more drastic option would be to remove the LocalSqlServer entry from machine.config, and then register it again with the new connection string. Here is what you would need to write (in machine.config):

<connectionStrings>
   <remove name="LocalSqlServer"/>
   <add name="LocalSqlServer" providerName="System.Data.SqlClient"
      connectionString="Data Source=.\SQLExpress;
      Integrated Security=True;User Instance=True;
      AttachDBFilename=|DataDirectory|TheBeerHouse.mdf" />
</connectionStrings>

By doing this, all modules pointing to the LocalSqlServer setting will take the new connection string, and you won't have to change their individual connectionStringName attribute. However, I generally don't recommend changing machine.config because it creates deployment issues, and any syntax error in that file can render the whole web server (not just that site) inoperable. And, of course, a web hosting provider company is not going to let you make this change. I mention it only for completeness, and because it might be the right solution on a tightly controlled corporate intranet web server, for example.

To retrieve the connection strings from code, there's a new class called System.Web.Configuration.WebConfigurationManager, which has a ConnectionStrings dictionary property to retrieve the connection string by name, as follows (note the square brackets used to index into the dictionary):

string connString = WebConfigurationManager.ConnectionStrings[
   "LocalSqlServer"].ConnectionString;

This class also has an AppSettings dictionary property that lets you read the values stored in the <appSettings> section, like ASP.NET 1.x's ConfigurationSettings class does. In ASP.NET 1.x it was common to store all site settings in that section, because it was easy to read. However, if you had pages or sub-applications developed by other developers, and you had to share the <appSettings> section with them, there was always the chance of a conflict whereby two different settings had the same name. A better option was to create a class that reads from a custom section in web.config, so each sub-application would have its settings isolated from the others. With ASP.NET 2.0 it's much easier to use custom configuration sections, and this is now the preferred method. You just write a class that inherits from the System.Configuration.ConfigurationSection class, and decorate its public properties with the ConfigurationProperty attribute to indicate that they must be filled with settings read from the web.config file; the actual reading is done for you when your getter reads the setting from the base class! For elements nested under a parent custom section, you create a new class that inherits from ConfigurationElement (instead of ConfigurationSection), and again define its properties with the ConfigurationProperty attribute. Here's an example:

public class SiteSection : ConfigurationSection
{
   [ConfigurationProperty("title", DefaultValue="Sample Title")]
   public string Title
   {
      get { return (string)base["title"]; }
      set { base["title"] = value; }
   }
   
   [ConfigurationProperty("homePage", IsRequired=true)]

   public HomePageElement HomePage
   {
      get { return (HomePageElement)base["homePage"]; }
   }
}

public class HomePageElement : ConfigurationElement
{
   [ConfigurationProperty("showAdBanners", DefaultValue="true")]
   public bool ShowAdBanners
   {
      get { return (bool)base["showAdBanners"]; }
      set { base["showAdBanners"] = value; }
   }
}

This SiteSection class will be mapped to a custom configuration section. It has a property named Title that maps a "title" attribute, which has a default value of "Sample Title". It also has the HomePage property of type HomePageElement, which maps to a sub-element named homePage with the showAdBanners Boolean attribute. Note that the ConfigurationProperty attribute of the HomePage property has the IsRequired option set to true, meaning the element is required to be present in the web.config file. The other properties do not have this constraint because they have a default value.

Once the class is ready you must register it in the web.config file and define the mapping to a section named "site", as follows:

<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
   <configSections>
      <section name="site"
         type="Company.Project.SiteSection, __code"/>
   </configSections>

   <site title="Some nice title for the site">
      <homePage showAdBanners="false" />
   </site>

   <!-- other configuration sections... -->
</configuration>

To read the settings from code, you use the WebConfigurationManager's GetSection to get a reference to the "site" section, and cast it to the SiteSection type. Then, you can use its properties and subproperties:

SiteSection site = (SiteSection)WebConfigurationManager.GetSection("site");
string title = site.Title;
bool showAdBanners = site.HomePage.ShowAdBanners;

This book's sample site requires a number of settings for each module, so there will be a single custom configuration section with one sub-element per module. Each module will have its own connection string, caching, and provider settings. However, it's useful to provide default values for these settings at the section level, so that if you want to use the same connection string for all modules, you specify it just once for the entire site rather than separately for each module. In the "Solution" section of this chapter you'll see the custom section class, while the module-specific elements will be added in their respective chapters later in the book. Also in this chapter, we'll develop a class to map a sub-element named contactForm, with settings for sending the e-mails from the Contact Us page (the subject line's prefix, and the To and CC fields).
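As a preview, here's a minimal sketch of what such a sub-element class could look like, following the same pattern as HomePageElement above. The attribute and property names are illustrative placeholders; the real class is developed in the "Solution" section:

public class ContactFormElement : ConfigurationElement
{
   // Illustrative sketch: attribute/property names are placeholders
   [ConfigurationProperty("subjectPrefix", DefaultValue="Site contact: ")]
   public string SubjectPrefix
   {
      get { return (string)base["subjectPrefix"]; }
      set { base["subjectPrefix"] = value; }
   }

   [ConfigurationProperty("to", IsRequired=true)]
   public string To
   {
      get { return (string)base["to"]; }
      set { base["to"] = value; }
   }

   [ConfigurationProperty("cc")]
   public string CC
   {
      get { return (string)base["cc"]; }
      set { base["cc"] = value; }
   }
}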

User Interface

With the data access and business logic layers covered, it's now time to examine the user interface layer. ASP.NET 1.x already had some highly functional server-side controls, such as the DataGrid, DataList, and Repeater, and it could databind almost any visual control to a back-end data source such as a DataSet, DataReader, DataTable, DataView, custom collection class, or any object implementing the IList interface. However, if you used those controls much, you probably ended up writing several dozen lines of code to load the data and bind it to the control, handle the UpdateCommand event to update the database with the new values entered in the grid, handle the DeleteCommand event to delete a record, handle SortCommand to change the sorting, handle PageIndexChanged to move between pages of results, and more. The structure of this code is almost always the same; what changes is mostly the SQL statements or BLL methods you call, and the field names you work with.
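To give an idea of the repetition, here's a minimal sketch of the kind of ASP.NET 1.x handlers that ended up duplicated on page after page (the grid variable, the BindGrid helper, and the sortExpression field are made up for this example):

// ASP.NET 1.x-style plumbing (illustrative sketch)
private void grid_SortCommand(object source, DataGridSortCommandEventArgs e)
{
   sortExpression = e.SortExpression;  // remember the new sort order
   BindGrid();                         // re-query and rebind
}

private void grid_PageIndexChanged(object source, DataGridPageChangedEventArgs e)
{
   grid.CurrentPageIndex = e.NewPageIndex;
   BindGrid();
}

private void grid_DeleteCommand(object source, DataGridCommandEventArgs e)
{
   int id = (int)grid.DataKeys[e.Item.ItemIndex];
   // ...execute a DELETE statement or call a BLL method with this id...
   BindGrid();
}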

New Binding Features and Data-Bound Controls

All the controls from ASP.NET 1.x are still there, sometimes with a few changes and improvements, but ASP.NET 2.0 adds some new and powerful bindable controls that make all this much easier, and enable you to create the same pages you had before with a lot less code! These controls are as follows:

  • GridView: This greatly extends the DataGrid control with new column types that can automatically display and handle images and checkboxes, in addition to the simple textual data, command buttons, hyperlinks, and template-based columns that were already available. It also supports pagination, two-way sorting, editing, and deletion with very little code, or no code at all!

  • DetailsView: This shows the details of a single record in a tabular layout (a list, as opposed to a grid) using two columns: a name and a value. The control also supports editing and the addition of new records (in these cases the data of the second column is replaced by some input controls). It also supports paging, by means of links that enable users to move forward or backward to other records.

  • FormView: This is similar to the DetailsView, but it is completely template-based, which gives it a lot more formatting flexibility. It supports editing, insertion, and paging as well.

Of all the new controls introduced in ASP.NET 2.0, the GridView is probably the single most useful one, but to use it effectively you sometimes have to pair it with a DetailsView or FormView; these controls work together to provide the kind of UI functionality a typical site needs. The nonvisual DataSource controls provide the back-end plumbing that makes them all work.

These new controls don't really do any magic. Instead, they package up a lot of the same code you had to write yourself in ASP.NET 1.x, and they leverage the new DataSource controls to work with the underlying data store. The DataSource controls act as a bridge between the data and the data-bound controls, providing all the information necessary to access and work with the data directly in the UI markup. There are several kinds of DataSource controls, distinguished by their prefix: SqlDataSource, ObjectDataSource, XmlDataSource, and SiteMapDataSource. As their names suggest, the SqlDataSource control specifies a SELECT statement used to retrieve the data to be shown, plus the INSERT, UPDATE, and DELETE statements to insert or modify it; it can also reference stored procedures instead of SQL text (the Sql prefix does not refer specifically to SQL Server; this control also works with other RDBMSs such as Oracle). The ObjectDataSource calls methods on a business class to retrieve and manipulate data. The XmlDataSource refers to an XML file and lets you specify XPath queries to retrieve the data; and the SiteMapDataSource was already used in the previous chapter to retrieve the content of the Web.sitemap file and bind it to the SiteMapPath and Menu controls on the site's master page. Instead of providing a detailed reference to all the properties and methods of these controls, this section gives a quick, example-driven overview, as they will be used heavily in most of the upcoming chapters.
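To give a flavor of the declarative syntax before diving into the examples, here's how an XmlDataSource pointing at a hypothetical XML file might be declared (the file name and XPath query are made up):

<asp:XmlDataSource ID="xmlNews" runat="server"
   DataFile="~/App_Data/News.xml" XPath="/news/item" />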

The SqlDataSource and the GridView Controls

The Data tab of Visual Studio's Toolbox panel contains all the data-bound controls and the nonvisual data source components. Create a simple test ASP.NET application by selecting File > New Web Site, choosing the ASP.NET Web Site template and the C# language. Click OK, and then make sure the Design button is pressed under the editor for default.aspx. Go to the Toolbox, find the Data tab, select the GridView control, and drag it onto the form for default.aspx. As soon as you drop it, a Smart Tasks pop-up window appears, listing the common actions and customizations you can apply. This is a new feature of Visual Studio 2005 that makes it easier to configure controls quickly and easily. The IDE should appear as represented in Figure 3-4.

Figure 3-4

If the Smart Tasks pop-up closes and you want to re-open it, just select the control and click the arrow that usually appears near the upper-right corner of the control's area. The first step in configuring the control is to set up data binding. In the list of options for Choose Data Source, select New Data Source, and you will be presented with a dialog (see Figure 3-5) that asks you which type of data source control you want to create.

Figure 3-5

In this example we'll use a SqlDataSource (the Database option in Figure 3-5), but later in this chapter and in the rest of the book we'll normally use an ObjectDataSource. After selecting Database, give it the ID sqlCustomers and press OK. Once you've created the SqlDataSource, its configuration wizard starts automatically (you can start it again manually later from its Smart Tasks window). You are then asked whether you want to select an existing database or connection string, or create a new one, as shown in Figure 3-6.

Figure 3-6

The first time you use this database, you'll have to click the New Connection button and set up a connection for your database — after this has been done once you can just select that connection in the dialog in Figure 3-6. After creating the connection (if necessary) and selecting it, click Next twice.

Now you can specify a query or stored procedure, or you can select columns from a table to let the wizard build a SQL statement for you. From this step's Advanced Options window, you can also have the wizard automatically generate the INSERT, UPDATE, and DELETE statements according to the table and fields you selected. In this example I'm selecting the Customers table, checking all the columns, and using the Advanced dialog to have it generate the INSERT, UPDATE, and DELETE statements for me. Figure 3-7 shows a screenshot of this step.

Figure 3-7

Once the wizard is completed, the SqlDataSource control is visible on the form, below the grid. The grid already shows the structure it will have at runtime, with the ID, FirstName, LastName, and BirthDate columns selected in the wizard (it does not show actual data from the table at this time). Now re-open the GridView's Smart Tasks editor, and select the options to enable sorting, editing, deletion, and selection. As you do this, the new command links are added in a column, as shown in Figure 3-8.

Figure 3-8

To make the GridView look better, select the Auto Format command from the Smart Tasks panel, and then select one of the color schemes from the dialog that pops up, as shown in Figure 3-9.

Figure 3-9

Finally, to customize the order and type of the grid's columns, click the Edit Columns link in the Smart Tasks panel. In the dialog shown in Figure 3-10, click each field in the Available Fields section and press Add. You can then move columns up and down in the Selected Fields section, add or remove command fields (and thus enable or disable the automatic editing, deletion, and selection), add or remove columns that show links, images, or checkboxes, convert a simple BoundField column to a TemplateField, and, of course, change each column's properties. For example, the grid has a column bound to a field named BirthDate of type DateTime. By default it shows both the date and the time, but if you want to show just the date, select the column and set its HtmlEncode property to False and DataFormatString to {0:d}.

Figure 3-10

After setting up the fields, press OK. Then go to the GridView's Properties window and set DataKeyNames to ID. You can finally run the page by pressing F5, and you'll see your fully functional, editable, sortable, and selectable grid in the browser, without a single line of code written in the C# file. The result is visible in Figure 3-11.

Figure 3-11

Even for this simple example, a few dozen lines of code would have been required to do the same things with the old DataGrid and no SqlDataSource control. If you now go back to the IDE and switch the editor to the Source View, this is the markup code you will find in default.aspx to define the SqlDataSource control:

<asp:SqlDataSource ID="sqlCustomers" runat="server"
   ConnectionString="<%$ ConnectionStrings:SiteDB %>"
   SelectCommand="SELECT [ID], [FirstName], [LastName], [BirthDate] FROM [Customers]"
   DeleteCommand="DELETE FROM [Customers] WHERE [ID] = @ID"
   InsertCommand="INSERT INTO [Customers] ([FirstName], [LastName], [BirthDate])    Image from book
      VALUES (@FirstName, @LastName, @BirthDate)"

  UpdateCommand="UPDATE [Customers] SET [FirstName] = @FirstName, [LastName] =     Image from book
    @LastName, [BirthDate] = @BirthDate WHERE [ID] = @ID">
  <DeleteParameters>
     <asp:Parameter Name="ID" Type="Int32" />
  </DeleteParameters>
  <UpdateParameters>
     <asp:Parameter Name="FirstName" Type="String" />
     <asp:Parameter Name="LastName" Type="String" />
     <asp:Parameter Name="BirthDate" Type="DateTime" />
     <asp:Parameter Name="ID" Type="Int32" />
  </UpdateParameters>
  <InsertParameters>
     <asp:Parameter Name="FirstName" Type="String" />
     <asp:Parameter Name="LastName" Type="String" />
     <asp:Parameter Name="BirthDate" Type="DateTime" />
  </InsertParameters>
</asp:SqlDataSource>

Note that the ConnectionString property is set to a special binding expression (often called a dollar expression, because of the $ character in place of the # used in other binding expressions) that returns the connection string registered under the SiteDB entry in the <connectionStrings> section of the web.config file. The SelectCommand, InsertCommand, UpdateCommand, and DeleteCommand properties are set to the SQL statements required to retrieve and modify the data. Note that the DELETE statement is parameterized: the value for the @ID parameter will be set dynamically to the key of the GridView row whose Delete link was clicked. The UPDATE statement is similar, and defines parameters for the new values of the record. The <DeleteParameters> and <UpdateParameters> sections list the parameters expected by each statement, and their types. They don't, however, include the parameter values, as these are automatically retrieved from the row's key or from the row's input controls when the grid is in edit mode.
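For reference, the SiteDB entry that the dollar expression resolves would be registered in web.config along these lines (the server and database names here are only an example):

<connectionStrings>
   <add name="SiteDB"
      connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=SiteDB;Integrated Security=True"
      providerName="System.Data.SqlClient" />
</connectionStrings>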

Here's the markup code for the GridView control:

<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
   CellPadding="4" DataKeyNames="ID" DataSourceID="sqlCustomers"
   ForeColor="#333333" GridLines="None" AllowSorting="True">
   <FooterStyle BackColor="#990000" Font-Bold="True" ForeColor="White" />
   <RowStyle BackColor="#FFFBD6" ForeColor="#333333" />
   <SelectedRowStyle BackColor="#FFCC66" Font-Bold="True" ForeColor="Navy" />
   <PagerStyle BackColor="#FFCC66" ForeColor="#333333" HorizontalAlign="Center" />
   <HeaderStyle BackColor="#990000" Font-Bold="True" ForeColor="White" />
   <AlternatingRowStyle BackColor="White" />

   <Columns>
      <asp:CommandField ShowDeleteButton="True" ShowEditButton="True"
         ShowSelectButton="True" />
      <asp:BoundField DataField="ID" HeaderText="ID" InsertVisible="False"
         ReadOnly="True" SortExpression="ID" />
      <asp:BoundField DataField="FirstName" HeaderText="FirstName"
         SortExpression="FirstName" />
      <asp:BoundField DataField="LastName" HeaderText="LastName"
         SortExpression="LastName" />
      <asp:BoundField DataField="BirthDate" DataFormatString="{0:d}"
         HeaderText="BirthDate" SortExpression="BirthDate" HtmlEncode="False" />
   </Columns>
</asp:GridView>

The first part defines the visual appearance of the control. Recall from the previous chapter that style definitions should not live on the pages themselves, but in external CSS files or server-side skin files. Here we used the Auto Format option for demo purposes, but in a real project you should take the grid declaration, strip out the column declarations and the ID, DataKeyNames, and DataSourceID properties, and copy what remains into a .skin file under a theme folder, so that the style is applied to all grids and is easier to modify and maintain; a sketch of such a skin file follows.
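For example, a Default.skin file under a theme folder could carry the style part of the declaration (this sketch reuses the colors above; the theme folder and file names are arbitrary):

<asp:GridView runat="server" CellPadding="4" ForeColor="#333333" GridLines="None">
   <RowStyle BackColor="#FFFBD6" ForeColor="#333333" />
   <HeaderStyle BackColor="#990000" Font-Bold="True" ForeColor="White" />
   <AlternatingRowStyle BackColor="White" />
</asp:GridView>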

Back in the page, the rest of the GridView's markup declares a number of BoundField columns that display and edit the data of the specified data source fields, and a CommandField column that creates the link commands to edit, delete, or select a row.

Master-Detail Forms with the DetailsView Control

Grids are usually not well suited for showing all the information of a record. Think, for example, of a grid listing products or articles: You certainly can't show the product's description or the article's content in the grid, because you don't have room. What you need is a secondary control that shows the details of the first control's selected record. When you do this, you're creating a Master-Detail form. ASP.NET 2.0's new DetailsView control can be used as this second control to display or edit an existing record, to delete it, or to create new records. To demonstrate this, I'll modify the current page by adding a DetailsView control that shows the biography of the selected customer, and that also supports editing. Drag and drop a DetailsView control from the Data tab of the Toolbox onto the form, and use its Smart Tasks editor (upper-right corner) to create a new SqlDataSource named sqlCustomerDetails. Select the same DB and table you chose before for the GridView's data source, and make sure the Advanced option for creating the INSERT, DELETE, and UPDATE statements is selected. Then, in the same step where you define the SELECT statement, click the WHERE button and fill in the dialog as shown in Figure 3-12.

Figure 3-12

After you've selected all the options for Column, Operator, and Source, go over to Parameter Properties, select GridView1, click the Add button, and press OK. This adds a filter on the ID field, and specifies that the source for its value is a control on the same form: namely, the GridView control, which acts as the Master control. The value will actually be retrieved from the GridView's SelectedValue property. As you may guess from the screenshot, you could also have selected QueryString or another source for the parameter value, which would be useful if the DetailsView were on a separate page that is called with the ID of the customer record to show passed on the query string.

Once this second SqlDataSource is created and fully configured, you can set the DetailsView control's visual appearance, and enable editing, pagination, and the insertion of new records. All this can be done from the control's Smart Tasks editor, as shown in Figure 3-13.

Figure 3-13

Again without writing any C# code, you can run the page, with the output and behavior shown in Figure 3-14. Once you select a row in the grid, you see all customer details in the DetailsView.

Figure 3-14

When the user clicks the Edit button, the DetailsView's UI will display the input controls that enable you to edit the values, as shown in Figure 3-15.

Figure 3-15

Note that because the ID field was recognized as the key field, it was set as a value for the control's DataKeyNames property, and is read-only at runtime. When the control switches to Insert mode, the ID field is completely hidden, as shown in Figure 3-16.

Figure 3-16

The markup code for the DetailsView's SqlDataSource is very similar to the code of the first Data Source listed above, with an important difference:

<asp:SqlDataSource ID="sqlCustomerDetails" runat="server"
   ConnectionString="<%$ ConnectionStrings:SiteDB %>"
   SelectCommand="SELECT [ID], [FirstName], [LastName], [BirthDate], [Bio] FROM  Image from book
     [Customers] WHERE ([ID] = @ID)" ...other commands like before...>
   <SelectParameters>

      <asp:ControlParameter ControlID="GridView1" Name="ID"
         PropertyName="SelectedValue" Type="Int32" />
   </SelectParameters>
   ...parameters for inserting, deleting and updating a record like before...
</asp:SqlDataSource>

The difference is in the <SelectParameters> collection, where a ControlParameter element is used instead of a general Parameter element. This specifies that the ID parameter will be filled with the value of the GridView control's SelectedValue property, as stated above. Had you selected one of the other parameter sources, a different parameter type, such as QueryStringParameter, would have been generated.
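For instance, if the DetailsView lived on a separate page called with a URL such as customers.aspx?id=3 (a made-up page name), the wizard would have generated a QueryStringParameter instead:

<SelectParameters>
   <asp:QueryStringParameter QueryStringField="id" Name="ID" Type="Int32" />
</SelectParameters>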

The declaration of the DetailsView control is similar to the declaration of the GridView. In fact, the most significant difference is that there is a <Fields> section instead of <Columns>, but their content is quite similar:

<asp:DetailsView ID="DetailsView1" runat="server" AllowPaging="True"
   AutoGenerateRows="False" CellPadding="4" DataKeyNames="ID"
   DataSourceID="sqlCustomerDetails" ForeColor="#333333" GridLines="None"
   Height="50px" Width="500px">

   <FooterStyle BackColor="#990000" Font-Bold="True" ForeColor="White" />
   <CommandRowStyle BackColor="#FFFFC0" Font-Bold="True" />
   <RowStyle BackColor="#FFFBD6" ForeColor="#333333" />
   <PagerStyle BackColor="#FFCC66" ForeColor="#333333" HorizontalAlign="Center" />
   <FieldHeaderStyle BackColor="#FFFF99" Font-Bold="True" />
   <HeaderStyle BackColor="#990000" Font-Bold="True" ForeColor="White" />
   <AlternatingRowStyle BackColor="White" />
   <EditRowStyle Width="100%" />

   <Fields>
      <asp:BoundField DataField="ID" HeaderText="ID"
         InsertVisible="False" ReadOnly="True" />
      <asp:BoundField DataField="FirstName" HeaderText="FirstName" />
      <asp:BoundField DataField="LastName" HeaderText="LastName" />
      <asp:BoundField DataField="BirthDate" HeaderText="BirthDate" />
      <asp:BoundField DataField="Bio" HeaderText="Bio" />
      <asp:CommandField ShowEditButton="True" ShowInsertButton="True" />
   </Fields>
</asp:DetailsView>

Using the FormView Control

The FormView control is meant to do exactly what the DetailsView control does, but it has no built-in tabular layout: you're completely in charge of defining the layout yourself. Although this control takes more work to use, it gives you much more control over its appearance, which can be a big advantage. As a demonstration, I'll show you how to use it in place of the DetailsView control already on the page. Drag and drop a FormView control onto your page, and choose the SqlDataSource created earlier for the DetailsView (sqlCustomerDetails) as its data source. Its Smart Tasks editor is less powerful than those shown earlier because it has no options for applying a style or changing the order and position of the fields; these changes must be made manually by editing one of the available templates. There are templates that control the appearance of the control in display, insert, and edit modes, templates for the header, the footer, and the pagination bar, and a template for when the control is in display mode but is not bound to any item (which happens when the Master control has no record selected, for example). Figure 3-17 is a screenshot of the control's Smart Tasks pop-up, showing the drop-down list from which you choose the template to edit.

Figure 3-17

I'll edit the ItemTemplate to show the FirstName, LastName, and BirthDate on a single line, with the customer's bio on the line below. Figure 3-18 shows the template after my changes.

Figure 3-18

Figure 3-19 shows the result at runtime.

Figure 3-19

When the control is in edit or insert mode it will look similar to the DetailsView, with one label and input control per line, but without all the colors and alignment provided by the table layout. You can, however, edit the EditItemTemplate and InsertItemTemplate to control how the input controls are displayed. The control's markup is similar to what you would write to define a Repeater's ItemTemplate, except that this template isn't repeated multiple times, but rather processed and rendered for a single record:

<asp:FormView ID="FormView1" runat="server" AllowPaging="True" DataKeyNames="ID"
   DataSourceID="sqlCustomerDetails">
   <EditItemTemplate>
      ID: <asp:Label ID="IDLabel1" runat="server"
         Text='<%# Eval("ID") %>' /><br />
      FirstName: <asp:TextBox ID="FirstNameTextBox" runat="server"
         Text='<%# Bind("FirstName") %>' /><br />
      LastName: <asp:TextBox ID="LastNameTextBox" runat="server"
         Text='<%# Bind("LastName") %>' /><br />
      BirthDate: <asp:TextBox ID="BirthDateTextBox" runat="server"
         Text='<%# Bind("BirthDate") %>' /><br />
      Bio: <asp:TextBox ID="BioTextBox" runat="server" Text='<%# Bind("Bio") %>' />
      <br />
      <asp:LinkButton ID="UpdateButton" runat="server" CausesValidation="True"
         CommandName="Update" Text="Update" />
      <asp:LinkButton ID="UpdateCancelButton" runat="server"
         CausesValidation="False"CommandName="Cancel"Text="Cancel" />
   </EditItemTemplate>
   <InsertItemTemplate>
      FirstName: <asp:TextBox ID="FirstNameTextBox" runat="server"
         Text='<%# Bind("FirstName") %>' /><br />
      LastName: <asp:TextBox ID="LastNameTextBox" runat="server"
         Text='<%# Bind("LastName") %>' /><br />
      BirthDate: <asp:TextBox ID="BirthDateTextBox" runat="server"
         Text='<%# Bind("BirthDate") %>' /><br />
      Bio: <asp:TextBox ID="BioTextBox" runat="server" Text='<%# Bind("Bio") %>' />
      <br />
      <asp:LinkButton ID="InsertButton" runat="server" CausesValidation="True"
         CommandName="Insert" Text="Insert" />
      <asp:LinkButton ID="InsertCancelButton" runat="server"
         CausesValidation="False" CommandName="Cancel" Text="Cancel" />
   </InsertItemTemplate>
   <ItemTemplate>
      <asp:Label ID="FirstNameLabel" runat="server"
         Text='<%# Bind("FirstName") %>' Font-Bold="True" />
      <asp:Label ID="LastNameLabel" runat="server"
         Text='<%# Bind("LastName") %>' Font-Bold="True" />
      (<asp:Label ID="BirthDateLabel" runat="server"
         Text='<%# Bind("BirthDate", "{0:yyyy}") %>' Font-Bold="True" />)<br />
      <br />
      <asp:Label ID="BioLabel" runat="server" Text='<%# Bind("Bio") %>' /><br />
      <br />
      <asp:LinkButton ID="EditButton" runat="server" CausesValidation="False"
         CommandName="Edit" Text="Edit" />
      <asp:LinkButton ID="NewButton" runat="server" CausesValidation="False"
         CommandName="New" Text="New" />
   </ItemTemplate>
</asp:FormView>

Note another new feature of ASP.NET 2.0 in the preceding code snippet: the Eval and Bind methods use a new, shorter notation in the data-binding expressions. Instead of writing something like DataBinder.Eval(Container.DataItem, "FieldName"), now you can just write Eval("FieldName") or Bind("FieldName"). The difference between the two is that Eval should be used for read-only ASP.NET controls, that is, for one-way binding. Bind is used instead of Eval with input controls, when you want two-way binding. If you want the control to support the data source control's insert and update statements, you must use Bind so that the new values are read back from the input controls.

Important

In this section I've shown how to use template fields only in the FormView control. However, you can also use them in the GridView and DetailsView controls, which is very handy because you will typically want to add validation controls for the input fields, and you must use templates to do this. In the upcoming chapters you'll see many examples of template usage in such controls.
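As a quick sketch, converting the grid's FirstName column from a BoundField to a TemplateField lets you attach a validator to the edit-mode textbox (the control IDs here are arbitrary):

<asp:TemplateField HeaderText="FirstName" SortExpression="FirstName">
   <ItemTemplate>
      <%# Eval("FirstName") %>
   </ItemTemplate>
   <EditItemTemplate>
      <asp:TextBox ID="FirstNameTextBox" runat="server"
         Text='<%# Bind("FirstName") %>' />
      <asp:RequiredFieldValidator ID="valFirstName" runat="server"
         ControlToValidate="FirstNameTextBox" Text="*"
         ErrorMessage="First name is required" />
   </EditItemTemplate>
</asp:TemplateField>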

The ObjectDataSource Control

You've seen how easy it is to work with the SqlDataSource. It makes connecting controls to a database trivial, and it makes sense for very small applications, but for anything of medium complexity or more you shouldn't even consider it, because it mixes the UI and data access together and leaves no room for business logic at all. This flies in the face of the best practices for multi-tier design; in fact, it isn't a multi-tier design. Making SQL calls directly from the UI, cutting out both the DAL and the BLL, is not acceptable in most modern applications. The ASP.NET team knows this very well, but they had to consider the needs of small companies with very limited staff and budgets. In that scenario it does make sense: such teams can crank out small applications quickly using the SqlDataSource. Even then, there may come a time when they regret using this control extensively, especially if their small systems scale up into medium or large systems someday.

The ObjectDataSource is the new DataSource control that fits properly into the multi-tier methodology: it can bind the GridView and other visual controls to your BLL domain objects. Let's first create a sample class that wraps the Customers table's data and provides methods for retrieving and deleting records. The class will later be used as the data source for the GridView defined above. The test class we'll use here is something between a DAL and a BLL class, as it exposes the data in an object-oriented way but is bound directly to the database. This is done only to keep this test fairly simple; the code in the following chapters will have a strong separation between the DAL and the BLL. Here's the code of the Customer class we'll test with (which will be hooked to the ObjectDataSource):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Web.Configuration;

public class Customer
{
   private int _id = 0;
   public int ID
   {
      get { return _id; }
      private set { _id = value; }
   }

   private string _firstName = "";
   public string FirstName
   {
      get { return _firstName; }
      set { _firstName = value; }
   }

   private string _lastName = "";
   public string LastName
   {
      get { return _lastName; }
      set { _lastName = value; }
   }

   private DateTime _birthDate = DateTime.MinValue;
   public DateTime BirthDate
   {
      get { return _birthDate; }
      set { _birthDate = value; }
   }

   private string _bio = "";
   public string Bio
   {
      get { return _bio; }
      set { _bio = value; }
   }

   public Customer(int id, string firstName, string lastName,
      DateTime birthDate, string bio)
   {
      this.ID = id;
      this.FirstName = firstName;
      this.LastName = lastName;
      this.BirthDate = birthDate;
      this.Bio = bio;
   }

   public static List<Customer> GetCustomers()
   {
      using (SqlConnection cn = new SqlConnection(
         WebConfigurationManager.ConnectionStrings["SiteDB"].ConnectionString))
      {
         SqlCommand cmd = new SqlCommand(
            "SELECT ID, FirstName, LastName, BirthDate, Bio FROM Customers", cn);
         cn.Open();
         SqlDataReader reader = cmd.ExecuteReader();

         List<Customer> customers = new List<Customer>();
         while (reader.Read())
         {
            Customer cust = new Customer(
               (int)reader["ID"],
               reader["FirstName"].ToString(),
               reader["LastName"].ToString(),
               (DateTime)reader["BirthDate"],
               reader["Bio"].ToString());
            customers.Add(cust);
         }
         return customers;
      }
   }

   public static bool DeleteCustomer(int id)
   {
      // delete the customer record with the specified ID
      ...
   }
}

The GetCustomers method takes no parameters and returns all customers as a .NET 2.0 generic list collection (generic collections are covered in more detail in the following chapters). If you enable pagination through the ObjectDataSource's EnablePaging property, the ObjectDataSource will expect the select method to accept two parameters named startRowIndex and maximumRows (the names are configurable). You'll see a concrete example of this in Chapter 5.
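In that case, the select method's signature would look something like the following sketch, which you might add to the Customer class. This naive version pages in memory by reusing GetCustomers; a real implementation (such as the one in Chapter 5) would page inside the database query, and you would also point the ObjectDataSource's SelectCountMethod at a method that returns the total row count:

// Naive paging sketch: fetches everything, then returns one page.
public static List<Customer> GetCustomers(int startRowIndex, int maximumRows)
{
   List<Customer> all = GetCustomers();
   if (startRowIndex >= all.Count)
      return new List<Customer>();
   int count = Math.Min(maximumRows, all.Count - startRowIndex);
   return all.GetRange(startRowIndex, count);
}

// Companion method for the ObjectDataSource's SelectCountMethod
public static int GetCustomerCount()
{
   return GetCustomers().Count;
}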

The BLL object's methods may be either static or instance-based. If they are instance methods, the class must have a parameterless constructor (it can have overloaded constructors, but one of them must be a "default" style constructor with no parameters).

Once you have this class, drag and drop an ObjectDataSource control over the form, and follow its configuration wizard. In the first step, shown in Figure 3-20, you select the Customer business object to use as a Data Source.

Figure 3-20

In the second step, shown in Figure 3-21, you choose the methods for retrieving, editing, inserting, and deleting data. In this particular case only the SelectMethod and the DeleteMethod are required.

Figure 3-21

Finish the wizard, set your GridView control's DataSourceID property to the ObjectDataSource just created, and you're done: you don't need to modify anything else, and the grid's output and behavior will be absolutely identical to when it used a SqlDataSource. This is the beauty of the DataSource design model! The markup code is as follows:

<asp:ObjectDataSource ID="objCustomers" runat="server"
   DeleteMethod="DeleteCustomer"
   SelectMethod="GetCustomers" TypeName="Customer">
   <DeleteParameters>
      <asp:Parameter Name="id" Type="Int32" />
   </DeleteParameters>
</asp:ObjectDataSource>

If you wanted to define methods for inserting or updating data, you could write them in one of the following ways:

public int InsertCustomer(
   int id, string firstName, string lastName, DateTime birthDate, string bio)
{...}

public int InsertCustomer(Customer cust)
{...}

In the first case the method would be similar to the DeleteCustomer method implemented above. In the second case you use a Customer instance to pass all values with a single parameter. To support this approach, the Customer class must also be referenced in the ObjectDataSource's DataObjectTypeName property.
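With that second form, the data source declaration would gain a DataObjectTypeName attribute, along these lines (a sketch, assuming the InsertCustomer(Customer) overload above). Note that when DataObjectTypeName is set, the insert, update, and delete methods are all expected to accept a Customer instance, so the id-based DeleteCustomer shown earlier would have to be adapted to that style as well:

<asp:ObjectDataSource ID="objCustomers" runat="server"
   TypeName="Customer" DataObjectTypeName="Customer"
   SelectMethod="GetCustomers" InsertMethod="InsertCustomer" />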

This section provided just enough information to get you started with these controls and to understand the code in the upcoming chapters without problems. However, the new data-bound and DataSource controls are much more complex and feature-rich than what you've seen here. In the following chapters you will learn more details while working with them, as a sort of on-the-job training. If you want a complete reference for these controls, you should refer to books such as Wrox's Professional ASP.NET 2.0. There's also a good article by Scott Mitchell on MSDN entitled "GridView Examples for ASP.NET 2.0," completely devoted to this topic and spanning 122 pages! Find it on the MSDN web site at http://msdn.microsoft.com/asp.net/community/authors/scottmitchell/default.aspx?pull=/library/en-us/dnaspp/html/gridviewex.asp.

