

Development Lifecycle

The development lifecycle of a component-based service, shown in Figure 3.1, is similar to the lifecycle of any software system. It goes through the following phases: requirements analysis, design, implementation, quality assurance, and maintenance.

Figure 3.1: Development lifecycle of a component-based service.

Requirements Analysis

During the requirements analysis phase of component-based service development, an entire system's functional and nonfunctional requirements are defined. The functional requirements relate to the way the component-based service will fulfill the business need. Credit-card validation is an example of a functional requirement. Nonfunctional requirements are technical in nature. A requirement that states that a response must be returned to the consumer within two seconds is an example of a nonfunctional requirement. Techniques such as looking at existing documentation and conducting user interviews are used to construct a set of artifacts that constitute the requirements for the system.

Functional Requirements

Developing the functional requirements for a service is different from developing functional requirements for a single application. Services support multiple applications. Therefore, services designed and developed for a single application are not likely to meet the needs of other applications that require the same or similar functions.

How do we design services when we don't know what applications will use them? The answer is to develop a business architecture: a description of the business the organization performs, consisting of documents and models. To a service developer, the most valuable part of the business architecture is the functional decomposition of the business into different subject areas. Each subject area describes a single part of the organization and the functions the area contains. For example, a bank has checking-account, savings-account, and customer functions.

Services are coarse-grained structures that should map not to a single application but to the functional areas of the business. This mapping increases the services' reusability, because a service that maps to the business rather than an application is more likely to support the requirements of multiple applications. When designing a service, it's tempting to map services to existing legacy systems.

For instance, if a banking system performs both checking- and savings-account functions, many service designers will implement a banking-system service. This is wrong for two reasons. First, the banking system performs two logical functions that should be split into two services, one each for checking and savings. Second, the service is named for a system, not for the functional area the service supports. The advantage of mapping services to functions instead is that each service still uses the banking system as a resource for executing service requests, but if the banking system is ever replaced, service consumers may not have to be updated, because the services map to a function, not a system.

Once the business architecture is created, a conceptual service model is derived from it. Designing services is usually not a green-field exercise. The conceptual service model identifies the areas of the business that will be implemented as services. It focuses more on implementation aspects than the business architecture does, but it does not assume a particular implementation technology. The conceptual service model provides the basis for designing service boundaries and interfaces.

However, in true green-field development, a set of artifacts should be created to identify the functionality a service should support. Service development is usually performed within the context of an application, which drives which parts of the conceptual service model are implemented. Some of the interfaces will be needed and others will not (yet). The application requirements drive the implementation of small pieces of the conceptual service model within the enterprise vision of a services layer.

To identify application requirements, the requirements analyst creates a set of artifacts from existing documentation and user interviews that describes the application to be built. As Figure 3.2 shows, these artifacts include a feature list, use cases, quality scenarios, and an object model.

Figure 3.2: The requirements analyst creates a set of artifacts that describes the application.

The feature list contains all the features the system must support. The use cases define the ways users will exercise those features. The object model describes the structure of the business process. The techniques used to elicit an application's functional requirements are necessary but well documented elsewhere, so they are not detailed here.

Along with the functional requirements for a system, the nonfunctional aspects must also be defined, to determine the technical level of service the system will support.

Nonfunctional Requirements

A system's nonfunctional requirements are defined in the quality scenarios. Just as it is necessary to make the functional requirements concrete by developing use cases, it is necessary to make the quality requirements concrete. It is not sufficient for a quality scenario to state that the system should be "highly reusable." A specific reusability requirement would state that the "credit-card validation service will be reused by system X." Similarly, it is insufficient to state a performance requirement only as an overall latency target, without reference to usage patterns, scalability, or the impact on system usability (Clements and Northrop 2002).

The set of nonfunctional requirements a service supports at runtime is also known as its quality of service (QoS). Nonfunctional requirements not evident at runtime, such as maintainability and reusability, are not part of QoS. Each component-based service in a service-oriented architecture supports a specific QoS level. Simply meeting the service's functional requirements may not make the service usable if it does not support the quality attributes necessary to deliver a fully functioning system.

Although it can be difficult to estimate a system's quality requirements, the exercise will greatly enhance the designer's knowledge of expectations for the system. Defining the quality requirements as rigorously as possible greatly reduces the risk that the system will not satisfy them (Clements and Northrop 2002).

In modern component-based development, it is especially necessary to pay attention to nonfunctional requirements such as performance, security, modifiability, reusability, integrability, testability, availability, reliability, and portability.

Once the nonfunctional quality of service attributes have been identified, they can be described using Web Services Endpoint Language (WSEL). WSEL allows a service provider to describe things such as QoS and security characteristics.


Performance

The performance quality attribute must be defined during the requirements analysis phase. Performance is the responsiveness of the system, measured as the time required to execute some function. Some of the questions that must be answered to design the system correctly include

  • What is the expected response time for each use case?

  • Will bad performance dramatically affect usability?

  • How must the system perform?

  • What is the synchronous response time?

  • What is the asynchronous response time?

  • What is the batch execution time?

  • Can the time differ dramatically based on the time of day or system load?

  • What is the expected growth of the system load?

A component-based system uses the network to communicate between components. Any performance requirement must be looked at closely to determine if the system can meet it. When a performance requirement cannot be met, the designer should consider several strategies:

  • Make the request asynchronous.

  • Take advantage of scalable component execution environments.

  • Cache data in the proxy.

  • Execute the request in the proxy.

If actual performance will not meet the requirement for a service request, it might be possible to make the request asynchronous. An asynchronous request returns control immediately to the service consumer after the consumer sends the request to the provider. The consumer continues processing and does not wait for the request to return. The consumer gets the results of the request in one of two ways: by checking periodically to see if the request has executed or by being notified when the results are ready.

If a consumer needs to know immediately when the results are ready, the service provider will interrupt the consumer and give the consumer the results of the service request. Making a request asynchronous is especially appropriate if the service consumer does not require a response. For instance, a consumer who calls the AddCustomer method in a customer service might not require the customer to be added immediately. The information could be passed in, and control could be returned immediately to the consumer. The add-customer transaction would happen at a later time, and the consumer could optionally check the status of the transaction periodically to see if it has executed. In general, to improve consumer performance, any requests that do not return a response should execute asynchronously.
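The polling flavor of an asynchronous request can be sketched as follows. This is an illustrative sketch only: `CustomerService` and `addCustomer` are hypothetical names, and `java.util.concurrent` stands in for whatever messaging infrastructure a real service would use.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of an asynchronous service request with periodic status checks.
public class AsyncRequestSketch {

    static class CustomerService {
        String addCustomer(String name) {
            return "added:" + name; // stands in for a slow back-end transaction
        }
    }

    // Submits the request, returns control to the consumer immediately,
    // then polls periodically until the request has executed.
    static String runAsync(String name) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<String> pending =
                executor.submit(() -> new CustomerService().addCustomer(name));
            while (!pending.isDone()) {
                Thread.sleep(5); // the consumer could do other useful work here
            }
            return pending.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runAsync("Alice")); // prints "added:Alice"
    }
}
```

A notification-style consumer would instead register a callback rather than poll.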

Scalability is related to the system's ability to respond to increasing load. All components execute in a component execution environment (CEE), such as J2EE (Herzum 1998). The design should take full advantage of the CEE; most CEEs support clustering, which allows the environment to create components on multiple machines and load-balance requests across multiple component instances.

As discussed in the previous chapter, a proxy can cache service data, such as reference tables. Robust proxy implementations can cache service results so that subsequent service requests to the proxy can return data cached in the proxy rather than making a network call. In addition, a proxy can execute methods that do not require the state of the service. Rather than incurring a network call, methods such as standard calculations can be made in a local proxy. This approach is problematic, because new proxies need to be redistributed every time the service changes. The best way to implement a proxy is to make it dynamically downloadable. In other words, when a consumer needs to access a service, it downloads a new proxy from a server. The proxy should also have a lease attached to it, such that when the lease expires, the service consumer must download another proxy from the server. This eliminates the problems that occur when services change and proxies are out of date.
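A minimal sketch of such a caching proxy with a lease follows, assuming a hypothetical `ReferenceDataService`. A production proxy would also download a replacement of itself when the lease expires; here expiry simply raises an error.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative caching service proxy with an expiration lease.
public class CachingProxy {

    interface ReferenceDataService {
        String lookup(String key); // in reality, a network call to the service
    }

    private final ReferenceDataService remote;
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final long leaseMillis;
    private final long createdAt = System.currentTimeMillis();

    CachingProxy(ReferenceDataService remote, long leaseMillis) {
        this.remote = remote;
        this.leaseMillis = leaseMillis;
    }

    // When the lease expires, the consumer must download a fresh proxy,
    // which prevents out-of-date proxies from lingering after a change.
    boolean leaseExpired() {
        return System.currentTimeMillis() - createdAt > leaseMillis;
    }

    String lookup(String key) {
        if (leaseExpired()) {
            throw new IllegalStateException("lease expired; download a new proxy");
        }
        // Serve from the local cache when possible, avoiding a network call.
        return cache.computeIfAbsent(key, remote::lookup);
    }
}
```

The first lookup for a key goes over the network; repeated lookups are served locally.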

When it comes to performance, the best strategy is "Make it run, make it right, make it fast, make it small" (Coplien and Beck 1996). The techniques outlined in this chapter and other performance-enhancing techniques should not cause the designer to sacrifice other quality attributes of the system in the name of improving performance.


Security

The security quality attribute must also be defined during this phase. Some of the questions that must be answered to design the system correctly with respect to security include

  • How critical is the system?

  • What is the expected impact of a security failure?

  • If there have already been security failures, what was their impact?

  • Are there any known vulnerabilities?

If the system is highly critical, such as an electronic control system, security is of high concern. Even if the system is not critical, if a security breach would cost a large amount of money, time, or resources, security is also of high concern. Security can be addressed by creating or adopting specialized components for authorization, using secure transports, and implementing sound security policy. For more on security, see Chapter 13.


Modifiability

A system's modifiability refers to its receptiveness to change. These questions will help identify how modifiable a system should be:

  • How often is it expected that a system change will be required?

  • What is the usual extent of the change?

  • Who is expected to make the changes?

  • Is it necessary for the system to use current platform versions?

The cost of system development is not the only important factor. This cost is low compared to the cost of the system over its lifetime. In many organizations, the budget for application maintenance is larger than for software development. Unless the system has a short lifecycle, building modifiability into it should be a top priority, to reduce the cost of maintenance. Several factors determine modifiability:

  • The extent to which the system is modular and loosely coupled largely determines its modifiability. Deciding how modifiable the system needs to be, in turn, largely answers the question of how modular it should be.

  • The use of layering within components to reduce intracomponent coupling is a technique for increasing component modifiability. Layering within the component separates the component's different technical responsibilities. Functions such as networking, business logic, and data access should be split into separate layers so that they can be maintained as units and reused in other component-based services.

  • As shown in Figure 3.3, systems that are declarative and configurable will be more modifiable. The role expected to make system changes is also a factor in modifiability. It must be determined whether a developer, a business user, an analyst, or some combination of these is responsible. If the system is expected to have frequent business-related changes, a design that is declarative and configurable will respond better to those changes. A developer designing such a system should also consider making it modifiable by businesspeople. For instance, an engine that executes business rules and works from configuration metadata may allow a business user to change the system's behavior. In addition, if these rules change frequently, and changes in the rules do not require the system to be rebuilt or restarted, the system will be more modifiable.

    Figure 3.3: Degrees of modifiability.
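As a minimal sketch of the declarative approach, a rule's parameters can be read from configuration metadata rather than compiled into code, so a business user could change the rule without rebuilding the system. The property names and the discount rule here are hypothetical.

```java
import java.io.StringReader;
import java.util.Properties;

// Behavior driven by configuration metadata rather than compiled logic.
public class ConfigurableRule {

    private final double threshold;
    private final double discount;

    ConfigurableRule(Properties config) {
        // Rule parameters come from metadata, not from code.
        this.threshold = Double.parseDouble(config.getProperty("discount.threshold", "100.0"));
        this.discount = Double.parseDouble(config.getProperty("discount.rate", "0.10"));
    }

    double priceAfterDiscount(double orderTotal) {
        return orderTotal >= threshold ? orderTotal * (1.0 - discount) : orderTotal;
    }

    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        // In a real system this would be loaded (and reloaded) from a file,
        // so a rule change needs no rebuild or restart.
        config.load(new StringReader("discount.threshold=200\ndiscount.rate=0.15"));
        System.out.println(new ConfigurableRule(config).priceAfterDiscount(300.0));
    }
}
```

Reloading the properties at runtime would move the design further to the right in Figure 3.3.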


Reusability

Reusability is the ability of a software asset to be used in a different application context without modification. A system's required level of reusability must be determined, because a highly reusable system is more expensive to build: it is necessarily more generic and covers more functionality than the specific application requires. Some questions to ask in relation to the system's reusability:

  • Is this the start of a new product line? In other words, will more systems be built that more or less match the design of the system under consideration?

  • Will any other systems use the components, libraries, and frameworks built for this system?

  • Will this system use existing components?

  • What existing framework and other code assets are available to reuse?

  • Will other applications use the frameworks and other code assets created for this application?

  • What technical infrastructure is in place that can be reused?

  • Will other applications reuse the technical infrastructure created for this application?

  • What are the associated costs, risks, and benefits of building reusable components?

An organization can take the following steps to increase reusability.

Establish a Product Line for Building Services

A product line (Clements and Northrop 2002) consists of a set of assets that supports the creation of a specific type of software artifact. Product lines within an organization consist of software assets, execution platforms, processes, and organizations centered on creating a particular class of software system. A product line for building component-based services should be considered, due to the large return on investment when multiple services use the same core assets. By establishing a product line, not only are the services themselves reused, but the core assets for building component-based services are reused as well.

To build the services product line, the commonalities and variations between services must be identified. The product line assets are implemented based on the commonalities between services and must support all variations between services. In other words, they must be able to support all the functional and nonfunctional requirements of the services built using the product line's core assets.

Creating a product line involves mining, buying, and building assets for creating new services. The common core assets for creating component-based services are reused by each new service built using those assets, greatly improving overall reusability. For instance, if a product line's core assets consist of a persistence framework, an event framework, a set of project management templates, and a J2EE execution platform, those assets are reused by every component-based service built on that product line.

Implementing new services on the product line will also identify requirements the product line does not support and must be upgraded to support. A downside to upgrading product lines is that it may be necessary to refactor older component-based services to run on the new product line.

Not only will the product line share reusable core assets, it will support a common set of nonfunctional quality attributes for all services. Each service developed using the product line's core assets inherits its level of security, modifiability, and so on. By instituting a product line for services, an organization can realize many benefits, including better productivity, improved time to market, increased project predictability, better quality, and better return on investment.

When initiating a product line for service development, the organization should identify common assets for inclusion in the product line. These common assets include

  • Infrastructure

    • Application servers

    • Database servers

    • Security servers

    • Networks, machines

    • Software tools

      • Modeling

      • Traceability

      • Compilers and code generators

      • Editors

      • Prototyping tools

      • Integrated development environments

      • Version control software

      • Test generators

      • Test execution software

      • Performance analysis software

      • Code inspection and static analysis software

      • Configuration management software

      • Defect tracking software

      • Release management and versioning software

      • Project planning and measurement software

  • Frameworks and libraries

    • Frameworks and libraries are necessary to obtain a level of code reuse. For instance, frameworks and libraries for persistence, transactions, and business rules execution can provide a robust platform for creating component-based services.

  • Patterns, styles, and blueprints

    • The product line is developed according to a set of principles outlined in patterns, styles, and blueprints. For instance, the J2EE patterns, the J2EE blueprints, and a layered architecture are principles that feed into the development of the product line. A product line may also have a custom set of patterns and styles. The proper use of these patterns and styles is demonstrated in the blueprints developed for the component-based services product line.

  • Process

    • The process for using the product line to create usable services must be identified. Iterative and incremental practices for service development should be considered. The process is customized to include project templates specific to the activities that must occur to produce component-based services using the product line. As experience with the product line increases, these processes can be tuned to increase predictability and reduce project risk for each subsequent project.

  • Organization

    • The organization that develops functional code and the organization that develops product line assets must be determined. Some organizations separate the development of core assets from functional development. Some develop core assets along with a project and then harvest those assets for use on other projects. Each organization needs to figure out which is best for it. In smaller organizations, a single group may both create services and manage core assets. In larger organizations, these functions can be split up and performed by two different organizations. If a single group performs both functions, there is a risk that the separation between core asset and functional code will not be clear. If two distinct organizations are involved, there is a risk that the requirements for the component-based services will not be well understood by the core asset developers and the core assets will not meet those requirements.

There are two additional ways to improve reusability in services: by improving modularity and modifiability.

Improve Modularity

In addition to the reusability of the assets for building services, the services themselves are obvious candidates for reuse. Therefore, the modularity and interface definitions of the component-based services are critical to their potential for reuse. In other words, the more modular a component-based service is, the more likely it is to be reused.

In addition to reusing the service in its entirety, individual layers can be reused within other services. For instance, if a service uses a data access layer for accessing a legacy system, this can be designed for reuse in other services that access the same legacy system.

Improve Modifiability

A service's modifiability is also important in determining its reusability. If reusing a component requires it to undergo change, the effort to change the component must be small. The original service designer cannot fully predict all possible requirements of future applications. Therefore, the service will need to be updated from time to time. To reuse the service effectively, the effort involved in modifying it must be less than the effort involved in writing a new service.

The modifiability of the core assets themselves is also of critical importance. They will support the system's current set of functional and nonfunctional requirements. When those requirements change, the core assets are likely to change as well. Therefore, they must be highly modifiable. The downside is that when the core asset changes, the changes ripple through each service built using that asset. Therefore, core assets must be built such that changes and improvements to them do not have devastating consequences for other component-based services that use them.


Integrability

A component's integrability is its ability to communicate with other systems. By definition, a service-oriented architecture consists of highly interoperable services. Some questions to ask during requirements analysis include

  • Should we use highly interoperable technologies?

  • Are the component interfaces consistent and understandable?

  • How do we version component interfaces?

To improve integrability:

  • Make sure the interfaces are consistent and understandable. The extent to which a component-based service's interfaces are consistent and understandable will determine its integrability. A versioning strategy is also necessary so that as new interfaces are developed, old ones may be easily maintained and eventually retired.

  • Adapt the service for different environments. Although a component-based service supports a service-based interface, the component is also likely to be used in non-service-based environments. Therefore, it is better to separate the layer that participates in the service environment (service façade) from the layers that implement the component. That way, if a legacy system needs to access the service, assuming it supports only a legacy communication standard and not a new service-based integration standard, the legacy client will be able to use the component outside the service environment. This is accomplished by building a legacy protocol adapter that adapts the legacy protocol to the protocol used by the native component. The component can then be used by both service consumers and legacy consumers at the same time, as Figure 3.4 shows. This ability is also related to reusability. If a system cannot connect to and use the service, it will obviously not be reused.

    Figure 3.4: One way to improve integrability is to adapt the service for use over different protocols.
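The adapter idea can be sketched as follows. `BalanceComponent`, the fixed-width record layout, and the reply format are all hypothetical; the point is that the legacy client and service consumers reach the same native component.

```java
import java.util.Locale;

// Adapts a legacy fixed-width record protocol to the native component
// interface, so legacy clients can use the component outside the
// service environment.
public class LegacyProtocolAdapter {

    // The native component interface, used directly by the service façade.
    interface BalanceComponent {
        double balance(String accountId);
    }

    private final BalanceComponent component;

    LegacyProtocolAdapter(BalanceComponent component) {
        this.component = component;
    }

    // Adapts a legacy request record ("BAL" + 8-character account id) to a
    // native call and formats the reply the way the legacy system expects.
    String handleLegacyRecord(String record) {
        if (!record.startsWith("BAL") || record.length() < 11) {
            return "ERR";
        }
        String accountId = record.substring(3, 11).trim();
        return String.format(Locale.US, "OK %.2f", component.balance(accountId));
    }
}
```

A service façade for the same component would expose the call over a service protocol instead; neither path requires changing the component itself.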


Testability

A service's testability relates directly to its overall quality. A service that is not easily tested will be a low-quality service. Questions that determine the testability requirements include

  • What kind of process should be in place to certify a component-based service's correctness?

  • Are tools, processes, and techniques in place to test language classes, components, and services?

  • Are tools, processes, and techniques in place to test service federations?

  • What kind of features does the architecture need to test component-based services?

The architecture should support a level of testability that allows the component-based service to be certified for use. This is especially difficult in service-oriented environments, because the organization that certifies the service may not have access to the service consumer. For instance, the consumer could be on the other side of the Internet and not part of the organization that developed the service. Therefore, the testability of the service contract, of the objects that make up the service, and of the component interfaces is extremely important. Also, even though services are certified individually, they may require retesting when assembled into service federations that support some orchestrated business process.
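Certifying a service against its contract, without access to any particular consumer, can be sketched as below. `ValidationService` and the contract rules are hypothetical; `4111111111111111` is a commonly used card test number.

```java
// Minimal sketch of certifying a service implementation against its
// contract rather than against a specific consumer.
public class ContractCheck {

    interface ValidationService {
        boolean isValidCard(String number);
    }

    // The contract: null or empty input must be rejected, and a
    // well-formed test number must be accepted.
    static boolean certify(ValidationService service) {
        return !service.isValidCard(null)
            && !service.isValidCard("")
            && service.isValidCard("4111111111111111");
    }
}
```

In practice such contract checks would live in an automated suite that reruns whenever the service, or a core asset it depends on, changes.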

One of the downsides of a reusable product line for service creation is that a change in a core asset, such as a framework, may require all the services built using that framework to be regression tested and recertified. The more widely code is reused, the greater the effort required to test the impact of such a change. However, it is almost always preferable to leverage existing assets than to build new ones from scratch.


Availability

A system's availability must be established. Availability measures the time between failures and how quickly the system can resume operation after a failure (Bass 1998). In addition to system failures that produce downtime, availability must also account for normal maintenance operations that require downtime. Some questions to ask about availability include

  • What are the expected hours of operation?

  • What is the maximum expected downtime per month?

  • How available is the current system?

A system's batch cycles, upgrades, and configuration changes must support the availability expectation. If a batch cycle requires two hours to run but the system is expected to be available 23 hours per day, either the requirement must be changed or the batch process must perform better. If a configuration change is made to the service, does it require restarting the service? If the service is currently in the middle of an orchestrated process, how will the change affect processes that are running? A robust architecture will allow runtime configuration changes and mitigate the impact of those changes on processes currently being executed.

Another aspect of availability is the way service consumers are notified that the service is not available. Do your service consumers simply time out, or will you provide a "Not Available" message? In some implementations, an endpoint description can provide QoS information, such as availability. The QoS description of the endpoint provides a proactive way to inform consumers about the service's availability.


Reliability

The system's reliability must be established. Reliability is the system's capability to maintain a level of performance for a stated period. Questions related to determining reliability include

  • What is the impact of a hardware or software failure?

  • How quickly must the system become operational after a system failure?

  • When will performance impact reliability?

  • What is the impact of a failure on the business?

Reliability is related to several factors that include the architecture, design, and implementation. Reliability can be enhanced through the configuration of component-based services for rollover in case of hardware failures. In the event of a software or hardware failure, can the system be easily recovered? Does a failure result in an incorrect system state? A highly reliable and correct system will not only guard against system failure but will be easily recoverable when a failure does occur.


Portability

A system's portability relates to its ability to run on multiple platforms. Several questions to ask when deciding how portable to make the system include

  • Do the benefits of a proprietary platform outweigh the drawbacks?

  • Should the expense of creating a separation layer be incurred?

  • At what level should system portability be provided: application, application server, or operating system?

To enhance an application's portability, a separation layer can be developed to create an interface for the service to use, rather than making the interface a feature of the service platform. This is accomplished by creating an adapter that adapts the generic interface the service expects to the particular implementation of the feature the platform provides. For instance, consider the creation of a logging adapter that provides a generic interface for logging to the service and logs messages using the Log4J library. If the library of choice changes to a newer version or a completely different implementation, the service will not change-only the adapter must change.
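Such a separation layer can be sketched as follows. `ServiceLog`, `JulAdapter`, and `AccountService` are hypothetical names, and `java.util.logging` stands in for Log4J so the sketch has no third-party dependency; a Log4J adapter would have the same shape.

```java
import java.util.logging.Logger;

// Separation layer for logging: the service codes against a generic
// interface, and an adapter binds that interface to a concrete library.
public class LoggingSeparation {

    // The generic interface the service expects.
    interface ServiceLog {
        void info(String message);
    }

    // Adapter: maps the generic interface onto java.util.logging.
    // Swapping libraries means changing only this class, not the service.
    static class JulAdapter implements ServiceLog {
        private final Logger logger = Logger.getLogger("service");
        public void info(String message) {
            logger.info(message);
        }
    }

    // The service never references the logging library directly.
    static class AccountService {
        private final ServiceLog log;
        AccountService(ServiceLog log) { this.log = log; }
        void open(String id) { log.info("opened account " + id); }
    }
}
```

Moving from one logging library to another, or to a newer version, then touches only the adapter.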

Another factor affecting system portability is the component execution environment. Some commercial component-execution environments are built to be more portable than others. For instance, a component built for the Microsoft .NET platform can run only on a .NET platform provided by Microsoft. A component built using the J2EE platform can run on J2EE platforms provided by a large number of vendors.

