Cluster Architecture



Introduction

First designed for the Microsoft Windows NT® Server 4.0 operating system, server clusters are substantially enhanced in the Microsoft Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, operating systems. With server clusters you can connect multiple servers to provide high availability and easy manageability of data and programs running within the cluster. Server clusters provide three principal advantages:

• Improved availability by enabling services and applications in the server cluster to continue providing service during hardware or software component failure or during planned maintenance.

• Increased scalability by supporting servers that can be expanded with the addition of multiple processors (up to a maximum of eight processors in Windows Server 2003, Enterprise Edition, and 32 processors in Windows Server 2003, Datacenter Edition), and additional memory (up to a maximum of 8 gigabytes [GB] of random access memory [RAM] in Enterprise Edition and 64 GB in Windows Server 2003 Datacenter Edition).

• Improved manageability by enabling administrators to manage devices and resources within the entire cluster as if they were managing a single computer.

The cluster service is one of two complementary Windows clustering technologies provided as extensions to the base Windows Server 2003 and Windows 2000 operating systems. The other clustering technology, Network Load Balancing (NLB), complements server clusters by supporting highly available and scalable clusters for front-end applications and services such as Internet or intranet sites, Web–based applications, media streaming, and Microsoft Terminal Services.

This white paper focuses solely on the architecture and features of server clusters and describes the terminology, concepts, design goals, key components, and planned future directions. At the end of the paper, the section “For More Information” provides a list of references you can use to learn more about server clusters and the NLB technologies.

Development Background

Computer clusters have been built and used for well over a decade. One of the early architects of clustering technology, G. Pfister, defined a cluster as "a parallel or distributed system that consists of a collection of interconnected whole computers that is utilized as a single, unified computing resource."

The collection of several server computers into a single unified cluster makes it possible to share a computing load without users or administrators needing to know that more than one server is involved. For example, if any resource in the server cluster fails, the cluster as a whole can continue to offer service to users by using a resource on one of the other servers in the cluster, regardless of whether the failed component is a hardware or software resource.

In other words, when a resource fails, users connected to the server cluster may experience temporarily degraded performance, but do not completely lose access to the service. As more processing power is needed, administrators can add new resources in a rolling upgrade process. The cluster as a whole remains online and available to users during the process, while the post-upgrade performance of the cluster improves.

User and business requirements for clustering technology shaped the design and development of the cluster service for the Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, operating systems. The principal design goal was development of an operating system service that addressed the cluster needs of a broad segment of businesses and organizations, rather than small, specific market segments.

Microsoft marketing studies showed a large and growing demand for high availability systems in small- and medium-sized businesses as databases and electronic mail became essential to their daily operations. Ease of installation and management were identified as key requirements for organizations of this size. At the same time, Microsoft’s research showed an increasing demand for Windows–based servers in large enterprises with key requirements for high performance and high availability.

The market studies led to the development of the cluster service as an integrated extension to the base Windows NT, Windows 2000, and Windows Server 2003 operating systems. As designed, the service enables joining multiple server and data storage components into a single, easily managed unit, the Server cluster. Server clusters can be used by small and large enterprises to provide highly available and easy-to-manage systems running Windows Server 2003 and Windows 2000–based applications. Server clusters also provide the application interfaces and tools needed to develop new, “cluster-aware” applications that can take advantage of the high availability features of server clusters.

Cluster Terminology

Server clusters is the Windows Server 2003 name for the Microsoft technology first made available as Microsoft Cluster Server (MSCS) in Windows NT Server 4.0, Enterprise Edition. When referring to the servers that comprise a cluster, individual computers are referred to as nodes. The cluster service refers to the collection of components on each node that perform cluster-specific activity, and resource refers to the hardware and software components within the cluster that are managed by the cluster service. The instrumentation mechanism provided by server clusters for managing resources is the resource dynamic-link library (DLL). Resource DLLs define resource abstractions, communication interfaces, and management operations.

A resource is said to be online when it is available and providing its service to the cluster. Resources are physical or logical entities that have the following characteristics (a conceptual sketch of these properties follows the list):

• Can be brought online (in service) and taken offline (out of service).

• Can be managed in a server cluster.

• Can be owned by only one node at a time.
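
The Cluster Resource API that resource DLLs implement is a native Windows interface with entry points for bringing a resource online, taking it offline, and checking its health. The following Python sketch is only a conceptual model of the properties listed above; the class and method names are invented for illustration and are not part of any Microsoft API.

    # Hypothetical sketch of a cluster resource as characterized above.
    # Real resource DLLs are native libraries implementing the Cluster
    # Resource API; this models the concepts only, not that API.
    class ClusterResource:
        def __init__(self, name):
            self.name = name
            self.owner_node = None   # a resource is owned by at most one node
            self.online = False

        def bring_online(self, node):
            # a resource can be brought online (placed in service) only by
            # a node that can take ownership of it
            if self.owner_node is not None and self.owner_node != node:
                raise RuntimeError(f"{self.name} is owned by {self.owner_node}")
            self.owner_node = node
            self.online = True

        def take_offline(self):
            # taken out of service; ownership can now move to another node
            self.online = False
            self.owner_node = None

        def looks_alive(self):
            # quick, inexpensive health check polled by the cluster service
            return self.online

        def is_alive(self):
            # thorough health check; a real resource DLL would probe the
            # underlying device or application here
            return self.online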

Cluster resources include physical hardware devices such as disk drives and network cards, and logical items such as Internet Protocol (IP) addresses, applications, and application databases. Each node in the cluster has its own local resources. However, the cluster also has common resources, such as a common data storage array and private cluster network. These common resources are accessible by each node in the cluster. One special common resource is the quorum resource, a physical disk in the common cluster disk array that plays a critical role in cluster operations: it must be present for node operations, such as forming or joining a cluster, to occur.

A resource group is a collection of resources managed by the cluster service as a single, logical unit. Application resources and cluster entities can be easily managed by grouping logically related resources into a resource group. When a cluster service operation is performed on a resource group, the operation affects all individual resources contained within the group. Typically, a resource group is created to contain all the elements needed by a specific application server and client for successful use of the application.
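
Building on the hypothetical ClusterResource sketch above, a resource group can be modeled as a unit whose operations fan out to every resource it contains, which is also what makes moving an application and its dependencies between nodes a single operation. The group name, resource names, and move logic below are illustrative only.

    # Hypothetical sketch: a resource group is managed as a single unit,
    # so an operation on the group affects all resources it contains.
    class ResourceGroup:
        def __init__(self, name, resources):
            self.name = name
            self.resources = resources   # e.g. disk, IP address, application

        def bring_online(self, node):
            for resource in self.resources:
                resource.bring_online(node)

        def take_offline(self):
            for resource in self.resources:
                resource.take_offline()

        def move_to(self, node):
            # move (or fail over) the whole group to another node in one step
            self.take_offline()
            self.bring_online(node)

    # Everything a hypothetical file-share application needs, grouped together.
    share_group = ResourceGroup("FileShareGroup", [
        ClusterResource("PhysicalDisk"),
        ClusterResource("IPAddress"),
        ClusterResource("FileShareApplication"),
    ])
    share_group.bring_online("NodeA")
    share_group.move_to("NodeB")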

Server Clusters

Server clusters are based on a shared-nothing model of cluster architecture. This model refers to how servers in a cluster manage and use local and common cluster devices and resources. In the shared-nothing cluster, each server owns and manages its local devices. Devices common to the cluster, such as a common disk array and connection media, are selectively owned and managed by a single server at any given time.

The shared-nothing model makes it easier to manage disk devices and standard applications. This model does not require any special cabling or applications and enables server clusters to support standard Windows Server 2003 and Windows 2000–based applications and disk resources.

Server clusters use the standard Windows Server 2003 and Windows 2000 Server drivers for local storage devices and media connections. Server clusters support several connection media for the external common devices that need to be accessible by all servers in the cluster. External storage devices that are common to the cluster require small computer system interface (SCSI) devices and support standard PCI–based SCSI connections as well as SCSI over Fibre Channel and SCSI bus with multiple initiators. Devices on Fibre Channel connections are still SCSI devices, simply hosted on a Fibre Channel bus instead of a SCSI bus. Conceptually, Fibre Channel technology encapsulates SCSI commands within the Fibre Channel protocol and makes it possible to use the SCSI commands that server clusters are designed to support. These SCSI commands (Reserve/Release and Bus Reset) function the same whether the interconnect is Fibre Channel or standard (non-fibre) SCSI.
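
The ownership rule of the shared-nothing model, including arbitration for the quorum disk mentioned earlier, can be pictured as reserve/release-style arbitration on a shared device. The Python sketch below is a simplified model under that assumption; the actual mechanism is the SCSI Reserve/Release and Bus Reset commands issued on the shared bus, not application code of this kind.

    # Hypothetical model of shared-nothing ownership of a common device:
    # only one node holds the reservation (and therefore ownership) at a time.
    class SharedDisk:
        def __init__(self, name):
            self.name = name
            self.reserved_by = None

        def reserve(self, node):
            # succeeds only if no other node currently holds the reservation
            if self.reserved_by in (None, node):
                self.reserved_by = node
                return True
            return False

        def release(self, node):
            if self.reserved_by == node:
                self.reserved_by = None

    quorum = SharedDisk("QuorumDisk")
    assert quorum.reserve("NodeA")        # NodeA owns the quorum disk
    assert not quorum.reserve("NodeB")    # NodeB cannot take it while reserved
    quorum.release("NodeA")
    assert quorum.reserve("NodeB")        # ownership has moved to NodeB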

The following figure illustrates the components of a two-node server cluster that may be composed of servers running either Windows Server 2003, Enterprise Edition, or Windows 2000 Advanced Server, with shared storage device connections using SCSI or SCSI over Fibre Channel.

Figure 1 - Two-node Server cluster running Windows Server 2003, Enterprise Edition

Windows Server 2003, Datacenter Edition, supports four- or eight-node clusters and requires device connections using Fibre Channel, as shown in the following illustration of the components of a four-node cluster.

Figure 2 - Four-node Server cluster running Windows Server 2003

...
