Scalability is a core requirement of any data warehouse. A system that scales well exploits all of its processors and makes the best use of the investment in computing infrastructure. The range of architectural choices (SMP, MPP, cluster, grid) presents a complex decision space for data warehouse architects. The most significant factor influencing the scalability of a decision support system, regardless of the environment, is how the data is partitioned across the disks. Systems that do not scale well may have an entire batch query waiting for a single node to complete an operation. On the other hand, in throughput environments with multiple concurrent jobs running, those underutilized nodes can be exploited to accomplish other work.

There are three typical approaches to partitioning database records: range, round-robin, and hash.

Range Partitioning
Range partitioning places specific ranges of table entries on different disks. To understand range partitioning, consider a long list of values (e.g., A-Z). Range partitioning breaks the long list into several shorter, more manageable lists (A-D, E-H, etc.) and spans them across multiple disks. As another example, a system managing monthly operations might partition each month onto a different set of disks. In cases where only a portion of the data is used in a query (let's say the M-P range), the database can avoid examining the other sets of data, in what is known as partition elimination. This can dramatically reduce the time to complete a query.
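
To make the mechanics concrete, here is a minimal Python sketch of range partitioning with partition elimination. The six-way alphabetic split, the record layout, and the sample names are illustrative assumptions, not details of any particular DBMS:

    # Hypothetical six-way split of an alphabetic key across six disks.
    PARTITIONS = [("A", "D"), ("E", "H"), ("I", "L"),
                  ("M", "P"), ("Q", "T"), ("U", "Z")]

    def partition_for(name):
        """Return the index of the partition (disk) that stores this name."""
        first = name[0].upper()
        for i, (lo, hi) in enumerate(PARTITIONS):
            if lo <= first <= hi:
                return i
        raise ValueError("no partition covers %r" % name)

    def query_range(disks, lo, hi):
        """Scan only partitions that can overlap [lo, hi]; skipping the
        rest without any I/O is the partition elimination step."""
        matches = []
        for i, (plo, phi) in enumerate(PARTITIONS):
            if phi < lo or plo > hi:      # cannot contain a match:
                continue                  # eliminated, never read
            matches += [n for n in disks[i] if lo <= n[0].upper() <= hi]
        return matches

    # Place a few names, then answer an M-P query touching one disk only.
    disks = [[] for _ in PARTITIONS]
    for name in ["Adams", "Miller", "Nguyen", "Young"]:
        disks[partition_for(name)].append(name)
    print(query_range(disks, "M", "P"))   # -> ['Miller', 'Nguyen']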

  • Range Partitioning Drawbacks and Possible Remediation from a DBMS Design Perspective – The difficulty with range partitioning is that the distribution of data may vary significantly from one partition to another, and the frequency of data access may vary as well. For example, as data accumulates, it may turn out that a larger number of customer names fall into the M-N range than into the A-B range. Databases should, in theory, be able to identify the characteristics of each range over time. The database should be able to monitor user access patterns (a certain list of customers gets accessed most often, probably representing the most valuable customers for the business) and then automatically reorganize the partitions so that the most frequently accessed ranges are consolidated on the same disk. Similarly, the database system should be able to monitor differences in range populations (the number of records in each range) and reorganize the partitions accordingly, keeping the most heavily loaded ranges together on a single disk as much as possible to boost query performance. A toy sketch of this monitoring loop follows below.
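
The sketch below is purely hypothetical and not modeled on any shipping DBMS; it only illustrates the idea described above: per-partition access counters feed a reorganization plan that groups the hottest ranges together on the same disk.

    from collections import Counter

    access_counts = Counter()          # partition id -> observed reads

    def record_access(pid):
        """Called on every read so the system learns its hot ranges."""
        access_counts[pid] += 1

    def consolidation_plan(n_partitions, n_disks):
        """Assign partitions to disks so that the most frequently
        accessed ranges end up consolidated together."""
        # Hottest partitions first; never-read partitions sort last.
        ordered = sorted(range(n_partitions),
                         key=lambda p: -access_counts[p])
        per_disk = -(-n_partitions // n_disks)   # ceiling division
        return {pid: slot // per_disk for slot, pid in enumerate(ordered)}

    # Simulate skewed access: the M-N partition (id 3) is read most.
    for pid in [3, 3, 3, 0, 3, 1, 3, 2]:
        record_access(pid)
    print(consolidation_plan(n_partitions=6, n_disks=3))
    # -> {3: 0, 0: 0, 1: 1, 2: 1, 4: 2, 5: 2}: the hot partition shares
    #    a disk with the next-hottest; cold partitions fill the rest.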

Round-Robin Partitioning
Round-robin partitioning evenly distributes records across all disks that compose a logical space for the table, without regard to the data values being stored. This permits even workload distribution for subsequent table scans. Disk striping accomplishes the same result, spreading read operations across multiple spindles, but with the logical volume manager, rather than the DBMS, managing the striping. One difficulty with round-robin partitioning is that performance can never be enhanced by partition elimination, even for queries whose predicates would otherwise allow it.
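
A minimal sketch of the placement rule, assuming N disks addressed by index: record i simply lands on disk i mod N, so the data values play no role and scans spread evenly. Because placement ignores the values, no predicate can ever rule out a disk.

    def round_robin_place(records, n_disks):
        """Deal records across disks like cards: disk = arrival order mod N."""
        disks = [[] for _ in range(n_disks)]
        for i, rec in enumerate(records):
            disks[i % n_disks].append(rec)
        return disks

    # Sixteen records across four disks: four per disk, values ignored.
    disks = round_robin_place(list(range(16)), n_disks=4)
    print([len(d) for d in disks])     # -> [4, 4, 4, 4]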

Hash Partitioning
Hash partitioning is a third method of distributing DBMS data evenly across the set of disk spindles. A hash function is applied to one or more database keys, and the records are distributed across the disk subsystem accordingly. Again, a drawback of hash partitioning is that partition elimination may not be possible for queries whose performance could otherwise be improved by it.
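
A corresponding sketch for hash placement; Python's built-in hash stands in for the DBMS's hash function (a real system would use a hash that is stable across runs), and the record layout is an assumption for illustration:

    def hash_place(records, key, n_disks):
        """Route each record by hashing its partitioning key."""
        disks = [[] for _ in range(n_disks)]
        for rec in records:
            disks[hash(rec[key]) % n_disks].append(rec)
        return disks

    def lookup(disks, key, value, n_disks):
        """An equality lookup on the key probes exactly one disk."""
        target = hash(value) % n_disks
        return [r for r in disks[target] if r[key] == value]

    disks = hash_place([{"id": i} for i in range(10)], "id", n_disks=4)
    print(lookup(disks, "id", 7, n_disks=4))   # -> [{'id': 7}]
    # A range predicate (e.g. ids 100-200) gets no such shortcut, since
    # hashing scatters adjacent key values across all the disks.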

For symmetric multiprocessors, the main reason for data partitioning is to avoid “hot spots” on the disks, where records on one spindle may be frequently accessed, causing the disk to become a bottleneck. These problems can usually be avoided by combinations of database partitioning and the use of disk arrays with striping. Because all processors have equal access to memory and disks, the layout of data does not significantly affect processor utilization.
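
The striping arithmetic the volume manager applies is simple; a sketch, with an assumed stripe unit and spindle count:

    def spindle_for(logical_block, stripe_unit=64, n_spindles=8):
        """Map a logical block to a physical spindle under striping.
        Consecutive stripe units rotate across all spindles, so a
        sequential scan keeps every disk busy and no hot spot forms."""
        return (logical_block // stripe_unit) % n_spindles

    # Blocks 0-63 on spindle 0, 64-127 on spindle 1, ..., then wrap.
    print([spindle_for(b) for b in (0, 64, 128, 512)])   # -> [0, 1, 2, 0]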

For massively parallel processors, improper data partitioning can degrade performance by an order of magnitude or more. Because the processors do not share memory and disk resources equally, the choice of which data to place on which node has a significant impact on query performance.
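
The effect is easy to demonstrate. In the synthetic example below (the node count, column names, and 7:1 regional skew are all invented for illustration), partitioning on a low-cardinality, skewed column strands most nodes, while hashing a high-cardinality key spreads the rows almost perfectly:

    import random
    from collections import Counter

    random.seed(0)
    N_NODES = 8
    rows = [{"order_id": i,
             "region": random.choice(["EAST"] * 7 + ["WEST"])}
            for i in range(100_000)]

    def rows_per_node(key):
        placed = Counter(hash(r[key]) % N_NODES for r in rows)
        return [placed.get(n, 0) for n in range(N_NODES)]

    print("partitioned by region:", rows_per_node("region"))
    print("partitioned by id:    ", rows_per_node("order_id"))
    # By region, at most two nodes hold any data (one with roughly seven
    # times the other's share) while the rest sit idle; by id, all eight
    # nodes carry an almost identical share of the rows.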

The choice of partition key is a critical, fixed decision that has far-reaching consequences over the life of an MPP-based data warehouse. Each database object can be partitioned only once without redistributing that object's data. This decision determines, long into the future, whether the MPP's processors are evenly utilized or whether many nodes sit idle while only a few can efficiently process database records. Unfortunately, because the very purpose of a data warehouse is to answer ad hoc queries that have never been foreseen, a correct choice of partition key is, by its very definition, impossible to make. This is a key reason why database administrators who wish to minimize risk tend to recommend SMP architectures, where the choice of partitioning strategy and keys has significantly less impact.

