Written by Sridhar Avantsa, Rolta AdvizeX
Introduction
Oracle introduced database sharding in 12.2 and has been refining it since. The purpose of this article is to help the reader understand database sharding: the general concepts of sharding as well as the concepts, architecture, and usage of Oracle Sharding.
The Concept of Database Sharding
If we look up the definition of the word “shard” in the dictionary, here is what we would find:
- American English Definition – a sharp broken piece of a hard substance
- The English definition – a piece of a broken glass, cup, container, or similar object
In other words, we can say that “sharding” is the act of breaking something into pieces. There is a computer science design principle called “horizontal partitioning,” which is really about splitting the data in a table into multiple tables, usually within the same instance.
From a commercial usage perspective, DB sharding has been supported and implemented in relational and NoSQL databases: MySQL, Couchbase, MongoDB, Apache HBase, IBM Informix, and Oracle, to name a few. As a matter of fact, Google built and deployed “Spanner,” its own version of a globally distributed and synchronously replicated database, which uses database sharding technologies.
https://research.google/pubs/pub39966/
General Concept of Sharding Databases
Database sharding takes the concept of Horizontal partitioning of data to the next level, by splitting tables across unique databases (See Figure 1 below).
A Sharded Database (SDB) is the logical compilation of multiple individual shards. Each shard is a completely independent, self-sufficient, and self-contained database. Each shard holds its own copy of the data structures for the horizontally split data, containing only a subset of that data. Each shard runs on its own individual and independent resources (servers, storage, etc.).
Database sharding solutions are primarily built to provide extreme scaling while at the same time delivering extreme HA/uptime. That combination inherently implies a certain degree of flexibility, for example in growing/adding shards. By adding shards, we introduce more resources available to process data and reduce the points of contention at the data tier.
Achieving the desired end goal of extreme scaling/uptime implies that any data sharding technology/implementation must, at the very least, address the following requirements:
- The algorithm used to define the shards and their data content must balance/distribute the load equally across the shards, equally in terms of data volumes as well as usage metrics.
- The sharding algorithm should be able to handle adding/removing shards in an easy manner.
- The sharding solution should provide the application a mechanism to be able to localize its activities to a shard.
In addition to the core features above, real-life use cases and requirements introduce additional complexities. For example, every application has a certain amount of core base reference data that is not shard specific but common across all shards. Furthermore, this core base reference data may change, hopefully not too frequently and not in a transactional (OLTP) manner. Accounting for such a need fundamentally implies embedding a data replication/synchronization tool kit into the core feature set.
In other words, a sharded database implementation has the concepts of globalization and localization coexisting simultaneously in the same solution.
Application Design & DB Sharding
Just to be completely transparent and at the risk of stating the obvious, a data sharding solution will be useful and successful only if the application design incorporates shard awareness. Before we go any further, let’s define a little more clearly what we mean by shard awareness.
For an application to be shard aware, it must be able to:
- Be aware of and use the sharding algorithm to direct its data requests and DML activity to the specific shard where its data resides.
- Be able to redirect activity as shards are added or removed.
- Be aware of and be able to handle up/down states of the shards individually and seamlessly.
The degree to which an application is shard aware will determine the degree to which the holy grail of extreme scaling and extreme uptime can be met. The more shard aware the application design, the greater the capability for load distribution, parallelization, and scaling while maintaining uptime. When the application design is not shard aware, the solution shifts more and more towards load distribution for parallelization purposes only.
As a matter of fact, an application that is not aware of the sharding could have performance, throughput, and scalability issues as workload volumes increase. How far off and how drastic that point of inflection is depends upon the application processing logic and the workload volumes.
Database Sharding in Oracle
With Oracle 12cR1, Oracle introduced Global Data Services/Global Service Manager (GDS/GSM), whose primary goal is to provide routing, load balancing, and service failover capabilities.
Oracle introduced Oracle Database Sharding as a new feature in 12cR2, which uses the GDS framework as the Shard Director and provided some basic sharding capabilities. With the release of Oracle 18c, Oracle added some significant capabilities within sharding such as user-defined sharding, PDB-aware sharding, RAC-aware sharding, and mid-tier sharding. With the release of 19c, Oracle has introduced further enhancements to the PDB-aware and mid-tier sharding capabilities.
The Basic Architecture
The diagram below (taken from Oracle’s documentation) is a very good starting point to understand the architecture of the Oracle Database sharding solution. The key components of an Oracle SDB are:
- Sharded database (SDB) – a single logical Oracle Database that is horizontally partitioned across a pool of physical Oracle Databases (shards) that share no hardware or software
- Shards – independent physical Oracle Databases that each host a subset of the sharded database
- Global service – database services that provide access to data in an SDB
- Shard catalog (SCAT)– An Oracle Database with the express purpose of being the persistent store for SDB configuration metadata.
- Shard directors – network listeners that enable high-performance connection routing based on a sharding key
- Connection pools – at runtime, act as shard directors by routing database requests across pooled connections
- Management interfaces – GDSCTL (command-line utility) and Oracle Enterprise Manager (GUI)
Each shard is a self-contained Oracle database with its own resource pool of CPUs, Storage, and networking. There is no requirement of having shared resources amongst shards. Shards within an SDB can be located in network proximity or be geographically separated. In Cloud terminology, that would be within a region or across regions. The current version of Oracle Data Guard is shard aware. For HA, the standby shards can be placed in the same datacenter (or region) where the primary shards are placed. For DR, the standby shards are located in another region.
The Shard Catalog
The primary purpose of a Shard Catalog (SCAT) is to be the repository for all the configuration-related metadata of an SDB.
Being the single central point that is aware of all the components of the SDB, the SCAT also logically becomes the perfect point of centralization for the management of a sharded database. Examples of such tasks/activities would include:
- DDL changes that need to be made across shards
- Adding/removing the shards
In addition to management activities, the SCAT is also the logical choice for acting as the query coordinator for multi-shard queries or worse, shard unaware queries. Extending this thought further, the SCAT is also the logical choice of master repository for any shared data that needs to be distributed across shards. Oracle uses Materialized Views from the SCAT to the individual shards to manage this data replication.
In general, having a Data Guard managed standby for the SCAT is a recommended best practice. That being said, loss of the SCAT primarily affects the ability to perform SDB-wide maintenance activities or multi-shard queries; a well-designed, shard-algorithm-aware application can continue to work directly against the individual shards.
Global Service
A global service is an SDB aware extension to the notion of the traditional database service concept. All of the properties of traditional database services are supported for global services in addition to SDB or shard-specific elements such as database role, replication lag tolerance, shard key mapping to shards, etc.
The Shard Director
As stated earlier, the Shard Director (SD) is a database sharding specific implementation of the Global Services Manager (GSM). The GSM is already able to route connections based on DB role, load, replication delay, and locality. As an SD, it now includes the ability to route connections based on data location, i.e., it is shard key aware.
SDB clients connect to the SD, which acts as a regional listener for SDB. The SD caches and maintains the SDB topology and is used in directing connection requests to specific shards.
In other words, the key responsibilities of the SD are:
- Maintain runtime data about the SDB configuration and shard availability
- Measure and maintain network latency within the configuration
- Connection load balancing and managing global services
SDB Schema Concepts
When working with database sharding, the following are some of the key concepts/terminologies we will come across from a schema development, design, and deployment perspective (a short consolidated example follows this list).
- The SDB USER: Local users that only exist in the shard catalog database do not have the privileges to create schema objects in the SDB. Therefore, the first step of creating the SDB schema is to create the SDB user, by connecting to the shard catalog database as SYSDBA, enabling SHARD DDL, and executing the CREATE USER command.
- Sharded Tables: This is an extension of a regular table, deployed such that its data is split across multiple shards.
- Sharded Table Family: A group of tables that follow the same sharding rule. These tables often have a parent/child relationship based on referential integrity (foreign keys). Normally, these table sets form a tree structure with one root-level table.
- Note: In 19c an SDB can contain multiple sharded Table families.
- Sharded Sequences: The standard Oracle sequence functionality has been extended to include support for sharding; these are called sharded sequences. Sharded sequences are created on the SCAT and not on the individual shards. New sequence options have been added to support sharding and enforce key uniqueness across shards: SHARD (EXTEND/NOEXTEND) and SCALE (EXTEND/NOEXTEND).
- Duplicated Tables: A table that will always have the same content across shards is called a duplicated table. We referred to these earlier when we introduced the SCAT. Each individual shard will contain the sharded tables (with data specific to the shard) and duplicated tables (common across all shards).
- Syntax: CREATE DUPLICATED TABLE.
- Under version 12.2 – any data updates had to be made on the SCAT and were then propagated out.
- 18c Onwards – The updates can be executed from either a shard or the SCAT. If the update is issued on a shard, the SCAT is updated via a DB Link and then propagated out to the shards from the SCAT.
- Non-Table-Related Objects: All other schema objects, such as users, roles, functions, and tablespaces (including tablespace sets), are the same. No change in syntax, but:
- Must be preceded by an ALTER SESSION ENABLE SHARD DDL.
- Must be created across all shards.
- Note: use GDSCTL to make this easier.
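To make these schema concepts concrete, here is a minimal sketch of the corresponding statements issued on the shard catalog. It is illustrative only: the schema name, password, table, and column definitions are hypothetical, and the grants would be tailored to the application.

```sql
-- Connect to the shard catalog and enable shard DDL so that the statements
-- below are recorded in the SCAT and propagated to all shards.
ALTER SESSION ENABLE SHARD DDL;

-- The SDB user that will own the sharded schema (hypothetical name/password).
CREATE USER app_schema IDENTIFIED BY app_schema_pwd;
GRANT CONNECT, RESOURCE TO app_schema;

-- A duplicated table: identical content maintained from the SCAT to every shard.
CREATE DUPLICATED TABLE app_schema.products (
  product_id   NUMBER PRIMARY KEY,
  product_name VARCHAR2(100),
  list_price   NUMBER(10,2)
);

-- A sharded sequence: defined on the SCAT and generating values that remain
-- unique across shards (see the SHARD EXTEND/NOEXTEND options noted above).
CREATE SEQUENCE app_schema.order_seq SHARD NOEXTEND;
```

The sharded tables themselves are shown later, once tablespace sets and the sharding methods (consistent hash, range/list) have been introduced.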
Physical Organization of SDBs
Before we talk about how SDBs are organized, we need to cover a few basic concepts and terminologies. Keep in mind that sharding is primarily the task of horizontally partitioning table data across databases (shards).
The first concept specific to SDBs from a physical layout perspective is a TABLE PARTITION. A TABLE PARTITION refers to the table with the shard-specific data in a shard. Do not confuse this with what we have known for a while as a PARTITIONED TABLE. The difference is subtle but critical:
In a regular DB, you implement a local table that holds all the data, possibly as a partitioned table,
versus
In an SDB, each shard implements a TABLE PARTITION that holds only the shard-relevant data.
The second concept specific to SDBs is the TABLESPACE SET: In an SDB, tablespace sets are the logical storage of sharded data across shards. A tablespace set applies to an SDB (not a shard) and consists of multiple Oracle tablespaces distributed across shards in a shardspace.
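As an illustrative sketch (the set name and sizing template are hypothetical), a tablespace set is created once on the shard catalog with shard DDL enabled, and Oracle then creates and distributes the underlying tablespaces across the shards for you:

```sql
-- Issued on the shard catalog with shard DDL enabled. Oracle creates the
-- physical tablespaces (their number is driven by the chunk configuration)
-- and spreads them across the shards in the shardspace.
ALTER SESSION ENABLE SHARD DDL;

CREATE TABLESPACE SET ts_orders
  USING TEMPLATE (
    DATAFILE SIZE 100M
    EXTENT MANAGEMENT LOCAL
    SEGMENT SPACE MANAGEMENT AUTO
  );
```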
The third concept specific to the physical layout of SDBs is the CHUNK. In an SDB, a CHUNK represents the smallest logical unit for horizontal partitioning and migration between shards. In other words:
- The number of chunks in an SDB is defined when creating the SCAT.
- Post SCAT creation, the number of chunks can be changed only by splitting a chunk.
- Each chunk is a logical unit of sharded data in sharded tables and table families.
- These chunks are spread out and distributed across the shards within an SDB.
- The number of chunks also controls the number of tablespaces in a tablespace set within a shard.
Connecting to Oracle Sharded Database
Connecting to an SDB is different in the sense that it needs to translate into connecting to a specific SHARD. Why is that? Because the data is horizontally partitioned and resides in a specific shard.
If the application is aware of the sharding algorithm used by the SDB, the best route to the data is to connect to the shard directly. This is called “Direct Routing to a Shard”. The Oracle connection pools and drivers are now sharding aware. Oracle drivers (including OCI, JDBC, and ODP.NET) as well as Oracle Universal Connection Pool (UCP) now provide APIs to pass sharding keys during DB connection creation. Once the session is established to a shard, all queries and DML issued are executed within that particular shard.
A shard topology cache is a map of shard keys to shards. The shard topology cache is created at the time of pool initialization or connection creation. The connection pool drivers cache this shard topology, which creates an efficient mechanism to translate the sharding key to the shard without going to the shard director each time. This way, a shard director outage does not impact the availability of the SDB.
An SDB also supports the use case of connecting to the database without being aware of the sharding key or algorithm. This is called “Proxy Routing”, wherein the SCAT assumes the role of the Shard Coordinator (SCOORD).
The SCOORD takes the query, fragments the query into shard-specific queries, based on the shard topology, coordinates the query executions across the shards, collates the results, and presents them back to the client.
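As a hedged example of proxy routing, a report that does not carry a sharding key can be written as ordinary SQL and simply submitted to the coordinator; the schema and table below are hypothetical, and the coordinator service used to connect can vary by configuration (it typically runs against the shard catalog, e.g., the GDS$CATALOG service).

```sql
-- Submitted through the multi-shard query coordinator. The SCOORD rewrites
-- this into per-shard queries, runs them on the shards, and merges the
-- results before returning them to the client.
SELECT order_status,
       COUNT(*)         AS order_count,
       SUM(order_total) AS total_value
FROM   app_schema.orders
GROUP  BY order_status;
```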
Administration with SDB:
Managing sharded databases could be the topic of an article on its own. We are just going to cover some major points of note here, but rest assured there is a lot more to managing an SDB.
- Use the SHARDS() clause to query Oracle-supplied tables to gather performance, diagnostic, and audit data from V$ views and DBA_* views (see the example following this list).
- To execute DDL on a sharded database, you need to enable shard DDL in the session: ALTER SESSION ENABLE SHARD DDL.
- “GDSCTL” provides a simple, clean, shard-database-aware administration platform.
- Much like the “srvctl” utility is to Oracle RAC
- Database patching:
- Patches are most often shard-aware and can be applied to a single shard at a time.
- Some patches MUST be applied to all shards in an SDB – if they apply to shard infrastructure, replication, or multi-shard queries.
- Starting with 19c, “opatchauto” can be used to apply patches in a rolling manner as well as applying the patch to multiple shards at a time.
- With data guard, some patches will support Standby First Patching when possible
- Upgrading an SDB: Upgrading a sharded database itself is not that different from a standard upgrade.
- However, you must keep in mind that an SDB environment has more components than just a database. The components of an SDB environment must be upgraded in the right order: SCAT, then Shard Directors, then Shards.
- As of this article, Oracle Sharding does not support Downgrading. So be very careful.
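As a small sketch of the SHARDS() clause mentioned in the first bullet above (run through the multi-shard query coordinator; exact column availability can vary by version):

```sql
-- SHARDS() executes the wrapped object on every shard and unions the results;
-- ORA_SHARD_ID identifies which shard each row came from.
SELECT ORA_SHARD_ID, INSTANCE_NAME, HOST_NAME, VERSION
FROM   SHARDS(sys.v_$instance)
ORDER  BY ORA_SHARD_ID;
```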
Sharding versus Clustering (RAC) – Not the same
Sharding and Oracle RAC are not at all the same thing. As a matter of fact, they are quite the opposite when it comes to some core features and purpose of the technology. Oracle RAC is a shared everything implementation, with tight integration and management of the cluster and the resources.
- All nodes see all the data at all times.
- Transaction scope is cluster-wide and transaction impact can be cluster-wide.
- Database and cluster resources are shared across the entire cluster, whether it be locks or blocks.
- In case of a node/instance failure, the master node takes over recovery responsibilities, after which surviving nodes can serve up all the data.
Database sharding, on the other hand, is a share-nothing approach and is intended to be very loosely coupled.
- Sharded databases do not have anything similar to a cluster.
- Sharded data contained on a shard is only on that shard. It is not shared.
- If no data is shared, there are no shared resources.
- If there are no shared resources, there are no shared locks/transactions.
- If we lose a shard, then we lose access to the data on that shard.
Oracle Database Sharding Methodologies/Options
There are three database sharding implementation algorithms provided within Oracle Sharding. The sharding method is defined when the SHARD CATALOG is created and cannot be changed. The actual implementation of sharding uses partitioning as an underlying technology. As a result, database sharding supports all the standard subpartitioning options.
System Defined & User Defined Sharding
The first method is called “System-Managed Sharding”. As the name suggests, the Oracle database itself manages and controls the algorithm. The distribution across shards is based on a hashing algorithm called CONSISTENT HASH. Unlike the hashing algorithm used in standard partitioning, the consistent hash algorithm does not take the number of buckets into account. As a result, system-managed partitioning using the CONSISTENT HASH algorithm is really range partitioning by hash values.
Assume:
- The number of potential values for the hashing algorithm is NTotHash
- HashValue(0) to HashValue(NTotHash - 1)
- The number of chunks defined at the time of SDB creation is NTotChunks
- Chunk(0) to Chunk(NTotChunks - 1)
- The number of shards in the SDB is NTotShards
- Shard(0) to Shard(NTotShards - 1)
Therefore:
- Number of chunks assigned to a shard
- NChunkPerShard = NTotChunks / NTotShards
- Number of hash values assigned to a chunk
- NHashValuePerChunk = NTotHash / NTotChunks
Therefore:
- Shard[i] includes
- Chunk[NChunkPerShard * i] to Chunk[NChunkPerShard * (i+1) - 1]
- Chunk[i] includes
- HashValue[NHashValuePerChunk * i] to HashValue[NHashValuePerChunk * (i+1) - 1]
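To make the arithmetic concrete, here is a purely illustrative example (the chunk and shard counts are hypothetical; Oracle's consistent hash maps keys into a space of roughly 2^32 values):
- Assume NTotChunks = 120 and NTotShards = 3. Then NChunkPerShard = 120 / 3 = 40, so Shard[0] holds Chunk[0] through Chunk[39], Shard[1] holds Chunk[40] through Chunk[79], and Shard[2] holds Chunk[80] through Chunk[119].
- With NTotHash of about 2^32, NHashValuePerChunk is about 2^32 / 120, roughly 35.8 million, so each chunk owns one contiguous range of about 35.8 million hash values.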
Results of using the hashing algorithm:
- Adding/removing shards is mostly a matter of relocating entire chunks and their content to a different shard.
- In the case of splitting a chunk, the contents of only a specific chunk are split, the other chunks are unaffected.
- The sharded table partitions on each shard are stored in a tablespace set.
- SDB topology map cache is a table mapping Chunk# to Shard#
- Direct Connection routing applies the Consistent Hash function to the sharding key provided, which determines the HashValue, which determines the Chunk #, which determines the Shard#.
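Tying this back to the schema concepts, here is a hedged sketch of a system-managed sharded table family: a parent and child sharded on the same key into a tablespace set. The table names, columns, and the ts_orders tablespace set are hypothetical.

```sql
-- Root table of the table family, sharded by consistent hash on cust_id.
-- Chunks/partitions are created automatically (PARTITIONS AUTO) in the
-- tablespace set.
CREATE SHARDED TABLE customers (
  cust_id   NUMBER NOT NULL,
  cust_name VARCHAR2(100),
  CONSTRAINT pk_customers PRIMARY KEY (cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
PARTITIONS AUTO
TABLESPACE SET ts_orders;

-- Child table: equipartitioned with its parent through the foreign key, so a
-- customer and that customer's orders always live in the same chunk and shard.
CREATE SHARDED TABLE orders (
  order_id   NUMBER NOT NULL,
  cust_id    NUMBER NOT NULL,
  order_date DATE,
  CONSTRAINT pk_orders PRIMARY KEY (cust_id, order_id),
  CONSTRAINT fk_orders_cust FOREIGN KEY (cust_id) REFERENCES customers (cust_id)
)
PARTITION BY REFERENCE (fk_orders_cust);
```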
The second method available is called “User-Defined Sharding”. As the name suggests, the user defines and controls the algorithm used to map shard keys to specific shards.
With user-defined sharding, a sharded table can be partitioned by RANGE or LIST, much as it is done with regular partitioning. With user-defined sharding, each partition is stored in a specific tablespace (you cannot use tablespace sets with user-defined sharding). Each tablespace is created individually and is associated with a shardspace. A shardspace is a set of shards that store data corresponding to a range or list of key values – a collection of shards.
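A hedged sketch of user-defined sharding using LIST partitioning follows; the table, columns, state groupings, and tablespaces are hypothetical, and each tablespace would have been created individually and associated with a shardspace beforehand.

```sql
-- Each partition is pinned to an individually created tablespace, which in
-- turn belongs to a specific shardspace, giving the DBA full control over
-- data placement.
CREATE SHARDED TABLE accounts (
  account_id     NUMBER NOT NULL,
  account_number NUMBER,
  state_code     VARCHAR2(2) NOT NULL,
  balance        NUMBER(12,2)
)
PARTITION BY LIST (state_code)
(
  PARTITION p_east    VALUES ('NY', 'NJ', 'MA') TABLESPACE ts_east,
  PARTITION p_central VALUES ('IL', 'OH', 'MI') TABLESPACE ts_central,
  PARTITION p_west    VALUES ('CA', 'OR', 'WA') TABLESPACE ts_west
);
```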
The table below compares user-defined sharding and system-managed sharding from a pros and cons perspective.
Composite Sharding
Wouldn’t it be great if we could merge the benefits of system-managed sharding with user-defined sharding? Well, composite sharding, introduced in 18c, is Oracle’s answer to that. Composite sharding uses two sharding keys: a super sharding key and a sharding key.
- Using the super sharding key, the table is sharded across shardspaces using list- or range-based sharding.
- Within the shards included in a shardspace, system-managed sharding using the sharding key is implemented.
With this approach, composite sharding provides you with the benefits of both system-managed and user-defined sharding, while trying to eliminate or minimize the impact of the negatives. Use the super-key-based sharding to provide for data localization and control, and use the second-level sharding key for ease of management/administration and the other automation capabilities provided by system-managed sharding.
For example, consider a case of distribution warehouses fulfilling orders. There are three warehouses in all, Reno being the largest, followed by Chicago and New Jersey.
In this case, a possible sharding configuration (sketched below) may be:
- The super sharding key would be a LIST partition by Warehouse ID.
- A sharding key within each shardspace based on Order ID.
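A hedged sketch of what that configuration could look like in DDL, with one partition set (and therefore one shardspace and tablespace set) per warehouse and consistent hash on the order ID within each; all names are hypothetical.

```sql
-- Super sharding key: warehouse_id (LIST) maps each row to a shardspace.
-- Sharding key: order_id (CONSISTENT HASH) spreads rows within that shardspace.
CREATE SHARDED TABLE warehouse_orders (
  order_id     NUMBER NOT NULL,
  warehouse_id VARCHAR2(10) NOT NULL,
  order_date   DATE,
  order_total  NUMBER(12,2),
  CONSTRAINT pk_wh_orders PRIMARY KEY (order_id, warehouse_id)
)
PARTITIONSET BY LIST (warehouse_id)
PARTITION BY CONSISTENT HASH (order_id)
PARTITIONS AUTO
(
  PARTITIONSET ps_reno    VALUES ('RENO')    TABLESPACE SET ts_reno,
  PARTITIONSET ps_chicago VALUES ('CHICAGO') TABLESPACE SET ts_chicago,
  PARTITIONSET ps_newark  VALUES ('NEWARK')  TABLESPACE SET ts_newark
);
```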
To SHARD or Not To SHARD, that is the question
At the end of the day, database sharding is an approach to address linear scalability and fault isolation using data distribution for the most demanding applications. Furthermore, when designing Oracle Sharding based solutions, keep in mind that the final design can include other Oracle DB features such as RAC or Data Guard.
Sharding is really best used for custom OLTP applications with extreme scalability and availability requirements. However, for an application to reap the maximum benefit from sharding, the application must be suited to and aware of sharding, i.e., the application must have a well-defined data model and be able to adapt to the data distribution strategy. In other words, access to data in a sharded database should always be made using the sharding key.
Done correctly, the benefits of sharding could potentially include:
- Linear scalability by adding capacity at a shard/shardspace level
- Fault Isolation – Since each shard is a self-sufficient and contained unit.
- Higher availability – by making changes to one shard at a time instead of the entire database in one shot. This would include application changes, rolling patching, upgrades, etc.
- Data Proximity – using a well-designed data distribution strategy, data can be kept close to consumption points.
Take as an example the possible use case described in the composite sharding section, with a shardspace per warehouse. With that design:
- We can scale each DC individually based on its specific growth and load patterns.
- We can perform any maintenance activities in each DC’s local time zone and each DC’s off-peak hours.
- We can perform maintenance in a rolling fashion, one shard at a time, without having to idle the entire environment.
- Data localization for each warehouse but data centralization for HQ.
Some additional tips to consider when it comes to the architecture, design and deployment of an Oracle Sharding solution are:
- Use of Proxy Based Routing implies a dependency on the SCAT, so consider Oracle MAA design options
- Sizing the SCAT itself is really dependent upon the size and number of duplicated tables.
- If using Proxy Routing, SQL query activities can be demanding on CPU, I/O, and memory resources, so plan accordingly.
- Incorporate Oracle MAA features as required to ensure shard-level resiliency – Oracle RAC, Data Guard, or even GoldenGate.
From an application and database schema design perspective, tips and best practices include:
- Data distribution strategies must be an integral part of the application design
- Applications must be sharding aware for maximum benefits.
- Application design should address database requests that go across multiple shards and not depend upon Proxy Routing.
- Use Oracle connection pools (UCP, OCI, ODP.NET, or JDBC) and direct routing for performance, fault tolerance, and data isolation
- Joins between tables in a table family should be performed using the sharding key.
- When using SHARDED SEQUENCES, do not specify ORDER on the sequence.
- Since sharded sequences are maintained on the SCAT, the SCAT becomes critical for the overall availability of the SDB. Plan for it.
- If sharded sequences become a bottleneck to scaling, use local sequences with interleaving to ensure unique values across shards.
To close, here are a few potential use cases. These are theoretical, and a lot of the nuances involved would need to be addressed in practice.
SaaS Solution Providers – Customer segmentation using schemas
- Replace or extend schema separation with Shard based separation.
- Data isolation and fault tolerance via separation, yet managed as a single database
- Simplify application & Management – by eliminating schema separation-based application intelligence.
Centralized Global Data with Data Localization
- Single Global database with individual localized shards
- Local entities access local shards for speed
- But still maintain the ability to access the data as a single entity.
Solving the Trifecta – High Volume / Performance & Availability Requirements
- Use Shards to spread volume/traffic for throughput
- Use Shard level maintenance ability for HA via fault tolerance/isolation